Temporal-Difference Networks
Richard S. Sutton and Brian Tanner
Department of Computing Science
University of Alberta
Edmonton, Alberta, Canada T6G 2E8
{sutton,btanner}@cs.ualberta.ca
Abstract
We introduce a generalization of temporal-difference (TD) learning to
networks of interrelated predictions. Rather than relating a single prediction to itself at a later time, as in conventional TD methods, a TD
network relates each prediction in a set of predictions to other predictions in the set at a later time. TD networks can represent and apply TD
learning to a much wider class of predictions than has previously been
possible. Using a random-walk example, we show that these networks
can be used to learn to predict by a fixed interval, which is not possible with conventional TD methods. Secondly, we show that if the inter-predictive relationships are made conditional on action, then the usual
learning-efficiency advantage of TD methods over Monte Carlo (supervised learning) methods becomes particularly pronounced. Thirdly, we
demonstrate that TD networks can learn predictive state representations
that enable exact solution of a non-Markov problem. A very broad range
of inter-predictive temporal relationships can be expressed in these networks. Overall we argue that TD networks represent a substantial extension of the abilities of TD methods and bring us closer to the goal of
representing world knowledge in entirely predictive, grounded terms.
Temporal-difference (TD) learning is widely used in reinforcement learning methods to
learn moment-to-moment predictions of total future reward (value functions). In this setting, TD learning is often simpler and more data-efficient than other methods. But the idea
of TD learning can be used more generally than it is in reinforcement learning. TD learning is a general method for learning predictions whenever multiple predictions are made of
the same event over time, value functions being just one example. The most pertinent of
the more general uses of TD learning have been in learning models of an environment or
task domain (Dayan, 1993; Kaelbling, 1993; Sutton, 1995; Sutton, Precup & Singh, 1999).
In these works, TD learning is used to predict future values of many observations or state
variables of a dynamical system.
The essential idea of TD learning can be described as "learning a guess from a guess". In
all previous work, the two guesses involved were predictions of the same quantity at two
points in time, for example, of the discounted future reward at successive time steps. In this
paper we explore a few of the possibilities that open up when the second guess is allowed
to be different from the first.
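As a concrete reference point, here is a minimal sketch (ours, in Python, with illustrative names) of the conventional case, in which both guesses concern the same quantity (the value of a state) at successive time steps:

```python
# Conventional TD(0): move the value guess for state s toward a target
# built from the reward and the next state's own guess.
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    target = r + gamma * V[s_next]        # a guess made from the next guess
    V[s] += alpha * (target - V[s])
    return V
```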
To be more precise, we must make a distinction between the extensive definition of a prediction, expressing its desired relationship to measurable data, and its TD definition, expressing its desired relationship to other predictions. In reinforcement learning, for example,
state values are extensively defined as an expectation of the discounted sum of future rewards, while they are TD defined as the solution to the Bellman equation (a relationship to
the expectation of the value of successor states, plus the immediate reward). It's the same
prediction, just defined or expressed in different ways. In past work with TD methods, the
TD relationship was always between predictions with identical or very similar extensive
semantics. In this paper we retain the TD idea of learning predictions based on others, but
allow the predictions to have different extensive semantics.
1 The Learning-to-predict Problem
The problem we consider in this paper is a general one of learning to predict aspects of the
interaction between a decision making agent and its environment. At each of a series of
discrete time steps $t$, the environment generates an observation $o_t \in O$, and the agent takes
an action $a_t \in A$. Whereas $A$ is an arbitrary discrete set, we assume without loss of generality that $o_t$ can be represented as a vector of bits. The action and observation events occur
in sequence, $o_1, a_1, o_2, a_2, o_3, \ldots$, with each event of course dependent only on those preceding it. This sequence will be called experience. We are interested in predicting not just
each next observation but more general, action-conditional functions of future experience,
as discussed in the next section.
In this paper we use a random-walk problem with seven states, with left and right actions
available in every state:
[Diagram: states 1 through 7 arranged in a row; the observation bit attached to each state is 1 for the two end states (1 and 7) and 0 for states 2 through 6.]
The observation upon arriving in a state consists of a special bit that is 1 only at the two ends
of the walk and, in the first two of our three experiments, seven additional bits explicitly
indicating the state number (only one of them is 1). This is a continuing task: reaching an
end state does not end or interrupt experience. Although the sequence depends deterministically on action, we assume that the actions are selected randomly with equal probability
so that the overall system can be viewed as a Markov chain.
The TD networks introduced in this paper can represent a wide variety of predictions, far
more than can be represented by a conventional TD predictor. In this paper we take just
a few steps toward more general predictions. In particular, we consider variations of the
problem of prediction by a fixed interval. This is one of the simplest cases that cannot
otherwise be handled by TD methods. For the seven-state random walk, we will predict
the special observation bit some numbers of discrete steps in advance, first unconditionally
and then conditioned on action sequences.
2 TD Networks
A TD network is a network of nodes, each representing a single scalar prediction. The
nodes are interconnected by links representing the TD relationships among the predictions
and to the observations and actions. These links determine the extensive semantics of
each prediction: its desired or target relationship to the data. They represent what we
seek to predict about the data as opposed to how we try to predict it. We think of these
links as determining a set of questions being asked about the data, and accordingly we
call them the question network. A separate set of interconnections determines the actual
computational process: the updating of the predictions at each node from their previous
values and the current action and observation. We think of this process as providing the
answers to the questions, and accordingly we call them the answer network. The question
network provides targets for a learning process shaping the answer network and does not
otherwise affect the behavior of the TD network. It is natural to consider changing the
question network, but in this paper we take it as fixed and given.
Figure 1a shows a suggestive example of a question network. The three squares across
the top represent three observation bits. The node labeled 1 is directly connected to the
first observation bit and represents a prediction that that bit will be 1 on the next time
step. The node labeled 2 is similarly a prediction of the expected value of node 1 on the
next step. Thus the extensive definition of Node 2's prediction is the probability that the
first observation bit will be 1 two time steps from now. Node 3 similarly predicts the first
observation bit three time steps in the future. Node 4 is a conventional TD prediction, in this
case of the future discounted sum of the second observation bit, with discount parameter $\gamma$.
Its target is the familiar TD target, the data bit plus the node's own prediction on the next
time step (with weightings $1-\gamma$ and $\gamma$ respectively). Nodes 5 and 6 predict the probability
of the third observation bit being 1 if particular actions a or b are taken respectively. Node
7 is a prediction of the average of the first observation bit and Node 4's prediction, both on
the next step. This is the first case where it is not easy to see or state the extensive semantics
of the prediction in terms of the data. Node 8 predicts another average, this time of nodes 4
and 5, and the question it asks is even harder to express extensively. One could continue in
this way, adding more and more nodes whose extensive definitions are difficult to express
but which would nevertheless be completely defined as long as these local TD relationships
are clear. The thinner links shown entering some nodes are meant to be a suggestion of the
entirely separate answer network determining the actual computation (as opposed to the
goals) of the network. In this paper we consider only simple question networks such as the
left column of Figure 1a and of the action-conditional tree form shown in Figure 1b.
[Figure: panel (a) shows the question network described in the text, with nodes 1 through 8, action conditions a and b, TD weightings $1-\gamma$ and $\gamma$ on node 4, and thin answer-network links; panel (b) shows the depth-2 action-conditional tree with L and R branches.]
(a)
(b)
Figure 1: The question networks of two TD networks. (a) a question network discussed in
the text, and (b) a depth-2 fully-action-conditional question network used in Experiments
2 and 3. Observation bits are represented as squares across the top while actual nodes of
the TD network, corresponding each to a separate prediction, are below. The thick lines
represent the question network and the thin lines in (a) suggest the answer network (the bulk
of which is not shown). Note that all of these nodes, arrows, and numbers are completely
different and separate from those representing the random-walk problem on the preceding
page.
More formally and generally, let $y_t^i \in [0, 1]$, $i = 1, \ldots, n$, denote the prediction of the
$i$th node at time step $t$. The column vector of predictions $y_t = (y_t^1, \ldots, y_t^n)^T$ is updated
according to a vector-valued function $u$ with modifiable parameter $W$:
$$y_t = u(y_{t-1}, a_{t-1}, o_t, W_t) \in \Re^n. \quad (1)$$
The update function $u$ corresponds to the answer network, with $W$ being the weights on
its links. Before detailing that process, we turn to the question network, the defining TD
relationships between nodes. The TD target $z_t^i$ for $y_t^i$ is an arbitrary function $z^i$ of the
successive predictions and observations. In vector form we have¹
$$z_t = z(o_{t+1}, \tilde{y}_{t+1}) \in \Re^n, \quad (2)$$
where $\tilde{y}_{t+1}$ is just like $y_{t+1}$, as in (1), except calculated with the old weights before they
are updated on the basis of $z_t$:
$$\tilde{y}_t = u(y_{t-1}, a_{t-1}, o_t, W_{t-1}) \in \Re^n. \quad (3)$$
(This temporal subtlety also arises in conventional TD learning.) For example, for the
nodes in Figure 1a we have $z_t^1 = o_{t+1}^1$, $z_t^2 = y_{t+1}^1$, $z_t^3 = y_{t+1}^2$, $z_t^4 = (1-\gamma)o_{t+1}^2 + \gamma y_{t+1}^4$,
$z_t^5 = z_t^6 = o_{t+1}^3$, $z_t^7 = \frac{1}{2}o_{t+1}^1 + \frac{1}{2}y_{t+1}^4$, and $z_t^8 = \frac{1}{2}y_{t+1}^4 + \frac{1}{2}y_{t+1}^5$. The target functions
$z^i$ are only part of specifying the question network. The other part has to do with making
them potentially conditional on action and observation. For example, Node 5 in Figure
1a predicts what the third observation bit will be if action $a$ is taken. To arrange for such
semantics we introduce a new vector $c_t$ of conditions, $c_t^i$, indicating the extent to which $y_t^i$
is held responsible for matching $z_t^i$, thus making the $i$th prediction conditional on $c_t^i$. Each
$c_t^i$ is determined as an arbitrary function $c^i$ of $a_t$ and $y_t$. In vector form we have:
$$c_t = c(a_t, y_t) \in [0, 1]^n. \quad (4)$$
For example, for Node 5 in Figure 1a, $c_t^5 = 1$ if $a_t = a$, otherwise $c_t^5 = 0$.
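To make this concrete, the following Python sketch (our own naming) writes out the targets of equation (2) and conditions of equation (4) for the eight nodes of Figure 1a; the discount value is an assumption, since the text leaves $\gamma$ unspecified:

```python
import numpy as np

GAMMA = 0.9  # assumed discount for node 4

def question_targets(o, y):
    """Targets z_t from o_{t+1} (obs bits) and y~_{t+1} (next predictions)."""
    z = np.empty(8)
    z[0] = o[0]                              # node 1: next value of obs bit 1
    z[1] = y[0]                              # node 2: node 1's next prediction
    z[2] = y[1]                              # node 3: node 2's next prediction
    z[3] = (1 - GAMMA) * o[1] + GAMMA * y[3] # node 4: conventional TD target
    z[4] = o[2]                              # node 5: obs bit 3 (after action a)
    z[5] = o[2]                              # node 6: obs bit 3 (after action b)
    z[6] = 0.5 * o[0] + 0.5 * y[3]           # node 7: average of bit 1, node 4
    z[7] = 0.5 * y[3] + 0.5 * y[4]           # node 8: average of nodes 4 and 5
    return z

def conditions(action):
    """c_t: all nodes answer unconditionally except nodes 5 and 6."""
    c = np.ones(8)
    c[4] = 1.0 if action == 'a' else 0.0
    c[5] = 1.0 if action == 'b' else 0.0
    return c
```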
Equations (2-4) correspond to the question network. Let us now turn to defining $u$, the
update function for $y_t$ mentioned earlier and which corresponds to the answer network. In
general $u$ is an arbitrary function approximator, but for concreteness we define it to be of a
linear form
$$y_t = \sigma(W_t x_t), \quad (5)$$
where $x_t \in \Re^m$ is a feature vector, $W_t$ is an $n \times m$ matrix, and $\sigma$ is the $n$-vector form
of the identity function (Experiments 1 and 2) or the S-shaped logistic function $\sigma(s) = \frac{1}{1+e^{-s}}$ (Experiment 3). The feature vector is an arbitrary function of the preceding action,
observation, and node values:
$$x_t = x(a_{t-1}, o_t, y_{t-1}) \in \Re^m. \quad (6)$$
For example, $x_t$ might have one component for each observation bit, one for each possible
action (one of which is 1, the rest 0), and $n$ more for the previous node values $y_{t-1}$. The
learning algorithm for each component $w_t^{ij}$ of $W_t$ is
$$w_{t+1}^{ij} - w_t^{ij} = \alpha (z_t^i - y_t^i)\, c_t^i\, \frac{\partial y_t^i}{\partial w_t^{ij}}, \quad (7)$$
where $\alpha$ is a step-size parameter. The timing details may be clarified by writing the sequence of quantities in the order in which they are computed:
$$y_t \;\; a_t \;\; c_t \;\; o_{t+1} \;\; x_{t+1} \;\; \tilde{y}_{t+1} \;\; z_t \;\; W_{t+1} \;\; y_{t+1}. \quad (8)$$
Finally, the target in the extensive sense for $y_t$ is
$$\bar{y}_t = E_{t,\pi}\big\{(1-c_t) \cdot \bar{y}_t + c_t \cdot z(o_{t+1}, \bar{y}_{t+1})\big\}, \quad (9)$$
where $\cdot$ represents component-wise multiplication and $\pi$ is the policy being followed,
which is assumed fixed.
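Putting equations (5)-(8) together, one learning step can be sketched as follows (our own Python rendering under assumed shapes; the logistic nonlinearity is the one used in Experiment 3, and `z_fn` stands for whatever question network is in force):

```python
import numpy as np

def sigma(s):
    return 1.0 / (1.0 + np.exp(-s))

def td_network_step(W, x_t, x_tp1, o_tp1, c_t, z_fn, alpha):
    """W: n x m weights; x_t, x_tp1: feature vectors (6); c_t: conditions (4)."""
    y_t     = sigma(W @ x_t)         # predictions y_t, eq. (5)
    y_tilde = sigma(W @ x_tp1)       # y~_{t+1}: new data, old weights, eq. (3)
    z_t     = z_fn(o_tp1, y_tilde)   # question-network targets, eq. (2)
    # For the logistic, dy^i/dw^{ij} = y^i (1 - y^i) x^j, so eq. (7) becomes:
    delta  = alpha * (z_t - y_t) * c_t * y_t * (1.0 - y_t)
    W_next = W + np.outer(delta, x_t)
    return W_next, y_t
```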
¹In general, $z$ is a function of all the future predictions and observations, but in this paper we treat
only the one-step case.
3 Experiment 1: n-step Unconditional Prediction
In this experiment we sought to predict the observation bit precisely n steps in advance,
for n = 1, 2, 5, 10, and 25. In order to predict n steps in advance, of course, we also
have to predict $n-1$ steps in advance, $n-2$ steps in advance, etc., all the way down to
predicting one step ahead. This is specified by a TD network consisting of a single chain of
predictions like the left column of Figure 1a, but of length 25 rather than 3. Random-walk
sequences were constructed by starting at the center state and then taking random actions
for 50, 100, 150, and 200 steps (100 sequences each).
We applied a TD network and a corresponding Monte Carlo method to this data. The Monte
Carlo method learned the same predictions, but learned them by comparing them to the
actual outcomes in the sequence (instead of $z_t^i$ in (7)). This involved significant additional
complexity to store the predictions until their corresponding targets were available. Both
algorithms used feature vectors of 7 binary components, one for each of the seven states, all
of which were zero except for the one corresponding to the current state. Both algorithms
formed their predictions linearly ($\sigma(\cdot)$ was the identity) and unconditionally ($c_t^i = 1 \;\forall i, t$).
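The following sketch (our own construction, not code from the paper) shows the pieces of this experiment: experience generation, the one-hot state features, and the chain question network in which node $i$'s target is node $i-1$'s next prediction. How the walk behaves at the two end states is not spelled out in the text; staying put there is our assumption.

```python
import numpy as np

def random_walk(steps, rng):
    s, path = 3, []                                   # start at the center state
    for _ in range(steps):
        s = min(6, max(0, s + rng.choice([-1, 1])))   # random left/right action
        path.append(s)
    return path

def features(s):
    x = np.zeros(7)                                   # one binary component per
    x[s] = 1.0                                        # state, as in the text
    return x

def chain_targets(bit_tp1, y_tilde):
    """z^1 is the next special bit; z^i (i > 1) is node i-1's value at t+1."""
    z = np.empty(len(y_tilde))
    z[0] = bit_tp1
    z[1:] = y_tilde[:-1]
    return z
```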
In an initial set of experiments, both algorithms were applied online with a variety of values
for their step-size parameter $\alpha$. Under these conditions we did not find that either algorithm
was clearly better in terms of the mean square error in their predictions over the data sets.
We found a clearer result when both algorithms were trained using batch updating, in which
weight changes are collected "on the side" over an experience sequence and then made all
at once at the end, and the whole process is repeated until convergence. Under batch
updating, convergence is to the same predictions regardless of initial conditions or $\alpha$ value
(as long as $\alpha$ is sufficiently small), which greatly simplifies comparison of algorithms. The
predictions learned under batch updating are also the same as would be computed by least
squares algorithms such as LSTD($\lambda$) (Bradtke & Barto, 1996; Boyan, 2000; Lagoudakis &
Parr, 2003). The errors in the final predictions are shown in Table 1.
For 1-step predictions, the Monte-Carlo and TD methods performed identically of course,
but for longer predictions a significant difference was observed. The RMSE of the Monte
Carlo method increased with prediction length whereas for the TD network it decreased.
The largest standard error in any of the numbers shown in the table is 0.008, so almost
all of the differences are statistically significant. TD methods appear to have a significant
data-efficiency advantage over non-TD methods in this prediction-by-n context (and this
task) just as they do in conventional multi-step prediction (Sutton, 1988).
              1-step    2-step            5-step            10-step           25-step
Time Steps    MC/TD     MC      TD        MC      TD        MC      TD        MC      TD
50            0.205     0.219   0.172     0.234   0.159     0.249   0.139     0.297   0.129
100           0.124     0.133   0.100     0.160   0.098     0.168   0.079     0.187   0.068
150           0.089     0.103   0.073     0.121   0.076     0.130   0.063     0.153   0.054
200           0.076     0.084   0.060     0.109   0.065     0.112   0.056     0.118   0.049
Table 1: RMSE of Monte-Carlo and TD-network predictions of various lengths and for
increasing amounts of training data on the random-walk example with batch updating.
4 Experiment 2: Action-conditional Prediction
The advantage of TD methods should be greater for predictions that apply only when the
experience sequence unfolds in a particular way, such as when a particular sequence of
actions are made. In a second experiment we sought to learn n-step-ahead predictions
conditional on action selections. The question network for learning all 2-step-ahead pre-
dictions is shown in Figure 1b. The upper two nodes predict the observation bit conditional
on taking a left action (L) or a right action (R). The lower four nodes correspond to the
two-step predictions, e.g., the second lower node is the prediction of what the observation
bit will be if an L action is taken followed by an R action. These predictions are the same
as the e-tests used in some of the work on predictive state representations (Littman, Sutton
& Singh, 2002; Rudary & Singh, 2003).
In this experiment we used a question network like that in Figure 1b except of depth four,
consisting of 30 (2+4+8+16) nodes. The conditions for each node were set to 0 or 1 depending on whether the action taken on the step matched that indicated in the figure. The
feature vectors were as in the previous experiment. Now that we are conditioning on action,
the problem is deterministic and $\alpha$ can be set uniformly to 1. A Monte Carlo prediction
can be learned only when its corresponding action sequence occurs in its entirety, but then
it is complete and accurate in one step. The TD network, on the other hand, can learn
from incomplete sequences but must propagate them back one level at a time. First the
one-step predictions must be learned, then the two-step predictions from them, and so on.
The results for online and batch training are shown in Tables 2 and 3.
As anticipated, the TD network learns much faster than Monte Carlo with both online and
batch updating. Because the TD network learns its $n$-step predictions based on its $n-1$
step predictions, it has a clear advantage for this task. Once the TD network has seen
each action in each state, it can quickly learn any prediction 2, 10, or 1000 steps in the
future. Monte Carlo, on the other hand, must sample actual sequences, so each exact action
sequence must be observed.
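To make the bookkeeping concrete, here is a sketch, in Python under our own naming, of how the 30 nodes of this depth-4 question network can be indexed and wired; the bootstrapping rule below (each node targets the node for the remainder of its action string) is our reading of the construction in Figure 1b.

```python
from itertools import product

# One node per nonempty action string of length <= 4.
nodes = [seq for d in range(1, 5) for seq in product('LR', repeat=d)]
assert len(nodes) == 2 + 4 + 8 + 16   # 30 nodes, as in the text

def condition(seq, action_taken):
    """c = 1 exactly when the action taken matches the node's first action."""
    return 1.0 if action_taken == seq[0] else 0.0

def target_index(seq):
    """The node for (a1,...,ad) bootstraps from the node for (a2,...,ad);
    depth-1 nodes target the observation bit itself (returned as None here)."""
    return None if len(seq) == 1 else nodes.index(seq[1:])
```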
             1-Step    2-Step            3-Step            4-Step
Time Step    MC/TD     MC      TD        MC      TD        MC      TD
100          0.153     0.222   0.182     0.253   0.195     0.285   0.185
200          0.019     0.092   0.044     0.142   0.054     0.196   0.062
300          0.000     0.040   0.000     0.089   0.013     0.139   0.017
400          0.000     0.019   0.000     0.055   0.000     0.093   0.000
500          0.000     0.019   0.000     0.038   0.000     0.062   0.000
Table 2: RMSE of the action-conditional predictions of various lengths for Monte-Carlo
and TD-network methods on the random-walk problem with online updating.
Time Steps    MC        TD
50            53.48%    17.21%
100           30.81%     4.50%
150           19.26%     1.57%
200           11.69%     0.14%
Table 3: Average proportion of incorrect action-conditional predictions for batch-updating
versions of Monte-Carlo and TD-network methods, for various amounts of data, on the
random-walk task. All differences are statistically significant.
5 Experiment 3: Learning a Predictive State Representation
Experiments 1 and 2 showed advantages for TD learning methods in Markov problems.
The feature vectors in both experiments provided complete information about the nominal
state of the random walk. In Experiment 3, on the other hand, we applied TD networks to
a non-Markov version of the random-walk example, one in which only the special
observation bit was visible and not the state number. In this case it is not possible to make
accurate predictions based solely on the current action and observation; the previous time
step's predictions must be used as well.
As in the previous experiment, we sought to learn n-step predictions using action-conditional question networks of depths 2, 3, and 4. The feature vector $x_t$ consisted of
three parts: a constant 1, four binary features to represent the pair of action $a_{t-1}$ and observation bit $o_t$, and $n$ more features corresponding to the components of $y_{t-1}$. The feature
vectors were thus of length $m = 11$, 19, and 35 for the three depths. In this experiment,
$\sigma(\cdot)$ was the S-shaped logistic function. The initial weights $W_0$ and predictions $y_0$ were
both 0.
Fifty random-walk sequences were constructed, each of 250,000 time steps, and presented
to TD networks of the three depths, with a range of step-size parameters $\alpha$. We measured
the RMSE of all predictions made by the networks (computed from knowledge of the task)
and also the "empirical RMSE," the error in the one-step prediction for the action actually
taken on each step. We found that in all cases the errors approached zero over time, showing
that the problem was completely solved. Figure 2 shows some representative learning
curves for the depth-2 and depth-4 TD networks.
[Figure: learning curves of empirical RMS error (0 to .3) versus time steps (0 to 250K) for step-size parameters $\alpha$ = .1, .25, .5, and .75, including one depth-2 curve.]
Figure 2: Prediction performance on the non-Markov random walk with depth-4 TD networks (and one depth-2 network) with various step-size parameters, averaged over 50 runs
and 1000 time-step bins. The "bump" most clearly seen with small step sizes is reliably
present and may be due to predictions of different lengths being learned at different times.
In ongoing experiments on other non-Markov problems we have found that TD networks
do not always find such complete solutions. Other problems seem to require more than one
step of history information (the one-step-preceding action and observation), though less
than would be required using history information alone. Our results as a whole suggest that
TD networks may provide an effective alternative learning algorithm for predictive state
representations (Littman et al., 2002). Previous algorithms have been found to be effective
on some tasks but not on others (e.g., Singh et al., 2003; Rudary & Singh, 2004; James &
Singh, 2004). More work is needed to assess the range of effectiveness and learning rate
of TD methods vis-a-vis previous methods, and to explore their combination with history
information.
6 Conclusion
TD networks suggest a large set of possibilities for learning to predict, and in this paper we
have begun exploring the first few. Our results show that even in a fully observable setting
there may be significant advantages to TD methods when learning TD-defined predictions.
Our action-conditional results show that TD methods can learn dramatically faster than
other methods. TD networks allow the expression of many new kinds of predictions whose
extensive semantics is not immediately clear, but which are ultimately fully grounded in
data. It may be fruitful to further explore the expressive potential of TD-defined predictions.
Although most of our experiments have concerned the representational expressiveness and
efficiency of TD-defined predictions, it is also natural to consider using them as state, as in
predictive state representations. Our experiments suggest that this is a promising direction
and that TD learning algorithms may have advantages over previous learning methods.
Finally, we note that adding nodes to a question network produces new predictions and
thus may be a way to address the discovery problem for predictive representations.
Acknowledgments
The authors gratefully acknowledge the ideas and encouragement they have received in this
work from Satinder Singh, Doina Precup, Michael Littman, Mark Ring, Vadim Bulitko,
Eddie Rafols, Anna Koop, Tao Wang, and all the members of the rlai.net group.
References
Boyan, J. A. (2000). Technical update: Least-squares temporal difference learning. Machine Learning 49:233-246.
Bradtke, S. J. and Barto, A. G. (1996). Linear least-squares algorithms for temporal difference learning. Machine Learning 22(1/2/3):33-57.
Dayan, P. (1993). Improving generalization for temporal difference learning: The successor representation. Neural Computation 5(4):613-624.
James, M. and Singh, S. (2004). Learning and discovery of predictive state representations in dynamical systems with reset. In Proceedings of the Twenty-First International Conference on Machine Learning, pages 417-424.
Kaelbling, L. P. (1993). Hierarchical learning in stochastic domains: Preliminary results. In Proceedings of the Tenth International Conference on Machine Learning, pp. 167-173.
Lagoudakis, M. G. and Parr, R. (2003). Least-squares policy iteration. Journal of Machine Learning Research 4(Dec):1107-1149.
Littman, M. L., Sutton, R. S. and Singh, S. (2002). Predictive representations of state. In Advances in Neural Information Processing Systems 14:1555-1561.
Rudary, M. R. and Singh, S. (2004). A nonlinear predictive state representation. In Advances in Neural Information Processing Systems 16:855-862.
Singh, S., Littman, M. L., Jong, N. K., Pardoe, D. and Stone, P. (2003). Learning predictive state representations. In Proceedings of the Twentieth International Conference on Machine Learning, pp. 712-719.
Sutton, R. S. (1988). Learning to predict by the methods of temporal differences. Machine Learning 3:9-44.
Sutton, R. S. (1995). TD models: Modeling the world at a mixture of time scales. In A. Prieditis and S. Russell (eds.), Proceedings of the Twelfth International Conference on Machine Learning, pp. 531-539. Morgan Kaufmann, San Francisco.
Sutton, R. S., Precup, D. and Singh, S. (1999). Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence 112:181-211.
Markov Networks for Detecting
Overlapping Elements in Sequence Data
Joseph Bockhorst
Dept. of Computer Sciences
University of Wisconsin
Madison, WI 53706
[email protected]
Mark Craven
Dept. of Biostatistics and Medical Informatics
University of Wisconsin
Madison, WI 53706
[email protected]
Abstract
Many sequential prediction tasks involve locating instances of patterns in sequences. Generative probabilistic language models, such
as hidden Markov models (HMMs), have been successfully applied
to many of these tasks. A limitation of these models, however, is
that they cannot naturally handle cases in which pattern instances
overlap in arbitrary ways. We present an alternative approach,
based on conditional Markov networks, that can naturally represent arbitrarily overlapping elements. We show how to efficiently
train and perform inference with these models. Experimental results from a genomics domain show that our models are more accurate at locating instances of overlapping patterns than are baseline
models based on HMMs.
1
Introduction
Hidden Markov models (HMMs) and related probabilistic sequence models have
been among the most accurate methods used for sequence-based prediction tasks
in genomics, natural language processing and other problem domains. One key
limitation of these models, however, is that they cannot represent general overlaps
among sequence elements in a concise and natural manner. We present a novel
approach to modeling and predicting overlapping sequence elements that is based on
undirected Markov networks. Our work is motivated by the task of predicting DNA
sequence elements involved in the regulation of gene expression in bacteria. Like
HMM-based methods, our approach is able to represent and exploit relationships
among different sequence elements of interest. In contrast to HMMs, however, our
approach can naturally represent sequence elements that overlap in arbitrary ways.
We describe and evaluate our approach in the context of predicting a bacterial
genome's genes and regulatory "signals" (together its regulatory elements). Part
of the process of understanding a given genome is to assemble a "parts list", often
using computational methods, of its regulatory elements. Predictions, in this case,
entail specifying the start and end coordinates of subsequences of interest. It is
common in bacterial genomes for these important sequence elements to overlap.
[Figure: panel (a) diagrams the arrangement of prom1, gene1, prom2, prom3, gene2, and term1 along a DNA sequence; panel (b) diagrams an HMM with START and END states and prom, gene, and term submodels.]
Figure 1: (a) Example arrangement of two genes, three promoters and one terminator in
a DNA sequence. (b) Topology of an HMM for predicting these elements. Large circles
represent element-specific sub-models and small gray circles represent inter-element submodels, one for each allowed pair of adjacent elements. Due to the overlapping elements,
there is no path through the HMM consistent with the configuration in (a).
Our approach to predicting overlapping sequence elements, which is based on discriminatively trained undirected graphical models called conditional Markov networks [5, 10] (also called conditional random fields), uses two key steps to make a
set of predictions. In the first step, candidate elements are generated by having a set
of models independently make predictions. In the second step, a Markov network
is constructed to decide which candidate predictions to accept.
Consider the task of predicting gene, promoter, and terminator elements encoded in
bacterial DNA. Figure 1(a) shows an example arrangement of these elements in a
DNA sequence. Genes are DNA sequences that encode information for constructing
proteins. Promoters and terminators are DNA sequences that regulate transcription, the first step in the synthesis of a protein from a gene. Transcription begins
at a promoter, proceeds downstream (left-to-right in Figure 1(a)), and ends at a
terminator. Regulatory elements often overlap each other, for example prom2 and
prom3 or gene1 and prom2 in Figure 1.
One technique for predicting these elements is first to train a probabilistic sequence
model for each element type (e.g. [2, 9]) and then to "scan" an input sequence
with each model in turn. Although this approach can predict overlapping elements,
it is limited since it ignores inter-element dependencies. Other methods, based on
HMMs (e.g. [11, 1]), explicitly consider these dependencies. Figure 1(b) shows an
example topology of such an HMM. Given an input sequence, this HMM defines a
probability distribution over parses, partitionings of the sequence into subsequences
corresponding to elements and the regions between them. These models are not naturally suited to representing overlapping elements. For the case shown in Figure 1(a)
for example, even if the subsequences for gene1 and prom2 match their respective
sub-models very well, since both elements cannot be in the same parse there is a
competition between predictions of gene1 and prom2 . One could expand the state
set to include states for specific overlap situations; however, the number of states increases exponentially with the number of overlap configurations. Alternatively, one
could use the factorized state representation of factorial HMMs [4]. These models,
however, assume a fixed number of loosely connected processes evolving in parallel,
which is not a good match to our genomics domain.
Like HMMs, our method, called CMN-OP (conditional Markov networks for overlapping patterns), employs element-specific sub-models and probabilistic constraints
on neighboring elements qualitatively expressed in a graph. The key difference between CMN-OP and HMMs is the probability distributions they define for an input
sequence. While, as mentioned above, an HMM defines a probability distribution
over partitions of the sequence, a CMN-OP defines a probability distribution over
all possible joint arrangements of elements in an input sequence. Figure 2 illustrates
this distinction.
[Figure: panel (a) shows, for an HMM, a length-8 sequence with predicted elements at subsequences [1:3] and [6:7] and the corresponding single labeling of positions in the sample space; panel (b) shows, for a CMN-OP model, four predicted elements and the corresponding assignment of labels to subsequences, drawn over a grid of start positions 1-8 by end positions 1-8.]
Figure 2: An illustration of the difference in the sample spaces on which probability
distributions over labelings are defined by (a) HMMs and (b) CMN-OP models. The left
side of (a) shows a sequence of length eight for which an HMM has predicted that an
element of interest occupies two subsequences, [1:3] and [6:7]. The darker subsequences,
[4:5] and [8:8], represent sequence regions between predicted elements. The right side of
(a) shows the corresponding event in the sample space of the HMM, which associates one
label with each position. The left side of (b) shows four predicted elements made by a
CMN-OP model. The right side of (b) illustrates the corresponding event in the CMN-OP
sample space. Each square corresponds to a subsequence, and an event in this sample
space assigns a (possibly empty) label to each sub-sequence.
2
Models
A conditional Markov network [5, 10] (CMN) defines the conditional probability
distribution Pr(Y|X) where X is a set of observable input random variables and Y
is a set of output random variables. As with standard Markov networks, a CMN
consists of a qualitative graphical component $G = (V, E)$ with vertex set $V$ and
edge set $E$ that encodes a set of conditional independence assertions, along with a
quantitative component in the form of a set of potentials $\Phi$ over the cliques of $G$.
In CMNs, $V = X \cup Y$. We denote an assignment of values to the set of random
variables $U$ with $u$. Each clique, $q = (X_q, Y_q)$, in the clique set $Q(G)$ has a potential
function $\phi_q(x_q, y_q) \in \Phi$ that assigns a non-negative number to each of the joint
settings of $(X_q, Y_q)$. A CMN $(G, \Phi)$ defines the conditional probability distribution
$$\Pr(y|x) = \frac{1}{Z(x)} \prod_{q \in Q(G)} \phi_q(x_q, y_q),$$
where $Z(x) = \sum_{y'} \prod_{q \in Q(G)} \phi_q(x_q, y'_q)$ is the
$x$-dependent normalization factor called the partition function. One benefit of
CMNs for classification tasks is that they are typically discriminatively trained by
maximizing a function based on the conditional likelihood Pr(Y|X) over a training
set rather than the joint likelihood Pr(Y, X).
A common representation for the potentials $\phi_q(y_q, x_q)$ is with a log-linear model:
$$\phi_q(y_q, x_q) = \exp\Big\{\sum_b w_{qb} f_{qb}(y_q, x_q)\Big\} = \exp\{w_q^T \cdot f_q(y_q, x_q)\}.$$
Here $w_{qb}$ is the weight of feature $f_{qb}$, and $w_q$ and $f_q$ are column vectors of $q$'s weights and features.
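As a toy illustration of these definitions, the following Python sketch (with invented weights and features) computes $\Pr(y|x)$ for a CMN with a single clique over two binary outputs:

```python
import numpy as np
from itertools import product

w = np.array([1.2, -0.7])                    # invented clique weights

def f(x, y1, y2):
    """Two illustrative binary features of the clique (x, y1, y2)."""
    return np.array([float(y1 == y2), float(y1 == x)])

def pr(y1, y2, x):
    score = lambda a, b: np.exp(w @ f(x, a, b))           # phi = exp{w^T f}
    Z = sum(score(a, b) for a, b in product([0, 1], repeat=2))
    return score(y1, y2) / Z                              # Pr(y|x) = phi / Z(x)
```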
Now we show how we use CMNs to predict elements in observation sequences.
Given a sequence $x$ of length $L$, our task is to identify the types and locations of
all instances of patterns in $\mathcal{P} = \{P_1, \ldots, P_N\}$ that are present in $x$, where $\mathcal{P}$ is a set
of pattern types. In the genomics domain $x$ is a DNA sequence and $\mathcal{P}$ is a set of
regulatory elements such as {gene, promoter, terminator}.
A match $m$ of a pattern to $x$ specifies a subsequence $x_{i:j}$ and a pattern type $P_k \in \mathcal{P}$.
We denote the set of all matches of pattern types in $\mathcal{P}$ to $x$ with $M(\mathcal{P}, x)$. We call a
subset $C = (m_1, m_2, \ldots, m_M)$ of $M(\mathcal{P}, x)$ a configuration. Matches in $C$ are allowed
[Figure: panel (a) shows the chain-structured CMN-OP graph with $X$ connected to $Y_1, Y_2, \ldots, Y_{L+1}$; panel (b) shows the interaction graph with vertices START, PROM, GENE, TERM, and END.]
Figure 3: (a) The structure of the CMN-OP induced for the sequence $x$ of length $L$. The
$a$th pattern match $Y_a$ is conditionally independent of its non-neighbors given its neighbors
$X$, $Y_{a-1}$ and $Y_{a+1}$. (b) The interaction graph we use in the regulatory element prediction
task. Vertices are the pattern types along with START and END. Edges connect pattern
types that may be adjacent. Edges from START connect to pattern types that may be
the first matches. Edges into END come from pattern types that may be the last matches.
to overlap; however, we assume that no two matches in $C$ have the same start index.¹
Thus, the maximum size of a configuration $C$ is $L$, and the elements of $C$ may be
ordered by start position such that $m_a \leq m_{a+1}$. Our models define a conditional
probability distribution over configurations given an input sequence $x$.
Given a sequence $x$ of length $L$, the output random variables of our models are
$Y = (Y_1, Y_2, \ldots, Y_L, Y_{L+1})$. We represent a configuration $C = (m_1, m_2, \ldots, m_M)$
with $Y$ in the following way. If $a$ is less than or equal to the configuration size
$M$, we assign $Y_a$ to the $a$th match in $C$ ($Y_a = m_a$); otherwise we set $Y_a$ equal to a
special value null. Note that $Y_{L+1}$ will always be null; it is included for notational
convenience. Our models define the conditional distribution $\Pr(Y|X)$.
Our models assume that a pattern match is independent of other matches given
its neighbors. That is, $Y_a$ is independent of $Y_{a'}$ for $a' < a-1$ or $a' > a+1$
given $X$, $Y_{a-1}$ and $Y_{a+1}$. This is analogous to the HMM assumption that the next
state depends only on the current state. The conditional Markov network structure
associated with this assumption is shown in Figure 3(a). The cliques in this graph
are $\{Y_a, Y_{a+1}, X\}$ for $1 \leq a \leq L$. We denote the clique $\{Y_a, Y_{a+1}, X\}$ with $q_a$.
We define the clique potential of $q_a$ for $a \neq 1$ as the product of a pattern match
term $g(y_a, x)$ and a pattern interaction term $h(y_a, y_{a+1}, x)$. The functions $g(\cdot)$ and
$h(\cdot)$ are shared among all cliques, so $\phi_{q_a}(y_a, y_{a+1}, x) = g(y_a, x) \cdot h(y_a, y_{a+1}, x)$ for
$2 \leq a \leq L$. The first clique $q_1$ includes an additional start placement term $\rho(y_1, x)$
that scores the type and position of the first match $y_1$. To ensure that real matches
come before any null settings and that additional null settings do not affect
$\Pr(y|x)$, we require that $g(\text{null}, x) = 1$, $h(\text{null}, \text{null}, x) = 1$ and $h(\text{null}, y_a, x) = 0$ for all $x$ and $y_a \neq \text{null}$. The pattern match term measures the agreement
between the matched subsequence and the pattern type associated with $y_a$. In
the genomics domain our representation of the sequence match term is based on
regulatory element specific HMMs. The pattern interaction term measures the
compatibility between the types and spacing (or overlap) of adjacent matches.
A Conditional Markov Network for Overlapping Patterns (CMN-OP) $= (g, h, \rho)$
specifies a pattern match function $g$, a pattern interaction function $h$, and a
start placement function $\rho$ that together define the conditional distribution
$$\Pr(y|x) = \frac{\rho(y_1)}{Z(x)} \prod_{a=1}^{L} g(y_a, x)\, h(y_a, y_{a+1}, x),$$
where $Z(x)$ is the normalizing partition function. Using the log-linear representation for $g(\cdot)$ and $h(\cdot)$ we have
$$\Pr(y|x) = \frac{\rho(y_1)}{Z(x)} \exp\Big\{\sum_{a=1}^{L} w_g^T \cdot f_g(y_a, x) + w_h^T \cdot f_h(y_a, y_{a+1}, x)\Big\}.$$
Here $w_g$, $f_g$, $w_h$ and $f_h$ are $g(\cdot)$'s and $h(\cdot)$'s weights and features.
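A sketch of the unnormalized score this distribution assigns to one configuration may help (our Python rendering; `g`, `h`, and `rho` stand for whatever pattern-match, interaction, and start-placement functions the model defines, and we fold the interaction with the trailing null settings into a call with `None`):

```python
def config_score(matches, x, g, h, rho):
    """Matches are (start, end, pattern_type) triples sorted by start position."""
    if not matches:
        return 1.0
    score = rho(matches[0], x)              # start placement of the first match
    for a, m in enumerate(matches):
        score *= g(m, x)                    # pattern match term
        nxt = matches[a + 1] if a + 1 < len(matches) else None
        score *= h(m, nxt, x)               # interaction; None = trailing null
    return score
```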
¹We only need to require configurations to be ordered sets. We make this slightly more
stringent assumption to simplify the description of the model.
2.1
Representation
Our representation of the pattern match function $g(\cdot)$ is based on HMMs. We
construct an HMM with parameters $\theta_k$ for each pattern type $P_k$, along with a single
background HMM with parameters $\theta_B$. The pattern match score of $y_a \neq$ null
with subsequence $x_{i:j}$ and pattern type $P_k$ is the odds $\Pr(x_{i:j}|\theta_k)/\Pr(x_{i:j}|\theta_B)$.
We have a feature $f_g^k(y_a, x)$ for each pattern type $P_k$ whose value is the logarithm
of the odds if the pattern associated with $y_a$ is $P_k$ and zero otherwise. Currently,
the weights $w_g$ are not trained and are fixed at 1. So, $w_g^T \cdot f_g(y_a, x) = f_g^k(y_a, x) = \log(\Pr(x_{i:j}|\theta_k)/\Pr(x_{i:j}|\theta_B))$, where $P_k$ is the pattern of $y_a$.
Our representation of the pattern interaction function $h(\cdot)$ consists of two components: (i) a directed graph $I$ called the interaction graph that contains a vertex
for each pattern type in $\mathcal{P}$ along with special vertices START and END, and (ii)
a set of weighted features for each edge in $I$. The interaction graph encodes qualitative domain knowledge about allowable orderings of pattern types. The value
of $h(y_a, y_{a+1}, x) = w_h^T \cdot f_h(y_a, y_{a+1}, x)$ is non-zero only if there is an edge in $I$
from the pattern type associated with $y_a$ to the pattern type associated with $y_{a+1}$.
Thus, any configuration with non-zero probability corresponds to a path through
$I$. Figure 3(b) shows the interaction graph we use to predict bacterial regulatory
elements. It asserts that between the start positions of two genes there may be no
element starts, a single terminator start, or zero or more promoter starts with the
requirement that all promoters start after the start of the terminator. Note that
in CMN-OP models, the interaction graph indicates legal orderings over the start
positions of matches, not over complete matches as in an HMM.
Each of the pattern interaction features $f \in f_h$ is associated with an edge in the
interaction graph $I$. Each edge $e$ in $I$ has a single bias feature $f_e^b$ and a set of distance
features $f_e^D$. The value of $f_e^b(y_a, y_{a+1}, x)$ is 1 if the pattern types connected by $e$
correspond to the types associated with $y_a$ and $y_{a+1}$, and 0 otherwise. The distance
features for edge $e$ provide a discretized representation of the distance between (or
amount of overlap of) two adjacent matches of types consistent with $e$. We associate
each distance feature $f_e^r \in f_e^D$ with a range $r$. The value of $f_e^r(y_a, y_{a+1}, x)$ is 1 if
the (possibly negative) difference between the start position of $y_{a+1}$ and the end
position of $y_a$ is in $r$; otherwise it is 0. The set of ranges for a given edge are non-overlapping. So, $h(y_a, y_{a+1}, x) = \exp(w_h^T \cdot f_h(y_a, y_{a+1}, x)) = \exp(w_e^b + w_e^r)$, where $e$
is the edge for $y_a$ and $y_{a+1}$, $w_e^b$ is the weight of the bias feature $f_e^b$, and $w_e^r$ is the
weight of the single distance feature $f_e^r$ whose range contains the spacing between
the matches of $y_a$ and $y_{a+1}$.
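For a single edge of the interaction graph, the term then reduces to a bias weight plus one binned distance weight, as in this sketch (the bins and weights are invented; the text says only that the ranges are non-overlapping):

```python
import math

BINS = [(-50, -1), (0, 9), (10, 99), (100, 10**6)]   # assumed discretization

def h_edge(w_bias, w_bins, gap):
    """gap = start of next match minus end of previous (negative = overlap)."""
    w_r = 0.0          # if no bin covers the gap, only the bias fires (assumed)
    for w, (lo, hi) in zip(w_bins, BINS):
        if lo <= gap <= hi:
            w_r = w
            break
    return math.exp(w_bias + w_r)
```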
3
Inference and Training
Given a trained model with weights $w$ and an input sequence $x$, the inference task
is to determine properties of the distribution $\Pr(y|x)$. Since the cliques of a CMN-OP form a chain, we could perform exact inference with the belief propagation (BP)
algorithm [8]. The number of joint settings in one clique grows $O(L^4)$, however,
giving BP a running time of $O(L^5)$, which is impractical for longer sequences.
The exact inference procedure we use, which is inspired by the energy minimization
algorithm for pictorial structures [3], runs in $O(L^2)$ time.
Our inference procedure exploits two properties of our representation of the pattern
interaction function $h(\cdot)$. First, we use the invariance of $h(y_a, y_{a+1}, x)$ to the start
position of $y_a$ and the end position of $y_{a+1}$. In this section, we make this explicit by
writing $h(y_a, y_{a+1}, x)$ as $h(k, k', d)$ where $k$ and $k'$ are the pattern types of $y_a$ and
$y_{a+1}$ respectively and $d$ is the distance between (or overlap of, if negative) $y_a$ and
$y_{a+1}$. The second property we use is the fact that the difference between $h(k, k', d)$
and $h(k, k', d+1)$ is non-zero only if $d$ is the maximum value of the range of one of
the distance features $f_e^r \in f_e^D$ associated with the edge $e = k \to k'$.
The inference procedure we use for our CMN-OP models consists of a forward
pass and a backward pass. Due to space limitations, we only describe the key
aspects of the forward pass. The forward pass fills an $L \times L \times N$ matrix $F$,
where we define $F(i, j, k)$ to be the sum of the scores of all partial configurations
$\tilde{y}$ that end with $y^*$, where $y^*$ is the match of $x_{i:j}$ to $P_k$:
$$F(i, j, k) \equiv g(y^*, x) \sum_{\tilde{y}} \rho(y_1, x) \prod_{y_a \in (\tilde{y} \setminus y^*)} g(y_a, x)\, h(y_a, y_{a+1}, x).$$
Here $\tilde{y} = (y_1, y_2, \ldots, y^*)$ and $\setminus$ denotes set difference.
$F$ has a recursive formulation:
$$F(i, j, k) = g_k(y^*, x) \Big\{ \rho_k(i) + \sum_{i'=1}^{i-1} \sum_{j'=i'}^{L} \sum_{k'=1}^{N} F(i', j', k')\, h(k', k, i-j') \Big\}.$$
The triple sum is over all possible adjacent previous matches. Due to the first
property of $h$ just discussed, the value of the triple sum for setting $F(i, j, k)$ and
$F(i, j', k)$ is the same for any $j'$. We cache the value of the triple sum in the $L \times N$
matrix $F_{in}$, where $F_{in}(i, k)$ holds the value needed for setting $F(i, j', k)$ for any $j'$.
We begin the forward pass with $i = 1$ and set the values of $F(1, j, k)$ for all $j$ and
$k$ before incrementing $i$. After $i$ is incremented, we use the second property of $h$ to
update $F_{in}$ in time $O(N^2 B)$, which is independent of the sequence length $L$, where
$B$ is the number of "bins" used in our discretized representation of distance. The
overall time complexity of the forward pass is $O(LN^2 B + L^2 N)$. The first term is
for updating $F_{in}$ and the second term is for the constant-time setting of the $O(L^2 N)$
elements of $F$. If the sequence length $L$ dominates $N$ and $B$, as it does in the gene
regulation domain, the effective running time is $O(L^2)$.
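The following naive Python sketch implements the recursion directly; it recomputes the triple sum instead of maintaining $F_{in}$ incrementally, so it is slower than the $O(L^2)$ procedure just described, but it makes the bookkeeping explicit (`g`, `h`, and `rho` are assumed callables with the semantics defined above):

```python
import numpy as np

def forward(L, N, g, h, rho):
    """F[i, j, k]: summed scores of partial configurations ending with a match
    of pattern k to x[i:j+1] (0-indexed positions)."""
    F = np.zeros((L, L, N))
    for i in range(L):                        # start position of the new match
        Fin = np.zeros(N)                     # the cached triple sum, per type
        for k in range(N):
            Fin[k] = sum(F[i2, j2, k2] * h(k2, k, i - j2)
                         for i2 in range(i)          # earlier start positions
                         for j2 in range(i2, L)      # any end; overlap allowed
                         for k2 in range(N))
        for j in range(i, L):                 # end position of the new match
            for k in range(N):
                F[i, j, k] = g(i, j, k) * (rho(i, k) + Fin[k])
    return F
```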
Training involves estimating the weights $w$ from a training set $D$. An element $d$ of
$D$ is a pair $(x_d, \tilde{y}_d)$ where $x_d$ is a fully observable sequence and $\tilde{y}_d$ is a partially
observable configuration for $x_d$. To help avoid overfitting, we assume a zero-mean
Gaussian prior over the weights and optimize the log of the MAP objective function
following Taskar et al. [10]:
$$L(w, D) = \sum_{d \in D} \log \Pr(\tilde{y}_d | x_d) - \frac{w^T w}{2\sigma^2}.$$
The value of the gradient $\nabla L(w, D)$ in the direction of weight $w \in \mathbf{w}$ is:
$$\frac{\partial L(w, D)}{\partial w} = \sum_{d \in D} \big( E[C_w \mid x_d, \tilde{y}_d] - E[C_w \mid x_d] \big) - \frac{w}{\sigma^2},$$
where $C_w$ is a random variable representing
the number of times the binary feature of $w$ is 1. The expectation is relative to
$\Pr(y|x)$ defined by the current setting of $w$. The value in the summation is the
difference between the expected number of times $w$ is used given both $x$ and $\tilde{y}$ and the
expected number of times $w$ is used given just $x$. The last term is the shrinking
effect of the prior. With the gradient in hand, we can use any of a number of
optimization procedures to set $w$. We use the quasi-Newton method BFGS [6].
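In code, the objective and its gradient take a simple form once the feature expectations are available (a sketch; the clamped and free expectations would come from passes like the one above, and all names are ours):

```python
import numpy as np

def objective(log_lik, w, sigma2):
    return log_lik - (w @ w) / (2.0 * sigma2)          # MAP objective L(w, D)

def gradient(E_clamped, E_free, w, sigma2):
    """E_clamped: per-weight E[C_w | x, y~] summed over D; E_free: E[C_w | x]."""
    return (E_clamped - E_free) - w / sigma2           # feed to BFGS
```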
4
Empirical Evaluation
In this section we evaluate our Markov network approach by applying it to recognize
regulatory signals in the E. coli genome. Our hypothesis is that the CMN-OP
models will provide more accurate predictions than either of two baselines: (i)
predicting the signals independently, and (ii) predicting the signals using an HMM.
All three approaches we evaluate ? the Markov networks and the two baselines ?
employ two submodels [1]. The first submodel is an HMM that is used to predict
[Figure: three precision-recall plots, (a) promoters, (b) terminators, and (c) overlapping terminators, each comparing CMN-OP, HMM, and SCAN curves with precision and recall axes running from 0 to 1.]
Figure 4: Precision-recall curves for the CMN-OP, HMM and SCAN models on (a) the
promoter localization task, (b) the terminator localization task and (c) the terminator
localization task for terminators known to overlap genes or promoters.
candidate promoters and the second submodel is a stochastic context free grammar
(SCFG) that is used to predict candidate terminators. The first baseline approach,
which we refer to as SCAN, involves "scanning" a promoter model and a terminator
model along each sequence being processed, and at each position producing a score
indicating the likelihood that a promoter or terminator starts at that position. With
this baseline, each prediction is made independently of all other predictions. The
second baseline is an HMM, similar to the one depicted in Figure 1(b). The HMM
that we use here, does not contain the gene submodel shown in Figure 1(b) because
the sequences we use in our experiments do not contain entire genes. We have the
HMM and CMN-OP models make terminator and promoter predictions for each
position in each test sequence. We do this using posterior decoding which involves
having a model compute the probability that a promoter (terminator) ends at a
specified position given that the model somehow explains the sequence.
The data set we use consists of 2,876 subsequences of the E. coli genome that
collectively contain 471 known promoters and 211 known terminators. Using tenfold cross-validation, we evaluate the three methods by considering how well each
method is able to localize predicted promoters and terminators in the test sequences.
Under this evaluation criterion, a correct prediction predicts a promoter (terminator) within k bases of an actual promoter (terminator). We set k to 10 for promoters
and to 25 for terminators. For all methods, we plot precision-recall (PR) curves by
P
varying a threshold on the prediction confidences. Recall is defined as T PT+F
N , and
TP
precision is defined as T P +F P , where T P is the number of true positive predictions,
F N is the number of false negatives, and F P is the number of false positives.
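A sketch of this localization scoring for one element type at one confidence threshold (our own helper, not code from the paper):

```python
def pr_point(predicted, actual, k):
    """predicted, actual: lists of positions; a prediction is correct if it
    lies within k bases of some actual element."""
    tp = sum(any(abs(p - a) <= k for a in actual) for p in predicted)
    fp = len(predicted) - tp
    fn = sum(not any(abs(p - a) <= k for p in predicted) for a in actual)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    return precision, recall
```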
Figures 4(a) and 4(b) show PR curves for the promoter and terminator localization
tasks, respectively. For both cases, the HMM and CMN-OP models are clearly
superior to the SCAN models. This result indicates the value of taking the regularities of relationships among these signals into account when making predictions.
For the case of localizing terminators, the CMN-OP PR curve dominates the curve
for the HMMs. The difference is not so marked for promoter localization, however.
Although the CMN-OP curve is better at high recall levels, the HMM curve is
somewhat better at low recall levels. Overall, we conclude that these results show
the benefits of representing relationships among predicted signals (as is done in the
HMMs and CMN-OP models) and being able to represent and predict overlapping
signals. Figure 4(c) shows the PR curves specifically for a set of filtered test sets
in which each actual terminator overlaps either a gene or a promoter. These curves
indicate that the CMN-OP models have a particular advantage in these cases.
5
Conclusion
We have presented an approach, based on Markov networks, able to naturally represent and predict overlapping sequence elements. Our approach first generates a
set of candidate elements by having a set of models independently make predictions.
Then, we construct a Markov network to decide which candidate predictions to accept. We have empirically validated our approach by using it to recognize promoter
and terminator ?signals? in a bacterial genome. Our experiments demonstrate that
our approach provides more accurate predictions than baseline HMM models.
Although we describe and evaluate our approach in the context of genomics, we
believe that it has other applications as well. Consider, for example, the task of
segmenting and indexing audio and video streams [7]. We might want to annotate
segments of a stream that correspond to specific types of events or to particular
individuals who appear or are speaking. Clearly, there might be overlapping events
and appearances of people, and moreover, there are likely to be dependencies among
events and appearances. Any problem with these two properties is a good candidate
for our Markov-network approach.
Acknowledgments
This research was supported in part by NSF grant IIS-0093016, and NIH grants
T15-LM07359-01 and R01-LM07050-01.
References
[1] J. Bockhorst, Y. Qiu, J. Glasner, M. Liu, F. Blattner, and M. Craven. Predicting
bacterial transcription units using sequence and expression data. Bioinformatics,
19(Suppl. 1):i34-i43, 2003.
[2] M. Ermolaeva, H. Khalak, O. White, H. Smith, and S. Salzberg. Prediction of transcription terminators in bacterial genomes. J. of Molecular Biology, 301:27-33, 2000.
[3] P. Felzenszwalb and D. Huttenlocher. Efficient matching of pictorial structures. In
Proc. of the 2000 IEEE Conf. on Computer Vision and Pattern Recognition, 66-75.
[4] Z. Ghahramani and M. I. Jordan. Factorial hidden Markov models. Machine Learning,
29:245-273, 1997.
[5] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic
models for segmenting and labeling sequence data. In Proc. of the 18th Internat. Conf.
on Machine Learning, pages 282-289, Williamstown, MA, 2001. Morgan Kaufmann.
[6] R. Malouf. A comparison of algorithms for maximum entropy parameter estimation.
Sixth Workshop on Computational Language Learning (CoNLL), 2002.
[7] National Institute of Standards and Technology. TREC video retrieval evaluation
(TRECVID), 2004. http://www-nlpir.nist.gov/projects/t01v/.
[8] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, CA, 1988.
[9] A. Pedersen, P. Baldi, S. Brunak, and Y. Chauvin. Characterization of prokaryotic
and eukaryotic promoters using hidden Markov models. In Proc. of the 4th International Conf. on Intelligent Systems for Molecular Biology, pages 182-191, St. Louis,
MO, 1996. AAAI Press.
[10] B. Taskar, P. Abbeel, and D. Koller. Discriminative probabilistic models for relational
data. In Proc. of the 18th International Conf. on Uncertainty in Artificial Intelligence,
Edmonton, Alberta, 2002. Morgan Kaufmann.
[11] T. Yada, Y. Totoki, T. Takagi, and K. Nakai. A novel bacterial gene-finding system
with improved accuracy in locating start codons. DNA Research, 8(3):97-106, 2001.
1,702 | 2,547 | Two-Dimensional Linear Discriminant Analysis
Jieping Ye
Department of CSE
University of Minnesota
[email protected]
Ravi Janardan
Department of CSE
University of Minnesota
[email protected]
Qi Li
Department of CIS
University of Delaware
[email protected]
Abstract
Linear Discriminant Analysis (LDA) is a well-known scheme for feature
extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as face recognition and
image retrieval. An intrinsic limitation of classical LDA is the so-called
singularity problem, that is, it fails when all scatter matrices are singular. A well-known approach to deal with the singularity problem is to
apply an intermediate dimension reduction stage using Principal Component Analysis (PCA) before LDA. The algorithm, called PCA+LDA,
is used widely in face recognition. However, PCA+LDA has high costs
in time and space, due to the need for an eigen-decomposition involving
the scatter matrices.
In this paper, we propose a novel LDA algorithm, namely 2DLDA, which
stands for 2-Dimensional Linear Discriminant Analysis. 2DLDA overcomes the singularity problem implicitly, while achieving efficiency. The
key difference between 2DLDA and classical LDA lies in the model for
data representation. Classical LDA works with vectorized representations of data, while the 2DLDA algorithm works with data in matrix
representation. To further reduce the dimension by 2DLDA, the combination of 2DLDA and classical LDA, namely 2DLDA+LDA, is studied,
where LDA is preceded by 2DLDA. The proposed algorithms are applied on face recognition and compared with PCA+LDA. Experiments
show that 2DLDA and 2DLDA+LDA achieve competitive recognition
accuracy, while being much more efficient.
1 Introduction
Linear Discriminant Analysis [2, 4] is a well-known scheme for feature extraction and dimension reduction. It has been used widely in many applications such as face recognition
[1], image retrieval [6], microarray data classification [3], etc. Classical LDA projects the
data onto a lower-dimensional vector space such that the ratio of the between-class distance to the within-class distance is maximized, thus achieving maximum discrimination.
The optimal projection (transformation) can be readily computed by applying the eigendecomposition on the scatter matrices. An intrinsic limitation of classical LDA is that its
objective function requires the nonsingularity of one of the scatter matrices. For many applications, such as face recognition, all scatter matrices in question can be singular since
the data is from a very high-dimensional space, and in general, the dimension exceeds the
number of data points. This is known as the undersampled or singularity problem [5].
In recent years, many approaches have been brought to bear on such high-dimensional, undersampled problems, including pseudo-inverse LDA, PCA+LDA, and regularized LDA.
More details can be found in [5]. Among these LDA extensions, PCA+LDA has received a
lot of attention, especially for face recognition [1]. In this two-stage algorithm, an intermediate dimension reduction stage using PCA is applied before LDA. The common aspect of
previous LDA extensions is the computation of eigen-decomposition of certain large matrices, which not only degrades the efficiency but also makes it hard to scale them to large
datasets.
In this paper, we present a novel approach to alleviate the expensive computation of the
eigen-decomposition in previous LDA extensions. The novelty lies in a different data representation model. Under this model, each datum is represented as a matrix, instead of as
a vector, and the collection of data is represented as a collection of matrices, instead of as
a single large matrix. This model has been previously used in [8, 9, 7] for the generalization of SVD and PCA. Unlike classical LDA, we consider the projection of the data onto a
space which is the tensor product of two vector spaces. We formulate our dimension reduction problem as an optimization problem in Section 3. Unlike classical LDA, there is no
closed form solution for the optimization problem; instead, we derive a heuristic, namely
2DLDA. To further reduce the dimension, which is desirable for efficient querying, we consider the combination of 2DLDA and LDA, namely 2DLDA+LDA, where the dimension
of the space transformed by 2DLDA is further reduced by LDA.
We perform experiments on three well-known face datasets to evaluate the effectiveness
of 2DLDA and 2DLDA+LDA and compare with PCA+LDA, which is used widely in face
recognition. Our experiments show that: (1) 2DLDA is applicable to high-dimensional
undersampled data such as face images, i.e., it implicitly avoids the singularity problem
encountered in classical LDA; and (2) 2DLDA and 2DLDA+LDA have distinctly lower
costs in time and space than PCA+LDA, and achieve classification accuracy that is competitive with PCA+LDA.
2 An overview of LDA
In this section, we give a brief overview of classical LDA. Some of the important notations
used in the rest of this paper are listed in Table 1.
Given a data matrix A ∈ ℝ^{N×n}, classical LDA aims to find a transformation G ∈ ℝ^{N×ℓ} that maps each column a_i of A, for 1 ≤ i ≤ n, in the N-dimensional space to a vector b_i in the ℓ-dimensional space. That is, G : a_i ∈ ℝ^N → b_i = G^T a_i ∈ ℝ^ℓ (ℓ < N). Equivalently, classical LDA aims to find a vector space G spanned by {g_i}_{i=1}^ℓ, where G = [g_1, ..., g_ℓ], such that each a_i is projected onto G by (g_1^T a_i, ..., g_ℓ^T a_i)^T ∈ ℝ^ℓ.
Assume that the original data in A is partitioned into k classes as A = {Π_1, ..., Π_k}, where Π_i contains n_i data points from the ith class, and Σ_{i=1}^k n_i = n. Classical LDA aims to find the optimal transformation G such that the class structure of the original high-dimensional space is preserved in the low-dimensional space.
In general, if each class is tightly grouped, but well separated from the other classes, the quality of the cluster is considered to be high. In discriminant analysis, two scatter matrices, called within-class (S_w) and between-class (S_b) matrices, are defined to quantify the quality of the cluster, as follows [4]:

S_w = Σ_{i=1}^k Σ_{x∈Π_i} (x - m_i)(x - m_i)^T,   and   S_b = Σ_{i=1}^k n_i (m_i - m)(m_i - m)^T,

where m_i = (1/n_i) Σ_{x∈Π_i} x is the mean of the ith class, and m = (1/n) Σ_{i=1}^k Σ_{x∈Π_i} x is the global mean.
Notation   Description
n          number of images in the dataset
k          number of classes in the dataset
A_i        ith image in matrix representation
a_i        ith image in vectorized representation
r          number of rows in A_i
c          number of columns in A_i
N          dimension of a_i (N = r × c)
Π_j        jth class in the dataset
L          transformation matrix (left) by 2DLDA
R          transformation matrix (right) by 2DLDA
I          number of iterations in 2DLDA
B_i        reduced representation of A_i by 2DLDA
ℓ_1        number of rows in B_i
ℓ_2        number of columns in B_i

Table 1: Notation
It is easy to verify that trace(S_w) measures the closeness of the vectors within the classes, while trace(S_b) measures the separation between classes. In the low-dimensional space resulting from the linear transformation G (or the linear projection onto the vector space G), the within-class and between-class matrices become S_w^L = G^T S_w G and S_b^L = G^T S_b G. An optimal transformation G would maximize trace(S_b^L) and minimize trace(S_w^L). Common optimizations in classical discriminant analysis include (see [4]):

max_G trace((S_w^L)^{-1} S_b^L)   and   min_G trace((S_b^L)^{-1} S_w^L).   (1)
The optimization problems in Eq. (1) are equivalent to the following generalized eigenvalue problem: S_b x = λ S_w x, for λ ≠ 0. The solution can be obtained by applying an eigen-decomposition to the matrix S_w^{-1} S_b, if S_w is nonsingular, or S_b^{-1} S_w, if S_b is nonsingular. There are at most k - 1 eigenvectors corresponding to nonzero eigenvalues, since the rank of the matrix S_b is bounded from above by k - 1. Therefore, the reduced dimension by classical LDA is at most k - 1. A stable way to compute the eigen-decomposition is to apply SVD on the scatter matrices. Details can be found in [6].
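As a concrete illustration of this procedure, here is a minimal numpy sketch (our own, not from the paper); the function and variable names are our choices, and it assumes S_w is nonsingular, which is exactly the condition that fails in the undersampled case discussed next.

import numpy as np

def classical_lda(A, labels, ell):
    """Classical LDA as described above. A is N x n (one sample per column),
    labels is a length-n array, ell is the reduced dimension (ell <= k-1)."""
    labels = np.asarray(labels)
    N, n = A.shape
    m = A.mean(axis=1, keepdims=True)                 # global mean
    Sw = np.zeros((N, N))
    Sb = np.zeros((N, N))
    for cls in np.unique(labels):
        Ac = A[:, labels == cls]                      # columns of class cls
        mc = Ac.mean(axis=1, keepdims=True)           # class mean
        Xc = Ac - mc
        Sw += Xc @ Xc.T                               # within-class scatter
        Sb += Ac.shape[1] * (mc - m) @ (mc - m).T     # between-class scatter
    # Generalized eigenproblem Sb g = lambda Sw g, assuming Sw is nonsingular.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(-evals.real)[:ell]
    G = evecs[:, order].real                          # N x ell transformation
    return G, G.T @ A                                 # projected data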
Note that a limitation of classical LDA in many applications involving undersampled data,
such as text documents and images, is that at least one of the scatter matrices is required to
be nonsingular. Several extensions, including pseudo-inverse LDA, regularized LDA, and
PCA+LDA, were proposed in the past to deal with the singularity problem. Details can be
found in [5].
3 2-Dimensional LDA
The key difference between classical LDA and the 2DLDA that we propose in this paper
is in the representation of data. While classical LDA uses the vectorized representation,
2DLDA works with data in matrix representation.
We will see later in this section that the matrix representation in 2DLDA leads to an eigendecomposition on matrices with much smaller sizes. More specifically, 2DLDA involves
the eigen-decomposition of matrices of sizes r × r and c × c, which are much smaller than the matrices in classical LDA. This dramatically reduces the time and space complexities
the matrices in classical LDA. This dramatically reduces the time and space complexities
of 2DLDA over LDA.
Unlike classical LDA, 2DLDA considers the following (ℓ_1 × ℓ_2)-dimensional space L ⊗ R, which is the tensor product of the following two spaces: L spanned by {u_i}_{i=1}^{ℓ_1} and R spanned by {v_i}_{i=1}^{ℓ_2}. Define two matrices L = [u_1, ..., u_{ℓ_1}] ∈ ℝ^{r×ℓ_1} and R = [v_1, ..., v_{ℓ_2}] ∈ ℝ^{c×ℓ_2}. Then the projection of X ∈ ℝ^{r×c} onto the space L ⊗ R is L^T X R ∈ ℝ^{ℓ_1×ℓ_2}.
Let A_i ∈ ℝ^{r×c}, for i = 1, ..., n, be the n images in the dataset, clustered into classes Π_1, ..., Π_k, where Π_i has n_i images. Let M_i = (1/n_i) Σ_{X∈Π_i} X be the mean of the ith class, 1 ≤ i ≤ k, and M = (1/n) Σ_{i=1}^k Σ_{X∈Π_i} X be the global mean. In 2DLDA, we consider images as two-dimensional signals and aim to find two transformation matrices L ∈ ℝ^{r×ℓ_1} and R ∈ ℝ^{c×ℓ_2} that map each A_i ∈ ℝ^{r×c}, for 1 ≤ i ≤ n, to a matrix B_i ∈ ℝ^{ℓ_1×ℓ_2} such that B_i = L^T A_i R.
Like classical LDA, 2DLDA aims to find the optimal transformations (projections) L and
R such that the class structure of the original high-dimensional space is preserved in the
low-dimensional space.
A natural similarity metric between matrices is the Frobenius norm [8]. Under this metric, the (squared) within-class and between-class distances D_w and D_b can be computed as follows:

D_w = Σ_{i=1}^k Σ_{X∈Π_i} ||X - M_i||_F²,   D_b = Σ_{i=1}^k n_i ||M_i - M||_F².

Using the property of the trace, that is, trace(M M^T) = ||M||_F² for any matrix M, we can rewrite D_w and D_b as follows:

D_w = trace( Σ_{i=1}^k Σ_{X∈Π_i} (X - M_i)(X - M_i)^T ),
D_b = trace( Σ_{i=1}^k n_i (M_i - M)(M_i - M)^T ).
In the low-dimensional space resulting from the linear transformations L and R, the within-class and between-class distances become

D̃_w = trace( Σ_{i=1}^k Σ_{X∈Π_i} L^T (X - M_i) R R^T (X - M_i)^T L ),
D̃_b = trace( Σ_{i=1}^k n_i L^T (M_i - M) R R^T (M_i - M)^T L ).
The optimal transformations L and R would maximize D̃_b and minimize D̃_w. Due to the difficulty of computing the optimal L and R simultaneously, we derive an iterative algorithm in the following. More specifically, for a fixed R, we can compute the optimal L by solving an optimization problem similar to the one in Eq. (1). With the computed L, we can then update R by solving another optimization problem as the one in Eq. (1). Details are given below. The procedure is repeated a certain number of times, as discussed in Section 4.
Computation of L
For a fixed R, D̃_w and D̃_b can be rewritten as

D̃_w = trace(L^T S_w^R L),   D̃_b = trace(L^T S_b^R L),
Algorithm 2DLDA(A_1, ..., A_n, ℓ_1, ℓ_2)
Input: A_1, ..., A_n, ℓ_1, ℓ_2
Output: L, R, B_1, ..., B_n
1.  Compute the mean M_i of the ith class for each i as M_i = (1/n_i) Σ_{X∈Π_i} X;
2.  Compute the global mean M = (1/n) Σ_{i=1}^k Σ_{X∈Π_i} X;
3.  R_0 ← (I_{ℓ_2}, 0)^T;
4.  For j from 1 to I
5.    S_w^R ← Σ_{i=1}^k Σ_{X∈Π_i} (X - M_i) R_{j-1} R_{j-1}^T (X - M_i)^T,
      S_b^R ← Σ_{i=1}^k n_i (M_i - M) R_{j-1} R_{j-1}^T (M_i - M)^T;
6.    Compute the first ℓ_1 eigenvectors {φ_ℓ^L}_{ℓ=1}^{ℓ_1} of (S_w^R)^{-1} S_b^R;
7.    L_j ← [φ_1^L, ..., φ_{ℓ_1}^L];
8.    S_w^L ← Σ_{i=1}^k Σ_{X∈Π_i} (X - M_i)^T L_j L_j^T (X - M_i),
      S_b^L ← Σ_{i=1}^k n_i (M_i - M)^T L_j L_j^T (M_i - M);
9.    Compute the first ℓ_2 eigenvectors {φ_ℓ^R}_{ℓ=1}^{ℓ_2} of (S_w^L)^{-1} S_b^L;
10.   R_j ← [φ_1^R, ..., φ_{ℓ_2}^R];
11. EndFor
12. L ← L_I, R ← R_I;
13. B_ℓ ← L^T A_ℓ R, for ℓ = 1, ..., n;
14. return(L, R, B_1, ..., B_n).
where

S_w^R = Σ_{i=1}^k Σ_{X∈Π_i} (X - M_i) R R^T (X - M_i)^T,   S_b^R = Σ_{i=1}^k n_i (M_i - M) R R^T (M_i - M)^T.
Similar to the optimization problem in Eq. (1), the optimal L can be computed by solving the following optimization problem: max_L trace((L^T S_w^R L)^{-1} (L^T S_b^R L)). The solution can be obtained by solving the following generalized eigenvalue problem: S_w^R x = λ S_b^R x. Since S_w^R is in general nonsingular, the optimal L can be obtained by computing an eigen-decomposition on (S_w^R)^{-1} S_b^R. Note that the size of the matrices S_w^R and S_b^R is r × r, which is much smaller than the size of the matrices S_w and S_b in classical LDA.
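A direct numpy transcription of Algorithm 2DLDA (with ℓ_1 = ℓ_2 = d, as used later in the paper) might look as follows; this is a sketch of the pseudo-code above, with function and variable names of our choosing, and it assumes the within-class scatter matrices are nonsingular as noted in the text.

import numpy as np

def top_eigvecs(Sw, Sb, d):
    # first d eigenvectors of inv(Sw) @ Sb, as in Lines 6 and 9
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    return evecs[:, np.argsort(-evals.real)[:d]].real

def two_d_lda(images, labels, d, n_iters=1):
    # images: list of r x c arrays; n_iters = I (I = 1 suffices, see Section 4)
    labels = np.asarray(labels)
    r, c = images[0].shape
    classes = np.unique(labels)
    M = sum(images) / len(images)                        # global mean
    Mi = {k: sum(im for im, l in zip(images, labels) if l == k)
             / np.sum(labels == k) for k in classes}     # class means
    R = np.eye(c, d)                                     # R0 = (I_d, 0)^T
    for _ in range(n_iters):
        SwR = sum((im - Mi[l]) @ R @ R.T @ (im - Mi[l]).T
                  for im, l in zip(images, labels))
        SbR = sum(np.sum(labels == k) * (Mi[k] - M) @ R @ R.T @ (Mi[k] - M).T
                  for k in classes)
        L = top_eigvecs(SwR, SbR, d)                     # r x d left transform
        SwL = sum((im - Mi[l]).T @ L @ L.T @ (im - Mi[l])
                  for im, l in zip(images, labels))
        SbL = sum(np.sum(labels == k) * (Mi[k] - M).T @ L @ L.T @ (Mi[k] - M)
                  for k in classes)
        R = top_eigvecs(SwL, SbL, d)                     # c x d right transform
    B = [L.T @ im @ R for im in images]                  # d x d representations
    return L, R, B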
Computation of R
Next, consider the computation of R, for a fixed L. A key observation is that D̃_w and D̃_b can be rewritten as

D̃_w = trace(R^T S_w^L R),   D̃_b = trace(R^T S_b^L R),
where

S_w^L = Σ_{i=1}^k Σ_{X∈Π_i} (X - M_i)^T L L^T (X - M_i),   S_b^L = Σ_{i=1}^k n_i (M_i - M)^T L L^T (M_i - M).
This follows from the following property of trace, that is, trace(AB) = trace(BA), for any
two matrices A and B.
Similarly, the optimal R can be computed by solving the following optimization problem: max_R trace((R^T S_w^L R)^{-1} (R^T S_b^L R)). The solution can be obtained by solving the following generalized eigenvalue problem: S_w^L x = λ S_b^L x. Since S_w^L is in general nonsingular, the optimal R can be obtained by computing an eigen-decomposition on (S_w^L)^{-1} S_b^L. Note that the size of the matrices S_w^L and S_b^L is c × c, much smaller than S_w and S_b.
The pseudo-code for the 2DLDA algorithm is given in Algorithm 2DLDA. It is clear that the most expensive steps in Algorithm 2DLDA are in Lines 5, 8 and 13, and the total time complexity is O(n max(ℓ_1, ℓ_2)(r + c)² I), where I is the number of iterations. The 2DLDA algorithm depends on the initial choice R_0. Our experiments show that choosing R_0 = (I_{ℓ_2}, 0)^T, where I_{ℓ_2} is the identity matrix, produces excellent results. We use this initial R_0 in all the experiments.
Since the number of rows (r) and the number of columns (c) of an image A_i are generally comparable, i.e., r ≈ c ≈ √N, we set ℓ_1 and ℓ_2 to a common value d in the rest of this paper, for simplicity. However, the algorithm works in the general case. With this simplification, the time complexity of the 2DLDA algorithm becomes O(ndN I).
The space complexity of 2DLDA is O(rc) = O(N). The key to the low space complexity of the algorithm is that the matrices S_w^R, S_b^R, S_w^L, and S_b^L can be formed by reading the matrices A_ℓ incrementally.
3.1 2DLDA+LDA
As mentioned in the Introduction, PCA is commonly applied as an intermediate dimension-reduction stage before LDA to overcome the singularity problem of classical LDA. In this section, we consider the combination of 2DLDA and LDA, namely 2DLDA+LDA, where the dimension by 2DLDA is further reduced by LDA, since a small reduced dimension is desirable for efficient querying. More specifically, in the first stage of 2DLDA+LDA, each image A_i ∈ ℝ^{r×c} is reduced to B_i ∈ ℝ^{d×d} by 2DLDA, with d < min(r, c). In the second stage, each B_i is first transformed to a vector b_i ∈ ℝ^{d²} (matrix-to-vector alignment), then b_i is further reduced to b_i^L ∈ ℝ^{k-1} by LDA with k - 1 < d², where k is the number of classes. Here, "matrix-to-vector alignment" means that the matrix is transformed to a vector by concatenating all its rows together consecutively.
The time complexity of the first stage by 2DLDA is O(ndN I). The second stage applies classical LDA to data in d²-dimensional space, hence takes O(n(d²)²), assuming n > d². Hence the total time complexity of 2DLDA+LDA is O(nd(N I + d³)).
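A short usage sketch of the second stage, reusing the two functions sketched earlier (the value d = 10 mirrors the experiments below; `images` and `labels` are assumed to be defined as in the 2DLDA sketch):

import numpy as np

L, R, B = two_d_lda(images, labels, d=10, n_iters=1)   # stage 1: 2DLDA
# matrix-to-vector alignment: reshape(-1) concatenates rows consecutively
A2 = np.column_stack([Bi.reshape(-1) for Bi in B])     # (d*d) x n data matrix
k = len(np.unique(labels))
G, features = classical_lda(A2, labels, ell=k - 1)     # stage 2: (k-1)-dim LDA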
4 Experiments
In this section, we experimentally evaluate the performance of 2DLDA and 2DLDA+LDA
on face images and compare with PCA+LDA, used widely in face recognition. For
PCA+LDA, we use 200 principal components in the PCA stage, as it produces good overall
results. All of our experiments are performed on a P4 1.80GHz Linux machine with 1GB
memory. For all the experiments, the 1-Nearest-Neighbor (1NN) algorithm is applied for
classification and ten-fold cross validation is used for computing the classification accuracy.
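A minimal sketch of this evaluation protocol follows; the random fold assignment and the squared-Euclidean metric are our assumptions, since the paper does not specify them.

import numpy as np

def one_nn_cv_accuracy(X, y, n_folds=10, seed=0):
    """1-NN accuracy under n-fold cross validation.
    X: n x p features (e.g. flattened reduced representations); y: labels."""
    X, y = np.asarray(X), np.asarray(y)
    rng = np.random.default_rng(seed)
    fold = rng.integers(0, n_folds, size=len(y))         # random fold labels
    correct = 0
    for f in range(n_folds):
        tr, te = fold != f, fold == f
        Xtr, Xte = X[tr], X[te]
        dists = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(axis=-1)
        pred = y[tr][dists.argmin(axis=1)]               # nearest neighbor
        correct += int((pred == y[te]).sum())
    return correct / len(y)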
Datasets: We use three face datasets in our study: PIX, ORL, and PIE, which are publicly available. PIX (available at http://peipa.essex.ac.uk/ipa/pix/faces/manchester/testhard/) contains 300 face images of 30 persons. The image size is 512 × 512. We subsample the images down to a size of 100 × 100 = 10000. ORL (available at http://www.uk.research.att.com/facedatabase.html) contains 400 face images of 40 persons. The image size is 92 × 112. PIE is a subset of the CMU-PIE face image dataset (available at http://www.ri.cmu.edu/projects/project 418.html). It contains 6615 face images of 63 persons. The image size is 640 × 480. We subsample the images down to a size of 220 × 175 = 38500. Note that PIE is much larger than the other two datasets.
[Figure 1: three panels of classification accuracy vs. number of iterations for 2DLDA and 2DLDA+LDA.]
Figure 1: Effect of the number of iterations on 2DLDA and 2DLDA+LDA using the three face datasets; PIX, ORL and PIE (from left to right).
The impact of the number, I, of iterations: In this experiment, we study the effect of
the number of iterations (I in Algorithm 2DLDA) on 2DLDA and 2DLDA+LDA. The
results are shown in Figure 1, where the x-axis denotes the number of iterations, and the
y-axis denotes the classification accuracy. d = 10 is used for both algorithms. It is clear
that both accuracy curves are stable with respect to the number of iterations. In general,
the accuracy curves of 2DLDA+LDA are slightly more stable than those of 2DLDA. The
key consequence is that we need to run the "for" loop (from Line 4 to Line 11) in Algorithm 2DLDA only once, i.e., I = 1, which significantly reduces the total running
time of both algorithms.
The impact of the value of the reduced dimension d: In this experiment, we study the
effect of the value of d on 2DLDA and 2DLDA+LDA, where the value of d determines the
dimensionality in the transformed space by 2DLDA. We did extensive experiments using
different values of d on the face image datasets. The results are summarized in Figure 2,
where the x-axis denotes the values of d (between 1 and 15) and the y-axis denotes the
classification accuracy with 1-Nearest-Neighbor as the classifier. As shown in Figure 2, the
accuracy curves on all datasets stabilize around d = 4 to 6.
Comparison on classification accuracy and efficiency: In this experiment, we evaluate the effectiveness of the proposed algorithms in terms of classification accuracy and
efficiency and compare with PCA+LDA. The results are summarized in Table 2. We can
observe that 2DLDA+LDA has similar performance as PCA+LDA in classification, while
it outperforms 2DLDA. Hence the LDA stage in 2DLDA+LDA not only reduces the dimension, but also increases the accuracy. Another key observation from Table 2 is that
2DLDA is almost one order of magnitude faster than PCA+LDA, while, the running time
of 2DLDA+LDA is close to that of 2DLDA.
Hence 2DLDA+LDA is a more effective dimension reduction algorithm than PCA+LDA,
as it is competitive to PCA+LDA in classification and has the same number of reduced
dimensions in the transformed space, while it has much lower time and space costs.
5 Conclusions
An efficient algorithm, namely 2DLDA, is presented for dimension reduction. 2DLDA is
an extension of LDA. The key difference between 2DLDA and LDA is that 2DLDA works
on the matrix representation of images directly, while LDA uses a vector representation.
2DLDA has asymptotically minimum memory requirements, and lower time complexity
than LDA, which is desirable for large face datasets, while it implicitly avoids the singularity problem encountered in classical LDA. We also study the combination of 2DLDA
and LDA, namely 2DLDA+LDA, where the dimension by 2DLDA is further reduced by
LDA. Experiments show that 2DLDA and 2DLDA+LDA are competitive with PCA+LDA,
in terms of classification accuracy, while they have significantly lower time and space costs.
[Figure 2: three panels of classification accuracy vs. the value of d for 2DLDA and 2DLDA+LDA.]
Figure 2: Effect of the value of the reduced dimension d on 2DLDA and 2DLDA+LDA using the three face datasets; PIX, ORL and PIE (from left to right).
Dataset   PCA+LDA               2DLDA                 2DLDA+LDA
          Accuracy  Time(Sec)   Accuracy  Time(Sec)   Accuracy  Time(Sec)
PIX       98.00%    7.73        97.33%    1.69        98.50%    1.73
ORL       97.75%    12.5        97.50%    2.14        98.00%    2.19
PIE       --        --          99.32%    153         100%      157

Table 2: Comparison on classification accuracy and efficiency: "--" means that PCA+LDA is not applicable for PIE, due to its large size. Note that PCA+LDA involves an eigen-decomposition of the scatter matrices, which requires the whole data matrix to reside in main memory.
Acknowledgment Research of J. Ye and R. Janardan is sponsored, in part, by the Army
High Performance Computing Research Center under the auspices of the Department of the
Army, Army Research Laboratory cooperative agreement number DAAD19-01-2-0014,
the content of which does not necessarily reflect the position or the policy of the government, and no official endorsement should be inferred.
References
[1] P.N. Belhumeur, J.P. Hespanha, and D.J. Kriegman. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7):711-720, 1997.
[2] R.O. Duda, P.E. Hart, and D. Stork. Pattern Classification. Wiley, 2000.
[3] S. Dudoit, J. Fridlyand, and T. P. Speed. Comparison of discrimination methods for the classification of tumors using gene expression data. Journal of the American Statistical Association, 97(457):77-87, 2002.
[4] K. Fukunaga. Introduction to Statistical Pattern Recognition. Academic Press, San Diego, California, USA, 1990.
[5] W.J. Krzanowski, P. Jonathan, W.V. McCarthy, and M.R. Thomas. Discriminant analysis with singular covariance matrices: methods and applications to spectroscopic data. Applied Statistics, 44:101-115, 1995.
[6] Daniel L. Swets and Juyang Weng. Using discriminant eigenfeatures for image retrieval. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(8):831-836, 1996.
[7] J. Yang, D. Zhang, A.F. Frangi, and J.Y. Yang. Two-dimensional PCA: a new approach to appearance-based face representation and recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(1):131-137, 2004.
[8] J. Ye. Generalized low rank approximations of matrices. In ICML Conference Proceedings, pages 887-894, 2004.
[9] J. Ye, R. Janardan, and Q. Li. GPCA: An efficient dimension reduction scheme for image compression and retrieval. In ACM SIGKDD Conference Proceedings, pages 354-363, 2004.
1,703 | 2,548 | Inference, Attention, and Decision
in a Bayesian Neural Architecture
Angela J. Yu
Peter Dayan
Gatsby Computational Neuroscience Unit, UCL
17 Queen Square, London WC1N 3AR, United Kingdom.
[email protected]
[email protected]
Abstract
We study the synthesis of neural coding, selective attention and perceptual decision making. A hierarchical neural architecture is proposed,
which implements Bayesian integration of noisy sensory input and topdown attentional priors, leading to sound perceptual discrimination. The
model offers an explicit explanation for the experimentally observed
modulation that prior information in one stimulus feature (location) can
have on an independent feature (orientation). The network?s intermediate
levels of representation instantiate known physiological properties of visual cortical neurons. The model also illustrates a possible reconciliation
of cortical and neuromodulatory representations of uncertainty.
1 Introduction
A constant stream of noisy and ambiguous sensory inputs bombards our brains, informing
ongoing inferential processes and directing perceptual decision-making. Neurophysiologists and psychologists have long studied inference and decision-making in isolation, as
well as the careful attentional filtering that is necessary to optimize them. The recent focus
on their interactions poses an important opportunity and challenge for computational models. In this paper, we study an attentional task which involves all three components, and
thereby directly confront their interaction. We first discuss the background of the individual
elements; then describe our model.
The first element involves the representation and manipulation of uncertainty in sensory
inputs and contextual information. There are two broad families of suggestions. One is microscopic, for which individual cortical neurons and populations either implicitly or explicitly represent the uncertainty. This spans a broad spectrum, from distributional codes that
can also encode restricted aspects of uncertainty [1] to more exotic interpretations of codes
as representing complex distributions [1, 2, 3, 4, 5]. The other family is macroscopic, with
cholinergic (ACh) and noradrenergic (NE) neuromodulatory systems reporting computationally distinct forms of uncertainty to influence the way that information in differentially
reliable cortical areas is integrated and learned [6, 7]. How microscopic and macroscopic
families work together is hitherto largely unexplored.
The second element is selective attention and top-down influences over sensory processing.
Here, the key challenge is to couple the many ideas about the way that attention should,
from a sound statistical viewpoint, modify sensory processing, to the measurable effects of
attention on the neural substrate. For instance, one typical consequence of (visual) featural
and spatial attention is an increase in the activities of neurons in cortical populations repre-
senting those features, which is equivalent to multiplying their tuning functions by a factor
[8]. Under the sort of probabilistic representational scheme in which the population activity
codes for uncertainty in the underlying variable, it is of obvious importance to understand
how this multiplication changes the implied uncertainty, and what statistical characteristic
of the attention licenses this change [9].
The third element is the coupling between sensory processing and perceptual decisions.
Implementational and computational issues underlying binary decisions, especially in simple cases, have been extensively explored, with psychologists [11, 12], and neuroscientists
[13, 14] converging on common statistical [10] ideas about drift-diffusion processes.
In order to explore the interaction of these elements, we model an extensively studied attentional task (due to Posner [15]), in which probabilistic spatial cueing is used to manipulate
attentional modulation of visual discrimination. We employ a hierarchical neural architecture in which top-down attentional priors are integrated with sequentially sampled sensory
input in a sound Bayesian manner, using a logarithmic mapping between cortical neural
activities and uncertainty [4]. In the model, the information provided by the cue is realized
as a change in the prior distribution over the cued dimension (space). The effect of the
prior is to eliminate inputs from spatial locations considered irrelevant for the task, thus
improving discrimination in another dimension (orientation).
In section 2, we introduce the Posner task and give a Bayesian description of the computations underlying successful performance. In section 3, we describe the probabilistic semantics of the layers, and their functional connections, in the hierarchical neural architecture.
In section 4, we compare the perceptual performance of the network to psychophysics data,
and the intermediate layers? activities to the relevant physiological data.
2 Spatial Attention as Prior Information
In the classic version of Posner's task [15], a subject is presented with a cue that predicts
the location of a subsequent target with a certain probability termed its validity. The cue is
valid if it makes a correct prediction, and invalid otherwise. Subjects typically perform detection or discrimination on the target more rapidly and accurately on a valid-cue trial than
an invalid one, reflecting cue-induced attentional modulation of visual processing and/or
decision making [15]. This difference in reaction time or accuracy is often termed the
validity effect [16], and depends on the cue validity [17].
We consider sensory stimuli with two feature dimensions: a periodic variable, orientation, θ = θ*, about which decisions are to be made, and a linear variable, space, y = y*, which is cued. The cue induces a top-down spatial prior, which we model as a mixture of a component sharply peaked at the cued location and a broader component capturing contextual and bottom-up saliency factors (including the possibility of invalidity). For simplicity, we use a Gaussian for the peaked component, and a uniform distribution for the broader one, although more complex priors of a similar nature would not change the model behavior: p(y) = γ N(ŷ, σ_y²) + (1 - γ) c. Given lower-layer activation patterns X_t ≡ {x_1, ..., x_t}, assumed to be iid samples (with Gaussian noise) of bell-shaped tuning responses to the true underlying stimulus values y*, θ*:

f_ij(y*, θ*) = Z exp(-(y_i - y*)² / 2σ_y² + k cos(θ_j - θ*)),

the task is to infer a posterior distribution P(θ|X_t), involving the following steps:

p(x_t | y, θ) = Π_ij p(x_ij(t) | y, θ)          Likelihood
p(θ | x_t) ∝ ∫ p(y, θ) p(x_t | y, θ) dy         Prior-weighted marginalization
p(θ | X_t) ∝ p(θ | X_1^{t-1}) p(θ | x_t)        Temporal accumulation
Because the marginalization step is weighted by the priors, a valid cue results in the
[Figure 1: schematic of the five-layer hierarchy, showing each layer's activity profile, the spatial prior log P(y_i), the tuning functions f_ij(y*, θ*), and the update rules:
Layer V:   r_j^5(t) = exp(r_j^4(t)) / Σ_k exp(r_k^4(t))
Layer IV:  r_j^4(t) = r_j^4(t-1) + r_j^3(t) + c_t
Layer III: r_j^3(t) = log Σ_i exp(r_ij^2(t)) + b_t
Layer II:  r_ij^2(t) = r_ij^1(t) + log P(y_i) + a_t
Layer I:   r_ij^1(t) = log p(x_t | y_i, θ_j) ]
Figure 1: A Bayesian neural architecture. Layer I activities represent the log likelihood of the data given each possible setting of y_i and θ_j. This gives a noisy version of the smooth bell-shaped tuning curve (shown on the left). In layer II, the log likelihood of each y_i and θ_j is modulated by the prior information log P(y_i), shown on the upper left. The prior in y strongly suppresses the noisy input in the irrelevant part of the y dimension, thus enabling improved inference based on the underlying tuning response f_ij. The layer III neurons represent the log marginal posterior of θ by integrating out the y dimension of layer II activities. Layer IV neurons combine recurrent information and feedforward input from layer III to compute the log marginal posterior given all data so far observed. Layer V computes the cumulative posterior distribution of θ using a softmax operation. Due to the strong nonlinearity of softmax, its activity is much more peaked than in layers III and IV. Solid lines in the diagram represent excitatory connections, dashed lines inhibitory. Blue circles illustrate how the activities of one row of inputs in Layer I travel through the hierarchy to affect the final decision layer. Brown circles illustrate how one unit in the spatial prior layer comes into the integration process.
integration of more "signal" and less "noise" into the marginal posterior, whereas the opposite results from an invalid cue. To turn this on-line posterior into a decision θ̂, we use an extension of the Sequential Probability Ratio Test (SPRT [10]): observe x_1, x_2, ... until the first time that max_j P(θ_j|X_t) exceeds a fixed threshold q, then terminate the observation process and report θ̂ = argmax_j P(θ_j|X_t) as the estimate of θ for the current trial.
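Before turning to the neural implementation, the generative model and decision rule just described can be simulated directly. The following numpy sketch is ours, not the authors' code: the grids follow the caption of Figure 2, while the tuning concentration kappa, the noise level sig_n, and the discretization of the prior are assumptions, and the posterior is computed by brute force on the grid.

import numpy as np

def logsumexp0(a):
    # numerically stable log-sum-exp along axis 0
    m = a.max(axis=0)
    return m + np.log(np.exp(a - m).sum(axis=0))

ys = np.linspace(-1.5, 1.5, 31)              # spatial grid {y_i}
thetas = np.arange(1, 17) * np.pi / 8        # orientation grid {theta_j}
sig_y, kappa, sig_n = 0.1, 1.0, 0.5          # kappa and sig_n are assumed values

def tuning(y_star, th_star):
    # bell-shaped mean responses f_ij(y*, theta*) over the unit grid (Z = 1)
    return np.exp(-(ys[:, None] - y_star) ** 2 / (2 * sig_y ** 2)
                  + kappa * np.cos(thetas[None, :] - th_star))

def run_trial(y_star, th_star, y_hat, gamma, q=0.90, seed=None):
    rng = np.random.default_rng(seed)
    prior_y = gamma * np.exp(-(ys - y_hat) ** 2 / (2 * sig_y ** 2)) + (1 - gamma)
    prior_y /= prior_y.sum()                 # discretized mixture prior p(y)
    # mean unit responses under every grid hypothesis (y_i, theta_j)
    means = np.stack([np.stack([tuning(y, th) for th in thetas]) for y in ys])
    true_mean = tuning(y_star, th_star)
    log_post = np.zeros(len(thetas))         # log P(theta_j | X_t), up to a constant
    for t in range(1, 10001):
        x = true_mean + sig_n * rng.standard_normal(true_mean.shape)  # iid sample
        ll = -((x - means) ** 2).sum(axis=(2, 3)) / (2 * sig_n ** 2)
        log_post += logsumexp0(ll + np.log(prior_y)[:, None])  # marginalize over y
        post = np.exp(log_post - log_post.max())
        post /= post.sum()
        if post.max() > q:                   # stop when the max posterior crosses q
            return t, thetas[post.argmax()]  # reaction time and theta-hat
    return None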
3 A Bayesian Neural Architecture
The neural architecture implements the above computational steps exactly through a logarithmic transform, and has five layers (Fig 1). In layer I, activity of neuron ij, r_ij^1(t), reports the log likelihood, log p(x_t|y_i, θ_j) (throughout, we discretize space and orientation). Layer II combines this log likelihood information with the prior, r_ij^2(t) = r_ij^1(t) + log P(y_i) + a_t, to yield the joint log posterior up to an additive constant a_t that makes min r_ij^2 = 0. Layer III performs the marginalization r_j^3(t) = log Σ_i exp(r_ij^2) + b_t, to give the marginal posterior in θ (up to a constant b_t that makes min r_j^3(t) = 0). While this step ("log-of-sums") looks computationally formidable for neural hardware, it has been shown [4] that under certain conditions it can be well approximated by a (weighted) "sum-of-logs" r_j^3(t) ≈ Σ_i c_i r_ij^2 + b_t, where the c_i are weights optimized to minimize the approximation error.
[Figure 2 panels: (a) model and empirical valid/invalid reaction-time distributions; (b) distributions of inferred θ̂ under valid and invalid cueing; (c) reaction time and error rate vs. γ.]
Figure 2: Validity effect and dependence on γ. (a) The distribution of reaction times for the invalid condition (γ = 0.5) has a greater mean and longer tail than the valid condition in model simulation results (top). Compare to similar results (bottom) from a Posner task in rats [18]. (b) Distribution of inferred θ̂ is more tightly clustered around the true θ* (red dashed line) in the valid case (blue) than the invalid case (red); γ = 0.75. (c) Validity effect, in both reaction time (top) and error rate (bottom), increases with increasing γ. {y_i} = {-1.5, -1.4, ..., 1.5}, {θ_j} = {π/8, 2π/8, ..., 16π/8}, σ_y = 0.1, σ_θ = π/16, q = 0.90, ŷ = 0.5, γ ∈ {0.5, .75, .99}, σ_n = 0.05; 300 trials each of valid and invalid trials; 100 trials of each γ value.
Layer IV neurons combine recurrent information and feedforward input from layer III to compute the log marginal posterior given all data so far observed, r_j^4(t) = r_j^4(t-1) + r_j^3(t) + c_t, up to a constant c_t. Finally, layer V neurons perform a softmax operation to retrieve the exact marginal posterior, r_j^5(t) = exp(r_j^4) / Σ_k exp(r_k^4) = P(θ_j|X_t), with the additive constants dropping out. Note that a pathway parallel to III-IV-V consisting of neurons that only care about y and not θ can be constructed in exactly the same manner. Its corresponding layers would report log p(x_t, y_i), log p(X_t, y_i), and p(y_i|X_t). An example of activities at each layer of the network, along with the choice of prior p(y) and tuning function f_ij, is shown in Fig 1.
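The recursion above is compact enough to state as code. A minimal numpy sketch of one time step follows; subtracting the minimum realizes the additive constants a_t and b_t exactly as described, and is one concrete choice for c_t.

import numpy as np

def network_step(log_lik, log_prior_y, r4):
    """One update of the five-layer network. log_lik[i, j] = log p(x_t|y_i, th_j);
    log_prior_y[i] = log P(y_i); r4 carries the layer IV state across time."""
    r1 = log_lik                                   # layer I
    r2 = r1 + log_prior_y[:, None]                 # layer II: add spatial prior
    r2 = r2 - r2.min()                             # additive constant a_t
    r3 = np.log(np.exp(r2).sum(axis=0))            # layer III: marginalize over y
    r3 = r3 - r3.min()                             # additive constant b_t
    r4 = r4 + r3                                   # layer IV: temporal accumulation
    r4 = r4 - r4.min()                             # additive constant c_t
    r5 = np.exp(r4) / np.exp(r4).sum()             # layer V: softmax = P(th_j|X_t)
    return r4, r5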
4 Results
We first verify that the model indeed exhibits the cue-induced validity effect, i.e., shorter RT and greater accuracy for valid-cue trials than invalid ones. "Reaction time" on a trial is the number of iid samples necessary to reach a decision, and "error rate" is the average angular distance between the estimated θ̂ and the true θ*. Figure 2 shows simulation results for 300 trials each of valid and invalid cue trials, for different values of γ, reflecting the model's belief as to cue validity. Reassuringly, the RT distribution for valid-cue trials is tighter and left-shifted compared to invalid-cue trials (Figure 2(a), top panel), as observed in experimental data [15, 18] (Fig 2(a), bottom panel); (b) shows that accuracy is also higher for valid-cue trials. Consistent with data from a human Posner task [17], (c) shows that the VE increases with increasing perceived cue validity, as parameterized by γ, in both reaction times and error rates (precluding a simple speed-error trade-off).
Since we have an explicit model of not only the "behavioral output" but also the whole neural hierarchy, we can relate activities at various levels of representation to existing physiological data. Ample evidence indicates that spatial attention to one side of the visual field increases stimulus-induced activities in the corresponding part of the visual cortex [19, 20]. Fig 3(a) shows that our model qualitatively reproduces this effect; indeed it increases with γ, the perceived cue validity. Electrophysiological data also show that spatial attention has a multiplicative effect on orientation tuning responses in visual cortical neurons [8] (Fig 3(b)). We see a similar phenomenon in the layer IV neurons (Fig 3(c); layer III similar, data not shown). Fig 3(d) is a scatter-plot of ⟨log p(x_t, θ_j) + c_1⟩_t for the valid condition versus the invalid condition, for various values of γ, along with the slope fit to the experiment of Fig 3(b) (layer III similar, data not shown). The linear least-squares fits are good, and the slope increases with increasing confidence in the cued location (larger γ).
[Figure 3 panels: (a) cued vs. uncued layer II activities; (b) attention and V4 activities; (c) multiplicative gain in the model; (d) valid vs. invalid cueing scatter plot with linear fits.]
Figure 3: Multiplicative gain modulation by spatial attention. (a) r_ij^2 activities, averaged over the half of layer II where the prior peaks, are greater for valid (blue, left) than invalid (red, right) conditions. (b) Experimentally observed multiplicative modulation of V4 orientation tunings by spatial attention [8]. (c) Similar multiplicative effect in layer IV in the model. (d) Linear fits to the scatter-plot of layer III activities for the valid cue condition vs. the invalid cue condition show that the slope is greatest for large γ and smallest for small γ (magenta: γ = 0.99, blue: γ = 0.75, red: γ = 0.5, black: linear fit to the study in (b)). Simulation parameters are same as in Fig 2. Error bars: standard errors of the mean.
In the model, the slope depends not only on γ but also on the noise model, the discretization, and so on, so the comparison of Figure 3(d) should be interpreted loosely.
In valid cases, the effect of attention is to increase the certainty in the posterior marginal
over θ, since the correct prior allows the relative suppression of noisy input from the irrelevant part of space. Were the posterior marginal exactly Gaussian, the increased certainty
would translate into a decreased variance. For Gaussian probability distributions, logarithmic coding amounts to something close to a quadratic (adjusted for the circularity of orientation), with a curvature determined by the variance. Decreasing the variance increases
the curvature, and therefore has a multiplicative effect on the activities (as in figure 3).
The approximate gaussianity of the marginal posterior comes from the accumulation of
many independent samples over time and space, and something like the central limit theorem. While it is difficult to show this multiplicative modulation rigorously, we can at least
demonstrate it mathematically for the case where the spatial prior is very sharply peaked at its Gaussian mean ŷ. In this case, (⟨log p_1(x(t), θ_j)⟩_t + c_1) / (⟨log p_2(x(t), θ_j)⟩_t + c_2) ≈ R, where c_1, c_2, and R are constants independent of θ_j and y_i. Based on the peaked prior assumption, p(y) ≈ δ(y - ŷ), we have p(x(t), θ) = ∫ p(x(t)|y, θ) p(y) p(θ) dy ∝ p(x(t)|ŷ, θ). We can expand log p(x(t)|ŷ, θ) and compute its average over time:

⟨log p(x(t)|ŷ, θ)⟩_t = C - (N / 2σ_n²) ⟨(f_ij(y*, θ*) - f_ij(ŷ, θ))²⟩_ij .   (1)
Then using the tuning function defined earlier, we can compare the joint probabilities given
valid (val) and invalid (inv) cues:
⟨log p_val(x(t), θ)⟩_t / ⟨log p_inv(x(t), θ)⟩_t = [ Σ_i e^{-(y_i - y*)²/σ_y²} ⟨g(θ)⟩_ij ] / [ Σ_i e^{-((y_i - ŷ)² + (y_i - y*)²)/2σ_y²} ⟨g(θ)⟩_ij ] ,   (2)

and therefore,

(⟨log p_val(x_t, θ)⟩_t + c_1) / (⟨log p_inv(x_t, θ)⟩_t + c_2) ≈ e^{(ŷ - y*)²/(4σ_y²)} = R.   (3)
The derivation for a multiplicative effect on layer IV activities is very similar.
Another aspect of intermediate representation of interest is the way attention modifies the evidence accumulation process over time. Fig 4 shows the effect of cueing on the activities of neuron r_{j*}^5(t), or P(θ*|X_t), for all trials with correct responses. The mean activity trajectory is higher for the valid cue case than the invalid one: in this case, spatial attention mainly acts through increasing the rate of evidence accumulation after stimulus onset (steeper rise).
[Figure 4 panels: (a-c) time course of r_{j*}^5(t) for valid and invalid cues at three values of γ; (d) invalid traces aligned to stimulus onset; (e) invalid traces aligned to threshold crossing; (f) experimental data.]
Figure 4: Accumulation of iid samples in orientation discrimination, and dependence on prior belief about stimulus location. (a-c) Average activity of neuron r_{j*}^5, which represents P(θ*|X_t), saturates to 100% certainty much faster for valid cue trials (blue) than invalid cue trials (red). The difference is more drastic when γ is larger, or when there is more prior confidence in the cued target location. (a) γ = 0.5, (b) γ = 0.75, (c) γ = 0.99. Cyan dashed line indicates stimulus onset. (d) First 15 time steps (from stimulus onset) of the invalid cue traces from (a-c) are aligned to stimulus onset; cyan line denotes stimulus onset. The differential rates of rise are apparent. (e) Last 8 time steps of the invalid traces from (a-c) are aligned to decision threshold-crossing; there is no clear separation as a function of γ. (f) Multiplicative gain modulation of attention on V4 orientation tuning curves. Simulation parameters are same as in Fig 2.
This attentional effect is more pronounced when the system is more confident about its prior information ((a) γ = 0.5, (b) γ = 0.75, (c) γ = 0.99). Effectively, increasing γ for invalid-cue trials is increasing input noise. Figure 4(d) shows the average traces for invalid-cueing trials aligned to the stimulus onset and (e) to the decision threshold crossing. These results bear remarkable similarities to the LIP neuronal activities recorded during monkey perceptual decision-making [13] (shown in (f)). In the stimulus-aligned case, the traces rise linearly at first and then tail off somewhat, and the rate of rise increases for lower (effective) noise. In the decision-aligned case, the traces rise steeply and together. All these characteristics can also be seen in the experimental results in (f), where the input noise level is explicitly varied.
5 Discussion
We have presented a hierarchical neural architecture that implements optimal probabilistic
integration of top-down information and sequentially observed data. We consider a class
of attentional tasks for which top-down modulation of sensory processing can be conceptualized as changes in the prior distribution over implicit stimulus dimensions. We use the
specific example of the Posner spatial cueing task to relate the characteristics of this neural
architecture to experimental literature. The network produces a reaction time distribution
and error rates that qualitatively replicate experimental data. The way these measures depend on valid versus invalid cueing, and on the exact perceived validity of the cue, are
similar to those observed in attentional experiments. Moreover, the activities in various
levels of the hierarchy resemble electrophysiologically recorded activities in the visual cortical neurons during attentional modulation and perceptual discrimination, lending further
credence to the particular encoding and computational mechanisms that we have proposed.
In particular, the intermediate layers demonstrate a multiplicative gain modulation by attention, as observed in primate V4 neurons [8]; and the temporal behavior of the final layer,
representing the marginal posterior, qualitatively replicates the experimental observation that
LIP neurons show noise-dependent firing rate increase when aligned to stimulus onset, and
noise-independent rise when aligned to the decision [13].
Our results illustrate the important concept that priors in a variable in one dimension (space)
can dramatically alter the inferential performance in a completely independent variable
dimension (orientation). In this case, the spatial prior affects the marginal posterior over θ
by altering the relative importance of joint posterior terms in the marginalization process.
This leads to the difference in performance between valid and invalid trials, a difference
that increases with γ. This model elaborates on an earlier phenomenological model [9], by
showing explicitly how marginalizing (in layer III) over activities biased by the prior (in
layer II) produces the effect.
This work has various theoretical and experimental implications. The model presents one
possible reconciliation of cortical and neuromodulatory representations of uncertainty. The
sensory-driven activities (layer I in this model) themselves encode bottom-up uncertainty,
including sensory receptor noise and any processing noise that have occurred up until then.
The top-down information, which specifies the Gaussian component of the spatial prior
p(?), involves two kinds of uncertainty. One determines the locus and spatial extent of
visual attention, the other specifies the relative importance of this top-down bias compared
to the bottom-up stimulus-driven input. The first is highly specific in modality and featural
dimension, presumably originating from higher visual cortical areas (eg parietal cortex for
spatial attention, inferotemporal cortex for complex featural attention). The second is more
generic and may affect different featural dimensions and maybe even different modalities
simultaneously, and is thus more appropriately signalled by a diffusely-projecting neuromodulator such as ACh. This characterization is also in keeping with our previous models
of ACh [21, 7] and experimental data showing that ACh selectively suppresses corticocortical transmission relative to bottom-up processing in primary sensory cortices [22].
The perceptual decision strategy employed in this model is a natural multi-dimensional
extension of SPRT [10]: a decision is made upon the first passage of any one of the posterior
values across a fixed decision threshold. Note that the distribution of reaction times is
skewed to the right (Fig 2(a)), as is commonly observed in visual discrimination tasks [11].
For binary decision tasks modeled using continuous diffusion processes [10, 11, 12, 13, 14],
this skew arises from the properties of the first-passage time distribution (the time at which
a diffusion barrier is first breached, corresponding to a fixed threshold confidence level in
the binary choice). Our multi-choice decision-making realization of visual discrimination,
as an extension of SPRT, also retains this skewed first-passage time distribution. Given
that SPRT is optimal for binary decisions (smallest average response time for a given error
rate), and that the MAP estimate is optimal under 0-1 loss, we conjecture that our particular n-dimensional
generalization of SPRT should be optimal for sequential decision-making under 0-1 loss.
This is an area of active research.
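A schematic sketch of this decision rule, assuming a stream of marginal posterior vectors has already been computed upstream; the 0.9 threshold, the drift values, and the function names are illustrative and not taken from the model.

```python
import numpy as np

def first_passage_decision(posteriors, threshold=0.9):
    """Multi-choice sequential test: commit to an alternative the first
    time any posterior value crosses a fixed threshold; return the
    choice and the reaction time (number of observations consumed)."""
    for t, p in enumerate(posteriors, start=1):
        if p.max() >= threshold:
            return int(np.argmax(p)), t
    return int(np.argmax(posteriors[-1])), len(posteriors)  # forced choice

rng = np.random.default_rng(1)
p, stream = np.full(3, 1 / 3), []
for _ in range(500):                       # noisy evidence favors arm 2
    p = p * np.exp(rng.normal([0.0, 0.0, 0.05], 0.1))
    p /= p.sum()
    stream.append(p.copy())
print(first_passage_decision(stream))      # e.g. (2, rt)
```

Across repeated trials, the first-passage times produced by such a rule form the right-skewed reaction time distributions discussed above.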
There are several important open issues. One is that of noise: our network performs exact
Bayesian inference when activities are deterministic. The potentially deleterious effects of
noise, particularly in log probability space, need to be explored. Another important question is how uncertainty in signal strength, including the absence of a signal, can be detected
and encoded. If the stimulus strength is unknown and can vary over time, then naive integration of bottom-up inputs ignoring the signal-to-noise ratio is no longer optimal. Based
on a slightly different task involving sustained attention or vigilance [23], Brown et al [24]
have made the interesting suggestion that one role for noradrenergic neuromodulation is
to implement a change in the integration strategy when a stimulus is detected. We have
also addressed this issue by ascribing to phasic norepinephrine a related but distinct role in
signaling unexpected state uncertainty (in preparation).
Acknowledgement
We are grateful to Eric Brown, Jonathan Cohen, Phil Holmes, Peter Latham, and Iain Murray for helpful discussions. Funding was from the Gatsby Charitable Foundation.
References
[1] Zemel, R S, Dayan, P, & Pouget, A (1998). Probabilistic interpretation of population codes.
Neural Comput 10: 403-30.
[2] Sahani, M & Dayan, P (2003). Doubly distributional population codes: simultaneous representation of uncertainty and multiplicity. Neural Comput 15: 2255-79.
[3] Barber, M J, Clark, J W, & Anderson, C H (2003). Neural representation of probabilistic information. Neural Comput 15: 1843-64
[4] Rao, R P (2004). Bayesian computation in recurrent neural circuits. Neural Comput 16: 1-38.
[5] Weiss, Y & Fleet, D J (2002). Velocity likelihoods in biological and machine vision. In Probabilistic
Models of the Brain: Perception and Neural Function. Cambridge, MA: MIT Press.
[6] Dayan, P & Yu, A J (2002). Acetylcholine, uncertainty, and cortical inference. In Adv in Neural
Info Process Systems 14.
[7] Yu, A J & Dayan, P (2003). Expected and unexpected uncertainty: ACh and NE in the neocortex. In Adv in Neural Info Process Systems 15.
[8] McAdams, C J & Maunsell, J H R (1999). Effects of attention on orientation-tuning functions
of single neurons in Macaque cortical area V4. J. Neurosci 19: 431-41.
[9] Dayan, P & Zemel R S (1999). Statistical models and sensory attention. In ICANN 1999.
[10] Wald, A (1947). Sequential Analysis. New York: John Wiley & Sons, Inc.
[11] Luce, R D (1986). Response Times: Their Role in Inferring Elementary Mental Organization.
New York: Oxford Univ. Press.
[12] Ratcliff, R (2001). Putting noise into neurophysiological models of simple decision making.
Nat Neurosci 4: 336-7.
[13] Gold, J I & Shadlen, M N (2002). Banburismus and the brain: decoding the relationship between sensory stimuli, decisions, and reward. Neuron 36: 299-308.
[14] Bogacz, Brown, Moehlis, Holmes, & Cohen (2004). The physics of optimal decision making:
a formal analysis of models of performance in two-alternative forced choice tasks, in press.
[15] Posner, M I (1980). Orienting of attention. Q J Exp Psychol 32: 3-25.
[16] Phillips, J M, et al (2000). Cholinergic neurotransmission influences overt orientation of visuospatial attention in the rat. Psychopharm 150:112-6.
[17] Yu, A J et al (2004). Expected and unexpected uncertainties control allocation of attention in a
novel attentional learning task. Soc Neurosci Abst 30:176.17.
[18] Bowman, E M, Brown, V, Kertzman, C, Schwarz, U, & Robinson, D L (1993). Covert orienting
of attention in macaques: I. Effects of behavioral context. J Neurophys 70: 431-434.
[19] Reynolds, J H & Chelazzi, L (2004). Attentional modulation of visual processing. Annu Rev
Neurosci 27: 611-47.
[20] Kastner, S & Ungerleider, L G (2000). Mechanisms of visual attention in the human cortex.
Annu Rev Neurosci 23: 315-41.
[21] Yu, A J & Dayan, P (2002). Acetylcholine in cortical inference. Neural Networks 15: 719-30.
[22] Kimura, F, Fukuda, M, & Tsumoto, T (1999). Acetylcholine suppresses the spread of excitation in the visual cortex revealed by optical recording: possible differential effect depending
on the source of input. Eur J Neurosci 11: 3597-609.
[23] Rajkowski, J, Kubiak, P, & Aston-Jones, P (1994). Locus coeruleus activity in monkey: phasic
and tonic changes are associated with altered vigilance. Synapse 4: 162-4.
[24] Brown, E et al (2004). Simple neural networks that optimize decisions. Int J Bifurcation and
Chaos, in press.
The Power of Selective Memory:
Self-Bounded Learning of Prediction Suffix Trees
Ofer Dekel Shai Shalev-Shwartz Yoram Singer
School of Computer Science & Engineering
The Hebrew University, Jerusalem 91904, Israel
{oferd,shais,singer}@cs.huji.ac.il
Abstract
Prediction suffix trees (PST) provide a popular and effective tool for tasks
such as compression, classification, and language modeling. In this paper we take a decision theoretic view of PSTs for the task of sequence
prediction. Generalizing the notion of margin to PSTs, we present an online PST learning algorithm and derive a loss bound for it. The depth of
the PST generated by this algorithm scales linearly with the length of the
input. We then describe a self-bounded enhancement of our learning algorithm which automatically grows a bounded-depth PST. We also prove
an analogous mistake-bound for the self-bounded algorithm. The result
is an efficient algorithm that neither relies on a-priori assumptions on the
shape or maximal depth of the target PST nor does it require any parameters. To our knowledge, this is the first provably-correct PST learning
algorithm which generates a bounded-depth PST while being competitive with any fixed PST determined in hindsight.
1 Introduction
Prediction suffix trees are elegant, effective, and well studied models for tasks such as
compression, temporal classification, and probabilistic modeling of sequences (see for instance [13, 11, 7, 10, 2]). Different scientific communities gave different names to variants
of prediction suffix trees such as context tree weighting [13] and variable length Markov
models [11, 2]. A PST receives an input sequence of symbols, one symbol at a time, and
predicts the identity of the next symbol in the sequence based on the most recently observed symbols. Techniques for finding a good prediction tree include online Bayesian
mixtures [13], tree growing based on PAC-learning [11], and tree pruning based on structural risk minimization [8]. All of these algorithms either assume an a-priori bound on the
maximal number of previous symbols which may be used to extend predictions or use a
pre-defined template-tree beyond which the learned tree cannot grow. Motivated by statistical modeling of biological sequences, Apostolico and Bejerano [1] showed that the bound
on the maximal depth can be removed by devising a smart modification of Ron et al.'s algorithm [11] (and in fact many other variants), yielding an algorithm with time and space
requirements that are linear in the length of the input. However, when modeling very long
sequences, both the a-priori bound and the linear space modification might pose serious
computational problems.
In this paper we describe a variant of prediction trees for which we are able to devise a learning algorithm that grows bounded-depth trees, while remaining competitive with any fixed prediction tree chosen in hindsight. The resulting time and space requirements of our algorithm are bounded and scale polynomially with the complexity of the best prediction tree. Thus, we are able to sidestep the pitfalls of previous algorithms. The setting we employ is slightly more general than context-based sequence modeling as we assume that we are provided with both an input stream and an output stream. For concreteness, we assume that the input stream is a sequence of vectors x_1, x_2, ... (x_t ∈ R^n) and the output stream is a sequence of symbols y_1, y_2, ... over a finite alphabet Y. We denote a sub-sequence y_i, ..., y_j of the output stream by y_i^j and the set of all possible sequences by Y*. We denote the length of a sequence s by |s|. Our goal is to correctly predict each symbol in the output stream y_1, y_2, .... On each time-step t we predict the symbol y_t based on an arbitrarily long context of previously observed output stream symbols, y_1^{t-1}, and based on the current input vector x_t. For simplicity, we focus on the binary prediction case where |Y| = 2 and for convenience we use Y = {−1, +1} (or {−, +} for short) as our output alphabet. Our algorithms and analysis can be adapted to larger output alphabets using ideas from [5].

Figure 1: An illustration of the prediction process induced by a PST. The context in this example is: + + +. (The figure shows a tree with node values 0 at the root and −3, −1, 4, −2, 7 at the remaining nodes.)
The hypotheses we use are confidence-rated and are of the form h : X × Y* → R where the sign of h is the predicted symbol and the magnitude of h is the confidence in this prediction. Each hypothesis is parameterized by a triplet (w, T, g) where w ∈ R^n, T is a suffix-closed subset of Y* and g is a context function from T into R (T is suffix-closed if for every s ∈ T it holds that all of the suffixes of s are also in T). The prediction extended by a hypothesis h = (w, T, g) for the t'th symbol is,

    h(x_t, y_1^{t-1}) = w · x_t + Σ_{i : y_{t-i}^{t-1} ∈ T} 2^{−i/2} g(y_{t-i}^{t-1}).    (1)
In words, the prediction is the sum of an inner product between the current input vector x_t and the weight vector w, and the application of the function g to all the suffixes of the output stream observed thus far that also belong to T. Since T is a suffix-closed set, it can be described as a rooted tree whose nodes are the sequences constituting T. The children of a node s ∈ T are all the sequences σs ∈ T (σ ∈ Y). Following the terminology of [11], we use the term prediction suffix tree (PST) for T and refer to s ∈ T as a sequence and a node interchangeably. We denote the length of the longest sequence in T by depth(T). Given g, each node s ∈ T is associated with a value g(s). Note that in the prediction process, the contribution of each context y_{t-i}^{t-1} is multiplied by a factor which is exponentially decreasing in the length of y_{t-i}^{t-1}. This type of demotion of long suffixes is common to most PST-based approaches [13, 7, 10] and reflects the a-priori assumption that statistical correlations tend to decrease as the time between events increases. An illustration of a PST where T = {ε, −, +, +−, ++, −++, +++}, with the associated prediction for y_6 given the context y_1^5 = −−+++, is shown in Fig. 1. The predicted value of y_6 in the example is sign(w · x_t + 2^{−1/2} · (−1) + 2^{−1} · 4 + 2^{−3/2} · 7). Given T and g we define the extension of g to all strings over Y* by setting g(s) = 0 for s ∉ T. Using this extension, Eq. (1) can be simplified to,

    h(x_t, y_1^{t-1}) = w · x_t + Σ_{i=1}^{t−1} 2^{−i/2} g(y_{t-i}^{t-1}).    (2)
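To make Eq. (2) concrete, here is a minimal sketch in Python that stores the context function g as a dictionary (missing keys mean g(s) = 0, i.e., s ∉ T). The tree values mirror the Fig. 1 example where they are recoverable; the weight vector, the input, and the value attached to the node +− are placeholders of ours.

```python
import numpy as np

def pst_predict(w, g, x_t, history):
    """h(x_t, y_1^{t-1}) as in Eq. (2): an inner product with the input
    plus exponentially demoted contributions of every suffix of the
    observed output stream that appears in the tree."""
    score = float(np.dot(w, x_t))
    for i in range(1, len(history) + 1):
        suffix = history[-i:]                 # the context y_{t-i}^{t-1}
        score += 2.0 ** (-i / 2.0) * g.get(suffix, 0.0)
    return score

# PST of Fig. 1; the value of '+-' is not used here and is illustrative.
g = {"": 0.0, "-": -3.0, "+": -1.0, "+-": 0.0, "++": 4.0,
     "-++": -2.0, "+++": 7.0}
w, x = np.zeros(3), np.ones(3)                # hypothetical inputs
# history y_1^5 = --+++ gives 2**-0.5*(-1) + 2**-1*4 + 2**-1.5*7
print(pst_predict(w, g, x, "--+++"))
```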
We use the online learning loss-bound model to analyze our algorithms. In the online
model, learning takes place in rounds. On each round, an instance xt is presented to the
online algorithm, which in turn predicts the next output symbol. The predicted symbol,
denoted ŷ_t, is defined to be the sign of h_t(x_t, y_1^{t-1}). Then, the correct symbol y_t is revealed
and with the new input-output pair (x_t, y_t) on hand, a new hypothesis h_{t+1} is generated
which will be used to predict the next output symbol, y_{t+1}. In our setting, the hypotheses
h_t we generate are of the form given by Eq. (2). Most previous PST learning algorithms
employed probabilistic approaches for learning. In contrast, we use a decision theoretic
approach by adapting the notion of margin to our setting. In the context of PSTs, this approach was first suggested by Eskin in [6]. We define the margin attained by the hypothesis
h_t to be y_t h_t(x_t, y_1^{t-1}). Whenever the current symbol y_t and the output of the hypothesis
agree in their sign, the margin is positive. We would like our online algorithm to correctly
predict the output stream y1 , . . . , yT with a sufficiently large margin of at least 1. This
construction is common to many online and batch learning algorithms for classification
[12, 4]. Specifically, we use the hinge loss as our margin-based loss function, which serves as a proxy for the prediction error. Formally, the hinge loss attained on round t is defined as ℓ_t = max{0, 1 − y_t h_t(x_t, y_1^{t-1})}. The hinge loss equals zero when the margin exceeds
1 and otherwise grows linearly as the margin gets smaller. The online algorithms discussed
in this paper are designed to suffer small cumulative hinge-loss.
Our algorithms are analyzed by comparing their cumulative hinge-losses and prediction errors with those of any fixed hypothesis h* = (w*, T*, g*) which can be chosen in hindsight, after observing the entire input and output streams. In deriving our loss and mistake bounds we take into account the complexity of h*. Informally, the larger T* and the bigger the coefficients of g*(s), the more difficult it is to compete with h*. The squared norm of the context function g is defined as,

    ||g||² = Σ_{s ∈ T} (g(s))².    (3)

The complexity of a hypothesis h (and h* in particular) is defined as the sum of ||w||² and ||g||². Using the extension of g to Y* we can evaluate ||g||² by summing over all s ∈ Y*.
We present two online algorithms for learning large-margin PSTs. The first incrementally
constructs a PST which grows linearly with the length of the input and output sequences,
and thus can be arbitrarily large. While this construction is quite standard and similar
methods were employed by previous PST-learning algorithms, it provides us with an infrastructure for our second algorithm which grows bounded-depth PSTs. We derive an
explicit bound on the maximal depth of the PSTs generated by this algorithm. We prove
that both algorithms are competitive with any fixed PST constructed in hindsight. To our
knowledge, this is the first provably correct construction of a PST-learning algorithm whose
space complexity does not depend on the length of the input-output sequences.
2 Learning PSTs of Unbounded Depth

Having described the online prediction paradigm and the form of hypotheses used, we are left with the task of defining the initial hypothesis h_1 and the hypothesis update rule. To facilitate our presentation, we assume that all of the instances presented to the online algorithm have a bounded Euclidean norm, namely, ||x_t|| ≤ 1. First, we define the initial hypothesis to be h_1 ≡ 0. We do so by setting w_1 = (0, ..., 0), T_1 = {ε} and g_1(·) ≡ 0. As a consequence, the first prediction always incurs a unit loss. Next, we define the updates applied to the weight vector w_t and to the PST at the end of round t. The weight vector is updated by w_{t+1} = w_t + y_t τ_t x_t, where τ_t = ℓ_t/(||x_t||² + 3). Note that if the margin attained on this round is at least 1 then ℓ_t = 0 and thus w_{t+1} = w_t. This type of update is common to other online learning algorithms (e.g. [3]). We would like to note in passing that the operation w_t · x_t in Eq. (2) can be replaced with an inner product defined via a Mercer kernel. To see this, note that w_t can be rewritten explicitly as Σ_{i=1}^{t−1} y_i τ_i x_i and
Set: ?t = 0, Pt = Pt?1 , dt = 0, and continue to the next iteration
else
?mo
n
l
?p
2
Pt-1
+ ?t `t ? Pt-1
Set: dt = max j , 2 log2 (2?t ) ? 2 log2
Set: Pt = Pt-1 + 2?t 2-dt /2
modification required for
self-bounded version
initialize: w1 = (0, . . . , 0), T1 = {}, g1 (s) = 0 ?s ? Y ? , P0 = 0
for t = 1, 2, . . . do
Receive an instance xt s.t. kxt k ? 1
t-1
Define: j = max{i : yt-i
? Tt }
`
?
` t-1 ?
Pj
t-1
-i/2
Calculate: ht xt , y1 = wt ? xt +
gt yt-i
i=1 2
` `
??
Predict: y?t = sign ht xt , y1t-1
??
?
`
Receive yt and suffer loss: `t = max 0, 1 ? yt ht xt , y1t-1
`
?
Set: ?t = `t / kxt k2 + 3 and dt = t ? 1
Update weight vector: wt+1 = wt + yt ?t xt
Update tree:
t-1
Tt+1 = Tt ?
? {yt-i
: 1 ? i ? dt }
t-1
gt (s) + yt 2-|s|/2 ?t if s ? {yt-i
: 1 ? i ? dt }
gt+1 (s) =
gt (s)
otherwise
Figure 2: The online algorithms for learning a PST. The code outside the boxes defines the
base algorithm for learning unbounded-depth PSTs. Including the pseudocode inside the
boxes gives the self-bounded version.
P
therefore wt ? xt = i yi ?P
i xi ? xt . Using a kernel operator K simply amounts to replacing
the latter expression with i yi ?i K(xi , xt ).
The update applied to the context function g_t also depends on the scaling factor τ_t. However, g_t is updated only on those strings which participated in the prediction of ŷ_t, namely strings of the form y_{t-i}^{t-1} for 1 ≤ i < t. Formally, for 1 ≤ i < t our update takes the form g_{t+1}(y_{t-i}^{t-1}) = g_t(y_{t-i}^{t-1}) + y_t 2^{−i/2} τ_t. For any other string s, g_{t+1}(s) = g_t(s). The pseudocode of our algorithm is given in Fig. 2.
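A compact sketch of the unbounded-depth algorithm (Fig. 2 without the boxed lines), assuming binary symbols are stored as a string of '+' and '-' characters; this is our own rendering of the update rules above, not the authors' code.

```python
import numpy as np

class UnboundedPST:
    """Online PST learner: after each round, w_{t+1} = w_t + y_t*tau_t*x_t
    and g_{t+1}(s) = g_t(s) + y_t * 2**(-|s|/2) * tau_t for every suffix s
    of the observed history up to depth d_t = t - 1."""

    def __init__(self, dim):
        self.w = np.zeros(dim)
        self.g = {}           # context function; absent keys mean 0
        self.history = ""     # output stream observed so far

    def score(self, x):
        h = float(np.dot(self.w, x))
        for i in range(1, len(self.history) + 1):
            h += 2.0 ** (-i / 2.0) * self.g.get(self.history[-i:], 0.0)
        return h

    def round(self, x, y):    # y in {-1, +1}
        h = self.score(x)
        loss = max(0.0, 1.0 - y * h)
        tau = loss / (float(np.dot(x, x)) + 3.0)
        if tau > 0.0:
            self.w += y * tau * x
            for i in range(1, len(self.history) + 1):  # d_t = t - 1
                s = self.history[-i:]
                self.g[s] = self.g.get(s, 0.0) + y * 2.0 ** (-i / 2.0) * tau
        self.history += "+" if y > 0 else "-"
        return h              # caller takes sign(h) as the prediction
```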
The following theorem states that the algorithm in Fig. 2 is 2-competitive with any fixed hypothesis h* for which ||g*|| is finite.

Theorem 1. Let x_1, ..., x_T be an input stream and let y_1, ..., y_T be an output stream, where every x_t ∈ R^n, ||x_t|| ≤ 1 and every y_t ∈ {−1, 1}. Let h* = (w*, T*, g*) be an arbitrary hypothesis such that ||g*|| < ∞ and which attains the loss values ℓ*_1, ..., ℓ*_T on the input-output streams. Let ℓ_1, ..., ℓ_T be the sequence of loss values attained by the unbounded-depth algorithm in Fig. 2 on the input-output streams. Then it holds that,

    Σ_{t=1}^{T} ℓ_t² ≤ 4 (||w*||² + ||g*||²) + 2 Σ_{t=1}^{T} (ℓ*_t)².

In particular, the above bounds the number of prediction mistakes made by the algorithm.
Proof. For every t = 1, ..., T define Δ_t = ||w_t − w*||² − ||w_{t+1} − w*||² and,

    Δ̄_t = Σ_{s ∈ Y*} [ (g_t(s) − g*(s))² − (g_{t+1}(s) − g*(s))² ].    (4)

Note that ||g_t||² is finite for any value of t and that ||g*||² is finite due to our assumption, therefore Δ̄_t is finite and well-defined. We prove the theorem by devising upper and lower bounds on Σ_t (Δ_t + Δ̄_t), beginning with the upper bound. Σ_t Δ_t is a telescopic sum which collapses to ||w_1 − w*||² − ||w_{T+1} − w*||². Similarly,

    Σ_{t=1}^{T} Δ̄_t = Σ_{s ∈ Y*} (g_1(s) − g*(s))² − Σ_{s ∈ Y*} (g_{T+1}(s) − g*(s))².    (5)

Omitting negative terms and using the facts that w_1 = (0, ..., 0) and g_1(·) ≡ 0, we get,

    Σ_{t=1}^{T} (Δ_t + Δ̄_t) ≤ ||w*||² + Σ_{s ∈ Y*} (g*(s))² = ||w*||² + ||g*||².    (6)

Having proven an upper bound on Σ_t (Δ_t + Δ̄_t), we turn to the lower bound. First, Δ_t can be rewritten as Δ_t = ||w_t − w*||² − ||(w_{t+1} − w_t) + (w_t − w*)||² and by expansion of the right-hand term we get that Δ_t = −||w_{t+1} − w_t||² − 2 (w_{t+1} − w_t) · (w_t − w*). Using the value of w_{t+1} as defined in the update rule of the algorithm (w_{t+1} = w_t + y_t τ_t x_t) gives,

    Δ_t = −τ_t² ||x_t||² − 2 y_t τ_t x_t · (w_t − w*).    (7)
Next, we use similar manipulations to rewrite Δ̄_t. Unifying the two sums that make up Δ̄_t in Eq. (4) and adding null terms of the form 0 = g_t(s) − g_t(s), we obtain,

    Δ̄_t = Σ_{s ∈ Y*} [ (g_t(s) − g*(s))² − ( (g_{t+1}(s) − g_t(s)) + (g_t(s) − g*(s)) )² ]
        = Σ_{s ∈ Y*} [ −(g_{t+1}(s) − g_t(s))² − 2 (g_{t+1}(s) − g_t(s)) (g_t(s) − g*(s)) ].

Let d_t = t − 1 as defined in Fig. 2. Using the fact that g_{t+1} differs from g_t only on strings of the form y_{t-i}^{t-1}, where g_{t+1}(y_{t-i}^{t-1}) = g_t(y_{t-i}^{t-1}) + y_t 2^{−i/2} τ_t, we can write Δ̄_t as,

    Δ̄_t = −Σ_{i=1}^{d_t} 2^{−i} τ_t² − 2 Σ_{i=1}^{d_t} y_t 2^{−i/2} τ_t ( g_t(y_{t-i}^{t-1}) − g*(y_{t-i}^{t-1}) ).    (8)

Summing Eqs. (7)-(8) gives,

    Δ_t + Δ̄_t = −τ_t² ( ||x_t||² + Σ_{i=1}^{d_t} 2^{−i} ) − 2 τ_t y_t ( w_t · x_t + Σ_{i=1}^{d_t} 2^{−i/2} g_t(y_{t-i}^{t-1}) )
                 + 2 τ_t y_t ( w* · x_t + Σ_{i=1}^{d_t} 2^{−i/2} g*(y_{t-i}^{t-1}) ).    (9)

Using Σ_{i=1}^{d_t} 2^{−i} ≤ 1 with the definitions of h_t and h* from Eq. (2), we get that,

    Δ_t + Δ̄_t ≥ −τ_t² ( ||x_t||² + 1 ) − 2 τ_t y_t h_t(x_t, y_1^{t-1}) + 2 τ_t y_t h*(x_t, y_1^{t-1}).    (10)
Denote the right-hand side of Eq. (10) by ρ_t and recall that the loss is defined as max{0, 1 − y_t h_t(x_t, y_1^{t-1})}. Therefore, if ℓ_t > 0 then −y_t h_t(x_t, y_1^{t-1}) = ℓ_t − 1. Multiplying both sides of this equality by τ_t gives −τ_t y_t h_t(x_t, y_1^{t-1}) = τ_t (ℓ_t − 1). Now note that this equality also holds when ℓ_t = 0 since then τ_t = 0 and both sides of the equality simply equal zero. Similarly, we have that y_t h*(x_t, y_1^{t-1}) ≥ 1 − ℓ*_t. Plugging these two inequalities into Eq. (10) gives that,

    ρ_t ≥ −τ_t² ( ||x_t||² + 1 ) + 2 τ_t (ℓ_t − 1) + 2 τ_t (1 − ℓ*_t),

which in turn equals −τ_t² (||x_t||² + 1) + 2 τ_t ℓ_t − 2 τ_t ℓ*_t. The lower bound on ρ_t still holds if we subtract from it the non-negative term (2^{1/2} τ_t − 2^{−1/2} ℓ*_t)², yielding,

    ρ_t ≥ −τ_t² (||x_t||² + 1) + 2 τ_t ℓ_t − 2 τ_t ℓ*_t − 2 τ_t² + 2 τ_t ℓ*_t − (ℓ*_t)²/2
        = −τ_t² (||x_t||² + 3) + 2 τ_t ℓ_t − (ℓ*_t)²/2.

Using the definition of τ_t and the assumption that ||x_t||² ≤ 1, we get,

    ρ_t ≥ −τ_t ℓ_t + 2 τ_t ℓ_t − (ℓ*_t)²/2 = ℓ_t² / (||x_t||² + 3) − (ℓ*_t)²/2 ≥ ℓ_t²/4 − (ℓ*_t)²/2.    (11)

Since Eq. (10) implies that Δ_t + Δ̄_t ≥ ρ_t, summing Δ_t + Δ̄_t over all values of t gives,

    Σ_{t=1}^{T} (Δ_t + Δ̄_t) ≥ (1/4) Σ_{t=1}^{T} ℓ_t² − (1/2) Σ_{t=1}^{T} (ℓ*_t)².

Combining the bound above with Eq. (6) gives the bound stated by the theorem. Finally, we obtain a mistake bound by noting that whenever a prediction mistake occurs, ℓ_t ≥ 1.
We would like to note that the algorithm for learning unbounded-depth PSTs constructs a sequence of PSTs, T_1, ..., T_T, such that depth(T_t) may equal t. Furthermore, the number of new nodes added to the tree on round t is on the order of t, resulting in T_t having O(t²) nodes. However, PST implementation tricks in [1] can be used to reduce the space complexity of the algorithm from quadratic to linear in t.
3 Self-Bounded Learning of PSTs
The online learning algorithm presented in the previous section has one major drawback: the PSTs it generates can keep growing with each online round. We now describe a modification to the algorithm which places a limit on the depth of the PST that is learned. Our technique does not rely on arbitrary assumptions on the structure of the tree (e.g. maximal tree depth) nor does it require any parameters. The algorithm determines the depth to which the PST should be updated automatically, and is therefore named the self-bounded algorithm for PST learning. The self-bounded algorithm is obtained from the original unbounded algorithm by adding the lines enclosed in boxes in Fig. 2.
A new variable d_t is calculated on every online iteration. On rounds where an update takes place, the algorithm updates the PST up to depth d_t, adding nodes if necessary. Below this depth, no nodes are added and the context function is not modified. The definition of d_t is slightly involved; however, it enables us to prove that we remain competitive with any fixed hypothesis (Thm. 2) while maintaining a bounded-depth PST (Thm. 3). A point worth noting is that the criterion for performing updates has also changed. Before, the online hypothesis was modified whenever ℓ_t > 0. Now, an update occurs only when ℓ_t > 1/2, tolerating small values of loss. Intuitively, this relaxed margin requirement is what enables us to avoid deepening the tree. The algorithm is allowed to predict with lower
confidence and in exchange the PST can be kept small. The trade-off between PST size
and confidence of prediction is adjusted automatically, extending ideas from [9]. While the
following theorem provides a loss bound, this bound can be immediately used to bound the
number of prediction mistakes made by the algorithm.
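Before the theorem, here is a concrete reading of the boxed depth computation in Fig. 2 as a minimal sketch; the function signature is ours, and the corner cases of the very first rounds (where j is not yet defined) are glossed over.

```python
import math

def self_bounded_step(j, tau, loss, P_prev):
    """Return (d_t, tau_t, P_t) for one round of the self-bounded
    algorithm: skip the update entirely when the loss is at most 1/2,
    otherwise deepen the tree only as far as the analysis requires."""
    if loss <= 0.5:
        return 0, 0.0, P_prev                   # continue to next round
    root = math.sqrt(P_prev ** 2 + tau * loss) - P_prev
    d = max(j, math.ceil(2 * math.log2(2 * tau) - 2 * math.log2(root)))
    P = P_prev + 2 * tau * 2.0 ** (-d / 2.0)
    return d, tau, P
```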
Theorem 2. Let x_1, ..., x_T be an input stream and let y_1, ..., y_T be an output stream, where every x_t ∈ R^n, ||x_t|| ≤ 1 and every y_t ∈ {−1, 1}. Let h* = (w*, T*, g*) be an arbitrary hypothesis such that ||g*|| < ∞ and which attains the loss values ℓ*_1, ..., ℓ*_T on the input-output streams. Let ℓ_1, ..., ℓ_T be the sequence of loss values attained by the self-bounded algorithm in Fig. 2 on the input-output streams. Then the sum of squared losses attained on those rounds where ℓ_t > 1/2 is bounded by,

    Σ_{t : ℓ_t > 1/2} ℓ_t² ≤ ( (1 + √5) ||g*|| + 2 ||w*|| + √2 ( Σ_{t=1}^{T} (ℓ*_t)² )^{1/2} )².
Proof. We define Δ_t and Δ̄_t as in the proof of Thm. 1. First note that the inequality in Eq. (9) in the proof of Thm. 1 still holds. Using the fact that Σ_{i=1}^{d_t} 2^{−i} ≤ 1 with the definitions of h_t and h* from Eq. (2), Eq. (9) becomes,

    Δ_t + Δ̄_t ≥ −τ_t² (||x_t||² + 1) − 2 τ_t y_t h_t(x_t, y_1^{t-1}) + 2 τ_t y_t h*(x_t, y_1^{t-1})
                 − 2 τ_t y_t Σ_{i=d_t+1}^{t−1} 2^{−i/2} g*(y_{t-i}^{t-1}).    (12)

Using the Cauchy-Schwartz inequality we get that

    | Σ_{i=d_t+1}^{t−1} 2^{−i/2} g*(y_{t-i}^{t-1}) | ≤ ( Σ_{i=d_t+1}^{t−1} 2^{−i} )^{1/2} ( Σ_{i=d_t+1}^{t−1} g*(y_{t-i}^{t-1})² )^{1/2} ≤ 2^{−d_t/2} ||g*||.

Plugging the above into Eq. (12) and using the definition of ρ_t from the proof of Thm. 1 gives Δ_t + Δ̄_t ≥ ρ_t − 2 τ_t 2^{−d_t/2} ||g*||. Using the bound on ρ_t from Eq. (11) gives,

    Δ_t + Δ̄_t ≥ τ_t ℓ_t − (ℓ*_t)²/2 − 2 τ_t 2^{−d_t/2} ||g*||.    (13)

For every 1 ≤ t ≤ T, define L_t = Σ_{i=1}^{t} τ_i ℓ_i and P_t = Σ_{i=1}^{t} τ_i 2^{1−d_i/2}, and let P_0 = L_0 = 0. Summing Eq. (13) over t and comparing to the upper bound in Eq. (6) we get,

    L_T ≤ ||g*||² + ||w*||² + (1/2) Σ_{t=1}^{T} (ℓ*_t)² + ||g*|| P_T.    (14)
We now use an inductive argument to prove that P_t ≤ √(L_t) for all 0 ≤ t ≤ T. This inequality trivially holds for t = 0. Assume that P_{t−1}² ≤ L_{t−1}. Expanding P_t we get that

    P_t² = ( P_{t−1} + τ_t 2^{1−d_t/2} )² = P_{t−1}² + P_{t−1} 2^{2−d_t/2} τ_t + 2^{2−d_t} τ_t².    (15)

We therefore need to show that the right-hand side of Eq. (15) is at most L_t. The definition of d_t implies that 2^{−d_t/2} is at most ( (P_{t−1}² + τ_t ℓ_t)^{1/2} − P_{t−1} ) / (2 τ_t). Plugging this fact into the right-hand side of Eq. (15) gives that P_t² cannot exceed P_{t−1}² + τ_t ℓ_t. Using the inductive assumption P_{t−1}² ≤ L_{t−1} we get that P_t² ≤ L_{t−1} + τ_t ℓ_t = L_t, and the inductive argument is proven. In particular, we have shown that P_T ≤ √(L_T). Combining this inequality with Eq. (14) we get that

    L_T − ||g*|| √(L_T) − ||g*||² − ||w*||² − (1/2) Σ_{t=1}^{T} (ℓ*_t)² ≤ 0.

The above is a quadratic inequality in √(L_T) from which it follows that √(L_T) can be at most as large as the positive root of this equation, namely,

    √(L_T) ≤ (1/2) ( ||g*|| + ( 5 ||g*||² + 4 ||w*||² + 2 Σ_{t=1}^{T} (ℓ*_t)² )^{1/2} ).

Using the fact that √(a² + b²) ≤ a + b (a, b ≥ 0) we get that,

    √(L_T) ≤ ((1 + √5)/2) ||g*|| + ||w*|| + (1/√2) ( Σ_{t=1}^{T} (ℓ*_t)² )^{1/2}.    (16)

If ℓ_t ≤ 1/2 then τ_t ℓ_t = 0 and otherwise τ_t ℓ_t ≥ ℓ_t²/4. Therefore, the sum of ℓ_t² over the rounds for which ℓ_t > 1/2 is at most 4 L_T, which yields the bound of the theorem.
Note that if there exists a fixed hypothesis with ||g*|| < ∞ which attains a margin of 1 on the entire input sequence, then the bound of Thm. 2 reduces to a constant. Our next theorem states that the algorithm indeed produces bounded-depth PSTs. Its proof is omitted due to the lack of space.
Theorem 3. Under the conditions of Thm. 2, let T_1, ..., T_T be the sequence of PSTs generated by the algorithm in Fig. 2. Then, for all 1 ≤ t ≤ T,

    depth(T_t) ≤ 9 + 2 log₂( 2 ||g*|| + ||w*|| + (1/√2) ( Σ_{t=1}^{T} (ℓ*_t)² )^{1/2} + 1 ).
The bound on tree depth given in Thm. 3 becomes particularly interesting when there exists some fixed hypothesis h* for which Σ_t (ℓ*_t)² is finite and independent of the total length of the output sequence, denoted by T. In this case, Thm. 3 guarantees that the depth of the PST generated by the self-bounded algorithm is smaller than a constant which does not depend on T. We also would like to emphasize that our algorithm is competitive even with a PST which is deeper than the PST constructed by the algorithm. This can be accomplished by allowing the algorithm's predictions to attain lower confidence than the predictions made by the fixed PST with which it is competing.
Acknowledgments This work was supported by the Programme of the European Community, under the PASCAL Network of Excellence, IST-2002-506778 and by the Israeli
Science Foundation grant number 522-04.
References
[1] G. Bejerano and A. Apostolico. Optimal amnesic probabilistic automata, or, how to learn and classify proteins in linear time and space. Journal of Computational Biology, 7(3/4):381-393, 2000.
[2] P. Buhlmann and A.J. Wyner. Variable length Markov chains. The Annals of Statistics, 27(2):480-513, 1999.
[3] K. Crammer, O. Dekel, S. Shalev-Shwartz, and Y. Singer. Online passive-aggressive algorithms. In Advances in Neural Information Processing Systems 16, 2003.
[4] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press, 2000.
[5] O. Dekel, J. Keshet, and Y. Singer. Large margin hierarchical classification. In Proceedings of the Twenty-First International Conference on Machine Learning, 2004.
[6] E. Eskin. Sparse Sequence Modeling with Applications to Computational Biology and Intrusion Detection. PhD thesis, Columbia University, 2002.
[7] D.P. Helmbold and R.E. Schapire. Predicting nearly as well as the best pruning of a decision tree. Machine Learning, 27(1):51-68, April 1997.
[8] M. Kearns and Y. Mansour. A fast, bottom-up decision tree pruning algorithm with near-optimal generalization. In Proceedings of the Fourteenth International Conference on Machine Learning, 1996.
[9] P. Auer, N. Cesa-Bianchi, and C. Gentile. Adaptive and self-confident on-line learning algorithms. Journal of Computer and System Sciences, 64(1):48-75, 2002.
[10] F.C. Pereira and Y. Singer. An efficient extension to mixture techniques for prediction and decision trees. Machine Learning, 36(3):183-199, 1999.
[11] D. Ron, Y. Singer, and N. Tishby. The power of amnesia: learning probabilistic automata with variable memory length. Machine Learning, 25(2):117-150, 1996.
[12] V.N. Vapnik. Statistical Learning Theory. Wiley, 1998.
[13] F.M.J. Willems, Y.M. Shtarkov, and T.J. Tjalkens. The context tree weighting method: basic properties. IEEE Transactions on Information Theory, 41(3):653-664, 1995.
| 2549 |@word version:2 compression:2 norm:2 dekel:3 p0:2 incurs:1 initial:2 bejerano:2 current:3 comparing:2 shape:1 enables:2 designed:1 update:12 devising:2 beginning:1 short:1 eskin:2 provides:2 infrastructure:1 node:8 ron:2 unbounded:5 shtarkov:1 constructed:2 amnesia:1 prove:5 inside:1 excellence:1 indeed:1 nor:2 growing:2 decreasing:1 pitfall:1 automatically:3 becomes:2 provided:1 bounded:18 null:1 israel:1 kg:19 what:1 string:5 hindsight:4 finding:1 guarantee:1 temporal:1 every:7 k2:31 schwartz:1 unit:1 grant:1 positive:2 t1:4 engineering:1 before:1 mistake:6 consequence:1 limit:1 might:1 studied:1 collapse:1 acknowledgment:1 yj:1 differs:1 vious:1 adapting:1 attain:1 pre:1 confidence:5 word:1 protein:1 get:11 cannot:2 convenience:1 operator:1 context:12 risk:1 telescopic:1 yt:59 jerusalem:1 tjalkens:1 automaton:2 simplicity:1 immediately:1 helmbold:1 rule:2 deriving:1 notion:2 analogous:1 updated:3 annals:1 target:1 construction:3 pt:24 hypothesis:20 trick:1 particularly:1 predicts:2 observed:3 bottom:1 calculate:1 decrease:1 removed:1 trade:1 complexity:5 cristianini:1 depend:2 rewrite:1 smart:1 alphabet:3 fast:1 effective:2 describe:3 shalev:2 outside:1 whose:2 quite:1 y1t:12 larger:2 otherwise:3 statistic:1 g1:4 online:18 sequence:24 kxt:16 maximal:5 product:2 combining:2 enhancement:1 requirement:3 extending:1 produce:1 derive:2 ac:1 school:1 eq:20 c:1 predicted:3 implies:2 drawback:1 correct:3 kgt:1 require:2 exchange:1 generalization:1 biological:1 adjusted:1 extension:4 hold:6 sufficiently:1 predict:6 mo:1 major:1 a2:1 omitted:1 tool:1 reflects:1 minimization:1 always:1 modified:2 avoid:1 l0:1 focus:1 longest:1 intrusion:1 contrast:1 attains:3 suffix:11 entire:2 selective:1 provably:2 classification:4 pascal:1 denoted:2 priori:4 initialize:1 equal:4 construct:2 having:3 biology:2 y6:2 kw:9 nearly:1 t2:11 serious:1 employ:1 kwt:5 replaced:1 detection:1 mixture:2 analyzed:1 yielding:2 chain:1 necessary:1 tree:26 euclidean:1 taylor:1 instance:4 classify:1 modeling:6 subset:1 tishby:1 confident:1 international:2 huji:1 probabilistic:4 off:1 w1:3 squared:2 thesis:1 cesa:1 deepening:1 sidestep:1 return:1 account:1 aggressive:1 b2:1 coefficient:1 explicitly:1 depends:1 stream:18 view:1 h1:2 closed:3 root:1 analyze:1 observing:1 competitive:6 shai:1 kgk2:3 contribution:1 il:1 yield:1 bayesian:1 tolerating:1 multiplying:1 worth:1 whenever:3 definition:6 involved:1 associated:2 proof:6 di:1 pst:45 popular:1 recall:1 knowledge:2 auer:1 attained:6 dt:24 april:1 box:3 furthermore:1 correlation:1 hand:5 receives:1 replacing:1 lack:1 incrementally:1 defines:1 scientific:1 grows:5 facilitate:1 omitting:1 name:1 y2:2 inductive:2 equality:3 round:10 interchangeably:1 self:10 rooted:1 criterion:1 theoretic:2 tt:8 passive:1 recently:1 common:3 pseudocode:2 quences:1 exponentially:1 induc:1 extend:1 sein:1 belong:1 discussed:1 kwk2:1 refer:1 cambridge:1 trivially:1 similarly:2 language:1 shawe:1 kw1:1 gt:27 base:1 showed:1 manipulation:1 inequality:6 binary:1 arbitrarily:2 continue:1 yi:5 devise:1 accomplished:1 gentile:1 relaxed:1 impose:1 employed:2 paradigm:1 pre1:1 reduces:1 exceeds:1 long:3 bigger:1 plugging:3 prediction:30 variant:3 basic:1 iteration:2 kernel:2 receive:2 participated:1 else:1 grow:1 tend:1 elegant:1 structural:1 near:1 noting:2 revealed:1 exceed:1 gave:1 competing:1 inner:2 idea:2 reduce:1 motivated:1 expression:1 suffer:2 passing:1 informally:1 amount:1 generate:1 schapire:1 sign:5 correctly:2 demotion:1 write:1 ist:1 terminology:1 neither:1 pj:1 ht:16 kept:1 
concreteness:1 sum:6 compete:1 fourteenth:1 parameterized:1 named:1 place:2 decision:5 scaling:1 bound:25 quadratic:2 adapted:1 x2:1 generates:2 argument:2 performing:1 smaller:2 slightly:2 remain:1 modification:4 intuitively:1 equation:2 agree:1 previously:1 turn:2 singer:6 serf:1 end:1 ofer:1 operation:1 rewritten:2 multiplied:1 pdt:5 hierarchical:1 batch:1 original:1 remaining:1 include:1 log2:3 hinge:5 maintaining:1 unifying:1 pt2:2 yoram:1 added:2 occurs:2 shais:1 cauchy:1 length:11 code:1 illustration:2 hebrew:1 difficult:1 negative:2 stated:1 implementation:1 twenty:1 allowing:1 upper:5 bianchi:1 apostolico:2 amnesic:1 markov:2 willems:1 finite:6 defining:1 extended:1 y1:6 rn:3 mansour:1 arbitrary:3 thm:9 community:2 duced:1 buhlmann:1 tive:1 pair:1 namely:3 required:1 cast:1 learned:2 israeli:1 beyond:1 able:2 suggested:1 below:1 max:5 memory:2 including:1 power:2 event:1 rely:1 predicting:1 rated:1 wyner:1 columbia:1 ina:1 loss:19 interesting:1 proven:2 enclosed:1 foundation:1 proxy:1 mercer:1 changed:1 supported:1 side:5 deeper:1 template:1 sparse:1 depth:24 calculated:1 cumulative:2 made:3 adaptive:1 simplified:1 programme:1 far:1 polynomially:1 constituting:1 transaction:1 pruning:3 emphasize:1 keep:1 summing:4 xi:3 shwartz:2 triplet:1 learn:1 expanding:1 expansion:1 european:1 linearly:3 oferd:1 child:1 allowed:1 x1:3 fig:8 wiley:1 sub:1 pereira:1 explicit:1 weighting:2 theorem:9 xt:40 pac:1 symbol:16 exists:2 vapnik:1 adding:3 keshet:1 magnitude:1 phd:1 margin:13 subtract:1 generalizing:1 lt:16 simply:2 determines:1 relies:1 identity:1 goal:1 presentation:1 determined:1 specifically:1 wt:23 kearns:1 total:1 formally:2 support:1 latter:1 crammer:1 evaluate:1 |
The Computation of Sound Source Elevation in the Barn Owl
Clay D. Spence
John C. Pearson
David Sarnoff Research Center
CN5300
Princeton, NJ 08543-5300
ABSTRACT
The midbrain of the barn owl contains a map-like representation of
sound source direction which is used to precisely orient the head toward targets of interest. Elevation is computed from the interaural
difference in sound level. We present models and computer simulations of two stages of level difference processing which qualitatively
agree with known anatomy and physiology, and make several striking predictions.
1 INTRODUCTION
The auditory system of the barn owl constructs a map of sound direction in the external nucleus of the inferior colliculus (ICx) after several stages of processing the output of the cochlea. This representation of space enables the owl to orient its head to sounds with an accuracy greater than that of any other tested land animal [Knudsen et al., 1979].
merged in the ICx [Konishi, 1986]. Much of this processing is done with neuronal
maps, regions of tissue in which the position of active neurons varies continuously
with some parameters, e.g., the retina is a map of spatial direction. In this paper
we present models and simulations of two of the stages of elevation processing
that make several testable predictions. The relatively elaborate structure of this
system emphasizes one difference between the sum-and-sigmoid model neuron and
real neurons, namely the difficulty of doing subtraction with real neurons. We first
briefly review the available data on the elevation system.
Figure 1: Overview of the Barn Owl's Elevation System. ABI: average binaural intensity. ILD: interaural level difference. Graphs show cell responses as a function of ILD (or monaural intensity for NA). (The schematic shows, from top: ICx; ICL, ILD sensitive; VLVp, ILD and ABI sensitive, with dorsal, central, and ventral levels; NA, monaural, intensity coded.)
2 KNOWN PROPERTIES OF THE ELEVATION SYSTEM
The owl computes the elevation to a sound source from the interaural sound pressure level difference (ILD).¹ Elevation is related to ILD because the owl's ears are asymmetric, so that the right ear is most sensitive to sounds from above, and the left ear is most sensitive to sounds from below [Moiseff, 1989].

After the cochlea, the first nucleus in the ILD system is nucleus angularis (NA) (Fig. 1). NA neurons are monaural, responding only to ipsilateral stimuli.² Their outputs are a simple spike rate code for the sound pressure level on that side of the head, with firing rates that increase monotonically with sound pressure level over a rather broad range, typically 30 dB [Sullivan and Konishi, 1984].

¹ Azimuth is computed from the interaural time or phase delay.
² Neurons in all of the nuclei we will discuss except the ICx have fairly narrow frequency tuning curves.
Each NA projects to the contralateral nucleus ventralis lemnisci lateralis pars posterior (VLVp). VLVp neurons are excited by contralateral stimuli, but inhibited
by ipsilateral stimuli. The source of the ipsilateral inhibition is the contralateral
VLVp [Takahashi, 1988]. VLVp neurons are said to be sensitive to ILD, that is
their ILD response curves are sigmoidal, in contrast to ICx neurons which are said
to be tuned to ILD, that is their ILD response curves are bell-shaped. Frequency
is mapped along the anterior-posterior direction, with slabs of similarly tuned cells
perpendicular to this axis. Within such a slab, cell responses to ILD vary systematically along the dorsal-ventral axis, and show no variation along the medio-Iateral
axis. The strength of ipsilateral inhibition3 varies roughly sigmoidally along the
dorsal-ventral axis, being nearly 100% dorsally and nearly 0% ventrally. The ILD
threshold, or ILD at which the cell's response is half its maximum value, varies from
about 20 dB dorsally to -20 dB ventrally. The response of these neurons is not independent of the average binaural intensity (ABI), so they cannot code elevation
unambiguously. As the ABI is increased, the ILD response curves of dorsal cells
shift to higher ILD, those of ventral cells shift to lower ILD, and those of central
cells keep the same thresholds, but their slopes increase (Fig. 1) [Manley et al., 1988].
Each VLVp projects contralaterally to the lateral shell of the central nucleus of the inferior colliculus (ICL) [T. T. Takahashi and M. Konishi, unpublished]. The ICL appears to be the nucleus in which azimuth and elevation information is merged before forming the space map in the ICx [Spence et al., 1989]. At least two kinds
of ICL neurons have been observed, some with ILD-sensitive responses as in the
VLVp and some with ILD-tuned responses as in the ICx [Fujita and Konishi, 1989].
Manley, Koppl and Konishi have suggested that inputs from both VLVps could
interact to form the tuned responses [Manley et al., 1988]. The second model we
will present suggests a simple method for forming tuned responses in the ICL with
input from only one VLVp.
3 A MODEL OF THE VLVp
We have developed simulations of matched iso-frequency slabs from each VLVp in
order to investigate the consequences of different patterns of connections between
them. We attempted to account for the observed gradient of inhibition by using a
gradient in the number of inhibitory cells. A dorsal-ventral gradient in the number
density of different cell types has been observed in staining experiments [Carr et al., 1989], with GABAergic cells⁴ more numerous at the dorsal end and a non-GABAergic type more numerous at the ventral end.
To model this, our simulation has a "unit" representing a group of neurons at each
of forty positions along the VLVp. Each unit has a voltage v which obeys the
equation
    C dv/dt = g_L (V_L − v) + g_E (V_E − v) + g_I (V_I − v).

³ Measured functionally, not actual synaptic strength. See [Manley et al., 1988] for details.
⁴ GABAergic cells are usually thought to be inhibitory.

Figure 2: Models of Level Difference Computation in the VLVps and Generation of Tuned Responses in the ICL. Sizes of circles represent the number density of inhibitory neurons, while triangles represent excitatory neurons.
This describes the charging and discharging of the capacitance C through the various conductances g, driven by the voltages V_N, all of these being properties of the cell membrane. The subscript L refers to passive leakage variables, E refers to excitatory variables, and I refers to inhibitory variables. These model units have firing rates which are sigmoidal functions of v. The output on a given time step is a number of spikes, which is chosen randomly with a Poisson distribution whose mean is the unit's current firing rate times the length of the time step. g_E and g_I obey the equation

    d²g/dt² = −γ dg/dt − ω² g,
the equation for a damped harmonic oscillator. The effect of one unit's spike on
another unit is to "kick" its conductance g, that is it simply increments the conductance's time derivative by some amount depending on the strength of the connection.
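A sketch of a single model unit under these dynamics, using simple Euler integration; the time step, γ, ω, reversal potentials, and maximum rate are hypothetical constants, and the non-negativity clip stands in for the parameter choices described below.

```python
import numpy as np

rng = np.random.default_rng(0)

class Unit:
    """Leaky unit whose excitatory/inhibitory conductances follow damped
    harmonic oscillator dynamics, are 'kicked' by incoming spikes, and
    whose output spike count is Poisson in the sigmoidal firing rate."""

    def __init__(self, dt=1e-3, gamma=200.0, omega=100.0, max_rate=100.0):
        self.dt, self.gamma, self.omega2 = dt, gamma, omega ** 2
        self.max_rate = max_rate
        self.v = 0.0
        self.gE = self.dgE = 0.0
        self.gI = self.dgI = 0.0

    def kick(self, weight, inhibitory=False):
        # A presynaptic spike increments the conductance's derivative.
        if inhibitory:
            self.dgI += weight
        else:
            self.dgE += weight

    def _osc(self, g, dg):
        # d2g/dt2 = -gamma * dg/dt - omega^2 * g
        dg += self.dt * (-self.gamma * dg - self.omega2 * g)
        return max(0.0, g + self.dt * dg), dg

    def step(self, gL=1.0, VL=0.0, VE=1.0, VI=-1.0, C=1.0):
        self.gE, self.dgE = self._osc(self.gE, self.dgE)
        self.gI, self.dgI = self._osc(self.gI, self.dgI)
        # C dv/dt = gL(VL - v) + gE(VE - v) + gI(VI - v)
        dv = gL*(VL - self.v) + self.gE*(VE - self.v) + self.gI*(VI - self.v)
        self.v += self.dt * dv / C
        rate = self.max_rate / (1.0 + np.exp(-self.v))
        return rng.poisson(rate * self.dt)   # spikes emitted this step
```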
Figure 3: Output of Simulation of VLVps at Several ILDs (ILD = −20 dB, 0 dB, and 20 dB). Position, from dorsal to ventral, is represented on the vertical axis. Firing rate of the left and right VLVp is represented by the horizontal length of the black bars.
Inhibitory neurons increment dg_I/dt, while excitatory neurons increment dg_E/dt. γ
and ω are chosen so that the oscillator is at least critically damped, and g remains
non-negative. This model gives a fairly realistic post-synaptic potential, and the
effects of multiple spikes naturally add. The gradient of cell types is modeled by
having a different maximum firing rate at each level in the VLVp.
The VLVp model is shown in figure 2. Here, central neurons of each VLVp project
to central neurons of the other VLVp, while more dorsal neurons project to more
ventral neurons, and conversely. This forms a sort of "criss-cross" pattern of projections. In our simulation these projections are somewhat broad, each unit projecting
with equal strength to all units in a small patch. In order for the dorsal neurons to
be more strongly inhibited, there must be more inhibitory neurons at the ventral
end of each VLVp, so in our simulation the maximum firing rate is higher there and
decreases linearly toward the dorsal end. A presumed second neuron type is used
for output, but we assumed its inputs and dynamics were the same as the inhibitory
neurons and so we didn't model them. The input to the VLVps from the two NAs
was modeled as a constant input proportional to the sound pressure level in the
corresponding ear. We did not use Poisson distributed firing in this case because
the spike trains of NA neurons are very regular [Sullivan and Konishi, 1984]. NA
input was the same to each unit in the VLVp.
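A sketch of the criss-cross wiring between the two matched slabs, assuming 40 units per VLVp and a hypothetical patch half-width; all numbers are illustrative only.

```python
import numpy as np

N = 40  # units per VLVp, indexed from dorsal (0) to ventral (N - 1)

def crisscross_weights(patch=3, strength=1.0):
    """W[i, j] is the inhibition from unit j of one VLVp onto unit i of
    the other: dorsal units target ventral units of the opposite side
    and conversely, so unit j projects to a patch around N - 1 - j."""
    W = np.zeros((N, N))
    for j in range(N):
        target = N - 1 - j
        lo, hi = max(0, target - patch), min(N, target + patch + 1)
        W[lo:hi, j] = strength
    return W

# Maximum firing rates grow linearly from the dorsal to the ventral
# end, so dorsal units end up receiving the strongest inhibition.
max_rates = np.linspace(0.2, 1.0, N)
inhibition_onto_left = crisscross_weights() @ (0.5 * max_rates)  # example
```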
Figure 3 shows spatial activity patterns of the two simulated VLVps for three different ILDs, all at the same ABI. The criss-cross inhibitory connections effectively
cause these bars of activity to compete with each other so that their lengths are
always approximately complementary. Figure 4 presents results of both models
discussed in this paper for various ABIs and ILDs. The output of VLVp units
qualitatively matches the experimentally determined responses; in particular, the
ILD response curves show similar shifts with ABI for the different dorsal-ventral
positions in the VLVp (see Fig. 3 in [Manley et al., 1988]). Since the observed
non-GABAergic neurons are more numerous at the ventral end of the VLVp and our model's inhibitory neurons are also more numerous there, this model predicts that at least some of the non-GABAergic cells in the VLVp are the neurons which provide the mutual inhibition between the VLVps.

Figure 4: ILD Response Curves of the VLVp and ICL models. Curves show percent of maximum firing rate versus ILD for several ABIs. (Panels: dorsal, central, and ventral VLVp, and ICL with dorsal, central, and ventral VLVp input; line types indicate ABIs (dB) of 10, 20, 30, and 40; axes are percent of maximum firing rate versus ILD (dB).)
4 A MODEL OF ILD-TUNED NEURONS IN THE ICL
In this section we present a model to explain how ICL neurons can be tuned to
ILD if they only receive input from the ILD-sensitive neurons in one VLVp. The
model essentially takes the derivative of the spatial activity pattern in the VLVp,
converting the sigmoidal activity pattern into a pattern with a localized region of
activity corresponding to the end of the bar.
The model is shown in figure 2. The VLVp projects topographically to ICL neurons,
exciting two different types. This would excite bars of activity in the ICL, except
one type of ICL neuron inhibits the other type. Each inhibitory neuron projects
to tuned neurons which represent a smaller ILD, to one side in the map. The
inhibitory neurons acquire the bar shaped activity pattern from the VLVp, and
are ILD-sensitive as a result. Of the neurons of the second type, only those which
receive input from the end of the bar are not also inhibited and prevented from
firing.
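A rate-based caricature of this bar-to-peak transformation on a one-dimensional map, with Gaussian topographic excitation; the direction of the one-sided inhibition and all constants are illustrative assumptions, not the full spiking simulation.

```python
import numpy as np

def icl_tuned_response(vlvp_activity, sigma=1.5):
    """Turn a sigmoidal 'bar' of VLVp activity into a localized peak:
    inhibitory ICL units copy the bar, and each suppresses the tuned
    units to one side of it, so only units near the end of the bar
    escape inhibition and fire."""
    n = len(vlvp_activity)
    pos = np.arange(n)
    W = np.exp(-(pos[:, None] - pos[None, :]) ** 2 / (2 * sigma ** 2))
    excitation = W @ vlvp_activity           # topographic VLVp drive
    inhibitory = excitation                  # ILD-sensitive ICL cells
    # Tuned unit i is inhibited by the inhibitory units to one side.
    inhibition = np.array([inhibitory[i + 1:].sum() for i in range(n)])
    return np.maximum(0.0, excitation - inhibition)

bar = (np.arange(20) < 8).astype(float)      # bar covering positions 0-7
print(np.argmax(icl_tuned_response(bar)))    # peak lands near the end
```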
Our simulation used the model neurons described above, with input to the ICL
taken from our model of the VLVp. Each unit in the VLVp projected to a patch
of units in the ICL with connection strengths proportional to a Gaussian function
of distance from the center of the patch. (Equal strengths for the connections from
a given neuron worked poorly.) The results are shown in figure 4. The model
shows sharp tuning, although the maximum firing rates are rather small. The ILD
response curves show the same kind of ABI dependence as those of the VLVp model.
There is no published data to confirm or refute this, but we know that neurons in
the space map in the ICx do not show ABI dependence. There is a direct input
from the contralateral NA to the ICL which may be involved in removing ABI
dependence, but we have not considered that possibility in this work.
5 CONCLUSION
We have presented two models of parts of the owl's elevation or interaural level
difference (ILD) system. One predicts a "criss-cross" geometry for the connections
between the owl's two VLVps. In this geometry cells at the dorsal end of either
VLVp inhibit cells at the ventral end of the other, and are inhibited by them.
Cells closer to the center of one VLVp interact with cells closer to the center of
the other, so that the central cells of each VLVp interact with each other (Fig. 2).
This model also predicts that the non-GABAergic cells in the VLVp are the cells
which project to the other VLVp. The other model explains how the ICL, with
input from one VLVp, can contain neurons tuned to ILD. It does this essentially by
computing the spatial derivative of the activity pattern in the VLVp. This model
predicts that the ILD-sensitive neurons in the ICL inhibit the ILD-tuned neurons
in the ICL. Simulations with semi-realistic model neurons show that these models
are plausible, that is they can qualitatively reproduce the published data on the
responses of neurons in the VLVp and the leL to different intensities of sound in
the two ears.
Although these are models, they are good examples of the simplicity of information
processing in neuronal maps. One interesting feature of this system is the elaborate mechanism used to do subtraction. With the usual model of a neuron, which
calculates a sigmoidal function of a weighted sum of its inputs, subtraction would
be very easy. This demonstrates the inadequacy of such simple model neurons to
provide insight into some real neural functions.
Acknowledgements
This work was supported by AFOSR contract F49620-89-C-0131.
References
C. E. Carr, I. Fujita, and M. Konishi. (1989) Distribution of GABAergic neurons
and terminals in the auditory system of the barn owl. The Journal of Comparative
Neurology 286: 190-207.
I. Fujita and M. Konishi. (1989) Transition from single to multiple frequency channels in the processing of binaural disparity cues in the owl's midbrain. Society for
Neuroscience Abstracts 15: 114.
E. I. Knudsen, G. G. Blasdel, and M. Konishi. (1979) Sound localization by the barn
owl measured with the search coil technique. Journal of Comparative Physiology
133:1-11.
M. Konishi. (1986) Centrally synthesized maps of sensory space. Trends in Neurosciences April, 163-168.
G. A. Manley, C. Koppl, and M. Konishi. (1988) A neural map of interaural intensity differences in the brain stem of the barn owl. The Journal of Neuroscience
8(8): 2665-2676.
A. Moiseff. (1989) Binaural disparity cues available to the barn owl for sound
localization. Journal of Comparative Physiology 164: 629-636.
C. D. Spence, J. C. Pearson, J. J. Gelfand, R. M. Peterson, and W. E. Sullivan.
(1989) Neuronal maps for sensory-motor control in the barn owl. In D. S. Touretzky
(ed.), Advances in Neural Information Processing Systems 1, 748-760. San Mateo,
CA: Morgan Kaufmann.
W. E. Sullivan and M. Konishi. (1984) Segregation of stimulus phase and intensity
coding in the cochlear nucleus of the barn owl. The Journal of Neuroscience 4(7):
1787-1799.
T. T. Takahashi. (1988) Commissural projections mediate inhibition in a lateral
lemniscal nucleus of the barn owl. Society for Neuroscience Abstracts 14: 323.
1,706 | 2,550 | Efficient Kernel Machines Using the
Improved Fast Gauss Transform
Changjiang Yang, Ramani Duraiswami and Larry Davis
Department of Computer Science, Perceptual Interfaces and Reality Laboratory
University of Maryland, College Park, MD 20742
{yangcj,ramani,lsd}@umiacs.umd.edu
Abstract
The computation and memory required for kernel machines with N training samples is at least O(N^2). Such a complexity is significant even for
moderate size problems and is prohibitive for large datasets. We present
an approximation technique based on the improved fast Gauss transform
to reduce the computation to O(N). We also give an error bound for the
approximation, and provide experimental results on the UCI datasets.
1 Introduction
Kernel based methods, including support vector machines [16], regularization networks [5]
and Gaussian processes [18], have attracted much attention in machine learning. The solid
theoretical foundations and good practical performance of kernel methods make them very
popular. However, one major drawback of kernel methods is their scalability. Kernel methods require O(N^2) storage and O(N^3) operations for direct methods, or O(N^2)
operations per iteration for iterative methods, which is impractical for large datasets.
To deal with this scalability problem, many approaches have been proposed, including the
Nyström method [19], sparse greedy approximation [13, 12], low rank kernel approximation [3] and reduced support vector machines [9]. All these try to find a reduced subset
of the original dataset using either random selection or greedy approximation. In these
methods there is no guarantee on the approximation of the kernel matrix in a deterministic
sense. An assumption made in these methods is that most eigenvalues of the kernel matrix
are zero. This is not always true and its violation results in either performance degradation
or negligible reduction in computational time or memory.
We explore a deterministic method to speed up kernel machines using the improved fast
Gauss transform (IFGT) [20, 21]. The kernel machine is solved iteratively using the conjugate gradient method, where the dominant computation is the matrix-vector product which
we accelerate using the IFGT. Rather than approximating the kernel matrix by a low-rank
representation, we approximate the matrix-vector product by the improved fast Gauss transform to any desired precision. The total computational and storage costs are of linear order
in the size of the dataset. We present the application of the IFGT to kernel methods in
the context of the Regularized Least-Squares Classification (RLSC) [11, 10], though the
approach is general and can be extended to other kernel methods.
2 Regularized Least-Squares Classification
The RLSC algorithm [11, 10] solves the binary classification problems in Reproducing
Kernel Hilbert Space (RKHS) [17]: given N training samples in d-dimensional space, $x_i \in \mathbb{R}^d$, and the labels $y_i \in \{-1, 1\}$, find $f \in \mathcal{H}$ that minimizes the regularized risk functional
$$\min_{f \in \mathcal{H}} \; \frac{1}{N} \sum_{i=1}^{N} V(y_i, f(x_i)) + \lambda \|f\|_K^2, \qquad (1)$$
where $\mathcal{H}$ is an RKHS with reproducing kernel $K$, $V$ is a convex cost function and $\lambda$ is
the regularization parameter controlling the tradeoff between the cost and the smoothness.
Based on the Representer Theorem [17], the solution has a representation as
$$f_\lambda(x) = \sum_{i=1}^{N} c_i K(x, x_i). \qquad (2)$$
If the loss function $V$ is the hinge function, $V(y, f) = (1 - yf)_+$, where $(\tau)_+ = \tau$ for
$\tau > 0$ and $0$ otherwise, then the minimization of (1) leads to the popular Support Vector
Machines, which can be solved using quadratic programming.
If the loss function $V$ is the square loss, $V(y, f) = (y - f)^2$, the minimization
of (1) leads to the so-called Regularized Least-Squares Classification which requires only
the solution of a linear system. The algorithm has been rediscovered several times and
has many different names [11, 10, 4, 15]. In this paper, we stick to the term "RLSC" for
consistency. It has been shown in [11, 4] that RLSC achieves accuracy comparable to the
popular SVMs for binary classification problems.
If we substitute (2) into (1), and denote $c = [c_1, \ldots, c_N]^T$, $K_{ij} = K(x_i, x_j)$, we can find
the solution of (1) by solving the linear system
$$(K + \lambda_0 I) c = y, \qquad (3)$$
where $\lambda_0 = \lambda N$, $I$ is the identity matrix, and $y = [y_1, \ldots, y_N]^T$.
There are many choices for the kernel function K. The Gaussian is a good kernel for classification and is used in many applications. If a Gaussian kernel is applied, as shown in [10],
the classification problem can be solved by the solution of a linear system, i.e., Regularized
Least-Squares Classification. A direct solution of the linear system will require O(N^3)
computation and O(N^2) storage, which is impractical even for problems of moderate size.
Algorithm 1 Regularized Least-Squares Classification
Require: Training dataset $S_N = (x_i, y_i)_{i=1}^{N}$.
1. Choose the Gaussian kernel: $K(x, x') = e^{-\|x - x'\|^2/\sigma^2}$.
2. Find the solution as $f(x) = \sum_{i=1}^{N} c_i K(x, x_i)$, where $c$ satisfies the linear system (3).
3. Solve the linear system (3).
An effective way to solve the large-scale linear system (3) is to use iterative methods.
Since the matrix K is symmetric, we consider the well-known conjugate gradient method.
The conjugate gradient method solves the linear system (3) by iteratively performing the
matrix-vector multiplication Kc. If rank(K) = r, then the conjugate gradient algorithm
converges in at most r + 1 steps. Only one matrix-vector multiplication and 10N arithmetic
operations are required per iteration, and only four N-vectors are required for storage. So the
computational complexity is O(N^2) for low-rank K and the storage requirement is O(N^2).
While this represents an improvement for most problems, the rank of the matrix may not
be small, and moreover the quadratic storage and computational complexity are still too
high for large datasets. In the following sections, we present an algorithm to reduce the
computational and storage complexity to linear order.
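As a concrete illustration of the iterative approach, the following sketch (our own, with synthetic data; not the authors' implementation) trains an RLSC classifier by solving (3) with conjugate gradients, where each iteration costs one matrix-vector product:

import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def gaussian_kernel(X, Y, sigma):
    # K[i, j] = exp(-||X[i] - Y[j]||^2 / sigma^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = np.sign(X[:, 0] + 0.3 * rng.standard_normal(200))

sigma, lam0 = np.sqrt(0.5 * X.shape[1]), 1e-2
K = gaussian_kernel(X, X, sigma)

# Solve (K + lam0 I) c = y by conjugate gradients; each iteration costs
# one matrix-vector product K @ v -- the operation the IFGT accelerates.
A = LinearOperator(K.shape, matvec=lambda v: K @ v + lam0 * v)
c, info = cg(A, y)
print((np.sign(K @ c) == y).mean())   # training accuracy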
3 Fast Gauss Transform
The matrix-vector product Kc can be written in the form of the so-called discrete Gauss
transform [8]
$$G(y_j) = \sum_{i=1}^{N} c_i \, e^{-\|x_i - y_j\|^2/\sigma^2}, \qquad (4)$$
where $c_i$ are the weight coefficients, $\{x_i\}_{i=1}^{N}$ are the centers of the Gaussians (called
"sources"), and $\sigma$ is the bandwidth parameter of the Gaussians. The sum of the Gaussians is evaluated at each of the "target" points $\{y_j\}_{j=1}^{M}$. Direct evaluation of the Gauss
transform at M target points due to N sources requires O(MN) operations.
The Fast Gauss Transform (FGT) was invented by Greengard and Strain [8] for efficient
evaluation of the Gauss transform in O(M + N ) operations. It is an important variant of
the more general Fast Multipole Method [7].
The FGT [8] expands the Gaussian function into Hermite functions. The expansion of the
univariate Gaussian is
$$e^{-\|y_j - x_i\|^2/\sigma^2} = \sum_{n=0}^{p-1} \frac{1}{n!} \left( \frac{x_i - x_*}{\sigma} \right)^{n} h_n\!\left( \frac{y_j - x_*}{\sigma} \right) + \epsilon(p), \qquad (5)$$
where $h_n(x)$ are the Hermite functions defined by $h_n(x) = (-1)^n \frac{d^n}{dx^n} e^{-x^2}$, and $x_*$
is the expansion center. The d-dimensional Gaussian function is treated as a Kronecker
product of d univariate Gaussians. For simplicity, we adopt the multi-index notation of
the original FGT papers [8]. A multi-index $\alpha = (\alpha_1, \ldots, \alpha_d)$ is a d-tuple of nonnegative
integers. For any multi-index $\alpha \in \mathbb{N}^d$ and any $x \in \mathbb{R}^d$, we have the monomial
$x^\alpha = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_d^{\alpha_d}$. The length and the factorial of $\alpha$ are defined as
$|\alpha| = \alpha_1 + \alpha_2 + \cdots + \alpha_d$ and $\alpha! = \alpha_1! \alpha_2! \cdots \alpha_d!$. The multidimensional Hermite functions are defined by
$$h_\alpha(x) = h_{\alpha_1}(x_1) \, h_{\alpha_2}(x_2) \cdots h_{\alpha_d}(x_d).$$
The sum (4) is then equal to the Hermite expansion about center $x_*$:
$$G(y_j) = \sum_{\alpha \geq 0} C_\alpha \, h_\alpha\!\left( \frac{y_j - x_*}{\sigma} \right), \qquad C_\alpha = \frac{1}{\alpha!} \sum_{i=1}^{N} c_i \left( \frac{x_i - x_*}{\sigma} \right)^{\alpha}, \qquad (6)$$
where $C_\alpha$ are the coefficients of the Hermite expansion.
If we truncate each of the Hermite series (6) after p terms (or equivalently order p - 1),
then each of the coefficients $C_\alpha$ is a d-dimensional matrix with $p^d$ terms. The total computational complexity for a single Hermite expansion is $O((M + N)p^d)$. The factor $O(p^d)$
grows exponentially as the dimensionality d increases. Despite this defect in higher dimensions, the FGT is quite effective for two and three-dimensional problems, and has
achieved success in some physics, computer vision and pattern recognition applications.
In practice a single expansion about one center is not always valid or accurate over the entire domain. A space subdivision scheme is applied in the FGT and the Gaussian functions
are expanded at multiple centers. The original FGT subdivides space into uniform boxes,
which is simple, but highly inefficient in higher dimensions. The number of boxes grows
exponentially with dimensionality, which makes it inefficient for storage and for searching
nonempty neighbor boxes. Most important, since the ratio of volume of the hypercube to
that of the inscribed sphere grows exponentially with dimension, points have a high probability of falling into the area inside the box and outside the sphere, where the truncation
error of the Hermite expansion is much larger than inside of the sphere.
3.1 Improved Fast Gauss Transform
In brief, the original FGT suffers from the following two defects:
1. The exponential growth of computational complexity with dimensionality.
2. The use of the box data structure in the FGT is inefficient in higher dimensions.
We introduced the improved FGT [20, 21] to address these deficiencies, and it is summarized below.
3.1.1 Multivariate Taylor Expansions
Instead of expanding the Gaussian into Hermite functions, we factorize it as
$$e^{-\|y_j - x_i\|^2/\sigma^2} = e^{-\|\Delta y_j\|^2/\sigma^2} \, e^{-\|\Delta x_i\|^2/\sigma^2} \, e^{2 \Delta y_j \cdot \Delta x_i/\sigma^2}, \qquad (7)$$
where $x_*$ is the center of the sources, $\Delta y_j = y_j - x_*$, and $\Delta x_i = x_i - x_*$. The first two
exponential terms can be evaluated individually at the source points or target points. In the
third term, the sources and the targets are entangled. Here we break the entanglement by
expanding it into a multivariate Taylor series
$$e^{2 \Delta y_j \cdot \Delta x_i/\sigma^2} = \sum_{n=0}^{\infty} \frac{2^n}{n!} \left( \frac{\Delta x_i}{\sigma} \cdot \frac{\Delta y_j}{\sigma} \right)^{n} = \sum_{|\alpha| \geq 0} \frac{2^{|\alpha|}}{\alpha!} \left( \frac{\Delta x_i}{\sigma} \right)^{\alpha} \left( \frac{\Delta y_j}{\sigma} \right)^{\alpha}. \qquad (8)$$
If we truncate the series after total order p - 1, then the number of terms is
$r_{p-1,d} = \binom{p+d-1}{d}$, which is much less than $p^d$ in higher dimensions. For d = 12 and p = 10, the
original FGT needs $10^{12}$ terms, while the multivariate Taylor expansion needs only 293,930.
For $d \to \infty$ and moderate p, the number of terms is $O(d^p)$, a substantial reduction.
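The term counts quoted above are easy to check numerically (our sketch):

from math import comb
p, d = 10, 12
print(comb(p + d - 1, d), p ** d)   # 293930 versus 10**12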
From Eqs. (7) and (8), the weighted sum of Gaussians (4) can be expressed as a multivariate
Taylor expansion about center $x_*$:
$$G(y_j) = \sum_{|\alpha| \geq 0} C_\alpha \, e^{-\|y_j - x_*\|^2/\sigma^2} \left( \frac{y_j - x_*}{\sigma} \right)^{\alpha}, \qquad (9)$$
where the coefficients $C_\alpha$ are given by
$$C_\alpha = \frac{2^{|\alpha|}}{\alpha!} \sum_{i=1}^{N} c_i \, e^{-\|x_i - x_*\|^2/\sigma^2} \left( \frac{x_i - x_*}{\sigma} \right)^{\alpha}. \qquad (10)$$
The coefficients $C_\alpha$ can be efficiently evaluated with $r_{nd}$ storage and $r_{nd} - 1$ multiplications
using the multivariate Horner's rule [20].
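For concreteness, here is a minimal single-cluster sketch of equations (9) and (10) in one dimension, where the multi-indices reduce to plain integers (our illustration; the actual IFGT uses multiple clusters, multivariate Horner evaluation, and neighbor pruning):

import numpy as np
from math import factorial

def ifgt_1d(sources, weights, targets, sigma, p):
    """Single-cluster IFGT in one dimension: truncated Taylor expansion
    of order p - 1 about the cluster centre (equations (9) and (10) with
    scalar multi-indices)."""
    xc = sources.mean()                         # expansion centre x_*
    dx = (sources - xc) / sigma
    dy = (targets - xc) / sigma
    sw = weights * np.exp(-dx ** 2)             # c_i exp(-|x_i - x_*|^2 / sigma^2)
    C = [(2.0 ** n / factorial(n)) * np.sum(sw * dx ** n) for n in range(p)]
    return np.exp(-dy ** 2) * sum(C[n] * dy ** n for n in range(p))

rng = np.random.default_rng(1)
x, w, t = rng.random(500), rng.random(500), rng.random(50)
exact = (w[None, :] * np.exp(-(t[:, None] - x[None, :]) ** 2 / 0.25)).sum(1)
print(np.abs(exact - ifgt_1d(x, w, t, sigma=0.5, p=8)).max())   # small truncation error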
3.1.2 Spatial Data Structures
To efficiently subdivide the space, we need a scheme that adaptively subdivides the space
according to the distribution of points. It is also desirable to generate cells as compact as
possible. Based on these considerations, we model the space subdivision task as a k-center
problem [1]: given a set of N points and a predefined number of clusters k, find a partition
of the points into clusters $S_1, \ldots, S_k$, with cluster centers $c_1, \ldots, c_k$, that minimizes the
maximum radius of any cluster:
$$\max_i \max_{v \in S_i} \|v - c_i\|.$$
The k-center problem is known to be NP-hard. Gonzalez [6] proposed a very simple
greedy algorithm, called farthest-point clustering. Initially, pick an arbitrary point $v_0$ as
the center of the first cluster and add it to the center set C. Then, for i = 1 to k, do
the following: in iteration i, for every point, compute its distance to the set C:
$d_i(v, C) = \min_{c \in C} \|v - c\|$. Let $v_i$ be a point that is farthest away from C, i.e., a point for which
$d_i(v_i, C) = \max_v d_i(v, C)$. Add $v_i$ to the center set C. After k iterations, report the points
$v_0, v_1, \ldots, v_{k-1}$ as the cluster centers. Each point is then assigned to its nearest center.
Gonzalez [6] proved that farthest-point clustering is a 2-approximation algorithm, i.e., it
computes a partition with maximum radius at most twice the optimum. The direct implementation of farthest-point clustering has running time O(N k). Feder and Greene [2] give
a two-phase algorithm with optimal running time O(N log k). In practice, we used circular
lists to index the points and achieve the complexity O(N log k) empirically.
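A direct O(Nk) implementation of the farthest-point heuristic takes only a few lines; the sketch below is ours and omits the circular-list bookkeeping used to obtain the O(N log k) behavior:

import numpy as np

def farthest_point_clustering(X, k, seed=0):
    """Gonzalez's greedy 2-approximation to the k-center problem.
    Returns the indices of the k centres and each point's assignment."""
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(len(X)))]       # arbitrary first centre
    d = np.linalg.norm(X - X[centers[0]], axis=1)
    for _ in range(k - 1):
        centers.append(int(np.argmax(d)))       # farthest point from the set C
        d = np.minimum(d, np.linalg.norm(X - X[centers[-1]], axis=1))
    dists = np.linalg.norm(X[:, None, :] - X[centers][None, :, :], axis=2)
    return np.array(centers), dists.argmin(axis=1)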
3.1.3 The Algorithm and Error Bound
The improved fast Gauss transform consists of the following steps:
Algorithm 2 Improved Fast Gauss Transform
1. Assign N sources into k clusters using the farthest-point clustering algorithm such
that the radius is less than $\sigma\rho_x$.
2. Choose p sufficiently large such that the error estimate (11) is less than the desired
precision $\epsilon$.
3. For each cluster $S_k$ with center $c_k$, compute the coefficients given by (10).
4. For each target $y_j$, find its neighbor clusters whose centers lie within the range
$\sigma\rho_y$. Then the sum of Gaussians (4) can be evaluated by the expression (9).
The amount of work required in step 1 is O(N log k) using Feder and Greene's algorithm [2]. The amount of work required in step 3 is $O(N r_{pd})$. The work required
in step 4 is $O(M n r_{pd})$, where $n \leq k$ is the maximum number of neighbor clusters for
each target. So the improved fast Gauss transform achieves linear running time. The algorithm needs to store the coefficients of the k clusters, each of size $r_{pd}$, so the storage complexity is reduced to
$O(k r_{pd})$. To verify the linear order of our algorithm, we generate N source points and N
target points in 4, 6, 8, 10 dimensional unit hypercubes using a uniform distribution. The
weights on the source points are generated from a uniform distribution in the interval [0, 1]
and $\sigma = 1$. The results of the IFGT and the direct evaluation are displayed in Figure 1(a),
(b), and confirm the linear order of the IFGT.
The error of the improved fast Gauss transform (Algorithm 2) is bounded by
$$|E(G(y_j))| \leq \sum_{i=1}^{N} |c_i| \left( \frac{2^p}{p!} \rho_x^p \rho_y^p + e^{-(\rho_y - \rho_x)^2} \right). \qquad (11)$$
The details are in [21]. The comparison between the maximum absolute errors in the
simulation and the estimated error bound (11) is displayed in Figure 1(c) and (d). It shows
that the error bound is very conservative compared with the real errors. Empirically we can
obtain the parameters on a randomly selected subset and use them on the entire dataset.
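Step 2 of Algorithm 2 can then be read as a search for the smallest admissible truncation order. A sketch of this selection rule, under our reading of the bound (11), comparing the per-unit-weight error against a desired precision eps:

from math import factorial, exp

def choose_order(rho_x, rho_y, eps, p_max=50):
    """Smallest truncation order p for which the per-unit-weight bound in
    (11), (2^p / p!) rho_x^p rho_y^p + exp(-(rho_y - rho_x)^2), is <= eps."""
    tail = exp(-(rho_y - rho_x) ** 2)    # fixed cutoff term
    for p in range(1, p_max + 1):
        if (2.0 ** p / factorial(p)) * (rho_x * rho_y) ** p + tail <= eps:
            return p
    raise ValueError("bound unattainable; increase rho_y or relax eps")

print(choose_order(rho_x=0.5, rho_y=4.0, eps=1e-3))   # -> 15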
4 IFGT Accelerated RLSC: Discussion and Experiments
The key idea of all acceleration methods is to reduce the cost of the matrix-vector product.
In reduced subset methods, this is performed by evaluating the product at a few points,
assuming that the matrix is low rank. The general Fast Multipole Methods (FMM) seek to
analytically approximate the possibly full-rank matrix as a sum of low rank approximations
with a tight error bound [14] (the FGT is a variant of the FMM with a Gaussian kernel). It is
expected that these methods can be more robust, while at the same time achieve significant
acceleration.
The problems to which kernel methods are usually applied are in higher dimensions, though
the intrinsic dimensionality of the data is expected to be much smaller. The original FGT
does not scale well to higher dimensions. Its cost is of linear order in the number of samples, but exponential order in the number of dimensions. The improved FGT uses new data
structures and a modified expansion to reduce this to polynomial order.
Despite this improvement, at first glance, even with the use of the IFGT, it is not clear if the
reduction in complexity will be competitive with the other approaches proposed. Reason
2
?3
10
10
10
0
?4
10
Max abs error
10
CPU time
4D
6D
8D
10D
direct method, 4D
fast method, 4D
direct method, 6D
fast method, 6D
direct method, 8D
fast method, 8D
direct method, 10D
fast method, 10D
1
?1
10
?2
10
?5
10
?3
10
?4
10
2
3
10
4
10
N
10
?6
10
2
3
10
4
10
N
(a)
10
(b)
3
10
4
10
Real max abs error
Estimated error bound
2
3
10
10
Real max abs error
Estimated error bound
2
1
10
0
10
10
Error
Error
1
10
0
10
?1
10
?1
10
?2
10
?2
10
?3
10
?3
10
?4
10
0
2
4
6
8
10
p
(c)
12
14
16
18
20
?4
10
0.3
0.4
0.5
rx
0.6
0.7
0.8
(d)
Figure 1: (a) Running time and (b) maximum absolute error w.r.t. N in d = 4, 6, 8, 10. The
comparison between the real maximum absolute errors and the estimated error bound (11) w.r.t. (c)
the order of the Taylor series p, and (d) the radius of the farthest-point clustering algorithm r x = ??x .
The uniformly distributed sources and target points are in 4-dimension.
Reason for hope is provided by the fact that in high dimensions we expect that the IFGT with very
low order expansions will converge rapidly (because of the sharply vanishing exponential
terms multiplying the expansion in factorization (7)). Thus we expect that, combined with a
dimensionality reduction technique, we can achieve very competitive solutions.
In this paper we explore the application of the IFGT accelerated RLSC to certain standard
problems that have already been solved by the other techniques. While dimensionality
reduction would be desirable, here we do not perform such a reduction for fair comparison.
We use small order expansions (p = 1 and p = 2) in the IFGT and run the iterative solver.
In the first experiment, we compared the performance of the IFGT on approximating the
sums (4) with the Nystr?om method [19]. The experiments were carried out on a Pentium
4 1.4GHz PC with 512MB memory. We generate N source points and N target points in
100 dimensional unit hypercubes using a uniform distribution. The weights on the source
points are generated using a uniform distribution in the interval [0, 1]. We directly evaluate
the sums (4) as the ground truth, where $\sigma^2 = (0.5)d$ and d is the dimensionality of the
data. Then we estimate them using the improved fast Gauss transform and the Nyström method.
To compare the results, we use the maximum relative error to measure the precision of the
approximations. Given a precision of 0.5%, we use the error bound (11) to find the parameters of the IFGT, and use a trial and error method to find the parameter of the Nyström
method. Then we vary the number of points, N , from 500 to 5000 and plot the time against
N in Figure 2(a). The results show the IFGT is much faster than the Nyström method. We
also fix the number of points to N = 1000 and vary the number of centers (or random subset size)
k from 10 to 1000 and plot the results in Figure 2(b). The results show that the errors of
the IFGT are not sensitive to the number of centers, which means we can use a very
small number of centers to achieve a good approximation. The accuracy of the Nyström
method catches up at large k, where the direct evaluation may be even faster. The intuition
is that the use of expansions improves the accuracy of the approximation and relaxes the
requirement of the centers.
[Figure 2: Performance comparison between the approximation methods (IFGT with p = 1 and p = 2, and the Nyström method). (a) Running time (s) against N and (b) maximum relative error against k for fixed N = 1000 in 100 dimensions.]
Table 1: Ten-fold training and testing accuracy in percentage and training time in seconds using the
four classifiers on the five UCI datasets. The same value of $\sigma^2 = (0.5)d$ is used in all the classifiers. A
rectangular kernel matrix with random subset size of 20% of N was used in PSVM on the Galaxy Dim
and Mushroom datasets. Each block lists training accuracy (%), testing accuracy (%), and training time (s).

Dataset       Size × Dim    RLSC+FGT                     RLSC                          Nyström                      PSVM
Ionosphere     251 × 34     94.8400  91.7302   0.3711    97.7209  90.6032    1.1673    91.8656  88.8889   0.4096    95.1250  94.0079   0.8862
BUPA Liver     345 × 6      79.6789  71.0336   0.1279    81.7318  67.8403    0.4833    76.7488  69.2857   0.1475    75.8134  71.4874   0.3468
Tic-Tac-Toe    958 × 9      88.7263  86.9507   0.3476    88.6917  85.4890    2.9676    88.4945  84.1272   1.8326    92.9715  87.2680   3.9891
Galaxy Dim    4192 × 14     93.2967  93.2014   2.0972    93.3206  93.2258   78.3526    93.7023  93.7020   3.1081    93.6705  93.5589  44.5143
Mushroom      8124 × 22     88.2556  87.9615  14.7422    87.9001  87.6658  341.7148    failed                       85.5955  85.4629 285.1126
In the second experiment, five datasets from the UCI repository are used to compare the
performance of four different methods for classification: RLSC with the IFGT, RLSC with
full kernel evaluation, RLSC with the Nyström method, and the Proximal Support Vector
Machines (PSVM) [4]. The Gaussian kernel is used for all these methods. We use the
same value of $\sigma^2 = (0.5)d$ for a fair comparison. The ten-fold cross validation accuracy
on training and testing and the training time are listed in Table 1. The RLSC with the
IFGT is fastest among the four classifiers on all five datasets, while the training and testing
accuracy is close to the accuracy of the RLSC with full kernel evaluation. The RLSC
with the Nyström approximation is nearly as fast, but the accuracy is lower than the other
methods. Worst of all, it is not always feasible to solve the linear systems, which results in
the failure on the Mushroom dataset. The PSVM is accurate on the training and testing, but
slow and memory demanding for large datasets, even with subset reduction.
5 Conclusions and Discussion
We presented an improved fast Gauss transform to speed up kernel machines with the Gaussian
kernel to linear order. The simulations and the classification experiments show that the
algorithm is in general faster and more accurate than other matrix approximation methods.
At present, we do not consider the reduction from the support vector set or dimensionality
reduction. The combination of the improved fast Gauss transform with these techniques
should bring even more reduction in computation. Another improvement to the algorithm
is an automatic procedure to tune the parameters. A possible solution could be running a
series of testing problems and tuning the parameters accordingly. If the bandwidth is very
small compared with the data range, the nearest neighbor searching algorithms could be a
better solution to these problems.
Acknowledgments
We would like to thank Dr. Nail Gumerov for many discussions. We also gratefully acknowledge
support of NSF awards 9987944, 0086075 and 0219681.
References
[1] M. Bern and D. Eppstein. Approximation algorithms for geometric problems. In D. Hochbaum,
editor, Approximation Algorithms for NP-Hard Problems, chapter 8, pages 296–345. PWS Publishing Company, Boston, 1997.
[2] T. Feder and D. Greene. Optimal algorithms for approximate clustering. In Proc. 20th ACM
Symp. Theory of Computing, pages 434–444, Chicago, Illinois, 1988.
[3] S. Fine and K. Scheinberg. Efficient SVM training using low-rank kernel representations. Journal of Machine Learning Research, 2:243–264, Dec. 2001.
[4] G. Fung and O. L. Mangasarian. Proximal support vector machine classifiers. In Proceedings
KDD-2001: Knowledge Discovery and Data Mining, pages 77–86, San Francisco, CA, 2001.
[5] F. Girosi, M. Jones, and T. Poggio. Regularization theory and neural networks architectures.
Neural Computation, 7(2):219–269, 1995.
[6] T. Gonzalez. Clustering to minimize the maximum intercluster distance. Theoretical Computer
Science, 38:293–306, 1985.
[7] L. Greengard and V. Rokhlin. A fast algorithm for particle simulations. J. Comput. Phys.,
73(2):325–348, 1987.
[8] L. Greengard and J. Strain. The fast Gauss transform. SIAM J. Sci. Statist. Comput., 12(1):79–94, 1991.
[9] Y.-J. Lee and O. Mangasarian. RSVM: Reduced support vector machines. In First SIAM
International Conference on Data Mining, Chicago, 2001.
[10] T. Poggio and S. Smale. The mathematics of learning: Dealing with data. Notices of the
American Mathematical Society (AMS), 50(5):537–544, 2003.
[11] R. Rifkin. Everything Old Is New Again: A Fresh Look at Historical Approaches in Machine
Learning. PhD thesis, MIT, Cambridge, MA, 2002.
[12] A. Smola and P. Bartlett. Sparse greedy Gaussian process regression. In Advances in Neural
Information Processing Systems, pages 619–625. MIT Press, 2001.
[13] A. Smola and B. Schölkopf. Sparse greedy matrix approximation for machine learning. In
Proc. Int'l Conf. Machine Learning, pages 911–918. Morgan Kaufmann, 2000.
[14] X. Sun and N. P. Pitsianis. A matrix version of the fast multipole method. SIAM Review,
43(2):289–300, 2001.
[15] J. A. K. Suykens and J. Vandewalle. Least squares support vector machine classifiers. Neural
Processing Letters, 9(3):293–300, 1999.
[16] V. Vapnik. The Nature of Statistical Learning Theory. Springer, New York, 1995.
[17] G. Wahba. Spline Models for Observational Data. SIAM, Philadelphia, PA, 1990.
[18] C. K. Williams and D. Barber. Bayesian classification with Gaussian processes. IEEE Trans.
Pattern Anal. Mach. Intell., 20(12):1342–1351, Dec. 1998.
[19] C. K. I. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In
Advances in Neural Information Processing Systems, pages 682–688. MIT Press, 2001.
[20] C. Yang, R. Duraiswami, N. Gumerov, and L. Davis. Improved fast Gauss transform and efficient kernel density estimation. In Proc. ICCV 2003, pages 464–471, 2003.
[21] C. Yang, R. Duraiswami, and N. A. Gumerov. Improved fast Gauss transform. Technical Report
CS-TR-4495, UMIACS, Univ. of Maryland, College Park, 2003.
1,707 | 2,551 | An Auditory Paradigm for
Brain-Computer Interfaces
N. Jeremy Hill(1), T. Navin Lal(1), Karin Bierig(1),
Niels Birbaumer(2) and Bernhard Schölkopf(1)
(1) Max Planck Institute for Biological Cybernetics,
Spemannstraße 38, 72076 Tübingen, Germany.
{jez|navin|bierig|bs}@tuebingen.mpg.de
(2) Institute for Medical Psychology and Behavioural Neurobiology,
University of Tübingen, Gartenstraße 29, 72074 Tübingen, Germany.
[email protected]
Abstract
Motivated by the particular problems involved in communicating
with "locked-in" paralysed patients, we aim to develop a brain-computer interface that uses auditory stimuli. We describe a
paradigm that allows a user to make a binary decision by focusing
attention on one of two concurrent auditory stimulus sequences.
Using Support Vector Machine classification and Recursive Channel Elimination on the independent components of averaged event-related potentials, we show that an untrained user's EEG data can
be classified with an encouragingly high level of accuracy. This
suggests that it is possible for users to modulate EEG signals in a
single trial by the conscious direction of attention, well enough to
be useful in BCI.
1 Introduction
The aim of research into brain-computer interfaces (BCIs) is to allow a person to
control a computer using signals from the brain, without the need for any muscular
movement: for example, to allow a completely paralysed patient to communicate.
Total or near-total paralysis can result in cases of brain-stem stroke, cerebral palsy,
and Amyotrophic Lateral Sclerosis (ALS, also known as Lou Gehrig's disease). It
has been shown that some patients in a "locked-in" state, in which most cognitive
functions are intact despite complete paralysis, can learn to communicate via an
interface that interprets electrical signals from the brain, measured externally by
electro-encephalogram (EEG) [1]. Successful approaches to such BCIs include using
feedback to train the patient to modulate slow cortical potentials (SCPs) to meet a
fixed criterion [1], machine classification of signals correlated with imagined muscle
movements, recorded from motor and pre-motor cortical areas [2, 3], and detection
of an event-related potential (ERP) in response to a visual stimulus event [4].
The experience of clinical groups applying BCI is that different paradigms work to
varying degrees with different patients. For some patients, long immobility and the
degeneration of the pyramidal cells of the motor cortex may make it difficult to
produce imagined-movement signals. Another concern is that in very severe cases,
the entire visual modality becomes unreliable: the eyes cannot adjust focus, the
fovea cannot be moved to inspect different locations in the visual scene, meaning
that most of a given image will stimulate peripheral regions of retina which have
low spatial resolution, and since the responses of retinal ganglion cells that form the
input to the visual system are temporally band-pass, complete immobility of the eye
means that steady visual signals will quickly fade [5]. Thus, there is considerable
motivation to add to the palette of available BCI paradigms by exploring EEG
signals that occur in response to auditory stimuli, since a patient's sense of hearing is
often uncompromised by their condition.
Here, we report the results of an experiment on healthy subjects, designed to develop
a BCI paradigm in which a user can make a binary choice. We attempt to classify
EEG signals that occur in response to two simultaneous auditory stimulus streams.
To communicate a binary decision, the subject focuses attention on one of the two
streams, left or right. Hillyard et al. [6] and others reported in the 1960s and 1970s that
selective attention in a dichotic listening task caused a measurable modulation of
EEG signals (see [7, 8] for a review). This modulation was significant when signals
were averaged over a large number of instances, but our aim is to discover whether
single trials are classifiable, using machine-learning algorithms, with a high enough
accuracy to be useful in a BCI.
2 Stimuli and methods
EEG signals were recorded from 15 healthy untrained subjects (9 female, 6 male)
between the ages of 20 and 38, using 39 silver chloride electrodes, referenced to
the ears. An additional EOG electrode was positioned lateral to and slightly below
the left eye, to record eye movement artefacts: blinks and horizontal and vertical
saccades all produced clearly identifiable signals on the EOG channel. The signals
were filtered by an analog band-pass filter between 0.1 and 40 Hz, before being
sampled at 256 Hz.
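The acquisition chain can be approximated offline as follows (a sketch only: the actual band-pass stage was analog, and the fourth-order Butterworth design here is our assumption, not a detail given in the text):

import numpy as np
from scipy.signal import butter, lfilter

FS = 256  # sampling rate (Hz)
b, a = butter(4, [0.1, 40.0], btype="bandpass", fs=FS)   # 0.1-40 Hz band-pass
eeg = np.random.standard_normal((40, 10 * FS))           # stand-in for 40 channels
eeg_filtered = lfilter(b, a, eeg, axis=1)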
Subjects sat 1.5m from a computer monitor screen, and performed eight 10-minute
blocks each consisting of 50 trials. On each trial, the appearance of a fixation point
on screen was followed after 1 sec by an arrow pointing left or right (25 left, 25
right in each block, in random order). The arrow disappeared after 500 msec, after
which there was a pause of 500 msec, and then the auditory stimulus was presented,
lasting 4 seconds. 500 msec after the end of the auditory stimulus, the fixation point
disappeared and there was a pause of between 2 and 4 seconds for the subject to
relax. While the fixation point was present, subjects were asked to keep their gaze
fixed on it, to blink as little as possible, and not to swallow or make any other
movements (we wished to ensure that, as far as possible, our signals were free of
artefacts from signals that a paralysed patient would be unable to produce).
The auditory stimulus consisted of two periodic sequences of 50-msec-long square-wave beeps, one presented from a speaker to the left of the subject, and the other
from a speaker to the right. Each sequence contained "target" and "non-target"
beeps: the first three in the sequence were always non-targets, after which they could
be targets with independent probability 0.3. The right-hand sequence consisted of
eight beeps of frequencies 1500 Hz (non-target) and 1650 Hz (target), repeating
with a period of 490 msec. The left-hand sequence consisted of seven beeps of
frequencies 800 Hz (non-target) and 880 Hz (target), starting 70 msec after start of
the right-hand sequence and repeating with a period of 555 msec.
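The stimulus streams can be synthesized directly from this description. The sketch below uses the stated parameters; the audio sampling rate and the simplified target-placement logic are our assumptions, and the deviant beats described later are omitted:

import numpy as np

FS = 44100  # audio sampling rate (Hz; an assumption, not stated in the text)

def beep(freq, dur=0.050):
    t = np.arange(int(dur * FS)) / FS
    return np.sign(np.sin(2 * np.pi * freq * t))    # 50-msec square wave

def stream(n_beeps, period, std_hz, tgt_hz, start=0.0, total=4.0, p_target=0.3):
    sig = np.zeros(int(total * FS))
    for k in range(n_beeps):
        is_target = k >= 3 and np.random.rand() < p_target  # first three: non-targets
        b = beep(tgt_hz if is_target else std_hz)
        i = int((start + k * period) * FS)
        sig[i:i + len(b)] += b
    return sig

right = stream(8, 0.490, 1500, 1650)                # right speaker
left = stream(7, 0.555, 800, 880, start=0.070)      # left speaker, 70 msec later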
According to the direction of the arrow on each trial, subjects were instructed to
count the number of target beeps in either the left or right sequence. In the pause
between trials, they were instructed to report the number of target beeps using a
numeric keypad.1
[Figure 1: Schematic illustration of the acoustic stimuli used in the experiment, and of the averaging process used in preprocessing method A (illustrated by showing what would happen if the sound signals themselves were averaged): the LEFT and RIGHT sound signals have different periods, deviant tones are either longer in duration or absent, and epoch averaging produces a concatenated averaged signal.]
The sequences differed in location and pitch in order to help the subjects focus
their attention on one sequence and ignore the other. The task of reporting the
number of target beeps was instituted in order to keep the subjects alert, and to
make the task more concrete, because subjects in pilot experiments found that just
being asked "listen to the left" or "listen to the right" was too vague a task demand
to perform well over the course of 400 repetitions.2 The regular repetition of the
beeps, at the two different periods, was designed to allow the average ERP to a left-hand beep on a single trial to be examined with minimal contamination by ERPs
to right-hand beeps, and vice versa: figure 1 illustrates that, when the periods of
one signal are averaged, signals correlated with that sequence add in phase, whereas
signals correlated with the other sequence spread out, out of phase. Comparison
of the average response to a left beat with the average response to a right beat,
on a single trial, should thus emphasize any modulating effect of the direction of
attention on the ERP, of the kind described by Hillyard et al. [6].
(1) In order to avoid contamination of the EEG signals with movement artefacts, a few
practice trials were performed before the first block, so that subjects learned to wait until
the fixation point was out before looking at the keypad or beginning the hand movement
toward it.
(2) Although a paralysed patient would clearly be unable to give responses in this way, it
is hoped that this extra motivation would not be necessary.
An additional stimulus feature was designed to investigate whether mismatch negativity (MMN) could form a useful basis for a BCI. Mismatch negativity is a difference between the ERP to standard stimuli and the ERP to deviant stimuli, i.e. rare
stimulus events (with probability of occurrence typically around 0.1) which differ
in some manner from the more regular standards. MMN is treated in detail by
Näätänen [9]. It has been associated with the distracting effect of the occurrence of
a deviant while processing standards, and while it occurs to stimuli outside as well
as inside the focus of attention, there is evidence to suggest that this distraction
effect is larger the more similar the (task-irrelevant) deviant stimulus is to the (task-relevant) standards [10]. Thus there is the possibility that a deviant stimulus (say,
a longer beep) inserted into the sequence to which the subject is attending (same
side, same pitch) might elicit a larger MMN signal than a deviant in the unattended
sequence. To explore this, after at least two standard beats of each trial, one of the
beats (randomly chosen, with the constraint that the epoch following the deviant
on the left should not overlap with the epoch following the deviant on the right)
was made to deviate on each trial. (Note that the frequencies of occurrence of the deviants were 1/7 and 1/8 rather than the ideal 1/10: the double constraint of having
manageably short trials and a reasonable epoch length meant that the number of
beeps in the left and right sequences was limited to seven and eight respectively,
and clearly to use MMN in BCI, every trial has to have at least one deviant in each
sequence.) For 8 subjects, the deviant beat was simply a silent beat: a disruptive
pause in the otherwise regular sequence. For the remaining 7 subjects, the deviant
beat was a beep lasting 100 msec instead of the usual 50 msec (as in the distraction
paradigm of Schröger and Wolff [10], the difference between deviant and standard is
on a task-irrelevant dimension, in our case duration, the task being to discriminate
pitch). A sixteenth subject, in the long-deviant condition, had to be eliminated
because of poor signal quality.
3 Analysis
As a first step in analyzing the data, the raw EEG signals were examined by eye for
each of the 400 trials of each of the subjects. Trials were rejected if they contained
obvious large artefact signals caused by blinks or saccades (visible in the EOG and
across most of the frontal positions), small periodic eye movements, or other muscle
movements (neck and brow, judged from electrode positions O9 and O10, Fp1, Fpz
and Fp2). Between 6 and 228 trials had to be rejected out of 400, depending on
the subject.
One of two alternative preprocessing methods was then used. In order to look for
effects of the attention-modulation reported by Hillyard et al. [6], method (A) took the
average ERP in response to standard beats (discarding the first beat). In order to
look for possible attention-modulation of MMN, method (B) subtracted the average
response to standards from the response to the deviant beat. In both methods, the
average ERP signal to beats on the left was concatenated with the average ERP
signal following beats on the right, as depicted in figure 1 (for illustrative purposes
the figure uses the sound signal itself, rather than an ERP). For each trial, either
preprocessing method resulted in a signal of 142 (left) + 125 (right) = 267 time
samples for each of 40 channels (39 EEG channels plus one EOG), for a total of
10680 input dimensions to the classifier.
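A sketch of preprocessing method A follows. It is our reconstruction from the description above: the epoch lengths of 142 and 125 samples correspond to the 555-msec and 490-msec periods at 256 Hz, and the onset lists shown are illustrative (in practice the first beat of each stream and the deviant beat would be excluded):

import numpy as np

FS = 256                        # sampling rate (Hz)
LEFT_LEN, RIGHT_LEN = 142, 125  # 555-msec and 490-msec periods in samples

def method_a(trial, left_onsets, right_onsets):
    """Preprocessing method A for one trial.
    trial: (n_channels, n_samples) array; onsets: sample indices of the
    standard beats to average.  Returns (n_channels, 142 + 125): the
    average ERP following left beats concatenated with the average ERP
    following right beats."""
    left = np.mean([trial[:, t:t + LEFT_LEN] for t in left_onsets], axis=0)
    right = np.mean([trial[:, t:t + RIGHT_LEN] for t in right_onsets], axis=0)
    return np.concatenate([left, right], axis=1)

# Illustrative onsets, taking t = 0 at stimulus onset: the left stream
# starts 70 msec after the right and repeats every 555 msec.
left_onsets = [round((0.070 + 0.555 * k) * FS) for k in range(1, 7)]
right_onsets = [round(0.490 * k * FS) for k in range(1, 8)]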
The classifier used was a linear hard-margin Support Vector Machine (SVM) [11].
To evaluate its performance, the trials from a single subject were split into ten non-overlapping partitions of equal size: each such partition was used in turn as a test
set for evaluating the performance of the classifier trained on the other 90% of the
trials. Before training, linear Independent Component Analysis (ICA) was carried
out on the training set in order to perform blind source separation; this is a common
technique in the analysis of EEG data [12, 13], since signals measured through the
intervening skull, meninges and cerebro-spinal fluid are of low spatial resolution,
and the activity measured from neighbouring EEG electrodes can be assumed to be
highly correlated mixtures of the underlying sources. For the purposes of the ICA,
the concatenation of all the preprocessed signals from one EEG channel, from all
trials in the training partition, was treated as a single mixture signal. A 40-by-40
separating matrix was obtained using the stabilized deflation algorithm from version
2.1 of FastICA [14]. This matrix, computed only from the training set, was then
used to separate the signals in both the training set and the test set. Then, the
signals were centered and normalized: for each averaged (unmixed) ERP in each of
the 40 ICs of each trial, the mean was subtracted, and the signal was divided by its
2-norm. Thus the entry $K_{ij}$ in the kernel matrix of the SVM was proportional to the
sum of the coefficients of correlation between corresponding epochs in trials i and j.
The SVM was then trained and tested. Single-trial error rate was estimated as the
mean proportion of misclassified test trials across the ten folds. For comparison,
the classification was also performed on the mixture signals without ICA, and with
and without the normalizing step.
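The centring and normalization step, and the kernel it induces, can be sketched as follows (our reconstruction; the array shapes are assumptions based on the description above):

import numpy as np

def center_normalize(trials):
    """trials: (n_trials, n_components, n_samples) of unmixed, averaged
    ERPs.  Subtract each component's mean and scale it to unit 2-norm,
    then flatten; a linear kernel between the resulting vectors sums the
    correlation coefficients of corresponding epochs."""
    out = trials - trials.mean(axis=2, keepdims=True)
    out = out / np.linalg.norm(out, axis=2, keepdims=True)
    return out.reshape(len(trials), -1)

# K[i, j] is then proportional to the summed epoch-wise correlations
X = center_normalize(np.random.default_rng(0).standard_normal((40, 40, 267)))
K = X @ X.T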
Results are shown in table 1. Due to space constraints, standard error values
for the estimated error rates are not shown: standard error was typically ±0.025,
and maximally ±0.04. It can be seen that the best error rate obtainable with
a given subject varies according to the subject, between 3% and 37%, in a way
that is not entirely explained by the differences in the numbers of good (artefact-free) trials available. ICA generally improved the results, by anything up to 14%.
Preprocessing method (B) generally performed poorly (minimum 19% error, and
generally over 35%). Any attention-dependent modulation of an MMN signal is
apparently too small relative to the noise (signals from method B were generally
noisier than those from method A, because the latter, averaged over 5 or 6 epochs
within a trial, are subtracted from signals that come from only one epoch per
trial in order to produce the method B average). For preprocessing method A,
normalization generally produced a small improvement.
Thus, promising results can be obtained using the average ERP in response to
standard beeps, using ICA followed by normalization (fourth results column): error
rates of 5–15% for some subjects are comparable with the performance of, for
example, well-trained patients in an SCP paradigm [1], and correspond to information transfer rates of 0.4–0.7 bits per trial (say, 4–7 bits per minute). Note that,
despite the fact that this method does not use the ERPs that occur in response
to deviant beats, the results for subjects in the silent-deviant condition were generally better than for those in the long-deviant condition. It may be that the more
irregular-sounding sequences with silent beats forced the subjects to concentrate
harder in order to perform the counting task; alternatively, it may simply be that
this group of subjects could concentrate less well, an interpretation which is also
suggested by the fact that more trials had to be rejected from their data sets.
In order to examine the extent to which the dimensionality of the classification
problem could be reduced, recursive feature elimination [15] was performed (limited
now to preprocessing method A with ICA and normalization). For each of ten folds,
ICA and normalization were performed, then an SVM was trained and tested. For
each independent component j, an elimination criterion value $c_j = \sum_{i \in F_j} w_i^2$ was
computed, where $w$ is the hyperplane normal vector of the trained SVM, and $F_j$
is the set of indices to features that are part of component j. The IC with the
lowest criterion score $c_j$ was deemed to be the least influential for classification, and the
corresponding features $F_j$ were removed. Then the SVM was re-trained and re-tested,
and the elimination process iterated until one component remained.
Table 1: SVM classification error rates for each of the preprocessing methods, A and B (see
text). The symbol ||·|| denotes normalization during pre-processing as described in the text,
and - denotes no normalization; the lowest value in each four-column block is the subject's
best rate for that method.

                                     Method A                    Method B
                                  no ICA        ICA          no ICA        ICA
subj.  deviant   # good         -      ||·||   -     ||·||    -     ||·||   -     ||·||
       (msec)    trials
CM        0       326         0.08   0.06    0.06   0.04    0.36   0.35   0.26   0.25
CN        0       250         0.26   0.19    0.28   0.14    0.43   0.44   0.38   0.40
GH        0       198         0.34   0.27    0.35   0.22    0.41   0.41   0.39   0.43
JH        0       348         0.21   0.19    0.14   0.08    0.31   0.42   0.28   0.35
KT        0       380         0.23   0.21    0.15   0.07    0.41   0.36   0.35   0.34
KW        0       394         0.18   0.14    0.06   0.03    0.34   0.39   0.19   0.23
TD        0       371         0.22   0.18    0.15   0.10    0.35   0.39   0.29   0.28
TT        0       367         0.32   0.31    0.33   0.32    0.40   0.42   0.39   0.43
AH      100       353         0.22   0.22    0.17   0.16    0.41   0.41   0.45   0.46
AK      100       172         0.35   0.31    0.34   0.22    0.50   0.46   0.50   0.42
CG      100       271         0.37   0.29    0.31   0.28    0.51   0.47   0.48   0.44
CH      100       375         0.31   0.28    0.26   0.22    0.49   0.46   0.46   0.44
DK      100       241         0.34   0.34    0.35   0.30    0.45   0.44   0.42   0.40
KB      100       363         0.21   0.21    0.15   0.10    0.42   0.47   0.39   0.41
SK      100       239         0.47   0.43    0.40   0.37    0.46   0.49   0.45   0.51
The removal of batches of features in this way is similar to the Recursive Channel
Elimination approach to BCI introduced by Lal et al. [3], except that independent
components are removed instead of mixtures (a convenient acronym would therefore
be RICE, for Recursive Independent Component Elimination).
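The criterion and elimination loop can be sketched as follows (a simplification: scikit-learn's SVC with a large C stands in for the hard-margin SVM, the per-fold ICA step is omitted, and the feature columns are assumed to be grouped contiguously by component):

import numpy as np
from sklearn.svm import SVC

def rice(X, y, n_components, epoch_len):
    """Recursive Independent Component Elimination (sketch).
    X: (n_trials, n_components * epoch_len), columns grouped by component.
    Returns component indices ordered from least to most influential."""
    active = list(range(n_components))
    ranking = []
    while len(active) > 1:
        cols = np.concatenate([np.arange(c * epoch_len, (c + 1) * epoch_len)
                               for c in active])
        svm = SVC(kernel="linear", C=1e6).fit(X[:, cols], y)   # ~hard margin
        w = svm.coef_.ravel()
        # c_j = sum of squared hyperplane weights belonging to component j
        scores = [np.sum(w[i * epoch_len:(i + 1) * epoch_len] ** 2)
                  for i in range(len(active))]
        ranking.append(active.pop(int(np.argmin(scores))))
    return ranking + active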
Results for the two subject groups are plotted in the left and right panels of figure 2,
showing estimated error rates averaged over ten folds against the number of ICs used
for classification. Each subject's initials, together with the number of usable trials
that subject performed, are printed to the right of the corresponding curve.3 It can
be seen that a fairly large number of ICs (around 20–25 out of the 40) contribute to
the classification: this may indicate that the useful information in the EEG signals
is diffused fairly widely between the areas of the brain from which we are detecting
signals (indeed, this is in accordance with much auditory-ERP and -MMN research,
in which strong signals are often measured at the vertex, quite far from the auditory
cortex [6, 7, 8, 9]). One of the motivations for reducing the dimensionality of the
data is to determine whether performance can be improved as irrelevant noise is
eliminated, and as the probability of overfitting decreases. However, these factors do
not seem to limit performance on the current data: for most subjects, performance
does not improve as features are eliminated, instead remaining roughly constant
until fewer than 20–25 ICs remain. A possible exception is KT, whose performance
may improve by 2–3% after elimination of 20 components, and a clearer exception
3 RICE was also carried out using the full 400 trials for each subject (results not shown).
Despite the (sometimes drastic) reduction in the number of trials, rejection by eye of
artefact trials did not raise the classification error rate by an appreciable amount. The
one exception was subject SK, for whom the probability of mis-classification increased by
about 0.1 when 161 trials containing strong movement signals were removed; clearly this
subject's movements were classifiably dependent on whether he was attending to the left
or to the right.
[Figure 3: two panels, one per subject group ("deviant duration = 0" and "deviant duration
= 100 msec"), plotting classification error rate (0 to 0.5) against the number of ICs
retained (5 to 40). Each curve is labeled with the subject's initials and number of useable
trials: CM (326), CN (250), GH (198), JH (348), KT (380), KW (394), TD (371), TT (367);
AH (353), AK (172), CG (271), CH (375), DK (241), KB (363), SK (239).]
Figure 3: Results of recursive independent component elimination
is CG, for whom elimination of 25 components yields an improvement of roughly
10%.
The ranking returned by the RICE method is somewhat difficult to interpret, not
least because each fold of the procedure can compute a different ICA decomposition,
whose independent components are not necessarily readily identifiable with one
another. A thorough analysis is not possible here; however, with the mixture
weightings for many ICs spread very widely around the electrode array, we found
no strong evidence for or against the particular involvement of muscle movement
artefact signals in the classification.
4 Conclusion
Despite wide variation in performance between subjects, which is to be expected
in the analysis of EEG data, our classification results suggest that it is possible
for a user with no previous training to direct conscious attention, and thereby
modulate the event-related potentials that occur in response to auditory stimuli
reliably enough, on a single trial, to provide a useful basis for a BCI. The information
used by the classifier seems to be diffused fairly widely over the scalp. While
the ranking from recursive independent component elimination did not reveal any
evidence of an overwhelming contribution from artefacts related to muscle activity,
it is not possible to rule out completely the involvement of such artefacts; possibly
the only way to be sure of this is to implement the interface with locked-in patients,
preparations for which are underway.
Acknowledgments
Many thanks to Prof. Kuno Kirschfeld and Bernd Battes for the use of their laboratory.
References
[1] N. Birbaumer, A. Kübler, N. Ghanayim, T. Hinterberger, J. Perelmouter, J. Kaiser,
I. Iversen, B. Kotchoubey, N. Neumann, and H. Flor. The Thought Translation Device (TTD) for Completely Paralyzed Patients. IEEE Transactions on Rehabilitation
Engineering, 8(2):190–193, June 2000.
[2] G. Pfurtscheller, C. Neuper, A. Schlögl, and K. Lugger. Separability of EEG
signals recorded during right and left motor imagery using adaptive autoregressive
parameters. IEEE Transactions on Rehabilitation Engineering, 6(3):316–325, 1998.
[3] T.N. Lal, M. Schröder, T. Hinterberger, J. Weston, M. Bogdan, N. Birbaumer,
and B. Schölkopf. Support Vector Channel Selection in BCI. IEEE Transactions
on Biomedical Engineering. Special Issue on Brain-Computer Interfaces, 51(6):1003–
1010, June 2004.
[4] E. Donchin, K.M. Spencer, and R. Wijesinghe. The mental prosthesis: Assessing the
speed of a P300-based brain-computer interface. IEEE Transactions on Rehabilitation
Engineering, 8:174–179, 2000.
[5] L.A. Riggs, F. Ratliff, J.C. Cornsweet, and T.N. Cornsweet. The disappearance of
steadily fixated visual test objects. Journal of the Optical Society of America, 43:495–
501, 1953.
[6] S.A. Hillyard, R.F. Hink, V.L. Schwent, and T.W. Picton. Electrical signs of selective
attention in the human brain. Science, 182:177–180, 1973.
[7] R. Näätänen. Processing negativity: an evoked-potential reflection of selective attention. Psychological Bulletin, 92(3):605–640, 1982.
[8] R. Näätänen. The role of attention in auditory information processing as revealed by
event-related potentials and other brain measures of cognitive function. Behavioral
and Brain Sciences, 13:201–288, 1990.
[9] R. Näätänen. Attention and Brain Function. Erlbaum, Hillsdale NJ, 1992.
[10] E. Schröger and C. Wolff. Behavioral and electrophysiological effects of task-irrelevant
sound change: a new distraction paradigm. Cognitive Brain Research, 7:71–87, 1998.
[11] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, Cambridge, USA,
2002.
[12] K.R. Müller, J. Kohlmorgen, A. Ziehe, and B. Blankertz. Decomposition algorithms
for analysing brain signals. In S. Haykin, editor, Adaptive Systems for Signal Processing, Communications and Control, pages 105–110, 2000.
[13] A. Delorme and S. Makeig. EEGLAB: an open source toolbox for analysis of singletrial EEG dynamics including Independent Component Analysis. Journal of Neuroscience Methods, 134:9–21, 2004.
[14] A. Hyvärinen. Fast and robust fixed-point algorithms for Independent Component
Analysis. IEEE Transactions on Neural Networks, 10(3):626–634, 1999.
[15] I. Guyon, J. Weston, S. Barnhill, and V. Vapnik. Gene Selection for Cancer Classification using Support Vector Machines. Journal of Machine Learning Research,
3:1439–1461, 2003.
Intrinsically Motivated Reinforcement Learning
Satinder Singh
Computer Science & Eng.
University of Michigan
[email protected]
Andrew G. Barto
Dept. of Computer Science
University of Massachusetts
[email protected]
Nuttapong Chentanez
Computer Science & Eng.
University of Michigan
[email protected]
Abstract
Psychologists call behavior intrinsically motivated when it is engaged in
for its own sake rather than as a step toward solving a specific problem
of clear practical value. But what we learn during intrinsically motivated
behavior is essential for our development as competent autonomous entities able to efficiently solve a wide range of practical problems as they
arise. In this paper we present initial results from a computational study
of intrinsically motivated reinforcement learning aimed at allowing artificial agents to construct and extend hierarchies of reusable skills that are
needed for competent autonomy.
1 Introduction
Psychologists distinguish between extrinsic motivation, which means being moved to do
something because of some specific rewarding outcome, and intrinsic motivation, which
refers to being moved to do something because it is inherently enjoyable. Intrinsic motivation leads organisms to engage in exploration, play, and other behavior driven by curiosity
in the absence of explicit reward. These activities favor the development of broad competence rather than being directed to specific external goals (e.g., ref. [14]). In
contrast, machine learning algorithms are typically applied to single problems and so do
not cope flexibly with new problems as they arise over extended periods of time.
Although the acquisition of competence may not be driven by specific problems, this competence is routinely enlisted to solve many different specific problems over the agent's
lifetime. The skills making up general competence act as the "building blocks" out of
which an agent can form solutions to new problems as they arise. Instead of facing each
new challenge by trying to create a solution out of low-level primitives, it can focus on
combining and adjusting its higher-level skills. In animals, this greatly increases the efficiency of learning to solve new problems, and our main objective is to achieve a similar
efficiency in our machine learning algorithms and architectures.
This paper presents an elaboration of the reinforcement learning (RL) framework [11] that
encompasses the autonomous development of skill hierarchies through intrinsically motivated reinforcement learning. We illustrate its ability to allow an agent to learn broad
competence in a simple ?playroom? environment. In a related paper [1], we provide more
extensive background for this approach, whereas here the focus is more on algorithmic
details.
Lack of space prevents us from providing a comprehensive background to the many ideas
to which our approach is connected. Many researchers have argued for this kind of devel-
opmental approach in which an agent undergoes an extended developmental period during which collections of reusable skills are autonomously learned that will be useful for
a wide range of later challenges (e.g., [4, 13]). The previous machine learning research
most closely related is that of Schmidhuber (e.g., [8]) on confidence-based curiosity and
the ideas of exploration and shaping bonuses [6, 10], although our definition of intrinsic reward differs from these. The most direct inspiration behind the experiment reported in this
paper, comes from neuroscience. The neuromodulator dopamine has long been associated
with reward learning [9]. Recent studies [2, 3] have focused on the idea that dopamine not
only plays a critical role in the extrinsic motivational control of behaviors aimed at harvesting explicit rewards, but also in the intrinsic motivational control of behaviors associated
with novelty and exploration. For instance, salient, novel sensory stimuli inspire the same
sort of phasic activity of dopamine cells as unpredicted rewards. However, this activation
extinguishes more or less quickly as the stimuli become familiar. This may underlie the fact
that novelty itself has rewarding characteristics [7]. These connections are key components
of our approach to intrinsically motivated RL.
2 Reinforcement Learning of Skills
According to the "standard" view of RL (e.g., [11]) the agent-environment interaction is
envisioned as the interaction between a controller (the agent) and the controlled system (the
environment), with a specialized reward signal coming from a "critic" in the environment
that evaluates (usually with a scalar reward value) the agent's behavior (Fig. 1A). The agent
learns to improve its skill in controlling the environment in the sense of learning how to
increase the total amount of reward it receives over time from the critic.
[Figure 1: panel A shows the usual agent-environment loop, with the critic in the
environment producing rewards and the agent receiving states and rewards and emitting
actions. Panel B factors the environment into an external environment and an internal
environment; the critic sits inside the internal environment (within the "organism"),
turning sensations into rewards, while the agent exchanges states, decisions/actions,
and rewards with it.]
Figure 1: Agent-Environment Interaction in RL. A: The usual view. B: An elaboration.
Sutton and Barto [11] point out that one should not identify this RL agent with an entire
animal or robot. An an animal?s reward signals are determined by processes within its brain
that monitor not only external state but also the animal?s internal state. The critic is in an
animal?s head. Fig. 1B makes this more explicit by ?factoring? the environment of Fig. 1A
into an external environment and an internal environment, the latter of which contains the
critic which determines primary reward. This scheme still includes cases in which reward
is essentially an external stimulus (e.g., a pat on the head or a word of praise). These are
simply stimuli transduced by the internal environment so as to generate the appropriate
level of primary reward.
The usual practice in applying RL algorithms is to formulate the problem one wants the
agent to learn how to solve (e.g., win at backgammon) and define a reward function specially tailored for this problem (e.g., reward = 1 on a win, reward = 0 on a loss). Sometimes
considerable ingenuity is required to craft an appropriate reward function. The point of
departure for our approach is to note that the internal environment contains, among other
things, the organism's motivational system, which needs to be a sophisticated system that
should not have to be redesigned for different problems. Handcrafting a different special-purpose motivational system (as in the usual RL practice) should be largely unnecessary.
Skills: Autonomous mental development should result in a collection of reusable skills.
But what do we mean by a skill? Our approach to skills builds on the theory of options
[12]. Briefly, an option is something like a subroutine. It consists of 1) an option policy that
directs the agent?s behavior for a subset of the environment states, 2) an initiation set consisting of all the states in which the option can be initiated, and 3) a termination condition,
which specifies the conditions under which the option terminates. It is important to note
that an option is not a sequence of actions; it is a closed-loop control rule, meaning that
it is responsive to on-going state changes. Furthermore, because options can invoke other
options as actions, hierarchical skills and algorithms for learning them naturally emerge
from the conception of skills as options. Theoretically, when options are added to the set
of admissible agent actions, the usual Markov decision process (MDP) formulation of RL
extends to semi-Markov decision processes (SMDPs), with the one-step actions now becoming the "primitive actions." All of the theory and algorithms applicable to SMDPs can
be appropriated for decision making and learning with options [12].
Two components of the the options framework are especially important for our approach:
1. Option Models: An option model is a probabilistic description of the effects of executing
an option. As a function of an environment state where the option is initiated, it gives the
probability with which the option will terminate at any other state, and it gives the total
amount of reward expected over the option?s execution. Option models can be learned
from experience (usually only approximately) using standard methods. Option models
allow stochastic planning methods to be extended to handle planning at higher levels of
abstraction.
2. Intra-option Learning Methods: These methods allow the policies of many options to
be updated simultaneously during an agent's interaction with the environment. If an option
could have produced a primitive action in a given state, its policy can be updated on the
basis of the observed consequences even though it was not directing the agent's behavior
at the time.
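As a concrete (and purely illustrative) sketch of these two components, an option and
its learned model in a small tabular problem might be represented along the following
lines in Python; the class and field names are assumptions for exposition, not code from
the options literature.

    from dataclasses import dataclass, field
    from collections import defaultdict

    @dataclass(eq=False)  # identity-hashable, so options can index value tables
    class Option:
        initiation_set: set = field(default_factory=set)      # states where the option may start
        termination_prob: dict = field(default_factory=dict)  # beta(s); 0 elsewhere
        q: dict = field(default_factory=lambda: defaultdict(float))  # option action-values Q(s, a)
        # learned option model:
        trans_model: dict = field(default_factory=lambda: defaultdict(float))   # P(x | s)
        reward_model: dict = field(default_factory=lambda: defaultdict(float))  # R(s)

        def beta(self, s):
            return self.termination_prob.get(s, 0.0)

        def greedy_action(self, s, actions):
            # the option's policy: greedy with respect to its own action-values
            return max(actions, key=lambda a: self.q[(s, a)])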
In most of the work with options, the set of options must be provided by the system designer.
While an option's policy can be improved through learning, each option has to be predefined by providing its initiation set, termination condition, and the reward function that
evaluates its performance. Many researchers have recognized the desirability of automatically creating options, and several approaches have recently been proposed (e.g., [5]). For
the most part, these methods extract options from the learning system's attempts to solve a
particular problem, whereas our approach creates options outside of the context of solving
any particular problem.
Developing Hierarchical Collections of Skills: Children accumulate skills while they
engage in intrinsically motivated behavior, e.g., while at play. When they notice that something they can do reliably results in an interesting consequence, they remember this in a
form that will allow them to bring this consequence about if they wish to do so at a future
time when they think it might contribute to a specific goal. Moreover, they improve the
efficiency with which they bring about this interesting consequence with repetition, before
they become bored and move on to something else. We claim that the concepts of an option
and an option model are exactly appropriate to model this type of behavior. Indeed, one of
our main contributions is a (preliminary) demonstration of this claim.
3 Intrinsically Motivated RL
Our main departure from the usual application of RL is that our agent maintains a knowledge base of skills that it learns using intrinsic rewards. In most other regards, our extended RL framework is based on putting together learning and planning algorithms for
Loop forever
    Current state s_t, current primitive action a_t, current option o_t,
    extrinsic reward r^e_t, intrinsic reward r^i_t
    Obtain next state s_{t+1}
    // Deal with special case if next state is salient
    If s_{t+1} is a salient event e
        If option for e, o^e, does not exist in O (skill-KB)
            Create option o^e in skill-KB;
            Add s_t to I^{o^e}                      // initialize initiation set
            Set β^{o^e}(s_{t+1}) = 1                // set termination probability
        // set intrinsic reward value
        r^i_{t+1} = τ [1 − P^{o^e}(s_{t+1}|s_t)]    // τ is a constant multiplier
    else
        r^i_{t+1} = 0
    // Update all option models
    For each option o ≠ o^e in skill-KB (O)
        If s_{t+1} ∈ I^o, then add s_t to I^o       // grow initiation set
        If a_t is greedy action for o in state s_t
            // update option transition probability model
            P^o(x|s_t) ←_α [γ(1 − β^o(s_{t+1}))P^o(x|s_{t+1}) + γβ^o(s_{t+1})δ_{s_{t+1},x}]
            // update option reward model
            R^o(s_t) ←_α [r^e_t + γ(1 − β^o(s_{t+1}))R^o(s_{t+1})]
    // Q-learning update of behavior action-value function
    Q_B(s_t, a_t) ←_α [r^e_t + r^i_t + γ max_{a∈A∪O} Q_B(s_{t+1}, a)]
    // SMDP-planning update of behavior action-value function
    For each option o in skill-KB
        Q_B(s_t, o) ←_α [R^o(s_t) + Σ_{x∈S} P^o(x|s_t) max_{a∈A∪O} Q_B(x, a)]
    // Update option action-value functions
    For each option o ∈ O such that s_t ∈ I^o
        Q^o(s_t, a_t) ←_α [r^e_t + γ β^o(s_{t+1}) (terminal value for option o)
                           + γ(1 − β^o(s_{t+1})) max_{a∈A∪O} Q^o(s_{t+1}, a)]
        For each option o′ ∈ O such that s_t ∈ I^{o′} and o′ ≠ o
            Q^o(s_t, o′) ←_α [R^{o′}(s_t) + Σ_{x∈S} P^{o′}(x|s_t) (β^o(x) (terminal value for option o)
                           + (1 − β^o(x)) max_{a∈A∪O} Q^o(x, a))]
    Choose a_{t+1} using ε-greedy policy w.r.t. Q_B  // Choose next action
    // Determine next extrinsic reward
    Set r^e_{t+1} to the extrinsic reward for transition s_t, a_t → s_{t+1}
    Set s_t ← s_{t+1}; a_t ← a_{t+1}; r^e_t ← r^e_{t+1}; r^i_t ← r^i_{t+1}
Figure 2: Learning Algorithm. Extrinsic reward is denoted r^e while intrinsic reward is denoted r^i.
Equations of the form x ←_α [y] are short for x ← (1 − α)x + α[y]. The behavior action value function
Q_B is updated using a combination of Q-learning and SMDP planning. Throughout, γ is a discount
factor and α is the step-size. The option action value functions Q^o are updated using intra-option
Q-learning. Note that the intrinsic reward is only used in updating Q_B and not any of the Q^o.
options [12].
Behavior: The agent behaves in its environment according to an ε-greedy policy with respect to an action-value function Q_B that is learned using a mix of Q-learning and SMDP
planning as described in Fig. 2. Initially only the primitive actions are available to the agent.
Over time, skills represented internally as options and their models also become available
to the agent as action choices. Thus, Q_B maps states s and actions a (both primitive and
options) to the expected long-term utility of taking that action a in state s.
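A minimal sketch of this behavior policy, reusing the illustrative Option class above
and treating Q_B as a plain dictionary; all names are assumptions for exposition.

    import random

    def epsilon_greedy(state, QB, primitive_actions, options, eps=0.1):
        """Pick a primitive action or any option whose initiation set contains state."""
        available = list(primitive_actions) + \
                    [o for o in options if state in o.initiation_set]
        if random.random() < eps:
            return random.choice(available)                           # explore
        return max(available, key=lambda a: QB.get((state, a), 0.0))  # exploit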
Salient Events: In our current implementation we assume that the agent has intrinsic or
hardwired notions of interesting or "salient" events in its environment. For example, in
the playroom environment we present shortly, the agent finds changes in light and sound
intensity to be salient. These are intended to be independent of any specific task and likely
to be applicable to many environments.
Reward: In addition to the usual extrinsic rewards there are occasional intrinsic rewards
generated by the agent's critic (see Fig. 1B). In this implementation, the agent's intrinsic
reward is generated in a way suggested by the novelty response of dopamine neurons. The
intrinsic reward for each salient event is proportional to the error in the prediction of the
salient event according to the learned option model for that event (see Fig. 2 for detail).
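The rule just described can be sketched in a few lines, again reusing the illustrative
option representation from above; tau stands for the constant multiplier in Fig. 2, and
the function name is an assumption.

    def intrinsic_reward(event_option, s, s_next, tau=1.0):
        """Intrinsic reward = scaled prediction error of the salient event's option model."""
        if event_option is None:                           # s_next was not a salient event
            return 0.0
        predicted = event_option.trans_model[(s_next, s)]  # learned P(s_next | s)
        return tau * (1.0 - predicted)                     # surprising events pay more

As the option model becomes accurate, the predicted probability approaches 1 and the
reward fades, which is what produces the "boredom" effect described in the results below.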
Skill-KB: The agent maintains a knowledge base of skills that it has learned in its environment. Initially this may be empty. The first time a salient event occurs, say light turned
on, structures to learn an option that achieves that salient event (turn-light-on option) are
created in the skill-KB. In addition, structures to learn an option model are also created.
So for option o, Qo maps states s and actions a (again, both primitive and options) to the
long-term utility of taking action a in state s. The option for a salient event terminates with
probability one in any state that achieves that event and never terminates in any other state.
The initiation set, I o , for an option o is incrementally expanded to includes states that lead
to states in the current initiation set.
Learning: The details of the learning algorithm are presented in Fig. 2.
4 Playroom Domain: Empirical Results
We implemented intrinsically motivated RL (of Fig. 2) in a simple artificial "playroom"
domain shown in Fig. 3A. In the playroom are a number of objects: a light switch, a ball,
a bell, two movable blocks that are also buttons for turning music on and off, as well as a
toy monkey that can make sounds. The agent has an eye, a hand, and a visual marker (seen
as a cross hair in the figure). The agent's sensors tell it what objects (if any) are under the
eye, hand and marker. At any time step, the agent has the following actions available to
it: 1) move eye to hand, 2) move eye to marker, 3) move eye one step north, south, east or
west, 4) move eye to random object, 5) move hand to eye, and 6) move marker to eye. In
addition, if both the eye and hand are on some object, then natural operations suggested
by the object become available, e.g., if both the hand and the eye are on the light switch,
then the action of flicking the light switch becomes available, and if both the hand and eye
are on the ball, then the action of kicking the ball becomes available (which when pushed,
moves in a straight line to the marker).
The objects in the playroom all have potentially interesting characteristics. The bell rings
once and moves to a random adjacent square if the ball is kicked into it. The light switch
controls the lighting in the room. The colors of any of the blocks in the room are only
visible if the light is on, otherwise they appear similarly gray. The blue block if pressed
turns music on, while the red block if pressed turns music off. Either block can be pushed
and as a result moves to a random adjacent square. The toy monkey makes frightened
sounds if simultaneously the room is dark and the music is on and the bell is rung. These
objects were designed to have varying degrees of difficulty to engage. For example, to get
the monkey to cry out requires the agent to do the following sequence of actions: 1) get its
eye to the light switch, 2) move hand to eye, 3) push the light switch to turn the light on, 4)
find the blue block with its eye, 5) move the hand to the eye, 6) press the blue block to turn
music on, 7) find the light switch with its eye, 8) move hand to eye, 9) press light switch
to turn light off, 10) find the bell with its eye, 11) move the marker to the eye, 12) find the
ball with its eye, 13) move its hand to the ball, and 14) kick the ball to make the bell ring.
Notice that if the agent has already learned how to turn the light on and off, how to turn
music on, and how to make the bell ring, then those learned skills would be of obvious use
in simplifying this process of engaging the toy monkey.
[Figure 3: panel A shows the playroom domain; panel B, "Performance of Learned Options",
plots the average number of actions needed to reach each salient event (Sound On, Light On,
Music On, Toy Monkey On) against the number of actions taken (0 to 2.5 x 10^7); panel C,
"Effect of Intrinsically Motivated Learning", plots the number of steps between extrinsic
rewards (0 to 10000) against the number of extrinsic rewards (0 to 600), comparing
"Extrinsic Reward Only" with "Intrinsic & Extrinsic Rewards".]
Figure 3: A. Playroom domain. B. Speed of learning of various skills. C. The effect of intrinsically
motivated learning when extrinsic reward is present. See text for details
For this simple example, changes in light and sound intensity are considered salient by the
playroom agent. Because the initial action value function, Q_B, is uninformative, the agent
starts by exploring its environment randomly. Each first encounter with a salient event
initiates the learning of an option and an option model for that salient event. For example,
the first time the agent happens to turn the light on, it initiates the data structures necessary
for learning and storing the light-on option. As the agent moves around the environment, all
the options (initiated so far) and their models are simultaneously updated using intra-option
learning.
As shown in Fig. 2, the intrinsic reward is used to update Q_B. As a result, when the agent
encounters an unpredicted salient event a few times, its updated action value function drives
it to repeatedly attempt to achieve that salient event. There are two interesting side effects
of this: 1) as the agent tries to repeatedly achieve the salient event, learning improves both
its policy for doing so and its option-model that predicts the salient event, and 2) as its
option policy and option model improve, the intrinsic reward diminishes and the agent gets
"bored" with the associated salient event and moves on. Of course, the option policy and
model become accurate in states the agent encounters frequently. Occasionally, the agent
encounters the salient event in a state (set of sensor readings) that it has not encountered
before, and it generates intrinsic reward again (it is "surprised").
A summary of results is presented in Fig. 4. Each panel of the figure is for a distinct salient
event. The graph in each panel shows both the time steps at which the event occurs as
well as the intrinsic reward associated by the agent to each occurrence. Each occurrence is
denoted by a vertical bar whose height denotes the amount of associated intrinsic reward.
Note that as one goes from top to bottom in this figure, the salient events become harder to
achieve and, in fact, become more hierarchical. Indeed, the lowest one for turning on the
monkey noise (Non) needs light on, music on, light off, sound on in sequence. A number
of interesting results can be observed in this figure. First note that the salient events that
are simpler to achieve occur earlier in time. For example, Lon (light turning on) and Loff
(light turning off) are the simplest salient events, and the agent makes these happen quite
early. The agent tries them a large number of times before getting bored and moving on to
other salient events. The reward obtained for each of these events diminishes after repeated
exposure to the event. Thus, automatically, the skill of achieving the simpler events are
learned before those for the more complex events.
Figure 4: Results from the playroom domain. Each panel depicts the occurrences of salient events
as well as the associated intrinsic rewards. See text for details.
Of course, the events keep happening despite their diminished capacity to reward because
they are needed to achieve the more complex events. Consequently, the agent continues to
turn the light on and off even after it has learned this skill because this is a step along the
way toward turning on the music, as well as along the way toward turning on the monkey
noise. Finally note that the more complex skills are learned relatively quickly once the
required sub-skills are in place, as one can see by the few rewards the agent receives for
them. The agent is able to bootstrap and build upon the options it has already learned for the
simpler events. We confirmed the hierarchical nature of the learned options by inspecting
the greedy policies for the more complex options like Non and Noff. The fact that all the
options are successfully learned is also seen in Fig. 3B in which we show how long it takes
to bring about the events at different points in the agent's experience (there is an upper
cutoff of 120 steps). This figure also shows that the simpler skills are learned earlier than
the more complex ones.
An agent having a collection of skills learned through intrinsic reward can learn a wide
variety of extrinsically rewarded tasks more easily than an agent lacking these skills. To
illustrate, we looked at a playroom task in which extrinsic reward was available only if
the agent succeeded in making the monkey cry out. This requires the 14 steps described
above. This is difficult for an agent to learn if only the extrinsic reward is available, but
much easier if the agent can use intrinsic reward to learn a collection of skills, some of
which are relevant to the overall task. Fig. 3C compares the performance of two agents in
this task. Each starts out with no knowledge of task, but one employs the intrinsic reward
mechanism we have discussed above. The extrinsic reward is always available, but only
when the monkey cries out. The figure, which shows the average of 100 repetitions of the
experiment, clearly shows the advantage of learning with intrinsic reward.
Discussion: One of the key aspects of the Playroom example was that intrinsic reward
was generated only by unexpected salient events. But this is only one of the simplest
possibilities and has many limitations. It cannot account for what makes many forms of
exploration and manipulation "interesting." In the future, we intend to implement computational analogs of other forms of intrinsic motivation as suggested in the psychological,
statistical, and neuroscience literatures.
Despite the "toy" nature of this domain, these results are among the most sophisticated
we have seen involving intrinsically motivated learning. Moreover, they were achieved
quite directly by combining a collection of existing RL algorithms for learning options and
option-models with a simple notion of intrinsic reward. The idea of intrinsic motivation for
artificial agents is certainly not new, but we hope to have shown that the elaboration of the
formal RL framework in the direction we have pursued, together with the use of recentlydeveloped hierarchical RL algorithms, provides a fruitful basis for developing competently
autonomous agents.
Acknowledgement Satinder Singh and Nuttapong Chentanez were funded by NSF grant CCF
0432027 and by a grant from DARPA's IPTO program. Andrew Barto was funded by NSF grant
CCF 0432143 and by a grant from DARPA's IPTO program.
References
[1] A. G. Barto, S. Singh, and N. Chentanez. Intrinsically motivated learning of hierarchical collections of skills. In Proceedings of the 3rd International Conference on Developmental Learning
(ICDL '04), La Jolla, CA, 2004.
[2] P. Dayan and B. W. Balleine. Reward, motivation and reinforcement learning. Neuron, 36:285–
298, 2002.
[3] S. Kakade and P. Dayan. Dopamine: Generalization and bonuses. Neural Networks, 15:549–
559, 2002.
[4] F. Kaplan and P.-Y. Oudeyer. Motivational principles for visual know-how development. In
C. G. Prince, L. Berthouze, H. Kozima, D. Bullock, G. Stojanov, and C. Balkenius, editors,
Proceedings of the Third International Workshop on Epigenetic Robotics: Modeling Cognitive
Development in Robotic Systems, pages 73–80, Edinburgh, Scotland, 2003. Lund University
Cognitive Studies.
[5] A. McGovern. Autonomous Discovery of Temporal Abstractions from Interaction with An Environment. PhD thesis, University of Massachusetts, 2002.
[6] A. Ng, D. Harada, and S. Russell. Policy invariance under reward transformations: Theory and
application to reward shaping. In Proceedings of the Sixteenth ICML. Morgan Kaufmann, 1999.
[7] P. Reed, C. Mitchell, and T. Nokes. Intrinsic reinforcing properties of putatively neutral stimuli
in an instrumental two-lever discrimination task. Animal Learning and Behavior, 24:38–45,
1996.
[8] J. Schmidhuber. A possibility for implementing curiosity and boredom in model-building neural
controllers. In From Animals to Animats: Proceedings of the First International Conference on
Simulation of Adaptive Behavior, pages 222–227, Cambridge, MA, 1991. MIT Press.
[9] W. Schultz. Predictive reward signal of dopamine neurons. Journal of Neurophysiology, 80:1–
27, 1998.
[10] R. S. Sutton. Integrated modeling and control based on reinforcement learning and dynamic
programming. In Proceedings of NIPS, pages 471–478, San Mateo, CA, 1991.
[11] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge,
MA, 1998.
[12] R. S. Sutton, D. Precup, and S. Singh. Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112:181–211, 1999.
[13] J. Wang, J. McClelland, A. Pentland, O. Sporns, I. Stockman, M. Sur, and E. Thelen. Autonomous mental development by robots and animals. Science, 291:599–600, 2001.
[14] R. W. White. Motivation reconsidered: The concept of competence. Psychological Review,
66:297–333, 1959.
Sampling Methods for Unsupervised Learning
Rob Fergus* & Andrew Zisserman
Dept. of Engineering Science
University of Oxford
Parks Road, Oxford OX1 3PJ, UK.
{fergus,az}@robots.ox.ac.uk
Pietro Perona
Dept. Electrical Engineering
California Institute of Technology
Pasadena, CA 91125, USA.
[email protected]
Abstract
We present an algorithm to overcome the local maxima problem in estimating the parameters of mixture models. It combines existing approaches from both EM and a robust fitting algorithm, RANSAC, to give
a data-driven stochastic learning scheme. Minimal subsets of data points,
sufficient to constrain the parameters of the model, are drawn from proposal densities to discover new regions of high likelihood. The proposal
densities are learnt using EM and bias the sampling toward promising
solutions. The algorithm is computationally efficient, as well as effective
at escaping from local maxima. We compare it with alternative methods,
including EM and RANSAC, on both challenging synthetic data and the
computer vision problem of alpha-matting.
1 Introduction
In many real world applications we wish to learn from data which is not labeled, to find
clusters or some structure within the data. For example in Fig. 1(a) we have some clumps
of data that are embedded in noise. Our goal is to automatically find and model them. Since
our data has many components so must our model. Consequently the model will have many
parameters and finding the optimal settings for these is a difficult problem. Additionally,
in real world problems, the signal we are trying to learn is usually mixed in with a lot of
irrelevant noise, as demonstrated by the example in Fig. 1(b). The challenge here is to find
these lines reliably despite them only constituting a small portion of the data.
Images from Google, shown in Fig. 1(c), are typical of real world data, presenting both
the challenges highlighted above. Our motivating real-world problem is to learn a visual
model from the set of images returned by Google?s image search on an object type (such
as ?camel?, ?tiger? or ?bottles?), like those shown. Since text-based cues alone were used
to compile the images, typically only 20%-50% images are visually consistent and the
remainder may not even be images of the sought object type, resulting in a challenging
learning problem.
Latent variable models provide a framework for tackling such problems. The parameters
of these may be estimated using algorithms based on EM [2] in a maximum likelihood
framework. While EM provides an efficient estimation scheme, it has a serious problem in
that for complex models, a local maxima of the likelihood function is often reached rather
than the global maxima. Attempts to remedy this problem include: annealed versions of
EM [8]; Markov-Chain Monte-Carlo (MCMC) based clustering [4] and Split and Merge
EM (SMEM) [9].
* corresponding author
[Figure 1: three panels, (a), (b) and (c); panels (a) and (b) are scatter plots with both
axes running from -5 to 5.]
Figure 1: The objective is to learn from contaminated data such as these: (a) Synthetic
Gaussian data containing many components. (b) Synthetic line data with few components
but with a large portion of background noise. (c) Images obtained by typing "bottles" into
Google's image search.
Alternative approaches to unsupervised learning include the RANSAC [3, 5] algorithm and
its many derivatives. These rely on stochastic methods and have proven highly effective at
solving certain problems in Computer Vision, such as structure from motion, where the
signal-to-noise ratios are typically very small.
In this paper we introduce an unsupervised learning algorithm that is based on both latent
variable models and RANSAC-style algorithms. While stochastic in nature, it operates in
data space rather than parameter space, giving a far more efficient algorithm than traditional
MCMC methods.
2 Specification of the problem
We have a set of data x = {x_1 . . . x_N} with unseen labels y = {y_1 . . . y_N} and a parametric mixture model with parameters θ, of the form:

    p(x|θ) = Σ_y p(x, y|θ) = Σ_y p(x|y, θ) P(y|θ)        (1)

We assume the number of mixture components is known and equal to C. We also assume
that the parametric form of the mixture components is given. One of these components
will model the background noise, while the remainder fit the signal within the data. Thus
the task is to find the value of θ that maximizes the likelihood, p(x|θ), of the data. This
is not straightforward, as the dimensionality of θ is large and the likelihood function is
highly non-linear. Algorithms such as EM often get stuck in local maxima such as those
illustrated in Fig. 2, and since they use gradient descent alone, are unable to escape.
Before describing our algorithm, we first review the robust fitting algorithm RANSAC,
from which we borrow several key concepts to enable us to escape from local maxima.
2.1 RANSAC
RANSAC (RANdom Sampling And Consensus) attempts to find global maxima by drawing random subset of points, fitting a model to them and then measuring their support from
the data. A variant, MLESAC [7], gives a probabilistic interpretation of the original scheme
which we now explain.
The basic idea is to draw at random and without replacement from x, a set of P samples
for each of the C components in our model; P being the smallest number required to
compute the parameters θ_c for each component. Let draw i be represented by z_i, a vector
of length N containing exactly P ones, indicating the points selected, with the rest being
zeros. Thus x(z_i) is the subset of points drawn from x. From x(z_i) we then compute the
parameters for the component, θ_c^i. Having done this for all components, we then estimate
the component mixing portions, π, using EM (keeping the other parameters fixed), giving
a set of parameters for draw i, θ^i = {π, θ_1^i . . . θ_C^i}. Using these parameters, we compute
the likelihood over all the data: p(x|θ^i).
The entire process is repeated until either we exceed our maximum limit on the number of
draws or we reach a pre-defined performance level. The final set of parameters are those
that gave the highest likelihood: θ* = arg max_i p(x|θ^i). Since this process explores a
finite set of points in the space of θ, it is unlikely that the globally optimal point, θ_ML, will
be found, but θ* should be close, so that running EM from it is guaranteed to find the global
optimum.
However, it is clear that the approach of sampling randomly, while guaranteed to eventually find a point close to θ_ML, is very inefficient since the number of possible draws scales
exponentially with both P and C. Hence it is only suitable for small values of both P and
C. While Tordoff et al. [6] proposed drawing the samples from a non-uniform density,
this approach involved incorporating auxiliary information about each sample point which
may not be available for more general problems. However, Matas et al. [1] propose a general scheme to draw samples selectively from points tentatively classified as signal. This
increases the efficiency of the sampling and motivates our approach.
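The sample-and-score loop described above can be sketched as follows in Python, assuming
user-supplied helpers (fit_component for the closed-form fit from a minimal subset,
fit_mixing for the EM step over mixing weights, and loglik for the full mixture
likelihood); all names are illustrative, not an actual MLESAC implementation.

    import numpy as np

    def mlesac(X, C, P, n_draws, fit_component, fit_mixing, loglik, rng=None):
        rng = rng or np.random.default_rng()
        best_ll, best_params = -np.inf, None
        for _ in range(n_draws):
            # one minimal sample of P points per component, drawn uniformly
            thetas = [fit_component(X[rng.choice(len(X), size=P, replace=False)])
                      for _ in range(C)]
            pis = fit_mixing(X, thetas)     # EM over the mixing portions only
            ll = loglik(X, thetas, pis)     # score the candidate on all the data
            if ll > best_ll:
                best_ll, best_params = ll, (thetas, pis)
        return best_params, best_ll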
3 Our approach: PROPOSAL
Our approach, which we name PROPOSAL (PROPOsal based SAmple Learning), combines aspects of both EM and RANSAC to produce a method with the robustness of
RANSAC but with a far greater efficiency, enabling it to work on more complex models.
The problem with RANSAC is that points are drawn randomly. Even after a large number of draws this random sampling continues, despite the fact that we may have already
discovered a good, albeit local, maximum in our likelihood function.
The key idea in PROPOSAL is to draw samples from a proposal density. Initially this
density is uniform, as in RANSAC, but as regions of high likelihood are discovered, we
update it so that it gives a strong bias toward producing good draws again, increasing the
efficiency of the sampling process. However, having found local maxima, we must still be
able to escape and find the global maxima.
Local maxima are characterized by too many components in one part of the space and
too few in another. To resolve this we borrow ideas from Split and Merge EM (SMEM)
[9]. SMEM uses two types of discrete moves to discover superior maxima. In the first,
a component in an underpopulated region is split into two new ones, while in the second
two components in an overpopulated area are merged. These two moves are performed
together to keep the number of components constant. For the local maximum encountered in Fig. 2(a), merging the green and blue components while splitting the red component will yield a superior solution.
Figure 2: (a) Examples of different types of local maxima encountered. The green and blue
components on the left are overpopulating a small clump of data. The magenta component
in the center models noise, while missing a clump altogether. The single red component on
the right is inadequately modeling two clumps of data. (b) The global optimum solution.
PROPOSAL acts in a similar manner, by first finding components that are superfluous via two tests (described in section 3.3): (i) the Evaporation test, which would find the magenta component in Fig. 2(a), and (ii) the Overlap test, which would identify one of the green and blue components in Fig. 2(a). Then their proposal densities are adjusted so that they focus on data that is underpopulated by the model; thus subsequent samples are likely to discover a superior solution. An overview of the algorithm is as follows:
Algorithm 1 PROPOSAL
Require: Data x; Parameters: C, π_min, ε
for i = 1 to I_Max do
    repeat
        - For each component, c, compute parameters θ_c^i from P points drawn from the proposal density q_c(x|θ_c).
        - Estimate mixing portions, π^i, using EM, keeping θ_c^i fixed.
        - Compute the likelihood L^i = Π_n p(x_n | π^i, θ_1^i . . . θ_C^i).
    until L^i > L^Best_Rough
    - Refine Θ^i using EM to give Θ* with likelihood L*.
    if L* > L^Best then
        - Update the proposal densities, q(x|θ), using Θ*.
        - Apply the Evaporation and Overlap tests (using parameters π_min and ε).
        - Reassign the proposal densities of any components failing the above tests.
        - Let L^Best_Rough = L^i; let L^Best = L* and let Θ^Best = Θ*.
    end if
end for
Output: Θ^Best and L^Best.
We now elaborate on the various stages of the algorithm, using Fig. 3 as an example.
3.1
Sampling from data proposal densities
Each component, c, draws its samples from a proposal density, which is an empirical distribution of the form:
    q_c(x|θ) = [ Σ_{n=1}^{N} δ(x − x_n) P(y = c | x_n, θ_c) ] / [ Σ_{n=1}^{N} P(y = c | x_n, θ_c) ]    (2)

where P(y|x, θ) is the posterior on the labels:

    P(y|x, θ) = p(x|y, θ) P(y|θ) / [ Σ_y p(x|y, θ) P(y|θ) ]    (3)
Initially, q(x|θ) is uniform, so we are drawing the points completely at random, but q(x|θ) will become more peaked, biasing our draws toward the data picked out by the component, demonstrated in Fig. 3(c), which shows the non-uniform proposal densities for each component on a simulated problem. Note that if we are sampling with replacement, then E[z] = P(y|x, θ)¹. However, since we must avoid degenerate combinations of points, certain values of z are not permissible, so E[z] → P(y|x, θ) as N → ∞.
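As an illustration of how Eqs. (2) and (3) translate into code, the sketch below computes the responsibilities and then draws a minimal set of P distinct points with probabilities proportional to component c's responsibilities. The interface (a pdf callable, NumPy arrays) is our own assumption, not the authors' implementation.

    import numpy as np

    def responsibilities(x, thetas, pi, pdf):
        """Eq. (3): posterior P(y|x, theta) for every point and component.
        pdf(x, theta_c) must return p(x | y=c, theta) for all N points."""
        lik = np.column_stack([pdf(x, th) for th in thetas]) * pi  # N x C
        return lik / lik.sum(axis=1, keepdims=True)

    def draw_minimal_set(resp, c, P, rng):
        """Eq. (2): q_c is an empirical density proportional to component c's
        responsibilities; draw P distinct point indices from it."""
        w = resp[:, c] / resp[:, c].sum()
        return rng.choice(len(w), size=P, replace=False, p=w)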
3.2
Computing model parameters
Each component c has a subset of points picked out by z from which its parameters θ_c^i are estimated. Since each subset is of the minimal size required to constrain all parameters, this process is straightforward, since it is usually closed-form. For the Gaussian example in Fig. 3, we draw 3 points for each of the 4 Gaussian components, whose mean and covariance matrices are directly computed, using appropriate normalizations to give unbiased estimators of the population parameters.
¹ Recall that z is a vector representing a draw of P points from q(x|θ). It is of length N with exactly P ones, the remaining elements being zero.
Given θ_c^i for each component, the only unknown parameter is their relative weighting, π = P(y|θ). This is estimated using EM. The E-step involves computing P(y|x, θ) from (3). This can be done efficiently since the component parameters are fixed, allowing the pre-computation of p(x|y, θ). The M-step is then π_c = (1/N) Σ_{n=1}^{N} P(y = c|x_n, θ).
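Because the component densities can be precomputed once, each EM iteration for π reduces to a couple of array operations. A minimal sketch, assuming the densities are supplied as an N x C matrix:

    import numpy as np

    def estimate_mixing_portions(dens, n_iter=50):
        """EM for the mixing portions pi with component parameters fixed.
        dens: N x C matrix of precomputed densities p(x_n | y=c, theta_c)."""
        N, C = dens.shape
        pi = np.full(C, 1.0 / C)
        for _ in range(n_iter):
            resp = dens * pi                        # E-step: unnormalized P(y|x)
            resp /= resp.sum(axis=1, keepdims=True)
            pi = resp.mean(axis=0)                  # M-step: pi_c = (1/N) sum_n P(y=c|x_n)
        return pi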
3.3
Updating proposal densities
Having obtained a rough model for draw i with parameters Θ^i and likelihood L^i, we first see if its likelihood exceeds the likelihood of the previous best rough model, L^Best_Rough. If this is the case we refine the rough model to ensure that we are at an actual maximum, since the sampling process limits us to a set of discrete points in Θ-space, which are unlikely to be maxima themselves. Running EM again, this time updating all parameters and using Θ^i as an initialization, the parameters converge to Θ*, having likelihood L*. If L* exceeds a second threshold (the previous best refined model's likelihood) L^Best, then we recompute the proposal densities, as given in (2), using P(y|x, Θ*). The two thresholds are needed to avoid wasting time refining Θ^i's that are not initially promising. In updating the proposal densities, two tests are applied to Θ*:
1. Evaporation test: If π_c < π_min, then the component is deemed to model noise, so is flagged for resetting. Fig. 3 illustrates this test.
2. Overlap test²: If for any two components, a and b, ‖θ_a^i − θ_b^i‖² / ( ‖θ_a^i‖ ‖θ_b^i‖ ) < ε², then the two components are judged to be fitting the same data. Component a or b is picked at random and flagged for resetting.
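A sketch of the two tests as reconstructed above. Flattening each θ into a plain vector for the Overlap distance is our assumption; pi_min and eps stand for the paper's π_min and ε.

    import numpy as np

    def failing_components(pi, thetas, pi_min, eps, rng):
        """Return indices of components flagged for resetting."""
        flagged = set()
        # Evaporation test: a tiny mixing portion means the component models noise.
        for c, weight in enumerate(pi):
            if weight < pi_min:
                flagged.add(c)
        # Overlap test: near-identical parameter vectors mean two components
        # are fitting the same data; reset one of the pair at random.
        C = len(thetas)
        for a in range(C):
            for b in range(a + 1, C):
                ta, tb = np.ravel(thetas[a]), np.ravel(thetas[b])
                d = np.sum((ta - tb) ** 2) / (np.linalg.norm(ta) * np.linalg.norm(tb))
                if d < eps ** 2:
                    flagged.add(int(rng.choice([a, b])))
        return sorted(flagged)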
3.4
Resetting a proposal density
If a component's proposal density is to be reset, it is given a new density that maximizes the entropy of the mean proposal density q_M(x|θ) = (1/C) Σ_{c=1}^{C} q_c(x|θ).
By maximizing the entropy of q_M(x|θ), we are ensuring that the samples will subsequently be drawn as widely as possible, maximizing the chances of escaping from the local maxima.
If q_d(x|θ) are the proposal densities to be reset, then we wish to maximize:

    H[q_M(x|θ)] = H[ (1/D) Σ_d q_d(x|θ) + (1/(C−D)) Σ_{c≠d} q_c(x|θ) ]    (4)

with the constraints that Σ_n q_d(x_n|θ) = 1 ∀ d and q_d(x_n|θ) ≥ 0 ∀ n, d. For brevity, let us define: q_f(x|θ) = (1/(C−D)) Σ_{c≠d} q_c(x|θ).
Since a uniform distribution has the highest entropy, but q_d(x|θ) cannot be negative, the optimal choice of q_d(x|θ) will be zero everywhere, except for x corresponding to the smallest k values of q_f(x|θ). At these points q_d(x|θ) must add to q_f(x|θ) to give a constant q_M(x|θ). We solve for k using the other constraint, that probability mass of exactly D/C must be added. Thus q_d(x|θ) will be large where q_f(x|θ) is small, giving the appealing result that the new component will draw preferentially from underpopulated portions of the data, as demonstrated in Fig. 3(d).
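One way to realize this construction is water-filling: place the new mass only on the points where q_f is smallest, raising them to a common level. The sketch below is our reading of the argument, with `mass` standing for the probability mass to be added; it is not taken from the authors' code.

    import numpy as np

    def reset_density(q_f, mass):
        """Water-filling reset: the new density is zero except at the k points
        with the smallest q_f, which are topped up to a common level so that
        exactly `mass` is added. Returns q_d over the N data points."""
        order = np.argsort(q_f)                  # points sorted by q_f, ascending
        q_d = np.zeros_like(q_f, dtype=float)
        for k in range(1, len(q_f) + 1):
            idx = order[:k]
            level = (mass + q_f[idx].sum()) / k  # level the k smallest reach
            if k == len(q_f) or level <= q_f[order[k]]:
                q_d[idx] = level - q_f[idx]      # top each point up to `level`
                break
        return q_d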
² An alternative overlap test would compare the responsibilities of each pair of components, a and b: P(y = a|x, θ_a^i)^T P(y = b|x, θ_b^i) / ( ‖P(y = a|x, θ_a^i)‖ ‖P(y = b|x, θ_b^i)‖ ) < ε².
Figure 3: The Evaporation step in action. A local maximum is found in (a). (c) shows the corresponding proposal densities for each component (black is the background model). Note how spiky the green density is, since it is only modeling a few data points. Since π_green < π_min, its proposal density is set to q_d(x|θ), as shown in (d). Note how q_d(x|θ) is higher in the areas occupied by the red component, which is a poor fit for two clumps of data. (b) The global maximum, along with its proposal density (e). Note that the data points are ordered for ease of visualization only.
4
Experiments
4.1
Synthetic experiments
We tested PROPOSAL on two types of synthetic data: mixtures of 2-D lines and Gaussians with uniform background noise. We compared six algorithms: plain EM; Deterministic Annealing EM (DAEM) [8]; Stochastic EM (SEM) [10]; Split and Merge EM (SMEM); MLESAC; and PROPOSAL. Four experiments were performed: two using lines and two
with Gaussians. The first pair of experiments examined how many components the different algorithms could handle reliably. The second pair tested the robustness to background
noise. In the Gaussian experiments, the model consisted of a mixture of 2-D Gaussian
densities and a uniform background component. In the line experiments, the model consisted of a mixture of densities modeling the residual to the line with a Gaussian noise
model, having a variance σ that was also learnt. Each line component therefore has three parameters: its gradient, y-intercept, and variance.
Each experiment was repeated 250 times with a different, randomly generated dataset,
examples of which can be seen in Fig. 1(a) & (b). In each experiment, the same time was
allocated for each algorithm, so for example, EM which ran quickly was repeated until it
had spent the same amount of time as the slowest (usually PROPOSAL or SMEM), and
the best result from the repeated runs taken. For simplicity, the Overlap test compared only
the means of the distributions. Parameter values used for PROPOSAL were: I = 200, π_min = 0.01 and ε = 0.1.
In the first pair of experiments, the number of components was varied from 2 up to 10 for
lines and 20 for Gaussians. The background noise was held constant at 20%. The results are
shown in Fig. 4. PROPOSAL clearly outperforms the other approaches. In the second pair
of experiments, C = 3 components were used, with the background noise varying from 1% up to 99%. Parameters used were the same as for the first experiment. The results can be
seen in Fig. 5. Both SMEM and PROPOSAL outperformed EM convincingly. PROPOSAL
performed well down to 30% in the line case (i.e. 10% per line) and 20% in the Gaussian
case.
Figure 4: Experiments showing the robustness to the number of components in the model. The x-axis is the number of components, ranging from 2 upwards. The y-axis is the portion of correct solutions found from 250 runs, each with a different randomly generated dataset. Key: EM (red solid); DAEM (cyan dot-dashed); SEM (magenta solid); SMEM (black dotted); MLESAC (green dashed) and PROPOSAL (blue solid). (a) Results for line data. (b) A typical line dataset for C = 10. (c) Results for Gaussian data. PROPOSAL is still achieving 75% correct with 10 components, twice the performance of the next best algorithm (SMEM). (d) A typical Gaussian dataset for C = 10.
Figure 5: Experiments showing the robustness to background noise. The x-axis is the
portion of noise, varying between 1% and 99%. The y-axis is the portion of correct solutions
found. Key: EM (red solid); DAEM (cyan dot-dashed); SEM (magenta solid); SMEM
(black dotted); MLESAC (green dashed) and PROPOSAL (blue solid). (a) Results for
three component line data. (b) A typical line dataset for 80% noise. (c) Results for three
component Gaussian data. SMEM is marginally superior to PROPOSAL. (d) A typical
Gaussian dataset for 80% noise.
4.2 Real data experiments
We test PROPOSAL against other clustering methods on the computer vision problem
of alpha-matting (the extraction of a foreground element from a background image by
estimating the opacity for each pixel of the foreground element, see Figure 6 for examples).
The simple approach we adopt is to first form a tri-mask (the composite image is divided
into 3 regions: pixels that are definitely foreground; pixels that are definitely background
and uncertain pixels). Two color models are constructed by clustering the foreground and background pixels, respectively, with a mixture of Gaussians. The opacity (alpha values) of the uncertain pixels are then determined by comparing the color of each pixel under the foreground and background color models. Figure 7 compares the likelihood of the foreground and background color models clustered using EM, SMEM and PROPOSAL on two sets of images (11 face images and 5 dog images, examples of which are shown in Fig. 6). Each model is clustering ≈ 2×10^4 pixels in a 4-D space (R, G, B and edge strength) with a 10-component model. In the majority of cases, PROPOSAL can be seen to outperform SMEM, which in turn outperforms plain EM.
5
Discussion
In contrast to SMEM, MCEM [10] and MCMC [4], which operate in Θ-space, PROPOSAL is a data-driven approach. It predominantly examines the small portion of Θ-space which has support from the data. This gives the algorithm its robustness and efficiency. We have shown PROPOSAL to work well on synthetic data, outperforming many standard algorithms. On real data, PROPOSAL also convincingly beats SMEM and EM. One problem
Figure 6: The alpha-matte problem. (a) & (d): Composite images. (b) & (e): Background
images. (c) & (f): Desired object segmentation. This figure is best viewed in color.
Figure 7: Clustering performance on (Left) 11 face images (e.g. Fig. 6(a)) and (Right) 5 dog
images (e.g. Fig. 6(d)). x-axis is image number. y-axis is log-likelihood of foreground color
model on foreground pixels plus log-likelihood of background color model on background
pixels. Three clustering methods are shown: EM (red); SMEM (green) and PROPOSAL
(blue). Line indicates mean of 10 runs from different random initializations while error
bars show the best and worst models found from the 10 runs.
with PROPOSAL is that P scales with the square of the dimension of the data (due to the number of terms in the covariance matrix), meaning that for high dimensions a very large number of draws would be needed to find new portions of data. Hence PROPOSAL is suited to problems of low dimension.
Acknowledgments: Funding was provided by EC Project CogViSys, EC NOE Pascal,
Caltech CNSE, the NSF and the UK EPSRC. Thanks to F. Schaffalitzky & P. Torr for
useful discussions.
References
[1] Ondřej Chum, Jiří Matas, and Josef Kittler. Locally optimized RANSAC. In DAGM 2003: Proceedings of the 25th DAGM Symposium, pages 236-243, 2003.
[2] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39:1-38, 1977.
[3] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Comm. ACM, 24(6):381-395, 1981.
[4] S. Richardson and P. J. Green. On Bayesian analysis of mixtures with an unknown number of components. Journal of the Royal Statistical Society, 59(4):731-792, 1997.
[5] C. V. Stewart. Robust parameter estimation. SIAM Review, 41(3):513-537, Sept. 1999.
[6] B. Tordoff and D. W. Murray. Guided sampling and consensus for motion estimation. In Proc. ECCV, 2002.
[7] P. H. S. Torr and A. Zisserman. MLESAC: A new robust estimator with application to estimating image geometry. CVIU, 78:138-156, 2000.
[8] N. Ueda and R. Nakano. Deterministic annealing EM algorithm. Neural Networks, 11(2):271-282, 1998.
[9] N. Ueda, R. Nakano, Z. Ghahramani, and G. E. Hinton. SMEM algorithm for mixture models. Neural Computation, 12(9):2109-2128, 2000.
[10] G. Wei and M. Tanner. A Monte Carlo implementation of the EM algorithm. Journal of the American Statistical Association, 85:699-704, 1990.
1,710 | 2,554 | Active Learning for Anomaly and
Rare-Category Detection
Dan Pelleg and Andrew Moore
School of Computer Science
Carnegie-Mellon University
Pittsburgh, PA 15213 USA
[email protected], [email protected]
Abstract
We introduce a novel active-learning scenario in which a user wants to
work with a learning algorithm to identify useful anomalies. These are
distinguished from the traditional statistical definition of anomalies as
outliers or merely ill-modeled points. Our distinction is that the usefulness of anomalies is categorized subjectively by the user. We make two
additional assumptions. First, there exist extremely few useful anomalies to be hunted down within a massive dataset. Second, both useful
and useless anomalies may sometimes exist within tiny classes of similar
anomalies. The challenge is thus to identify "rare category" records in an
unlabeled noisy set with help (in the form of class labels) from a human
expert who has a small budget of datapoints that they are prepared to categorize. We propose a technique to meet this challenge, which assumes
a mixture model fit to the data, but otherwise makes no assumptions on
the particular form of the mixture components. This property promises
wide applicability in real-life scenarios and for various statistical models. We give an overview of several alternative methods, highlighting
their strengths and weaknesses, and conclude with a detailed empirical
analysis. We show that our method can quickly zoom in on an anomaly
set containing a few tens of points in a dataset of hundreds of thousands.
1
Introduction
We begin with an example of a rare-category-detection problem: an astronomer needs to
sift through a large set of sky survey images, each of which comes with many numerical
parameters. Most of the objects (99.9%) are well explained by current theories and models.
The remainder are anomalies, but 99% of these anomalies are uninteresting, and only 1%
of them (0.001% of the full dataset) are useful. The first type of anomalies, called "boring anomalies", are records which are strange for uninteresting reasons such as sensor faults or
problems in the image processing software. The useful anomalies are extraordinary objects
which are worthy of further research. For example, an astronomer might want to crosscheck them in various databases and allocate telescope time to observe them in greater
detail. The goal of our work is finding this set of rare and useful anomalies.
Although our example concerns astrophysics, this scenario is a promising general area for
exploration wherever there is a very large amount of scientific, medical, business or intelligence data and a domain expert wants to find truly exotic rare events while not becoming
[Figure 1, right panel: the active-learning loop. Random set of records -> ask expert to classify some records -> build model from data and labels -> run all data through model -> spot "important" records -> repeat.]
Figure 1: Anomalies in Sloan data: Diffraction spikes (left). Satellite trails (center). The
active-learning loop is shown on the right.
swamped with uninteresting anomalies. Two rare categories of "boring" anomalies in our
test astrophysics data are shown in Figure 1. The first, a well-known optical artifact, is the
phenomenon of diffraction spikes. The second consists of satellites that happened to be
flying overhead as the photo was taken.
As a first step, we might try defining a statistical model for the data, and identifying objects
which do not fit it well. At this point, objects flagged as "anomalous" can still be almost
entirely of the uninteresting class of anomalies. The computational and statistical question
is then how to use feedback from the human user to iteratively reorder the queue of anomalies to be shown to the user in order to increase the chance that the user will soon see an
anomaly of a whole new category.
We do this in the familiar pool-based active learning framework1 . In our setting, learning
proceeds in rounds. Each round starts with the teacher labeling a small number of examples.
Then the learner models the data, taking into account the labeled examples as well as the
remainder of the data, which we assume to be much larger in volume. The learner then
identifies a small number of input records (?hints?) which are important in the sense that
obtaining labels for them would help it improve the model. These are shown to the teacher
(in our scenario, a human expert) for labeling, and the cycle repeats. The model, which we
call "irrelevance feedback", is shown in Figure 1.
It may seem too demanding to ask the human expert to give class labels instead of a simple
"interesting" or "boring" flag. But in practice, this is not an issue: it seems easier to place objects into such "mental bins". For example, in the astronomical data we have seen a user
place most objects into previously-known categories: point sources, low-surface-brightness
galaxies, etc. This also holds for the negative examples: it is frustrating to have to label all
anomalies as ?bad? without being able to explain why. Often, the data is better understood
as time goes by, and users wish to revise their old labels in light of new examples. Note that
the statistical model does not care about the names of the labels. For all it cares, the label
set can be utterly changed by the user from one round to another. Our tools allow that: the
labels are unconstrained and the user can add, refine, and delete classes at will. It is trivial
to accommodate the simpler ?interesting or not? model in this richer framework.
Our work differs from traditional applications of active learning in that we assume the
distribution of class sizes to be extremely skewed. For example, the smallest class may
have just a few members whereas the largest may contain a few million. Generally in
active learning, it is believed that, right from the start, examples from each class need to
be presented to the oracle [1, 2, 3]. If the class frequencies were balanced, this could be
achieved by random sampling. But in datasets with the rare categories property, this no
longer holds, and much of our effort is an attempt to remedy the situation.
Previous active-learning work tends to tie intimately to a particular model [4, 3]. We would
like to be able to ?plug in? different types of models or components and therefore propose
model-independent criteria. The same reasoning also precludes us from directly using
distances between data points, as is done in [5].
1
More precisely, we allow multiple queries and labels in each learning round ? the traditional
presentation has just one.
Figure 2: Underlying data distribution for the example (a); behavior of the lowlik method (b-f). The
original data distribution is in (a). The unsupervised model fit to it in (b). The anomalous points
according to lowlik, given the model in (b), are shown in (c). Given labels for the points in (c), the
model in (d) is fitted. Given the new model, anomalous points according to lowlik are flagged (e).
Given labels for the points in (c) and (e), this is the new fitted model (f).
Another desired property is resilience to noise. Noise can be inherent in the data (e.g., from
measurement errors) or be an artifact of a ill-fitting model. In any case, we need to be able
to identify query points in the presence of noise. This is not just a bonus feature: points
which the model considers noisy could very well be the key to improvement if presented
to the oracle. This is in contrast to the approach taken by some: a pre-assumption that the
data is noiseless [6, 7].
2
Overview of Hint Selection Methods
In this section we survey several proposed methods for active learning as they apply to our
setting. While the general tone is negative, what follows should not be construed as general
dismissal of these methods. Rather, it is meant to highlight specific problems with them
when applied to a particular setting. Specifically, the rare-categories assumption (and in
some cases, just having more than 2 classes) breaks the premises for some of them.
As an example, consider the data shown in Figure 2 (a). It is a mixture of two classes.
One is an X-shaped distribution, from which 2000 points are drawn. The other is a circle
with 100 points. In this example, the classifier is a Gaussian Bayes classifier trained in
a semi-supervised manner from labeled and unlabeled data, with one Gaussian per class.
The model is learned with a standard EM procedure, with the following straightforward
modification [8, 9] to enable semi-supervised learning. Before each M step we clamp the
class membership values for the hinted records to match the hints (i.e., one for the labeled
class for this record, and zero elsewhere).
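The clamping modification amounts to overwriting a few rows of the responsibility matrix before each M step. A minimal sketch, assuming the responsibilities live in an N x C NumPy array and hints maps record indices to class indices:

    import numpy as np

    def clamp_responsibilities(resp, hints):
        """Before each M-step, force the membership rows of hinted records:
        probability one for the labeled class, zero elsewhere."""
        for n, c in hints.items():
            resp[n, :] = 0.0
            resp[n, c] = 1.0
        return resp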
Given fully labeled data, our learner would perfectly predict class membership for this data
(although it would be a poor generative model): one Gaussian centered on the circle, and
another spherical Gaussian with high variance centered on the X. Now, suppose we plan to
perform active learning in which we take the following steps:
1. Start with entirely unlabeled data.
2. Perform semi-supervised learning (which, on the first iteration degenerates to unsupervised learning).
3. Ask an expert to classify the 35 strangest records.
4. Go to Step 2.
On the first iteration (when unsupervised) the algorithm will naturally use the two Gaussians to model the data as in Figure 2(b), with one Gaussian for each of the arms of the "X",
and the points in the circle represented as members of one of them. What happens next all
depends on the choice of the datapoints to show to the human expert. We now survey the
methods for hint selection.
Choosing Points with Low Likelihood: A rather intuitive approach is to select as hints
the points which the model performs worst on. This can be viewed as model variance
Figure 3: Behavior of the ambig (a-c) and interleave (d-e) methods. The unsupervised model and the points which ambig flags as anomalous, given this model (a). The model learned using labels for these points is (b), along with the points it flags. The last refinement, given both sets of labels (c).
minimization [4] or as selection of points furthest away from any labeled points [5]. We do
this by ranking each point in order of increasing model likelihood, and choosing the most
anomalous items.
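In code, the lowlik criterion is just a sort on the mixture density; a sketch, assuming a callable that evaluates the model's density at every point:

    import numpy as np

    def lowlik_hints(x, mixture_pdf, k):
        """Return indices of the k points the current model finds least likely."""
        lik = mixture_pdf(x)            # mixture density of each point, shape (N,)
        return np.argsort(lik)[:k]      # ascending: lowest likelihood first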
We show what this approach would flag in the given configuration in Figure 2. It is derived
from a screenshot of a running version of our code, redrawn by hand for clarity. Each
subsequent drawing shows a model which EM converged to after including the new labels,
and the hints it chooses under a particular scheme (here it is what we call lowlik). These
hints affect the model shown for the next round. The underlying distribution is shown in
gray shading. We use this same convention for the other methods below.
In the first round, the Mahalanobis distance for the points in the corners is greater than
for those in the circle; therefore they are flagged. Another effect we see is that one of the arms
is represented more heavily. This is probably due to its lower variance. In any event, none
of the points in the circle is flagged. The outcome is that the next round ends up in a similar
local minimum. We can also see that another step will not result in the desired model.
Only after obtaining labels for all of the "outlier" points (that is, those on the extremes
of the distribution) will this approach go far enough down the list to hit a point in the
circle. This means that in scenarios where there are more than a few hundred noisy data,
classification accuracy is likely to be very low.
Choosing Ambiguous Points: Another popular approach is to choose the points which the
learner is least certain about. This is the spirit of "query by committee" [10] and "uncertainty sampling" [11]. In our setting this is implemented in the following way. For each
data point, the EM algorithm maintains an estimate of the probability of its membership in
every mixture component. For each point, we compute the entropy of the set of all such
probabilities, and rank the points in decreasing order of the entropy. This way, the top of
the list will have the objects which are "owned" by multiple components.
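The corresponding sketch for the ambiguity criterion, assuming the N x C responsibility matrix maintained by EM:

    import numpy as np

    def ambig_hints(resp, k):
        """Rank points by the entropy of their class-membership probabilities
        (the rows of resp) and return the k most ambiguous ones."""
        h = -np.sum(resp * np.log(resp + 1e-12), axis=1)
        return np.argsort(h)[::-1][:k]   # descending entropy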
For our example, this would choose the points shown in Figure 3. As expected, points on
the decision boundaries between classes are chosen. Here, the ambiguity sets are useless
for the purpose of modeling the entire distribution. One might argue this only holds for
this contrived distribution. However, in general this is a fairly common occurrence, in the
sense that the ambiguity criterion works to nudge the decision surfaces so they better fit a
relatively small set of labeled examples. It may help modeling the points very close to the
boundaries, but it does not improve generalization accuracy in the general case. Indeed, we
see that if we repeatedly apply this criterion we end up asking for labels for a great number
of points in close proximity, to very little effect on the overall model. In the results section
below, we call this method ambig.
Combining Unlikely and Ambiguous Points: Our next candidate is a hybrid method
which tries to combine the hints from the two previous methods. Recall they both produce
a ranked list of all the points. We merge the lists into another ranked list in the following
way. Alternate between the lists when picking items. For each list, pick the top item that
has not already been placed in the output list. When all elements are taken, the output list
is a ranked list as required. We now pick the top items from this list for hints.
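The merge described above can be written directly; this sketch works for any ranked lists of hashable items:

    def merge_ranked(lists, k=None):
        """Alternate between ranked lists, taking from each the top item not
        already placed in the output; returns the merged ranking."""
        out, seen = [], set()
        positions = [0] * len(lists)
        while any(pos < len(lst) for pos, lst in zip(positions, lists)):
            for i, lst in enumerate(lists):
                while positions[i] < len(lst) and lst[positions[i]] in seen:
                    positions[i] += 1
                if positions[i] < len(lst):
                    item = lst[positions[i]]
                    out.append(item)
                    seen.add(item)
                    positions[i] += 1
        return out if k is None else out[:k]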
As expected we get a good mix of points in both hint sets (not shown). But, since neither
method identifies the small cluster, their union fails to find it as well. However, in general
it is useful to combine different criteria in this way, as our empirical results below show.
There, this method is called mix-ambig-lowlik.
Interleaving: We now present what we consider to be the logical conclusion of the observations above. To the best of our knowledge, the approach is novel. The key insight is that
our group of anomalies was, in fact, reasonably ordinary when analyzed on a global scale.
In other words, the mixture density of the region we chose for the group of anomalies is
not sufficiently low for them to rank high on the hint list. Recall that the mixture model
sums up the weighted per-model densities. Therefore, a point that is "split" among several
components approximately evenly, and scores reasonably high on at least some of them,
will not be flagged as anomalous.
Another instance of the same problem occurs when a point is somewhat "owned" by a component with high mixture weight. Even if the small component that "owns" most of it predicts it is very unlikely, that term has very little effect on the overall density.
Therefore, our goal is to eliminate the mixture weights from the equation. Our idea is that
if we restrict the focus to match the "point of view" of just one component, these anomalies will become more apparent. We do this by considering just the points that "belong" to one component, and by ranking them according to the PDF of this component. The hope is that given this restricted view, anomalies that do not fit the component's own model will stand
out.
More precisely, let c be a component and i a data point. The EM algorithm maintains, for every c and i, an estimate z_ic of the degree of "ownership" that c exerts over i. For each component c we create a list of all the points for which c = arg max_c' z_ic', ranked by z_ic.
Having constructed the sorted lists, we merge them in a generalization of the merge method
described above. We cycle through the lists in some order. For each list, we pick the top
item that has not already been placed in the output list, and place it at the next position in
the output list.
This strategy is appealing intuitively, although we have no further theoretical justification
for it. We show results for this strategy for our example in Figure 3, and in the experimental
section below. We see it meets the requirement of representation for all true components.
Most of the points are along the major axes of the two elongated Gaussians, but two of
the points are inside the small circle. Correct labels for even just these two points result in
perfect classification in the next EM run.
In our experiments, we found it beneficial to modify this method as follows. One of the
components is a uniform-density "background". This modification lets it nominate hints
more often than any other component. In terms of list merging, we take one element from
each of the lists of standard components, and then several elements from the list produced
for the background component. All of the results shown were obtained using an oversampling ratio of 20. In other words, if there are N components (excluding uniform), then the
first cycle of hint nomination will result in 20 + N hints, 20 of which come from uniform.
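Putting the pieces together, here is a hedged sketch of the interleave selector with background oversampling. The direction of each component's ranking is not fully pinned down by the text; ascending ownership (strangest-to-the-owner first) is our assumption, consistent with the stated goal of surfacing points that do not fit the component's own model.

    import numpy as np

    def interleave_hints(resp, k, background=None, oversample=20):
        """Each component ranks the points it owns (argmax of the responsibility
        row) by ascending z_ic; the per-component lists are merged round-robin,
        with the background component contributing `oversample` items per cycle."""
        owner = resp.argmax(axis=1)
        per_comp = []
        for c in range(resp.shape[1]):
            mine = np.flatnonzero(owner == c)
            per_comp.append(list(mine[np.argsort(resp[mine, c])]))
        out, seen = [], set()
        while len(out) < k and any(per_comp):
            for c, lst in enumerate(per_comp):
                take = oversample if c == background else 1
                for _ in range(take):
                    while lst and lst[0] in seen:
                        lst.pop(0)
                    if lst:
                        item = lst.pop(0)
                        out.append(item)
                        seen.add(item)
        return out[:k]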
3
Experimental Results
To establish the results hinted by the intuition above, we conducted a series of experiments.
The first one uses synthetic data. The data distribution is a mixture of components in
5, 10, 15 and 20 dimensions. The class size distribution is a geometric series with the
largest class owning half of the data and each subsequent class being half the size of the previous one.
The components are multivariate Gaussians whose covariance structure can be modeled
Figure 4: Learning curves for simulated data drawn from a mixture of dependency trees
(left), and for the SHUTTLE set (right). The Y axis shows the fraction of classes represented
in queries sent to the teacher. For SHUTTLE and ABALONE below, mix-ambig-lowlik is
omitted because it is so similar to lowlik.
Figure 5: Learning curves for the ABALONE (left) and KDD (right) sets.
with dependency trees. Each Gaussian component has its covariance generated in the following way. Random attribute pairs are chosen, and added to an undirected dependency
tree structure unless they close a cycle. Each edge describes a linear dependency between
nodes, with the coefficients drawn uniformly at random, with random noise added to each
value. Each data set contains 10, 000 points. There are ten tree classes and a uniform background component. The number of ?background? points ranges from 50 to 200. Only the
results for 15 dimensions and 100 noisy points are shown as they are representative of the
other experiments. In each round of learning, the learner queries the teacher with a list of
50 points for labeling, and has access to all the queries and replies submitted previously.
This data generation scheme is still very close to the one which our tested model assumes.
Note, however, that we do not require different components to be easily identifiable. The
results of this experiment are shown in Figure 4. Also included are results for random,
which is a baseline method choosing hints at random.
Our scoring function is driven by our application, and estimates the amount of effort the
teacher has to expend before being presented with representatives of every single class. The
assumption is that the teacher can generalize from a single example (or a very few examples) to an entire class, and the valuable information is concentrated in the first queried
member of each class. More precisely, if there are n classes, then the score under this
metric is 1/n times the number of classes represented in the query set. In the query set we
include all items queried in preceding rounds, as we do for other applicable metrics.
The best performer so far is interleave, taking five rounds or less to reveal all of the classes,
including the very rare ones. Below we show it is superior in many of the real-life data sets.
We can also see that ambig performs worse than random. This can be explained by the fact
that ambig only chooses points that already have several existing components "competing"
for them. Rarely do these points belong to a new, yet-undiscovered component.
Figure 6: Learning curves for the EDSGC (left) and SDSS (right) sets.
Table 1: Properties of the data sets used.
NAME      DIMS   RECORDS    CLASSES   SMALLEST CLASS   LARGEST CLASS   SOURCE
SHUTTLE      9     43500          7        0.01%            78.4%       [12]
ABALONE      7      4177         20        0.34%            16%         [13]
KDD         33     50000         19        0.002%           21.6%       [13]
EDSGC       26   1439526          7        0.002%           76%         [14]
SDSS        22    517371          3        0.05%            50.6%       [15]
We were concerned that the poor performance of lowlik was just a consequence of our
choice of metric. After all, it does not measure the number of noise points (i.e., points from
the uniform background component) found. These points are genuine anomalies, so it is
possible that lowlik is being penalized unfairly for its focusing on the noise points. After
examining the fraction of noise points (i.e., points drawn from the uniform background
component) found by each algorithm, we discovered that lowlik actually scores worse than
interleave on this metric as well.
The remaining experiments were run on various real data sets. Table 1 has a summary of
their properties. They represent data and computational effort orders of magnitude larger
than any active-learning result of which we are aware.
Results for the SHUTTLE set appear in Figure 4. We see that it takes the interleave algorithm five rounds to spot all classes, whereas the next best is lowlik, with 11. The ABALONE set (Figure 5) is a very noisy set, in which random seems to be the best long-term strategy. Again, note how ambig performs very poorly.
Due to resource limitations, results for KDD were obtained on a 50000-record random subsample of the original training set (which is roughly ten times bigger). This set has an extremely skewed distribution of class sizes, and a large number of classes. In Figure 5 we see
that lowlik performs uncharacteristically poorly. Another surprise is that the combination
of lowlik and ambig outperforms them both. It also matches interleave in performance,
and this is the only case where we have seen it do so.
The EDSGC set, as distributed, is unlabeled. The class labels relate to the shape and size of
the sky object. We see in Figure 6 that for the purpose of class discovery, we can do a good
job in a small number of rounds: here, a human would have had to label just 250 objects
before being presented with a member of the smallest class, comprising just 24 records
out of a set of 1.4 million.
4
Conclusion
We have shown that some of the popular methods for active learning perform poorly in
realistic active-learning scenarios where classes are imbalanced. Working from the definition of a mixture model we were able to propose methods which let each component
"nominate" its favorite queries. These methods work well in the presence of noisy data
and extremely rare classes and anomalies. Our simulations show that a human user only
needs to label one or two hundred examples before being presented with very rare anomalies in huge data sets. In our experience, this kind of interaction takes just an hour or two
of combined human and computer time [16].
We make no assumptions about the particular form a component takes. Consequently, we
expect our results to apply to many different kinds of component models, including the case
where components are not dependency trees, or even not all from the same distribution.
We are using lessons learned from our empirical comparison in an application for anomaly hunting in the astrophysics domain. Our application presents multiple indicators to help a
user spot anomalous data, as well as controls for labeling points and adding classes. The
application will be described in a companion paper.
References
[1] Sugato Basu, Arindam Banerjee, and Raymond J. Mooney. Active semi-supervision for pairwise constrained clustering. Submitted for publication, February 2003.
[2] M. Seeger. Learning with labeled and unlabeled data. Technical report, Institute for Adaptive and Neural Computation, University of Edinburgh, 2000.
[3] Klaus Brinker. Incorporating diversity in active learning with support vector machines. In Proceedings of the Twentieth International Conference on Machine Learning, 2003.
[4] David A. Cohn, Zoubin Ghahramani, and Michael I. Jordan. Active learning with statistical models. In G. Tesauro, D. Touretzky, and T. Leen, editors, Advances in Neural Information Processing Systems, volume 7, pages 705-712. The MIT Press, 1995.
[5] Nirmalie Wiratunga, Susan Craw, and Stewart Massie. Index driven selective sampling for CBR, 2003. To appear in Proceedings of the Fifth International Conference on Case-Based Reasoning, Springer-Verlag, Trondheim, Norway, 23-26 June 2003.
[6] David Cohn, Les Atlas, and Richard Ladner. Improving generalization with active learning. Machine Learning, 15(2):201-221, 1994.
[7] Mark Plutowski and Halbert White. Selecting concise training sets from clean data. IEEE Transactions on Neural Networks, 4(2):305-318, March 1993.
[8] Shahshahani and Landgrebe. The effect of unlabeled examples in reducing the small sample size problem. IEEE Trans. Geoscience and Remote Sensing, 32(5):1087-1095, 1994.
[9] Miller and Uyar. A mixture of experts classifier with learning based on both labeled and unlabelled data. In NIPS-9, 1997.
[10] H. S. Seung, Manfred Opper, and Haim Sompolinsky. Query by committee. In Computational Learning Theory, pages 287-294, 1992.
[11] David D. Lewis and Jason Catlett. Heterogeneous uncertainty sampling for supervised learning. In William W. Cohen and Haym Hirsh, editors, Proceedings of ICML-94, 11th International Conference on Machine Learning, pages 148-156, New Brunswick, US, 1994. Morgan Kaufmann Publishers, San Francisco, US.
[12] P. Brazdil and J. Gama. StatLog, 1991. http://www.liacc.up.pt/ML/statlog.
[13] C. L. Blake and C. J. Merz. UCI repository of machine learning databases, 1998. http://www.ics.uci.edu/~mlearn/MLRepository.html.
[14] R. C. Nichol, C. A. Collins, and S. L. Lumsden. The Edinburgh/Durham southern galaxy catalogue, IX. Submitted to the Astrophysical Journal, 2000.
[15] SDSS. The Sloan Digital Sky Survey, 1998. www.sdss.org.
[16] Dan Pelleg. Scalable and Practical Probability Density Estimators for Scientific Anomaly Detection. PhD thesis, Carnegie-Mellon University, 2004. Tech Report CMU-CS-04-134.
[17] David MacKay. Information-based objective functions for active data selection. Neural Computation, 4(4):590-604, 1992.
[18] Fabio Gagliardi Cozman, Ira Cohen, and Marcelo Cesar Cirelo. Semi-supervised learning of mixture models and Bayesian networks. In Proceedings of the Twentieth International Conference on Machine Learning, 2003.
[19] Yoram Baram, Ran El-Yaniv, and Kobi Luz. Online choice of active learning algorithms. In Proceedings of the Twentieth International Conference on Machine Learning, 2003.
[20] Sanjoy Dasgupta. Analysis of a greedy active learning strategy. In Advances in Neural Information Processing Systems 18, 2004.
1,711 | 2,555 | Instance-Based Relevance Feedback for Image Retrieval
Giorgio Giacinto and Fabio Roli
Department of Electrical and Electronic Engineering
University of Cagliari
Piazza D'Armi, Cagliari, Italy 09121
{giacinto,roli}@diee.unica.it
Abstract
High retrieval precision in content-based image retrieval can be
attained by adopting relevance feedback mechanisms. These
mechanisms require that the user judges the quality of the results of
the query by marking all the retrieved images as being either
relevant or not. Then, the search engine exploits this information to
adapt the search to better meet the user's needs. At present, the vast
majority of proposed relevance feedback mechanisms are
formulated in terms of a search model that has to be optimized. Such
an optimization involves the modification of some search
parameters so that the nearest neighbor of the query vector contains
the largest number of relevant images. In this paper, a different
approach to relevance feedback is proposed. After the user
provides the first feedback, subsequent retrievals are not based on k-nn search, but on the computation of a relevance score for each
image of the database. This score is computed as a function of two
distances, namely the distance from the nearest non-relevant image
and the distance from the nearest relevant one. Images are then
ranked according to this score and the top k images are displayed.
Reported results on three image data sets show that the proposed
mechanism outperforms other state-of-the-art relevance feedback
mechanisms.
1 Introduction
A large number of content-based image retrieval (CBIR) systems rely on the vector
representation of images in a multidimensional feature space representing low-level
image characteristics, e.g., color, texture, shape, etc. [1]. Content-based queries are
often expressed by visual examples in order to retrieve from the database the images
that are "similar" to the examples. This kind of retrieval is often referred to as K
nearest-neighbor retrieval. It is easy to see that the effectiveness of content-based
image retrieval systems (CBIR) strongly depends on the choice of the set of visual
features, on the choice of the "metric" used to model the user's perception of image
similarity, and on the choice of the image used to query the database [1]. Typically,
if we allow different users to mark the images retrieved with a given query as
relevant or non-relevant, different subsets of images will be marked as relevant.
Accordingly, the need for mechanisms to adapt the CBIR system response based on
some feedback from the user is widely recognized.
It is interesting to note that while relevance feedback mechanisms were first
introduced in the information retrieval field [2], they are receiving more attention in
the CBIR field (Huang). The vast majority of relevance feedback techniques
proposed in the literature is based on modifying the values of the search parameters
as to better represent the concept the user bears in mind. To this end, search
parameters are computed as a function of the relevance values assigned by the user
to all the images retrieved so far. As an example, relevance feedback is often
formulated in terms of the modification of the query vector, and/or in terms of
adaptive similarity metrics [3]-[7]. Recently, pattern classification paradigms such
as SVMs have been proposed [8]. Feedback is thus used to model the concept of
relevant images and adjust the search consequently.
Concept modeling may be difficult on account of the distribution of relevant images
in the selected feature space. "Narrow domain" image databases allow extracting
good features, so that images bearing similar concepts belong to compact clusters.
On the other hand, "broad domain" databases, such as image collections used by
graphics professionals, or those made up of images from the Internet, are more
difficult to subdivide into clusters because of the high variability of concepts [1]. In
these cases, it is worth extracting only low-level, non-specialized features, and
image retrieval is better formulated in terms of a search problem rather than concept
modeling.
The present paper aims at offering an original contribution in this direction. Rather
than modeling the concept of "relevance" the user bears in mind, feedback is used to
assign each image of the database a relevance score. Such a score depends only
on two dissimilarities (distances) computed against the images already marked by
the user: the dissimilarity from the set of relevant images, and the dissimilarity from
the set of non-relevant images. Despite its computational simplicity, this mechanism
allows outperforming state-of-the-art relevance feedback mechanisms both on
"narrow domain" databases and on "broad domain" databases.
This paper is organized as follows. Section 2 illustrates the idea behind the proposed
mechanism and provides the basic assumptions. Section 3 details the proposed
relevance feedback mechanism. Results on three image data sets are presented in
Section 4, where performances of other relevance feedback mechanisms are
compared. Conclusions are drawn in Section 5.
2 Instance-based relevance estimation
The proposed mechanism has been inspired by classification techniques based on
the "nearest case" [9]-[10]. Nearest-case theory provided the mechanism to compute
the dissimilarity of each image from the sets of relevant and non-relevant images.
The ratio between the distances from the nearest relevant image and from the nearest
non-relevant image has been used to compute the degree of relevance of each image
of the database [11].
The present section illustrates the rationale behind the use of the nearest-case
paradigm.
Let us assume that each image of the database has been represented by a number of
low-level features, and that a (dis)similarity measure has been defined so that the
proximity between pairs of images represents some kind of "conceptual" similarity.
In other words, the chosen feature space and similarity metric are meaningful at least
for a restricted number of users.
A search in image databases is usually performed by retrieving the k most similar
images with respect to a given query. The dimension of k is usually small, to avoid
displaying a large number of images at a time. Typical values for k are between 10
and 20. However, as the ?relevant? images that the user wishes to retrieve may not
fit perfectly with the similarity metric designed for the search engine, the user may
be interested in exploring other regions of the feature space. To this end, the user
marks the subset of ?relevant? images out of the k retrieved. Usually, such relevance
feedback is used to perform a new k-nn search by modifying some search
parameters, i.e., the position of the query point, the similarity metric, and other
tuning parameters [1]-[7]. Recent works proposed the use of support vector machines
to learn the distribution of relevant images [8]. These techniques require some
assumption about the general form of the distribution of relevant images in the
feature space. As it is difficult to make any assumption about such a distribution for
broad domain databases, we propose to exploit the information about the relevance
of the images retrieved so far in a nearest-neighbor fashion.
Nearest-neighbor techniques, as used in statistical pattern recognition, case-based
reasoning, or instance-based learning, are effective in all applications where it is
difficult to produce a high-level generalization of a "class" of objects [9]-[10], [12]-[13]. Relevance learning in content-based image retrieval may well fit into this
definition, as it is difficult to provide a general model that can be adapted to
represent different concepts of similarity. In addition, the number of available cases
may be too small to estimate the optimal set of parameters for such a general model.
On the other hand, it can be more effective to use each "relevant" image, as well as
each "non-relevant" image, as "cases" or "instances" against which the images of
the database should be compared. Consequently, we assume that an image is more
relevant the smaller its dissimilarity from the nearest relevant image is. Analogously,
an image is more non-relevant the smaller its dissimilarity from the nearest
non-relevant image is.
3 Relevance Score Computation
According to the previous section, each image of the database can thus be characterized
by a "degree of relevance" and a "degree of non-relevance" according to the
dissimilarities from the nearest relevant image and from the nearest non-relevant
image, respectively. However, it should be noted that these degrees should be
treated differently, because only "relevant" images represent a "concept" in the
user's mind, while "non-relevant" images may represent a number of other concepts
different from the user's interest. In other words, while it is meaningful to treat the
degree of relevance as a degree of membership to the class of relevant images, the
same does not apply to the degree of non-relevance. For this reason, we propose to
use the "degree of non-relevance" to weight the "degree of relevance".
Let us denote with R the subset of indexes j ∈ {1,...,k} related to the set of relevant
images retrieved so far and the original query (which is relevant by default), and with
NR the subset of indexes j ∈ {1,...,k} related to the set of non-relevant images
retrieved so far. For each image I of the database, according to the nearest neighbor
rule, let us compute the dissimilarity from the nearest image in R and the
dissimilarity from the nearest image in NR. Let us denote these dissimilarities as
dR(I) and dNR(I), respectively. The value of dR(I) can be clearly used to measure
the degree of relevance of image I, assuming that small values of dR(I) are related
to very relevant images. On the other hand, the hypothesis that image I is relevant to
the user's query can be supported by a high value of dNR(I). Accordingly, we
defined the relevance score
\text{relevance}(I) = \left(1 + \frac{d_R(I)}{d_{NR}(I)}\right)^{-1} \qquad (1)
This formulation of the score can be easily explained in terms of a distance-weighted 2-nn estimation of the posterior probability that image I is relevant. The 2
nearest neighbors are made up of the nearest relevant image and the nearest non-relevant image, while the weights are computed as the inverse of the distance from
the nearest neighbors.
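To make the 2-nn reading explicit, one line of algebra (ours, but an immediate consequence of equation (1)) rewrites the score as an inverse-distance-weighted vote of the two nearest neighbors:

\text{relevance}(I) = \left(1 + \frac{d_R(I)}{d_{NR}(I)}\right)^{-1} = \frac{d_{NR}(I)}{d_R(I) + d_{NR}(I)} = \frac{1/d_R(I)}{1/d_R(I) + 1/d_{NR}(I)},

i.e., the normalized inverse-distance weight of the nearest relevant image.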
The relevance score computed according to equation (1) is then used to rank the
images and the first k are presented to the user.
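As a concrete illustration, here is a minimal NumPy sketch of one feedback round (our own code, not the authors'; the Euclidean metric and the array layout are assumptions consistent with Section 4):

    import numpy as np

    def relevance_scores(X, relevant, non_relevant):
        """Score every database image with equation (1).

        X            : (N, d) feature vectors of all database images
        relevant     : (nR, d) images marked relevant (the query is included by default)
        non_relevant : (nNR, d) images marked non-relevant
        """
        # dR: distance from each database image to its nearest relevant image
        dR = np.sqrt(((X[:, None, :] - relevant[None, :, :]) ** 2).sum(-1)).min(axis=1)
        # dNR: distance from each database image to its nearest non-relevant image
        dNR = np.sqrt(((X[:, None, :] - non_relevant[None, :, :]) ** 2).sum(-1)).min(axis=1)
        # Algebraically equal to (1 + dR/dNR)^(-1), but avoids dividing by dNR == 0
        return dNR / (dR + dNR)

    # One iteration: rank all images and display the top k = 20
    # top20 = np.argsort(-relevance_scores(X, R, NR))[:20]

Each round costs O(N(nR + nNR)) distance computations, which is the low computational complexity the conclusions refer to.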
4 Experimental results
In order to test the proposed method and compare it with other methods described in
the literature, three image databases have been used: the MIT database, a database
contained in the UCI repository, and a subset of the Corel database. These databases
are currently used for assessing and comparing relevance feedback techniques
[5],[7],[14].
The MIT database was collected by the MIT Media Lab
(ftp://whitechapel.media.mit.edu/pub/VisTex). This database contains 40 texture
images that have been manually classified into fifteen classes. Each of these images
has been subdivided into sixteen non-overlapping images, obtaining a data set with
640 images. Sixteen Gabor filters were used to characterise these images, so that
each image is represented by a 16-dimensional feature vector [14].
The database extracted from the UCI repository
(http://www.cs.uci.edu/mlearn/MLRepository.html) consists of 2,310 outdoor
images. The images are subdivided into seven data classes (brickface, sky, foliage,
cement, window, path, and grass). Nineteen colour and spatial features characterise
each image (details are reported on the UCI web site).
The database extracted from the Corel collection is available at the KDD-UCI
repository (http://kdd.ics.uci.edu/databases/CorelFeatures/CorelFeatures.data.html).
We used a subset made up of 19513 images, manually subdivided into 43 classes.
For each image, four sets of features were available at the web site. In this paper, we
report the results related to the Color Moments (9 features), and the Co-occurrence
Texture (16 features) feature sets.
For each dataset, the Euclidean distance metric has been used. A linear
normalisation procedure has been performed, so that each feature takes values in the
range between 0 and 1.
For the first two databases, each image is used as a query, while for the Corel
database, 500 images have been randomly extracted and used as query, so that all
the 43 classes are represented. At each retrieval iteration, twenty images are
returned. Relevance feedback is performed by marking images belonging to the
same class of the query as relevant, and all other images as non-relevant. The user's
query itself is included in the set of relevant images. This experimental set up
affords an objective comparison among different methods, and is currently used by
many researchers [5],[7],[14]. Results are evaluated in term of the retrieval
precision averaged over all the considered queries. The precision is measured as the
fraction of relevant images contained in the 20 top retrieved images.
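For completeness, the normalisation and evaluation steps of this set-up could look as follows (a sketch under our assumptions; labels holds the manually assigned class of each image):

    import numpy as np

    def minmax_normalize(X):
        """Linearly rescale each feature to the range [0, 1]."""
        lo, hi = X.min(axis=0), X.max(axis=0)
        return (X - lo) / np.where(hi > lo, hi - lo, 1.0)   # guard constant features

    def precision_at_20(ranked_indices, labels, query_label):
        """Fraction of relevant images among the 20 top retrieved images."""
        return np.mean(labels[ranked_indices[:20]] == query_label)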
As the first two databases are of the "narrow domain" type, while the third is of the
"broad domain" type, this experimental set-up allowed a thorough testing of the
proposed technique.
For the sake of comparison, retrieval performances obtained with two methods
recently described in the literature are also reported: MindReader [3] which
modifies the query vector and the similarity metric on account of features relevance,
and Bayes QS (Bayesian Query Shifting) which is based on query reformulation [7].
These two methods have been selected because they can be easily implemented, and
their performances can be compared to those provided by a large number of
relevance feedback techniques proposed in the CBIR literature (see for example
results presented in [15]). It is worth noting that results presented in different papers
cannot be directly compared to each other because they are not related to a common
experimental set-up. However, as they are related to the same data sets with similar
experimental set-up, a qualitative comparison lets us conclude that the performances
of the two above techniques are quite close to other results in the literature.
4.1 Experiments with the MIT database
This database can be considered of the "narrow domain" type, as it contains only
images of textures of 40 different types. In addition, the selected feature space is
very suited to measure texture similarity.
Figure 1 shows the performances of the proposed relevance feedback mechanism and
those of the two techniques used for comparison.
[Figure 1 plot: average % precision (y-axis, 75-100) versus feedback iteration (x-axis, 0rf-8rf) for Relevance Score, Bayes QS, and MindReader.]
Figure 1: Retrieval Performances for the MIT database in terms of average
percentage retrieval precision.
After the first feedback iteration (1rf in the graph), each relevance feedback
mechanism is able to improve the average precision attained in the first retrieval by
more than 10%, the proposed mechanism performing slightly better than
MindReader. This is a desired behaviour as a user typically allows few iterations.
However, if the user aims to refine the search further through additional feedback
iterations, MindReader and Bayes QS are not able to exploit the additional
information, as they provide no improvements after the second feedback iteration.
On the other hand, the proposed mechanism provides further improvement in
precision as the number of iterations increases. These improvements are very small
because the first feedback already provides a high precision value, near 95%.
4.2 Experiments with the UCI database
This database too can be considered of the "narrow domain" type, as the images
clearly belong to one of the seven data classes, and features have been extracted
accordingly.
[Figure 2 plot: average % precision (y-axis, 90-100) versus feedback iteration (x-axis, 0rf-8rf) for Relevance Score, Bayes QS, and MindReader.]
Figure 2: Retrieval Performances for the UCI data set in terms of average
percentage retrieval precision.
Figure 2 shows the performances attained on the UCI database. Retrieval precision is
very high after the first extraction with no feedback. Nonetheless, each of the
considered mechanisms is able to exploit relevance feedback, MindReader and Bayes
QS providing a 6% improvement, while the proposed mechanism attains an 8%
improvement. This example clearly shows the superiority of the proposed technique,
as it attains a precision of 99% after the second iteration. Further iterations allow
attaining 100% precision. On the other hand, Bayes QS also exploits further
feedback iterations, attaining a precision of 98% after 7 iterations, while MindReader
does not improve the precision attained after the first iteration. As the user typically
allows very few feedback iterations, the proposed mechanism proved to be very well
suited for narrow domain databases, as it allows attaining a precision close to 100%.
4.3 Experiments with the Corel database
Figures 3 and 4 show the performances attained on two feature sets extracted from
the Corel database. This database is of the "broad domain" type, as images represent
a very large number of concepts, and the selected feature sets represent conceptual
similarity between pairs of images only partly.
Reported results clearly show the superiority of the proposed mechanism. Let us
note that the retrieval precision after the first k-nn search (0rf in the graphs) is quite
small. This is a consequence of the difficulty of selecting a good feature space to
represent conceptual similarity between pairs of images in a broad domain database.
This difficulty is partially overcome by using MindReader or Bayes QS as they
allow improving the retrieval precision by 10% to 15%, according to the number of
iterations allowed and to the selected feature space. Let us recall that both
MindReader and Bayes QS perform a query movement in order to perform a k-nn
query on a more promising region of the feature space. On the other hand, the
proposed mechanism, based on ranking all the images of the database according to a
relevance score, not only provides higher precision after the first feedback, but also
significantly improves retrieval precision as the number of iterations increases. As
the initial precision is quite small, a user may be more willing to perform further
iterations, as the proposed mechanism allows retrieving new relevant images.
Figure 3: Retrieval Performances for the Corel data set (Color Moments feature set)
in terms of average percentage retrieval precision.
Figure 4: Retrieval Performances for the Corel data set (Co-occurrence Texture
feature set) in terms of average percentage retrieval precision.
5 Conclusions
In this paper, we proposed a novel relevance feedback technique for content-based
image retrieval. While the vast majority of relevance feedback mechanisms aim at
modeling the user's concept of relevance based on the available labeled samples, the
proposed mechanism is based on ranking the images according to a relevance score
depending on the dissimilarity from the nearest relevant and non-relevant images.
The rationale behind our choice is the same as that of case-based reasoning, instance-based
learning, and nearest-neighbor pattern classification. These techniques provide good
performances when the number of available training samples is too small to use
statistical techniques. This is the case of relevance feedback in CBIR, where the use
of classification models would require a suitable formulation in order to avoid so-called "small sample" problems.
Reported results clearly showed the superiority of the proposed mechanism
especially when large databases made up of images related to many different
concepts are searched. In addition, while many relevance feedback techniques
require the tuning of some parameters, and exhibit high computational complexity,
the proposed mechanism does not require any parameter tuning, and exhibits a low
computational complexity, as a number of techniques are available to speed up
distance computations.
References
[1] Smeulders A.W.M., Worring M., Santini S., Gupta A., Jain R.: Content-based image
retrieval at the end of the early years. IEEE Trans. on Pattern Analysis and Machine
Intelligence 22(12) (2000) 1349-1380
[2] G. Salton and M.J. McGill, Introduction to modern information retrieval, New York,
McGraw-Hill, 1983.
[3] Ishikawa Y., Subramanys R., Faloutsos C.: MindReader: Querying databases through
multiple examples. In Proceedings. of the 24 th VLDB Conference (1998) 433-438
[4] Santini S., Jain R.: Integrated browsing and querying for image databases. IEEE Multimedia
7(3) (2000) 26-39
[5] Rui Y., Huang T.S.: Relevance Feedback Techniques in Image retrieval. In Lew M.S. (ed.):
Principles of Visual Information Retrieval. Springer, London, (2001) 219-258
[6] Sclaroff S., La Cascia M., Sethi S., Taycher L.: Mix and Match Features in the ImageRover
search engine. In Lew M.S. (ed.): Principles of Visual Information Retrieval. Springer-Verlag,
London (2001) 219-258
[7] Giacinto G., Roli F.: Bayesian relevance feedback for content-based image retrieval. Pattern
Recognition 37(7) (2004) 1499-1508
[8] Zhou X.S. and Huang T.S.: Relevance feedback in image retrieval: a comprehensive review,
Multimedia Systems 8(6) (2003) 536-544
[9] Aha D.W., Kibler D., Albert M.K.: Instance-Based Learning Algorithms. Machine Learning, 6,
(1991) 37-66
[10] Althoff K-D. Case-Based Reasoning. In Chang S.K. (ed.) Handbook on Software Engineering
and Knowledge Engineering, World Scientific (2001), 549-588.
[11] Bloch I. Information Combination Operators for Data Fusion: A Comparative Review with
Classification. IEEE Trans. on System, Man and Cybernetics - Part A, 26(1) (1996) 52-67
[12] Duda R.O., Hart P.E., and Stork D.G.: Pattern Classification. John Wiley and Sons, Inc., New
York, 2001
[13] Hastie T., Tibshirani R., and Friedman J.: The Elements of Statistical Learning. Springer,
New York, 2001
[14] Peng J., Bhanu B., Qing S., Probabilistic feature relevance learning for content-based
image retrieval, Computer Vision and Image Understanding 75 (1999) 150-164.
[15] He J., Li M., Zhang H-J, Tong H., Zhang C, Mean Version Space: a New Active Learning
Method for Content-Based Image Retrieval, Proc. of MIR 2004, New York, USA. (2004) 15-22.
1,712 | 2,556 | Parametric Embedding for Class Visualization
Tomoharu Iwata, Kazumi Saito, Naonori Ueda
NTT Communication Science Laboratories
NTT Corporation
2-4 Hikaridai Seika-Cho Soraku-gun Kyoto, 619-0237 JAPAN
{iwata,saito,ueda}@cslab.kecl.ntt.co.jp
Sean Stromsten, Thomas L. Griffiths, Joshua B. Tenenbaum
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
{sean s,gruffydd,jbt}@mit.edu
Abstract
In this paper, we propose a new method, Parametric Embedding (PE), for
visualizing the posteriors estimated over a mixture model. PE simultaneously embeds both objects and their classes in a low-dimensional space.
PE takes as input a set of class posterior vectors for given data points,
and tries to preserve the posterior structure in an embedding space by
minimizing a sum of Kullback-Leibler divergences, under the assumption that samples are generated by a Gaussian mixture with equal covariances in the embedding space. PE has many potential uses depending
on the source of the input data, providing insight into the classifier's behavior in supervised, semi-supervised and unsupervised settings. The PE
algorithm has a computational advantage over conventional embedding
methods based on pairwise object relations since its complexity scales
with the product of the number of objects and the number of classes. We
demonstrate PE by visualizing supervised categorization of web pages,
semi-supervised categorization of digits, and the relations of words and
latent topics found by an unsupervised algorithm, Latent Dirichlet Allocation.
1 Introduction
Recently there has been great interest in algorithms for constructing low-dimensional
feature-space embeddings of high-dimensional data sets. These algorithms seek to capture some aspect of the data set's intrinsic structure in a low-dimensional representation
that is easier to visualize or more efficient to process by other learning algorithms. Typical embedding algorithms take as input a matrix of data coordinates in a high-dimensional
ambient space (e.g., PCA [5]), or a matrix of metric relations between pairs of data points
(MDS [7], Isomap [6], SNE [4]). The algorithms generally attempt to map all and only
nearby input points onto nearby points in the output embedding.
Here we consider a different sort of embedding problem with two sets of points X =
{x1, ..., xN} and C = {c1, ..., cK}, which we call "objects" (X) and "classes" (C). The
input consists of conditional probabilities p(ck |xn ) associating each object xn with each
class ck . Many kinds of data take this form: for a classification problem, C may be the set
of classes, and p(ck |xn ) the posterior distribution over these classes for each object xn ; in a
marketing context, C might be a set of products and p(ck |xn ) the probabilistic preferences
of a consumer; or in language modeling, C might be a set of semantic topics, and p(ck|xn)
the distribution over topics for a particular document, as produced by a method like Latent
Dirichlet Allocation (LDA) [1]. Typically, the number of classes is much smaller than the
number of objects, K << N.
We seek a low-dimensional embedding of both objects and classes such that the distance
between object n and class k is monotonically related to the probability p(ck |xn ). This
embedding simultaneously represents not only the relations between objects and classes,
but also the relations within the set of objects and within the set of classes, each defined in
terms of relations to points in the other set. That is, objects that tend to be associated with
the same classes should be embedded nearby, as should classes that tend to have the same
objects associated with them. Our primary goals are visualization and structure discovery,
so we typically work with two- or three-dimensional embeddings.
Object-class embeddings have many potential uses, depending on the source of the input
data. If p(ck |xn ) represents the posterior probabilities from a supervised Bayesian classifier, an object-class embedding provides insight into the behavior of the classifier: how
well separated the classes are, where the errors cluster, whether there are clusters of objects
that "slip through a crack" between two classes, which objects are not well captured by
any class, and which classes are intrinsically most confusable with each other. Answers to
these questions could be useful for improved classifier design. The probabilities p(ck|xn)
may also be the product of unsupervised or semi-supervised learning, where the classes
ck represent components in a generative mixture model. Then an object-class embedding
shows how well the intrinsic structure of the objects (and, in a semi-supervised setting, any
given labels) accords with the clustering assumptions of the mixture model.
Our specific formulation of the embedding problem assumes that each class ck can be
represented by a spherical Gaussian distribution in the embedding space, so that the embedding as a whole represents a simple Gaussian mixture model for each object xn. We
seek an embedding that matches the posterior probabilities for each object under this Gaussian mixture model to the input probabilities p(ck |xn ). Minimizing the Kullback-Leibler
(KL) divergence between these two posterior distributions leads to an efficient algorithm,
which we call Parametric Embedding (PE).
PE can be seen as a generalization of stochastic neighbor embedding (SNE). SNE corresponds to a special case of PE where the objects and classes are identical sets. In SNE, the
class posterior probabilities p(ck |xn ) are replaced by the probability p(xm |xn ) of object
xn under a Gaussian distribution centered on xm . When the inputs (posterior probabilities) to PE come from an unsupervised mixture model, PE performs unsupervised dimensionality reduction just like SNE. However, it has several advantages over SNE and other
methods for embedding a single set of data points based on their pairwise relations (e.g.,
MDS, Isomap). It can be applied in supervised or semi-supervised modes, when class labels are available. Because its computational complexity scales with N K, the product of
the number of objects and the number of classes, it can be applied efficiently to data sets
with very many objects (as long as the number of classes remains small). In this sense,
PE is closely related to landmark MDS (LMDS) [2], if we equate classes with landmarks,
objects with data points, and -log p(ck|xn) with the squared distances input to LMDS.
However, LMDS lacks a probabilistic semantics and is only suitable for unsupervised settings. Lastly, even if hard classifications are not available, it is often the relations of the
objects to the classes, rather than to each other, that we are interested in.
After describing the mathematical formulation and optimization procedures used in PE
(Section 2), we present applications to visualizing the structure of several kinds of class
posteriors. In section 3, we look at supervised classifiers of hand-labeled web pages. In
section 4, we examine semi-supervised classifiers of handwritten digits. Lastly, in section 5,
we apply PE to an unsupervised probabilistic topics model, treating latent topics as classes,
and words as objects. PE handles these datasets easily, in the last producing an embedding
for over 26,000 objects in a little over a minute (on a 2GHz Pentium computer).
2 Parametric Embedding method
Given as input conditional probabilities p(ck |xn ), PE seeks an embedding of objects with
coordinates rn and classes with coordinates φk, such that p(ck|xn) is approximated as
closely as possible by the posterior probabilities from a unit-variance spherical Gaussian
mixture model in the embedding space:
p(c_k \mid r_n) = \frac{p(c_k)\,\exp\!\left(-\tfrac{1}{2}\|r_n - \phi_k\|^2\right)}{\sum_{l=1}^{K} p(c_l)\,\exp\!\left(-\tfrac{1}{2}\|r_n - \phi_l\|^2\right)}. \qquad (1)
Here \|\cdot\| is the Euclidean norm in the embedding space. When the conditional probabilities p(ck|xn) arise as posterior probabilities from a mixture model, we will also typically
be given priors p(ck ) as input; otherwise the p(ck ) terms above may be assumed equal.
It is natural to measure the degree of correspondence between input probabilities
and embedding-space probabilities using a sum of KL divergences for each object:
\sum_{n=1}^{N} \mathrm{KL}\big(p(c_k \mid x_n) \,\|\, p(c_k \mid r_n)\big). Minimizing this sum w.r.t. {p(ck|rn)} is equivalent to
minimizing the objective function

E(\{r_n\}, \{\phi_k\}) = -\sum_{n=1}^{N} \sum_{k=1}^{K} p(c_k \mid x_n) \log p(c_k \mid r_n). \qquad (2)
Since this minimization problem cannot be solved analytically, we employ a coordinate
descent method. We initialize {φk}, and we iteratively minimize E w.r.t. {φk} or {rn}
while fixing the other set of parameters, until E converges.
Derivatives of E are:
\frac{\partial E}{\partial r_n} = \sum_{k=1}^{K} \delta_{n,k}\,(r_n - \phi_k) \quad \text{and} \quad \frac{\partial E}{\partial \phi_k} = \sum_{n=1}^{N} \delta_{n,k}\,(\phi_k - r_n), \qquad (3)

where δn,k = p(ck|xn) - p(ck|rn). These learning rules have an intuitive interpretation
(analogous to those in SNE) as a sum of forces pulling or pushing rn (φk) depending on the
sign of δn,k. Importantly, the Hessian of E w.r.t. {rn} is a semi-positive definite matrix:
\frac{\partial^2 E}{\partial r_n\,\partial r_n'} = \sum_{k=1}^{K} p(c_k \mid r_n)\,\phi_k \phi_k' - \left(\sum_{k=1}^{K} p(c_k \mid r_n)\,\phi_k\right)\left(\sum_{k=1}^{K} p(c_k \mid r_n)\,\phi_k\right)' \qquad (4)

since the r.h.s. of (4) is exactly a covariance matrix. Thus we can find the globally optimal
solution for {rn} given {φk} (see footnote 1). The computational complexity of PE is O(NK), which
is much more efficient than that of pairwise (dis)similarity-based methods with O(N^2)
computations (such as SNE, MDS, or Isomap).
Footnote 1: In our experiments, we found that optimization proceeded more smoothly with a regularized
objective function, J = E + \lambda_r \sum_{n=1}^{N} \|r_n\|^2 + \lambda_\phi \sum_{k=1}^{K} \|\phi_k\|^2, where \lambda_r, \lambda_\phi > 0.
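Putting equations (1)-(3) and the regularized objective of the footnote together, a minimal NumPy sketch of the optimization might look as follows (our illustration, not the authors' code; the fixed step size and the simultaneous gradient steps are simplifications of the alternating coordinate-descent scheme described above):

    import numpy as np

    def pe_embed(P, prior, dim=2, iters=200, lr=0.1, lam_r=1e-4, lam_phi=1e-4, seed=0):
        """Fit object coordinates R (N, dim) and class means Phi (K, dim)
        to the input posteriors P (N, K), given class priors prior (K,)."""
        rng = np.random.default_rng(seed)
        N, K = P.shape
        R = rng.normal(scale=1e-2, size=(N, dim))
        Phi = rng.normal(scale=1e-2, size=(K, dim))
        for _ in range(iters):
            # Embedding-space posteriors p(c_k | r_n), equation (1)
            d2 = ((R[:, None, :] - Phi[None, :, :]) ** 2).sum(-1)     # (N, K)
            logits = np.log(prior)[None, :] - 0.5 * d2
            Q = np.exp(logits - logits.max(axis=1, keepdims=True))
            Q /= Q.sum(axis=1, keepdims=True)
            delta = P - Q                                             # delta_{n,k}
            # Gradients of J = E + regularizers, from equations (3)
            gR = delta.sum(axis=1, keepdims=True) * R - delta @ Phi + 2 * lam_r * R
            gPhi = delta.sum(axis=0)[:, None] * Phi - delta.T @ R + 2 * lam_phi * Phi
            R -= lr * gR
            Phi -= lr * gPhi
        return R, Phi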
3 Analyzing supervised classifiers on web data
In this section, we show how PE can be used to visualize the structure of labeled data
(web pages) in a supervised classification task. We also compare PE with two conventional
methods, MDS [7] and Fisher linear discriminant analysis (FLDA) [3]. MDS seeks a low-dimensional embedding that preserves the input distances between objects. It does not
normally use class labels for data points, although below we discuss a way to apply MDS
to label probabilities that arise in classification. FLDA, in contrast, naturally uses labeled
data in constructing a low-dimensional embedding. It seeks a linear projection of the
objects' coordinates in a high-dimensional ambient space that maximizes between-class
variance and minimizes within-class variance.
The set of objects comprised 5500 human-classified web pages: 500 pages sampled from
each of 11 top-level classes in Japanese directories of Open Directory (http://dmoz.org/).
Pages with fewer than 50 words, or which occurred under multiple categories, were eliminated. A Naive Bayes (NB) classifier was trained on the full data (represented as word
frequency vectors). Posterior probabilities p(ck |xn ) were calculated for classifying each
object (web page), assuming its true class label was unknown. These probabilities, as well
as estimated priors p(ck ), form the input to PE.
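One way such inputs could be produced with standard tools (our assumption; the paper does not name an implementation, and X_counts and y below are placeholders for the word-frequency matrix and the hand-assigned labels):

    import numpy as np
    from sklearn.naive_bayes import MultinomialNB

    nb = MultinomialNB().fit(X_counts, y)   # train on the full labeled data
    P = nb.predict_proba(X_counts)          # p(c_k | x_n), label treated as unknown
    prior = np.exp(nb.class_log_prior_)     # estimated class priors p(c_k)
    R, Phi = pe_embed(P, prior)             # joint embedding of pages and classes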
Fig.1(a) shows the output of PE, which captures many features of this data set and classification algorithm. Pages belonging to the same class tend to cluster well in the embedding,
which makes sense given the large sample of labeled data. Related categories are located
nearby: e.g., sports and health, or computers and online-shopping. Well-separated clusters
correspond to classes (e.g. sports) that are easily distinguished from others. Conversely,
regional pages are dispersed, indicating that they are not easily classified. Distinctive pages
are evident as well: a few pages that are scattered among the objects of another category
might be misclassified. Pages located between clusters are likely to be categorized in multiple classes; arcs between two classes show subsets of objects that distribute their probability
among those two classes and no others.
Fig.1(b) shows the result of MDS applied to cosine distances between web pages. No
labeled information is used (only word frequency vectors for the pages), and consequently
no class structure is visible. Fig.1(c) shows the result of FLDA. To stabilize the calculation,
FLDA was applied only after word frequencies were smoothed via SVD. FLDA uses label
information, and clusters together the objects in each class better than MDS does. However,
most clusters are highly overlapping, and the separation of classes is much poorer than
with PE. This seems to be a consequence of FLDA's restriction to purely linear projections,
which cannot, in general, separate all of the classes.
Fig.1(d) shows another way of embedding the data using MDS, but this time applied to
Euclidean distances in the (K-1)-dimensional space of posterior distributions p(ck|xn).
Pages belonging to the same class are definitely more clustered in this mode, but still the
clusters are highly overlapping and provide little insight into the classifier's behavior. This
version of MDS uses the same inputs as PE, rather than any high-dimensional word frequency vectors, but its computations are not explicitly probabilistic. The superior results of
PE (Fig.1(a)) illustrate the advantage of optimizing an appropriate probabilistic objective
function.
4 Application to semi-supervised classification
The utility of PE for analyzing classifier performance may best be illustrated in a semi-supervised setting, with a large unlabeled set of objects and a smaller set of labeled objects.
We fit a probabilistic classifier based on the labeled objects, and we would like to visualize
the behavior of the classifier applied to the unlabeled objects, in a way that suggests how accurate the classifier is likely to be and what kinds of errors it is likely to make.
[Figure 1 panels: (a) PE, (b) MDS (word frequencies), (c) FLDA, (d) MDS (posteriors).]
Figure 1: The visualizations of categorized web pages. Each of the 5500 web pages is shown
by a particle with shape indicating the page's class.
We constructed a simple probabilistic classifier for 2558 handwritten digits (classes 0-4)
from the MNIST database. The classifier was based on a mixture model for the density
of each class, defined by selecting either 10 or 100 digits uniformly at random from each
class and centering a fixed-covariance Gaussian (in pixel space) on each of these examples
(essentially a soft nearest-neighbor method). The posterior distributions under this classifier
for all 2558 digits were submitted as input to PE.
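A minimal sketch of this classifier (our code; the bandwidth sigma and the equal class priors are assumptions the text leaves implicit):

    import numpy as np

    def soft_nn_posteriors(X, exemplars, exemplar_labels, n_classes=5, sigma=1.0):
        """p(c_k | x): per-class mixture of fixed-covariance Gaussians centered
        on the labeled exemplars (10 or 100 per class), with equal class priors."""
        d2 = ((X[:, None, :] - exemplars[None, :, :]) ** 2).sum(-1)   # (N, M)
        logw = -0.5 * d2 / sigma**2
        logw -= logw.max(axis=1, keepdims=True)                       # stability
        w = np.exp(logw)
        P = np.stack([w[:, exemplar_labels == k].sum(axis=1)
                      for k in range(n_classes)], axis=1)
        return P / P.sum(axis=1, keepdims=True)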
The resulting embeddings allow us to predict the classifiers' patterns of confusions, calculated based on the true labels for all 2558 objects. Fig. 2 shows embeddings for both 10
labels/class and 100 labels/class. In both cases we see five clouds of points corresponding
to the five classes. The clouds are elongated and oriented roughly towards a common center, forming a star shape (also seen to some extent in our other applications). Objects that
concentrate their probability on only one class will lie as far from the center of the plot as
possible: ideally, even farther than the mean of their class, because this maximizes their posterior probability on that class.
[Figure 2 plots: embedded digits for (a) PE with 10 labels/class and (b) PE with 100 labels/class.
Confusion matrices (rows = true class, columns = estimated class 0-4) recovered from the figure:
(a): 0: 330 117 5 ... (row truncated in the source); 1: 0 557 4 0 2; 2: 26 134 278 49 1; 3: 36 144 17 294 2; 4: 3 117 5 6 404.
(b): 0: 471 8 0 0 0; 1: 0 559 1 2 1; 2: 8 74 388 16 2; 3: 2 34 2 455 0; 4: 1 47 0 1 486.]
Figure 2: Parametric embeddings for handwritten digit classification. Each dot represents
the coordinates rn of one image. Boxed numbers represent the class means φk. Crosses (×) show
labeled examples used to train the classifier. Images of several unlabeled digits are shown
for each class.
Moving towards the center of the plot, objects become
increasingly confused with other classes.
Relative to using only 10 labels/class, using 100 labels yields clusters that are more distinct, reflecting better between-class discrimination. Also, the labeled examples are more
evenly spread through each cluster, reflecting more faithful within-class models and less
overfitting. In both cases, the "1" class is much closer than any other to the center of the
plot, reflecting the fact that instances of other classes tend to be mistaken for "1"s. Instances
of other classes near the "1" center also tend to look rather "one-like": thinner and more
elongated. The dense cluster of points just outside the mean for "1" reflects the fact that "1"s
are rarely mistaken for other digits. In Fig. 2(a), the "0" and "3" distributions are particularly overlapping, reflecting that those two digits are most readily confused with each other
(apart from 1). The "webbing" between the diffuse "2" arm and the tighter "3" arm reflects
the large number of "2"s taken for "3"s. In Fig. 2(b), that "webbing" persists, consistent with
the observation that (again, apart from many mistaken responses of 1) the confusion of "2"s
for "3"s is the only large-scale error these larger data permit.
5 Application to unsupervised latent class models
In the applications above, PE was applied to visualize the structure of classes based at least
to some degree on labeled examples. The algorithm can also be used in a completely unsupervised setting, to visualize the structure of a probabilistic generative model based on
latent classes. Here we illustrate this application of PE by visualizing a semantic space of
word meanings: objects correspond to words, and classes correspond to topics in a latent
Dirichlet allocation (LDA) model [1] fit to a large (>37,000 documents, >12,000,000 word
tokens) corpus of educational materials for first grade to college (TASA). The problem of
mapping a large vocabulary is particularly challenging, and, with over 26,000 objects (word
types), prohibitively expensive for pairwise methods. Again, PE solves for the configuration shown in about a minute.
In LDA (not to be confused with FLDA above), each topic defines a probability distribution over word types that can occur in a document.
[Figure 3 plot: embedded word types with five topic means labeled chemistry, training/education, philosophy/history of science, geology, and banking/insurance; topic-specific words (e.g., ADSORPTION, COVALENTLY, MIXTURE; STATEWIDE, SCHOLARSHIP; HYPOTHESIS, SCIENTIFIC, DISCOVERY; STRATIFIED, PERMEABLE, DOME; DEPOSITS, MONEY, TAX, INSURANCE) lie toward the corners.]
Figure 3: Parametric embedding for word meanings and topics based on posterior distributions from an LDA model. Each dot represents the coordinates rn of one word. Large
phrases indicate the positions of topic means φk (with topics labeled intuitively). Examples
of words that belong to one or more topics are also shown.
This model can be inverted to give the probability that topic ck was responsible for generating word xn; these probabilities p(ck|xn)
provide the input needed to construct a space of word and topic meanings in PE.
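A sketch of this inversion via Bayes' rule (ours; topic_word and topic_prior stand for the fitted LDA parameters, independent of any particular package's API):

    import numpy as np

    def word_topic_posteriors(topic_word, topic_prior):
        """p(c_k | x_n) for every word type x_n, from p(x_n | c_k) and p(c_k).

        topic_word  : (K, V) array, rows are per-topic word distributions
        topic_prior : (K,) array of topic probabilities
        """
        joint = topic_word * topic_prior[:, None]             # p(x_n | c_k) p(c_k)
        return (joint / joint.sum(axis=0, keepdims=True)).T   # (V, K)

    # P = word_topic_posteriors(topic_word, topic_prior)
    # R, Phi = pe_embed(P, topic_prior)   # here N = 26,243 word types, K = 5 topics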
More specifically, we fit a 50-topic LDA model to the TASA corpus. Then, for each word
type, we computed its posterior distribution restricted to a subset of 5 topics, and input these
conditional probabilities to PE (with N = 26,243, K = 5). Fig. 3 shows the resulting
embedding. As with the embeddings in Figs. 1 and 2, the topics are arranged roughly in a
star shape, with a tight cluster of points at each corner of the star corresponding to words
that place almost all of their probability mass on that topic. Semantically, the words in these
extreme clusters often (though not always) have a fairly specialized meaning particular to
the nearest topic. Moving towards the center of the plot, words take on increasingly general
meanings.
This embedding shows other structures not visible in previous figures: in particular, dense
curves of points connecting every pair of clusters. This pattern reflects the characteristic
probabilistic structure of topic models of semantics: in addition to the clusters of words
that associate with just one topic, there are many words that associate with just two topics,
or just three, and so on. The dense curves in Fig. 3 show that for any pair of topics in
this corpus, there exists a substantial subset of words that associate with just those topics.
For words with probability sharply concentrated on two topics, points along these curves
minimize the sum of the KL and regularization terms. This kind of distribution is intrinsically high-dimensional and cannot be captured with complete fidelity in any 2-dimensional
embedding.
As shown by the examples labeled in Fig. 3, points along the curves connecting two apparently unrelated topics often have multiple meanings or senses that join them to each topic:
"deposit" has both a geological and a financial sense, "phase" has both an everyday and a
chemical sense, and so on.
6 Conclusions
We have proposed a probabilistic embedding method, PE, that embeds objects and classes
simultaneously. PE takes as input a probability distribution for objects over classes, or more
generally of one set of points over another set, and attempts to fit that distribution with a
simple class-conditional parametric mixture in the embedding space. Computationally, PE
is inexpensive relative to methods based on similarities or distances between all pairs of
objects, and converges quickly on many thousands of data points.
The visualization results of PE shed light on features of both the data set and the classification model used to generate the input conditional probabilities, as shown in applications
to classified web pages, partially classified digits, and the latent topics discovered by an
unsupervised method, LDA. PE may also prove useful for similarity-preserving dimension
reduction, where the high-dimensional model is not of primary interest, or more generally,
in analysis of large conditional probability tables that arise in a range of applied domains.
As an example of an application we have not yet explored, purchases, web-surfing histories, and other preference data naturally form distributions over items or categories of items.
Conversely, items define distributions over people or categories thereof. Instances of such
dyadic data abound (restaurants and patrons, readers and books, authors and publications,
species and foods...), with patterns that might be visualized. PE provides a tractable, principled, and effective visualization method for large volumes of such data, for which pairwise
methods are not appropriate.
Acknowledgments
This work was supported by a grant from the NTT Communication Sciences Laboratories.
References
[1] D. Blei, A. Ng and M. Jordan. Latent dirichlet allocation. NIPS 15, 2002.
[2] V. de Silva, J. B. Tenenbaum. Global versus local methods in nonlinear dimensionality
reduction. NIPS 15, pp. 705-712, 2002.
[3] R. Fisher. The use of multiple measurements in taxonomic problems. Annals of Eugenics 7, pp. 179-188, 1936.
[4] G. Hinton and S. Roweis. Stochastic neighbor embedding. NIPS 15, 2002.
[5] I.T. Jolliffe. Principal Component Analysis. Springer, 1986.
[6] J. Tenenbaum, V. de Silva and J. Langford. A global geometric framework for nonlinear dimensionality reduction. Science 290, pp. 2319-2323, 2000.
[7] W. Torgerson. Theory and Methods of Scaling. New York, Wiley, 1958.
Conditional Models of Identity Uncertainty
with Application to Noun Coreference
Andrew McCallum?
Department of Computer Science
University of Massachusetts Amherst
Amherst, MA 01003 USA
[email protected]
Ben Wellner
The MITRE Corporation
202 Burlington Road
Bedford, MA 01730 USA
[email protected]
Abstract
Coreference analysis, also known as record linkage or identity uncertainty, is a difficult and important problem in natural language processing, databases, citation matching and many other tasks. This paper introduces several discriminative, conditional-probability models for coreference analysis, all examples of undirected graphical models. Unlike
many historical approaches to coreference, the models presented here
are relational - they do not assume that pairwise coreference decisions
should be made independently from each other. Unlike other relational
models of coreference that are generative, the conditional model here can
incorporate a great variety of features of the input without having to be
concerned about their dependencies?paralleling the advantages of conditional random fields over hidden Markov models. We present positive
results on noun phrase coreference in two standard text data sets.
1 Introduction
In many domains - including computer vision, databases and natural language
processing - we find multiple views, descriptions, or names for the same underlying object. Correctly resolving these references is a necessary precursor to further processing and
understanding of the data. In computer vision, solving object correspondence is necessary
for counting or tracking. In databases, performing record linkage or de-duplication creates
a clean set of data that can be accurately mined. In natural language processing, coreference analysis finds the nouns, pronouns and phrases that refer to the same entity, enabling
the extraction of relations among entities as well as more complex propositions.
Consider, for example, the text in a news article that discusses the entities George Bush,
Colin Powell, and Donald Rumsfeld. The article contains multiple mentions of Colin
Powell by different strings: "Secretary of State Colin Powell," "he," "Mr. Powell," "the
Secretary" - and also refers to the other two entities with sometimes overlapping strings.
The coreference task is to use the content and context of all the mentions to determine how
many entities are in the article, and which mention corresponds to which entity.
This task is most frequently solved by examining individual pair-wise distance measures
between mentions independently of each other. For example, database record-linkage and
citation reference matching has been performed by learning a pairwise distance metric
between records, and setting a distance threshold below which records are merged (Monge
& Elkan, 1997; McCallum et al., 2000; Bilenko & Mooney, 2002; Cohen & Richman,
2002). Coreference in NLP has also been performed with distance thresholds or pairwise
classifiers (McCarthy & Lehnert, 1995; Ge et al., 1998; Soon et al., 2001; Ng & Cardie,
2002).
But these distance measures are inherently noisy and the answer to one pair-wise coreference decision may not be independent of another. For example, if we measure the distance
between all of the three possible pairs among three mentions, two of the distances may
be below threshold, but one above - an inconsistency due to noise and imperfect measurement. For example, "Mr. Powell" may be correctly coresolved with "Powell," but particular grammatical circumstances may make the model incorrectly believe that "Powell"
is coreferent with a nearby occurrence of "she." Inconsistencies might be better resolved
if the coreference decisions are made in dependent relation to each other, and in a way
that accounts for the values of the multiple distances, instead of a threshold on single pairs
independently.
Recently Pasula et al. (2003) have proposed a formal, relational approach to the problem
of identity uncertainty using a type of Bayesian network called a Relational Probabilistic
Model (Friedman et al., 1999). A great strength of this model is that it explicitly captures
the dependence among multiple coreference decisions.
However, it is a generative model of the entities, mentions and all their features, and thus
has difficulty using many features that are highly overlapping, non-independent, at varying
levels of granularity, and with long-range dependencies. For example, we might wish to
use features that capture the phrases, words and character n-grams in the mentions, the
appearance of keywords anywhere in the document, the parse-tree of the current, preceding
and following sentences, as well as 2-d layout information. To produce accurate generative
probability distributions, the dependencies between these features should be captured in the
model; but doing so can lead to extremely complex models in which parameter estimation
is nearly impossible.
Similar issues arise in sequence modeling problems. In this area significant recent success has been achieved by replacing a generative model - hidden Markov models - with a
conditional model - conditional random fields (CRFs) (Lafferty et al., 2001). CRFs have
reduced part-of-speech tagging errors by 50% on out-of-vocabulary words in comparison
with HMMs (Ibid.), matched champion noun phrase segmentation results (Sha & Pereira,
2003), and significantly improved extraction of named entities (McCallum & Li, 2003),
citation data (Peng & McCallum, 2004), and the segmentation of tables in government reports (Pinto et al., 2003). Relational Markov networks (Taskar et al., 2002) are similar
models, and have been shown to significantly improve classification of Web pages.
This paper introduces three conditional undirected graphical models for identity uncertainty. The models condition on the mentions, and generate the coreference decisions (and
in some cases also generate attributes of the entities). In the first most general model, the
dependency structure is unrestricted, and the number of underlying entities explicitly appears in the model structure. The second and third models have no structural dependence
on the number of entities, and fall into a class of Markov random fields in which inference
corresponds to graph partitioning (Boykov et al., 1999).
After introducing the first two models as background generalizations, we show experimental results using the third, most specific model on a noun coreference problem in two
different standard newswire text domains: broadcast news stories from the DARPA Automatic Content Extraction (ACE) program, and newswire articles from the MUC-6 corpus.
In both domains we take advantage of the ability to use arbitrary, overlapping features of
the input, including multiple grammatical features, string equality, substring, and acronym
matches. Using the same features, in comparison with an alternative natural language processing technique, we reduce error by 33% and 28% in the two domains on proper nouns
and by 10% on all nouns in the MUC-6 data.
2 Three Conditional Models of Identity Uncertainty
We now describe three possible configurations for conditional models of identity uncertainty, each progressively simpler and more specific than its predecessor. All three are
based on conditionally-trained, undirected graphical models.
Undirected graphical models, also known as Markov networks or Markov random fields,
are a type of probabilistic model that excels at capturing interdependent data in which
causality among attributes is not apparent. We begin by introducing notation for mentions,
entities and attributes of entities, then in the following subsections describe the likelihood,
inference and estimation procedures for the specific undirected graphical models.
Let E = (E_1, ..., E_m) be a collection of classes or "entities". Let X = (X_1, ..., X_n) be a
collection of random variables over observations or "mentions"; and let Y = (Y_1, ..., Y_n) be
a collection of random variables over integer identifiers, unique to each entity, specifying to
which entity a mention refers. Thus the y's are integers ranging from 1 to m, and if Y_i = Y_j,
then mention X_i is said to refer to the same underlying entity as X_j. For example, some
particular entity e_4, U.S. Secretary of State, Colin L. Powell, may be mentioned multiple
times in a news article that also contains mentions of other entities: x_6 may be "Colin
Powell"; x_9 may be "he"; x_17 may be "the Secretary of State." In this case, the unique
integer identifier for this entity, e_4, is 4, and y_6 = y_9 = y_17 = 4.
Furthermore, entities may have attributes. Let A be a random variable over the collection of
all attributes for all entities. Borrowing the notation of Relational Markov Networks (Taskar
et al., 2002), we write the random variable over the attributes of entity E_s as E_s.A =
{E_s.A_1, E_s.A_2, E_s.A_3, ...}. For example, these three attributes may be gender, birth year,
and surname. Continuing the above example, then e_4.a_1 = MALE, e_4.a_2 = 1937, and e_4.a_3
= Powell. One can interpret the attributes as the values that should appear in the fields of
a database record for the given entity. Attributes such as surname may take on one of the
finite number of values that appear in the mentions of the data set.
We may examine many features of the mentions, x, but since a conditional model doesn't
generate them, we don't need random variable notation for them. Separate measured features of the mentions and entity-assignments, y, are captured in different feature functions,
f(·), over cliques in the graphical model. Although the functions may be real-valued, typically they are binary. The parameters of the model are associated with these different
feature functions. Details and example feature functions and parameterizations are given
for the three specific models below.
The task is then to find the most likely collection of entity-assignments, y, (and optionally
also the most likely entity attributes, a), given a collection of mentions and their context, x. A generative probabilistic model of identity uncertainty is trained to maximize
P(Y, A, X). A conditional probabilistic model of identity uncertainty is instead trained to
maximize P(Y, A | X), or simply P(Y | X).
2.1 Model 1: Groups of nodes for entities
First we consider an extremely general undirected graphical model in which there is a node
for the mentions, x,[1] a node for the entity-assignment of each mention, y, and a node for
each of the attributes of each entity, e.a. These nodes are connected by edges in some
unspecified structure, where an edge indicates that the values of the two connected random
variables are dependent on each the other.
[1] Even though there are many mentions in x, because we are not generating them, we can represent
them as a single node. This helps show that feature functions can ask arbitrary questions about various
large and small subsets of the mentions and their context. We will still use x_i to refer to the content
and context of the ith mention.
The parameters of the model are defined over cliques in this graph. Typically the parameters on many different cliques would be tied in patterns that reflect the nature of the repeated
relational structure in the data. Patterns of tied parameters are common in many graphical models, including HMMs and other finite state machines (Lafferty et al., 2001), where
they are tied across different positions in the input sequence, and by more complex patterns based on SQL-like queries, as in Markov Relational Networks (Taskar et al., 2002).
Following the nomenclature of the latter, these parameter-tying patterns are called clique
templates; each particular instance of a template in the graph we call a hit.
For example, one clique template may specify a pattern consisting of two mentions, their
entity-assignment nodes, and an entity?s surname attribute node. The hits would consist
of all possible combinations of such nodes. Multiple feature functions could then be run
over each hit. One feature function might have value 1 if, for example, both mentions were
assigned to the same entity as the surname node, and if the surname value appears as a
substring in both mention strings (and value 0 otherwise).
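To make this concrete, such a feature function might be sketched as follows (a minimal illustration of ours, in Python; the mention and entity objects and their fields are hypothetical stand-ins for whatever representation an implementation uses):

```python
def surname_substring_feature(mention_i, mention_j, y_i, y_j, surname_node):
    """1 iff both mentions are assigned to the same entity as the surname
    node and the surname value is a substring of both mention strings.
    The .entity_id, .value, and .text fields are hypothetical."""
    same_entity = (y_i == y_j == surname_node.entity_id)
    s = surname_node.value
    return int(same_entity and s in mention_i.text and s in mention_j.text)
```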
The Hammersley-Clifford theorem stipulates that the probability of a particular set of values on the random variables in an undirected graphical model is a product of potential
functions over cliques of the graph. Our cliques will be the hits, h = {h, ...}, resulting
from a set of clique templates, t = {t, ...}. In typical fashion, we will write the probability
distribution in exponential form, with each potential function calculated as a dot-product
of feature functions, f, and learned parameters, λ:

\[ P(\mathbf{y}, \mathbf{a} \mid \mathbf{x}) = \frac{1}{Z_{\mathbf{x}}} \exp\bigg( \sum_{t \in \mathbf{t}} \; \sum_{h_t \in \mathbf{h}_t} \; \sum_{l} \lambda_l \, f_l(\mathbf{y}, \mathbf{a}, \mathbf{x} : h_t) \bigg), \]

where (y, a, x : h_t) indicates the subset of the entity-assignment, attribute, and mention
nodes selected by the clique template hit h_t; and Z_x is a normalizer to make the probabilities over all y sum to one (also known as the partition function).
The parameters, λ, can be learned by maximum likelihood from labeled training data.
Calculating the partition function is problematic because there are a very large number of
possible y's and a's. Loopy belief propagation or Gibbs sampling have been used
successfully in other similar situations, e.g. (Taskar et al., 2002).
However, note that both loopy belief propagation and Gibbs sampling only work over a
graph with fixed structure. But in our problem the number of entities (and thus number of
attribute nodes, and the domain of the entity-assignment nodes) is unknown. Inference in
these models must determine for us the highest-probability number of entities.
In related work on a generative probabilistic model of identity uncertainty, Pasula et al.
(2003), solve this problem by alternating rounds of Metropolis-Hastings sampling on a
given model structure with rounds of Metropolis-Hastings to explore the space of new
graph structures.
2.2 Model 2: Nodes for mention pairs, with attributes on mentions
To avoid the need to change the graphical model structure during inference, we now remove
any parts of the graph that depend on the number of entities, m: (1) The per-mention
entity-assignment nodes, Y_i, are random variables whose domain is over the integers 0
through m; we remove these nodes, replacing them with binary-valued random variables,
Y_ij, over each pair of mentions, (X_i, X_j) (indicating whether or not the two mentions are
coreferent); although it is not strictly necessary, we also restrict the clique templates to
operate over no more than two mentions (for efficiency). (2) The per-entity attribute nodes
A are removed and replaced with attribute nodes associated with each mention; we write
x_i.a for the set of attributes on mention x_i.
Even though the clique templates are now restricted to pairs of mentions, this does not
imply that pairwise coreference decisions are made independently of each other - they are
still highly dependent. Many pairs will overlap with each other, and constraints will flow
through these overlaps. This point is reiterated with an example in the next subsection.
Notice, however, that it is possible for the model as thus far described to assign non-zero
probability to an inconsistent set of entity-assignments, y. For example, we may have an
"inconsistent triangle" of coreference decisions in which y_ij and y_jk are 1, while y_ik is 0.
We can enforce the impossibility of all inconsistent configurations by adding inconsistency-checking functions f'(y_ij, y_jk, y_ik) for all mention triples, with the corresponding λ' parameters
fixed at negative infinity, thus assigning zero probability to them. (Note that this is simply
a notational trick; in practice the inference implementation simply avoids any configurations of y that are inconsistent - a check that is simple to perform.) Thus we have
\[ P(\mathbf{y}, \mathbf{a} \mid \mathbf{x}) = \frac{1}{Z_{\mathbf{x}}} \exp\bigg( \sum_{i,j,l} \lambda_l \, f_l(x_i, x_j, y_{ij}, x_i.a, x_j.a) \; + \; \sum_{i,j,k} \lambda' f'(y_{ij}, y_{jk}, y_{ik}) \bigg). \]
We can also enforce consistency among the attributes of coreferent mentions by similar
means. There are many widely-used techniques for efficiently and drastically reducing
the number of pair-wise comparisons, e.g. (Monge & Elkan, 1997; McCallum et al., 2000).
In this case, we could also restrict f_l(x_i, x_j, y_ij) ≡ 0, ∀ y_ij = 0.
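Although in practice the inference implementation simply never proposes inconsistent configurations, the consistency condition itself is easy to state in code. The sketch below (ours, not from the paper) checks that a set of pairwise decisions is transitively closed over all mention triples:

```python
from itertools import combinations

def is_consistent(y, n):
    """y[(i, j)] (for i < j) is the binary coreference decision for a pair
    of mentions.  A configuration is inconsistent exactly when some triple
    has two pairs coreferent and one not (an 'inconsistent triangle')."""
    for i, j, k in combinations(range(n), 3):
        if y[(i, j)] + y[(j, k)] + y[(i, k)] == 2:
            return False
    return True
```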
2.3 Model 3: Nodes for mention pairs, graph partitioning with learned distance
When gathering attributes of entities is not necessary, we can avoid the extra complication
of attributes by removing them from the model. What results is a straightforward, yet
highly expressive, discriminatively-trained, undirected graphical model that can use rich
feature sets and relational inference to solve identity uncertainty tasks. Determining the
most likely number of entities falls naturally out of inference. The model is
\[ P(\mathbf{y} \mid \mathbf{x}) = \frac{1}{Z_{\mathbf{x}}} \exp\bigg( \sum_{i,j,l} \lambda_l \, f_l(x_i, x_j, y_{ij}) \; + \; \sum_{i,j,k} \lambda' f'(y_{ij}, y_{jk}, y_{ik}) \bigg). \qquad (1) \]
Recently there has been increasing interest in study of the equivalence between graph partitioning algorithms and inference in certain kinds of undirected graphical models, e.g.
(Boykov et al., 1999). This graphical model is an example of such a case. With some
thought, one can straightforwardly see that finding the highest probability coreference solution, y* = arg max_y P(y|x), exactly corresponds to finding the graph partitioning of a
(different) graph in which the mentions are the nodes and the edge weights are the (log)
clique potentials on the pair of nodes ⟨x_i, x_j⟩ involved in their edge: Σ_l λ_l f_l(x_i, x_j, y_ij),
where f_l(x_i, x_j, 1) = −f_l(x_i, x_j, 0), and edge weights range from −∞ to +∞. Unlike
classic mincut/maxflow binary partitioning, here the number of partitions (corresponding
to entities) is unknown, but a single optimal number of partitions exists; negative edge
weights encourage more partitions.
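To make the correspondence concrete, the following sketch (our own; the feature and parameter containers are hypothetical) builds the edge-weight matrix of the partitioning graph from the learned parameters; positive weights favor placing two mentions in the same partition, negative weights favor separating them:

```python
import numpy as np

def edge_weights(mentions, features, lam):
    """w[i, j] = sum_l lam[l] * f_l(x_i, x_j, 1): the (log) clique potential
    for declaring mentions i and j coreferent.  Because
    f_l(x_i, x_j, 1) = -f_l(x_i, x_j, 0), this single signed number carries
    the whole pairwise potential."""
    n = len(mentions)
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            w[i, j] = w[j, i] = sum(
                lam[l] * f(mentions[i], mentions[j], 1)
                for l, f in enumerate(features))
    return w
```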
Graph partitioning with negative edge weights is NP-hard, but it has a history of good
approximations, and several efficient algorithms to choose from. Our current experiments
use an instantiation of the minimizing-disagreements Correlational Clustering algorithm in
(Bansal et al., 2002). This approach is a simple yet effective partitioning scheme. It works
by measuring the degree of inconsistency incurred by including a node in a partition, and
making repairs. We refer the reader to Bansal et al. (2002) for further details.
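A minimal local-repair partitioner in this spirit might look like the sketch below (a simplification of ours, not Bansal et al.'s exact algorithm, and assuming the weight matrix built above):

```python
def greedy_partition(w):
    """Greedy local repair for partitioning with signed edge weights.  Each
    node is repeatedly moved to the cluster (possibly a fresh singleton) to
    which its summed edge weight is largest; every move strictly increases
    total within-cluster weight, so the loop terminates."""
    n = len(w)
    label = list(range(n))
    changed = True
    while changed:
        changed = False
        for i in range(n):
            # Options: stay put, start a fresh singleton, or join any cluster.
            gain = {label[i]: 0.0, max(label) + 1: 0.0}
            for j in range(n):
                if j != i:
                    gain[label[j]] = gain.get(label[j], 0.0) + w[i][j]
            best = max(gain, key=gain.get)
            if gain[best] > gain[label[i]]:  # strict improvement only
                label[i], changed = best, True
    return label
```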
The resulting solution does not make pairwise coreference decisions independently of each
other. It has a significant "relational" nature because the assignment of a node to a partition (or, mention to an entity) depends not just on a single low distance measurement
to one other node, but on its low distance measurement to all nodes in the partition (and
furthermore on its high distance measurement to all nodes of all other partitions). For example, the "Mr. Powell"/"Powell"/"she" problem discussed in the introduction would be
prevented by this model because, although the distance between "Powell" and "she" might
grammatically look low, the distance from "she" to another member of the same partition
("Mr. Powell") is very high.
Interestingly, in our model, the distance measure between nodes is learned from labeled
training data. That is, we use data, D, in which the correct coreference partitions are
known in order to learn a distance metric such that, when the same data is clustered, the
correct partitions emerge. This is accomplished by maximum likelihood: adjusting the
weights, λ, to maximize the product of Equation 1 over all instances ⟨x, y⟩ in the training
set. Fortunately this objective function is concave - it has a single global maximum -
and there are several applicable optimization methods to choose from, including gradient
ascent, stochastic gradient ascent and conjugate gradient; all simply require the derivative
of the objective function. The derivative of the log-likelihood, L, is
\[ \frac{\partial L}{\partial \lambda_l} = \sum_{\langle \mathbf{x}, \mathbf{y} \rangle \in D} \bigg( \sum_{i,j} f_l(x_i, x_j, y_{ij}) \; - \; \sum_{\mathbf{y}'} P_{\Lambda}(\mathbf{y}' \mid \mathbf{x}) \sum_{i,j} f_l(x_i, x_j, y'_{ij}) \bigg), \]

where P_Λ(y'|x) is defined by Equation 1, using the current set of parameters, Λ, and
Σ_{y'} is a sum over all possible partitionings.
The number of possible partitionings is exponential in the number of mentions, so for
any reasonably-sized problem, we obviously must resort to approximate inference for the
second expectation. A simple option is stochastic gradient ascent in the form of a voted
perceptron (Collins, 2002). Here we calculate the gradient for a single training instance at a
time, and rather than use a full expectation in the second line, simply use the single most
likely (or nearly most likely) partitioning as found by a graph partitioning algorithm, and
make progressively smaller steps in the direction of these gradients while cycling through
the instances, ⟨x, y⟩, in the training data. Neither the full sum over y', nor the partition function, Z_x, need be calculated in this case. Further details are given in (Collins, 2002).
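A single stochastic step of this training procedure might then be sketched as follows (our paraphrase; `partition` stands for any approximate MAP partitioner such as the greedy one above, and `edge_weights` is the earlier sketch):

```python
def pairwise_decisions(label):
    """Convert a cluster labeling into pairwise 0/1 coreference decisions."""
    n = len(label)
    return {(i, j): int(label[i] == label[j])
            for i in range(n) for j in range(i + 1, n)}

def perceptron_step(lam, mentions, y_true, features, partition, eta=0.1):
    """One voted-perceptron-style update on a single instance: the
    expectation over partitionings is replaced by the single (approximately)
    most likely partitioning under the current parameters."""
    w = edge_weights(mentions, features, lam)
    y_hat = pairwise_decisions(partition(w))
    n = len(mentions)
    for l, f in enumerate(features):
        observed = sum(f(mentions[i], mentions[j], y_true[(i, j)])
                       for i in range(n) for j in range(i + 1, n))
        expected = sum(f(mentions[i], mentions[j], y_hat[(i, j)])
                       for i in range(n) for j in range(i + 1, n))
        lam[l] += eta * (observed - expected)
    return lam
```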
3 Experiments with Noun Coreference
We present experimental results on natural language noun phrase coreference using Model
3 applied to two applicable data sets: the DARPA MUC-6 corpus, and a set of 117 stories
from the broadcast news portion of the DARPA ACE data set. Both data sets have annotated
coreferences. We pre-process both data sets with the Brill part-of-speech tagger.
We compare our Model 3 against two other techniques representing typical approaches to
the problem of identity uncertainty. The first is single-link clustering with a threshold,
(single-link-threshold), which is universally used in database record-linkage and citation
reference matching (Monge & Elkan, 1997; Bilenko & Mooney, 2002; McCallum et al.,
2000; Cohen & Richman, 2002). It forms partitions by simply collapsing the spanning
trees of all mentions with pairwise distances below some threshold. For each experiment,
the threshold was selected by cross validation.
The second technique, which we call best-previous-match, has been used in natural language processing applications (Morton, 1997; Ge et al., 1998; Ng & Cardie, 2002). It
works by scanning linearly through a document, and associating each mention with its
best-matching predecessor - best as measured with a single pairwise distance.
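For reference, both baselines are easy to state precisely; sketches of each (ours, assuming a pairwise distance function `dist` produced by the maximum entropy classifier):

```python
def single_link_threshold(mentions, dist, threshold):
    """Collapse the spanning trees of all mentions whose pairwise distance
    is below threshold (union-find over below-threshold pairs)."""
    parent = list(range(len(mentions)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(len(mentions)):
        for j in range(i + 1, len(mentions)):
            if dist(mentions[i], mentions[j]) < threshold:
                parent[find(i)] = find(j)
    return [find(i) for i in range(len(mentions))]

def best_previous_match(mentions, dist, threshold):
    """Scan the document left to right, linking each mention to its single
    best-matching predecessor when that best distance is below threshold."""
    label = list(range(len(mentions)))
    for j in range(1, len(mentions)):
        d, i = min((dist(mentions[i], mentions[j]), i) for i in range(j))
        if d < threshold:
            label[j] = label[i]
    return label
```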
In our experiments, both single-link-threshold and best-previous-match implementations
use a distance measure based on a binary maximum entropy classifier - matching the practice of Morton (1997) and Cohen and Richman (2002).
We use an identical feature set for all techniques, including our Method 3. The features,
typical of those used in many other NLP coreference systems, are modeled after those
in Ng and Cardie (2002). They include tests for string and substring matches, acronym
matches, parse-derived head-word matches, gender, WordNet subsumption, sentence
distance, distance in the parse tree, etc., and are detailed in an accompanying technical
report. They are quite non-independent, and operate at multiple levels of granularity.
Table 1 shows standard MUC-style F1 scores for three experiments. In the first two experiments, we consider only proper nouns, and perform five-fold cross validation. In the third
experiment, we perform the standard MUC evaluation, including all nouns - pronouns,
common and proper - and use the standard 30/30 document train/test split; furthermore,
as in Harabagiu et al. (2001), we consider only mentions that have a coreferent. Model 3
outperforms both the single-link-threshold and the best-previous-match techniques, reducing error by 28% over single-link-threshold on the ACE proper noun data, by 24% on the
MUC-6 proper noun data, and by 10% over the best-previous-match technique on the full
MUC-6 task. All differences from Model 3 are statistically significant. Historically, these
data sets have been heavily studied, and even small gains have been celebrated.

Table 1: F1 results on three data sets.

                           ACE (Proper)   MUC-6 (Proper)   MUC-6 (All)
  best-previous-match          90.98           88.83          70.41
  single-link-threshold        91.65           88.90          60.83
  Model 3                      93.96           91.59          73.42
Our overall results on MUC-6 are slightly better (with unknown statistical significance)
than the best published results of which we are aware with a matching experimental design,
Harabagiu et al. (2001), who reach 72.3% using the same training and test data.
4 Related Work and Conclusions
There has been much related work on identity uncertainty in various specific fields. Traditional work in de-duplication for databases or reference-matching for citations measures
the distance between two records by some metric, and then collapses all records at a distance below a threshold, e.g. (Monge & Elkan, 1997; McCallum et al., 2000). This method
is not relational, that is, it does not account for the inter-dependent relations among multiple decisions to collapse. Most recent work in the area has focused on learning the distance
metric (Bilenko & Mooney, 2002; Cohen & Richman, 2002) not the clustering method.
Natural language processing has had similar emphasis and lack of emphasis respectively.
Learned pairwise coreference distance measures have used decision trees (McCarthy &
Lehnert, 1995; Ng & Cardie, 2002), SVMs (Zelenko et al., 2003), maximum entropy classifiers (Morton, 1997), and generative probabilistic models (Ge et al., 1998). But all use
thresholds on a single pairwise distance, or the maximum of a single pairwise distance to
determine if or where a coreferent merge should occur.
Pasula et al. (2003) introduce a generative probability model for identity uncertainty based
on Probabilistic Relational Networks networks. Our work is an attempt to gain some of the
same advantages that CRFs have over HMMs by creating conditional models of identity
uncertainty. The models presented here, as instances of conditionally-trained undirected
graphical models, are also instances of relational Markov networks (Taskar et al., 2002)
and conditional Random fields (Lafferty et al., 2001). Taskar et al. (2002) briefly discuss
clustering of dyadic data, such as people and their movie preferences, but not identity
uncertainty or inference by graph partitioning.
Identity uncertainty is a significant problem in many fields. In natural language processing,
it is not only especially difficult, but also extremely important, since improved coreference resolution is one of the chief barriers to effective data mining of text data. Natural
language data is a domain that has particularly benefited from rich and overlapping feature representations?representations that lend themselves better to conditional probability
models than generative ones (Lafferty et al., 2001; Collins, 2002; Morton, 1997). Hence
our interest in conditional models of identity uncertainty.
Acknowledgments
We thank Andrew Ng, Jon Kleinberg, David Karger, Avrim Blum and Fernando Pereira for helpful
and insightful discussions. This work was supported in part by the Center for Intelligent Information
Retrieval and in part by SPAWARSYSCEN-SD grant numbers N66001-99-1-8912 and N66001-02-1-8903, and DARPA under contract number F30602-01-2-0566 and in part by the National Science
Foundation under NSF grant #IIS-0326249 and in part by the Defense Advanced Research Projects Agency (DARPA), through the Department of the Interior, NBC, Acquisition Services Division,
under contract number NBCHD030010.
References
Bansal, N., Chawala, S., & Blum, A. (2002). Correlation clustering. The 43rd Annual Symposium on
Foundations of Computer Science (FOCS) (pp. 238?247).
Bilenko, M., & Mooney, R. J. (2002). Learning to combine trained distance metrics for duplicate
detection in databases (Technical Report Technical Report AI 02-296). Artificial Intelligence
Laboratory, University of Texas at Austin, Austin, TX.
Boykov, Y., Veksler, O., & Zabih, R. (1999). Fast approximate energy minimization via graph cuts.
ICCV (1) (pp. 377?384).
Cohen, W., & Richman, J. (2002). Learning to match and cluster entity names. Proceedings of
KDD-2002, 8th International Conference on Knowledge Discovery and Data Mining.
Collins, M. (2002). Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms.
Friedman, N., Getoor, L., Koller, D., & Pfeffer, A. (1999). Learning probabilistic relational models.
IJCAI (pp. 1300?1309).
Ge, N., Hale, J., & Charniak, E. (1998). A statistical approach to anaphora resolution. Proceedings
of the Sixth Workshop on Very Large Corpora (pp. 161?171).
Harabagiu, S., Bunescu, R., & Maiorano, S. (2001). Text and knowledge mining for coreference
resolution. Proceedings of the 2nd Meeting of the North American Chapter of the Association of
Computational Linguistics (NAACL-2001) (pp. 55?62).
Lafferty, J., McCallum, A., & Pereira, F. (2001). Conditional random fields: Probabilistic models for
segmenting and labeling sequence data. Proc. ICML (pp. 282?289).
McCallum, A., & Li, W. (2003). Early results for named entity recognition with conditional random
fields, feature induction and web-enhanced lexicons. Seventh Conference on Natural Language
Learning (CoNLL).
McCallum, A., Nigam, K., & Ungar, L. H. (2000). Efficient clustering of high-dimensional data sets
with application to reference matching. Knowledge Discovery and Data Mining (pp. 169?178).
McCarthy, J. F., & Lehnert, W. G. (1995). Using decision trees for coreference resolution. IJCAI (pp.
1050?1055).
Monge, A. E., & Elkan, C. (1997). An efficient domain-independent algorithm for detecting approximately duplicate database records. Research Issues on Data Mining and Knowledge Discovery.
Morton, T. (1997). Coreference for NLP applications. Proceedings ACL.
Ng, V., & Cardie, C. (2002). Improving machine learning approaches to coreference resolution.
Fortieth Anniversary Meeting of the Association for Computational Linguistics (ACL-02).
Pasula, H., Marthi, B., Milch, B., Russell, S., & Shpitser, I. (2003). Identity uncertainty and citation
matching. Advances in Neural Information Processing (NIPS).
Peng, F., & McCallum, A. (2004). Accurate information extraction from research papers using conditional random fields. Proceedings of Human Language Technology Conference and North American Chapter of the Association for Computational Linguistics (HLT-NAACL).
Pinto, D., McCallum, A., Lee, X., & Croft, W. B. (2003). Table extraction using conditional random
fields. Proceedings of the 26th ACM SIGIR.
Sha, F., & Pereira, F. (2003). Shallow parsing with conditional random fields (Technical Report CIS
TR MS-CIS-02-35). University of Pennsylvania.
Soon, W. M., Ng, H. T., & Lim, D. C. Y. (2001). A machine learning approach to coreference
resolution of noun phrases. Computational Linguistics, 27, 521?544.
Taskar, B., Abbeel, P., & Koller, D. (2002). Discriminative probabilistic models for relational data.
Eighteenth Conference on Uncertainty in Artificial Intelligence (UAI02).
Zelenko, D., Aone, C., & Richardella, A. (2003). Kernel methods for relation extraction. Journal of
Machine Learning Research (submitted).
Pictorial Structures for Molecular
Modeling: Interpreting Density Maps
Frank DiMaio, Jude Shavlik
Department of Computer Sciences
University of Wisconsin-Madison
{dimaio,shavlik}@cs.wisc.edu
George Phillips
Department of Biochemistry
University of Wisconsin-Madison
[email protected]
Abstract
X-ray crystallography is currently the most common way protein
structures are elucidated. One of the most time-consuming steps in
the crystallographic process is interpretation of the electron density
map, a task that involves finding patterns in a three-dimensional
picture of a protein. This paper describes DEFT (DEFormable
Template), an algorithm using pictorial structures to build a
flexible protein model from the protein's amino-acid sequence.
Matching this pictorial structure into the density map is a way of
automating density-map interpretation. Also described are several
extensions to the pictorial structure matching algorithm necessary
for this automated interpretation. DEFT is tested on a set of
density maps ranging from 2 to 4Å resolution, producing root-mean-squared errors ranging from 1.38 to 1.84Å.
1 Introduction
An important question in molecular biology is what is the structure of a particular
protein? Knowledge of a protein?s unique conformation provides insight into the
mechanisms by which a protein acts. However, no algorithm exists that accurately
maps sequence to structure, and one is forced to use "wet" laboratory methods to
elucidate the structure of proteins. The most common such method is x-ray
crystallography, a rather tedious process in which x-rays are shot through a crystal
of purified protein, producing a pattern of spots (or reflections) which is processed,
yielding an electron density map. The density map is analogous to a threedimensional image of the protein. The final step of x-ray crystallography ? referred
to as interpreting the map ? involves fitting a complete molecular model (that is, the
position of each atom) of the protein into the map. Interpretation is typically
performed by a crystallographer using a time-consuming manual process. With
large research efforts being put into high-throughput structural genomics,
accelerating this process is important. We investigate speeding the process of x-ray
crystallography by automating this time-consuming step.
When interpreting a density map, the amino-acid sequence of the protein is known
in advance, giving the complete topology of the protein. However, the intractably
large conformational space of a protein ? with hundreds of amino acids and
thousands of atoms ? makes automated map interpretation challenging. A few
groups have attempted automatic interpretation, with varying success [1,2,3,4].
Figure 1: This graphic illustrates density map quality at various resolutions (1Å through 5Å). All resolutions depict the same alpha helix structure.
Confounding the problem are several sources of error that make automated
interpretation extremely difficult. The primary source of difficulty is due to the
crystal only diffracting to a certain extent, eliminating higher frequency components
of the density map. This produces an overall blurring effect evident in the density
map. This blurring is quantified as the resolution of the density map and is
illustrated in Figure 1. Noise inherent in data collection further complicates
interpretation. Given minimal noise and sufficiently good resolution - about 2.3Å
or less - automated density map interpretation is essentially solved [1]. However,
in poorer quality maps, interpretation is difficult and inaccurate, and other
automated approaches have failed.
The remainder of the paper describes DEFT (DEFormable Template), our
computational framework for building a flexible three-dimensional model of a
molecule, which is then used to locate patterns in the electron density map.
2 Pictorial structures
Pictorial structures model classes of objects as a single flexible template. The
template represents the object class as a collection of parts linked in a graph
structure. Each edge defines a relationship between the two parts it connects. For
example, a pictorial structure for a face may include the parts "left eye" and "right
eye." Edges connecting these parts could enforce the constraint that the left eye is
adjacent to the right eye. A dynamic programming (DP) matching algorithm of
Felzenszwalb and Huttenlocher (hereafter referred to as the F-H matching
algorithm) [5] allows pictorial structures to be quickly matched into a twodimensional image. The matching algorithm finds the globally optimal position and
orientation of each part in the pictorial structure, assuming conditional
independence on the position of each part given its neighbors.
Formally, we represent the pictorial structure as a graph G = (V, E), V = {v_1, v_2, ..., v_n}
the set of parts, and edge e_ij ∈ E connecting neighboring parts v_i and v_j if an explicit
dependency exists between the configurations of the corresponding parts. Each part
v_i is assigned a configuration l_i describing the part's position and orientation in the
image. We assume Markov independence: a part's configuration is conditionally
independent of every other part's configuration, given the configurations of all
the part's neighbors in the graph. We assign each edge
a deformation cost d_ij(l_i, l_j), and each part a "mismatch" cost m_i(l_i, I). These functions
are the negative log likelihoods of a part (or pair of parts) taking a specified
configuration, given the pictorial structure model.
The matching algorithm places the model into the image using maximum-likelihood.
That is, it finds the configuration L of parts in model Θ in image I maximizing

\[ P(L \mid I, \Theta) \propto P(I \mid L, \Theta) \, P(L \mid \Theta) = \frac{1}{Z} \exp\bigg( -\sum_{v_i \in V} m_i(l_i, I) \bigg) \cdot \exp\bigg( -\sum_{(v_i, v_j) \in E} d_{ij}(l_i, l_j) \bigg). \qquad (1) \]
Figure 2. An "interpreted" density map. The right figure shows the arrangement of atoms that generated the observed density.

Figure 3. An example of the construction of a pictorial structure model given an amino acid.
By monotonicity of exponentiation, this minimizes Σ_{v_i∈V} m_i(l_i, I) + Σ_{(v_i,v_j)∈E} d_ij(l_i, l_j).
The F-H matching algorithm places several additional limitations on the pictorial
structure. The object's graph must be tree structured (cyclic constraints are not
allowed), and the deformation cost function must take the form ||T_ij(l_i) − T_ji(l_j)||, where
T_ij and T_ji are arbitrary functions and ||·|| is some norm (e.g. Euclidean distance).
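For intuition, the tree-structured minimization reduces to a leaves-to-root dynamic program. The following sketch (ours) is a naive version that enumerates k discrete configurations per part and ignores the distance-transform trick that makes the F-H algorithm fast:

```python
def match_tree(parts, children, m_cost, d_cost, n_configs):
    """Leaves-to-root DP for tree-structured matching (naive O(k^2) per
    edge).  `parts` lists part ids with every parent preceding its children;
    m_cost[p][l] is the mismatch cost of part p in configuration l;
    d_cost[(p, c)][lp][lc] is the deformation cost between configurations."""
    best = {}  # best[p][l]: cheapest cost of the subtree rooted at p given l
    for p in reversed(parts):
        best[p] = []
        for lp in range(n_configs[p]):
            cost = m_cost[p][lp]
            for c in children[p]:
                cost += min(d_cost[(p, c)][lp][lc] + best[c][lc]
                            for lc in range(n_configs[c]))
            best[p].append(cost)
    return min(best[parts[0]])  # optimal total cost at the root
```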
3 Building a flexible atomic model
Given a three-dimensional map containing a large molecule and the topology (i.e.,
for proteins, the amino-acid sequence) of that molecule, our task is to determine the
Cartesian coordinates in the 3D density map of each atom in the molecule. Figure 2
shows a sample interpreted density map. DEFT finds the coordinates of all atoms
simultaneously by first building a pictorial structure corresponding to the protein,
then using F-H matching to optimally place the model into the density map. This
section describes DEFT's deformation cost function and matching cost function.
DEFT's deformation cost is related to the probability of observing a particular
configuration of a molecule. Ideally, this function is proportional to the inverse of
the molecule's potential function, since configurations with lower potential energy
are more likely observed in nature. However, this potential is quite complicated and
cannot be accurately approximated in a tree-structured pictorial structure graph.
Our solution is to only consider the relationships between covalently bonded atoms.
DEFT constructs a pictorial structure graph where vertices correspond to non-hydrogen atoms, and edges correspond to the covalent bonds joining atoms. The
cost function each edge defines maintains invariants - interatomic distance and bond
angles - while allowing free rotation around the bond. Given the protein's amino
acid sequence, model construction, illustrated in Figure 3, is trivial. Each part's
configuration is defined by six parameters: three translational, three rotational
(Euler angles θ, φ, and ψ). For the cost function, we define a new connection type
in the pictorial structure framework, the screw-joint, shown in Figure 4.
The screw-joint's cost function is mathematically specified in terms of a directed
version of the pictorial structure's undirected graph. Since the graph is constrained
by the fast matching algorithm to take a tree structure, we arbitrarily pick a root
node and point every edge toward this root. We now define the screw joint in terms
of a parent and a child. We rotate the child such that its z axis is coincident with the
vector from child to parent, and allow each part in the model (that is, each atom) to
freely rotate about its local z axis. The ideal geometry between child and parent is
then described by three parameters stored at each edge, x_ij = (x_ij, y_ij, z_ij). These three
parameters define the optimal translation between parent and child, in the
coordinate system of the parent (which in turn is defined such that its z-axis
corresponds to the axis connecting it to its parent).
In using these to construct the cost function d_ij, we define the function T_ij, which
maps a parent v_i's configuration l_i into the configuration l_j of that parent's ideal child
v_j. Given parameters x_ij on the edge between v_i and v_j, the function is defined
\[ T_{ij}(x_i, y_i, z_i, \theta_i, \phi_i, \psi_i) = (x_j, y_j, z_j, \theta_j, \phi_j, \psi_j) \qquad (2) \]

with θ_j = θ_i, φ_j = atan2(√(x'² + y'²), −z'), ψ_j = π/2 + atan2(y', x'), and

\[ (x_j, y_j, z_j) = (x_i, y_i, z_i) + (x', y', z'), \]

where (x', y', z') is the rotation of the bond parameters (x_ij, y_ij, z_ij) to world coordinates;
that is, (x', y', z')^T = R_{θ_i,φ_i,ψ_i} (x_ij, y_ij, z_ij)^T, with R_{θ_i,φ_i,ψ_i} the rotation matrix corresponding to
Euler angles (θ_i, φ_i, ψ_i). The expressions for φ_j and ψ_j define the optimal orientation
of each child: +z coincident with the axis that connects child and parent.
The F-H matching algorithm requires our cost function to take a particular form,
specifically, it must be some norm. The screw-joint model sets the deformation cost
between parent v_i and child v_j to the distance between child configuration l_j and
T_ij(l_i), the ideal child configuration given parent configuration l_i (T_ji in equation (2)
is simply the identity function). We use the 1-norm weighted in each dimension:
\[ \begin{aligned} d_{ij}(l_i, l_j) = \big\| T_{ij}(l_i) - l_j \big\| = {} & w_{ij}^{\mathrm{rotate}} \, \big| \theta_i - \theta_j \big| \\ & + w_{ij}^{\mathrm{orient}} \Big( \big| \mathrm{atan2}\big(\sqrt{x'^2 + y'^2},\, -z'\big) - \phi_j \big| + \big| \pi/2 + \mathrm{atan2}(y', x') - \psi_j \big| \Big) \\ & + w_{ij}^{\mathrm{translate}} \Big( \big| (x_i + x') - x_j \big| + \big| (y_i + y') - y_j \big| + \big| (z_i + z') - z_j \big| \Big). \qquad (3) \end{aligned} \]
In the above equation, w_ij^rotate is the cost of rotating about a bond, w_ij^orient is the cost
of rotating around any other axis, and w_ij^translate is the cost of translating in x, y or z.
DEFT's screw-joint model sets w_ij^rotate to 0, and w_ij^orient and w_ij^translate to +100.
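A direct transcription of Equations 2 and 3 might look like the following sketch (ours, under our reconstruction of the symbols above; the 'zyz' Euler convention and the use of scipy are assumptions, and angle differences are left unwrapped for brevity):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def ideal_child(parent_cfg, bond):
    """Equation 2 sketch: parent configuration (x, y, z, theta, phi, psi)
    and bond parameters (x_ij, y_ij, z_ij) -> ideal child configuration.
    The 'zyz' Euler convention is our assumption, not stated in the paper."""
    x, y, z, theta, phi, psi = parent_cfg
    xp, yp, zp = Rotation.from_euler('zyz', [theta, phi, psi]).apply(bond)
    return np.array([x + xp, y + yp, z + zp,
                     theta,                              # free spin about bond
                     np.arctan2(np.hypot(xp, yp), -zp),  # aim child +z at bond
                     np.pi / 2 + np.arctan2(yp, xp)])

def deformation_cost(parent_cfg, child_cfg, bond,
                     w_rotate=0.0, w_orient=100.0, w_translate=100.0):
    """Equation 3 sketch: weighted 1-norm between the child's configuration
    and its ideal configuration given the parent."""
    diff = np.abs(ideal_child(parent_cfg, bond) - np.asarray(child_cfg))
    return (w_rotate * diff[3]
            + w_orient * (diff[4] + diff[5])
            + w_translate * diff[:3].sum())
```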
DEFT's match-cost function implementation is based upon Cowtan's fffear
algorithm [4]. This algorithm quickly and efficiently calculates the mean squared
distance between a weighted 3D template of density and a region in a density map.
Given a learned template and a corresponding weight function, fffear uses a Fourier
convolution to determine the maximum likelihood that the weighted template
generated a region of density in the density map.
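In spirit, the score for a single template placement is a weighted squared mismatch; a brute-force version (ours; fffear instead evaluates all placements at once with FFTs) might be written as:

```python
import numpy as np

def template_mismatch(density, template, weight, corner):
    """Weighted squared mismatch of one template placement: the quantity
    fffear evaluates for every placement via Fourier convolution.
    `corner` gives the voxel indices where the template's corner sits."""
    i, j, k = corner
    di, dj, dk = template.shape
    region = density[i:i + di, j:j + dj, k:k + dk]
    return float(np.sum(weight * (region - template) ** 2))
```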
For each non-hydrogen atom in the protein, we create a target template
corresponding to a neighborhood around that particular atom, using a training set of
crystallographer-solved structures. We build a separate template for each atom type
- e.g., the γ-carbon (2nd sidechain carbon) of leucine and the backbone oxygen of
serine - producing 171 different templates in total. A part's m function is the fffear-computed mismatch score of that part's template over all positions and orientations.
Once we construct the model, parameters - including the optimal orientation x_ij
corresponding to each edge, and the template for each part - are learned by training
Figure 4: Showing the screw-joint connection between two parts in the model. In the directed version of the MRF, v_i is the parent of v_j. By definition, v_j is oriented such that its local z-axis is coincident with its ideal bond orientation x_ij = (x_ij, y_ij, z_ij)^T in v_i. Bond parameters x_ij are learned by DEFT.
the model on a set of crystallographer-determined structures. Learning the
orientation parameters is fairly simple: for each atom we define canonic coordinates
(where +z corresponds to the axis of rotation). For each child, we record the
distance r and orientation (θ, φ) in the canonic coordinate frame. We average over
all atoms of a given type in our training set - e.g., over all leucine γ-carbons - to
determine average parameters r_avg, θ_avg, and φ_avg. Converting these averages from
spherical to Cartesian coordinates gives the ideal orientation parameters x_ij.
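The geometry-averaging step might be sketched as follows (ours; each training example is assumed to supply the child's offset already rotated into the parent's canonic frame):

```python
import numpy as np

def average_bond_geometry(offsets):
    """Average spherical bond coordinates over all training atoms of one
    type, then convert back to the Cartesian parameters x_ij.  `offsets`
    holds each child's (x, y, z) offset in the parent's canonic frame.
    Naive angle averaging; circular means would be safer near the wrap."""
    offsets = np.asarray(offsets, dtype=float)        # shape (n, 3)
    r = np.linalg.norm(offsets, axis=1)
    theta = np.arccos(offsets[:, 2] / r)              # polar angle from +z
    phi = np.arctan2(offsets[:, 1], offsets[:, 0])    # azimuth
    r_a, t_a, p_a = r.mean(), theta.mean(), phi.mean()
    return np.array([r_a * np.sin(t_a) * np.cos(p_a),
                     r_a * np.sin(t_a) * np.sin(p_a),
                     r_a * np.cos(t_a)])
```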
A similarly-defined canonic coordinate frame is employed when learning the model
templates; in this case, DEFT's learning algorithm computes target and weight
templates based on the average and inverse variance over the training set,
respectively. Figure 5 shows an overview of the learning process. Implementation
used Cowtan's Clipper library.
For each part in the model, DEFT searches through a six-dimensional conformation
space (x, y, z, θ, φ, ψ), breaking each dimension into a number of discrete bins. The
translational parameters x, y, and z are sampled over a region in the unit cell.
Rotational space is uniformly sampled using an algorithm described by Mitchell [6].
4 Model Enhancements
Upon initial testing, the pictorial-structure matching algorithm performs rather
poorly at the density-map interpretation task. Consequently, we added two routines
? a collision-detection routine, and an improved template-matching routine ? to
DEFT's pictorial-structure matching implementation. Both enhancements can be
applied to the general pictorial structure algorithm, and are not specific to DEFT.
4.1 Collision Detection
Our closer investigation revealed that much of the algorithm's poor performance is
due to distant chains colliding. Since DEFT only models covalent bonds, the
matching algorithm sometimes returns a structure with non-bonded atoms
impossibly close together. These collisions were a problem in DEFT's initial
implementation. Figure 6 shows such a collision (later corrected by the algorithm).
Given a candidate solution, it is straightforward to test for spatial collisions: we
simply test if any two atoms in the structure are impossibly (physically) close
together. If a collision occurs in a candidate, DEFT perturbs the structure.
Figure 5: An overview of the parameter-learning process. For each atom of a given type - here alanine Cβ - we rotate the atom into a canonic orientation. We then average over every atom of that type to get a template and average bond geometry. (Panels: fffear target template map; standard orientation; averaged bond geometry, e.g. Cβ: r = 1.53, θ = 0.0°, φ = -19.3°; C: r = 1.51, θ = 118.4°, φ = -19.7°.)
Figure 6: An illustration of the collision-avoidance algorithm. On the left is a collision (the predicted molecule is in the darker color); the amino acid's sidechain is placed coincident with the backbone. On the right, collision avoidance finds the right structure.
Though the optimal match is no longer returned, this approach works well in practice. If
two atoms are both aligned to the same space in the most probable conformation, it
seems quite likely that one of the atoms belongs there. Thus, DEFT handles
collisions by assuming that at least one of the two colliding branches is correct.
When a collision occurs, DEFT finds the closest branch point above the colliding
nodes, that is, the root y of the minimum subtree containing all colliding nodes.
DEFT considers each child x_i of this root, matching the subtree rooted at x_i while keeping
the remainder of the tree fixed. The change in score for each perturbed branch is
recorded, and the one with the smallest score increase is the one DEFT keeps.
Table 1 describes the collision-avoidance algorithm. In the case that the colliding
node is due to a chain wrapping around on itself (and not two branches running into
one another), the root y is defined as the colliding node nearest to the top of the tree.
Everything below y is matched anew while the remainder of the structure is fixed.
4.2 Improved template matching
In our original implementation, DEFT learned a template by averaging over each of
the 171 atom types. For example, for each of the 12 (non-hydrogen) atoms in the
amino acid tyrosine we build a single template, producing 12 tyrosine templates in
total. Not only is this inefficient, requiring DEFT to match redundant templates
against the unsolved density map, but also, for some atoms in flexible sidechains,
averaging blurs density contributions from atoms more than a bond away from the
target, losing valuable information about an atom's neighborhood.
DEFT improves the template-matching algorithm by modeling the templates using a
mixture of Gaussians, a generative model where each template is modeled using a
mixture of basis templates. Each basis template is simply the mean of a cluster of
templates. Cluster assignments are learned iteratively using the EM algorithm. In
each iteration we compute the posterior probability of each template image being
generated by each cluster mean (the E step). Then we use these
probabilities to update the cluster means (the M step). After convergence, we use
each cluster mean (and weight) as an fffear search target.
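A minimal sketch of this clustering step, assuming templates have been flattened to vectors and modeled with unit-variance clusters; the initialization, iteration count, and k = 24 (the paper's learned template count) are illustrative choices, not DEFT's actual code:

```python
import numpy as np

def em_template_clusters(templates, k=24, iters=50, seed=0):
    """Cluster flattened template vectors into k basis templates with an
    isotropic-Gaussian EM (a soft k-means)."""
    templates = np.asarray(templates, dtype=float)
    rng = np.random.default_rng(seed)
    n, d = templates.shape
    means = templates[rng.choice(n, k, replace=False)]  # random initialization
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E step: posterior responsibility of each cluster for each template.
        sq = ((templates[:, None, :] - means[None, :, :]) ** 2).sum(-1)
        logp = np.log(weights) - 0.5 * sq
        logp -= logp.max(axis=1, keepdims=True)         # numerical stability
        resp = np.exp(logp)
        resp /= resp.sum(axis=1, keepdims=True)
        # M step: update cluster means and mixing weights.
        nk = resp.sum(axis=0)
        means = (resp.T @ templates) / nk[:, None]
        weights = nk / n
    return means, weights
```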
Table 1. DEFT's collision-handling routine.
Given:   an illegal pictorial-structure configuration L = {l1, l2, ..., ln}
Return:  a legal perturbation L'
Algorithm:
  X  ← all nodes in L illegally close to some other node
  y  ← root of the smallest subtree containing all nodes in X
  for each child x_i of y:
      L_i     ← optimal position of the subtree rooted at x_i, fixing the remainder of the tree
      score_i ← score(L_i) - score(subtree of L rooted at x_i)
  i_min ← argmin_i(score_i)
  L'    ← replace the subtree rooted at x_{i_min} in L with L_{i_min}
  return L'
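The following Python sketch restates the routine; the tree representation, the clash threshold, and the match_subtree / subtree_score callbacks are placeholders standing in for DEFT's internals, not the paper's code:

```python
import itertools
import math

MIN_DIST = 2.0  # illustrative non-bonded clash threshold, in Angstroms

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def smallest_covering_subtree(children, root, targets):
    """Root of the smallest subtree containing every node in `targets`."""
    def covered(n):
        found = {n} & targets
        for c in children.get(n, []):
            found |= covered(c)
        return found
    node = root
    while True:
        nxt = [c for c in children.get(node, []) if covered(c) >= targets]
        if not nxt:          # no single child covers all clashing nodes
            return node
        node = nxt[0]

def handle_collision(children, root, config, match_subtree, subtree_score):
    """Re-match one child branch of the clash-covering subtree, keeping the
    rest of the tree fixed, and keep the cheapest perturbation."""
    clashing = {n for a, b in itertools.combinations(config, 2)
                if dist(config[a], config[b]) < MIN_DIST for n in (a, b)}
    if not clashing:
        return config
    y = smallest_covering_subtree(children, root, clashing)
    best = None
    for x in children.get(y, []):
        candidate = dict(config)
        candidate.update(match_subtree(x, config))   # L_i: re-matched branch
        delta = subtree_score(x, candidate) - subtree_score(x, config)
        if best is None or delta < best[0]:          # smallest score increase
            best = (delta, candidate)
    # If y has no children (a chain wrapped on itself), everything below y
    # would be re-matched instead; that case is omitted in this sketch.
    return best[1] if best else config
```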
5 Experimental Studies
We tested DEFT on a set of proteins provided by the Phillips lab at the University
of Wisconsin. The set consists of four different proteins, all around 2.0 Å in
resolution. With all four proteins, reflections and experimentally-determined initial
phases were provided, allowing us to build four relatively poor-quality density
maps. To test our algorithm with poor-quality data, we down-sampled each of the
maps to 2.5, 3, and 4 Å by removing higher-resolution reflections and recomputed the
density. These down-sampled maps are physically identical to maps natively
constructed at this resolution. Each structure had been solved by crystallographers.
For this paper, our experiments are conducted under the assumption that the
mainchain atoms of the protein were known to within some error factor. This
assumption is fair; approaches exist for mainchain tracing in density maps [7].
DEFT simply walks along the mainchain, placing atoms one residue at a time
(considering each residue independently).
We split our dataset into a training set of about 1000 residues and a test set of about
100 residues (from a protein not in the training set). Using the training set we built
a set of templates for matching using fffear. The templates extended to a 6 Å radius
around each atom at 0.5 Å sampling. Two sets of templates were built and
subsequently matched: a large set of 171 produced by averaging all training-set
templates for each atom type, and a smaller set of 24 learned by the EM
algorithm. We ran DEFT's pictorial-structure matching algorithm using both sets of
templates, with and without the collision detection code.
Although placing individual atoms into the sidechain is fairly quick, taking less than
six hours for a 200-residue protein, computing fffear match scores is very CPU-demanding. For each of our 171 templates, fffear takes 3-5 CPU-hours to compute
the match score at each location in the image, for a total of one CPU-month to
match templates into each protein! Fortunately the task is trivially parallelizable; we
regularly do computations on over 100 computers simultaneously.
The results of all tests are summarized in Figure 7. Using individual-atom
templates and the collision-detection code, the all-atom RMS deviation varied from
1.38 Å at 2 Å resolution to 1.84 Å at 4 Å. Using the EM-based clusters as templates
produced slight or no improvement. However, much less work is required; only 24
templates need to be matched to the image instead of 171 individual-atom templates.
Finally, it was promising that collision detection led to significant error reduction.
It is interesting to note that individually using the improved templates and using the
collision avoidance both improved the search results; however, using both together was
a bit worse than with collision detection alone. More research is needed to find a
synergy between the two enhancements. Further investigation is also needed to balance
the number of templates against template size. The match cost function is a critically
important part of DEFT, and improvements there will have the most profound impact on
the overall error.
[Figure 7 plot: test-protein RMS deviation (0.0-4.0 Å) against density-map resolution (2, 2.5, 3, and 4 Å) for four strategies: base, improved templates only, collision detection + improved templates, and collision detection only.]
Figure 7. Testset error under four strategies.
6 Conclusions and future work
DEFT has applied the F-H pictorial structure matching algorithm to the task of
interpreting electron density maps. In the process, we extended the F-H algorithm
in three key ways. In order to model atoms rotating in 3D, we designed another
joint type, the screw joint. We also developed extensions to deal with spatial
collisions of parts in the model, and implemented a slightly-improved template
construction routine. Both enhancements can be applied to pictorial-structure
matching in general, and are not specific to the task presented here.
DEFT attempts to bridge the gap between two types of model-fitting approaches for
interpreting electron density maps. Several techniques [1,2,3] do a good job
placing individual atoms, but all fail around 2.5-3 Å resolution. On the other hand,
fffear [4] has had success finding rigid elements in very poor-resolution maps, but is
unable to locate highly flexible "loops". Our work extends the resolution threshold
at which individual atoms can be identified in electron density maps. DEFT's
flexible model combines weakly-matching image templates to locate individual
atoms from maps where individual atoms have been blurred away. No other
approach has investigated sidechain refinement in structures of this poor resolution.
We next plan to use DEFT as the refinement phase complementing a coarser
method. Rather than modeling the configuration of each individual atom, we would treat
each amino acid as a single part in the flexible template, modeling only rotations
along the backbone. Then, our current algorithm could place each individual atom.
A different optimization algorithm that handles cycles in the pictorial structure
graph would better handle collisions (allowing edges between non-bonded atoms).
In recent work [8], loopy belief propagation [9] has been used with some success
(though with no optimality guarantee). We plan to explore the use of belief propagation in pictorial-structure matching, adding edges in the graph to avoid collisions.
Finally, the pictorial-structure framework upon which DEFT is built seems quite
robust; we believe the accuracy of our approach can be substantially improved
through implementation improvements, allowing finer grid spacing and larger fffear
ML templates. The flexible molecular template we have described has the potential
to produce an atomic model in a map where individual atoms may not be visible,
through the power of combining weakly matching image templates. DEFT could
prove important in high-throughput protein-structure determination.
Acknowledgments
This work supported by NLM Grant 1T15 LM007359-01, NLM Grant 1R01 LM07050-01,
and NIH Grant P50 GM64598.
References
[1] A. Perrakis, T. Sixma, K. Wilson, & V. Lamzin (1997). wARP: improvement and
extension of crystallographic phases. Acta Cryst. D53:448-455.
[2] D. Levitt (2001). A new software routine that automates the fitting of protein X-ray
crystallographic electron density maps. Acta Cryst. D57:1013-1019.
[3] T. Ioerger, T. Holton, J. Christopher, & J. Sacchettini (1999). TEXTAL: a pattern
recognition system for interpreting electron density maps. Proc. ISMB:130-137.
[4] K. Cowtan (2001). Fast fourier feature recognition. Acta Cryst. D57:1435-1444.
[5] P. Felzenszwalb & D. Huttenlocher (2000). Efficient matching of pictorial structures.
Proc. CVPR. pp. 66-73.
[6] J. Mitchell (2002). Uniform distributions of 3D rotations. Unpublished Document.
[7] J. Greer (1974). Three-dimensional pattern recognition. J. Mol. Biol. 82:279-301.
[8] E. Sudderth, M. Mandel, W. Freeman & A Willsky (2005). Distributed occlusion
reasoning for tracking with nonparametric belief propagation. NIPS.
[9] D. Koller, U. Lerner & D. Angelov (1999). A general algorithm for approximate
inference and its application to hybrid Bayes nets. UAI. 15:324-333.
1,715 | 2,559 | Spike Sorting: Bayesian Clustering of
Non-Stationary Data
Aharon Bar-Hillel
Neural Computation Center
The Hebrew University of Jerusalem
[email protected]
Adam Spiro
School of Computer Science and Engineering
The Hebrew University of Jerusalem
[email protected]
Eran Stark
Department of Physiology
The Hebrew University of Jerusalem
[email protected]
Abstract
Spike sorting involves clustering spike trains recorded by a microelectrode according to the source neuron. It is a complicated problem,
which requires a lot of human labor, partly due to the non-stationary nature of the data. We propose an automated technique for the clustering
of non-stationary Gaussian sources in a Bayesian framework. At a first
search stage, data is divided into short time frames and candidate descriptions of the data as a mixture of Gaussians are computed for each frame.
At a second stage transition probabilities between candidate mixtures are
computed, and a globally optimal clustering is found as the MAP solution of the resulting probabilistic model. Transition probabilities are
computed using local stationarity assumptions and are based on a Gaussian version of the Jensen-Shannon divergence. The method was applied
to several recordings. The performance appeared almost indistinguishable from humans in a wide range of scenarios, including movement,
merges, and splits of clusters.
1 Introduction
Neural spike activity is recorded with a micro-electrode which normally picks up the activity of multiple neurons. Spike sorting seeks the segmentation of the spike data such that
each cluster contains all the spikes generated by a different neuron. Currently, this task is
mostly done manually. It is a tedious mission, requiring many hours of human labor for
each recording session. Several algorithms have been proposed to help automate this
process (see [7] for a review, [9],[10]) and some tools were implemented to assist in manual
sorting [8]. However, the ability of suggested algorithms to replace the human worker has
been quite limited.
One of the main obstacles to a successful application is the non-stationary nature of the data
[7]. The primary source of this non-stationarity is slight movements of the recording
electrode. Slight drifts of the electrode's location, which are almost inevitable, cause changes in
the typical shapes of recorded spikes over time. Other sources of non-stationarity include
variable background noise and changes in the characteristic spike generated by a certain
neuron. The increasing usage of multiple electrode systems turns non-stationarity into an
acute problem, as electrodes are placed in a single location for long durations.
Using the first 2 PCA coefficients to represent the data (which preserves up to 93% of the
variance in the original recordings [1]), a human can cluster spikes by visual inspection.
When dividing the data into small enough time frames, cluster density can be approximated
by a multivariate Gaussian with a general covariance matrix without losing much accuracy [7]. Problematic scenarios which can appear due to non-stationarity are exemplified
in Section 4.2 and include: (1) movements and considerable shape changes of the clusters over time; (2) two clusters which are reasonably well-separated may move until they
converge and become indistinguishable. A split of a cluster is possible in the same manner.
Most spike sorting algorithms do not address the presented difficulties at all, as they assume
full stationarity of the data. Some methods [4, 11] try to cope with the lack of stationarity by
grouping data into many small clusters and identifying the clusters that can be combined
to represent the activity of a single unit. In the second stage, [4] uses ISI information
to understand which clusters cannot be combined, while [11] bases this decision on the
density of points between clusters. In [3] a semi-automated method is suggested, in which
each time frame is clustered manually, and then the correspondence between clusters in
consecutive time frames is established automatically. The correspondence is determined
by a heuristic score, and the algorithm doesn't handle merge or split scenarios.
In this paper we suggest a new fully automated technique to solve the clustering problem
for non-stationary Gaussian sources in a Bayesian framework. We divide the data into
short time frames in which stationarity is a reasonable assumption. We then look for good
mixture of Gaussians descriptions of the data in each time frame independently. Transition probabilities between local mixture solutions are introduced, and a globally optimal
clustering solution is computed by finding the Maximum-A-Posteriori (MAP) solution of
the resulting probabilistic model. The global optimization allows the algorithm to successfully disambiguate problematic time frames and exhibit close to human performance. We
present the outline of the algorithm in Section 2. The transition probabilities are computed
by optimizing the Jensen-Shannon divergence for Gaussians, as described in Section 3.
Empirical results and validation are presented in Section 4.
2 Clustering using a chain of Gaussian mixtures
Denote the observable spike data by D = {d}, where each spike d ∈ R^n is described by the vector of its PCA coefficients. We break the data into T disjoint groups
{D^t = {d_i^t}_{i=1}^{N^t}}_{t=1}^{T}. We assume that in each frame, the data can be well approximated by
a mixture of Gaussians, where each Gaussian corresponds to a single neuron. Each Gaussian in the mixture may have a different covariance matrix. The number of components in
the mixture is not known a priori, but is assumed to be within a certain range (we used 1-6).
In the search stage, we use a standard EM (Expectation-Maximization) algorithm to find
a set of M^t candidate mixture descriptions for each time frame t. We build the set of
candidates using a three step process. First, we run the EM algorithm with different number
of clusters and different initial conditions. In a second step, we import to each time frame
t the best mixture solutions found in the neighboring time frames [t-k, .., t+k] (we
used k = 2). These solutions are also adapted by using them as the initial conditions for
the EM and running a low number of EM rounds. This mixing of solutions between time
frames is repeated several times. Finally, the solution list in each time frame is pruned
to remove similar solutions. Solutions which don't comply with the assumption of
well-shaped Gaussians are also removed.
In order to handle outliers, which are usually background spikes or non-spike events, each
mixture candidate contains an additional "background model" Gaussian. This model's parameters are set to (0, K·Σ_t), where Σ_t is the covariance matrix of the data in frame t and
K > 1 is a constant. Only the weight of this model is allowed to change during the EM
process.
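One way to realize this in code is a standard EM loop with one frozen component; the sketch below assumes K = 4, random initialization, and a fixed iteration count, all of which are illustrative choices rather than the paper's:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def em_with_background(X, k, K=4.0, iters=100, seed=0):
    """Fit a k-component GMM plus one frozen 'background' Gaussian
    (mean 0, covariance K * cov(X)); only its weight is re-estimated."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    n, d = X.shape
    bg_mean, bg_cov = np.zeros(d), K * np.cov(X.T)
    means = X[rng.choice(n, k, replace=False)].astype(float)
    covs = np.array([np.cov(X.T)] * k)
    w = np.full(k + 1, 1.0 / (k + 1))            # last weight = background
    for _ in range(iters):
        # E step: responsibilities over the k free components + background.
        dens = np.column_stack(
            [mvn.pdf(X, means[j], covs[j]) for j in range(k)]
            + [mvn.pdf(X, bg_mean, bg_cov)])
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M step: the background mean/cov stay frozen; only weights move.
        nk = resp.sum(axis=0)
        w = nk / n
        for j in range(k):
            means[j] = resp[:, j] @ X / nk[j]
            diff = X - means[j]
            covs[j] = (resp[:, j, None] * diff).T @ diff / nk[j]
    return w, means, covs
```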
After the search stage, each time frame t has a list of M^t models {θ_i^t}_{t=1,i=1}^{T,M^t}. Each mixture model is described by a triplet θ_i^t = {α_{i,l}^t, μ_{i,l}^t, Σ_{i,l}^t}_{l=1}^{K_{i,t}}, denoting the Gaussian mixture's
weights, means, and covariances respectively. Given these candidate models we define a
discrete random vector Z = {z^t}_{t=1}^{T} in which each component z^t has a value range of
{1, 2, .., M^t}. "z^t = j" has the semantics of "at time frame t the data is distributed according to the candidate mixture θ_j^t". In addition we define for each spike d_i^t a hidden discrete
"label" random variable l_i^t. This label indicates which Gaussian in the local mixture hypothesis is the source of the spike. Denote by L^t = {l_i^t}_{i=1}^{N^t} the vector of labels of time
frame t, and by L the vector of all the labels.
[Figure 1 diagrams: (A) a chain of frames z^1, ..., z^T, each generating a label vector L^t and data D^t; (B) data D, visible labels L, and hidden labels H.]
Figure 1: (A) A Bayesian network model of the data generation process. The network has an HMM
structure, but unlike HMM it does not have fixed states and transition probabilities over time. The
variables and the CPDs are explained in Section 2. (B) A Bayesian network representation of the
relations between the data D and the hidden labels H (see Section 3.1). The visible labels L and the
sampled data points are independent given the hidden labels.
We describe the probabilistic relations between D, L, and Z using a Bayesian network
with the structure described in Figure 1A. Using the network structure and assuming i.i.d.
samples, the joint log-probability decomposes into

log P(z^1) + Σ_{t=2}^{T} log P(z^t | z^{t-1}) + Σ_{t=1}^{T} Σ_{i=1}^{N^t} [ log P(l_i^t | z^t) + log P(d_i^t | l_i^t, z^t) ]    (1)
We wish to maximize this log-likelihood over all possible choices of L, Z. Notice that
by maximizing the probability of both data and labels we avoid the tendency to prefer
mixtures with many Gaussians, which appears when maximizing the probability of the
data alone. The conditional probability distributions (CPDs) of the points' labels, and of the
points themselves, given an assignment to Z, are given by

log P(l_k^t = j | z^t = i) = log α_{i,j}^t    (2)

log P(d_k^t | l_k^t = j, z^t = i) = -(1/2) [ n log 2π + log |Σ_{i,j}^t| + (d_k^t - μ_{i,j}^t)^T (Σ_{i,j}^t)^{-1} (d_k^t - μ_{i,j}^t) ]

The transition CPDs P(z^t | z^{t-1}) are described in Section 3. For the first frame's prior we
use a uniform CPD. The MAP solution for the model is found using the Viterbi algorithm.
Labels are then unified using the correspondences established between the chosen mixtures
in consecutive time frames. As a final adjustment step, we repeat the mixing process using
only the mixtures of the found MAP solution. Using this set of new candidates, we calculate
the final MAP solution in the same manner described above.
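The Viterbi pass over the candidate chain is entirely standard; a compact sketch (variable names are illustrative):

```python
import numpy as np

def map_chain(local_scores, trans_logp):
    """Viterbi over candidate mixtures.

    local_scores[t][i]:  log P(D^t, L^t | z^t = i) for candidate i of frame t
    trans_logp[t][i, j]: log P(z^{t+1} = j | z^t = i)
    Returns the MAP sequence of candidate indices, one per frame.
    """
    T = len(local_scores)
    delta = [np.asarray(local_scores[0], dtype=float)]  # uniform prior on z^1
    back = []
    for t in range(1, T):
        prev = delta[-1][:, None] + trans_logp[t - 1]   # shape (M^{t-1}, M^t)
        back.append(prev.argmax(axis=0))
        delta.append(prev.max(axis=0) + local_scores[t])
    z = [int(delta[-1].argmax())]
    for t in range(T - 2, -1, -1):                      # backtrack
        z.append(int(back[t][z[-1]]))
    return z[::-1]
```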
3 A statistical distance between mixtures
The transition CPDs of the form P(z^t | z^{t-1}) are based on the assumption that the Gaussian sources' distributions are approximately stationary in pairs of consecutive time frames.
Under this assumption, two mixture candidates estimated at consecutive time frames are
viewed as two samples from a single unknown Gaussian mixture. We assume that each
Gaussian component from any of the two mixtures arises from a single Gaussian component in the joint hidden mixture, and so the hidden mixture induces a partition of the set of
visible components into clusters. Gaussian components in the same cluster are assumed to
arise from the same hidden source. Our estimate of P(z^t = j | z^{t-1} = i) is based on the
probability of seeing two large samples with different empirical distributions (θ_i^{t-1} and θ_j^t
respectively) under the assumption of such a single joint mixture. In Section 3.1, the estimation of the transition
probability is formalized as an optimization of a Jensen-Shannon based score over the possible partitions of the Gaussian components set.
If the family of allowed hidden mixture models is not further constrained, the optimization
problem derived in Section 3.1 is trivially solved by choosing the most detailed partition
(each visible Gaussian component is a singleton). This happens because a richer partition,
which does not merge many Gaussians, gets a higher score. In Section 3.2 we suggest
natural constraints on the family of allowed partitions in the two cases of constant and
variable number of clusters through time, and present algorithms for both cases.
3.1 A Jensen-Shannon based transition score
Assume that in two consecutive time frames we observed two labeled samples
(X^1, L^1), (X^2, L^2) of sizes N^1, N^2 with empirical distributions θ^1, θ^2 respectively. By
"empirical distribution", or "type" in the notation of [2], we denote the ML parameters of
the sample, for both the multinomial distribution of the mixture weights and the Gaussian distributions of the components. As stated above, we assume that the joint sample
of size N = N^1 + N^2 is generated by a hidden Gaussian mixture θ^H with K^H components, and its components are determined by a partition of the set of all components
in θ^1, θ^2. For convenience of notation, let us order this set of K^1 + K^2 Gaussians
and refer to them (and to their parameters) using one index. We can define a function
R : {1, .., K^1 + K^2} → {1, .., K^H} which matches each visible Gaussian component in
θ^1 or θ^2 to its hidden source component in θ^H. Denote the labels of the sample points
under the hidden mixture by H^j = {h_i^j}_{i=1}^{N^j}, j = 1, 2. The values of these variables are given
by h_i^j = R(l_i^j), where l_i^j is the label index in the set of all components.
The probabilistic dependence between a data point, its visible label, and its hidden label is
explained by the Bayesian network model in Figure 1B. We assume a data point is obtained
by choosing a hidden label and then sampling the point from the relevant hidden component.
The visible label is then sampled based on the hidden label using a multinomial distribution
with parameters β = {β_q}_{q=1}^{K^1+K^2}, where β_q = P(l = q | h = R(q)), i.e., the probability
of the visible label q given the hidden label R(q) (since H is deterministic given L, P(l = q | h) = 0 for h ≠ R(q)). Denote this model, which is fully determined by R, β, and θ^H,
by M^H.
We wish to estimate P((X^1, L^1) ~ θ^1 | (X^2, L^2) ~ θ^2, M^H). We use ML approximations and arguments based on the method of types [2] to approximate this probability and
optimize it with respect to θ^H and β. The obtained result is (the derivation is omitted)

P((X^1, L^1) ~ θ^1 | (X^2, L^2) ~ θ^2, M^H) ≈ max_R exp( -N Σ_{m=1}^{K^H} α_m^H Σ_{q:R(q)=m} β_q D_kl( G(x | μ_q, Σ_q) || G(x | μ_m^H, Σ_m^H) ) )    (3)

where G(x | μ, Σ) denotes a Gaussian distribution with parameters μ, Σ, and the optimized θ^H, β appearing here are given as follows. Denote by w_q (q ∈ {1, .., K^1 + K^2})
the weight of component q in a naive joint mixture of θ^1, θ^2, i.e., w_q = (N^j / N) α_q, where j = 1 if
component q is part of θ^1 and the same for j = 2.

α_m^H = Σ_{q:R(q)=m} w_q,    β_q = w_q / α_{R(q)}^H,    μ_m^H = Σ_{q:R(q)=m} β_q μ_q,    (4)

Σ_m^H = Σ_{q:R(q)=m} β_q ( Σ_q + (μ_q - μ_m^H)(μ_q - μ_m^H)^T )

Notice that the parameters of a hidden Gaussian, μ_m^H and Σ_m^H, are just the mean and covariance of the mixture Σ_{q:R(q)=m} β_q G(x | μ_q, Σ_q). The summation over q in expression (3)
can be interpreted as the Jensen-Shannon (JS) divergence between the components assigned
to the hidden source m, under Gaussian assumptions.
For a given parametric family, the JS divergence is a non-negative measurement which
can be used to test whether several samples are derived from a single distribution from the
family or from a mixture of different ones [6]. The JS divergence is computed for a mixture
of n empirical distributions P_1, .., P_n with mixture weights β_1, .., β_n. In the Gaussian
case, denote the means and covariances of the component distributions by {μ_i, Σ_i}_{i=1}^{n}. The
mean and covariance of the mixture distribution, μ*, Σ*, are a function of the means and
covariances of the components, with the formulae given in (4) for μ_m^H, Σ_m^H. The Gaussian
JS divergence is given by

JS^G_{β_1,..,β_n}(P_1, .., P_n) = Σ_{i=1}^{n} β_i D_kl( G(x | μ_i, Σ_i) || G(x | μ*, Σ*) )    (5)

  = H( G(x | μ*, Σ*) ) - Σ_{i=1}^{n} β_i H( G(x | μ_i, Σ_i) ) = (1/2) ( log |Σ*| - Σ_{i=1}^{n} β_i log |Σ_i| )
Using this identity in (3), and setting θ^1 = θ_i^t, θ^2 = θ_j^{t-1}, we finally get the following
expression for the transition probability

log P(z^t = i | z^{t-1} = j) = -N · max_R Σ_{m=1}^{K^H} α_m^H JS^G_{{β_q : R(q)=m}}( {G(x | μ_q, Σ_q) : R(q) = m} )    (6)

3.2 Constrained optimization and algorithms
Consider first the case in which a one-to-one correspondence is assumed between clusters
in two consecutive frames, and hence the number of Gaussian components K is constant
over all time frames. In this case, a mapping R is allowed iff it maps to each hidden
source i a single Gaussian from mixture θ^1 and a single Gaussian from θ^2. Denoting
the Gaussians matched to hidden source i by R_1^{-1}(i), R_2^{-1}(i), the transition score (6) takes the
form -N · max_R Σ_{i=1}^{K} S( R_1^{-1}(i), R_2^{-1}(i) ). Such an optimization of a pairwise matching
score can be seen as a search for a maximal perfect matching in a weighted bipartite graph.
The nodes of the graph are the Gaussian components of θ^1, θ^2 and the edges' weights are
given by the scores S(a, b). The global optimum of this problem can be efficiently found
using the Hungarian algorithm [5] in O(n^3), which is unproblematic in our case.
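In practice this reduces to a textbook assignment problem; a minimal sketch using SciPy's implementation of the Hungarian method, assuming a precomputed K x K matrix S of the pairwise JS-based costs:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def best_one_to_one(S):
    """S[a, b]: JS-based cost of matching Gaussian a of theta^1 with
    Gaussian b of theta^2.  Returns the minimum-cost perfect matching
    and its total cost (to be scaled by -N for the transition score)."""
    rows, cols = linear_sum_assignment(np.asarray(S))  # Hungarian, O(K^3)
    return list(zip(rows, cols)), S[rows, cols].sum()
```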
The one-to-one correspondence assumption is too strong for many data sets in the spike
sorting application, as it ignores the phenomena of splits and merges of clusters. We wish
to allow such phenomena, but nevertheless enforce strong (though not perfect) demands of
correspondence between the Gaussians in two consecutive frames. In order to achieve such
balance, we place the following constraints on the allowed partitions R:
1. Each cluster of R should contain exactly one Gaussian from θ^1 or exactly one
Gaussian from θ^2. Hence assignment of different Gaussians from the same mixture to the same hidden source is possible only in cases of a split or a merge.
2. The label entropy of the partition R should satisfy

H(α_1^H, .., α_{K^H}^H) ≤ (N^1/N) H(α_1^1, .., α_{K^1}^1) + (N^2/N) H(α_1^2, .., α_{K^2}^2)    (7)

Intuitively, the second constraint limits the allowed partitions to ones which are not richer
than the visible partitions, i.e., do not have many more clusters. Note that the most detailed
partition (the partition into singletons) has a label entropy given by the r.h.s. of inequality
(7) plus H(N^1/N, N^2/N), which is one bit for N^1 = N^2. This extra bit is the price of using the
concatenated "rich" mixture, so we look for mixtures which do not pay such an extra price.
The optimization over this family of R does not seem to have an efficient global optimization technique, and thus we resort to a greedy procedure. Specifically, we use a bottom-up
agglomerative algorithm. We start from the most detailed partition (each Gaussian is a
singleton) and merge two clusters of the partition at each round. Only merges that comply with the first constraint are considered. At each round we look for a merge which
incurs a minimal loss to the accumulated Jensen-Shannon score (6) and a maximal loss to
the mixture label entropy. For two Gaussian clusters (w_1, μ_1, Σ_1), (w_2, μ_2, Σ_2) these two
quantities are given by

Δ log JS = -N (w_1 + w_2) JS^G_{β_1,β_2}( G(x | μ_1, Σ_1), G(x | μ_2, Σ_2) )    (8)

ΔH = -N (w_1 + w_2) H(β_1, β_2)

where β_1, β_2 are w_1/(w_1 + w_2), w_2/(w_1 + w_2) and the w_i are as in (4). We choose at each round the merge
which minimizes the ratio between these two quantities. The algorithm terminates when
the accumulated label-entropy reduction is bigger than H(N^1/N, N^2/N) or when no allowed
merges exist anymore. In the second case, it may happen that the partition R found by the
algorithm violates constraint (7). We nevertheless compute the score based on the R
found, since this partition obeys the first constraint and usually is not far from satisfying
the second.
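A sketch of the greedy agglomeration under these rules, reusing the gaussian_js helper above; the cluster bookkeeping (a tuple carrying per-mixture component counts) and the use of natural-log entropies are illustrative choices:

```python
import numpy as np

def merge(a, b):
    """Combine two clusters using the moment formulas of eq. (4).
    A cluster is (w, mu, Sigma, (n1, n2)), where (n1, n2) counts how many
    original Gaussians of theta^1 and theta^2 it contains."""
    w = a[0] + b[0]
    beta = np.array([a[0], b[0]]) / w
    mu = beta[0] * a[1] + beta[1] * b[1]
    sigma = sum(bq * (c[2] + np.outer(c[1] - mu, c[1] - mu))
                for bq, c in zip(beta, (a, b)))
    return (w, mu, sigma, (a[3][0] + b[3][0], a[3][1] + b[3][1]))

def allowed(a, b):
    """Constraint 1: the merged cluster must contain exactly one Gaussian
    from theta^1 or exactly one from theta^2."""
    return a[3][0] + b[3][0] == 1 or a[3][1] + b[3][1] == 1

def greedy_partition(clusters, N, entropy_budget):
    """Agglomerate singletons; stop when the accumulated label-entropy
    reduction exceeds H(N^1/N, N^2/N) or no allowed merge remains."""
    spent = 0.0
    while True:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                a, b = clusters[i], clusters[j]
                if not allowed(a, b):
                    continue
                w12 = a[0] + b[0]
                beta = np.array([a[0], b[0]]) / w12
                d_js = N * w12 * gaussian_js(beta, [a[1], b[1]], [a[2], b[2]])
                d_h = N * w12 * float(-(beta * np.log(beta)).sum())
                if best is None or d_js / d_h < best[0]:
                    best = (d_js / d_h, i, j, d_h)   # min JS loss per entropy
        if best is None or spent + best[3] > entropy_budget:
            return clusters
        _, i, j, d_h = best
        clusters[i] = merge(clusters[i], clusters[j])
        del clusters[j]
        spent += d_h
```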
4 Empirical results
4.1 Experimental design and data acquisition
Neural data were acquired from the dorsal and ventral pre-motor (PMd, PMv) cortices of
two Macaque monkeys performing a prehension (reaching and grasping) task. At the beginning of each trial, an object was presented in one of six locations. Following a delay
period, a Go signal prompted the monkey to reach for, grasp, and hold the target object.
Table 1: Match scores between manual and automatic clustering. The rows list the appearance frequencies of different f_{1/2} scores.

    f_{1/2} score    Number of frames (%)    Number of electrodes (%)
    0.9-1.0          3386 (75%)              13 (30%)
    0.8-0.9          860 (19%)               10 (23%)
    0.7-0.8          243 (5%)                10 (23%)
    0.6-0.7          55 (1%)                 11 (25%)

A recording session typically lasted 2 hours during which monkeys completed 600 trials. During each session, 16 independently-movable glass-plated tungsten micro-electrodes
were inserted through the dura, 8 into each area. Signals from these electrodes were amplified (10K), bandpass filtered (5-6000 Hz), sampled (25 kHz), stored on disk (Alpha-Map
5.4, Alpha-Omega Eng.), and subjected to 3-stage preprocessing. (1) Line influences were
cleaned by pulse-triggered averaging: the signal following a pulse was averaged over many
pulses and subtracted from the original in an adaptive manner. (2) Spikes were detected
by a modified second-derivative algorithm (7 samples backwards and 11 forward), accentuating spiky features; segments that crossed an adaptive threshold were identified. Within
each segment, a potential spike's peak was defined as the time of the maximal derivative.
If a sharper spike was not encountered within 1.2 ms, 64 samples (10 before the peak and 53
after) were registered. (3) Waveforms were re-aligned s.t. each started at the point of maximal fit with 2 library PCs (accounting, on average, for 82% and 11% of the variance [1]).
Aligned waveforms were projected onto the PCA basis to arrive at two coefficients.
4.2 Results and validation
[Figure 2 panels: five PCA scatter plots with numbered cluster labels; per-frame match scores of 0.80, 0.77, 0.98, 0.95, and 0.98 appear below the automatic solutions.]
Figure 2: Frames 3, 12, 24, 34, and 47 from a 68-frame data set. Each frame contains 1000 spikes,
plotted here (with random number assignments) according to their first two PCs. In this data one
cluster moves constantly, another splits into distinguishable clusters, and at the end two clusters are
merged. The top and bottom rows show manual and automatic clustering solutions respectively.
Notice that during the split process of the bottom-left area some ambiguous time frames exist in
which 1, 2, or 3 cluster descriptions are reasonable. This ambiguity can be resolved using global
considerations of past and future time frames. By finding the MAP solution over all time frames, the
algorithm manages such considerations. The numbers below the images show the f_{1/2} score of the
local match between the manual and the automatic clustering solutions (see text).
We tested the algorithm using recordings of 44 electrodes containing a total of 4544 time
frames. Spike trains were manually clustered by a skilled user in the environment of AlphaSort 4.0 (Alpha-Omega Eng.). The manual and automatic clustering results were compared
using a combined measure of the precision P and recall R scores, f_{1/2} = 2PR/(R + P). Figure 2 demonstrates the performance of the algorithm using a particularly non-stationary data set.
Statistics on the match between manual and automated clustering are described in Table
1. In order to understand the score's scale we note that random clustering (with the same
label distribution as the manual clustering) gets an f_{1/2} score of 0.5. The trivial clustering
which assigns all the points to the same label gets mean scores of 0.73 and 0.67 for single
frame matching and whole electrode matching respectively. The scores of single frames
are much higher than the full electrode scores, since the problem is much harder in the
latter case. A single wrong correspondence between two consecutive frames may reduce
the electrode's score dramatically, while being unnoticed by the single-frame score. In most
cases the algorithm gives reasonably evolving clustering, even when it disagrees with the
manual solution. Examples can be seen at the authors' web site^1.
Low matching scores between the manual and the automatic clustering may result from
inherent ambiguity in the data. As a preliminary assessment of this hypothesis we obtained
a second, independent, manual clustering for the data set for which we got the lowest
match scores. The matching scores between manual and automatic clustering are presented
in Figure 3A.
[Figure 3 panels: (A) a triangle of pairwise f_{1/2} scores (0.62, 0.68, 0.68) between the automatic solution A and two human solutions H1, H2; (B1-B4) cluster plots and tuning curves for the functional validation described in the caption.]
Figure 3: (A) Comparison of our automatic clustering with 2 independent manual clustering solutions for our worst matched data points. Note that there is also a low match between the humans,
forming a nearly equilateral triangle. (B) Functional validation of clustering results: (1) At the beginning of a recording session, three clusters were identified. (2) 107 minutes later, some shifted their
position. They were tracked continuously. (3) The directional tuning of the top left cluster (number 3)
during the delay periods of the first 100 trials (dashed lines are 99% confidence limits). (4) Although
the cluster?s position changed, its tuning curve?s characteristics during the last 100 trials were similar.
In some cases, validity of the automatic clustering can be assessed by checking functional
properties associated with the underlying neurons. In Figure 3B we present such a validation for a successfully tracked cluster.
References
[1] Abeles M., Goldstein M.H. Multispike train analysis. Proc IEEE 65, pp. 762-773, 1977.
[2] Cover T., Thomas J. Elements of information theory. John Wiley and Sons, New York, 1991.
[3] Emondi A.A, Rebrik S.P, Kurgansky A.V, Miller K.D. Tracking neurons recorded from tetrodes
across time. J. of Neuroscience Methods, vol. 135:95-105, 2004.
[4] Fee M., Mitra P., Kleinfeld D. Automatic sorting of multiple unit neuronal signals in the presence
of anisotropic and non-gaussian variability. J. of Neuroscience Methods, vol. 69:175-188, 1996.
[5] Kuhn H.W. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, pp. 83-87, 1955.
[6] Lehmann E.L. Testing statistical hypotheses. John Wiley and Sons, New York, 1959.
[7] Lewicki, M.S. A review of methods for spike sorting: the detection and classification of neural
action potentials. Network: Computation in Neural Systems. 9(4):R53-R78, 1998.
[8] Lewicki's Bayesian spike sorter, sslib (ftp.etho.caltech.edu).
[9] Penev P., Dimitrov A., Miller J. Characterization of and compensation for the non-stationarity
of spike shapes during physiological recordings. Neurocomputing 38-40:1695-1701, 2001.
[10] Shoham S., Fellows M.R., Normann R.A. Robust, automatic spike sorting using mixtures of
multivariate t-distributions. J. of Neuroscience Methods vol. 127(2):111-122, 2003.
[11] Snider R.K. , Bonds A.B. Classification of non-stationary neural signals. J. of Neuroscience
Methods, vol. 84(1-2):155-166, 1998.
1
http://www.cs.huji.ac.il/~aharonbh, ~adams
1,716 | 256 | 810
Performance of Connectionist Learning Algorithms
on 2-D SIMD Processor Arrays
Fernando J. Nunez* and Jose A.B. Fortes
School of Electrical Engineering
Purdue University
West Lafayette, IN 47907
ABSTRACT
The mapping of the back-propagation and mean field theory
learning algorithms onto a generic 2-D SIMD computer is
described. This architecture proves to be very adequate for these
applications since efficiencies close to the optimum can be
attained. Expressions to find the learning rates are given and
then particularized to the DAP array processor.
1 INTRODUCTION
The digital simulation of connectionist learning algorithms is flexible and
accurate. However, with the exception of very small networks, conventional
computer architectures spend a lot of time in the execution of simulation
software. Parallel computers can be used to reduce the execution time. Vector-pipelined machines, multiprocessors, and array processors are some of the most important
classes of parallel computers [3]. Connectionist or neural net (NN) learning
algorithms have been mapped onto all of them.
The focus of this contribution is on the mapping of the back-propagation (BP)
and mean field theory (MFT) learning algorithms onto the subclass of SIMD
computers with the processors arranged in a square two-dimensional mesh and
interconnected by nearest-neighbor links.
The material is organized as follows. In section 2, the execution cost of BP and
MFT on sequential computers is found. Two-dimensional SIMD processor arrays
are described in section 3, and the costs of the two dominating operations in the
simulations are derived. In section 4 the mapping of BP and MFT is commented on
* Current address: Motorola Inc., 1301
E Algonquin Rd., Schaumburg, IL 60196
and expressions for the learning rates are obtained. These expressions are
particularized to the DAP computer in section 5. Section 6 concludes this work.
2 BACK-PROPAGATION AND MEAN FIELD THEORY
In this paper, two learning algorithms, BP [7] and MFT [4], and 3-layer nets are
considered. The number of neurons in the input, hidden, and output layer is I, H,
and O respectively. BP has been used in many applications. Probably, NETtalk [8]
is the best known. MFT can also be used to learn arbitrary mappings between
two sets, and remarkably, to find approximate solutions to hard optimization
problems much more efficiently than a Boltzmann Machine does [4,5].
The output of a neuron i will be denoted as V_i and called its value:
V_i = f( Σ_{j≠i} a_ij V_j - θ_i ). The summation represents the net input received and will
be called the activation. The neuron threshold is θ_i. A sigmoid-like function f is
applied to find the value. The weight of the link from neuron j to neuron i is a_ij.
Since input patterns are the values of the I layer, only neuron values and
activations of the H and O layers must be computed. In BP, the activation error
and the value error of the H and O layers are calculated and used to change the
weights.
In a conventional computer, the execution time of BP is approximately the time
spent in finding the activations, back-propagating the activation error of the O
layer, and modifying the I-H and H-O weights. The result is (2I + 3O)H t_m,
where t_m is the time required to perform a multiply/accumulate operation. Since
the net has (I + O)H connections, the learning rate in connections per second is:

r_BP = (I + O) / ((2I + 3O) t_m)  CPS
In the MFT algorithm, the weight increments are computed only from the neuron
values in equilibrium at the end of the clamped and free annealing phases. It
is assumed that in both phases there are A annealing temperatures and that E
iterations are enough to reach equilibrium at each temperature [4,5]. With these
changes, MFT is now a deterministic algorithm where the annealing phases are
composed of AE sweeps. The MFT execution time can be approximated by the
time spent in computing activations in the annealing loops. Taking into account
that in the clamped phase only the H layer is updated, and that in the free phase
both the H and O layers change their values, the MFT learning performance is
found to be:

r_MFT = r_BP / (AE)  CPS

MFT is AE times more expensive than BP. However, the learning qualities of
both algorithms are different and such a direct comparison is simplistic.
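As a quick plausibility check on these formulas, a few illustrative numbers; the layer sizes (a NETtalk-scale 203-80-26 net), the timing value, and the A, E choices below are our assumptions for the sake of the example, not figures from the paper:

```python
def bp_rate(I, O, t_m):
    """Back-propagation learning rate in connections per second."""
    return (I + O) / ((2 * I + 3 * O) * t_m)

def mft_rate(I, O, t_m, A, E):
    """MFT rate: the BP rate divided by the A*E annealing sweeps."""
    return bp_rate(I, O, t_m) / (A * E)

# Example: I = 203, O = 26, a 1-microsecond multiply/accumulate,
# and A = 10 annealing temperatures with E = 5 iterations each.
print(bp_rate(203, 26, 1e-6))         # ~4.7e5 connections per second
print(mft_rate(203, 26, 1e-6, 10, 5))  # 50x slower
```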
3 2-D SIMD PROCESSOR ARRAYS
Two-dimensional single instruction multiple data stream (2-D SIMD) computers
are very efficient in the simulation of NN learning algorithms. They can provide
massive parallelism at low cost. An SIMD computer is an array of processing
elements (PEs) that execute the same instruction in each cycle. There is a single
control unit that broadcasts instructions to all the PEs. SIMD architectures
operate in a synchronous, lock-step fashion [3]. They are also called array processors
because their raison d'être is to operate on vectors and matrices.
Example SIMD computers are the Illiac-IV, the Massively Parallel Processor
(MPP), the Connection Machine (CM), and the Distributed Array Processor
(DAP). With the exception of the CM, whose PE interconnection topology is a
hypercube, the other three machines are 2-D SIMD arrays because their PEs are
interconnected by a 2-D mesh with wrap-around links (figure 1).
[Figure 1 diagram: a control unit broadcasting instructions to a 2-D mesh of PEs with wrap-around links.]
Figure 1: A 2-D SIMD Processor Array
Each PE has its own local memory. The instruction has an address field to access
it. The array memory space can be seen as a 3-D volume. This volume is
generated by the PE plane, and the depth is the number of memory words that
each PE can address. When the control unit issues an address, a plane of the
memory volume is being referenced. Then, square blocks of PxP elements are the
natural addressing unit of 2-D SIMD processor arrays. There is an activity bit
register in each PE to disable the execution of instructions. This is useful to
perform operations with a subset of the PEs. It is assumed that there is no
overlapping between data-processing and data-moving operations. In other words,
PEs can be either performing some operation on data (this includes accessing the
local memory) or exchanging data with other processors.
3.1 MAPPING THE TWO BASIC OPERATIONS
It is characteristic of array processors that the way data is allocated into the PEs
memories has a very important effect on performance. For our purposes, two
data structures must be considered: vectors and matrices. The storage of vectors
is illustrated in figure 2-a. There are two modes: row and column. A vector is
split into P-element subvectors stored in the same memory plane. Very large
vectors will require two or more planes. The storage of matrices is also very
simple. They must be divided into square PXP blocks (figure 2-b). The shading
in figure 2 indicates that, in general, the sizes of vectors and matrices do not fit
the array dimensions perfectly.
Figure 2: (a) Vector and (b) Matrix Storage
The execution time of BP and MFT in a 2-D SIMD computer is spent, almost completely, in matrix-vector multiply (MVM) and vector outer multiply/accumulate (VOM) operations. They can be decomposed into the following simpler operations involving PxP blocks.
a) Addition (+): C = A + B such that c_ij = a_ij + b_ij.
b) Point multiply/accumulate (.): C' = C + A.B such that c'_ij = c_ij + a_ij b_ij.
c) Unit rotation: The result block has the same elements as the original, but rotated one place in one of the four possible directions (N, E, W, and S).
d) Row (column) broadcast: The result of the row (column) broadcast of a vector x stored in row (column) mode is a block X such that x_ij = x_j (x_ij = x_i). (A small sketch of these four operations is given below.)
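The sketch below uses numpy arrays to stand in for the PE plane, which is an assumption made purely for illustration; on the real machine each of these operations is a handful of instructions executed by all PEs in lock-step.

import numpy as np

def block_add(A, B):
    # (a) C = A + B, elementwise over the PxP block
    return A + B

def point_mac(C, A, B):
    # (b) C' = C + A.B, point multiply/accumulate
    return C + A * B

def unit_rotate(A, direction):
    # (c) rotate the block one place N, E, W or S with wrap-around
    shift = {"N": (-1, 0), "S": (1, 0), "W": (0, -1), "E": (0, 1)}[direction]
    return np.roll(A, shift=shift, axis=(0, 1))

def row_broadcast(x):
    # (d) block X with x_ij = x_j (vector stored in row mode)
    return np.tile(x, (x.shape[0], 1))

def col_broadcast(x):
    # (d) block X with x_ij = x_i (vector stored in column mode)
    return np.tile(x.reshape(-1, 1), (1, x.shape[0]))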
The time required to execute a, b, c, and d will be denoted as t_a, t_m, t_r, and t_b respectively. Next, let us see how the operation y = Ax (MVM) is decomposed into simpler steps using the operations above. Assume that x and y are P-element vectors, and A is a PxP block.
1) Row-broadcast vector x.
2) Point multiply Y = A.X.
3) Row addition of block Y:

y_i = sum_{j=1}^{P} y_ij = sum_{j=1}^{P} a_ij x_j

This requires ceil(log2 P) steps. In each step multiple rotations and one addition are performed. Figure 3 shows how eight values in the same row are added using the recursive doubling technique. Note that the number of rotations doubles in each step. The cost is P t_r + log2(P) t_a. Row addition is an inefficient operation because of the large cost due to communication. Fortunately, for larger data its importance can be diminished by using the scheduling described next.
Figure 3: Recursive Doubling
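A small sketch of the recursive doubling row addition, with a numpy rotation standing in for the mesh rotation (an illustrative assumption); after ceil(log2 P) add steps every position in the row holds the full sum, for P a power of two:

import numpy as np

def row_add(Y):
    P = Y.shape[1]
    S = Y.copy()
    d = 1
    while d < P:                          # ceil(log2 P) steps
        S = S + np.roll(S, -d, axis=1)    # d unit rotations west, one addition
        d *= 2
    return S[:, 0]                        # any column now holds the row sums

Y = np.arange(16.0).reshape(4, 4)
assert np.allclose(row_add(Y), Y.sum(axis=1))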
Suppose that x, y, and A have dimensions m = MP, n = NP, and n x m respectively. Then, y = Ax must be partitioned into a sequence of non-partitioned block operations like the one explained above. We can write:

y^i = sum_{j=1}^{M} A^{ij} x^j = sum_{j=1}^{M} (A^{ij} . X^j) u = (sum_{j=1}^{M} A^{ij} . X^j) u

In this expression, y^i and x^j represent the i-th and j-th P-element subvectors of y and x respectively, and A^{ij} is the PxP block of A with indices i and j. Block X^j is the result of row-broadcasting x^j (x is stored in row mode). Finally, u is a vector with all its P elements equal to 1. Note that in the second term M column additions are implicit, while only one is required in the third term because blocks instead of vectors are accumulated. Since y has N subvectors, and the M subvectors of x are broadcast only once, the total cost of the MVM operation is:

t_MVM = M t_b + NM t_m + N (P t_r + log2(P) t_a)

After a similar development, the cost of the VOM (A' = A + y x^T) operation is:

t_VOM = (M + N) t_b + NM t_m
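A sketch of the partitioned MVM, again using numpy as an illustrative stand-in for the PE array; the block sizes and random data are assumptions for the example only:

import numpy as np

def blocked_mvm(A_blocks, x_sub, P):
    # y^i = (sum_j A^{ij} . X^j) u : broadcast each x^j once, accumulate
    # the point products block-wise, finish with one row addition per y^i
    N, M = len(A_blocks), len(A_blocks[0])
    y = []
    for i in range(N):
        acc = np.zeros((P, P))
        for j in range(M):
            Xj = np.tile(x_sub[j], (P, 1))      # row-broadcast of x^j
            acc = acc + A_blocks[i][j] * Xj     # point multiply/accumulate
        y.append(acc.sum(axis=1))               # single row addition
    return np.concatenate(y)

P, M, N = 4, 2, 3
A = np.random.randn(N * P, M * P)
x = np.random.randn(M * P)
A_blocks = [[A[i*P:(i+1)*P, j*P:(j+1)*P] for j in range(M)] for i in range(N)]
x_sub = [x[j*P:(j+1)*P] for j in range(M)]
assert np.allclose(blocked_mvm(A_blocks, x_sub, P), A @ x)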
If the number of neurons in each layer is not an integer multiple of P, the storage
and execution efficiencies decrease. This effect is less important in large networks.
4 LEARNING RATES ON 2-D SIMD COMPUTERS
4.1 BACK-PROPAGATION
The neuron values, activations, value errors, activation errors, and thresholds of the H and O layers are organized as vectors. The weights are grouped into two matrices: I-H and H-O. Then, the scalar operations of the original algorithm are transformed into matrix-vector operations.
From now on, the size of the input, hidden, and output layers will be IP, HP, and OP. As commented before, the execution time is mostly spent in computing activations, values, their errors, and in changing the weights. To compute activations, and to back-propagate the activation error of the O layer, MVM operations are performed. The change of weights requires VOM operations. After substituting the expressions of the previous section, the time required to learn a pattern simulating BP on a 2-D SIMD computer is:

t_SIMD-BP = (2I + 3O)H t_m + (2I + 3H + 2O) t_b + (2H + O)(P t_r + log2(P) t_a)

The time spent in data communication is given by the factors in t_r and t_b. The larger they are, the smaller is the efficiency. For array processors with fast broadcast facilities, and for nets large enough in terms of the array dimensions, the efficiency grows since a smaller fraction of the total execution time is dedicated to moving data. Since the net has (I + O)HP^2 connections, the learning rate is P^2 times greater than using a single PE:
f_SIMD-BP = (I + O)P^2 / ((2I + 3O) t_m)   CPS

4.2 MEAN FIELD THEORY
The operations outside the annealing loops can be neglected with small error. In consequence, only the computation of activations in the clamped and free annealing phases is accounted for:

t_SIMD-MFT = AE ((2I + 3O)H t_m + (2I + H + 2O) t_b + (2H + O)(P t_r + log2(P) t_a))

Under the same favorable conditions mentioned above, the learning rate is:

f_SIMD-MFT = (I + O)P^2 / (AE (2I + 3O) t_m)   CPS
5 LEARNING PERFORMANCE ON THE DAP
The DAP is a commercial 2-D SIMD processor array developed by ICL. It is a massively parallel computer with bit-level PEs built around a single-bit full adder. In addition to the 2-D PE interconnection mesh, there are row and column broadcast buses that allow the direct transfer of data from any processor row or column to an edge register. Many instructions require a single clock cycle, leading to very efficient codings of loop bodies. The DAP-510 computer features 2^5 x 2^5 PEs with a maximum local memory of 1 Mbit per PE. The DAP-610 has 2^6 x 2^6 PEs, and the maximum local memory is 64 Kbit. The clock cycle in both machines is 100 ns [1].
With bit-level processors it is possible to tailor the precision of fixed-point computations to the minimum required by the application. The costs in cycles required by several basic operations are given below. These expressions are functions of the number of bits of the operands, which has been assumed to be the same for all of them: b bits.
The time required by the DAP to perform a block addition, point multiplication/accumulation, and broadcast is t_a = 2b, t_m = 2b^2, and t_b = 8b clock cycles respectively. On the other hand, P + 2b log2(P) cycles is the duration of a row addition. Let us take b = 8 bits, and AE = 24. These values have been found adequate in many applications. Then, the maximum learning rates of the DAP-610 (P = 64) are:
BP:  100-160 MCPS
MFT: 4.5-6.6 MCPS

where MCPS = 10^6 CPS. These figures are 4 times smaller for the DAP-510. It is worth mentioning that the performance decreases quadratically with b. The two learning rates of each algorithm correspond to the worst and best case topology.
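The quoted BP figures can be checked against the cost model (a sketch, not from the original text): under the fast-broadcast assumption the rate is (I + O)P^2 / ((2I + 3O) t_m), and the ratio (I + O)/(2I + 3O) ranges between 1/3 and 1/2 depending on topology.

P, b, cycle = 64, 8, 100e-9        # DAP-610, 8-bit operands, 100 ns clock
t_m = 2 * b * b * cycle            # point multiply/accumulate: 2b^2 cycles

worst = P**2 / (3 * t_m) / 1e6     # (I+O)/(2I+3O) -> 1/3 in the worst case
best = P**2 / (2 * t_m) / 1e6      # -> 1/2 in the best case
print(round(worst), round(best))   # about 107 and 160 MCPS, i.e. the 100-160 range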
5.1 EXAMPLES
Let us consider a one-thousand-neuron net with 640, 128, and 256 neurons in the input, hidden, and output layers. For the DAP-610 we have I = 10, H = 2, and O = 4. The other parameters are the same as used above. After substituting, we see that the communication costs are less than 10% of the total, demonstrating the efficiency of the DAP in this type of application. The learning rates are:

BP:  140 MCPS
MFT: 5.8 MCPS
NETtalk [8] is frequently used as a benchmark in order to compare the performance achieved on different computers. Here, a network with similar dimensions is considered: 224 input, 64 hidden, and 32 output neurons. These dimensions fit perfectly into the DAP-510 since P = 32. As before, a data precision of 8 bits has been taken. However, the fact that the input patterns are binary has been exploited to obtain some savings.
The performance reached in this case is 50 MCPS. Even though NETtalk is a relatively small network, only 30% of the total execution time is spent in data communication. If the DAP-610 were used, somewhat less than 200 MCPS would be reached, since the output layer is smaller than P, which causes some inefficiency.
Finally, BP learning rates of the DAP-610 with 8- and 16-bit operands are compared to those obtained by other machines below [2,6]:

COMPUTER              MCPS
VAX 780               0.027
CRAY-2                7
CM (65K PEs)          13
DAP-610 (8 bits)      100-160
DAP-610 (16 bits)     25-40
6 CONCLUSIONS
Two-dimensional SIMD array processors are very adequate for the simulation of connectionist learning algorithms like BP and MFT. These architectures can execute them at nearly optimum speed if the network is large enough and there is full connectivity between layers. Other much more costly parallel architectures are outperformed.
The mapping approach described in this paper can be easily extended to any
network topology with dense blocks in its global interconnection matrix.
However, it is obvious that 2-D SIMD arrays are not a good option to simulate
networks with random sparse connectivity.
Acknowledgements
This work has been supported by the Ministry of Education and Science of Spain.
References
[1] (1988) AMT DAP Series, Technical Overview. Active Memory Technology.
[2] G. Blelloch & C. Rosenberg. (1987) Network Learning on the Connection Machine. Proc. 10th Joint Conf. on Artificial Intelligence, IJCAI Inc.
[3] K. Hwang & F. Briggs. (1984) Computer Architecture and Parallel Processing, McGraw-Hill.
[4] C. Peterson & J. Anderson. (1987) A Mean Field Theory Learning Algorithm for Neural Networks. Complex Systems, 1:995-1019.
[5] C. Peterson & B. Soderberg. (1989) A New Method For Mapping Optimization Problems onto Neural Networks. Int'l J. of Neural Systems, 1(1):3-22.
[6] D. Pomerleau, G. Gusciora, D. Touretzky & H.T. Kung. (1988) Neural Network Simulation at Warp Speed: How We Got 17 Million Connections per Second. Proc. IEEE Int'l Conf. on Neural Networks, II:143-150.
[7] D. Rumelhart, G. Hinton & R. Williams. (1986) Learning Representations by Back-Propagating Errors. Nature, (323):533-536.
[8] T. Sejnowski & C. Rosenberg. (1987) Parallel Networks that Learn to Pronounce English Text. Complex Systems, 1:145-168.
Adaptive Manifold Learning
Jing Wang, Zhenyue Zhang
Department of Mathematics
Zhejiang University, Yuquan Campus,
Hangzhou, 310027, P. R. China
[email protected]
[email protected]
Hongyuan Zha
Department of Computer Science
Pennsylvania State University
University Park, PA 16802
[email protected]
Abstract
Recently, there have been several advances in the machine learning and
pattern recognition communities for developing manifold learning algorithms to construct nonlinear low-dimensional manifolds from sample
data points embedded in high-dimensional spaces. In this paper, we develop algorithms that address two key issues in manifold learning: 1)
the adaptive selection of the neighborhood sizes; and 2) better fitting the
local geometric structure to account for the variations in the curvature
of the manifold and its interplay with the sampling density of the data
set. We also illustrate the effectiveness of our methods on some synthetic
data sets.
1
Introduction
Recently, there have been advances in the machine learning community for developing effective and efficient algorithms for constructing nonlinear low-dimensional manifolds from
sample data points embedded in high-dimensional spaces, emphasizing simple algorithmic
implementation and avoiding optimization problems prone to local minima. The proposed
algorithms include Isomap [6], locally linear embedding (LLE) [3] and its variations, manifold charting [1], hessian LLE [2] and local tangent space alignment (LTSA) [7], and they
have been successfully applied in several computer vision and pattern recognition problems. Several drawbacks and possible extensions of the algorithms have been pointed out
in [4, 7] and the focus of this paper is to address two key issues in manifold learning: 1)
how to adaptively select the neighborhood sizes in the k-nearest neighbor computation to
construct the local connectivity; and 2) how to account for the variations in the curvature
of the manifold and its interplay with the sampling density of the data set. We will discuss
those two issues in the context of local tangent space alignment (LTSA) [7], a variation
of locally linear embedding (LLE) [3] (see also [5],[1]). We believe the basic ideas we
proposed can be similarly applied to other manifold learning algorithms.
We first outline the basic steps of LTSA and illustrate its failure modes using two simple
examples. Given a data set X = [x_1, ..., x_N] with x_i ∈ R^m, sampled (possibly with noise) from a d-dimensional manifold (d < m), LTSA proceeds in the following steps.
1) LOCAL NEIGHBORHOOD CONSTRUCTION. For each x_i, i = 1, ..., N, determine a set X_i = [x_{i_1}, ..., x_{i_{k_i}}] of its neighbors (k_i nearest neighbors, for example).
Figure 1: The data sets (first column) and the computed coordinates τ_i by LTSA (columns for k = 4, 6, 8) vs. the centered arc-length coordinates. Top row: Example 1. Bottom row: Example 2.
2) LOCAL LINEAR FITTING. Compute an orthonormal basis Q_i for the d-dimensional tangent space of the manifold at x_i, and the orthogonal projection of each x_{i_j} to the tangent space: θ_j^{(i)} = Q_i^T (x_{i_j} − x̄_i), where x̄_i is the mean of the neighbors.
3) LOCAL COORDINATES ALIGNMENT. Align the N local projections Θ_i = [θ_1^{(i)}, ..., θ_{k_i}^{(i)}], i = 1, ..., N, to obtain the global coordinates τ_1, ..., τ_N. Such an alignment is achieved by minimizing the global reconstruction error

Σ_i ||E_i||_2^2 ≡ Σ_i (1/k_i) ||T_i (I − (1/k_i) e e^T) − L_i Θ_i||_2^2     (1.1)

over all possible L_i ∈ R^{d×d} and row-orthonormal T = [τ_1, ..., τ_N] ∈ R^{d×N}, where T_i = [τ_{i_1}, ..., τ_{i_{k_i}}] with the index set {i_1, ..., i_{k_i}} determined by the neighborhood of each x_i, and e is a vector of all ones.
Two strategies are commonly used for selecting the local neighborhood size k_i: one is k nearest neighbors (k-NN, with a constant k for all the sample points) and the other is the ε-neighborhood [3, 6]. The effectiveness of manifold learning algorithms including LTSA depends on the manner in which the nearby neighborhoods overlap with each other and on the variation of the curvature of the manifold and its interplay with the sampling density [4]. We illustrate those issues with two simple examples.
Example 1. We sample data points from a half unit circle, x_i = [cos(t_i), sin(t_i)]^T, i = 1, ..., N. It is easy to see that t_i represents the arc-length of the circle. We choose t_i ∈ [0, π] according to

t_{i+1} − t_i = 0.1 (0.001 + |cos(t_i)|)

starting at t_1 = 0, and set N = 152 so that t_N ≤ π and t_{N+1} > π. Clearly, the half circle has unit curvature everywhere. This is an example of highly-varying sampling density.
Example 2. The data set is generated as x_i = [t_i, 10 e^{−t_i^2}]^T, i = 1, ..., N, where the t_i ∈ [−6, 6] are uniformly distributed. The curvature of the 1-D curve at parameter value t is given by

c_g(t) = 20 |1 − 2t^2| e^{−t^2} / (1 + 40 t^2 e^{−2t^2})^{3/2}

which changes from min_t c_g(t) = 0 to max_t c_g(t) = 20 over t ∈ [−6, 6]. We set N = 180. This is an example of highly-varying curvature.
For the above two data sets, LTSA with the constant k-NN strategy fails for any reasonable k we have tested. So does LTSA with constant ε-neighborhoods. In the first column of Figure 1, we plot these two data sets. The computed coordinates by LTSA with constant k-neighborhoods are plotted against the centered arc-length coordinates for a selected range of k (ideally, the plots should display points on a straight line of slope ±π/4).
2 Adaptive Neighborhood Selection
In this section, we propose a neighborhood contraction and expansion algorithm for adaptively selecting k_i at each sample point x_i. We assume that the data are generated from a parameterized manifold, x_i = f(τ_i), i = 1, ..., N, where f: Ω ⊂ R^d → R^m. If f is smooth enough, using a first-order Taylor expansion at a fixed τ, for a neighboring τ̄ we have

f(τ̄) = f(τ) + J_f(τ) · (τ̄ − τ) + ε(τ, τ̄),     (2.2)

where J_f(τ) ∈ R^{m×d} is the Jacobi matrix of f at τ and ε(τ, τ̄) represents the error term determined by the Hessian of f, ||ε(τ, τ̄)|| ≤ c_f(τ) ||τ̄ − τ||_2^2, where c_f(τ) ≥ 0 represents the curvature of the manifold at τ. Setting τ = τ_i and τ̄ = τ_{i_j} gives

x_{i_j} = x_i + J_f(τ_i) · (τ_{i_j} − τ_i) + ε(τ_i, τ_{i_j}).     (2.3)

A point x_{i_j} can be regarded as a neighbor of x_i with respect to the tangent space spanned by the columns of J_f(τ_i) if

||τ_{i_j} − τ_i||_2 is small and ||ε(τ_i, τ_{i_j})||_2 ≪ ||J_f(τ_i) · (τ_{i_j} − τ_i)||_2.
The above conditions, however, are difficult to verify in practice since we do not know J_f(τ_i). To get around this problem, consider an orthogonal basis matrix Q_i of the tangent space spanned by the columns of J_f(τ_i), which can be approximately computed by the SVD of X_i − x̄_i e^T, where x̄_i is the mean of the neighbors x_{i_j} = f(τ_{i_j}), j = 1, ..., k_i. Note that

x̄_i = (1/k_i) Σ_{j=1}^{k_i} x_{i_j} = x_i + J_f(τ_i) · (τ̄_i − τ_i) + ε̄_i,

where ε̄_i is the mean of ε(τ_i, τ_{i_1}), ..., ε(τ_i, τ_{i_{k_i}}). Eliminating x_i in (2.3) by the representation above yields x_{i_j} = x̄_i + J_f(τ_i) · (τ_{i_j} − τ̄_i) + ε_j^{(i)} with ε_j^{(i)} = ε(τ_i, τ_{i_j}) − ε̄_i. Let θ_j^{(i)} = Q_i^T (x_{i_j} − x̄_i); we have x_{i_j} = x̄_i + Q_i θ_j^{(i)} + ε_j^{(i)}. Thus, x_{i_j} can be selected as a neighbor of x_i if the orthogonal projection θ_j^{(i)} is small and

||ε_j^{(i)}||_2 = ||x_{i_j} − x̄_i − Q_i θ_j^{(i)}||_2 ≪ ||Q_i θ_j^{(i)}||_2 = ||θ_j^{(i)}||_2.     (2.4)

Assume all the x_{i_j} satisfy the above inequality; then we should approximately have

||(I − Q_i Q_i^T)(X_i − x̄_i e^T)||_F ≤ η ||Q_i^T (X_i − x̄_i e^T)||_F.     (2.5)
We will use (2.5) as a criterion for adaptive neighbor selection, starting with a K-NN at each sample point x_i with a large enough initial K and deleting points one by one until (2.5) holds. This process terminates when the neighborhood size equals d + k_0 for some small k_0 while (2.5) is still not true. In that case, we reselect a k-NN that minimizes the ratio ||(I − Q_i Q_i^T)(X_i − x̄_i e^T)||_F / ||Q_i^T (X_i − x̄_i e^T)||_F as the neighborhood set, as is detailed below.
NEIGHBORHOOD CONTRACTION.
C0. Determine the initial K and the K-NN neighborhood X_i^{(K)} = [x_{i_1}, ..., x_{i_K}] for x_i, ordered in non-decreasing distance to x_i,

||x_{i_1} − x_i|| ≤ ||x_{i_2} − x_i|| ≤ ... ≤ ||x_{i_K} − x_i||.

Set k = K.
C1. Let x̄_i^{(k)} be the column mean of X_i^{(k)}. Compute the orthogonal basis matrix Q_i^{(k)} of the d largest singular vectors of X_i^{(k)} − x̄_i^{(k)} e^T. Set Θ_i^{(k)} = (Q_i^{(k)})^T (X_i^{(k)} − x̄_i^{(k)} e^T).
C2. If ||X_i^{(k)} − x̄_i^{(k)} e^T − Q_i^{(k)} Θ_i^{(k)}||_F < η ||Θ_i^{(k)}||_F, then set X_i = X_i^{(k)}, Θ_i = Θ_i^{(k)}, and terminate.
C3. If k > d + k_0, then delete the last column of X_i^{(k)} to obtain X_i^{(k−1)}, set k := k − 1, and go to step C1; otherwise, go to step C4.
C4. Let k = arg min_{d+k_0 ≤ j ≤ K} ||X_i^{(j)} − x̄_i^{(j)} e^T − Q_i^{(j)} Θ_i^{(j)}||_F / ||Θ_i^{(j)}||_F, and set X_i = X_i^{(k)}, Θ_i = Θ_i^{(k)}.
Step C4 means that if there is no k-NN (k ≥ d + k_0) satisfying (2.5), then the contracted neighborhood X_i should be the one that minimizes the ratio ||X_i − x̄_i e^T − Q_i Θ_i||_F / ||Θ_i||_F. A small sketch of this contraction loop follows.
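The sketch below treats one sample point; it assumes the matrix X holds the K nearest neighbors as columns, already ordered by distance to x_i, and that η and k_0 are the tuning constants of the algorithm.

import numpy as np

def contract(X, d, eta=0.1, k0=1):
    K = X.shape[1]
    best_ratio, best_k = np.inf, K
    for k in range(K, d + k0 - 1, -1):          # steps C1-C3
        Xk = X[:, :k]
        Xc = Xk - Xk.mean(axis=1, keepdims=True)
        Q = np.linalg.svd(Xc, full_matrices=False)[0][:, :d]
        Theta = Q.T @ Xc
        err = np.linalg.norm(Xc - Q @ Theta)    # ||(I - Q Q^T)(X - xbar e^T)||_F
        fit = np.linalg.norm(Theta)             # ||Q^T (X - xbar e^T)||_F
        if err < eta * fit:                     # step C2: criterion (2.5) holds
            return Xk, Q, Theta
        if err / fit < best_ratio:
            best_ratio, best_k = err / fit, k
    Xk = X[:, :best_k]                          # step C4: best-ratio fallback
    Xc = Xk - Xk.mean(axis=1, keepdims=True)
    Q = np.linalg.svd(Xc, full_matrices=False)[0][:, :d]
    return Xk, Q, Q.T @ Xc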
Once the contraction step is done we can still add back some of the unselected x_{i_j} to increase the overlap of nearby neighborhoods while keeping (2.5) intact. In fact, we can add x_{i_j} if ||x_{i_j} − x̄_i − Q_i θ_j^{(i)}|| ≤ η ||θ_j^{(i)}||, as is demonstrated in the following result (we refer to [8] for the proof).
Theorem 2.1 Let X_i = [x_{i_1}, ..., x_{i_k}] satisfy (2.5). Furthermore, we assume

||x_{i_j} − x̄_i − Q_i θ_j^{(i)}|| ≤ η ||θ_j^{(i)}||,   j = k + 1, ..., k + p,     (2.6)

where θ_j^{(i)} = Q_i^T (x_{i_j} − x̄_i). Denote by x̂_i the column mean of the expanded matrix X̂_i = [X_i, x_{i_{k+1}}, ..., x_{i_{k+p}}]. Then for the left singular vector matrix Q̂_i corresponding to the d largest singular values of X̂_i − x̂_i e^T,

||(I − Q̂_i Q̂_i^T)(X̂_i − x̂_i e^T)||_F ≤ η ||Q̂_i^T (X̂_i − x̂_i e^T)||_F + (√p / (k + p)) || Σ_{j=k+1}^{k+p} θ_j^{(i)} ||_2.

The above result shows that if the mean of the projections θ_j^{(i)} of the expanding neighbors is small and/or the number of expanding points is relatively small, then approximately

||(I − Q̂_i Q̂_i^T)(X̂_i − x̂_i e^T)||_F ≤ η ||Q̂_i^T (X̂_i − x̂_i e^T)||_F.
NEIGHBORHOOD EXPANSION.
E0. Set k_i to be the column number of X_i obtained by the neighborhood contraction step. For j = k_i + 1, ..., K, compute θ_j^{(i)} = Q_i^T (x_{i_j} − x̄_i).
E1. Denote by J_i the index subset of j's, k_i < j ≤ K, such that ||(I − Q_i Q_i^T)(x_{i_j} − x̄_i)||_2 ≤ η ||θ_j^{(i)}||_2. Expand X_i by adding x_{i_j}, j ∈ J_i. A sketch of this expansion step follows.
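A companion sketch, under the same assumptions as the contraction sketch: X_full holds all K nearest neighbors as columns, the first k_i of which are the contracted neighborhood, and Q is the tangent basis returned above.

import numpy as np

def expand(X_full, ki, Q, eta=0.1):
    Xc = X_full[:, :ki]
    xbar = Xc.mean(axis=1, keepdims=True)
    cols = [Xc]
    for j in range(ki, X_full.shape[1]):        # step E1
        xj = X_full[:, j:j + 1]
        theta = Q.T @ (xj - xbar)
        resid = np.linalg.norm(xj - xbar - Q @ theta)
        if resid <= eta * np.linalg.norm(theta):
            cols.append(xj)                     # add x_ij back into X_i
    return np.hstack(cols)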
Example 3. We construct the data points as x_i = [sin(t_i), cos(t_i), 0.02 t_i]^T, i = 1, ..., N, with t_i ∈ [0, 4π] uniformly distributed; the data set is plotted in the top-left panel of Figure 2.
Figure 2: Plots of the data set (top left), the computed coordinates τ_i by LTSA vs. the centered arc-length coordinates (a-c; k = 7, 8, 9), the computed coordinates by LTSA with neighborhood contraction only vs. the centered arc-length coordinates (e-g; k = 15, 30, 35), and the computed coordinates by LTSA with neighborhood contraction and expansion vs. the centered arc-length coordinates (d; k = 30, bottom left).
LTSA with constant k-NN fails for any k: a small k leads to lack of necessary overlap among the neighborhoods, while for a large k the computed tangent space cannot represent the local geometry well. In (a-c) of Figure 2, we plot the coordinates computed by LTSA vs. the arc-length of the curve. Contracting the neighborhoods without expansion also gives bad results, because of the small sizes of the resulting neighborhoods; see (e-g) of Figure 2. Panel (d) of Figure 2 gives an excellent result computed by LTSA with both neighborhood contraction and expansion. We want to mention that our adaptive strategies also work well for noisy data sets; we refer the reader to [8] for some examples.
3 Alignment incorporating variations of manifold curvature
Let X_i = [x_{i_1}, ..., x_{i_{k_i}}] consist of the neighbors determined by the contraction and expansion steps in the above section. In (1.1), we can show that the size of the error term ||E_i||_2 depends on the size of the curvature of the manifold at the sample point x_i [8]. To make the minimization in (1.1) more uniform, we need to factor out the effect of the variations of the curvature. To this end, we pose the following minimization problem,

min_{T, {L_i}} Σ_i (1/k_i) ||(T_i (I − (1/k_i) e e^T) − L_i Θ_i) D_i^{−1}||_2^2,     (3.7)

where D_i = diag(ω(θ_1^{(i)}), ..., ω(θ_{k_i}^{(i)})), and ω(θ_j^{(i)}) is proportional to the curvature of the manifold at the parameter value τ_i, the computation of which will be discussed below. For fixed T, the optimal L_i is given by L_i = T_i (I_{k_i} − (1/k_i) e e^T) Θ_i^+ = T_i Θ_i^+. Substituting it into (3.7), we have the reduced minimization problem

min_T Σ_i (1/k_i) ||T_i (I_{k_i} − (1/k_i) e e^T − Θ_i^+ Θ_i) D_i^{−1}||_2^2.

Imposing the normalization condition T T^T = I, a solution to the minimization problem above is given by the d eigenvectors corresponding to the second to (d+1)st smallest eigenvalues of the matrix

B ≡ (SW) diag(D_1^{−2}/k_1, ..., D_N^{−2}/k_N) (SW)^T,

where W_i = (I_{k_i} − (1/k_i) e e^T)(I_{k_i} − Θ_i^+ Θ_i). Second-order analysis of the error term in (1.1) shows that we can set ω_i(θ_j^{(i)}) = δ + c_f(τ_i) ||θ_j^{(i)}||_2^2, with a small positive constant δ to ensure ω_i(θ_j^{(i)}) > 0, where c_f(τ_i) ≥ 0 represents the mean of the curvatures c_f(τ_i, τ_{i_j}) over all neighbors of x_i.
Let Q_i denote the orthonormal matrix of the d largest left singular vectors of X_i (I − (1/k_i) e e^T). We can approximately compute c_f(τ_i) as follows:

c_f(τ_i) ≈ (1/(k_i − 1)) Σ_{ℓ=2}^{k_i} arccos(σ_min(Q_i^T Q_{i_ℓ})) / ||θ_ℓ^{(i)}||_2,

where σ_min(·) is the smallest singular value of a matrix. Then the diagonal weights ω_i(θ_j^{(i)}) can be computed as

ω_i(θ_j^{(i)}) = δ + (||θ_j^{(i)}||_2^2 / (k_i − 1)) Σ_{ℓ=2}^{k_i} arccos(σ_min(Q_i^T Q_{i_ℓ})) / ||θ_ℓ^{(i)}||_2.
With the above preparation, we are now ready to present the adaptive LTSA algorithm. Given a data set X = [x_1, ..., x_N], the approach consists of the following steps:
Step 1. Determine the neighborhood X_i = [x_{i_1}, ..., x_{i_{k_i}}] for each x_i, i = 1, ..., N, using the neighborhood contraction/expansion steps in Section 2.
Step 2. Compute the truncated SVD, say Q_i Σ_i V_i^T, of X_i (I − (1/k_i) e e^T) with d columns in both Q_i and V_i, the projections θ_ℓ^{(i)} = Q_i^T (x_{i_ℓ} − x̄_i) with the mean x̄_i of the neighbors, and denote Θ_i = [θ_1^{(i)}, ..., θ_{k_i}^{(i)}].
Step 3. Estimate the curvatures as follows. For each i = 1, ..., N,

c_i = (1/(k_i − 1)) Σ_{ℓ=2}^{k_i} arccos(σ_min(Q_i^T Q_{i_ℓ})) / ||θ_ℓ^{(i)}||_2.

Step 4. Construct the alignment matrix. For i = 1, ..., N, set

W_i = I_{k_i} − [e/√k_i, V_i][e/√k_i, V_i]^T,   D_i = δI + diag(c_i ||θ_1^{(i)}||_2^2, ..., c_i ||θ_{k_i}^{(i)}||_2^2),

where δ is a small constant number (usually we set δ = 10^{−6}). Set the initial B = 0. Update B iteratively by

B(I_i, I_i) := B(I_i, I_i) + W_i D_i^{−1} D_i^{−1} W_i^T / k_i,   i = 1, ..., N.

Step 5. Align global coordinates. Compute the d + 1 smallest eigenvectors of B, pick up the eigenvector matrix [u_2, ..., u_{d+1}] corresponding to the 2nd to (d+1)st smallest eigenvalues, and set T = [u_2, ..., u_{d+1}]^T. A sketch of Steps 4-5 follows.
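In the sketch below the inputs are assumed precomputed: neighborhoods[i] gives the global index set I_i, V[i] the d right singular vectors from Step 2, and dinv[i] the diagonal of D_i^{−1} from Steps 3-4.

import numpy as np

def align(neighborhoods, V, dinv, N, d):
    B = np.zeros((N, N))
    for Ii, Vi, di in zip(neighborhoods, V, dinv):
        ki = len(Ii)
        G = np.hstack([np.ones((ki, 1)) / np.sqrt(ki), Vi])
        Wi = np.eye(ki) - G @ G.T                 # W_i = I - [e/sqrt(k_i), V_i][...]^T
        M = Wi @ np.diag(di ** 2) @ Wi.T / ki     # W_i D_i^-1 D_i^-1 W_i^T / k_i
        B[np.ix_(Ii, Ii)] += M
    vals, vecs = np.linalg.eigh(B)                # eigenvalues in ascending order
    return vecs[:, 1:d + 1].T                     # 2nd to (d+1)st smallest -> T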
4 Experimental Results
In this section, we present several numerical examples to illustrate the performance of the
adaptive LTSA algorithm. The test data sets include curves in 2D/3D Euclidean spaces.
Figure 3: The computed coordinates τ_i by LTSA taking into account curvature and variable size of neighborhood, for starting values k = 4, 6, 8, 16.
First we apply the adaptive LTSA to the data sets shown in Examples 1 and 2. Adaptive LTSA with different starting k's works very well; see Figure 3. It shows that for these two data sets, the adaptive LTSA is not sensitive to the choice of the starting k or to the variations in sampling densities and manifold curvatures.
Next, we consider the swiss-roll surface defined by f(s, t) = [s cos(s), t, s sin(s)]^T. It is easy to see that J_f(s, t)^T J_f(s, t) = diag(1 + s^2, 1). Denoting by s = s(r) the inverse transformation of r = r(s) defined by

r(s) = ∫_0^s √(1 + ξ^2) dξ = (1/2)(s √(1 + s^2) + arcsinh(s)),

the swiss-roll surface can be parameterized as

f̃(r, t) = [s(r) cos(s(r)), t, s(r) sin(s(r))]^T,

and f̃ is isometric with respect to (r, t). A small sketch of this reparameterization follows.
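The inverse s(r) has no closed form, so the sketch below recovers it by bisection on the monotone r(s); the sample counts and parameter ranges are illustrative assumptions.

import numpy as np

def r_of_s(s):
    return 0.5 * (s * np.sqrt(1 + s ** 2) + np.arcsinh(s))

def s_of_r(r, s_max=4 * np.pi, iters=60):
    lo, hi = np.zeros_like(r), np.full_like(r, s_max)
    for _ in range(iters):                  # bisection, vectorized over samples
        mid = 0.5 * (lo + hi)
        below = r_of_s(mid) < r
        lo = np.where(below, mid, lo)
        hi = np.where(below, hi, mid)
    return 0.5 * (lo + hi)

r = np.random.uniform(0, r_of_s(4 * np.pi), 2000)
t = np.random.uniform(0, 10, 2000)
s = s_of_r(r)
X = np.stack([s * np.cos(s), t, s * np.sin(s)], axis=1)  # swiss-roll samples
# (r, t) are the isometric coordinates and can serve as ground truth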
In the left figure of Figure 4, we show that there is a distortion between the coordinates computed by LTSA with the best-fit neighborhood size (bottom left) and the generating coordinates (r, t)^T (top right). In the bottom-right panel of the left figure of Figure 4, we plot the coordinates computed by the adaptive LTSA with initial neighborhood size k = 30. (In fact, the adaptive LTSA is insensitive to k and we obtain similar results with a larger or smaller initial k.) We can see that the coordinates computed by the adaptive LTSA recover the generating coordinates well, without much distortion.
Finally we applied both LTSA and the adaptive LTSA to a 2D manifold with 3 peaks embedded in a 100-dimensional space. The data points are generated as follows. First we generate N = 2000 3D points, y_i = (t_i, s_i, h(t_i, s_i))^T, where t_i and s_i are randomly distributed in the interval [−1.5, 1.5] and h(t, s) is defined by

h(t, s) = e^{−20t^2 − 20s^2} − e^{−10t^2 − 10(s+1)^2} − e^{−10(1+t)^2 − 10s^2}.
Then we embed the 3D points into a 100D space by x_i^Q = Q y_i and x_i^H = H y_i, where Q ∈ R^{100×3} is a random orthonormal matrix, resulting in an orthogonal transformation, and H ∈ R^{100×3} is a matrix with its singular values uniformly distributed in (0, 1), resulting in an affine transformation.
Figure 4: Left figure: 3D swiss-roll and the generating coordinates (top row), computed 2D coordinates by LTSA with the best neighborhood size k = 15 (bottom left), and computed 2D coordinates by adaptive LTSA (bottom right). Right figure: coordinates computed by LTSA for the orthogonally embedded 100D data set {x_i^Q} (a) and the affinely embedded 100D data set {x_i^H} (b), and the coordinates computed by the adaptive LTSA for {x_i^Q} (c) and {x_i^H} (d).
In the top row of the right figure of Figure 4, we plot the computed coordinates by LTSA for x_i^Q (shown in (a)) and x_i^H (shown in (b)) with best-fit neighborhood size k = 15. We can see that the deformations (stretching and compression) are quite prominent. In the bottom row of the right figure of Figure 4, we plot the coordinates computed by the adaptive LTSA for x_i^Q (shown in (c)) and x_i^H (shown in (d)) with initial neighborhood size k = 15. It is clear that the adaptive LTSA gives a much better result.
References
[1] M. Brand. Charting a manifold. Advances in Neural Information Processing Systems, 15, MIT Press, 2003.
[2] D. Donoho and C. Grimes. Hessian Eigenmaps: new tools for nonlinear dimensionality reduction. Proceedings of the National Academy of Sciences, 5591-5596, 2003.
[3] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323-2326, 2000.
[4] L. Saul and S. Roweis. Think globally, fit locally: unsupervised learning of nonlinear manifolds. Journal of Machine Learning Research, 4:119-155, 2003.
[5] Y. W. Teh and S. Roweis. Automatic Alignment of Local Representations. Advances in Neural Information Processing Systems, 15, MIT Press, 2003.
[6] J. Tenenbaum, V. de Silva and J. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319-2323, 2000.
[7] Z. Zhang and H. Zha. Principal Manifolds and Nonlinear Dimensionality Reduction via Tangent Space Alignment. SIAM J. Scientific Computing, 26:313-338, 2004.
[8] J. Wang, Z. Zhang and H. Zha. Adaptive Manifold Learning. Technical Report CSE-04-21, Dept. CSE, Pennsylvania State University, 2004.
constructing:1 diag:4 yuquan:1 noise:1 s2:4 x1:2 contracted:1 fails:2 xh:2 theorem:1 emphasizing:1 embed:1 bad:1 incorporating:1 adding:1 ci:3 ordered:1 u2:2 donoho:1 jf:10 change:1 determined:3 uniformly:3 principal:1 svd:2 experimental:1 brand:1 intact:1 select:1 preparation:1 dept:1 tested:1 avoiding:1 |
1,718 | 2,561 | Dependent Gaussian Processes
Phillip Boyle and Marcus Frean
School of Mathematical and Computing Sciences
Victoria University of Wellington,
Wellington, New Zealand
{pkboyle,marcus}@mcs.vuw.ac.nz
Abstract
Gaussian processes are usually parameterised in terms of their covariance functions. However, this makes it difficult to deal with multiple
outputs, because ensuring that the covariance matrix is positive definite
is problematic. An alternative formulation is to treat Gaussian processes
as white noise sources convolved with smoothing kernels, and to parameterise the kernel instead. Using this, we extend Gaussian processes to
handle multiple, coupled outputs.
1
Introduction
Gaussian process regression has many desirable properties, such as ease of obtaining and
expressing uncertainty in predictions, the ability to capture a wide variety of behaviour
through a simple parameterisation, and a natural Bayesian interpretation [15, 4, 9]. Because of this they have been suggested as replacements for supervised neural networks in
non-linear regression [8, 18], extended to handle classification tasks [11, 17, 6], and used
in a variety of other ways (e.g. [16, 14]). A Gaussian process (GP), as a set of jointly
Gaussian random variables, is completely characterised by a covariance matrix with entries determined by a covariance function. Traditionally, such models have been specified
by parameterising the covariance function (i.e. a function specifying the covariance of
output values given any two input vectors). In general this needs to be a positive definite
function to ensure positive definiteness of the covariance matrix.
Most GP implementations model only a single output variable. Attempts to handle multiple
outputs generally involve using an independent model for each output - a method known
as multi-kriging [18] - but such models cannot capture the structure in outputs that covary.
As an example, consider the two tightly coupled outputs shown at the top of Figure 2, in
which one output is simply a shifted version of the other. Here we have detailed knowledge
of output 1, but sampling of output 2 is sparse. A model that treats the outputs as independent cannot exploit their obvious similarity - intuitively, we should make predictions about
output 2 using what we learn from both output 1 and 2.
Joint predictions are possible (e.g. co-kriging [3]) but are problematic in that it is not clear
how covariance functions should be defined [5]. Although there are many known positive
definite autocovariance functions (e.g. Gaussians and many others [1, 9]), it is difficult to
define cross-covariance functions that result in positive definite covariance matrices. Contrast this to neural network modelling, where the handling of multiple outputs is routine.
An alternative to directly parameterising covariance functions is to treat GPs as the outputs
of stable linear filters. R For a linear filter, the output in response to an input x(t) is
?
y(t) = h(t) ? x(t) = ?? h(t ? ? )x(? )d? , where h(t) defines the impulse response of
the filter and ? denotes convolution. Provided the linear filter is stable and x(t) is Gaussian
white noise, the output process y(t) is necessarily a Gaussian process. It is also possible
to characterise p-dimensional stable linear filters, with M -inputs and N -outputs, by a set
of M ? N impulse responses. In general, the resulting N outputs are dependent Gaussian
processes. Now we can model multiple dependent outputs by parameterising the set of
impulse responses for a multiple output linear filter, and inferring the parameter values from
data that we observe. Instead of specifying and parameterising positive definite covariance
functions, we now specify and parameterise impulse responses. The only restriction is that
the filter be linear and stable, and this is achieved by requiring the impulse responses to be
absolutely integrable.
Constructing GPs by stimulating linear filters with Gaussian noise is equivalent to constructing GPs through kernel convolutions. A Gaussian process V(s) can be constructed over a region S by convolving a continuous white noise process X(s) with a smoothing kernel h(s), V(s) = h(s) ∗ X(s) for s ∈ S [7]. To this can be added a second white noise source, representing measurement uncertainty, and together this gives a model for observations Y. This view of GPs is shown in graphical form in Figure 1(a). The convolution approach has been used to formulate flexible nonstationary covariance functions [13, 12]. Furthermore, this idea can be extended to model multiple dependent output processes by assuming a single common latent process [7]. For example, two dependent processes V_1(s) and V_2(s) are constructed from a shared dependence on X(s) for s ∈ S_0, as follows:

V_1(s) = ∫_{S_0 ∪ S_1} h_1(s − λ) X(λ) dλ   and   V_2(s) = ∫_{S_0 ∪ S_2} h_2(s − λ) X(λ) dλ,

where S = S_0 ∪ S_1 ∪ S_2 is a union of disjoint subspaces. V_1(s) is dependent on X(s), s ∈ S_1, but not on X(s), s ∈ S_2. Similarly, V_2(s) is dependent on X(s), s ∈ S_2, but not on X(s), s ∈ S_1. This allows V_1(s) and V_2(s) to possess independent components.
In this paper, we model multiple outputs somewhat differently to [7]. Instead of assuming a single latent process defined over a union of subspaces, we assume multiple latent processes, each defined over R^p. Some outputs may be dependent through a shared reliance on common latent processes, and some outputs may possess unique, independent features through a connection to a latent process that affects no other output.
2 Two Dependent Outputs
Consider two outputs Y_1(s) and Y_2(s) over a region R^p, where s ∈ R^p. We have N_1 observations of output 1 and N_2 observations of output 2, giving us data D_1 = {s_{1,i}, y_{1,i}}_{i=1}^{N_1} and D_2 = {s_{2,i}, y_{2,i}}_{i=1}^{N_2}. We wish to learn a model from the combined data D = {D_1, D_2} in order to predict Y_1(s') or Y_2(s'), for s' ∈ R^p. As shown in Figure 1(b), we can model each output as the linear sum of three stationary Gaussian processes. One of these (V) arises from a noise source unique to that output, under convolution with a kernel h. A second (U) is similar, but arises from a separate noise source X_0 that influences both outputs (although via different kernels, k). The third is additive noise as before.
Thus we have Y_i(s) = U_i(s) + V_i(s) + W_i(s), where W_i(s) is a stationary Gaussian white noise process with variance σ_i^2; X_0(s), X_1(s) and X_2(s) are independent stationary Gaussian white noise processes; and U_1(s), U_2(s), V_1(s) and V_2(s) are Gaussian processes given by U_i(s) = k_i(s) ∗ X_0(s) and V_i(s) = h_i(s) ∗ X_i(s).
Figure 1: (a) Gaussian process prior for a single output. The output Y is the sum of two Gaussian white noise processes, one of which has been convolved (∗) with a kernel (h). (b) The model for two dependent outputs Y_1 and Y_2. All of X_0, X_1, X_2 and the noise contributions are independent Gaussian white noise sources. Notice that if X_0 is forced to zero, Y_1 and Y_2 become independent processes as in (a) - we use this as a control model.
The k_1, k_2, h_1, h_2 are parameterised Gaussian kernels, where

k_1(s) = v_1 exp(−(1/2) s^T A_1 s),   k_2(s) = v_2 exp(−(1/2)(s − μ)^T A_2 (s − μ)),   h_i(s) = w_i exp(−(1/2) s^T B_i s).

Note that k_2(s) is offset from zero by μ to allow modelling of outputs that are coupled and translated relative to one another.
We wish to derive the set of functions C_{ij}^Y(d) that define the autocovariance (i = j) and cross-covariance (i ≠ j) between the outputs i and j, for a given separation d between arbitrary inputs s_a and s_b. By solving a convolution integral, C_{ij}^Y(d) can be expressed in a closed form [2], and is fully determined by the parameters of the Gaussian kernels and the noise variances σ_1^2 and σ_2^2 as follows:

C_{11}^Y(d) = C_{11}^U(d) + C_{11}^V(d) + δ_{ab} σ_1^2
C_{12}^Y(d) = C_{12}^U(d)
C_{22}^Y(d) = C_{22}^U(d) + C_{22}^V(d) + δ_{ab} σ_2^2
C_{21}^Y(d) = C_{21}^U(d)

where

C_{ii}^U(d) = (π^{p/2} v_i^2 / √|A_i|) exp(−(1/4) d^T A_i d)
C_{12}^U(d) = ((2π)^{p/2} v_1 v_2 / √|A_1 + A_2|) exp(−(1/2)(d − μ)^T Σ (d − μ))
C_{21}^U(d) = ((2π)^{p/2} v_1 v_2 / √|A_1 + A_2|) exp(−(1/2)(d + μ)^T Σ (d + μ)) = C_{12}^U(−d)
C_{ii}^V(d) = (π^{p/2} w_i^2 / √|B_i|) exp(−(1/4) d^T B_i d)

with Σ = A_1 (A_1 + A_2)^{−1} A_2 = A_2 (A_1 + A_2)^{−1} A_1.
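A direct transcription of these closed forms for the scalar case p = 1 (a sketch only; A_1, A_2 and B_i are positive scalars here, and the function names are our own):

import numpy as np

def c_u_auto(d, v, A, p=1):
    # C^U_ii(d) = pi^{p/2} v_i^2 / sqrt|A_i| * exp(-d^T A_i d / 4)
    return np.pi ** (p / 2) * v ** 2 / np.sqrt(A) * np.exp(-0.25 * A * d ** 2)

def c_u_cross(d, v1, v2, A1, A2, mu, p=1):
    # C^U_12(d), with Sigma = A1 (A1 + A2)^-1 A2
    Sig = A1 * A2 / (A1 + A2)
    return ((2 * np.pi) ** (p / 2) * v1 * v2 / np.sqrt(A1 + A2)
            * np.exp(-0.5 * Sig * (d - mu) ** 2))

def c_y_auto(d, v, w, A, B, sigma, same_obs):
    # C^Y_ii(d): shared part + independent part, plus noise when s_a = s_b
    return (c_u_auto(d, v, A) + c_u_auto(d, w, B)
            + (sigma ** 2 if same_obs else 0.0))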
Given C_{ij}^Y(d), we can construct the covariance matrices C_{11}, C_{12}, C_{21}, and C_{22} as follows:

C_{ij} = [ C_{ij}^Y(s_{i,1} − s_{j,1})  ...  C_{ij}^Y(s_{i,1} − s_{j,N_j}) ; ... ; C_{ij}^Y(s_{i,N_i} − s_{j,1})  ...  C_{ij}^Y(s_{i,N_i} − s_{j,N_j}) ]     (1)

Together these define the positive definite symmetric covariance matrix C for the combined output data D:

C = [ C_{11}  C_{12} ; C_{21}  C_{22} ]     (2)
We define a set of hyperparameters θ that parameterise {v_1, v_2, w_1, w_2, A_1, A_2, B_1, B_2, μ, σ_1, σ_2}. Now, we can calculate the log-likelihood

L = −(1/2) log|C| − (1/2) y^T C^{−1} y − ((N_1 + N_2)/2) log 2π,

where y^T = [y_{1,1} ... y_{1,N_1} y_{2,1} ... y_{2,N_2}] and C is a function of θ and D.
Learning a model now corresponds to either maximising the likelihood L, or maximising the posterior probability P(θ | D). Alternatively, we can simulate the predictive distribution for y by taking samples from the joint P(y, θ | D), using Markov Chain Monte Carlo methods [10].
The predictive distribution at a point s' on output i given θ and D is Gaussian with mean ŷ' and variance σ_{ŷ'}^2 given by

ŷ' = k^T C^{−1} y   and   σ_{ŷ'}^2 = κ − k^T C^{−1} k,

where κ = C_{ii}^Y(0) = v_i^2 + w_i^2 + σ_i^2 and

k = [C_{i1}^Y(s' − s_{1,1}) ... C_{i1}^Y(s' − s_{1,N_1})  C_{i2}^Y(s' − s_{2,1}) ... C_{i2}^Y(s' − s_{2,N_2})]^T.
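A sketch of the resulting computations for 1d inputs. The function cyy is assumed to implement the C^Y_ij above (its last argument flags coinciding observations for the noise term), and the Cholesky-based likelihood is a standard numerically stable choice, not something specified in the text.

import numpy as np

def joint_cov(cyy, s1, s2):
    S = [(1, s) for s in s1] + [(2, s) for s in s2]
    return np.array([[cyy(i, j, sa - sb, i == j and sa == sb)
                      for j, sb in S] for i, sa in S])

def log_likelihood(C, y):
    L = np.linalg.cholesky(C)                        # C = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-np.log(np.diag(L)).sum() - 0.5 * y @ alpha
            - 0.5 * len(y) * np.log(2 * np.pi))

def predict(C, y, k, kappa):
    mean = k @ np.linalg.solve(C, y)                 # k^T C^-1 y
    var = kappa - k @ np.linalg.solve(C, k)          # kappa - k^T C^-1 k
    return mean, var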
2.1 Example 1 - Strongly dependent outputs over 1d input space
Consider two outputs, observed over a 1d input space. Let A_i = exp(f_i), B_i = exp(g_i), and σ_i = exp(β_i). Our hyperparameters are θ = {v_1, v_2, w_1, w_2, f_1, f_2, g_1, g_2, μ, β_1, β_2}, where each element of θ is a scalar. As in [2] we set Gaussian priors over θ.
We generated N = 48 data points by taking N_1 = 32 samples from output 1 and N_2 = 16 samples from output 2. The samples from output 1 were linearly spaced in the interval [−1, 1] and those from output 2 were uniformly spaced in the region [−1, −0.15] ∪ [0.65, 1]. All samples were taken under additive Gaussian noise, σ = 0.025. To build our model, we maximised P(θ|D) ∝ P(D | θ) P(θ) using a multistart conjugate gradient algorithm, with 5 starts, sampling from P(θ) for initial conditions.
The resulting dependent model is shown in Figure 2 along with an independent (control) model with no coupling (see Figure 1). Observe that the dependent model has learned the coupling and translation between the outputs, and has filled in output 2 where samples are missing. The control model cannot achieve such infilling as it consists of two independent Gaussian processes.
2.2 Example 2 - Strongly dependent outputs over 2d input space
Consider two outputs, observed over a 2d input space. Let

A_i = (1/α_i^2) I   and   B_i = (1/γ_i^2) I,   where I is the identity matrix.

Furthermore, let σ_i = exp(β_i). In this toy example we set μ = 0, so our hyperparameters become θ = {v_1, v_2, w_1, w_2, α_1, α_2, γ_1, γ_2, β_1, β_2}, where each element of θ is a scalar. Again, we set Gaussian priors over θ.
Figure 2: Strongly dependent outputs where output 2 is simply a translated version of output 1, with independent Gaussian noise, σ = 0.025. The solid lines represent the model, the dotted lines are the true function, and the dots are samples. The shaded regions represent 1σ error bars for the model prediction. (top) Independent model of the two outputs. (bottom) Dependent model.
We generated 117 data points by taking 81 samples from output 1 and 36 samples from output 2. Both sets of samples formed uniform lattices over the region [−0.9, 0.9] × [−0.9, 0.9] and were taken with additive Gaussian noise, σ = 0.025. To build our model, we maximised P(θ|D) as before.
The dependent model is shown in Figure 3 along with an independent control model. The dependent model has filled in output 2 where samples are missing. Again, the control model cannot achieve such in-filling as it consists of two independent Gaussian processes.
3 Time Series Forecasting
Consider the observation of multiple time series, where some of the series lead or predict
the others. We simulated a set of three time series for 100 steps each (figure 4) where
series 3 was positively coupled to a lagged version of series 1 (lag = 0.5) and negatively
coupled to a lagged version of series 2 (lag = 0.6). Given the 300 observations, we built
a dependent GP model of the three time series and compared them with independent GP
models. The dependent GP model incorporated a prior belief that series 3 was coupled to
series 1 and 2, with the lags unknown. The independent GP model assumed no coupling
between its outputs, and consisted of three independent GP models. We queried the models
for forecasts of the future 10 values of series 3. It is clear from figure 4 that the dependent
GP model does a far better job at forecasting the dependent series 3. The independent
model becomes inaccurate after just a few time steps into the future. This inaccuracy is
expected as knowledge of series 1 and 2 is required to accurately predict series 3. The
Figure 3: Strongly dependent outputs where output 2 is simply a copy of output 1, with
independent Gaussian noise. (top) Independent model of the two outputs. (bottom) Dependent model. Output 1 is modelled well by both models. Output 2 is modelled well only by
the dependent model
dependent GP model performs well as it has learned that series 3 is positively coupled to a
lagged version of series 1 and negatively coupled to a lagged version of series 2.
4
Multiple Outputs and Non-stationary Kernels
The convolution framework described here for constructing GPs can be extended to build
models capable of modelling N -outputs, each defined over a p-dimensional input space.
In general, we can define a model where we assume M -independent Gaussian white
noise processes X1 (s) . . . XM (s), N -outputs U1 (s) . . . UN (s), and M ? N kernels
N
p
{{kmn (s)}M
m=1 }n=1 where s ? < . The autocovariance (i = j) and cross-covariance
(i 6= j) functions between output processes i and j become
U
Cij
(d)
=
M Z
X
m=1
kmi (s)kmj (s + d)ds
(3)
<p
and the matrix defined by equation 2 is extended in the obvious way.
The kernels used in (3) need not be Gaussian, and need not
R ?be spatially
R ? invariant, or stationary. We require kernels that are absolutely integrable, ?? . . . ?? |k(s)|dp s < ?. This
provides a large degree of flexibility, and is an easy condition to uphold. It would seem that
an absolutely integrable kernel would be easier to define and parameterise than a positive
Y
definite function. On the other hand, we require a closed form of Cij
(d) and this may not
be attainable for some non-Gaussian kernels.
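A numerical sanity check of (3) for two scalar Gaussian kernels (a sketch with illustrative parameter values; for μ = 0 the integral should reproduce the closed-form cross-covariance of Section 2):

import numpy as np

v1, v2, A1, A2, d = 1.3, 0.7, 2.0, 5.0, 0.4
s = np.linspace(-20.0, 20.0, 200001)
k1 = v1 * np.exp(-0.5 * A1 * s ** 2)         # kernel from noise source to output 1
k2 = v2 * np.exp(-0.5 * A2 * (s + d) ** 2)   # kernel to output 2, shifted by d
numeric = np.sum(k1 * k2) * (s[1] - s[0])    # the integral in equation (3)

Sig = A1 * A2 / (A1 + A2)
closed = (np.sqrt(2 * np.pi) * v1 * v2 / np.sqrt(A1 + A2)
          * np.exp(-0.5 * Sig * d ** 2))
assert abs(numeric - closed) < 1e-6          # matches C^U_12(d) with mu = 0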
Figure 4: Three coupled time series, where series 1 and series 2 predict series 3. Forecasting for series 3 begins after 100 time steps where t = 7.8. The dependent model forecast
is shown with a solid line, and the independent (control) forecast is shown with a broken
line. The dependent model does a far better job at forecasting the next 10 steps of series 3
(black dots).
5 Conclusion
We have shown how the Gaussian Process framework can be extended to multiple output
variables without assuming them to be independent. Multiple processes can be handled
by inferring convolution kernels instead of covariance functions. This makes it easy to
construct the required positive definite covariance matrices for covarying outputs.
One application of this work is to learn the spatial translations between outputs. However
the framework developed here is more general than this, as it can model data that arises
from multiple sources, only some of which are shared. Our examples show the infilling of
sparsely sampled regions that becomes possible in a model that permits coupling between
outputs. Another application is the forecasting of dependent time series. Our example
shows how learning couplings between multiple time series may aid in forecasting, particularly when the series to be forecast is dependent on previous or current values of other
series.
Dependent Gaussian processes should be particularly valuable in cases where one output
is expensive to sample, but covaries strongly with a second that is cheap. By inferring both
the coupling and the independent aspects of the data, the cheap observations can be used
as a proxy for the expensive ones.
References
[1] ABRAHAMSEN, P. A review of Gaussian random fields and correlation functions. Tech. Rep. 917, Norwegian Computing Center, Box 114, Blindern, N-0314 Oslo, Norway, 1997.
[2] BOYLE, P., AND FREAN, M. Multiple-output Gaussian process regression. Tech. Rep., Victoria University of Wellington, 2004.
[3] CRESSIE, N. Statistics for Spatial Data. Wiley, 1993.
[4] GIBBS, M. Bayesian Gaussian Processes for Classification and Regression. PhD thesis, University of Cambridge, Cambridge, U.K., 1997.
[5] GIBBS, M., AND MACKAY, D. J. Efficient implementation of Gaussian processes. www.inference.phy.cam.ac.uk/mackay/abstracts/gpros.html, 1996.
[6] GIBBS, M. N., AND MACKAY, D. J. Variational Gaussian process classifiers. IEEE Trans. on Neural Networks 11, 6 (2000), 1458-1464.
[7] HIGDON, D. Space and space-time modelling using process convolutions. In Quantitative methods for current environmental issues (2002), C. Anderson, V. Barnett, P. Chatwin, and A. El-Shaarawi, Eds., Springer Verlag, pp. 37-56.
[8] MACKAY, D. J. Gaussian processes: A replacement for supervised neural networks? In NIPS97 Tutorial, 1997.
[9] MACKAY, D. J. Information theory, inference, and learning algorithms. Cambridge University Press, 2003.
[10] NEAL, R. Probabilistic inference using Markov chain Monte Carlo methods. Tech. Rep. CRG-TR-93-1, Dept. of Computer Science, Univ. of Toronto, 1993.
[11] NEAL, R. Monte Carlo implementation of Gaussian process models for Bayesian regression and classification. Tech. Rep. CRG-TR-97-2, Dept. of Computer Science, Univ. of Toronto, 1997.
[12] PACIOREK, C. Nonstationary Gaussian processes for regression and spatial modelling. PhD thesis, Carnegie Mellon University, Pittsburgh, Pennsylvania, U.S.A., 2003.
[13] PACIOREK, C., AND SCHERVISH, M. Nonstationary covariance functions for Gaussian process regression. Submitted to NIPS, 2004.
[14] RASMUSSEN, C., AND KUSS, M. Gaussian processes in reinforcement learning. In Advances in Neural Information Processing Systems (2004), vol. 16.
[15] RASMUSSEN, C. E. Evaluation of Gaussian Processes and other methods for Non-Linear Regression. PhD thesis, Graduate Department of Computer Science, University of Toronto, 1996.
[16] TIPPING, M. E., AND BISHOP, C. M. Bayesian image super-resolution. In Advances in Neural Information Processing Systems (2002), S. Becker, S. Thrun and K. Obermayer, Eds., vol. 15, pp. 1303-1310.
[17] WILLIAMS, C. K., AND BARBER, D. Bayesian classification with Gaussian processes. IEEE Trans. Pattern Analysis and Machine Intelligence 20, 12 (1998), 1342-1351.
[18] WILLIAMS, C. K., AND RASMUSSEN, C. E. Gaussian processes for regression. In Advances in Neural Information Processing Systems (1996), D. Touretzky, M. Mozer, and M. Hasselmo, Eds., vol. 8.
Edge of Chaos Computation in
Mixed-Mode VLSI - "A Hard Liquid"
Felix Schürmann, Karlheinz Meier, Johannes Schemmel
Kirchhoff Institute for Physics
University of Heidelberg
Im Neuenheimer Feld 227, 69120 Heidelberg, Germany
[email protected],
WWW home page: http://www.kip.uni-heidelberg.de/vision
Abstract
Computation without stable states is a computing paradigm different from Turing's and has been demonstrated for various types
of simulated neural networks. This publication transfers this to a
hardware implemented neural network. Results of a software implementation are reproduced showing that the performance peaks
when the network exhibits dynamics at the edge of chaos. The
liquid computing approach seems well suited for operating analog
computing devices such as the used VLSI neural network.
1 Introduction
Using artificial neural networks for problem solving immediately raises the issue of
their general trainability and the appropriate learning strategy. Topology seems to
be a key element, especially since algorithms do not necessarily perform better when the size of the network is simply increased. Hardware implemented neural networks, on the other hand, offer scalability in complexity and gain in speed but naturally do not compete in flexibility with software solutions. Except for specific applications or highly iterative algorithms [1], the capabilities of hardware neural networks as generic problem solvers are difficult to assess in a straightforward fashion.
Independently, Maass et al.[2] and Jaeger [3] proposed the idea of computing without
stable states. They both used randomly connected neural networks as non-linear
dynamical systems with the inputs causing perturbations to the transient response
of the network. In order to customize such a system for a problem, a readout is
trained, which requires only the network response at a single time step as input. The readout may be as simple as a linear classifier: "training" then reduces to a well
defined least-squares linear regression. Justification for this splitting into a nonlinear transformation followed by a linear one originates from Cover [4]. He proved
that the probability for a pattern classification problem to be linearly separable is
higher when cast in a high-dimensional space by a non-linear mapping.
In the terminology of Maass et al., the non-linear dynamical system is called a
liquid and together with the readouts it represents a liquid state machine (LSM).
It has been proven that under certain conditions the LSM concept is universal on
functions of time [2].
Adopting the liquid computing strategy for mixed-mode hardware implemented
networks using very large scale integration (VLSI) offers two promising prospects:
First, such a system profits immediately from scaling, i.e., more neurons increase
the complexity of the network dynamics while not increasing training complexity.
Second, it is expected that the liquid approach can cope with an imperfect substrate
as commonly present in analog hardware. Configuring highly integrated analog
hardware as a liquid therefore seems a promising way for analog computing. This
conclusion is not unexpected since the liquid computing paradigm was inspired by
a complex and "analog" system in the first place: the biological nervous system [2].
This publication presents initial results on configuring a general purpose mixed-mode neural network ASIC (application specific integrated circuit) as a liquid. The
used custom-made ANN ASIC [5] provides 256 McCulloch-Pitts neurons with about
33k analog synapses and allows a wide variety of topologies, especially highly recurrent ones. In order to operate the ASIC as a liquid a generation procedure
proposed by Bertschinger et al. [6] is adopted that generates the network topology and weights. These authors also showed that the performance of those input-driven networks (that is, how suitable the network dynamics are to act as a liquid) depends on whether the response of the liquid to the inputs is ordered or chaotic. Precisely, according to a special measure, the performance peaks when the liquid is in between order and chaos. The reconfigurability of the used ANN ASIC allows exploring various generation parameters, i.e., physically different liquids are evaluated; the obtained experimental results are in accordance with the
previously published software simulations [6].
2 Substrate
The substrate used in the following is a general purpose ANN ASIC manufactured
in a 0.35 µm CMOS process [5]. Its design goals were to implement small synapses
while being fast reconfigurable and capable of operating at high speed; it therefore combines analog computation with digital signaling. It is comprised of 33k
analog synapses with capacitive weight storage (nominal 10-bit plus sign) and 256
McCulloch-Pitts neurons. For efficiency it employs mostly current mode circuits.
Experimental benchmark results using evolutionary algorithms training strategies
have previously been published [1]. A full weight refresh can be performed within
200 µs and in the current setup one network cycle, i.e., the time base of the liquid, lasts about 0.5 µs. This is due to the prototype nature of the ASIC and its
The analog operation of the chip is limited to the synaptic weights $\omega_{ij}$ and the input stage of the output neurons. Since both the input ($I_j$) and output signals ($O_i$) of the network are binary, the weight multiplication is reduced to a summation and the activation function $g(x)$ of the output neurons equals the Heaviside function $\Theta(x)$:

$$O_i = g\Bigl(\sum_j \omega_{ij} I_j\Bigr), \qquad g(x) = \Theta(x), \qquad I, O \in \{0, 1\}. \qquad (1)$$
j
The neural network chip is organized in four identical blocks; each represents a fully
connected one-layer perceptron with McCulloch-Pitts neurons. One block basically
consists of 128?64 analog synapses that connect each of the 128 inputs to each of
the 64 output neurons. The network operates in a discrete time update scheme,
i.e., Eq. 1 is calculated once for each network cycle. By feeding outputs back to the
Figure 1: Network blocks can be configured for different input sources.
inputs a block can be configured as a recurrent network (cf. Fig. 1). Additionally, outputs of the other network blocks can be fed back to the block's input. In this case the output of a neuron at time t depends not only on the actual input but also on the previous network cycle and the activity of the other blocks. Denoting the time needed for one network cycle with $\Delta t$, the output function of one network block becomes:

$$O(t + \Delta t)^a_i = \Theta\Biggl(\sum_j \omega_{ij} I(t)^a_j + \sum_{x \in \{a,b,c,d\}} \sum_k \omega^x_{ik} O(t)^x_k\Biggr). \qquad (2)$$

Here, $\Delta t$ denotes the time needed for one network cycle. The first term in the argument of the activation function is the external input to the network block, $I^a_j$. The second term models the feedback path from the output of block a, $O^a_k$, as well as the other 3 blocks b, c, d back to its input. For two network blocks this is illustrated in Fig. 1. Principally, this model allows an arbitrarily large network that operates synchronously at a common network frequency $f_{net} = 1/\Delta t$ since the external input can be the output of other identical network chips.
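The update of Eq. (2) is easy to emulate in software. The sketch below simplifies to a single block with random illustrative weights (W_in and W_fb are our own names, not registers of the chip); it only mirrors the discrete-time threshold dynamics:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                    # output neurons of one block
W_in = rng.normal(0.0, 0.3, (N, N))       # weights for the external inputs (illustrative)
W_fb = rng.normal(0.0, 0.3, (N, N))       # feedback weights from the block's own outputs

def network_cycle(I, O_prev):
    """One discrete update O(t + dt) = Theta(W_in I(t) + W_fb O(t)) for a single block."""
    return (W_in @ I + W_fb @ O_prev > 0.0).astype(np.uint8)

O = np.zeros(N, dtype=np.uint8)
for t in range(5):
    I = rng.integers(0, 2, N).astype(np.uint8)   # binary external input
    O = network_cycle(I, O)
```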
Figure 2: Intra- and inter-block routing schematic of the used ANN ASIC.
For the following experiments one complete ANN ASIC is used. Since one output
neuron has 128 inputs, it cannot be connected to all 256 neurons simultaneously.
Furthermore, it can only make arbitrary connections to neurons of the same block,
whereas the inter-block feedback fixes certain output neurons to certain inputs.
Details of the routing are illustrated in Fig. 2.
The ANN ASIC is connected to a standard PC via a custom-made PCI-based interface card using programmable logic to control the neural network chip.
3 Liquid Computing Setup
Following the terminology introduced by Maass et al. the ANN ASIC represents
the liquid. Appropriately configured, it acts as a non-linear filter to the input.
The response of the neural network ASIC at a certain time step is called the liquid
state x(t). This output is provided to the readout. In our case these are one or
more linear classifiers implemented in software. The classifier result, and thus the
response of the liquid state machine at a time t, is given by:
$$v(t) = \Theta\Bigl(\sum_i w_i x_i(t)\Bigr). \qquad (3)$$
The weights $w_i$ are determined with a least-squares linear regression calculated for the desired target values y(t). Using the same liquid state x(t), multiple readouts can be used to predict differing target functions simultaneously (cf. Fig. 3).
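Training a readout is therefore one linear solve. A minimal sketch, assuming targets are encoded as 0/1 and thresholded at 0.5 (one common convention; the text only states that a least-squares regression determines the w_i):

```python
import numpy as np

def train_readout(X_liquid, y):
    """Least-squares fit of one linear readout; X_liquid is (T, N), y is (T,) in {0, 1}."""
    Xb = np.hstack([X_liquid, np.ones((len(y), 1))])       # append a bias column
    w, *_ = np.linalg.lstsq(Xb, y.astype(float), rcond=None)
    return w

def readout_predict(X_liquid, w):
    Xb = np.hstack([X_liquid, np.ones((X_liquid.shape[0], 1))])
    return (Xb @ w > 0.5).astype(np.uint8)                 # v(t) = Theta(sum_i w_i x_i(t))
```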
[Figure 3 diagram: a binary input stream u(t) drives the hardware neural net (the liquid); the liquid state x(t), plus a bias, feeds one or more software linear classifiers (the readouts), each computing a weighted sum of x(t) to produce a prediction v(t).]
Figure 3: The liquid state machine setup.
The used setup is similar to the one used by Bertschinger et al. [6] with the central difference that the liquid here is implemented in hardware. The specific hardware design imposes McCulloch-Pitts type neurons that are either on or off ($O \in \{0, 1\}$) and not symmetric ($O \in \{-1, 1\}$). Besides this, the topology and weight configuration of the ANN ASIC follow the procedure used by Bertschinger et al. The
random generation of such input-driven networks is governed by the following parameters: N, the number of neurons; k, the number of incoming connections per neuron; $\sigma^2$, the variance of the zero-centered Gaussian distribution from which the weights for the incoming connections are drawn; u(t), the external input signal driving each neuron. Bertschinger et al. used a random binary input signal u(t) which assumes with equal chance u + 1 or u - 1. Since the used ANN ASIC has a fixed dynamic range for a single synapse, a weight can assume a normalized value in the interval [-1, 1] with 11 bit accuracy. For this reason, the input signal u(t) is split into a constant bias part u and the varying part, which again is split into an excitatory and its inverse contribution. Each neuron of the network then gets k inputs from other neurons, one constant bias of weight u, and two mutually exclusive input neurons with weights 0.5 and -0.5. The latter modification was introduced to account for the fact that the inner neurons assume only the values {0, 1}. Using the input and its inverse accordingly recovers a differential weight change of 1 between the active and inactive state.
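The generation recipe translates directly into code. A sketch of one possible implementation (self-connections are excluded here, which the text does not specify explicitly):

```python
import numpy as np

def generate_liquid(N=256, k=6, sigma2=0.14, seed=0):
    """k incoming connections per neuron with N(0, sigma2) weights."""
    rng = np.random.default_rng(seed)
    W = np.zeros((N, N))
    for i in range(N):
        others = np.delete(np.arange(N), i)
        src = rng.choice(others, size=k, replace=False)
        W[i, src] = rng.normal(0.0, np.sqrt(sigma2), k)
    return W

def run_liquid(W, u_seq, u=0.0):
    """Collect liquid states; each neuron gets the recurrent drive, the bias u, and
    a +0.5 / -0.5 contribution from the input neuron and its inverse."""
    N = W.shape[0]
    X = np.zeros((len(u_seq), N), dtype=np.uint8)
    O = np.zeros(N)
    for t, ut in enumerate(u_seq):            # ut in {0, 1}
        drive = 0.5 if ut else -0.5           # input and its inverse, weights +/-0.5
        O = (W @ O + u + drive > 0.0).astype(np.uint8)
        X[t] = O
    return X
```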
The performance of the liquid state machine is evaluated according to the mutual
information of the target values y(t) and the predicted values v(t). This measure is
defined as:
$$MI(v, y) = \sum_{v'} \sum_{y'} p(v', y') \log_2 \frac{p(v', y')}{p(v')\,p(y')}, \qquad (4)$$

where $p(v') = \text{probability}\{v(t) = v'\}$ with $v' \in \{0, 1\}$ and $p(v', y')$ is the joint probability. It can be calculated from the confusion matrix of the linear classifier and can be given the dimension bits.
In order to assess the capability to account for inputs of preceding time steps, it is sensible to define another measure, the memory capacity MC (cf. [7]):

$$MC = \sum_{\tau = 0}^{\infty} MI(v_\tau, y_\tau). \qquad (5)$$

Here, $v_\tau$ and $y_\tau$ denote the prediction and target shifted in time by $\tau$ time steps (i.e., $y_\tau(t) = y(t - \tau)$). It is likewise measured in bits.
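Both measures are straightforward to compute from binary sequences. A sketch; the infinite sum of Eq. (5) is truncated at a maximal shift, since in practice the MI vanishes for large shifts:

```python
import numpy as np

def mutual_information(v, y):
    """MI between binary prediction v and target y (Eq. (4)), in bits."""
    v, y = np.asarray(v), np.asarray(y)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            p_ab = np.mean((v == a) & (y == b))
            p_a, p_b = np.mean(v == a), np.mean(y == b)
            if p_ab > 0.0:
                mi += p_ab * np.log2(p_ab / (p_a * p_b))
    return mi

def memory_capacity(pairs):
    """Truncated Eq. (5): pairs is a list of (v_tau, y_tau) for tau = 0, 1, ..."""
    return sum(mutual_information(v, y) for v, y in pairs)
```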
4 Results
A linear classifier by definition cannot solve a linearly non-separable problem. It therefore is a good test for the non-trivial contribution of the liquid if a liquid state machine with a linear readout has to solve a linearly non-separable problem. The benchmark problem used in the following is 3-bit parity in time, i.e., $y_\tau(t) = PARITY(u(t - \tau), u(t - \tau - 1), u(t - \tau - 2))$, which is known to be linearly non-separable. The linear classifiers are trained to predict the linearly non-separable $y_\tau(t)$ simply from the liquid state x(t). To do this it is necessary that the liquid state at time t contains information about the previous time steps.
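The target sequence for this benchmark can be generated as follows (a sketch; the first tau + 2 entries, where the parity window is undefined, are simply left at zero):

```python
import numpy as np

def parity_targets(u, tau):
    """y_tau(t) = PARITY(u(t - tau), u(t - tau - 1), u(t - tau - 2)) for binary u."""
    u = np.asarray(u, dtype=np.uint8)
    y = np.zeros_like(u)
    for t in range(tau + 2, len(u)):
        y[t] = u[t - tau] ^ u[t - tau - 1] ^ u[t - tau - 2]
    return y
```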
Bertschinger et al. showed theoretically and in simulation that, depending on the parameters k, $\sigma^2$, and u, an input-driven neural network shows ordered or chaotic dynamics. This causes input information either to disappear quickly (the simplest case would be an identity map from input to output) or to stay in the network forever, respectively. Although the transition of the network dynamics from order to chaos happens gradually with the variation of the generation parameters (k, $\sigma^2$, u), the performance as a liquid shows a distinct peak when the network exhibits dynamics in between order and chaos. These critical dynamics suggest the term "computation at the edge of chaos", originally coined by Langton [8].
The following results are obtained using the ANN ASIC as the liquid on a random binary input string u(t) of length 4000, for which the linear classifier is calculated. The shown mutual information and memory capacity are the measured performance on a random binary test string of length 8000. For each time shift $\tau$, a separate classifier is calculated. For each parameter set k, $\sigma^2$, u this procedure is repeated several times (for exact numbers compare the individual plots), i.e., several liquids are generated.
Fig. 4 shows the mutual information MI versus the shift in time $\tau$ for the 3-bit delayed parity problem and the network parameters fixed to N = 256, k = 6, $\sigma^2$ = 0.14, and u = 0. Plotted are the mean values of 50 liquids evaluated in
[Figure 4 plot: memory curve for k = 6, $\sigma^2$ = 0.14; MI [bit] versus time shift $\tau$ from 0 to 10, with MC = 3.4 bit.]
Figure 4: The mutual information between prediction and target for the 3-bit delayed parity problem versus the delay (k = 6, $\sigma^2$ = 0.14). The plotted limits are the 1-sigma spreads of 50 different liquids. The integral under this curve is the mean MC and is the maximum in the left plot of Fig. 5.
hardware, and the given limits are the standard deviation in the mean. From the error limits it can be inferred that the parity problem is solved in all runs for $\tau = 0$, and in some for $\tau = 1$. For larger time shifts the performance decreases until the liquid has no information on the input anymore.
[Figure 5 plots: left panel, mean MC (hardware, {0,1} neurons); right panel, mean MC (simulation, {-1,1} neurons). Both show MC [bit] over inputs (k) from 5 to 30 versus $\sigma^2$ from 0.1 to 0.5, with the ordered regime below the transition band and chaos above it.]
Figure 5: Shown are two parameter sweeps for the 3-bit delayed parity in dependence of the generation parameters k and $\sigma^2$ with fixed N = 256, u = 0. Left: 50 liquids per parameter set evaluated in hardware. Right: 35 liquids per parameter set using a software simulation of the ASIC but with symmetric neurons. Actual data points are marked with black dots; the gray shading shows an interpolation. The largest three mean MCs are marked with a white dot, asterisk, and plus sign.
In order to assess how different generation parameters influence the quality of the liquid, parameter sweeps are performed. For each parameter set several liquids are generated and readouts trained. The obtained memory capacities of the runs are averaged and used as the performance measure. Fig. 5 shows a parameter sweep of k and $\sigma^2$ for the memory capacity MC for N = 256 and u = 0. On the left side, results obtained with the hardware are shown. The shading shows an interpolation of the actual measured values marked with dots. The largest three mean MCs are marked in order with a white circle, white asterisk, and white plus.
It can be seen that the memory capacity peaks distinctly along a hyperbola-like band. The area below the transition band goes along with ordered dynamics; above it, the network exhibits chaotic behavior. The shape of the transition indicates a constant network activity for critical dynamics. The standard deviation in the mean of 50 liquids per parameter set is below 2%, i.e., the transition is significant. The transition is not shown in a u-$\sigma^2$ sweep as originally done by Bertschinger et al. because in the hardware setup only a limited parameter range of $\sigma^2$ and u is accessible due to synapses of the range [-1, 1] with a limited resolution. The accessible region ($\sigma^2 \in [0, 1]$ and $u \in [0, 1]$) nonetheless exhibits a similar transition as described by Bertschinger et al. (not shown).
The smaller overall performance in memory capacity compared to their liquids, on the other hand, is simply due to the asymmetric neurons and not to other hardware restrictions, as can be seen from the right side of Fig. 5. There the same parameter sweep is shown, but this time the liquid is implemented in a software simulation of the ASIC with symmetric neurons. While all connectivity constraints of the hardware are incorporated in the simulation, the only other change in the setup is the adjustment of the input signal to u ± 1. 35 liquids per parameter set are evaluated. The observed performance decrease results from the asymmetry of the {0, 1} neurons; a similar effect is observed by Bertschinger et al. for u ≠ 0.
[Figure 6 plots: left panel, mean MI of 50 random 5-bit Boolean functions (MI [bit] over inputs (k) from 5 to 30 versus $\sigma^2$ from 0.1 to 0.5); right panel, the standard deviations of the distributions (sigma of MI [bit]).]
Figure 6: Mean mutual information of 50 simultaneously trained linear classifiers
on randomly drawn 5-bit Boolean functions using the hardware liquid (10 liquids
per parameter set evaluated). The right plot shows the 1-sigma spreads.
Finally, the hardware-based liquid state machine was tested on 50 randomly drawn Boolean functions of the last 5 inputs (5 bits in time) (cf. Fig. 6). In this setup, 50 linear classifiers read out the same liquid simultaneously to calculate their independent predictions at each time step. The mean mutual information ($\tau = 0$) for the 50 classifiers in 10 runs is plotted. From the right plot it can be seen that the standard deviation for the single measurement along the critical line is fairly small; this shows that critical dynamics yield a generic liquid independent of the readout.
5 Conclusions & Outlook
Computing without stable states manifests a new computing paradigm different to
the Turing approach. By different authors this has been investigated for various
types of neural networks, theoretically and in software simulation. In the present
publication these ideas are transferred back to an analog computing device: a mixedmode VLSI neural network. Earlier published results of Bertschinger et al. were
reproduced showing that the readout with linear classifiers is especially successful
when the network exhibits critical dynamics.
Beyond the point of solving rather academic problems like 3-bit parity, the liquid
computing approach may be well suited to make use of the massive resources found
in analog computing devices, especially, since the liquid is generic, i.e. independent
of the readout. The experiments with the general purpose ANN ASIC allow to explore the necessary connectivity and accuracy of future hardware implementations.
With even higher integration densities the inherent unreliability of the elementary
parts of VLSI systems grows, making fault-tolerant training and operation methods
necessary. Even though it has not been shown in this publication, initial experiments suggest that the used liquids are robust against faults introduced after the readout has been trained.
As a next step it is planned to use parts of the ASIC to realize the readout. Such
a liquid state machine can make use of the hardware implementation and will be
able to operate in real-time on continuous data streams.
References
[1] S. Hohmann, J. Fieres, K. Meier, J. Schemmel, T. Schmitz, and F. Schürmann. Training fast mixed-signal neural networks for data classification. In Proceedings of the International Joint Conference on Neural Networks IJCNN'04, pages 2647-2652. IEEE Press, July 2004.
[2] W. Maass, T. Natschläger, and H. Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11):2531-2560, 2002.
[3] H. Jaeger. The "echo state" approach to analysing and training recurrent neural networks. Technical Report GMD Report 148, German National Research Center for Information Technology, 2001.
[4] T. M. Cover. Geometrical and statistical properties of systems of linear inequalities with application in pattern recognition. IEEE Transactions on Electronic Computers, EC-14:326-334, 1965.
[5] J. Schemmel, S. Hohmann, K. Meier, and F. Schürmann. A mixed-mode analog neural network using current-steering synapses. Analog Integrated Circuits and Signal Processing, 38(2-3):233-244, February-March 2004.
[6] N. Bertschinger and T. Natschläger. Real-time computation at the edge of chaos in recurrent neural networks. Neural Computation, 16(7):1413-1436, July 2004.
[7] T. Natschläger and W. Maass. Information dynamics and emergent computation in recurrent circuits of spiking neurons. In S. Thrun, L. Saul, and B. Schölkopf, editors, Proc. of NIPS 2003, Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004.
[8] C. G. Langton. Computation at the edge of chaos. Physica D, 42, 1990.
Linear Multilayer Independent Component
Analysis for Large Natural Scenes
Yoshitatsu Matsuda (http://www.graco.c.u-tokyo.ac.jp/~matsuda)
Kazunori Yamaguchi Laboratory
Department of General Systems Studies
Graduate School of Arts and Sciences
The University of Tokyo
Japan 153-8902
[email protected]
Kazunori Yamaguchi
[email protected]
Abstract
In this paper, linear multilayer ICA (LMICA) is proposed for extracting
independent components from quite high-dimensional observed signals
such as large-size natural scenes. There are two phases in each layer of
LMICA. One is the mapping phase, where a one-dimensional mapping is formed by a stochastic gradient algorithm which incrementally brings more highly-correlated (non-independent) signals nearer. Another is the local-ICA phase, where each neighbor (namely, highly-correlated) pair of signals in the mapping is separated by the MaxKurt algorithm. Because LMICA separates only the highly-correlated pairs instead of all of them, it can extract independent components quite efficiently from appropriate observed signals. In addition, it is proved that LMICA always converges. Some numerical experiments verify that LMICA is quite efficient and effective in large-size natural image processing.
1 Introduction
Independent component analysis (ICA) is a recently-developed method in the fields of
signal processing and artificial neural networks, and has been shown to be quite useful
for the blind separation problem [1][2][3][4]. The linear ICA is formalized as follows. Let s and A be the N-dimensional source signals and the N × M mixing matrix, respectively; more precisely, A is N × N. Then, the observed signals x are defined as

$$x = As. \qquad (1)$$

The purpose is to find out A (or the inverse W) when only the observed (mixed) signals are given. In other words, ICA blindly extracts the source signals from M samples of the observed signals as follows:

$$\hat{S} = WX, \qquad (2)$$

where X is an N × M matrix of the observed signals and $\hat{S}$ is the estimate of the source signals. This is a typical ill-conditioned problem, but ICA can solve it by assuming that the
source signals are generated according to independent and non-Gaussian probability distributions. In general, the ICA algorithms find out W by maximizing a criterion (called the contrast function) such as the higher-order statistics (e.g., the kurtosis) of every component of $\hat{S}$. That is, the ICA algorithms can be regarded as an optimization method of
such criteria. Some efficient algorithms for this optimization problem have been proposed,
for example, the fast ICA algorithm [5][6], the relative gradient algorithm [4], and JADE
[7][8].
Now, suppose that quite high-dimensional observed signals (namely, N is quite large) are given, such as large-size natural scenes. In this case, even the efficient algorithms are not very useful because they have to find out all the $N^2$ components of W. Recently, we proposed a new algorithm for this problem, which can find out global independent components by integrating local ICA modules. Developing this approach further in this paper, we propose a new efficient ICA algorithm named "the linear multilayer ICA algorithm (LMICA)." It will be shown in this paper that LMICA is much more efficient than other standard ICA algorithms in the processing of natural scenes. This paper is an extension of our previous works [9][10].
This paper is organized as follows. In Section 2, the algorithm is described. In Section 3,
numerical experiments will verify that LMICA is quite efficient in image processing and
can extract some interesting edge detectors from large natural scenes. Lastly, this paper is
concluded in Section 4.
2 Algorithm
2.1 basic idea
LMICA can extract all the independent components approximately by repetition of the
following two phases. One is the mapping phase, which brings more highly-correlated
signals nearer. Another is the local-ICA phase, where each neighbor pair of signals in the mapping is separated by the MaxKurt algorithm [8]. The mechanism of LMICA is illustrated
in Fig. 1. Note that this illustration holds just in the ideal case where the mixing matrix
A is given according to such a hierarchical model. In other words, it does not hold for an
arbitrary A. It will be shown in Section 3 that this hierarchical model is quite effective at
least in natural scenes.
2.2 mapping phase
In the mapping phase, given signals X are arranged in a one-dimensional array so that pairs (i, j) with higher $\sum_k x_{ik}^2 x_{jk}^2$ are placed nearer. Letting $Y = (y_i)$ be the coordinate of the i-th signal $x_{ik}$, the following objective function $\Phi$ is defined:

$$\Phi(Y) = \sum_{i,j} \sum_k x_{ik}^2 x_{jk}^2 (y_i - y_j)^2. \qquad (3)$$

The optimal mapping is found out by minimizing $\Phi$ with respect to Y under the constraints that $\sum_i y_i = 0$ and $\sum_i y_i^2 = 1$. It is well known that such optimization problems can be solved efficiently by a stochastic gradient algorithm [11][12]. In this case, the stochastic gradient algorithm is given as follows (see [10] for the details of the derivation of this algorithm):

$$y_i(T + 1) := y_i(T) - \eta_T (z_i y_i \alpha - z_i \beta), \qquad (4)$$
Figure 1: The illustration of LMICA (the ideal case): Each number from 1 to 8 denotes a source signal. In the first local-ICA phase, each neighbor pair of the completely-mixed signals (denoted "1-8") is partially separated into "1-4" and "5-8." Next, the mapping phase rearranges the partially-separated signals so that more highly-correlated signals are nearer. In consequence, the four "1-4" signals (similarly, the "5-8" ones) are brought nearer. Then, the local-ICA phase partially separates the pairs of neighbor signals into "1-2," "3-4," "5-6," and "7-8." By repetition of the two phases, LMICA can extract all the sources quite efficiently.
where $\eta_T$ is the step size at the T-th time step, $z_i = x_{ik}^2$ (k is randomly selected from {1, ..., M} at each time step),

$$\alpha = \sum_i z_i, \qquad (5)$$

and

$$\beta = \sum_i z_i y_i. \qquad (6)$$

By calculating $\alpha$ and $\beta$ before the update for each i, each update requires just O(N) computation. Eq. (4) is guaranteed to converge to a local minimum of the objective function $\Phi(Y)$ if $\eta_T$ decreases sufficiently slowly ($\lim_{T \to \infty} \eta_T = 0$ and $\sum_T \eta_T = \infty$).
? (Y ) if ?T decreases sufficiently slowly (limT ?? ?T = 0 and ?T = ?).
Because the Y in the above method is continuous, each continuous yi is replaced by the
ranking of itself in Y in the last of the mapping phase. That is, yi := 1 for the largest
yi , yj := N for the smallest one, and so on. The corresponding permutation ? is given as
? (i) = yi .
The total procedure of the mapping phase for given X is described as follows:

mapping phase
1. $x_{ik} := x_{ik} - \bar{x}_i$ for each i, k, where $\bar{x}_i$ is the mean $\sum_k x_{ik} / M$.
2. $y_i = i$ and $\pi(i) = i$ for each i.
3. Until the convergence, repeat the following steps:
   (a) Select k randomly from {1, ..., M}, and let $z_i = x_{ik}^2$ for each i.
   (b) Update each $y_i$ by Eq. (4).
   (c) Normalize Y to satisfy $\sum_i y_i = 0$ and $\sum_i y_i^2 = 1$.
4. Discretize $y_i$.
5. Update X by $x_{\pi(i)k} := x_{ik}$ for each i and k.
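The whole procedure fits in a few lines of code. A sketch with an assumed step-size schedule (the paper only requires that the step size decrease slowly enough); the direction of the final ranking is immaterial for the neighborhood structure:

```python
import numpy as np

def mapping_phase(X, n_steps=20000, eta0=0.1, seed=0):
    """Returns the rearranged signals and the permutation pi of the mapping phase."""
    rng = np.random.default_rng(seed)
    N, M = X.shape
    X = X - X.mean(axis=1, keepdims=True)          # step 1: subtract the mean of each signal
    y = np.arange(N, dtype=float)                  # step 2: y_i = i
    y -= y.mean()
    y /= np.linalg.norm(y)
    for T in range(n_steps):                       # step 3: stochastic gradient updates
        eta = eta0 / (1.0 + T / 1000.0)            # one choice of decreasing step size
        z = X[:, rng.integers(M)] ** 2             # z_i = x_ik^2 for a random sample k
        alpha, beta = z.sum(), z @ y               # Eq. (5) and Eq. (6), O(N) per step
        y -= eta * (z * y * alpha - z * beta)      # Eq. (4)
        y -= y.mean()
        y /= np.linalg.norm(y)                     # keep sum y_i = 0 and sum y_i^2 = 1
    perm = np.empty(N, dtype=int)                  # step 4: discretize y to a ranking
    perm[np.argsort(-y)] = np.arange(N)            # pi(i) = rank of y_i
    Xp = np.empty_like(X)
    Xp[perm] = X                                   # step 5: x_{pi(i), k} := x_{i, k}
    return Xp, perm
```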
2.3 local-ICA phase
In the local-ICA phase, the following contrast function $\psi(X)$ (the sum of kurtoses) is used (the MaxKurt algorithm in [8]):

$$\psi(X) = -\sum_{i,k} x_{ik}^4, \qquad (7)$$
and $\psi(X)$ is minimized by "rotating" the neighbor pairs of signals (namely, under an orthogonal transformation). For each neighbor pair (i, i + 1), a rotation matrix $R_i(\theta)$ is given as

$$R_i(\theta) = \begin{pmatrix} I_{i-1} & 0 & 0 & 0 \\ 0 & \cos\theta & \sin\theta & 0 \\ 0 & -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & I_{N-i-1} \end{pmatrix}, \qquad (8)$$
where $I_n$ is the $n \times n$ identity matrix. Then, the optimal angle $\theta^*$ is given as

$$\theta^* = \arg\min_\theta \psi(X'), \qquad (9)$$

where $X'(\theta) = R_i(\theta) X$. After some tedious transformation of the equations (see [8]), it is shown that $\theta^*$ is determined analytically by the following equations:

$$\sin 4\theta^* = \frac{\alpha_{ij}}{\sqrt{\alpha_{ij}^2 + \beta_{ij}^2}}, \qquad \cos 4\theta^* = \frac{\beta_{ij}}{\sqrt{\alpha_{ij}^2 + \beta_{ij}^2}}, \qquad (10)$$

where

$$\alpha_{ij} = \sum_k \left( x_{ik}^3 x_{jk} - x_{ik} x_{jk}^3 \right), \qquad \beta_{ij} = \sum_k \frac{x_{ik}^4 + x_{jk}^4 - 6 x_{ik}^2 x_{jk}^2}{4},$$
and j = i + 1.
Now, the procedure of the local-ICA phase for given X is described as follows:
local-ICA phase
1. Let $W_{local} = I_N$, $A_{local} = I_N$.
2. For each i ∈ {1, ..., N - 1}:
   (a) Find out the optimal angle $\theta^*$ by Eq. (10).
   (b) $X := R_i(\theta^*) X$, $W_{local} := R_i(\theta^*) W_{local}$, and $A_{local} := A_{local} R_i(\theta^*)^t$.   (11)
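A sketch of one sweep over all neighbor pairs; arctan2 picks the angle whose sine and cosine match Eq. (10), and since the rotations are orthogonal, A_local is simply the transpose of the returned W_local:

```python
import numpy as np

def local_ica_phase(X):
    """One sweep of pairwise MaxKurt rotations over the neighbor pairs (i, i + 1)."""
    X = X.copy()
    N = X.shape[0]
    W = np.eye(N)
    for i in range(N - 1):
        xi, xj = X[i], X[i + 1]
        alpha = np.sum(xi ** 3 * xj - xi * xj ** 3)
        beta = np.sum(xi ** 4 + xj ** 4 - 6.0 * xi ** 2 * xj ** 2) / 4.0
        theta = np.arctan2(alpha, beta) / 4.0   # sin(4 theta), cos(4 theta) as in Eq. (10)
        c, s = np.cos(theta), np.sin(theta)
        X[i], X[i + 1] = c * xi + s * xj, -s * xi + c * xj   # apply R_i(theta), Eq. (8)
        W[i], W[i + 1] = c * W[i] + s * W[i + 1], -s * W[i] + c * W[i + 1]
    return X, W
```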
2.4 complete algorithm
The complete algorithm of LMICA for any given observed signals X is given by repeating the mapping phase and the local-ICA phase alternately. Here, $P_\pi$ is the permutation matrix corresponding to $\pi$.

linear multilayer ICA algorithm
1. Initial Settings: Let X be the given observed signal matrix, and let W and A be $I_N$.
2. Repetition: Do the following two phases alternately over L times.
   (a) Mapping Phase: Find out the optimal permutation matrix $P_\pi$ and the optimally-arranged signals X by the mapping phase. Then, $W := P_\pi W$ and $A := A P_\pi^t$.
   (b) Local-ICA Phase: Find out the optimal matrices $W_{local}$, $A_{local}$, and X. Then, $W := W_{local} W$ and $A := A A_{local}$.
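The complete loop then just alternates the two sketches above (this snippet relies on mapping_phase and local_ica_phase as defined there):

```python
import numpy as np

def lmica(X, n_layers=10):
    """Alternate the two phases; W accumulates the global unmixing, so S_hat = W X_orig."""
    X = X.copy()
    W = np.eye(X.shape[0])
    for _ in range(n_layers):
        X, perm = mapping_phase(X)        # mapping phase: X := P_pi X
        Wp = np.empty_like(W)
        Wp[perm] = W                      # W := P_pi W  (row pi(i) receives old row i)
        W = Wp
        X, W_local = local_ica_phase(X)   # local-ICA phase
        W = W_local @ W                   # W := W_local W
    return X, W
```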
2.5 some remarks
Relation to MaxKurt algorithm. Eq. (10) is just the same as in the MaxKurt algorithm [8]. The crucial difference between our LMICA and MaxKurt is that LMICA optimizes just the neighbor pairs instead of all the N(N - 1)/2 pairs in MaxKurt. In LMICA, the pairs with higher "costs" (higher $\sum_k x_{ik}^2 x_{jk}^2$) are brought nearer in the mapping phase. So, independent components can be extracted effectively by optimizing just the neighbor pairs.
Contrast function. For consistency between this paper and our previous work [10], the following contrast function $\phi$ is used in Section 3 instead of Eq. (7):

$$\phi(X) = \sum_{i,j,k} x_{ik}^2 x_{jk}^2. \qquad (12)$$
The minimization of Eq. (12) is equivalent to that of Eq. (7) under the orthogonal
transformation.
Pre-whitening. Though LMICA (which is based on MaxKurt) presupposes that X is
pre-whitened, the algorithm in Section 2.4 is applicable to any raw X without the
pre-whitening. Because a pre-whitening method suitable for LMICA has not been found yet, raw images of natural scenes are given as X in the numerical experiments in Section 3. In this non-whitened case, the mixing matrix A is limited to be orthogonal and the influence of the second-order statistics is not removed. Nevertheless, it will be shown in Section 3 that the higher-order statistics of X give rise to some interesting results.
3 Results
It has been well known that various local edge detectors can be extracted from natural scenes by the standard ICA algorithm [13][14]. Here, LMICA was applied to the same problem. 30000 samples of natural scenes of 12 × 12 pixels were given as the observed signals X. That is, N and M were 144 and 30000. The original natural scenes were downloaded at http://www.cis.hut.fi/projects/ica/data/images/. The number of layers L was set to 720, where one layer means one pair of the mapping and the local-ICA phases. For comparison, experiments without the mapping phase were carried out, where the mapping Y was randomly generated. In addition, the standard MaxKurt algorithm [8] was used with 10 iterations. The contrast function $\phi$ (Eq. (12)) was calculated
at each layer, and it was averaged over 10 independently generated Xs. Fig. 2-(a) shows the decreasing curves of $\phi$ for normal LMICA and the one without the mapping phase. The cross points show the result at each iteration of MaxKurt. Because one iteration of MaxKurt is equivalent to 72 layers of LMICA with respect to the number of optimizations of pairs of signals, a scaling (×72) is applied. Surprisingly, LMICA nearly converged to the optimal point within just 10 layers. The number of parameters within 10 layers is 143 × 10, which is much fewer than the degrees of freedom of A (144 × 143 / 2). It suggests that LMICA gives a quite suitable model for natural scenes. The calculation time with the values of $\phi$ is
shown in Table 1. It shows that the time costs of the mapping phase are not much higher than those of the local-ICA phase. The fact that 10 layers of LMICA required much less time (22 sec.) than one iteration of MaxKurt (94 sec.) and optimized $\phi$ approximately (4.91) verifies the efficiency of LMICA. Note that each iteration of MaxKurt cannot be stopped halfway. Fig. 3 shows 5 × 5 representative edge detectors at each layer of LMICA. At the 20th layer (Fig. 3-(a)), rough and local edge detectors were recognized, though they were a little unclear. As the layers proceeded, edge detectors became clearer and more global (see Figs. 3-(b) and 3-(c)). It is interesting that ICA-like local edges (where the higher-order statistics are dominant) at the early stage were transformed to PCA-like global edges (where the second-order statistics are dominant) at the later stage (see [13]). For comparison, Fig. 3-(d) shows the result at the 10th iteration of MaxKurt. It is similar to Fig. 3-(c), as expected.
In addition, we used large-size natural scenes. 100000 samples of natural scenes of 64 × 64 pixels were given as X. MaxKurt and other well-known ICA algorithms are not available for such a large-scale problem because they require huge computation. Fig. 2-(b) shows the decreasing curve of $\phi$ for the large-size natural scenes. LMICA was carried out over 1000 layers, and it consumed about 69 hours on an Intel 2.8 GHz CPU. It verifies
that LMICA is quite efficient in the analysis of large-size natural scenes. Fig. 4 shows some edge detectors generated at the 1000th layer. It is interesting that some "compound" detectors such as a "cross" were generated in addition to simple "long-edge" detectors. In a famous previous work [13] which applied ICA and PCA to small-size natural scenes, symmetric global edge detectors similar to our "compound" ones could be generated by PCA, which manages only the second-order statistics. On the other hand, asymmetric local edge detectors similar to our simple "long-edge" ones could not be generated by PCA and could only be extracted by ICA utilizing the higher-order statistics. In comparison, our LMICA could extract various local and global detectors simultaneously from large-size natural scenes. Besides, it is expected from the results for small-size images (see Fig. 3) that various other detectors are generated at each layer. In summary, those results show that LMICA can extract many useful and various detectors from large-size natural scenes efficiently. It is also interesting that there was a plateau in the neighborhood of the 10th layer. It suggests that large-size natural scenes may be generated by two different generative models, but a closer inspection is beyond the scope of this paper.
4 Conclusion
In this paper, we proposed the linear multilayer ICA algorithm (LMICA). We carried out some numerical experiments on natural scenes, which verified that LMICA can find out the approximations of independent components quite efficiently and that it is applicable to large problems. We are now analyzing the results of LMICA in large-size natural scenes of 64 × 64 pixels, and we are planning to apply this algorithm to quite large-scale images such as those of 256 × 256 pixels. We are also planning to utilize LMICA in the data mining
Table 1: Calculation time with the values of the contrast function $\phi$ (Eq. (12)). They are the averages over 10 runs at the 10th layer (approximation) and the 720th layer (convergence) in LMICA (the normal one and the one without the mapping phase). In addition, those of 10 iterations of MaxKurt (approximately corresponding to L = 10 × 72 = 720) are shown. They were calculated on an Intel 2.8 GHz CPU.

              LMICA              LMICA without mapping   MaxKurt (10 iterations)
10th layer    22 sec. (4.91)     9.3 sec. (17.6)         -
720th layer   1600 sec. (4.57)   670 sec. (4.57)         940 sec. (4.57)
of quite high-dimensional data spaces, such as text mining. In addition, we are trying to find a pre-whitening method suitable for LMICA. Some normalization techniques in the local-ICA phase may be promising.
References
[1] C. Jutten and J. Herault. Blind separation of sources (part I): An adaptive algorithm based on neuromimetic architecture. Signal Processing, 24(1):1-10, July 1991.
[2] P. Comon. Independent component analysis - a new concept? Signal Processing, 36:287-314, 1994.
[3] A. J. Bell and T. J. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Computation, 7:1129-1159, 1995.
[4] J.-F. Cardoso and B. Laheld. Equivariant adaptive source separation. IEEE Transactions on Signal Processing, 44(12):3017-3030, December 1996.
[5] A. Hyvärinen and E. Oja. A fast fixed-point algorithm for independent component analysis. Neural Computation, 9(7):1483-1492, 1997.
[6] A. Hyvärinen. Fast and robust fixed-point algorithms for independent component analysis. IEEE Transactions on Neural Networks, 10(3):626-634, 1999.
[7] J.-F. Cardoso and A. Souloumiac. Blind beamforming for non-Gaussian signals. IEE Proceedings-F, 140(6):362-370, December 1993.
[8] J.-F. Cardoso. High-order contrasts for independent component analysis. Neural Computation, 11(1):157-192, January 1999.
[9] Y. Matsuda and K. Yamaguchi. Linear multilayer ICA algorithm integrating small local modules. In Proceedings of ICA2003, pages 403-408, Nara, Japan, 2003.
[10] Y. Matsuda and K. Yamaguchi. Linear multilayer independent component analysis using stochastic gradient algorithm. In Independent Component Analysis and Blind Source Separation - ICA2004, volume 3195 of LNCS, pages 303-310, Granada, Spain, September 2004. Springer-Verlag.
[11] Y. Matsuda and K. Yamaguchi. Global mapping analysis: stochastic approximation for multidimensional scaling. International Journal of Neural Systems, 11(5):419-426, 2001.
[12] Y. Matsuda and K. Yamaguchi. An efficient MDS-based topographic mapping algorithm. Neurocomputing, 2005. In press.
[13] A. J. Bell and T. J. Sejnowski. The "independent components" of natural scenes are edge filters. Vision Research, 37(23):3327-3338, December 1997.
[14] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proceedings of the Royal Society of London: B, 265:359-366, 1998.
Figure 2: Decreasing curve of the contrast function $\phi$ along the number of layers (in log scale): (a) For small-size natural scenes of 12 × 12 pixels. The normal and dotted curves show the decreases of $\phi$ by LMICA and the one without the mapping phase (random mapping), respectively. The cross points show the results of MaxKurt; each iteration of MaxKurt approximately corresponds to 72 layers with respect to the number of optimizations of pairs of signals. (b) For large-size natural scenes of 64 × 64 pixels. The curve displays the decrease of $\phi$ by LMICA over 1000 layers.
Figure 3: Representative edge detectors from natural scenes of 12 × 12 pixels: (a) the basis vectors generated by LMICA at the 20th layer; (b) at the 100th layer; (c) at the 720th layer; (d) the ones after 10 iterations of the MaxKurt algorithm.
Figure 4: Representative edge detectors from natural scenes of 64 × 64 pixels.
Unsupervised Variational Bayesian
Learning of Nonlinear Models
Antti Honkela and Harri Valpola
Neural Networks Research Centre, Helsinki University of Technology
P.O. Box 5400, FI-02015 HUT, Finland
{Antti.Honkela, Harri.Valpola}@hut.fi
http://www.cis.hut.fi/projects/bayes/
Abstract
In this paper we present a framework for using multi-layer perceptron (MLP) networks in nonlinear generative models trained
by variational Bayesian learning. The nonlinearity is handled by
linearizing it using a Gauss-Hermite quadrature at the hidden neurons. This yields an accurate approximation for cases of large posterior variance. The method can be used to derive nonlinear counterparts for linear algorithms such as factor analysis, independent
component/factor analysis and state-space models. This is demonstrated with a nonlinear factor analysis experiment in which even
20 sources can be estimated from a real world speech data set.
1 Introduction
Linear latent variable models such as factor analysis, principal component analysis
(PCA) and independent component analysis (ICA) [1] are used in many applications
ranging from engineering to social sciences and psychology. In many of these cases,
the effect of the desired factors or sources to the observed data is, however, not
linear. A nonlinear model could therefore produce better results.
The method presented in this paper can be used as a basis for many nonlinear
latent variable models, such as nonlinear generalizations of the above models. It is
based on the variational Bayesian framework, which provides a solid foundation for
nonlinear modeling that would otherwise be prone to overfitting [2]. It also allows
for easy comparison of different model structures, which is even more important for
flexible nonlinear models than for simpler linear models.
General nonlinear generative models for data x(t) of the type
x(t) = f(s(t), \theta_f) + n(t) = B\,\phi(A s(t) + a) + b + n(t) \qquad (1)
often employ a multi-layer perceptron (MLP) (as in the equation) or a radial basis
function (RBF) network to model the nonlinearity. Here s(t) are the latent variables
of the model, n(t) is noise and ? f are the parameters of the nonlinearity, in case
of MLP the weight matrices A, B and bias vectors a, b. In context of variational
Bayesian methods, RBF networks seem more popular of the two because it is easier
to evaluate analytic expressions and bounds for certain key quantities [3]. With
MLP networks such values are not as easily available and one usually has to resort
to numeric approximations. Nevertheless, MLP networks can often, especially for
nearly linear models and in high dimensional spaces, provide an equally good model
with fewer parameters [4]. This is important with generative models whose latent
variables are independent or at least uncorrelated and the intrinsic dimensionality
of the input is large. A reasonable approximate bound for a good model is also
often better than a strict bound for a bad model.
Most existing applications of variational Bayesian methods for nonlinear models
are concerned with the supervised case where the inputs of the network are known
and only the weights have to be learned [3, 5]. This is easier as there are fewer
parameters with related posterior variance above the nonlinear hidden layer and
the distributions thus tend to be easier to handle.
In this paper we present a novel method for evaluating the statistics of the outputs
of an MLP network in context of unsupervised variational Bayesian learning of its
weights and inputs. The method is demonstrated with a nonlinear factor analysis
problem. The new method allows for reliable estimation of a larger number of
factors than before [6, 7].
2 Variational learning of unsupervised MLPs
Let us denote the observed data by X = {x(t)|t}, the latent variables of the model
by S = {s(t)|t} and the model parameters by θ = (θ_i). The nonlinearity (1) can be
used as a building block of many different models depending on the model assumed
for the sources S. A simple Gaussian prior on S leads to a nonlinear factor analysis
(NFA) model [6, 7] that is studied here because of its simplicity. The method could
easily be extended with a mixture-of-Gaussians prior on S [8] to get a nonlinear
independent factor analysis model, but this is omitted here. In many nonlinear
blind source separation (BSS) problems it is enough to apply simple NFA followed
by linear ICA postprocessing to achieve nonlinear BSS [6, 7]. Another possible
extension would be to include dynamics for S as in [9].
In order to deal with the flexible nonlinear models, a powerful learning paradigm
resistant to overfitting is needed. The variational Bayesian method of ensemble
learning [2] has proven useful here. Ensemble learning is based on approximating
the true posterior p(S,θ|X) with a tractable approximation q(S,θ), typically a
multivariate Gaussian with a diagonal covariance. The approximation is fitted to
minimize the cost
C = \left\langle \log \frac{q(S,\theta)}{p(S,\theta,X)} \right\rangle = D(q(S,\theta)\,\|\,p(S,\theta|X)) - \log p(X) \qquad (2)
where ⟨·⟩ denotes expectation over q(S,θ) and D(q‖p) is the Kullback-Leibler divergence between q and p. As the Kullback-Leibler divergence is always non-negative,
C yields an upper bound for −log p(X) and thus a lower bound for the evidence
p(X). The cost can be evaluated analytically for a large class of mainly linear
models [10, 11] leading to simple and efficient learning algorithms.
2.1 Evaluating the cost
Unfortunately, the cost (2) cannot be evaluated analytically for the nonlinear model
(1). Assuming a Gaussian noise model, the likelihood term of C becomes
C_x = \langle -\log p(X|S,\theta) \rangle = \sum_t \left\langle -\log N\!\left(x(t);\, f(s(t),\theta_f),\, \Sigma_x\right) \right\rangle. \qquad (3)
The term C_x depends on the first and second moments of f(s(t), θ_f) over the posterior approximation q(S,θ), and they cannot easily be evaluated analytically. Assuming the noise covariance is diagonal, the cross terms of the covariance of the
output are not needed, only the scalar variances of the different components.
If the activation functions of the MLP network were linear, the output mean and
variance could be evaluated exactly using only the mean and variance of the inputs
s(t) and θ_f. Thus a natural first approximation would be to linearize the network
about the input mean using derivatives [6]. Taking the derivative with respect to
s(t), for instance, yields
\frac{\partial f(s(t), \theta_f)}{\partial s(t)} = B\, \mathrm{diag}(\phi'(y(t)))\, A, \qquad (4)
where diag(v) denotes a diagonal matrix with elements of vector v on the main
diagonal and y(t) = As(t) + a. Due to the local nature of the approximation,
this can lead to severe underestimation of the variance, especially when the hidden
neurons of the MLP network operate in the saturated region. This makes the
nonlinear factor analysis algorithm using this approach unstable with large number
of factors because the posterior variance corresponding to the last factors is typically
large.
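To make the failure mode concrete, here is a minimal sketch, not the authors' code, of the derivative-based propagation that Eq. (4) implies, assuming tanh activations and a diagonal source covariance; all names are illustrative.

```python
# Sketch of Taylor (derivative-based) variance propagation through the MLP.
# Assumes tanh activations and a diagonal covariance for the sources s.
import numpy as np

def taylor_output_variance(A, B, a, s_mean, s_var):
    y = A @ s_mean + a                           # hidden inputs at the source mean
    J = B @ np.diag(1.0 - np.tanh(y) ** 2) @ A   # Jacobian of Eq. (4), phi' = 1 - tanh^2
    return (J ** 2) @ s_var                      # diagonal of J Cov(s) J^T
```

Because the derivative is evaluated only at the input mean, the returned variance collapses toward zero whenever the hidden units are saturated there, which is exactly the underestimation described above.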
To avoid this problem, we propose using a Gauss–Hermite quadrature to evaluate an effective linearization of the nonlinear activation functions φ(y_i(t)). The Gauss–Hermite quadrature is a method for approximating weighted integrals
\int_{-\infty}^{\infty} f(x)\, \exp(-x^2)\, dx \approx \sum_k w_k\, f(t_k), \qquad (5)
where the weights w_k and abscissas t_k are selected by requiring an exact result for a suitable number of low-order polynomials. This allows evaluating the mean and variance of φ(y_i(t)) by the quadratures
\phi(y_i(t))_{GH} = \sum_k w'_k\, \phi\!\left( \bar{y}_i(t) + t'_k \sqrt{\tilde{y}_i(t)} \right) \qquad (6)
\tilde{\phi}(y_i(t))_{GH} = \sum_k w'_k \left[ \phi\!\left( \bar{y}_i(t) + t'_k \sqrt{\tilde{y}_i(t)} \right) - \phi(y_i(t))_{GH} \right]^2, \qquad (7)
respectively. Here the weights and abscissas have been scaled to take into account the Gaussian pdf weight instead of exp(−x²), and \bar{y}_i(t) and \tilde{y}_i(t) are the mean and variance of y_i(t), respectively. We used a three-point quadrature that yields accurate enough results but can be evaluated quickly. Using e.g. five points improves the accuracy slightly, but slows the computation down significantly. As both of the quadratures depend on φ at the same points, they can be evaluated together easily.
Using the approximation formula \tilde{\phi}(y_i(t)) = \phi'(y_i(t))^2\, \tilde{y}_i(t), the resulting mean and variance can be interpreted to yield an effective linearization of φ(y_i(t)) through
\langle \phi(y_i(t)) \rangle := \phi(y_i(t))_{GH}, \qquad \langle \phi'(y_i(t)) \rangle := \sqrt{ \frac{\tilde{\phi}(y_i(t))_{GH}}{\tilde{y}_i(t)} }. \qquad (8)
The positive square root is used here because the derivative of the logistic sigmoid
used as activation function is always positive. Using these to linearize the MLP as
in Eq. (4), the exact mean and variance of the linearized model can be evaluated in
a relatively straightforward manner. Evaluation of the variance due to the sources
requires propagating matrices through the network to track the correlations between
the hidden units. Hence the computational complexity depends quadratically on
the number of sources. The same problem does not affect the network weights as
each parameter only affects the value of one hidden neuron.
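As an illustration, the following sketch (our own, with illustrative names; not the authors' code) evaluates Eqs. (6)–(8) for a single hidden unit with tanh activation, using NumPy's Gauss–Hermite nodes and weights.

```python
# Gauss-Hermite approximation of the mean and variance of phi(y) for
# Gaussian y ~ N(y_mean, y_var), following Eqs. (6)-(7), with a 3-point rule.
import numpy as np

def gh_moments(y_mean, y_var, phi=np.tanh, n_points=3):
    t, w = np.polynomial.hermite.hermgauss(n_points)  # rule for exp(-x^2) weight
    y = y_mean + np.sqrt(2.0 * y_var) * t             # rescale to the Gaussian pdf
    w = w / np.sqrt(np.pi)
    mean = np.sum(w * phi(y))                         # Eq. (6)
    var = np.sum(w * (phi(y) - mean) ** 2)            # Eq. (7)
    return mean, var

m, v = gh_moments(y_mean=0.5, y_var=4.0)              # large posterior variance
slope = np.sqrt(v / 4.0)                              # effective <phi'(y)> of Eq. (8)
```

Unlike the Taylor sketch above, the slope here reflects the global behaviour of the activation over the whole input distribution, so it stays sensible even when the unit is saturated at the mean.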
2.2 Details of the approximation
The mean and variance of φ(y_i(t)) depend on the distribution of y_i(t). The Gauss–Hermite quadrature assumes that y_i(t) is Gaussian. This is not true in our case,
as the product of two independent normally distributed variables aij and sj (t) is
super-Gaussian, although rather close to Gaussian if the mean of one of the variables
is significantly larger in absolute value than the standard deviation. In case of N
sources, the actual input yi (t) is a sum of N of these and a Gaussian variable and
therefore rather close to a Gaussian, at least for larger values of N .
Ignoring the non-Gaussianity, the quadrature depends on the mean and variance of
y_i(t). These can be evaluated exactly because of the linearity of the mapping as
\tilde{y}_{i,\mathrm{tot}}(t) = \sum_j \left[ \tilde{A}_{ij} \left( \bar{s}_j(t)^2 + \tilde{s}_j(t) \right) + \bar{A}_{ij}^2\, \tilde{s}_j(t) \right] + \tilde{a}_i, \qquad (9)
where a bar (e.g. Ā_ij) denotes the posterior mean and a tilde (Ã_ij) the posterior variance of the corresponding quantity. Here it is assumed that the posterior approximations q(S) and q(θ_f) have diagonal covariances. Full covariances can be used instead without too much difficulty, if necessary.
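For illustration, a short sketch of Eq. (9) under these diagonal-covariance assumptions; the means and variances are passed in as separate arrays, and the names are ours.

```python
# Mean and total variance of y(t) = A s(t) + a when A, a and s all have
# independent Gaussian posteriors with the given means and variances (Eq. (9)).
import numpy as np

def y_statistics(A_mean, A_var, a_mean, a_var, s_mean, s_var):
    y_mean = A_mean @ s_mean + a_mean
    # Variance of each product A_ij * s_j, summed over j, plus the bias term.
    y_var = A_var @ (s_mean ** 2 + s_var) + (A_mean ** 2) @ s_var + a_var
    return y_mean, y_var
```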
In an experiment investigating the approximation accuracy with a random
MLP [12], the Taylor approximation was found to underestimate the output variance by a factor of 400, at worst. The worst case result of the above approximation
was underestimation by a factor of 40, which is a great improvement over the Taylor approximation, but still far from perfect. The worst case behavior could be
improved to underestimation by a factor of 5 by introducing another quadrature
evaluated with a different variance for yi (t). This change cannot be easily justified
except by the fact that it produces better results. The difference in behavior of
the two methods in more realistic cases is less drastic, but the version with two
quadratures seems to provide more accurate approximations.
The more accurate approximation is implemented by evaluating another quadrature
using the variance of y_i(t) originating mainly from θ_f,
\tilde{y}_{i,\mathrm{weight}}(t) = \sum_j \tilde{A}_{ij} \left( \bar{s}_j(t)^2 + \tilde{s}_j(t) \right) + \tilde{a}_i, \qquad (10)
and using the implied ⟨φ′(y_i(t))⟩ in the evaluation of the effects of these variances.
The total variance (9) is still used in evaluation of the means and the evaluation of
the effects of the variance of s(t).
2.3 Learning algorithm for nonlinear factor analysis
The nonlinear factor analysis (NFA) model [6] is learned by numerically minimizing
the cost C evaluated above. The minimization algorithm is a combination of conjugate gradient for the means of S and θ_f, fixed point iteration for the variances of S and θ_f, and EM-like updates for other parameters and hyperparameters.
The fixed point update algorithm for the variances follows from writing the cost
function as a sum
C = C_q + C_p = \langle \log q(S,\theta) \rangle + \langle -\log p(S,\theta,X) \rangle. \qquad (11)
A parameter θ_i that is assumed independent of others under q and has a Gaussian posterior approximation q(\theta_i) = N(\theta_i; \bar{\theta}_i, \tilde{\theta}_i) only affects the corresponding negentropy term -\frac{1}{2}\log(2\pi e \tilde{\theta}_i) in C_q. Differentiating this with respect to \tilde{\theta}_i and setting the result to zero leads to the fixed point update rule \tilde{\theta}_i = \left[ 2\, \partial C_p / \partial \tilde{\theta}_i \right]^{-1}. In order to get a stable update algorithm for the variances, dampening by halving the step on log scale until the cost function does not increase must be added to the fixed point updates. The variance is increased at most by 10 % on one iteration and not set to a negative value even if the gradient is negative.
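A sketch of our reading of this damped update for a single variance v = \tilde{\theta}_i; the cost-function handle and the exact damping loop are assumptions, not the authors' implementation.

```python
# Damped fixed-point update for one posterior variance: v <- 1/(2 dCp/dv),
# with growth capped at +10% and the step halved on the log scale until
# the cost no longer increases.
import numpy as np

def update_variance(v, dCp_dv, cost_fn, max_halvings=10):
    if dCp_dv <= 0.0:                    # never move toward a negative variance
        return v
    target = min(1.0 / (2.0 * dCp_dv), 1.1 * v)   # cap the increase at 10%
    step = np.log(target) - np.log(v)
    c0 = cost_fn(v)
    for _ in range(max_halvings):
        v_new = float(np.exp(np.log(v) + step))
        if cost_fn(v_new) <= c0:         # accept once the cost does not increase
            return v_new
        step *= 0.5                      # dampen by halving the log-scale step
    return v
```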
The required partial derivatives can be evaluated analytically with simple backpropagation-like computations with the MLP network. The quadratures used at
hidden nodes lead to analytical expressions for the means and variances of the hidden nodes and the corresponding feedback gradients are easy to derive. Along with
the derivatives with respect to variances, it is easy to evaluate them with respect to
means of the same parameters. These derivatives can then be used in a conjugate
gradient algorithm to update the means of S and θ_f.
Due to the flexibility of the MLP network and the gradient based learning algorithm,
the nonlinear factor analysis method is sensitive to the initialization. We have used
linear PCA for initialization of the means of the sources S. The means of the
weights θ_f are initialized randomly while all the variances are initialized to small
constant values. After this, the sources are kept fixed for 20 iterations while only the
network weights are updated. The hyperparameters governing noise and parameter
distributions are only updated after 80 more iterations to update the sources and the
MLP. By that time, a reasonable model of the data has been learned and the method
is not likely to prune away all the sources and other parameters as unnecessary.
2.4 Other approximation methods
Another way to get a more robust approximation for the statistics of f would be
to use the deterministic sampling approach used in unscented transform [13] and
consecutively in different unscented algorithms. Unfortunately this approach does
not work very well in high dimensional cases. The unscented transform also ignores
all the prior information on the form of the nonlinearity. In case of the MLP
network, everything except the scalar activation functions is known to be linear.
All information on the correlations of variables is also ignored, which leads to loss
of accuracy when the output depends on products of input variables like in our case.
In an experiment of mean and log-variance approximation accuracy with a relatively
large random MLP [12], the unscented transform needed over 100 % more time to
achieve results with 10 times the mean squared error of the proposed approach.
Part of our problem was also faced by Barber and Bishop in their work on ensemble
learning for supervised learning of MLP networks [5]. In their work the inputs s(t)
of the network are part of the data and thus have no associated variance. This
makes the problem easier as the inputs y(t) of the hidden neurons are Gaussian.
By using the cumulative Gaussian distribution or the error function erf as the
activation function, the mean of the outputs of the hidden neurons and thus of the
outputs of the whole network can be evaluated analytically. The covariances still
need to be evaluated numerically, and that is done by evaluating all the correlations
of the hidden neurons separately. In a network with H hidden neurons, this requires
O(H 2 ) quadrature evaluations.
In our case the inputs of the hidden neurons are not Gaussian and hence even the
error function as the activation function would not allow for exact evaluation of the
means. This is why we have decided to use the standard logistic sigmoid activation
function in form of tanh which is more common and faster to evaluate numerically.
In our approach all the required means and variances can be evaluated with O(H)
quadratures.
3 Experiments
The proposed nonlinear factor analysis method was tested on natural speech data
set consisting of spectrograms of 24 individual words of Finnish speech, spoken by
20 different speakers. The spectra were modified to mimic the reception abilities of
the human ear. This is a standard preprocessing procedure for speech recognition.
No speaker or word information was used in learning, the spectrograms of different
words were simply blindly concatenated. The preprocessed data consisted of 2547
30-dimensional spectrogram vectors.
The data set was tested with two different learning algorithms for the NFA model,
one based on the Taylor approximation introduced in [6] and another based on the
proposed approximation. Contrary to [6], the algorithm based on Taylor approximation used the same conjugate gradient based optimization algorithm as the new
approximation. This helped greatly in stabilizing the algorithm that used to be
rather unstable with high source dimensionalities due to sensitivity of the Taylor
approximation in regions where it is not really valid. Both algorithms were tested
using 1 to 20 sources, each number with four different random initializations for the
MLP network weights. The number of hidden neurons in the MLP network was 40.
The learning algorithm was run for 2000 iterations.¹
[Figure 1: two scatter plots of attained cost against reference cost, both in nats per sample.]
Figure 1: The attained values of C in different simulations as evaluated by the different approximations plotted against reference values evaluated by sampling. The
left subfigure shows the values from experiments using the proposed approximation
and the right subfigure from experiments using the Taylor approximation.
Fig. 1 shows a comparison of the cost function values evaluated by the different approximations and a reference value evaluated by sampling. The reference cost values
were evaluated by sampling 400 points from the distribution q(S, θ_f), evaluating f(s, θ_f) at those points, and using the mean and variance of the output points in the cost function evaluation. The accuracy of the procedure was checked by performing the evaluation 100 times for one of the simulations. The standard deviation of the values was 5 × 10⁻³ nats per sample, which should not show at all in the figures. The unit nat here signifies the use of the natural logarithm in Eq. (2).
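The sampling procedure can be summarized by the following sketch (illustrative names; f is assumed to be a callable implementing the MLP of Eq. (1)).

```python
# Monte Carlo reference for the output statistics: sample sources and weights
# from the diagonal Gaussian posterior q, push them through f, and use the
# empirical mean and variance of the outputs in the cost term of Eq. (3).
import numpy as np

def mc_output_stats(f, s_mean, s_var, th_mean, th_var, n_samples=400, seed=0):
    rng = np.random.default_rng(seed)
    outs = []
    for _ in range(n_samples):
        s = s_mean + np.sqrt(s_var) * rng.standard_normal(s_mean.shape)
        th = th_mean + np.sqrt(th_var) * rng.standard_normal(th_mean.shape)
        outs.append(f(s, th))
    outs = np.asarray(outs)
    return outs.mean(axis=0), outs.var(axis=0)
```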
The results in Fig. 1 show that the proposed approximation yields consistently very
¹ The Matlab code used in the experiments is available at http://www.cis.hut.fi/projects/bayes/software/.
[Figure 2: attained cost function value (nats/sample) versus number of sources (5 to 20); proposed approximation on the left and Taylor approximation on the right, each shown with reference values.]
Figure 2: The attained value of C in simulations with different numbers of sources.
The values shown are the means of 4 simulations with different random initializations. The left subfigure shows the values from experiments using the proposed
approximation and the right subfigure from experiments using the Taylor approximation. Both values are compared to reference values evaluated by sampling.
reliable estimates of the true cost, although it has a slight tendency to underestimate
it. The older Taylor approximation [6] breaks down completely in some cases and
reports very small costs even though the true value can be significantly larger.
The situations where the Taylor approximation fails are illustrated in Fig. 2, which
shows the attained cost as a function of number of sources used. The Taylor approximation shows a decrease in cost as the number of the sources increases even though
the true cost is increasing rapidly. The behavior of the proposed approximation is
much more consistent and qualitatively correct.
4 Discussion
The problem of estimating the statistics of a nonlinear transform of a probability
distribution is also encountered in nonlinear extensions of Kalman filtering. The
Taylor approximation corresponds to the extended Kalman filter, and the new approximation can be seen as a modification of it with a more accurate linearization. This
opens up many new potential applications in time series analysis and elsewhere.
The proposed method is somewhat similar to unscented Kalman filtering based on
the unscented transform [13], but much better suited for high dimensional MLP-like
nonlinearities. This is not very surprising, as worst case complexity of general Gaussian integration is exponential with respect to the dimensionality of the input [14]
and unscented transform as a general method with linear complexity is bound to
be less accurate in high dimensional problems. In case of the MLP, the complexity
of the unscented transform depends on the number of all weights, which in our case
with 20 sources can be more than 2000.
5 Conclusions
In this paper we have proposed a novel approximation method for unsupervised
MLP networks in variational Bayesian learning. The approximation is based on
using numerical Gauss–Hermite quadratures to evaluate the global effect of the
nonlinear activation function of the network to produce an effective linearization of
the MLP. The statistics of the outputs of the linearized network can be evaluated
exactly to get accurate and reliable estimates of the statistics of the MLP outputs.
These can be used to evaluate the standard variational Bayesian ensemble learning
cost function C and numerically minimize it using a hybrid fixed point / conjugate
gradient algorithm.
We have demonstrated the method with a nonlinear factor analysis model and a
real world speech data set. It was able to reliably estimate all the 20 factors we
attempted from the 30-dimensional data set. The presented method can be used
together with linear ICA for nonlinear BSS [7], and the approximation can be easily
applied to more complex models such as nonlinear independent factor analysis [6]
and nonlinear state-space models [9].
Acknowledgments
The authors wish to thank David Barber, Markus Harva, Bert Kappen, Juha
Karhunen, Uri Lerner and Tapani Raiko for useful comments and discussions. This
work was supported in part by the IST Programme of the European Community,
under the PASCAL Network of Excellence, IST-2002-506778. This publication only
reflects the authors? views.
References
[1] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. J. Wiley, 2001.
[2] G. E. Hinton and D. van Camp. Keeping neural networks simple by minimizing the description length of the weights. In Proc. of the 6th Ann. ACM Conf. on Computational Learning Theory, pp. 5–13, Santa Cruz, CA, USA, 1993.
[3] P. Sykacek and S. Roberts. Adaptive classification by variational Kalman filtering. In Advances in Neural Information Processing Systems 15, pp. 753–760. MIT Press, 2003.
[4] S. Haykin. Neural Networks – A Comprehensive Foundation, 2nd ed. Prentice-Hall, 1999.
[5] D. Barber and C. Bishop. Ensemble learning for multi-layer networks. In Advances in Neural Information Processing Systems 10, pp. 395–401. MIT Press, 1998.
[6] H. Lappalainen and A. Honkela. Bayesian nonlinear independent component analysis by multi-layer perceptrons. In M. Girolami, ed., Advances in Independent Component Analysis, pp. 93–121. Springer-Verlag, Berlin, 2000.
[7] H. Valpola, E. Oja, A. Ilin, A. Honkela, and J. Karhunen. Nonlinear blind source separation by variational Bayesian learning. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, E86-A(3):532–541, 2003.
[8] H. Attias. Independent factor analysis. Neural Computation, 11(4):803–851, 1999.
[9] H. Valpola and J. Karhunen. An unsupervised ensemble learning method for nonlinear dynamic state-space models. Neural Computation, 14(11):2647–2692, 2002.
[10] H. Attias. A variational Bayesian framework for graphical models. In Advances in Neural Information Processing Systems 12, pp. 209–215. MIT Press, 2000.
[11] Z. Ghahramani and M. Beal. Propagation algorithms for variational Bayesian learning. In Advances in Neural Information Processing Systems 13, pp. 507–513. MIT Press, 2001.
[12] A. Honkela. Approximating nonlinear transformations of probability distributions for nonlinear independent component analysis. In Proc. 2004 IEEE Int. Joint Conf. on Neural Networks (IJCNN 2004), pp. 2169–2174, Budapest, Hungary, 2004.
[13] S. Julier and J. K. Uhlmann. A general method for approximating nonlinear transformations of probability distributions. Technical report, Robotics Research Group, Department of Engineering Science, University of Oxford, 1996.
[14] F. Curbera. Delayed curse of dimension for Gaussian integration. Journal of Complexity, 16(2):474–506, 2000.
| 2564 |@word version:1 polynomial:1 seems:1 nd:1 open:1 hyv:1 simulation:4 linearized:2 covariance:6 solid:1 kappen:1 moment:1 electronics:1 series:1 existing:1 surprising:1 activation:8 negentropy:1 dx:1 must:1 tot:1 cruz:1 realistic:1 numerical:1 analytic:1 update:7 generative:3 fewer:2 selected:1 haykin:1 provides:1 node:2 simpler:1 hermite:5 five:1 along:1 ilin:1 manner:1 excellence:1 ica:3 behavior:3 abscissa:2 multi:4 actual:1 curse:1 increasing:1 becomes:1 project:2 estimating:1 linearity:1 interpreted:1 spoken:1 transformation:2 exactly:3 scaled:1 unit:2 normally:1 before:1 positive:2 engineering:2 local:1 oxford:1 reception:1 initialization:4 studied:1 decided:1 acknowledgment:1 block:1 backpropagation:1 procedure:2 significantly:3 word:3 radial:1 get:4 cannot:3 close:2 prentice:1 context:2 writing:1 www:2 deterministic:1 demonstrated:3 straightforward:1 stabilizing:1 simplicity:1 rule:1 handle:1 updated:2 exact:3 element:1 recognition:1 observed:2 worst:4 region:2 decrease:1 complexity:5 nats:7 dynamic:2 trained:1 depend:2 yei:7 basis:2 completely:1 easily:6 joint:1 harri:2 effective:3 whose:1 larger:4 otherwise:1 ability:1 statistic:5 erf:1 transform:7 beal:1 analytical:1 propose:1 product:2 budapest:1 rapidly:1 hungary:1 flexibility:1 achieve:2 description:1 produce:3 perfect:1 tk:2 derive:2 depending:1 linearize:2 propagating:1 ij:1 eq:2 implemented:1 girolami:1 correct:1 filter:1 consecutively:1 human:1 everything:1 arinen:1 generalization:1 really:1 extension:2 unscented:8 hut:4 hall:1 exp:2 great:1 mapping:1 finland:1 a2:1 omitted:1 estimation:1 proc:2 tanh:1 uhlmann:1 sensitive:1 weighted:1 reflects:1 minimization:1 mit:4 gaussian:15 always:2 super:1 modified:1 rather:3 avoid:1 publication:1 improvement:1 consistently:1 likelihood:1 mainly:2 greatly:1 camp:1 typically:2 hidden:13 originating:1 classification:1 flexible:2 pascal:1 integration:2 sampling:5 unsupervised:5 nearly:1 mimic:1 others:1 report:2 employ:1 randomly:1 oja:2 lerner:1 divergence:2 comprehensive:1 individual:1 delayed:1 consisting:1 dampening:1 mlp:23 evaluation:8 severe:1 saturated:1 mixture:1 accurate:7 integral:1 partial:1 necessary:1 taylor:13 logarithm:1 initialized:2 desired:1 plotted:1 subfigure:4 fitted:1 instance:1 increased:1 modeling:1 signifies:1 cost:22 introducing:1 deviation:2 too:1 fundamental:1 sensitivity:1 together:2 quickly:1 squared:1 ear:1 conf:2 resort:1 derivative:6 leading:1 account:1 potential:1 nonlinearities:1 wk:2 gaussianity:1 int:1 blind:2 depends:5 root:1 helped:1 break:1 view:1 bayes:2 lappalainen:1 mlps:1 minimize:2 square:1 accuracy:5 variance:32 ensemble:6 yield:6 bayesian:13 checked:1 ed:2 against:1 underestimate:2 pp:7 associated:1 popular:1 dimensionality:3 improves:1 attained:3 supervised:2 improved:1 evaluated:21 box:1 done:1 though:2 governing:1 sykacek:1 correlation:3 honkela:5 until:1 ei:5 nonlinear:38 propagation:1 logistic:2 ieice:1 building:1 effect:4 usa:1 requiring:1 true:5 consisted:1 counterpart:1 analytically:5 hence:2 leibler:2 i2:1 illustrated:1 deal:1 speaker:2 linearizing:1 pdf:1 cp:2 gh:5 postprocessing:1 ranging:1 variational:14 novel:2 fi:4 sigmoid:2 common:1 julier:1 slight:1 numerically:4 ai:2 nonlinearity:5 centre:1 resistant:1 stable:1 posterior:7 multivariate:1 certain:1 verlag:1 yi:18 seen:1 tapani:1 somewhat:1 spectrogram:3 prune:1 paradigm:1 full:1 technical:1 faster:1 cross:1 equally:1 halving:1 expectation:1 blindly:1 iteration:5 robotics:1 justified:1 separately:1 source:20 operate:1 finnish:1 strict:1 comment:1 tend:1 contrary:1 
e86:1 seem:1 easy:3 concerned:1 enough:2 affect:3 psychology:1 attias:2 expression:2 handled:1 pca:2 speech:5 matlab:1 ignored:1 useful:2 santa:1 wk0:2 http:2 estimated:1 track:1 per:1 ist:2 key:1 four:1 group:1 nevertheless:1 preprocessed:1 kept:1 sum:2 run:1 powerful:1 reasonable:2 separation:2 layer:5 bound:6 followed:1 encountered:1 ijcnn:1 helsinki:1 x2:2 software:1 markus:1 performing:1 relatively:2 department:1 combination:1 conjugate:4 slightly:1 em:1 modification:1 equation:1 needed:3 tractable:1 drastic:1 available:2 gaussians:1 apply:1 away:1 denotes:3 assumes:1 include:1 graphical:1 concatenated:1 ghahramani:1 especially:2 approximating:4 implied:1 added:1 quantity:1 diagonal:5 gradient:7 valpola:4 thank:1 berlin:1 barber:3 unstable:2 assuming:2 code:1 kalman:4 length:1 cq:2 minimizing:2 unfortunately:2 hlog:1 robert:1 negative:3 slows:1 reliably:1 upper:1 neuron:9 juha:1 sej:3 situation:1 extended:2 hinton:1 communication:1 bert:1 community:1 introduced:1 david:1 required:2 learned:3 quadratically:1 able:1 usually:1 reliable:3 suitable:1 natural:3 difficulty:1 hybrid:1 older:1 technology:1 raiko:1 faced:1 prior:3 loss:1 filtering:3 proven:1 foundation:2 consistent:1 uncorrelated:1 prone:1 elsewhere:1 supported:1 last:1 antti:2 keeping:1 aij:1 bias:1 allow:1 perceptron:2 taking:1 differentiating:1 absolute:1 distributed:1 van:1 feedback:1 bs:3 dimension:1 world:2 numeric:1 evaluating:6 cumulative:1 ignores:1 valid:1 qualitatively:1 author:2 preprocessing:1 adaptive:1 programme:1 far:1 social:1 transaction:1 sj:3 approximate:1 kullback:2 global:1 overfitting:2 investigating:1 assumed:3 unnecessary:1 spectrum:1 latent:5 why:1 nature:1 robust:1 ca:1 ignoring:1 complex:1 european:1 diag:2 main:1 whole:1 noise:4 hyperparameters:2 quadrature:15 fig:3 wiley:1 fails:1 harva:1 wish:1 exponential:1 down:2 formula:1 bad:1 bishop:2 evidence:1 intrinsic:1 ci:2 linearization:4 nat:1 karhunen:4 uri:1 easier:4 suited:1 cx:2 eij:2 likely:1 simply:1 scalar:2 springer:1 corresponds:1 acm:1 ann:1 rbf:2 change:1 except:2 principal:1 total:1 gauss:5 tendency:1 attempted:1 underestimation:3 perceptrons:1 evaluate:6 tested:3 |
1,722 | 2,565 | Instance-Specific Bayesian Model
Averaging for Classification
Shyam Visweswaran
Center for Biomedical Informatics
Intelligent Systems Program
Pittsburgh, PA 15213
shyam@cbmi.pitt.edu
Gregory F. Cooper
Center for Biomedical Informatics
Intelligent Systems Program
Pittsburgh, PA 15213
gfc@cbmi.pitt.edu
Abstract
Classification algorithms typically induce population-wide models
that are trained to perform well on average on expected future
instances. We introduce a Bayesian framework for learning
instance-specific models from data that are optimized to predict
well for a particular instance. Based on this framework, we present
a lazy instance-specific algorithm called ISA that performs
selective model averaging over a restricted class of Bayesian
networks. On experimental evaluation, this algorithm shows
superior performance over model selection. We intend to apply
such instance-specific algorithms to improve the performance of
patient-specific predictive models induced from medical data.
1 Introduction
Commonly used classification algorithms, such as neural networks, decision trees,
Bayesian networks and support vector machines, typically induce a single model
from a training set of instances, with the intent of applying it to all future instances.
We call such a model a population-wide model because it is intended to be applied
to an entire population of future instances. A population-wide model is optimized to
predict well on average when applied to expected future instances. In contrast, an
instance-specific model is one that is constructed specifically for a particular
instance. The structure and parameters of an instance-specific model are specialized
to the particular features of an instance, so that it is optimized to predict especially
well for that instance.
Usually, methods that induce population-wide models employ eager learning in
which the model is induced from the training data before the test instance is
encountered. In contrast, lazy learning defers most or all processing until a response
to a test instance is required. Learners that induce instance-specific models are
necessarily lazy in nature since they take advantage of the information in the test
instance. An example of a lazy instance-specific method is the lazy Bayesian rule
(LBR) learner, implemented by Zheng and Webb [1], which induces rules in a lazy
fashion from examples in the neighborhood of the test instance. A rule generated by
LBR consists of a conjunction of the attribute-value pairs present in the test instance
as the antecedent and a local simple (naïve) Bayes classifier as the consequent. The
structure of the local simple Bayes classifier consists of the attribute of interest as
the parent of all other attributes that do not appear in the antecedent, and the
parameters of the classifier are estimated from the subset of training instances that
satisfy the antecedent. A greedy step-forward search selects the optimal LBR rule
for a test instance to be classified. When evaluated on 29 UCI datasets, LBR had the
lowest average error rate when compared to several eager learning methods [1].
Typically, both eager and lazy algorithms select a single model from some model
space, ignoring the uncertainty in model selection. Bayesian model averaging is a
coherent approach to dealing with the uncertainty in model selection, and it has
been shown to improve the predictive performance of classifiers [2]. However, since
the number of models in practically useful model spaces is enormous, exact model
averaging over the entire model space is usually not feasible. In this paper, we
describe a lazy instance-specific averaging (ISA) algorithm for classification that
approximates Bayesian model averaging in an instance-sensitive manner. ISA
extends LBR by adding Bayesian model averaging to an instance-specific model
selection algorithm.
While the ISA algorithm is currently able to directly handle only discrete variables
and is computationally more intensive than comparable eager algorithms, the results
in this paper show that it performs well. In medicine, such lazy instance-specific
algorithms can be applied to patient-specific modeling for improving the accuracy
of diagnosis, prognosis and risk assessment.
The rest of this paper is structured as follows. Section 2 introduces a Bayesian
framework for instance-specific learning. Section 3 describes the implementation of
ISA. In Section 4, we evaluate ISA and compare its performance to that of LBR.
Finally, in Section 5 we discuss the results of the comparison.
2 Decision Theoretic Framework
We use the following notation. Capital letters like X, Z, denote random variables
and corresponding lower case letters, x, z, denote specific values assigned to them.
Thus, X = x denotes that variable X is assigned the value x. Bold upper case letters,
such as X, Z, represent sets of variables or random vectors and their realization is
denoted by the corresponding bold lower case letters, x, z. Hence, X = x denotes that
the variables in X have the states given by x. In addition, Z denotes the target
variable being predicted, X denotes the set of attribute variables, M denotes a model,
D denotes the training dataset, and <Xt , Zt> denotes a generic test instance that is
not in D.
We now characterize population-wide and instance-specific model selection in
decision theoretic terms. Given training data D and a separate generic test instance
<Xt, Zt>, the Bayes optimal prediction for Zt is obtained by combining the
predictions of all models weighted by their posterior probabilities, as follows:
P(Z_t | X_t, D) = \int_M P(Z_t | X_t, M)\, P(M | D)\, dM. \qquad (1)
The optimal population-wide model for predicting Zt is as follows:
\max_M \left\{ \sum_{X_t} U\!\left[ P(Z_t | X_t, D),\; P(Z_t | X_t, M) \right] P(X | D) \right\}, \qquad (2)
where the function U gives the utility of approximating the Bayes optimal estimate
P(Zt | Xt, D) with the estimate P(Zt | Xt, M) obtained from model M. The term
P(X | D) is given by:
P(X | D) = \int_M P(X | M)\, P(M | D)\, dM. \qquad (3)
The optimal instance-specific model for predicting Zt is as follows:
\max_M \left\{ U\!\left[ P(Z_t | X_t = x_t, D),\; P(Z_t | X_t = x_t, M) \right] \right\}, \qquad (4)
where xt are the values of the attributes of the test instance Xt for which we want to
predict Zt. The Bayes optimal estimate P(Zt | Xt = xt, D) in Equation 4 is derived
using Equation 1, for the special case in which Xt = xt .
The difference between the population-wide and the instance-specific models can be
noted by comparing Equations 2 and 4. Equation 2 for the population-wide model
selects the model that on average will have the greatest utility. Equation 4 for the
instance-specific model, however, selects the model that will have the greatest
expected utility for the specific instance Xt = xt . For predicting Zt in a given instance
Xt = xt, the model selected using Equation 2 can never have an expected utility
greater than the model selected using Equation 4. This observation provides support
for developing instance-specific models.
Equations 2 and 4 represent theoretical ideals for population-wide and instance-specific model selection, respectively; we are not suggesting they are practical to
compute. The current paper focuses on model averaging, rather than model
selection. Ideal Bayesian model averaging is given by Equation 1. Model averaging
has previously been applied using population-wide models. Studies have shown that
approximate Bayesian model averaging using population-wide models can improve
predictive performance over population-wide model selection [2]. The current paper
concentrates on investigating the predictive performance of approximate Bayesian
model averaging using instance-specific models.
3 Instance-Specific Algorithm
We present the implementation of the lazy instance-specific algorithm based on the
above framework. ISA searches the space of a restricted class of Bayesian networks
to select a subset of the models over which to derive a weighted (averaged)
posterior of the target variable Zt . A key characteristic of the search is the use of a
heuristic to select models that will have a significant influence on the weighted
posterior. We introduce Bayesian networks briefly and then describe ISA in detail.
3.1 Bayesian Networks
A Bayesian network is a probabilistic model that combines a graphical
representation (the Bayesian network structure) with quantitative information (the
parameters of the Bayesian network) to represent the joint probability distribution
over a set of random variables [3]. Specifically, a Bayesian network M representing
the set of variables X consists of a pair (G, θ_G). G is a directed acyclic graph that
contains a node for every variable in X and an arc between every pair of nodes if the
corresponding variables are directly probabilistically dependent. Conversely, the
absence of an arc between a pair of nodes denotes probabilistic independence
between the corresponding variables. θ_G represents the parameterization of the
model.
In a Bayesian network M, the immediate predecessors of a node X i in X are called
the parents of X i and the successors, both immediate and remote, of Xi in X are
called the descendants of X i . The immediate successors of X i are called the children
of X i . For each node Xi there is a local probability distribution (that may be discrete
or continuous) on that node given the state of its parents. The complete joint
probability distribution over X, represented by the parameterization θ_G, can be
factored into a product of local probability distributions defined on each node in the
network. This factorization is determined by the independences captured by the
structure of the Bayesian network and is formalized in the Bayesian network
Markov condition: a node (representing a variable) is independent of its non-descendants given just its parents. According to this Markov condition, the joint probability distribution on model variables X = (X_1, X_2, \ldots, X_n) can be factored as follows:
P(X_1, X_2, \ldots, X_n) = \prod_{i=1}^{n} P(X_i \mid \mathrm{parents}(X_i)), \qquad (5)
where parents(Xi ) denotes the set of nodes that are the parents of X i . If Xi has no
parents, then the set parents(Xi ) is empty and P(Xi | parents(X i)) is just P(Xi ).
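As a small illustration of Eq. (5) for discrete nodes, the following sketch (a toy data structure of our own, not a library API) multiplies the local conditional probability tables together.

```python
# Joint probability of a full assignment under the factorization of Eq. (5).
# x: dict node -> value; parents: dict node -> list of parent nodes;
# cpts: dict node -> {parent_state_tuple: {value: probability}}.
def joint_probability(x, parents, cpts):
    p = 1.0
    for node, value in x.items():
        parent_state = tuple(x[u] for u in parents[node])
        p *= cpts[node][parent_state][value]   # P(X_i | parents(X_i))
    return p
```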
3.2 ISA Models
The LBR models of Zheng and Webb [1] can be represented as members of a
restricted class of Bayesian networks (see Figure 1). We use the same class of
Bayesian networks for the ISA models, to facilitate comparison between the two
algorithms. In Figure 1, all nodes represent attributes that are discrete. Each node in
X has either an outgoing arc into target node, Z, or receives an arc from Z. That is,
each node is either a parent or a child of Z. Thus, X is partitioned into two sets: the
first containing nodes (X_1, …, X_j in Figure 1), each of which is a parent of Z and of every node in the second set, and the second containing nodes (X_{j+1}, …, X_k in Figure 1) that have as parents the node Z and every node in the first set. The nodes in the
first set are instantiated to the corresponding values in the test instance for which Zt
is to be predicted. Thus, the first set of nodes represents the antecedent of the LBR
rule and the second set of nodes represents the consequent.
[Figure 1: instantiated nodes X_1 = x_1, …, X_j = x_j with arcs into Z, and Z with arcs into the remaining nodes X_{j+1}, …, X_k.]
Figure 1: An example of a Bayesian network LBR model with target node Z and k attribute nodes, of which X_1, …, X_j are instantiated to values x_1, …, x_j in x_t. X_1, …, X_j are present in the antecedent of the LBR rule and Z, X_{j+1}, …, X_k (that form the local simple Bayes classifier) are present in the consequent. The indices need not be ordered as shown, but are presented in this example for convenience of exposition.
3.3 Model Averaging
For Bayesian networks, Equation 1 can be evaluated as follows:
P(Z_t | x_t, D) = \sum_M P(Z_t | x_t, M)\, P(M | D), \qquad (6)
with M being a Bayesian network comprised of structure G and parameters θ_G. The
probability distribution of interest is a weighted average of the posterior distribution
over all possible Bayesian networks where the weight is the probability of the
Bayesian network given the data. Since exhaustive enumeration of all possible
models is not feasible, even for this class of simple Bayesian networks, we
approximate exact model averaging with selective model averaging. Let R be the set
of models selected by the search procedure from all possible models in the model
space, as described in the next section. Then, with selective model averaging,
P(Zt | xt, D) is estimated as:
P(Z_t | x_t, D) \approx \frac{ \sum_{M \in R} P(Z_t | x_t, M)\, P(M | D) }{ \sum_{M \in R} P(M | D) }. \qquad (7)
Assuming uniform prior belief over all possible models, the model posterior
P(M | D) in Equation 7 can be replaced by the marginal likelihood P(D | M), to
obtain the following equation:
P(Z_t | x_t, D) \approx \frac{ \sum_{M \in R} P(Z_t | x_t, M)\, P(D | M) }{ \sum_{M \in R} P(D | M) }. \qquad (8)
The (unconditional) marginal likelihood P(D | M) in Equation 8 is a measure of the
goodness of fit of the model to the data and is also known as the model score. While
this score is suitable for assessing the model's fit to the joint probability
distribution, it is not necessarily appropriate for assessing the goodness of fit to a
conditional probability distribution which is the focus in prediction and
classification tasks, as is the case here. A more suitable score in this situation is a
conditional model score that is computed from training data D of d instances as:
\mathrm{score}(D, M) = \prod_{p=1}^{d} P\!\left( z_p \mid x_1, \ldots, x_p, z_1, \ldots, z_{p-1}, M \right). \qquad (9)
This score is computed in a predictive and sequential fashion: for the pth training
instance the probability of predicting the observed value zp for the target variable is
computed based on the values of all the variables in the preceding p-1 training
instances and the values xp of the attributes in the pth instance. One limitation of this
score is that its value depends on the ordering of the data. Despite this limitation, it
has been shown to be an effective scoring criterion for classification models [4].
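The computation can be sketched as follows (our illustration; `predict` is an assumed callable returning P(z_p | ...) with parameters estimated from the preceding instances, as in Eq. (10)).

```python
# Conditional (prequential) model score of Eq. (9), accumulated in log
# space for numerical stability.
import math

def conditional_score(data, predict):
    """data: list of (x_p, z_p) pairs in a fixed order."""
    log_score = 0.0
    for p, (x_p, z_p) in enumerate(data):
        log_score += math.log(predict(data[:p], x_p, z_p))
    return math.exp(log_score)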
The parameters of the Bayesian network M, used in the above computations, are
defined as follows:
P(X_i = k \mid \mathrm{parents}(X_i) = j) \equiv \theta_{ijk} = \frac{N_{ijk} + \alpha_{ijk}}{N_{ij} + \alpha_{ij}}, \qquad (10)
where (i) N_{ijk} is the number of instances in the training dataset D where variable X_i has value k and the parents of X_i are in state j, (ii) N_{ij} = \sum_k N_{ijk}, (iii) \alpha_{ijk} is a parameter prior that can be interpreted as the belief equivalent of having previously observed \alpha_{ijk} instances in which variable X_i has value k and the parents of X_i are in state j, and (iv) \alpha_{ij} = \sum_k \alpha_{ijk}.
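A one-function sketch of this estimate (our code), with counts tabulated per parent state; the experiments below use α_ijk = 1 throughout.

```python
# Smoothed conditional probability table of Eq. (10) for one node X_i.
# counts[j, k] = N_ijk; alpha is the (scalar or array) prior alpha_ijk.
import numpy as np

def estimate_theta(counts, alpha=1.0):
    counts = np.asarray(counts, dtype=float) + alpha
    return counts / counts.sum(axis=1, keepdims=True)   # theta_ijk
```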
3.4 Model Search
We use a two-phase best-first heuristic search to sample the model space. The first
phase ignores the evidence xt in the test instance while searching for models that
have high scores as given by Equation 9. This is followed by the second phase that
searches for models having the greatest impact on the prediction of Zt for the test
instance, which we formalize below.
The first phase searches for models that predict Z in the training data very well;
these are the models that have high conditional model scores. The initial model is
the simple Bayes network that includes all the attributes in X as children of Z. A
succeeding model is derived from a current model by reversing the arc of a child
node in the current model, adding new outgoing arcs from it to Z and the remaining
children, and instantiating this node to the value in the test instance. This process is
performed for each child in the current model. An incoming arc of a child node is
considered for reversal only if the node's value is not missing in the test instance.
The newly derived models are added to a priority queue, Q. During each iteration of
the search, the model with the highest score (given by Equation 9) is removed from
Q and placed in a set R, following which new models are generated as described just
above, scored and added to Q. The first phase terminates after a user-specified
number of models have accumulated in R.
The second phase searches for models that change the current model-averaged
estimate of P(Zt | xt , D) the most. The idea here is to find viable competing models
for making this posterior probability prediction. When no competitive models can
be found, the prediction becomes stable. During each iteration of the search, the
highest ranked model M* is removed from Q and added to R. The ranking is based
on how much the model changes the current estimate of P(Zt | xt , D). More change is
better. In particular, M* is the model in Q that maximizes the following function:
f(R, M^{*}) = g(R) - g(R \cup \{M^{*}\}), \qquad (11)
where for a set of models S, the function g(S) computes the approximate model
averaged prediction for Zt, as follows:
g(S) = \frac{ \sum_{M \in S} P(Z_t \mid x_t, M)\, \mathrm{score}(D, M) }{ \sum_{M \in S} \mathrm{score}(D, M) }. \qquad (12)
The second phase terminates when no new model can be found that has a value (as
given by Equation 11) that is greater than a user-specified minimum threshold T.
The final distribution of Zt is then computed from the models in R using Equation 8.
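The second phase can be sketched as follows. This is our reading, not the authors' code: each candidate model is represented by its prediction P(Z_t | x_t, M) and its score, and the ranking uses the absolute change in the averaged estimate, matching the stated intent that more change is better.

```python
# Phase-2 selection: g(S) is the score-weighted average of Eq. (12); the
# next model from the queue Q maximizes the change f(R, M*) of Eq. (11).
def g(models):
    """models: list of (prediction, score) pairs."""
    num = sum(pred * sc for pred, sc in models)
    den = sum(sc for _, sc in models)
    return num / den

def next_model(R, Q):
    return max(Q, key=lambda M: abs(g(R) - g(R + [M])))
```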
4 Evaluation
We evaluated ISA on the 29 UCI datasets that Zheng and Webb used for the
evaluation of LBR. On the same datasets, we also evaluated a simple Bayes
classifier (SB) and LBR. For SB and LBR, we used the Weka implementations
(Weka v3.3.6, http://www.cs.waikato.ac.nz/ml/weka/) with default settings [5]. We
implemented the ISA algorithm as a standalone application in Java. The following
settings were used for ISA: a maximum of 100 phase-1 models, a threshold T of
0.001 in phase-2, and an upper limit of 500 models in R. For the parameter priors in
Equation 10, all ?ijk were set to 1.
All error rates were obtained by averaging the results from two stratified 10-fold
cross-validation (20 trials total) similar to that used by Zheng and Webb. Since,
both LBR and ISA can handle only discrete attributes, all numeric attributes were
discretized in a pre-processing step using the entropy based discretization method
described in [6]. For each pair of training and test folds, the discretization intervals
were first estimated from the training fold and then applied to both folds. The error
rates of two algorithms on a dataset were compared with a paired t-test carried out at
the 5% significance level on the error rate statistics obtained from the 20 trials.
The results are shown in Table 1. Compared to SB, ISA has significantly fewer
errors on 9 datasets and significantly more errors on one dataset. Compared to LBR,
ISA has significantly fewer errors on 7 datasets and significantly more errors on two
datasets. On two datasets, chess and tic-tac-toe, ISA shows considerable
improvement in performance over both SB and LBR. With respect to computation
Table 1: Percent error rates of simple Bayes (SB), Lazy Bayesian Rule (LBR)
and Instance-Specific Averaging (ISA). A - indicates that the ISA error rate is
statistically significantly lower than the marked SB or LBR error rate. A +
indicates that the ISA error rate is statistically significantly higher.
Dataset            Size   No. of   Num.      Nom.       SB       LBR      ISA
                          classes  Attrib.   Attrib.
Annealing           898      6        6        32        3.5      2.7      1.9
Audiology           226     24        0        69       29.6     29.4     30.9
Breast (W)          699      2        9         0        2.9 +    2.8 +    3.7
Chess (KR-KP)      3169      2        0        36       12.1      3.0      1.1
Credit (A)          690      2        6         9       13.8     14.0     13.9
Echocardiogram      131      2        6         1       33.2     34.0     35.9
Glass               214      6        9         0       26.9     27.8     29.0
Heart (C)           303      2       13         0       16.2     16.2     17.5
Hepatitis           155      2        6        13       14.2 -   14.2 -   11.3
Horse colic         368      2        7        15       20.2     16.0     17.8
House votes 84      435      2        0        16       10.1      7.0      5.1
Hypothyroid        3163      2        7        18        0.9      0.9      1.4
Iris                150      3        4         0        6.0      6.0      5.3
Labor                57      2        8         8        8.8      6.1      7.0
LED 24              200     10        0        24       40.5     40.5     40.3
Liver disorders     345      2        6         0       36.8     36.8     36.8
Lung cancer          32      3        0        56       56.3     56.3     56.3
Lymphography        148      4        0        18       15.5 -   15.5 -   13.2
Pima                768      2        8         0       21.8     22.0     22.3
Postoperative        90      3        1         7       33.3     33.3     33.3
Primary tumor       339     22        0        17       54.4     53.5     54.2
Promoters           106      2        0        57        7.5      7.5      7.5
Solar flare        1389      2        0        10       20.2     18.3 +   19.4
Sonar               208      2       60         0       15.4     15.6     15.9
Soybean             683     19        0        35        7.1      7.2      7.9
Splice junction    3177      3        0        60        4.7      4.3      4.4
Tic-Tac-Toe         958      2        0         9       30.3 -   13.7 -   10.3
Wine                178      3       13         0        1.1      1.1      1.1
Zoo                 101      7        0        16        8.4      8.4 -    6.4
times, ISA took 6 times longer to run than LBR on average for a single test instance
on a desktop computer with a 2 GHz Pentium 4 processor and 3 GB of RAM.
5 Conclusions and Future Research
We have introduced a Bayesian framework for instance-specific model averaging
and presented ISA as one example of a classification algorithm based on this
framework. An instance-specific algorithm like LBR that does model selection has
been shown by Zheng and Webb to perform classification better than several eager
algorithms [1]. Our results show that ISA, which extends LBR by adding Bayesian
model averaging, improves overall on LBR, which provides support that we can
obtain additional prediction improvement by performing instance-specific model
averaging rather than just instance-specific model selection.
In future work, we plan to explore further the behavior of ISA with respect to the
number of models being averaged and the effect of the number of models selected in
each of the two phases of the search. We will also investigate methods to improve
the computational efficiency of ISA. In addition, we plan to examine other
heuristics for model search as well as more general model spaces such as
unrestricted Bayesian networks.
The instance-specific framework is not restricted to the Bayesian network models
that we have used in this investigation. In the future, we plan to explore other
models using this framework. Our ultimate interest is to apply these instance-specific algorithms to improve patient-specific predictions (for diagnosis, therapy
selection, and prognosis) and thereby to improve patient care.
Acknowledgments
This work was supported by the grant T15-LM/DE07059 from the National Library
of Medicine (NLM) to the University of Pittsburgh's Biomedical Informatics
Training Program. We would like to thank the three anonymous reviewers for their
helpful comments.
References
[1] Zheng, Z. and Webb, G.I. (2000). Lazy Learning of Bayesian Rules. Machine Learning,
41(1):53-84.
[2] Hoeting, J.A., Madigan, D., Raftery, A.E. and Volinsky, C.T. (1999). Bayesian Model
Averaging: A Tutorial. Statistical Science, 14:382-417.
[3] Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, San
Mateo, CA.
[4] Kontkanen, P., Myllymaki, P., Silander, T., and Tirri, H. (1999). On Supervised Selection
of Bayesian Networks. In Proceedings of the 15th International Conference on Uncertainty
in Artificial Intelligence, pages 334-342, Stockholm, Sweden. Morgan Kaufmann.
[5] Witten, I.H. and Frank, E. (2000). Data Mining: Practical Machine Learning Tools with
Java Implementations. Morgan Kaufmann, San Francisco, CA.
[6] Fayyad, U.M., and Irani, K.B. (1993). Multi-Interval Discretization of ContinuousValued Attributes for Classification Learning. In Proceedings of the Thirteenth International
Joint Conference on Artificial Intelligence, pages 1022-1027, San Mateo, CA. Morgan
Kaufmann.
| 2565 |@word trial:2 briefly:1 gfc:1 thereby:1 initial:1 contains:1 score:12 current:7 comparing:1 od:3 discretization:3 si:2 succeeding:1 standalone:1 greedy:1 selected:4 fewer:2 intelligence:2 parameterization:2 flare:1 desktop:1 xk:1 num:1 provides:2 node:27 nom:1 rc:1 constructed:1 predecessor:1 tirri:1 viable:1 descendant:1 consists:3 combine:1 manner:1 introduce:2 expected:4 behavior:1 examine:1 multi:1 discretized:1 continuousvalued:1 enumeration:1 becomes:1 notation:1 maximizes:1 lowest:1 tic:2 interpreted:1 ret:1 ag:1 quantitative:1 every:4 ro:1 classifier:6 medical:1 grant:1 appear:1 before:1 local:5 limit:1 despite:1 nz:1 mateo:2 instancespecific:2 conversely:1 factorization:1 stratified:1 statistically:2 averaged:4 directed:1 practical:2 procedure:1 nijk:1 java:2 significantly:6 pre:1 induce:4 madigan:1 convenience:1 selection:12 risk:1 applying:1 influence:1 www:1 equivalent:1 reviewer:1 center:2 missing:1 formalized:1 disorder:1 colic:1 factored:2 rule:8 hypothyroid:1 population:13 handle:2 searching:1 target:5 user:2 exact:2 pa:2 observed:2 remote:1 ordering:1 highest:2 removed:2 trained:1 algo:1 predictive:5 efficiency:1 learner:2 joint:5 represented:2 hoeting:1 instantiated:2 describe:2 effective:1 kp:1 artificial:2 horse:1 neighborhood:1 exhaustive:1 heuristic:3 defers:1 statistic:1 postoperative:1 final:1 advantage:1 took:1 product:1 silander:1 uci:2 combining:1 realization:1 parent:16 empty:1 assessing:2 zp:1 derive:1 ac:1 liver:1 ij:4 implemented:2 predicted:2 c:1 concentrate:1 attribute:12 nlm:1 successor:2 investigation:1 attrib:2 anonymous:1 stockholm:1 practically:1 therapy:1 considered:1 credit:1 predict:5 pitt:2 lm:1 wine:1 currently:1 sensitive:1 echocardiogram:1 myllymaki:1 tool:1 weighted:4 rather:2 probabilistically:1 conjunction:1 derived:3 focus:2 improvement:2 likelihood:2 indicates:2 hepatitis:1 contrast:2 pentium:1 glass:1 helpful:1 dependent:1 accumulated:1 sb:7 typically:3 entire:2 selective:3 selects:3 overall:1 classification:9 denoted:1 plan:3 special:1 marginal:2 never:1 having:2 represents:3 future:6 intelligent:3 employ:1 ve:2 national:1 replaced:1 intended:1 antecedent:5 phase:10 deci:1 interest:3 investigate:1 mining:1 zheng:6 evaluation:2 introduces:1 unconditional:1 fu:1 sweden:1 tree:1 iv:1 waikato:1 re:1 theoretical:1 instance:59 visweswaran:1 modeling:1 goodness:2 subset:2 uniform:1 comprised:1 eager:5 characterize:1 gregory:1 st:1 international:2 probabilistic:3 informatics:3 na:1 containing:2 soybean:1 priority:1 suggesting:1 bold:2 includes:1 satisfy:1 ranking:1 depends:1 performed:1 competitive:1 bayes:9 lung:1 solar:1 accuracy:1 kaufmann:4 characteristic:1 bayesian:38 lu:2 zoo:1 processor:1 classified:1 volinsky:1 dm:2 toe:2 newly:1 dataset:5 improves:1 formalize:1 ea:1 higher:1 eci:1 supervised:1 response:1 evaluated:4 just:4 biomedical:3 until:1 receives:1 assessment:1 facilitate:1 effect:1 hence:1 assigned:2 irani:1 during:2 noted:1 iris:1 criterion:1 ay:1 theoretic:1 complete:1 performs:2 percent:2 reasoning:1 fi:1 superior:1 specialized:1 witten:1 approximates:1 significant:1 tac:2 had:1 stable:1 longer:1 posterior:6 scoring:1 captured:1 minimum:1 greater:2 additional:1 care:1 unrestricted:1 eo:1 preceding:1 morgan:4 v3:1 ii:1 isa:25 kontkanen:1 cross:1 paired:1 impact:1 prediction:9 instantiating:1 breast:1 patient:4 iteration:2 cbmi:2 represent:4 addition:2 want:1 thirteenth:1 interval:2 shyam:2 annealing:1 rest:1 comment:1 induced:2 member:1 call:1 ideal:2 iii:1 independence:2 fit:3 competing:1 prognosis:2 idea:1 weka:3 
intensive:1 utility:4 gb:1 ultimate:1 wo:1 queue:1 useful:1 se:1 induces:1 http:1 tutorial:1 estimated:3 diagnosis:2 discrete:4 key:1 threshold:2 enormous:1 capital:1 ce:1 ram:1 graph:1 run:1 letter:4 uncertainty:3 audiology:1 extends:2 decision:2 comparable:1 followed:1 fold:4 encountered:1 ri:1 fayyad:1 performing:1 structured:1 developing:1 according:1 describes:1 terminates:2 partitioned:1 making:1 chess:2 restricted:4 heart:1 computationally:1 equation:18 previously:2 discus:1 reversal:1 junction:1 apply:2 generic:2 appropriate:1 denotes:9 remaining:1 graphical:1 medicine:2 especially:1 approximating:1 intend:1 added:3 primary:1 ow:1 separate:1 thank:1 me:1 assuming:1 index:1 webb:6 pima:1 frank:1 intent:1 implementation:4 zt:18 perform:2 upper:2 observation:1 datasets:7 markov:2 arc:7 immediate:3 situation:1 rame:1 introduced:1 pair:5 required:1 specified:2 optimized:3 coherent:1 pearl:1 able:1 usually:2 below:1 ev:1 program:3 max:2 belief:2 greatest:3 suitable:2 ranked:1 predicting:4 representing:2 improve:6 library:1 carried:1 raftery:1 prior:3 limitation:2 acyclic:1 validation:1 xp:1 cancer:1 placed:1 supported:1 wide:12 ghz:1 default:1 numeric:1 computes:1 ignores:1 forward:1 commonly:1 san:3 pth:2 lymphography:1 approximate:4 dealing:1 ml:1 investigating:1 incoming:1 pittsburgh:3 francisco:1 xi:12 search:12 continuous:1 sonar:1 table:2 nature:1 ca:3 ignoring:1 improving:1 necessarily:2 significance:1 promoter:1 scored:1 lbr:23 child:7 x1:5 fashion:2 cooper:1 house:1 splice:1 rk:1 specific:30 xt:20 consequent:3 evidence:1 adding:3 sequential:1 kr:1 entropy:1 led:1 explore:2 lazy:12 labor:1 ordered:1 conditional:3 marked:1 exposition:1 absence:1 feasible:2 change:3 considerable:1 specifically:2 determined:1 reversing:1 averaging:21 tumor:1 called:4 total:1 experimental:1 ijk:8 vote:1 t15:1 select:3 support:3 evaluate:1 outgoing:2 |
1,723 | 2,566 | Neighbourhood Components Analysis
Jacob Goldberger, Sam Roweis, Geoff Hinton, Ruslan Salakhutdinov
Department of Computer Science, University of Toronto
{jacob,roweis,hinton,rsalakhu}@cs.toronto.edu
Abstract
In this paper we propose a novel method for learning a Mahalanobis
distance measure to be used in the KNN classification algorithm. The
algorithm directly maximizes a stochastic variant of the leave-one-out
KNN score on the training set. It can also learn a low-dimensional linear embedding of labeled data that can be used for data visualization
and fast classification. Unlike other methods, our classification model
is non-parametric, making no assumptions about the shape of the class
distributions or the boundaries between them. The performance of the
method is demonstrated on several data sets, both for metric learning and
linear dimensionality reduction.
1 Introduction
Nearest neighbor (KNN) is an extremely simple yet surprisingly effective method for classification. Its appeal stems from the fact that its decision surfaces are nonlinear, there
is only a single integer parameter (which is easily tuned with cross-validation), and the
expected quality of predictions improves automatically as the amount of training data increases. These advantages, shared by many non-parametric methods, reflect the fact that
although the final classification machine has quite high capacity (since it accesses the entire
reservoir of training data at test time), the trivial learning procedure rarely causes overfitting
itself.
However, KNN suffers from two very serious drawbacks. The first is computational, since
it must store and search through the entire training set in order to classify a single test point.
(Storage can potentially be reduced by "editing" or "thinning" the training data; and in low-dimensional input spaces, the search problem can be mitigated by employing data structures
such as KD-trees or ball-trees [4].) The second is a modeling issue: how should the distance
metric used to define the "nearest" neighbours of a test point be defined? In this paper, we
attack both of these difficulties by learning a quadratic distance metric which optimizes the
expected leave-one-out classification error on the training data when used with a stochastic
neighbour selection rule. Furthermore, we can force the learned distance metric to be low
rank, thus substantially reducing storage and search costs at test time.
2 Stochastic Nearest Neighbours for Distance Metric Learning
We begin with a labeled data set consisting of n real-valued input vectors x_1, ..., x_n in R^D and corresponding class labels c_1, ..., c_n. We want to find a distance metric that maximizes
the performance of nearest neighbour classification. Ideally, we would like to optimize
performance on future test data, but since we do not know the true data distribution we
instead attempt to optimize leave-one-out (LOO) performance on the training data.
In what follows, we restrict ourselves to learning Mahalanobis (quadratic) distance metrics,
which can always be represented by symmetric positive semi-definite matrices. We estimate such metrics through their inverse square roots, by learning a linear transformation
of the input space such that in the transformed space, KNN performs well. If we denote
the transformation by a matrix A we are effectively learning a metric Q = A^T A such that
d(x, y) = (x - y)^T Q (x - y) = (Ax - Ay)^T (Ax - Ay).
The actual leave-one-out classification error of KNN is quite a discontinuous function of the
transformation A, since an infinitesimal change in A may change the neighbour graph and
thus affect LOO classification performance by a finite amount. Instead, we adopt a more
well behaved measure of nearest neighbour performance, by introducing a differentiable
cost function based on stochastic ("soft") neighbour assignments in the transformed space.
In particular, each point i selects another point j as its neighbour with some probability pij ,
and inherits its class label from the point it selects. We define the pij using a softmax over
Euclidean distances in the transformed space:
$$p_{ij} = \frac{\exp(-\|Ax_i - Ax_j\|^2)}{\sum_{k \neq i} \exp(-\|Ax_i - Ax_k\|^2)}, \qquad p_{ii} = 0 \qquad (1)$$
Under this stochastic selection rule, we can compute the probability pi that point i will be
correctly classified (denote the set of points in the same class as i by C_i = {j | c_i = c_j}):
$$p_i = \sum_{j \in C_i} p_{ij} \qquad (2)$$
The objective we maximize is the expected number of points correctly classified under this
scheme:
$$f(A) = \sum_i \sum_{j \in C_i} p_{ij} = \sum_i p_i \qquad (3)$$
Differentiating f with respect to the transformation matrix A yields a gradient rule which
we can use for learning (denote x_ij = x_i - x_j):
$$\frac{\partial f}{\partial A} = -2A \sum_i \sum_{j \in C_i} p_{ij} \Big( x_{ij} x_{ij}^\top - \sum_k p_{ik} x_{ik} x_{ik}^\top \Big) \qquad (4)$$
Reordering the terms we obtain a more efficiently computed expression:
$$\frac{\partial f}{\partial A} = 2A \sum_i \Big( p_i \sum_k p_{ik} x_{ik} x_{ik}^\top - \sum_{j \in C_i} p_{ij} x_{ij} x_{ij}^\top \Big) \qquad (5)$$
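As a concrete illustration of equations (1)-(5), the following minimal numpy sketch computes the objective and its gradient. The function name and the direct O(n^2) implementation are our choices, not part of the paper; a practical version would use the truncations discussed later in this section.

    import numpy as np

    def nca_objective_grad(A, X, y):
        """f(A) of Eq. (3) and its gradient, Eq. (5).
        A: (d, D) transformation; X: (n, D) inputs; y: (n,) integer labels."""
        AX = X @ A.T                                        # transformed points, (n, d)
        sq = ((AX[:, None, :] - AX[None, :, :]) ** 2).sum(-1)  # ||Ax_i - Ax_j||^2
        np.fill_diagonal(sq, np.inf)                        # enforces p_ii = 0
        P = np.exp(-(sq - sq.min(axis=1, keepdims=True)))   # row shift for stability
        P /= P.sum(axis=1, keepdims=True)                   # softmax neighbours, Eq. (1)
        same = y[:, None] == y[None, :]                     # mask of j in C_i
        p = (P * same).sum(axis=1)                          # p_i, Eq. (2)

        grad = np.zeros((X.shape[1], X.shape[1]))           # D x D accumulator
        for i in range(X.shape[0]):
            Xij = X[i] - X                                  # all x_ij, (n, D)
            W = P[i][:, None] * Xij                         # rows are p_ik x_ik
            grad += p[i] * (W.T @ Xij)                      # p_i sum_k p_ik x_ik x_ik^T
            grad -= W[same[i]].T @ Xij[same[i]]             # sum_{j in C_i} p_ij x_ij x_ij^T
        return p.sum(), 2 * A @ grad                        # f of Eq. (3), Eq. (5)

One would then ascend this gradient with delta-bar-delta or conjugate gradients, as described next.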
Our algorithm, which we dub Neighbourhood Components Analysis (NCA), is extremely simple: maximize the above objective (3) using a gradient based optimizer such as delta-bar-delta or conjugate gradients. Of course, since the cost function above is not convex, some care must be taken to avoid local maxima during training. However, unlike many other objective functions (where good optima are not necessarily deep but rather broad) it has been our experience that the larger we can drive f during training the better our test performance will be. In other words, we have never observed an "overtraining" effect.
Notice that by learning the overall scale of A as well as the relative directions of its rows
we are also effectively learning a real-valued estimate of the optimal number of neighbours
(K). This estimate appears as the effective perplexity of the distributions pij . If the learning
procedure wants to reduce the effective perplexity (consult fewer neighbours) it can scale
up A uniformly; similarly by scaling down all the entries in A it can increase the perplexity
of and effectively average over more neighbours during the stochastic selection.
Maximizing the objective function f (A) is equivalent to minimizing the L1 norm between
the true class distribution (having probability one on the true class) and the stochastic class
distribution induced by pij via A. A natural alternative distance is the KL-divergence which
induces the following objective function:
$$g(A) = \sum_i \log\Big(\sum_{j \in C_i} p_{ij}\Big) = \sum_i \log(p_i) \qquad (6)$$
Maximizing this objective would correspond to maximizing the probability of obtaining a
perfect (error free) classification of the entire training set. The gradient of g(A) is even
simpler than that of f (A):
$$\frac{\partial g}{\partial A} = 2A \sum_i \Bigg( \sum_k p_{ik} x_{ik} x_{ik}^\top - \frac{\sum_{j \in C_i} p_{ij} x_{ij} x_{ij}^\top}{\sum_{j \in C_i} p_{ij}} \Bigg) \qquad (7)$$
We have experimented with optimizing this cost function as well, and found both the transformations learned and the performance results on training and testing data to be very
similar to those obtained with the original cost function.
To speed up the gradient computation, the sums that appear in equations (5) and (7) over
the data points and over the neighbours of each point, can be truncated (one because we
can do stochastic gradient rather than exact gradient and the other because pij drops off
quickly).
3 Low Rank Distance Metrics and Nonsquare Projection
Often it is useful to reduce the dimensionality of input data, either for computational savings or for regularization of a subsequent learning algorithm. Linear dimensionality reduction techniques (which apply a linear operator to the original data in order to arrive
at the reduced representation) are popular because they are both fast and themselves relatively immune to overfitting. Because they implement only affine maps, linear projections
also preserve some essential topology of the original data. Many approaches exist for linear dimensionality reduction, ranging from purely unsupervised approaches (such as factor
analysis, principal components analysis and independent components analysis) to methods
which make use of class labels in addition to input features such as linear discriminant
analysis (LDA) [3], possibly combined with relevant components analysis (RCA) [1].
By restricting A to be a nonsquare matrix of size d × D, NCA can also do linear dimensionality reduction. In this case, the learned metric will be low rank, and the transformed inputs will lie in R^d. (Since the transformation is linear, without loss of generality we only consider the case d ≤ D.) By making such a restriction, we can potentially reap many further benefits beyond the already convenient method for learning a KNN distance metric. In particular, by choosing d ≪ D we can vastly reduce the storage and search-time requirements
of KNN. Selecting d = 2 or d = 3 we can also compute useful low dimensional visualizations on labeled datasets, using only a linear projection. The algorithm is exactly the
same: optimize the cost function (3) using gradient descent on a nonsquare A. Our method
requires no matrix inversions and assumes no parametric model (Gaussian or otherwise)
for the class distributions or the boundaries between them. For now, the dimensionality of
the reduced representation (the number of rows in A) must be set by the user.
By using a highly rectangular A so that d ≪ D, we can significantly reduce the computational load of KNN at the expense of restricting the allowable metrics to be those of
rank at most d. To achieve this, we apply the NCA learning algorithm to find the optimal
transformation A, and then we store only the projections of the training points yn = Axn
(as well as their labels). At test time, we classify a new point xtest by first computing its
projection ytest = Axtest and then doing KNN classification on ytest using the yn and
a simple Euclidean metric. If d is relatively small (say less than 10), we can preprocess
the yn by building a KD-tree or a ball-tree to further increase the speed of search at test
time. The storage requirements of this method are O(dN) + dD, compared with O(DN)
for KNN in the original input space.
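A minimal sketch of this test-time procedure, assuming a d × D matrix A has already been learned; the helper name and the use of scikit-learn's ball tree are our choices, not prescribed by the paper.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def low_rank_knn(A, X_train, y_train, X_test, k=1):
        """Store only the d-dimensional codes y_n = A x_n plus their labels;
        project each test point once and run Euclidean KNN in R^d."""
        codes = X_train @ A.T                    # (N, d), computed and stored once
        knn = KNeighborsClassifier(n_neighbors=k, algorithm='ball_tree')
        knn.fit(codes, y_train)
        return knn.predict(X_test @ A.T)         # project tests, classify in R^d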
4 Experiments in Metric Learning and Dimensionality Reduction
We have evaluated the NCA algorithm against standard distance metrics for KNN and other
methods for linear dimensionality reduction. In our experiments, we have used 6 data sets
(5 from the UC Irvine repository). We compared the NCA transformation obtained from
optimizing f (for square A) on the training set with the default Euclidean distance A = I,
the "whitening" transformation A = Σ^{-1/2} (where Σ is the sample data covariance matrix), and the RCA [1] transformation A = Σ_w^{-1/2} (where Σ_w is the average of the within-class covariance matrices). We also investigated the behaviour of NCA when A is restricted to be diagonal, allowing only axis-aligned Mahalanobis measures.
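For reference, the two fixed baselines can be sketched as follows; the function names are ours, and since the text does not say whether the within-class average is weighted by class size, this sketch uses an unweighted average.

    import numpy as np

    def whitening_transform(X):
        """A = Sigma^(-1/2): inverse square root of the sample covariance."""
        vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
        return vecs @ np.diag(vals ** -0.5) @ vecs.T

    def rca_transform(X, y):
        """A = Sigma_w^(-1/2): inverse square root of the average within-class
        covariance (unweighted average over classes, an assumption here)."""
        classes = np.unique(y)
        cov_w = sum(np.cov(X[y == c], rowvar=False) for c in classes) / len(classes)
        vals, vecs = np.linalg.eigh(cov_w)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T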
Figure 1 shows that the training and (more importantly) testing performance of NCA is
consistently the same as or better than that of other Mahalanobis distance measures for
KNN, despite the relative simplicity of the NCA objective function and the fact that the
distance metric being learned is nothing more than a positive definite matrix A^T A.
We have also investigated the use of linear dimensionality reduction using NCA (with nonsquare A) for visualization as well as reduced-complexity classification on several datasets.
In figure 2 we show 4 examples of 2-D visualization. First, we generated a synthetic three-dimensional dataset (shown in top row of figure 2) which consists of 5 classes (shown by
different colors). In two dimensions, the classes are distributed in concentric circles, while
the third dimension is just Gaussian noise, uncorrelated with the other dimensions or the
class label. If the noise variance is large enough, the projection found by PCA is forced
to include the noise (as shown on the top left of figure 2). (A full rank Euclidean metric
would also be misled by this dimension.) The classes are not convex and cannot be linearly separated, hence the results obtained from LDA will be inappropriate (as shown in
figure 2). In contrast, NCA adaptively finds the best projection without assuming any parametric structure in the low dimensional representation. We have also applied NCA to the
UCI "wine" dataset, which consists of 178 points labeled into 3 classes and to a database
of gray-scale images of faces consisting of 18 classes (each a separate individual) and 560
dimensions (image size is 20 × 28). The face dataset consists of 1800 images (100 for each
person). Finally, we applied our algorithm to a subset of the USPS dataset of handwritten
digit images, consisting of the first five digit classes (?one? through ?five?). The grayscale
images were downsampled to 8 ? 8 pixel resolution resulting in 64 dimensions.
As can be seen in figure 2 when a two-dimensional projection is used, the classes are consistently much better separated by the NCA transformation than by either PCA (which is
unsupervised) or LDA (which has access to the class labels). Of course, the NCA transformation is still only a linear projection, just optimized with a cost function which explicitly
encourages local separation. To further quantify the projection results we can apply a
nearest-neighbor classification in the projected space. Using the same projection learned
at training time, we project the training set and all future test points and perform KNN in
the low-dimensional space using the Euclidean measure. The results under the PCA, LDA,
LDA followed by RCA and NCA transformations (using K=1) appear in figure 1. The
NCA projection consistently gives superior performance in this highly constrained low-
[Figure 1 plots: four panels ("distance metric learning - training", "distance metric learning - testing", "rank 2 transformation - training", "rank 2 transformation - testing"); vertical axes show classification accuracy from about 0.3 to 1.0, horizontal axes the datasets bal, ion, iris, wine, hous, digit; legend of the top panels: NCA, diag-NCA, RCA, whitened, Euclidean; legend of the bottom panels: NCA, LDA+RCA, LDA, PCA.]
Figure 1: KNN classification accuracy (left train, right test) on UCI datasets balance, ionosphere, iris, wine and housing and on the USPS handwritten digits. Results are averages
over 40 realizations of splitting each dataset into training (70%) and testing (30%) subsets
(for USPS 200 images for each of the 10 digit classes were used for training and 500 for
testing). Top panels show distance metric learning (square A) and bottom panels show
linear dimensionality reduction down to d = 2.
rank KNN setting. In summary, we have found that when labeled data is available, NCA
performs better both in terms of classification performance in the projected representation
and in terms of visualization of class separation as compared to the standard methods of
PCA and LDA.
5 Extensions to Continuous Labels and Semi-Supervised Learning
Although we have focused here on discrete classes, linear transformations and fully supervised learning, many extensions of this basic idea are possible. Clearly, a nonlinear
transformation function A(?) could be learned using any architecture (such as a multilayer
perceptron) trainable by gradient methods. Furthermore, it is possible to extend the classification framework presented above to the case of a real valued (continuous) supervision
signal by defining the set of ?correct matches? Ci for point i to be those points j having
similar (continuous) targets. This naturally leads to the idea of ?soft matches?, in which
the objective function becomes a sum over all pairs, each weighted by their agreement according to the targets. Learning under such an objective can still proceed even in settings
where the targets are not explicitly provided as long as information identifying close pairs
[Figure 2 panels: columns labeled PCA, LDA, NCA.]
Figure 2: Dataset visualization results of PCA, LDA and NCA applied to (from top) the "concentric rings", "wine", "faces" and "digits" datasets. The data are reduced from their original dimensionalities (D=3, D=13, D=560, D=256 respectively) to the d=2 dimensions shown.
Figure 3: The two dimensional outputs of the neural network on a set of test cases. On the left, each
point is shown using a line segment that has the same orientation as the input face. On the right, the
same points are shown again with the size of the circle representing the size of the face.
is available. Such semi-supervised tasks often arise in domains with strong spatial or temporal continuity constraints on the supervision, e.g. in a video of a person's face we may assume that pose and expression vary slowly in time even if no individual frames are ever labeled explicitly with numerical pose or expression values.
To illustrate this, we generate pairs of faces in the following way: First we choose two faces
at random from the FERET-B dataset (5000 isolated faces that have a standard orientation
and scale). The first face is rotated by an angle uniformly distributed between ±45° and scaled to have a height uniformly distributed between 25 and 35 pixels. The second face (which is of a different person) is given the same rotation and scaling but with Gaussian noise of ±1.22° and ±1.5 pixels. The pair is given a weight, w_ab, which is the probability density of the added noise divided by its maximum possible value. We then trained a neural network with one hidden layer of 100 logistic units to map from the 35 × 35 pixel intensities
of a face to a point, y, in a 2-D output space. Backpropagation was used to minimize the
cost function in Eq. 8 which encourages the faces in a pair to be placed close together:
$$\mathrm{Cost} = -\sum_{\mathrm{pair}(a,b)} w_{ab} \,\log\!\Bigg(\frac{\exp(-\|y_a - y_b\|^2)}{\sum_{c,d} \exp(-\|y_c - y_d\|^2)}\Bigg) \qquad (8)$$
where c and d are indices over all of the faces, not just the ones
that form a pair. Four example faces are shown to the right; horizontally the pairs agree and vertically they do not. Figure 3 above
shows that the feedforward neural network discovered polar coordinates without the user having to decide how to represent scale
and orientation in the output space.
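A small numpy sketch of the cost in Eq. 8, with names of our choosing; the text does not specify whether the normalizing sum excludes the c = d terms, so this version simply sums over all ordered pairs.

    import numpy as np

    def soft_match_cost(Y, pairs, w):
        """Eq. (8). Y: (n, 2) network outputs for all faces; pairs: list of
        (a, b) index tuples; w: their weights. The partition sum runs over
        all ordered pairs (c, d), an assumption of this sketch."""
        sq = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # ||y_c - y_d||^2
        log_z = np.log(np.exp(-sq).sum())                    # log sum_{c,d} exp(-||.||^2)
        a, b = np.asarray(pairs).T
        return -np.sum(w * (-sq[a, b] - log_z))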
6 Relationships to Other Methods and Conclusions
Several papers recently addressed the problem of learning Mahalanobis distance functions
given labeled data or at least side-information in the form of equivalence constraints. Two
related methods are RCA [1] and a convex optimization based algorithm [7]. RCA is
implicitly assuming a Gaussian distribution for each class (so it can be described using
only the first two moments of the class-conditional distribution). Xing et al. attempt to
find a transformation which minimizes all pairwise squared distances between points in the
same class; this implicitly assumes that classes form a single compact connected set. For
highly multimodal class distributions this cost function will be severely penalized. Lowe[6]
proposed a method similar to ours but used a more limited idea for learning a nearest
neighbour distance metric. In his approach, the metric is constrained to be diagonal (as
well, it is somewhat redundantly parameterized), and the objective function corresponds to
the average squared error between the true class distribution and the predicted distribution,
which is not entirely appropriate in a more probabilistic setting.
In parallel there has been work on learning low rank transformations for fast classification
and visualization. The classic LDA algorithm[3] is optimal if all class distributions are
Gaussian with a single shared covariance; this assumption, however is rarely true. LDA
also suffers from a small sample size problem when dealing with high-dimensional data
when the within-class scatter matrix is nearly singular [2]. Recent variants of LDA (e.g.
[5], [2]) make the transformation more robust to outliers and to numerical instability when
not enough datapoints are available. (This problem does not exist in our method since there
is no need for a matrix inversion.)
In general, there are two classes of regularization assumption that are common in linear
methods for classification. The first is a strong parametric assumption about the structure of
the class distributions (typically enforcing connected or even convex structure); the second
is an assumption about the decision boundary (typically enforcing a hyperplane). Our
method makes neither of these assumptions, relying instead on the strong regularization
imposed by restricting ourselves to a linear transformation of the original inputs.
Future research on the NCA model will investigate using local estimates of K as derived
from the entropy of the distributions pij ; the possible use of a stochastic classification rule
at test time; and more systematic comparisons between the objective functions f and g.
To conclude, we have introduced a novel non-parametric learning method, NCA, that
handles the tasks of distance learning and dimensionality reduction in a unified manner.
Although much recent effort has focused on non-linear methods, we feel that linear embedding has still not fully fulfilled its potential for either visualization or learning.
Acknowledgments
Thanks to David Heckerman and Paul Viola for suggesting that we investigate the alternative cost g(A) and the case of diagonal A.
References
[1] A. Bar-Hillel, T. Hertz, N. Shental, and D. Weinshall. Learning distance functions using equivalence relations. In International Conference on Machine Learning, 2003.
[2] L. Chen, H. Liao, M. Ko, J. Lin, and G. Yu. A new LDA-based face recognition system which can solve the small sample size problem. In Pattern Recognition, pages 1713-1726, 2000.
[3] R. A. Fisher. The use of multiple measurements in taxonomic problems. In Annals of Eugenics, pages 179-188, 1936.
[4] J. Friedman, J. Bentley, and R. Finkel. An algorithm for finding best matches in logarithmic expected time. In ACM, 1977.
[5] Y. Koren and L. Carmel. Robust linear dimensionality reduction. In IEEE Trans. Vis. and Comp. Graph., pages 459-470, 2004.
[6] D. Lowe. Similarity metric learning for a variable kernel classifier. In Neural Computation, pages 72-85, 1995.
[7] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance metric learning. In Proc. of Neural Information Processing Systems, 2003.
1,724 | 2,567 | Discriminant Saliency for Visual Recognition
from Cluttered Scenes
Dashan Gao
Nuno Vasconcelos
Department of Electrical and Computer Engineering,
University of California, San Diego
Abstract
Saliency mechanisms play an important role when visual recognition
must be performed in cluttered scenes. We propose a computational definition of saliency that deviates from existing models by equating saliency
to discrimination. In particular, the salient attributes of a given visual
class are defined as the features that enable best discrimination between
that class and all other classes of recognition interest. It is shown that
this definition leads to saliency algorithms of low complexity, that are
scalable to large recognition problems, and is compatible with existing
models of early biological vision. Experimental results demonstrating
success in the context of challenging recognition problems are also presented.
1 Introduction
The formulation of recognition as a problem of statistical classification has enabled significant progress in the area over the last decades. In fact, for certain types of problems
(face detection/recognition, vehicle detection, pedestrian detection, etc.) it now appears to
be possible to design classifiers that "work reasonably well most of the time", i.e. classifiers that achieve high recognition rates in the absence of a few factors that stress their
robustness (e.g. large geometric transformations, severe variations of lighting, etc.). Recent
advances have also shown that real-time recognition is possible on low-end hardware [1].
Given all this progress, it appears that one of the fundamental barriers remaining in the path
to a vision of scalable recognition systems, capable of dealing with large numbers of visual
classes, is an issue that has not traditionally been considered as problematic: training complexity. In this context, an aspect of particular concern is the dependence, of most modern
classifiers, on carefully assembled and pre-processed training sets. Typically these training
sets are large (in the order of thousands of examples per class) and require a combination of
1) painstaking manual labor of image inspection and segmentation of good examples (e.g.
faces) and 2) an iterative process where an initial classifier is applied to a large dataset of
unlabeled data, the classification results are manually inspected to detect more good examples (usually examples close to the classification boundary, or where the classifier fails),
and these good examples are then manually segmented and added to the training set.
Overall, the process is extremely laborious, and good training sets usually take years to
establish through the collaborative efforts of various research groups. This is completely
opposite to what happens in truly scalable learning systems (namely biological ones) that
are able to quickly bootstrap the learning process from a small number of virtually unprocessed examples. For example while humans can bootstrap learning with weak clues
and highly cluttered scenes (such as "Mr. X is the person sitting at the end of the room, the one with gray hair"), current face detectors require training faces to be cropped into
Figure 1: (a)(b)(c) Various challenging examples for current saliency detectors. (a) Apple hanging
from a tree; (b) a bird in a tree; (c) an egg in a nest. (d) some DCT basis functions. From left to right,
top to bottom, detectors of: edges, bars, corners, t-junctions, spots, flow patches, and checkerboards.
20 × 20 pixel arrays, with all the hair precisely cropped out, lighting gradients explicitly
removed, and so on. One property of biological vision that plays an important role in this
ability to learn from highly cluttered examples is the existence of saliency mechanisms.
For example, humans rarely have to exhaustively scan a scene to detect an object of interest. Instead, salient locations simply pop out as a result of the operation of pre-recognition
saliency mechanisms. While saliency has been the subject of significant study in computer
vision, most formulations do not pose saliency itself as a major goal of recognition. Instead
saliency is usually an auxiliary pre-filtering step, whose goal is to reduce computation by
eliminating image locations that can be universally classified as non-salient, according to
some definition of saliency which is completely divorced from the particular recognition
problem at hand.
In this work, we propose an alternative definition of saliency, which we denote by discriminant saliency, and which is intrinsically grounded on the recognition problem. This new
formulation is based on the intuition that, for recognition, the salient features of a visual
class are those that best distinguish it from all other visual classes of recognition interest.
We show that 1) this intuition translates naturally into a computational principle for the
design of saliency detectors, 2) this principle can be implemented with great computational
simplicity, 3) it is possible to derive implementations which scale to recognition problems
with large numbers of classes, and 4) the resulting saliency mechanisms are compatible
with classical models of biological perception. We present experimental results demonstrating success on image databases containing complex scenes and substantial amounts of
clutter.
2 Saliency detection
The extraction of salient points from images has been a subject of research for at least a
few decades. Broadly speaking, saliency detectors can be divided into three major classes.
The first, and most popular, treats the problem as one of the detection of specific visual
attributes. These are usually edges or corners (also called "interest points") [2] and their
detection has roots in the structure-from-motion literature, but there have also been proposals for other low-level visual attributes such as contours [3]. A major limitation of these
saliency detectors is that they do not generalize well. For example, a corner detector will
always produce a stronger response in a region that is strongly textured than in a smooth
region, even though textured surfaces are not necessarily more salient than smooth ones.
This is illustrated by the image of Figure 1(a). While a corner detector would respond
strongly to the highly textured regions of leaves and tree branches, it is not clear that these
are more salient than the smooth apple. We would argue for the contrary.
Some of these limitations are addressed by more recent, and generic, formulations of
saliency. One idea that has recently gained some popularity is to define saliency as image
complexity. Various complexity measures have been proposed in this context. Lowe [4]
measures complexity by computing the intensity variation in an image using the difference
of Gaussian function; Sebe [5] measures the absolute value of the coefficients of a wavelet
decomposition of the image; and Kadir [6] relies on the entropy of the distribution of local
intensities. The main advantage of these data-driven definitions of saliency is a significantly greater flexibility, as they could detect any of the low-level attributes above (corners,
contours, smooth edges, etc.) depending on the image under consideration. It is not clear,
however, that saliency can always be equated with complexity. For example, Figures 1(b)
and (c), show images containing complex regions, consisting of clustered leaves and straw,
that are not terribly salient. On the contrary, the much less complex image regions containing the bird or the egg appear to be significantly more salient.
Finally, a third formulation is to start from models of biological vision, and derive saliency
detection algorithms from these models [7]. This formulation has the appeal of its roots on
what are the only known full-functioning vision systems, and it has been shown to lead to
interesting saliency behavior [7]. However, these solutions have one significant limitation:
the lack of a clearly stated optimality criteria for saliency. In the absence of such a criteria
it is difficult to evaluate, in an objective sense, the goodness of the proposed algorithms or
to develop a theory (and algorithms) for optimal saliency.
3 Discriminant saliency
The basic intuition for discriminant saliency is somewhat of a "statement of the obvious":
the salient attributes of a given visual concept are the attributes that most distinguish it
from all other visual concepts that may be of possible interest. While close to obvious, this
definition is a major departure from all existing definitions in the vision literature.
First, it makes reference to a "set of visual concepts of possible interest". While such a
set may not be well defined for all vision problem (e.g. tracking or structure-from-motion
where many of the current saliency detectors have roots [2]), it is an intrinsic component
of the recognition problem: the set of visual classes to be recognized. It therefore makes
saliency contingent upon the existence of a collection of classes and, therefore, impossible
to compute from an isolated image. It also means that, for a given object, different visual
attributes will be salient in different recognition contexts. For example while contours and
shape will be most salient to distinguish a red apple from a red car, color and texture will be
most salient when the same apple is compared to an orange. All these properties appear to
be a good idea for recognition. Second, it sets as a goal for saliency that of distinguishing
between classes. This implies that the optimality criterion for the design of salient features
is discrimination, and therefore very different from traditional criteria such as repeatability
under image transformations [8]. Robustness in terms of these criteria (which, once again,
are well justified for tracking but do not address the essence of the recognition problem)
can be learned if needed to achieve discriminant goals [9].
Due to this equivalence between saliency and discrimination, the principle of discriminant
saliency can be easily translated into an optimality criteria for the design of saliency algorithms. In particular, it is naturally formulated as an optimal feature selection problem:
optimal features for saliency are the most discriminant features for the one-vs-all classification problem that opposes the class of interest to all remaining classes. Or, in other words,
the most salient features are the ones that best separate the class of interest from all others.
Given the well known equivalence between features and image filters, this can also be seen
as a problem of designing optimal filters for discrimination.
3.1 Scalable feature selection
In the context of scalable recognition systems, the implementation of discriminant saliency
requires 1) the design of a large number of classifiers (as many as the total number of
classes to recognize) at set up time, and 2) classifier tuning whenever new classes are
added to, or deleted from, the problem. It is therefore important to adopt feature selection techniques that are computationally efficient, preferably reusing computation from the
design of one classifier to the next. The design of such feature selection methods is a
non-trivial problem, which we have been actively pursuing in the context of research in
feature selection itself [11]. This research has shown that information-theoretic methods,
based on maximization of mutual information between features and class labels, have the
appealing property of enabling a precise control (through factorizations based on known
statistical properties of images) over the trade-off between optimality, in a minimum Bayes error sense, and computational efficiency [11]. Our experience of applying algorithms
in this family to the saliency detection problem is that even those strongly biased towards
efficiency can consistently select good saliency detection filters. This is illustrated by all
the results presented in this paper, where we have adopted the maximization of marginal
diversity (MMD) [10] as the guiding principle for feature selection.
Given a classification problem with class labels Y, prior class probabilities P_Y(i), a set of n features, X = (X_1, ..., X_n), and such that the probability density of X_k given class i is P_{X_k|Y}(x|i), the marginal diversity (MD) of feature X_k is
$$md(X_k) = \Big\langle KL\big[P_{X_k|Y}(x|i) \,\|\, P_{X_k}(x)\big] \Big\rangle_Y \qquad (1)$$
where $\langle f(i) \rangle_Y = \sum_{i=1}^{M} P_Y(i) f(i)$, and $KL[p\|q] = \int p(x) \log \frac{p(x)}{q(x)}\, dx$ is the Kullback-Leibler divergence between p and q. Since it only requires marginal density estimates,
the MD can be computed with histogram-based density estimates leading to an extremely
efficient algorithm for feature selection [10]. Furthermore, in the one-vs-all classification
scenario, the histogram of the "all" class can be obtained by a weighted average of the class
conditional histograms of the image classes that it contains, i.e.
$$P_{X_k|Y}(x|A) = \sum_{i \in A} P_{X_k|Y}(x|i)\, P_Y(i) \qquad (2)$$
where A is the set of image classes that compose the "all" class. This implies that the
bulk of the computation, the density estimation step, only has to be performed once for the
design of all saliency detectors.
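A minimal sketch of the marginal diversity computation of equation (1), with the "all"-class marginal built as the prior-weighted mixture of equation (2); the histogram inputs, names, and smoothing constant are our assumptions.

    import numpy as np

    def marginal_diversity(class_hists, priors, eps=1e-12):
        """md(X_k) of Eq. (1). class_hists: (M, B) histograms of one feature
        X_k, one row per class, rows summing to 1; priors: (M,) priors P_Y(i)."""
        marginal = priors @ class_hists          # prior-weighted mixture, as in Eq. (2)
        kl = (class_hists *
              np.log((class_hists + eps) / (marginal + eps))).sum(axis=1)
        return priors @ kl                       # <KL[P_{X_k|Y} || P_{X_k}]>_Y

Feature selection then amounts to ranking features by md and keeping the largest; because the class-conditional histograms are estimated once, each one-vs-all marginal is just a re-weighted sum.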
3.2 Biologically plausible models
Ever since Hubel and Wiesel's demonstration that different groups in V1 are tuned for detecting different types of stimuli (e.g. bars, edges, etc.), it has been known that the earliest stages
of biological vision can be modeled as a multiresolution image decomposition followed by
some type of non-linearity. Indeed, various "biologically plausible" models of early vision
are based on this principle [12]. The equivalence between saliency detection and the design of optimally discriminant filters makes discriminant saliency compatible with most
of these models. In fact, as detailed in the experimental section, our experience is that remarkably simple mechanisms, inspired by biological vision, are sufficient to achieve good
saliency results. In particular, all the results reported in this paper were achieved with the
following two-step procedure, based on the Malik-Perona model of texture perception [13].
First, a saliency map (i.e. a function describing the saliency at each pixel location) is obtained by pooling the responses of the different saliency filters after half-wave rectification
$$S(x, y) = \sum_{i=1}^{2n} w_i R_i^2(x, y) \qquad (3)$$
where S(x, y) is the saliency at location (x, y), and R_i(x, y), i = 1, ..., 2n are the channels resulting from half-wave rectification of the outputs of the saliency filters F_i(x, y), i = 1, ..., n,
$$R_{2k-1} = \max[-(I * F_k)(x, y),\, 0], \qquad R_{2k} = \max[(I * F_k)(x, y),\, 0] \qquad (4)$$
I(x, y) is the input image, and w_i = md(i) a weight equal to the feature's marginal diversity.
Second, the saliency map is fed to a peak detection module that consists of a winner-take-all network. The location of largest saliency is first found. Its spatial scale is set
to the size of the region of support of the saliency filter with strongest response at that
location. All neighbors within a circle whose radius is this scale are then suppressed (set
to zero) and the process is iterated. The procedure is illustrated by Figure 2, and produces
[Figure 2 schematic: the input I is convolved with filters F_1, ..., F_n; each response is half-wave rectified into channels R_1, ..., R_2n, squared and weighted by w_i to form the saliency map; scale selection and a WTA stage then output the salient locations.]
Figure 2: Schematic of the saliency detection model.
a list of salient locations, their saliency strengths, and scales. It is important to limit the
number of channels that contribute to the saliency map since, for any given class, there
are usually many features which are not discriminant but have strong response at various
image locations (e.g. areas of clutter). This is done through a cross-validation step that we
discuss in section 4.3.
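The two-step procedure just described can be sketched as follows; the helper names, the use of scipy's convolve2d, and the precomputed per-pixel scale map (standing in for the support of the strongest filter at each location) are our assumptions.

    import numpy as np
    from scipy.signal import convolve2d

    def saliency_map(I, filters, w):
        """Eqs. (3)-(4): each filter F_k yields two half-wave rectified channels,
        each squared and weighted by its own w_i = md(i); w has 2n entries."""
        S = np.zeros(I.shape)
        for k, F in enumerate(filters):
            r = convolve2d(I, F, mode='same')            # I * F_k
            S += w[2 * k] * np.maximum(-r, 0.0) ** 2     # R_{2k-1}^2
            S += w[2 * k + 1] * np.maximum(r, 0.0) ** 2  # R_{2k}^2
        return S

    def wta_peaks(S, scale, n_peaks):
        """Winner-take-all: take the strongest location, suppress a disc whose
        radius is the (precomputed) scale at that location, and iterate."""
        S = S.copy()
        ys, xs = np.mgrid[:S.shape[0], :S.shape[1]]
        peaks = []
        for _ in range(n_peaks):
            y, x = np.unravel_index(np.argmax(S), S.shape)
            r = scale[y, x]                              # support of strongest filter
            peaks.append((y, x, S[y, x], r))
            S[(ys - y) ** 2 + (xs - x) ** 2 <= r ** 2] = 0.0
        return peaks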
All the experiments presented in the following section were obtained using the coefficients
of the discrete cosine transform (DCT) as features. While the precise set of features is likely
not to be crucial for the quality of the saliency results (e.g. other invertible multiresolution
decompositions, such as Gabor or other wavelets, would likely work well) the DCT feature
set has two appealing properties. First, previous experience has shown that they perform
quite well in large scale recognition problems [14]. Second, as illustrated by Figure 1(d),
the DCT basis functions contain detectors for various perceptually relevant low-level image
attributes, including edges, bars, corners, t-junctions, spots, etc. This can obviously only
make the saliency detection process easier.
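For concreteness, a sketch of how the separable 2-D DCT basis functions of Figure 1(d) can be generated; the function name and basis size are our choices.

    import numpy as np

    def dct_basis(n=8):
        """The n*n separable 2-D DCT-II basis; basis (u, v) is the outer product
        of two 1-D cosines. Low-order functions resemble edge/bar/corner detectors."""
        k = np.arange(n)
        C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        C[0] /= np.sqrt(2)                 # DC normalization
        C *= np.sqrt(2.0 / n)              # orthonormal scaling
        return [np.outer(C[u], C[v]) for u in range(n) for v in range(n)]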
4 Results and discussion
We start the experimental evaluation of discriminant saliency with some results from the
Brodatz texture database, which illustrate various interesting properties of the former.
4.1 Saliency maps
Brodatz is an interesting database because it stresses aspects of saliency that are quite problematic for most existing saliency detection algorithms: 1) the need to perform saliency
judgments in highly textured regions, 2) classes that contain salient regions of diverse
shapes, and 3) a great variety of salient attributes, e.g. corners, closed and open contours, regular geometric figures (circles, squares, etc.), texture gradients, crisp
and soft edges, etc. The entire collection of textures in the database was divided into a train
and test set, using the set-up of our previous retrieval work [14]. The training database was
used to determine the salient features of each class, and saliency maps were then computed
on the test images. The process was repeated for all texture classes, on a one-vs-all setting
(class of interest against all others) with each class sequentially considered as the ?one?
class.
As illustrated by the examples shown in Figure 3, none of the challenges posed by Brodatz
seem very problematic for discriminant saliency. Note, in particular, that the latter does not
appear to have any difficulty in 1) ignoring highly textured background areas in favor of a
more salient foreground object (two leftmost images), which could itself be another texture,
2) detecting as salient a wide variety of shapes, contours of different crispness and scale,
or 3) even assigning strong saliency to texture gradients (rightmost image). This robustness is
a consequence of the fact that the saliency features are tuned to discriminate the class of
interest from the rest. We next show that it can lead to significantly better saliency detection
performance than that achievable with the algorithms currently available in the literature.
Figure 3: Saliency maps (bottom row) obtained on various textures (shown in top row) from Brodatz.
Bright pixels flag salient locations. Note: the saliency maps of the second row are best viewed on
paper. A gamma-corrected version would be best for viewing on CRT displays and is available at
www.svcl.ucsd.edu/publications/nips04-crt.ps
Dataset     | DSD   | SSD  | HSD   | pixel-based | constellation [15]
Faces       | 97.24 | 77.3 | 61.87 | 93.05       | 96.4
Motorbikes  | 96.25 | 81.3 | 74.83 | 87.83       | 92.5
Airplanes   | 93.00 | 78.7 | 80.17 | 90.33       | 90.2
Table 1: SVM classification accuracy based on different detectors.
4.2 Comparison to existing methods
While the results of the previous section provide interesting anecdotal evidence in support of discriminant saliency, objective conclusions can only be drawn by comparison to
existing techniques. Unfortunately, it is not always straightforward to classify saliency detectors objectively by simple inspection of saliency maps, since different people frequently
attribute different degrees of saliency to a given image region. In fact, in the absence of a
larger objective for saliency, e.g. recognition, it is not even clear that the problem is well
defined. To avoid the obvious biases inherent to a subjective evaluation of saliency maps,
we tried to design an experiment that could lead to an objective comparison. The goal was
to quantify if the saliency maps produced by the different techniques contained enough
information for recognition. The rationale is the following. If, when applied to an image, a
saliency detector has an output which is highly correlated with the appearance/absence of
the class of interest in that image, then it should be possible to classify the image (as belonging/not belonging to the class) by classifying the saliency map itself. We then built the
simplest possible saliency map classifier that we could conceive of: the intensity values of
the saliency map were histogrammed and fed to a support vector machine (SVM) classifier.
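A sketch of this classification experiment; the number of bins, the shared histogram range, and the default SVM kernel are our assumptions, since the text does not specify them.

    import numpy as np
    from sklearn.svm import SVC

    def histogram_features(maps, bins=32, value_range=None):
        """Reduce each saliency map to a fixed-length intensity histogram."""
        if value_range is None:
            value_range = (min(m.min() for m in maps), max(m.max() for m in maps))
        return np.array([np.histogram(m, bins=bins, range=value_range,
                                      density=True)[0] for m in maps])

    # Usage (train_maps/test_maps are saliency maps; labels marks class presence):
    #   rng = (min(m.min() for m in train_maps), max(m.max() for m in train_maps))
    #   clf = SVC().fit(histogram_features(train_maps, value_range=rng), labels)
    #   predictions = clf.predict(histogram_features(test_maps, value_range=rng))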
We compared the performance of the discriminant saliency detector (DSD) described
above, with one representative from each of the areas of the literature discussed in section 2: the Harris saliency detector (HSD) and the scale saliency detector (SSD) of [6]. To
evaluate performance on a generic recognition scenario, we adopted the Caltech database,
using the experimental set up proposed in [15]. To obtain an idea of what would be acceptable classification results on this database, we used two benchmarks: the performance,
on the same classification task, of 1) a classifier of equivalent simplicity but applied to the
images themselves and 2) the constellation-based classifier proposed in [15] (which we believe to be representative of the state-of-the-art for this database). For the simple classifier,
we reduced the luminance component of each image to a vector (by stacking all pixels into
a column) and used a SVM to classify the resulting set of points. All parameters were set to
assure a fair comparison between the saliency detectors (e.g. a multiscale version of Harris
was employed, all detectors combined information from three scales, etc.). Table 1 presents
the two benchmarks and the results of classifying the saliency histograms generated by the
three detectors.
The table supports various interesting conclusions. First, both the HSD and the SSD have
Figure 4: Original images (top row), saliency maps generated by DSD (second row), and a
comparison of salient locations detected by: DSD in the third row, SSD in the fourth, and
HSD at the bottom. Salient locations are the centers of the white circles, the circle radii
representing scale. Note: the saliency maps of the second row are best viewed on paper.
A gamma-corrected version would be best for viewing on CRT displays and is available at
www.svcl.ucsd.edu/svclwww/publications/nips04-crt.ps
very poor performance, indicating that they produce saliency maps that have weak correlation with the presence/absence of the class of interest in the image to classify. Second,
the simple pixel-based classifier works surprisingly well on this database, given that there
is indeed a substantial amount of clutter in its images (see Figure 4). Its performance is,
nevertheless, inferior to that of the constellation classifier. The third, and likely most surprising, observation is that the classification of the DSD histograms clearly outperforms this
classifier, achieving the overall best performance. It should be noted that this is somewhat
of an unfair comparison for the constellation classifier, since it tries to solve a problem that
is more difficult than the one considered in this experiment. While the question of interest here is "is class x present in the image or not?", this classifier can actually determine
the location of the element from the class (e.g. a face) in the image. In any case, these
results seem to support the claim that DSD produces saliency maps which contain most of
the saliency information required for classification. The issue of translating these saliency
maps into a combined segmentation/recognition solution will be addressed in future research.
Finally, the superiority of the DSD over the other two saliency detectors considered in this
experiment is also clearly supported by the inspection of the resulting salient locations.
Some examples are presented in Figure 4.
4.3 Determining the number of salient features
In addition to experimental validation of the performance of discriminant saliency, the experiment of the previous section suggests a classification-optimal strategy to determine the
number of features that contribute to the saliency maps of a given class of interest. Note
that, while the training examples from each class are not carefully segmented (and can contain large areas of clutter), the working assumption is that each image is labeled with respect
to the presence or absence in it of the class of interest. Hence, the classification problem
of the previous section is perfectly well defined before segmentation (e.g. separation of
the pixels containing objects in the class and pixels of background) takes place. It follows
that a natural way to determine the optimal number of features is to search for the number
that maximizes the classification rate on this problem. This search can be performed by
[Figure 5 plots: three panels (a), (b), (c); horizontal axis "Number of features" from 0 to about 80, vertical axis "Accuracy (%)" from 80 to 100.]
Figure 5: Classification accuracy vs number of features considered by the saliency detector for (a)
faces, (b) motorbikes and (c) airplanes.
a traditional cross-validation strategy, the strategy that we have adopted for all the results
presented in this paper. One interesting question is whether the performance of the DSD is
very sensitive to the number of features chosen. Our experience is that, while it is important to limit the number of features, there is usually a range that leads to results very close
to optimal. This is shown in Figure 5 where we present the variation of the classification
rate on the problem of the previous section for various classes on Caltech. Visual inspection of the saliency detection results obtained with feature sets within this range showed no
substantial differences with respect to that obtained with the optimal feature set.
References
[1] P. Viola and M. Jones. Robust real-time object detection. 2nd Int. Workshop on Statistical and
Computational Theories of Vision Modeling, Learning, Computing and Sampling, July 2001.
[2] C. Harris and M. Stephens. A combined corner and edge detector. Alvey Vision Conference,
1988.
[3] A. Sha?ashua and S. Ullman. Structural saliency: the detection of globally salient structures
using a locally connected network. Proc. Internat. Conf. on Computer Vision, 1988.
[4] D. G. Lowe. Object recognition from local scale-invariant features. In Proceedings of International Conference on Computer Vision, pp. 1150-1157, 1999.
[5] N. Sebe, M. S. Lew. Comparing salient point detectors. Pattern Recognition Letters, vol.24,
no.1-3, Jan. 2003, pp.89-96.
[6] T. Kadir and M. Brady. Scale, Saliency and Image Description. International Journal of Computer Vision, Vol. 45, No. 2, pp. 83-105, November 2001.
[7] L. Itti, C. Koch and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Analysis and Machine Intelligence, 20(11), Nov. 1998.
[8] C. Schmid, R. Mohr and C. Bauckhage. Comparing and Evaluating Interest Points. Proceedings
of International Conference on Computer Vision 1998, p.230-235.
[9] D. Claus and A. Fitzgibbon. Reliable Fiducial Detection in Natural Scenes. Proceedings of the
8th European Conference on Computer Vision, Prague, Czech Republic, 2004
[10] N. Vasconcelos. Feature Selection by Maximum Marginal Diversity. In Neural Information Processing Systems, Vancouver, Canada, 2002.
[11] N. Vasconcelos. Scalable Discriminant Feature Selection for Image Retrieval and Recognition.
To appear in Proc. of IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2004
[12] D. Sagi, "The Psychophysics of Texture Segmentation," in Early Vision and Beyond, T. Papathomas, Ed., chapter 7. MIT Press, 1996.
[13] J. Malik and P. Perona. Preattentive texture discrimination with early vision mechanisms. J. Opt. Soc. Am. A, 7(5), May 1990, pp. 923-932.
[14] N. Vasconcelos and G. Carneiro. What is the Role of Independence for Visual Recognition? In
Proc. European Conference on Computer Vision, Copenhagen, Denmark, 2002.
[15] R. Fergus, P. Perona and A. Zisserman. Object Class Recognition by Unsupervised ScaleInvariant Learning. In Proc. IEEE Conf. on Computer Vision and Pattern Recognition 2003.
Solitaire: Man Versus Machine
Xiang Yan†   Persi Diaconis†   Paat Rusmevichientong‡   Benjamin Van Roy†
† Stanford University
{xyan,persi.diaconis,bvr}@stanford.edu
‡ Cornell University
paatrus@cornell.edu
Abstract
In this paper, we use the rollout method for policy improvement to analyze a version of Klondike solitaire. This version, sometimes called
thoughtful solitaire, has all cards revealed to the player, but then follows
the usual Klondike rules. A strategy that we establish, using iterated rollouts, wins about twice as many games on average as an expert human
player does.
1 Introduction
Though proposed more than fifty years ago [1, 7], the effectiveness of the policy improvement algorithm remains a mystery. For discounted or average reward Markov decision problems with n states and two possible actions per state, the tightest known worst-case upper bound in terms of n on the number of iterations taken to find an optimal policy is O(2^n/n) [9]. This is also the tightest known upper bound for deterministic Markov decision problems. It is surprising, however, that there are no known examples of Markov decision problems with two possible actions per state for which more than n + 2 iterations are required. A more intriguing fact is that even for problems with a large number of states (say, in the millions) an optimal policy is often delivered after only half a dozen or so iterations.
In problems where n is enormous (say, a googol) this may appear to be a moot point because each iteration requires Ω(n) compute time. In particular, a policy is represented by a table with one action per state and each iteration improves the policy by updating each entry of this table. In such large problems, one might resort to a suboptimal heuristic policy, taking the form of an algorithm that accepts a state as input and generates an
action as output. An interesting recent development in dynamic programming is the rollout method. Pioneered by Tesauro and Galperin [13, 2], the rollout method leverages the
policy improvement concept to amplify the performance of any given heuristic. Unlike the
conventional policy improvement algorithm, which computes an optimal policy off-line so
that it may later be used in decision-making, the rollout method performs its computations
on-line at the time when a decision is to be made. When making a decision, rather than
applying the heuristic policy directly, the rollout method computes an action that would
result from an iteration of policy improvement applied to the heuristic policy. This does
not require Ω(n) compute time since only one entry of the table is computed.
The way in which actions are generated by the rollout method may be considered an alternative heuristic that improves on the original. One might consider applying the rollout
method to this new heuristic. Another heuristic would result, again with improved performance. Iterated a sufficient number of times, this process would lead to an optimal policy.
However, iterating is usually not an option. Computational requirements grow exponentially in the number of iterations, and the first iteration, which improves on the original
heuristic, is already computationally intensive. For this reason, prior applications of the
rollout method have involved only one iteration [3, 4, 5, 6, 8, 11, 12, 13]. For example, in
the interesting study of Backgammon by Tesauro and Galperin [13], moves were generated
in five to ten seconds by the rollout method running on configurations of sixteen to thirty-two nodes in a network of IBM SP1 and SP2 parallel-RISC supercomputers with parallel speedup efficiencies of 90%. A second iteration of the rollout method would have been infeasible, requiring about six orders of magnitude more time per move.
In this paper, we apply the rollout method to a version of solitaire, modeled as a deterministic Markov decision problem with over 52! states. Determinism drastically reduces
computational requirements, making it possible to consider iterated rollouts.¹ With five
iterations, a game, implemented in Java, takes about one hour and forty-five minutes on
average on a SUN Blade 2000 machine with two 900MHz CPUs, and the probability of
winning exceeds that of a human expert by about a factor of two. Our study represents an
important contribution both to the study of the rollout method and to the study of solitaire.
2 Solitaire
It is one of the embarrassments of applied mathematics that we cannot determine the odds
of winning the common game of solitaire. Many people play this game every day, yet
simple questions such as What is the chance of winning? How does this chance depend on
the version I play? What is a good strategy? remain beyond mathematical analysis.
According to Parlett [10], solitaire came into existence when fortune-telling with cards
gained popularity in the eighteenth century. Many variations of solitaire exist today, such
as Klondike, Freecell, and Carpet. Popularized by Microsoft Windows, Klondike has probably become the most widely played.
Klondike is played with a standard deck of cards: there are four suits (Spades, Clubs,
Hearts, and Diamonds) each made up of thirteen cards ranked 1 through 13: Ace, 2, 3, ...,
10, Jack, Queen, and King. During the game, each card resides in one of thirteen stacks²:
the pile, the talon, four suit stacks and seven build stacks. Each suit stack corresponds to a
particular suit and build stacks are labeled 1 through 7.
At the beginning of the game, cards are dealt so that there is one card in the first build stack,
two cards in the second build stack, ..., and seven cards in the seventh build stack. The top
card on each of the seven build stacks is turned face-up while the rest of the cards in the
build stacks face down. The other twenty-four cards, forming the pile, face down as well.
The talon is initially empty.
The goal of the game is to move all cards into the suit stacks, aces first, then two's, and so
on, with each suit stack evolving as an ordered increasing arrangement of cards of the same
suit. The figure below shows a typical mid-game configuration.
¹ Backgammon is stochastic because play is influenced by the roll of dice.
² In some solitaire literature, stacks are referred to as piles.
We will study a version of solitaire in which the identity of each card at each position is
revealed to the player at the beginning of the game but the usual Klondike rules still apply.
This version is played by a number of serious solitaire players as a much more difficult
version than standard Klondike. Parlett [10] offers further discussion. We call this game
thoughtful solitaire and now spell out the rules.
On each turn, the player can move cards from one stack to another in the following manner:
- Face-up cards of a build stack, called a card block, can be moved to the top of another build stack provided that the build stack to which the block is being moved accepts the block. Note that all face-up cards on the source stack must be moved together. After the move, these cards would then become the top cards of the stack to which they are moved, and their ordering is preserved. The card originally immediately beneath the card block, now the top card in its stack, is turned face-up. In the event that all cards in the source stack are moved, the player has an empty stack.³
- The top face-up card of a build stack can be moved to the top of a suit stack, provided that the suit stack accepts the card.
- The top card of a suit stack can be moved to the top of a build stack, provided that the build stack accepts the card.
- If the pile is not empty, a move can deal its top three cards to the talon, which maintains its cards in a first-in-last-out order. If the pile becomes empty, the player can redeal all the cards on the talon back to the pile in one card move. A redeal preserves the ordering of cards. The game allows an unlimited number of redeals.
- A card on the top of the talon can be moved to the top of a build stack or a suit stack, provided that the stack to which the card is being moved accepts the card.
³ It would seem to some that since the identity of all cards is revealed to the player, whether a card is face-up or face-down is irrelevant. We retain this property of cards as it is still important in describing the rules and formulating our strategy.
- A build stack can only accept an incoming card block if the top card on the build stack is adjacent to and braided with the bottom card of the block. A card is adjacent to another card of rank r if it is of rank r + 1. A card is braided with a card of suit s if its suit is of a color different from s. Additionally, if a build stack is empty, it can only accept a card block whose bottom card is a King.
- A suit stack can only accept an incoming card of its corresponding suit. If a suit stack is empty, it can only accept an Ace. If it is not empty, the incoming card must be adjacent to the current top card of the suit stack.
As stated earlier, the objective is to end up with all cards on suit stacks. If this event occurs,
the game is won.
3 Expert Play
We were introduced to thoughtful solitaire by a senior American mathematician (former
president of the American Mathematical Society and indeed a famous combinatorialist)
who had spent a number of years studying the game. He finds this version of solitaire much
more thought-provoking and challenging than the standard Klondike. For instance, while
the latter is usually played quickly, our esteemed expert averages about 20 minutes for each
game of thoughtful solitaire. He carefully played and recorded 2,000 games, achieving a
win rate of 36.6%.
With this background, it is natural to wonder how well an optimal player can perform at
thoughtful solitaire. As we will illustrate, our best strategy offers a win rate of about 70%.
4 Machine Play
We have developed two strategies that play thoughtful solitaire. Both are based on the
following general procedure:
1. Identify the set of legal moves.
2. Select and execute a legal move.
3. If all cards are on suit stacks, declare victory and terminate.
4. If the new card configuration repeats a previous one, declare loss and terminate.⁴
5. Repeat the procedure.
The only nontrivial task in this procedure is selection from the legal moves. We will first
describe a heuristic strategy for selecting a legal move based on a card configuration. Afterwards, we will discuss the use of rollouts.
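For concreteness, the five-step procedure can be sketched in Python. This is a minimal illustration only; the paper's implementation was in Java, and legal_moves, apply_move, is_won, and select_move are hypothetical placeholders for a full solitaire engine.

from typing import Callable, Hashable, Iterable

def play_game(state: Hashable,
              legal_moves: Callable[[Hashable], Iterable],
              apply_move: Callable[[Hashable, object], Hashable],
              is_won: Callable[[Hashable], bool],
              select_move: Callable,
              max_moves: int = 100000) -> bool:
    """Run the five-step play procedure; returns True iff the game is won."""
    seen = {state}                          # card configurations encountered so far
    for _ in range(max_moves):
        moves = list(legal_moves(state))    # step 1
        if not moves:
            return False
        state = apply_move(state, select_move(state, moves))  # step 2
        if is_won(state):                   # step 3: all cards on suit stacks
            return True
        if state in seen:                   # step 4: repeated configuration
            return False
        seen.add(state)                     # step 5: repeat
    return False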
4.1 A Heuristic Strategy
Our heuristic strategy is based on part of the Microsoft Windows Klondike scoring system:
- The player starts the game with an initial score of 0.
- Whenever a card is moved from a build stack to a suit stack, the player gains 5 points.
- Whenever a card is moved from the talon to a build stack, the player gains 5 points.
- Whenever a card is moved from a suit stack to a build stack, the player loses 10 points.

⁴ One straightforward way to determine if a card configuration has previously occurred is to store all encountered card configurations. Instead of doing so, however, we notice that there are three kinds of moves that could lead us into an infinite loop: pile-talon moves, moves that could juggle a card block between two build stacks, and moves that could juggle a card block between a build stack and a suit stack. Hence, to simplify our strategy, we disable the second kind of moves. Our heuristic will also practically disable the third kind. For the first kind, we record whether any card move other than a pile-talon move has occurred since the last redeal. If not, we detect an infinite loop and declare loss.
In our heuristic strategy, we assign a score to each card move based on the above scoring
system. We assign the score zero to any moves not covered by the above rules. When
selecting a move, we choose among those that maximize the score.
Intuitively, this heuristic seems reasonable. The player has incentive to move cards from
the talon to a build stack and from a build stack to a suit stack. One important element
that the heuristic fails to capture, however, is what move to make when multiple moves
maximize the score. Such decisions, especially during the early phases of a game, are
crucial.
To select among moves that maximize score, we break the tie by assigning the following
priorities:
- If the card move is from a build stack to another build stack, one of the following two assignments of priority occurs:
  - If the move turns an originally face-down card face-up, we assign this move a priority of k + 1, where k is the number of originally face-down cards on the source stack before the move takes place.
  - If the move empties a stack, we assign this move a priority of 1.
- If the card move is from the talon to a build stack, one of the following three assignments of priority occurs:
  - If the card being moved is not a King, we assign the move priority 1.
  - If the card being moved is a King and its matching Queen is in the pile, in the talon, in a suit stack, or is face-up in a build stack, we assign the move priority 1.
  - If the card being moved is a King and its matching Queen is face-down in a build stack, we assign the move priority -1.
- For card moves not covered by the description above, we assign them a priority of 0.
In addition to introducing priorities, we modify the Windows Klondike scoring system
further by adding the following change: in a card move, if the card being moved is a King
and its matching Queen is face-down in a build stack, we assign the move a score of 0.
Note that given our assignment of scores and priorities, we practically disable card moves
from a suit stack to a build stack. Because such moves have a negative score and a card
move from the pile to the talon or from the talon to the pile has zero score and is almost
always available, our strategy would always choose the pile-talon move over the moves
from a suit stack to a build stack.
In the case when multiple moves equal in priority maximize the score, we randomly select
a move among them.
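The scoring and tie-breaking rules above can be written compactly. The following sketch is ours, not the authors' Java code; the Move fields are hypothetical names for the information each rule consults.

import random
from dataclasses import dataclass

@dataclass
class Move:
    # Hypothetical move description; the field names are ours.
    kind: str                        # 'build_to_build', 'build_to_suit', 'talon_to_build',
                                     # 'suit_to_build', 'pile_talon', ...
    reveals_facedown: bool = False   # move turns a face-down card face-up
    facedown_on_source: int = 0      # face-down cards on source stack before the move
    empties_stack: bool = False
    moves_king: bool = False
    matching_queen_facedown: bool = False

def score(m: Move) -> int:
    """Modified Windows Klondike score of Section 4.1."""
    if m.kind == 'build_to_suit':
        return 5
    if m.kind == 'talon_to_build':
        return 0 if m.moves_king and m.matching_queen_facedown else 5
    if m.kind == 'suit_to_build':
        return -10
    return 0

def priority(m: Move) -> int:
    """Tie-breaking priority of Section 4.1."""
    if m.kind == 'build_to_build':
        if m.reveals_facedown:
            return m.facedown_on_source + 1
        if m.empties_stack:
            return 1
    if m.kind == 'talon_to_build':
        return -1 if m.moves_king and m.matching_queen_facedown else 1
    return 0

def heuristic_move(moves):
    best_score = max(score(m) for m in moves)
    tied = [m for m in moves if score(m) == best_score]
    best_prio = max(priority(m) for m in tied)
    return random.choice([m for m in tied if priority(m) == best_prio])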
The introduction of priority improves our original game-playing strategy in two ways:
when we encounter a situation where we can move either one of two blocks on two separate
build stacks atop the top card of a third build stack, we prefer moving the block whose stack
has more face-down cards. Intuitively, such a move would strive to balance the number of
face-down cards in stacks. Our experiments show that this heuristic significantly improves
success rate. The second way in which our prioritization scheme helps is that we are more
deliberate in which King to select to enter an empty build stack. For instance, consider a
situation where the King of Hearts and the King of Spades, both on the pile, are vying for
an empty build stack and there is a face-up Queen of Diamonds on a build stack. We should
certainly move the King of Spades to the empty build stack so that the Queen of Diamonds
can be moved on top of it. Whereas our prioritization warrants such consideration, our
original heuristic does not.
4.2 Rollouts
Consider a strategy h that maps a card configuration x to a legal move h(x). What we described in the previous section was one example of a strategy h. In this section, we will discuss the rollout method as a procedure for amplifying the performance of any strategy. Given a strategy h, this procedure generates an improved strategy h′, called a rollout strategy. This idea was originally proposed by Tesauro and Galperin [13] and builds on the policy improvement algorithm of dynamic programming [1, 7].

Given a card configuration x, a strategy h would make a move h(x). A rollout strategy would make a move h′(x), determined as follows:

1. For each legal move a, simulate the remainder of the game, taking move a and then employing strategy h thereafter.
2. If any of these simulations leads to victory, choose one of them randomly and let h′(x) be the corresponding move a.⁵
3. If none of the simulations lead to victory, let h′(x) = h(x).

We can then iterate this procedure to generate a further improved strategy h′′ that is a rollout strategy relative to h′. It is easy to prove that after a finite number of such iterations,
we would arrive at an optimal strategy [2]. However, the computation time required grows
exponentially in the number of iterations, so this may not be practical. Nevertheless, one
might try a few iterations and hope that this offers the bulk of the mileage.
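A sketch of one rollout iteration, under the same hypothetical game interface as before; the inner simulate plays out the game with the base strategy h and treats a repeated configuration as a loss.

import random
from typing import Callable, Hashable, Iterable

def rollout(base: Callable,                 # strategy h: (state, moves) -> move
            legal_moves: Callable[[Hashable], Iterable],
            apply_move: Callable,
            is_won: Callable[[Hashable], bool]) -> Callable:
    """Return the rollout strategy h' obtained from one policy-improvement step on h."""

    def simulate(state) -> bool:
        seen = {state}
        while True:
            moves = list(legal_moves(state))
            if not moves:
                return False
            state = apply_move(state, base(state, moves))
            if is_won(state):
                return True
            if state in seen:               # loop detected: declare loss
                return False
            seen.add(state)

    def improved(state, moves):
        winners = [a for a in moves if simulate(apply_move(state, a))]
        return random.choice(winners) if winners else base(state, moves)

    return improved

# Iterating amplifies the strategy further, at exponentially growing cost:
# h1 = rollout(heuristic_move, ...); h2 = rollout(h1, ...); and so on.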
5 Results
We implemented in Java the heuristic strategy and the procedure for computing rollout
strategies. Simulation results are provided in the following table and chart. We randomly
generated a large number of games and played them with our algorithms in an effort to approximate the success probability with the percentage of games actually won. To determine
a sufficient number of games to simulate, we used the Central Limit Theorem to compute
the confidence bounds on success probability for each algorithm with a confidence level of
99%. For the original heuristic and 1 through 3 rollout iterations, we managed to achieve
confidence bounds of [-1.4%, 1.4%]. For 4 and 5 rollout iterations, due to time constraints,
we simulated fewer games and obtained weaker confidence bounds. Interestingly, however, after 5 rollout iterations, the resulting strategy wins almost twice as frequently as our
esteemed mathematician.
⁵ Note that at this stage, we could record all moves made in this simulation and declare victory. That is how our program is implemented. However, we leave step 2 as stated for the sake of clarity in presentation.
Player         Success   Games    Average Time          99% Confidence
               Rate      Played   Per Game              Bounds
Human expert   36.6%     2,000    20 minutes            ±2.78%
heuristic      13.05%    10,000   .021 seconds          ±.882%
1 rollout      31.20%    10,000   .67 seconds           ±1.20%
2 rollouts     47.60%    10,000   7.13 seconds          ±1.30%
3 rollouts     56.83%    10,000   1 minute 36 seconds   ±1.30%
4 rollouts     60.51%    1,000    18 minutes 7 seconds  ±4.00%
5 rollouts     70.20%    200      1 hour 45 minutes     ±8.34%

6 Future Challenges
One limitation of our rollout method lies in its recursive nature. Although it is clearly
formulated and hence easily implemented in software, the algorithm does not provide a
simple and explicit strategy for human players to make decisions.
One possible direction for further exploration would be to compute a value function, mapping the state of the game to an estimate of whether or not the game can be won. Certainly,
this function could not be represented exactly, but we could try approximating it in terms
of a linear combination of features of the game state, as is common in the approximate
dynamic programming literature [2].
We have also attempted to prove an upper bound on the success rate of thoughtful solitaire by enumerating sets of initial card configurations that force a loss. Currently,
the tightest upper bound we can rigorously prove is 98.81%. Speed optimization of our
software implementation is under way. If the success rate bound is improved and we are
able to run additional rollout iterations, we may produce a verifiable near-optimal strategy
for thoughtful solitaire.
Acknowledgment
This material is based upon work supported by the National Science Foundation under
Grant ECS-9985229.
References
[1] R. Bellman. Applied Dynamic Programming. Princeton University Press, 1957.
[2] D. Bertsekas and J.N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific,
1996.
[3] D. P. Bertsekas, J. N. Tsitsiklis, and C. Wu, Rollout Algorithms for Combinatorial
Optimization. Journal of Heuristics, 3:245-262, 1997.
[4] D. P. Bertsekas and D. A. Castañon. Rollout Algorithms for Stochastic Scheduling
Problems. Journal of Heuristics, 5:89-108, 1999.
[5] D. Bertsimas and R. Demir. An Approximate Dynamic Programming Approach to
Multi-dimensional Knapsack Problems. Management Science, 4:550-565, 2002.
[6] D. Bertsimas and I. Popescu. Revenue Management in a Dynamic Network Environment. Transportation Science, 37:257-277, 2003.
[7] R. Howard. Dynamic Programming and Markov Processes. MIT Press, 1960.
[8] A. McGovern, E. Moss, and A. Barto. Building a Basic Block Instruction Scheduler
Using Reinforcement Learning and Rollouts. Machine Learning, 49:141-160, 2002.
[9] Y. Mansour and S. Singh. On the Complexity of Policy Iteration. In Fifteenth Conference on Uncertainty in Artificial Intelligence, 1999.
[10] D. Parlett. A History of Card Games. Oxford University Press, 1991.
[11] N. Secomandi. Analysis of a Rollout Approach to Sequencing Problems with Stochastic Routing Applications. Journal of Heuristics, 9:321-352, 2003.
[12] N. Secomandi. A Rollout Policy for the Vehicle Routing Problem with Stochastic
Demands. Operations Research, 49:796-802, 2001.
[13] G. Tesauro and G. Galperin. On-line Policy Improvement Using Monte-Carlo Search.
In Advances in Neural Information Processing Systems, 9:1068-1074, 1996.
Learning first-order Markov models for control
Pieter Abbeel
Computer Science Department
Stanford University
Stanford, CA 94305
Andrew Y. Ng
Computer Science Department
Stanford University
Stanford, CA 94305
Abstract
First-order Markov models have been successfully applied to many problems, for example in modeling sequential data using Markov chains, and
modeling control problems using the Markov decision processes (MDP)
formalism. If a first-order Markov model's parameters are estimated from data, the standard maximum likelihood estimator considers only the first-order (single-step) transitions. But for many problems, the first-order conditional independence assumptions are not satisfied, and as a result the higher order transition probabilities may be poorly approximated. Motivated by the problem of learning an MDP's parameters for control, we propose an algorithm for learning a first-order Markov model that explicitly takes into account higher order interactions during training. Our algorithm uses an optimization criterion different from maximum likelihood, and allows us to learn models that capture longer range effects, but without giving up the benefits of using first-order Markov models. Our experimental results also show the new algorithm outperforming conventional maximum likelihood estimation in a number of control problems where the MDP's parameters are estimated from data.
1 Introduction
First-order Markov models have enjoyed numerous successes in many sequence-modeling and control tasks, and are now a workhorse of machine learning.¹ Indeed, even
in control problems in which the system is suspected to have hidden state and thus be
non-Markov, a fully observed Markov decision process (MDP) model is often favored over
partially observable Markov decision process (POMDP) models, since it is significantly
easier to solve MDPs than POMDPs to obtain a controller. [5]
When the parameters of a Markov model are not known a priori, they are often estimated
from data using maximum likelihood (ML) (and perhaps smoothing). However, in many
applications the dynamics are not truly first-order Markov, and the ML criterion may lead to
poor modeling performance. In particular, we will show that the ML model fitting criterion
explicitly considers only the first-order (one-step) transitions. If the dynamics are truly
governed by a first-order system, then the longer-range interactions would also be well
modeled. But if the system is not first-order, then interactions on longer time scales are
often poorly approximated by a model fit using maximum likelihood. In reinforcement
learning and control tasks where the goal is to maximize our long-term expected rewards,
the predictive accuracy of a model on long time scales can have a significant impact on the
attained performance.
¹ To simplify the exposition, in this paper we will consider only first-order Markov models. However, the problems we describe in this paper also arise with higher order models and with more structured models (such as dynamic Bayesian networks [4, 10] and mixed memory Markov models [8, 14]), and it is straightforward to extend our methods and algorithms to these models.
As a specific motivating example, consider a system whose dynamics are governed by
a random walk on the integers. Letting S_t denote the state at time t, we initialize the system to S_0 = 0, and let S_t = S_{t−1} + δ_t, where the increments δ_t ∈ {−1, +1} are equally likely to be −1 or +1. Writing S_t in terms of only the δ_t's, we have S_t = δ_1 + ⋯ + δ_t. Thus, if the increments are independent, we have Var(S_T) = T. However, if the increments are perfectly correlated (so δ_1 = δ_2 = ⋯ with probability 1), then Var(S_T) = T². So, depending on the correlation between the increments, the expected value E[|S_T|] can be either O(√T) or O(T). Further, regardless of the true correlation in the data, using maximum likelihood (ML) to estimate the model parameters from training data would return the same model with E[|S_T|] = O(√T).
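A small simulation (ours, for illustration) reproduces the two regimes:

import random

def var_ST(T=100, correlated=False, trials=20000):
    """Empirical Var(S_T) for the integer random walk S_t = S_{t-1} + delta_t."""
    samples = []
    for _ in range(trials):
        if correlated:
            s = T * random.choice((-1, 1))      # delta_1 = delta_2 = ... w.p. 1
        else:
            s = sum(random.choice((-1, 1)) for _ in range(T))
        samples.append(s)
    mean = sum(samples) / trials
    return sum((x - mean) ** 2 for x in samples) / trials

print(var_ST(correlated=False))  # roughly T   = 100
print(var_ST(correlated=True))   # exactly T^2 = 10000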
To see how these effects can lead to poor performance on a control task, consider learning
to control a vehicle (such as a car or a helicopter) under disturbances δ_t due to very strong winds. The influence of the disturbances on the vehicle's position over one time step may be small, but if the disturbances δ_t are highly correlated, their cumulative effect over time can be substantial. If our model completely ignores these correlations, we may overestimate our ability to control the vehicle (thinking our variance in position is O(T) rather than O(T²)), and try to follow overly narrow/dangerous paths.
Our motivation also has parallels in the debate on using discriminative vs. generative algorithms for supervised learning. There, the consensus (assuming there is ample training
data) seems to be that it is usually better to directly minimize the loss with respect to the
ultimate performance measure, rather than an intermediate loss function such as the likelihood of the training data. (See, e.g., [16, 9].) This is because the model (no matter how
complicated) is almost always not completely "correct" for the problem data. By analogy, when modeling a dynamical system for a control task, we are interested in having a model that accurately predicts the performance of different control policies (so that it can be used to select a good policy) and not in maximizing the likelihood of the observed
sequence data.
In related work, robust control offers an alternative family of methods for accounting for
model inaccuracies, specifically by finding controllers that work well for a large class of
models. (E.g., [13, 17, 3].) Also, in applied control, some practitioners manually adjust
their model's parameters (particularly the model's noise variance parameters) to obtain a model which captures the variability of the system's dynamics. Our work can be viewed as proposing an algorithm that gives a more structured approach to estimating the "right"
variance parameters. The issue of time scales has also been addressed in hierarchical reinforcement learning (e.g., [2, 15, 11]), but most of this work has focused on speeding up
exploration and planning rather than on accurately modeling non-Markovian dynamics.
The rest of this paper is organized as follows. We define our notation in Section 2, then
formulate the model learning problem ignoring actions in Section 3, and propose a learning algorithm in Section 4. In Section 5, we extend our algorithm to incorporate actions.
Section 6 presents experimental results, and Section 7 concludes.
2 Preliminaries
If x ∈ ℝⁿ, then x_i denotes the i-th element of x. Also, let j:k = [j, j+1, j+2, . . . , k−1, k]ᵀ. For any k-dimensional vector of indices I ∈ ℕᵏ, we denote by x_I the k-dimensional vector with the subset of x's entries whose indices are in I. For example, if x = [0.0 0.1 0.2 0.3 0.4 0.5]ᵀ, then x_{0:2} = [0.0 0.1 0.2]ᵀ.
A finite-state decision process (DP) is a tuple (S, A, T, γ, D, R), where S is a finite set of states; A is a finite set of actions; T = {P(S_{t+1} = s′ | S_{0:t} = s_{0:t}, A_{0:t} = a_{0:t})} is a set of state transition probabilities (here, P(S_{t+1} = s′ | S_{0:t} = s_{0:t}, A_{0:t} = a_{0:t}) is the probability of being in a state s′ ∈ S at time t + 1 after having taken actions a_{0:t} ∈ A^{t+1} in states s_{0:t} ∈ S^{t+1} at times 0:t); γ ∈ [0, 1) is a discount factor; D is the initial state distribution, from which the initial state s_0 is drawn; and R : S → ℝ is the reward function. We assume all rewards are bounded in absolute value by R_max. A DP is not necessarily Markov.
A policy π is a mapping from states to probability distributions over actions. Let V^π(s) = E[ Σ_{t=0}^∞ γ^t R(s_t) | π, s_0 = s ] be the usual value function for π. Then the utility of π is

U(π) = E_{s_0∼D}[V^π(s_0)] = E[ Σ_{t=0}^∞ γ^t R(s_t) | π ] = Σ_{t=0}^∞ γ^t Σ_{s_t} P(S_t = s_t | π) R(s_t).

The second expectation above is with respect to the random state sequence s_0, s_1, . . . drawn by starting from s_0 ∼ D, picking actions according to π, and transitioning according to P.
Throughout this paper, P̂_θ will denote some estimate of the transition probabilities. We denote by Û(π) the utility of the policy π in an MDP whose first-order transition probabilities are given by P̂_θ (and similarly V̂^π the value function in the same MDP). Thus, we have²

Û(π) = E_{s_0∼D}[V̂^π(s_0)] = Ê[ Σ_{t=0}^∞ γ^t R(s_t) | π ] = Σ_{t=0}^∞ γ^t Σ_{s_t} P̂_θ(S_t = s_t | π) R(s_t).

Note that if |U(π) − Û(π)| ≤ ε for all π, then finding the optimal policy in the estimated MDP that uses parameters P̂_θ (using value iteration or any other algorithm) will give a policy whose utility is within 2ε of the optimal utility. [6]
For stochastic processes without decisions/actions, we will use the same notation but drop
the conditioning on π. Often we will also abbreviate P(S_t = s_t) by P(s_t).
3 Problem Formulation
To simplify our exposition, we will begin by considering stochastic processes that do not
have decisions/actions. Section 5 will discuss how actions can be incorporated into the
model.
We first consider how well V̂(s_0) approximates V(s_0). We have

|V̂(s_0) − V(s_0)| = | Σ_{t=0}^∞ γ^t Σ_{s_t} P̂_θ(s_t | s_0) R(s_t) − Σ_{t=0}^∞ γ^t Σ_{s_t} P(s_t | s_0) R(s_t) |
                 ≤ R_max Σ_{t=0}^∞ γ^t Σ_{s_t} | P̂_θ(s_t | s_0) − P(s_t | s_0) |.    (1)
So, to ensure that V̂(s_0) is an accurate estimate of V(s_0), we would like the parameters θ of the model to minimize the right-hand side of (1). The term Σ_{s_t} |P̂_θ(s_t | s_0) − P(s_t | s_0)| is exactly (twice) the variational distance between the two conditional distributions P̂_θ(·|s_0) and P(·|s_0). Unfortunately P is not known when learning from data. We only get to observe state sequences sampled according to P. This makes Eqn. (1) a difficult criterion to optimize. However, it is well known that the variational distance is upper bounded by a function of the KL-divergence. (See, e.g., [1].) The KL-divergence between P and P̂_θ can be estimated (up to a constant) as the log-likelihood of a sample. So, given a training sequence s_{0:T} sampled from P, we propose to estimate the transition probabilities P̂_θ by
θ̂ = arg max_θ Σ_{t=0}^{T−1} Σ_{k=1}^{T−t} γ^k log P_θ(s_{t+k} | s_t).    (2)
Note the difference between this and the standard maximum likelihood (ML) estimate. Since we are using a model that is parameterized as a first-order Markov model, the probability of the data under the model is given by P_θ(s_0, . . . , s_T) = P_θ(s_T | s_{T−1}) P_θ(s_{T−1} | s_{T−2}) ⋯ P_θ(s_1 | s_0) D(s_0) (where D is the initial state distribution). By definition, maximum likelihood (ML) chooses the parameters θ that maximize the probability of the observed data. Taking logs of the probability above (and ignoring D(s_0), which is usually parameterized separately), we find that the ML estimate is given by
θ̂ = arg max_θ Σ_{t=0}^{T−1} log P_θ(s_{t+1} | s_t).    (3)

² Since P̂_θ is a first-order model, it explicitly parameterizes only P̂_θ(S_{t+1} = s_{t+1} | S_t = s_t, A_t = a_t). We use P̂_θ(S_t = s_t | π) to denote the probability that S_t = s_t in an MDP with one-step transition probabilities P̂_θ(S_{t+1} = s_{t+1} | S_t = s_t, A_t = a_t) and initial state distribution D when acting according to the policy π.
[Figure 1: three panels (a), (b), and (c) of Bayesian networks over the nodes s_0, . . . , s_3; see the caption below.]
Figure 1: (a) A length four training sequence. (b) ML estimation for a first-order Markov model optimizes the likelihood of the second node given the first node in each of the length two subsequences.
(c) Our objective (Eqn. 2) also includes the likelihood of the last node given the first node in each
of these three longer subsequences of the data. (White nodes represent unobserved variables, shaded
nodes represent observed variables.)
All the terms above are of the form P_θ(s_{t+1} | s_t). Thus, the ML estimator explicitly considers, and tries to model well, only the observed one-step transitions. In Figure 1 we use Bayesian network notation to illustrate the difference between the two objectives for a training sequence of length four. Figure 1(a) shows the training sequence, which can have arbitrary dependencies. Maximum likelihood (ML) estimation maximizes f_ML(θ) = log P_θ(s_1|s_0) + log P_θ(s_2|s_1) + log P_θ(s_3|s_2). Figure 1(b) illustrates the interactions modeled by ML. Ignoring γ for now, for this example our objective (Eqn. 2) is f_ML(θ) + log P_θ(s_2|s_0) + log P_θ(s_3|s_1) + log P_θ(s_3|s_0). Thus, it takes into account both the interactions in Figure 1(b) as well as the longer-range ones in Figure 1(c).
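The two criteria are easy to compare in code. The following sketch (ours) evaluates both for a first-order model; P is a row-stochastic transition table, assumed strictly positive so the logarithms are defined, and gamma is the discount:

import math

def ml_objective(P, s):
    # Eqn. (3): sum of log one-step transition probabilities.
    return sum(math.log(P[s[t]][s[t + 1]]) for t in range(len(s) - 1))

def multistep_objective(P, s, gamma):
    # Eqn. (2): also scores k-step predictions P_theta(s_{t+k} | s_t), which a
    # first-order model yields by k-fold propagation of the state distribution.
    n = len(P)
    total = 0.0
    for t in range(len(s) - 1):
        dist = [1.0 if i == s[t] else 0.0 for i in range(n)]  # point mass at s_t
        for k in range(1, len(s) - t):
            dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
            total += gamma ** k * math.log(dist[s[t + k]])
    return total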
4 Algorithm
We now present an EM algorithm for optimizing the objective in Eqn. (2) for a first-order Markov model.³ Our algorithm is derived using the method of [7]. (See the Appendix for details.) The algorithm iterates between the following two steps:

- E-step: Compute expected counts
  - ∀ i, j ∈ S, set stats(j, i) = 0
  - ∀ t: 0 ≤ t ≤ T−1, ∀ k: 1 ≤ k ≤ T−t, ∀ l: 0 ≤ l ≤ k−1, ∀ i, j ∈ S:
    stats(j, i) += γ^k P̂_θ(S_{t+l+1} = j, S_{t+l} = i | S_t = s_t, S_{t+k} = s_{t+k})
- M-step: Re-estimate model parameters
  Update θ̂ such that ∀ i, j ∈ S, P̂_θ(j | i) = stats(j, i) / Σ_{k∈S} stats(k, i)
Prior to starting EM, the transition probabilities P̂_θ can be initialized with the first-order transition counts (i.e., the ML estimate of the parameters), possibly with smoothing.⁴
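Because the model is first-order, the conditional pairwise marginals needed in the E-step can, for a small state space, be computed from powers of the current transition matrix: P̂_θ(S_{t+l+1} = j, S_{t+l} = i | S_t = s_t, S_{t+k} = s_{t+k}) = [P^l]_{s_t,i} P_{ij} [P^{k−l−1}]_{j,s_{t+k}} / [P^k]_{s_t,s_{t+k}}. A sketch of one EM iteration built on this identity (our illustration, with the horizon truncation k ≤ H discussed below; it assumes every state occurs in the data so the M-step normalization is well defined):

import numpy as np

def em_step(P, s, gamma, H):
    """One EM iteration for objective (2); P is an |S| x |S| row-stochastic matrix."""
    n = len(P)
    powers = [np.eye(n)]
    for _ in range(H):
        powers.append(powers[-1] @ P)           # powers[k] = P^k
    stats = np.zeros((n, n))                    # stats[i, j] ~ stats(j, i) in the text
    T = len(s) - 1
    for t in range(T):
        for k in range(1, min(H, T - t) + 1):
            z = powers[k][s[t], s[t + k]]       # P_theta(s_{t+k} | s_t)
            if z == 0:
                continue
            for l in range(k):
                row = powers[l][s[t]]                 # P^l[s_t, .]
                col = powers[k - l - 1][:, s[t + k]]  # P^{k-l-1}[., s_{t+k}]
                stats += gamma ** k * (row[:, None] * P * col[None, :]) / z
    return stats / stats.sum(axis=1, keepdims=True)  # M-step: renormalize rows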
Let us now consider more carefully the computation done in the E-step for one specific pair of values for t and k (corresponding to one term log P_θ(s_{t+k} | s_t) in Eqn. 2). For k ≥ 2, as in the forward-backward algorithm for HMMs (see, e.g., [12, 10]), the pairwise marginals can be computed by a forward propagation (computing the forward messages), a backward propagation (computing the backward messages), and then combining the forward and backward messages.⁵ Forward and backward messages are computed recursively:
for l = 1 to k−1, ∀ i ∈ S:  m→_{t+l}(i) = Σ_{j∈S} m→_{t+l−1}(j) P̂_θ(i | j),    (4)
for l = k−1 down to 1, ∀ i ∈ S:  m←_{t+l}(i) = Σ_{j∈S} m←_{t+l+1}(j) P̂_θ(j | i),    (5)

where we initialize m→_t(i) = 1{i = s_t} and m←_{t+k}(i) = 1{i = s_{t+k}}. The pairwise marginals can be computed by combining the forward and backward messages:

P̂_θ(S_{t+l+1} = j, S_{t+l} = i | S_t = s_t, S_{t+k} = s_{t+k}) = m→_{t+l}(i) P̂_θ(j | i) m←_{t+l+1}(j).    (6)

³ Using higher order Markov models or more structured models (such as dynamic Bayesian networks [4, 10] or mixed memory Markov models [8, 14]) offers no special difficulties, though the notation becomes more involved and the inference (in the E-step) might become more expensive.
⁴ A parameter P̂_θ(j | i) initialized to zero will remain zero throughout successive iterations of EM. If this is undesirable, then smoothing could be used to eliminate zero initial values.
⁵ Note that the special case k = 1 (and thus l = 0) does not require inference. In this case we simply have P̂_θ(S_{t+1} = j, S_t = i | S_t = s_t, S_{t+1} = s_{t+1}) = 1{i = s_t} 1{j = s_{t+1}}.
For the term log P_θ(s_{t+k} | s_t), we end up performing 2(k−1) message computations, and combining messages into pairwise marginals k−1 times. Doing this for all terms in the objective results in O(T³) message computations and O(T³) computations of pairwise marginals from these messages. In practice, the objective (2) can be approximated by considering only the terms in the summation with k ≤ H, where H is some time horizon.⁶ In this case, the computational complexity is reduced to O(TH²).
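Eqns. (4)-(6) translate directly into code; the following sketch (ours, using numpy) computes all k pairwise marginals for a single (t, k) term, normalizing the message products so that they form conditional distributions:

import numpy as np

def pairwise_marginals(P, s_t, s_tk, k):
    """M[l][i, j] = P_hat(S_{t+l}=i, S_{t+l+1}=j | S_t=s_t, S_{t+k}=s_tk), l = 0..k-1."""
    n = len(P)
    fwd = [np.eye(n)[s_t]]                 # m->_t(i) = 1{i = s_t}
    for _ in range(1, k):                  # Eqn. (4): forward pass
        fwd.append(fwd[-1] @ P)
    bwd = [None] * (k + 1)
    bwd[k] = np.eye(n)[s_tk]               # m<-_{t+k}(i) = 1{i = s_{t+k}}
    for l in range(k - 1, 0, -1):          # Eqn. (5): backward pass
        bwd[l] = P @ bwd[l + 1]
    out = []
    for l in range(k):                     # Eqn. (6), normalized to a distribution
        m = fwd[l][:, None] * P * bwd[l + 1][None, :]
        out.append(m / m.sum())
    return out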
4.1 Computational Savings
The following observation leads to substantial savings in the number of message computations. The forward messages computed for the term log P_θ(s_{t+k} | s_t) depend only on the value of s_t. So the forward messages computed for the terms {log P_θ(s_{t+k} | s_t)}_{k=1}^H are the same as the forward messages computed just for the term log P_θ(s_{t+H} | s_t). A similar observation holds for the backward messages. As a result, we need to compute only O(TH) messages (as opposed to O(TH²) in the naive algorithm).
The following observation leads to further (even more substantial) savings. Consider two terms in the objective, log P_θ(s_{t1+k} | s_{t1}) and log P_θ(s_{t2+k} | s_{t2}). If s_{t1} = s_{t2} and s_{t1+k} = s_{t2+k}, then both terms will have exactly the same pairwise marginals and contribution to the expected counts. So expected counts have to be computed only once for every triple (i, j, k) for which (S_t = i, S_{t+k} = j) occurs in the training data. As a consequence, the running time for each iteration (once we have made an initial pass over the data to count the number of occurrences of the triples) is only O(|S|² H²), which is independent of the size of the training data.
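In code, this amounts to collapsing the training sequence into counts over (i, j, k) triples before the E-step, for example:

from collections import Counter

def triple_counts(s, H):
    """count[(i, j, k)] = number of times (S_t = i, S_{t+k} = j) occurs, k <= H."""
    count = Counter()
    for t in range(len(s) - 1):
        for k in range(1, min(H, len(s) - 1 - t) + 1):
            count[(s[t], s[t + k], k)] += 1
    return count

# In the E-step, pairwise marginals are then computed once per distinct triple
# and weighted by count[(i, j, k)], so each iteration costs O(|S|^2 H^2)
# independently of the amount of training data.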
5 Incorporating actions
In decision processes, actions influence the state transition probabilities. To generate training data, suppose we choose an exploration policy and take actions in the DP using this
policy. Given the resulting training data, and generalizing Eqn. (2) to incorporate actions,
our estimator now becomes
θ̂ = arg max_θ Σ_{t=0}^{T−1} Σ_{k=1}^{T−t} γ^k log P_θ(s_{t+k} | s_t, a_{t:t+k−1}).    (7)
The EM algorithm is straightforwardly extended to this setting, by conditioning on the
actions during the E-step, and updating state-action transition probabilities P̂_θ(j | i, a) in
the M-step.
As before, forward messages need to be computed only once for each value of t, and backward messages only once for each value of t + k. However, achieving the more substantial savings, as described in the second paragraph of Section 4.1, is now more difficult. In particular, now the contribution of a triple (i, j, k) (one for which (S_t = i, S_{t+k} = j) occurs in the training data) depends on the action sequence a_{t:t+k−1}. The number of possible sequences of actions a_{t:t+k−1} grows exponentially with k.

If, however, we use a deterministic exploration policy to generate the training data (more specifically, one in which the action taken is a deterministic function of the current state), then we can again obtain these computational advantages: counts of the number of occurrences of the triples described previously are now again a sufficient statistic. However, a single deterministic exploration policy, by definition, cannot explore all state-action pairs. Thus, we will instead use a combination of several deterministic exploration policies, which jointly can explore all state-action pairs. In this case, the running time for the E-step becomes O(|S|² H² |Π|), where |Π| is the number of different deterministic exploration policies used. (See Section 6.2 for an example.)
⁶ Because of the discount term γ^k in the objective (2), one can safely truncate the summation over k after about O(1/(1−γ)) terms without incurring too much error.
[Figure 2: (a) the grid-world, with start state S, goal G, and example paths A and B; (b) plot of utility versus correlation level for noise; (c) plot of utility versus correlation level between arrivals. Panels (b) and (c) each compare the new algorithm with maximum likelihood.]
Figure 2: (a) Grid-world. (b) Grid-world experimental results, showing the utilities of policies obtained from the MDP estimated using ML (dash-dot line), and utilities of policies obtained from the
MDP estimated using our objective (solid line). Results shown are means over 5 independent trials,
and the error bars show one standard error for the mean. The horizontal axis (correlation level for
noise) corresponds to the parameter q in the experiment description. (c) Queue experiment, showing utilities obtained using ML (dash-dot line), and using our algorithm (solid line). Results shown
are means over 5 independent trials, and the error bars show one standard error for the mean. The
horizontal axis (correlation level between arrivals) corresponds to the parameter b in the experiment
description. (Shown in color, where available.)
6 Experiments
In this section, we empirically study the performance of model fitting using our proposed
algorithm, and compare it to the performance of ordinary ML estimation.
6.1 Shortest vs. safest path
Consider an agent acting for 100 time steps in the grid-world in Figure 2(a). The initial
state is marked by S, and the absorbing goal state by G. The reward is -500 for the gray
squares, and -1 elsewhere. This DP has four actions that (try to) move in each of the four
compass directions, and succeed with probability 1 − p. If an action is not successful, then the agent's position transitions to one of the neighboring squares. Similar to our example in
Section 1, the random transitions (resulting from unsuccessful actions) may be correlated
over time. In this problem, if there is no noise (p = 0), the optimal policy is to follow
one of the shortest paths to the goal that do not pass through gray squares, such as path A.
For higher noise levels, the optimal policy is to stay as far away as possible from the gray
squares, and try to follow a longer path such as B to the goal.⁷ At intermediate noise levels,
the optimal policy is strongly dependent on how correlated the noise is between successive
time steps. The larger the correlation, the more dangerous path A becomes (for reasons
similar to the random walk example in Section 1). In our experiments, we compare the
behavior of our algorithm and ML estimation with different levels of noise correlation.⁸
Figure 2(b) shows the utilities obtained by the two different models, under different degrees
of correlation in the noise. The two algorithms perform comparably when the correlation is
weak, but our method outperforms ML when there is strong correlation. Empirically, when
the noise correlation is high, our algorithm seems to be fitting a first-order model with a
larger "effective" noise level. When the resulting estimated MDP is solved, this gives more
cautious policies, such as ones more inclined to choose path B over A. In contrast, the
ML estimate performs poorly in this problem because it tends to underestimate how far
sideways the agent tends to move due to the noise (cf. the example in Section 1).
⁷ For very high noise levels (e.g. p = 0.99) the optimal policy is qualitatively different again.
⁸ Experimental details: The noise is governed by an (unobserved) Markov chain with four states corresponding to the four compass directions. If an action at time t is not successful, the agent moves in the direction corresponding to the state of this Markov chain. On each step, the Markov chain stays in the current state with probability q, and transitions with probability 1 − q uniformly to any of the four states. Our experiments are carried out varying q from 0 (low noise correlation) to 0.9 (strong noise correlation). A 200,000 length state-action sequence for the grid-world, generated using a random exploration policy, was used for model fitting, and a constant noise level p = 0.3 was used in the experiments. Given a learned MDP model, value iteration was used to find the optimal policy for it. To reduce computation, we only included the terms of the objective (Eqn. 7) for which k = 10.
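The noise process of footnote 8 is straightforward to reproduce (a sketch; the function and variable names are ours):

import random

DIRS = ['N', 'E', 'S', 'W']

def noise_directions(T, q):
    """Sample T steps of the hidden four-state noise chain of footnote 8."""
    d = random.choice(DIRS)
    out = []
    for _ in range(T):
        if random.random() >= q:         # leave w.p. 1-q, uniformly over the four states
            d = random.choice(DIRS)
        out.append(d)
    return out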
6.2 Queue
We consider a service queue in which the average arrival rate is p. Thus, p = P(a customer arrives in one time step). Also, for each action i, let q_i denote the service rate under that action (thus, q_i = P(a customer is served in one time step | action = i)). In our problem, there are three service rates q_0 < q_1 < q_2 with respective rewards 0, −1, −10. The maximum queue size is 20, and the reward for any state of the queue is 0, except when the queue becomes full, which results in a reward of −1000. The service rates are q_0 = 0, q_1 = p and q_2 = 0.75. So the inexpensive service rate q_1 is sufficient to keep up with arrivals on average. However, even though the average arrival rate is p, the arrivals come in "bursts," and even the high service rate q_2 is insufficient to keep the queue small during the bursts of many consecutive arrivals.⁹
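The bursty arrival process is specified precisely in footnote 9; a sketch of a sampler for it (ours, with the footnote's default parameters):

import random

def arrivals(T, b, pi1=0.8, p_slow=0.01, p_fast=0.99):
    """Sample T arrival indicators from the two-mode chain of footnote 9 (pi1 > 1 - pi1)."""
    a = 1 - (1 - b) * (1 - pi1) / pi1     # stay-probability of the slow mode
    mode = 0 if random.random() < pi1 else 1
    out = []
    for _ in range(T):
        stay = a if mode == 0 else b
        if random.random() >= stay:
            mode = 1 - mode
        out.append(random.random() < (p_slow if mode == 0 else p_fast))
    return out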
Experimental results on the queue are shown in Figure 2(c). We plot the utilities obtained
using each of the two algorithms for high arrival correlations. (Both algorithms perform
essentially identically at lower correlation levels.) We see that the policies obtained with
our algorithm consistently outperform those obtained using maximum likelihood to fit the
model parameters. As expected, the difference is more pronounced for higher correlation
levels, i.e., when the true model is less well approximated by a first-order model.
For learning the model parameters, we used three deterministic exploration policies, each
corresponding to always taking one of the three actions. Thus, we could use the more
efficient version of the algorithm described in the second paragraph of Section 4.1 and
at the end of Section 5. A single EM iteration for the experiments on the queue took 6
minutes for the original version of the algorithm, but took only 3 seconds for the more
efficient version; this represents more than a 100-fold speedup.
7 Conclusions
We proposed a method for learning a first-order Markov model that captures the system's
dynamics on longer time scales than a single time step. In our experiments, this method was
also shown to outperform the standard maximum likelihood model. In other experiments,
we have also successfully applied these ideas to modeling the dynamics of an autonomous
RC car. (Details will be presented in a forthcoming paper.)
References
[1] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, 1991.
[2] T. G. Dietterich. Hierarchical reinforcement learning with the MAXQ value function decomposition. JAIR, 2000.
[3] P. Gahinet, A. Nemirovski, A. Laub, and M. Chilali. LMI Control Toolbox. Natick, MA, 1995.
[4] Z. Ghahramani. Learning dynamic Bayesian networks. In Adaptive Processing of Sequences
and Data Structures, pages 168-197. Springer-Verlag, 1998.
9
Experimental details: The true process has two different (hidden) modes for arrivals. The first
mode has a very low arrival rate, and the second mode has a very high arrival rate. We denote
the steady state distribution over the two modes by (π1, π2). (I.e., the system spends a fraction π1
of the time in the low arrival rate mode, and a fraction π2 = 1 - π1 of the time in the high arrival
rate mode.) Given the steady state distribution, the state transition matrix [a, 1 - a; 1 - b, b] has
only one remaining degree of freedom, which (essentially) controls how often the system switches
between the two modes. (Here, a [resp. b] is the probability, if we are in the slow [resp. fast] mode,
of staying in the same mode the next time step.) More specifically, assuming π1 > π2, we have
b ∈ [0, 1], a = 1 - (1 - b)π2/π1. The larger b is, the more slowly the system switches between
modes. Our experiments used π1 = 0.8, π2 = 0.2, P(arrival|mode 1) = 0.01, P(arrival|mode 2) =
0.99. This means b = 0.2 gives independent arrival modes for consecutive time steps. In our
experiments, q0 = 0, and q1 was equal to the average arrival rate p = π1 P(arrival|mode 1) +
π2 P(arrival|mode 2). Note that the highest service rate q2 (= 0.75) is lower than the fast mode's
arrival rate. Training data was generated using 8000 simulations of 25 time steps each, in which the
queue length is initialized randomly, and the same (randomly chosen) action is taken on all 25 time
steps. To reduce computational requirements, we only included the terms of the objective (Eqn. 7)
for which k = 20. We used a discount factor γ = 0.95 and approximated utilities by truncating at a
finite horizon of 100. Note that although we explain the queuing process by arrival/departure rates,
the algorithm learns full transition matrices for each action, and not only the arrival/departure rates.
[5] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra. Planning and acting in partially observable
stochastic domains. Artificial Intelligence, 101, 1998.
[6] M. Kearns, Y. Mansour, and A. Y. Ng. Approximate planning in large POMDPs via reusable
trajectories. In NIPS 12, 1999.
[7] R. Neal and G. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other
variants. In Learning in Graphical Models, pages 355–368. MIT Press, 1999.
[8] H. Ney, U. Essen, and R. Kneser. On structuring probabilistic dependencies in stochastic language modeling. Computer Speech and Language, 8, 1994.
[9] A. Y. Ng and M. I. Jordan. On discriminative vs. generative classifiers: A comparison of logistic
regression and naive Bayes. In NIPS 14, 2002.
[10] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kauffman, 1988.
[11] D. Precup, R. S. Sutton, and S. Singh. Theoretical results on reinforcement learning with
temporally abstract options. In Proc. ECML, 1998.
[12] L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77, 1989.
[13] J. K. Satia and R. L. Lave. Markov decision processes with uncertain transition probabilities.
Operations Research, 1973.
[14] L. K. Saul and M. I. Jordan. Mixed memory Markov models: decomposing complex stochastic
processes as mixtures of simpler ones. Machine Learning, 37, 1999.
[15] R. S. Sutton. TD models: Modeling the world at a mixture of time scales. In Proc. ICML, 1995.
[16] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, 1998.
[17] C. C. White and H. K. Eldeib. Markov decision processes with imprecise transition probabilities. Operations Research, 1994.
Appendix: Derivation of EM algorithm
This Appendix derives the EM algorithm that optimizes Eqn. (7). The derivation is based
on [7]'s method. Note that because of discounting, the objective is slightly different from
the standard setting of learning the parameters of a Markov chain with unobserved variables
in the training data.
Since we are using a first-order model, we have
$$P_{\hat\theta}(s_{t+k} \mid s_t, a_{t:t+k-1}) = \sum_{S_{t+1:t+k-1}} P_{\hat\theta}(s_{t+k} \mid S_{t+k-1}, a_{t+k-1})\, P_{\hat\theta}(S_{t+k-1} \mid S_{t+k-2}, a_{t+k-2}) \cdots P_{\hat\theta}(S_{t+1} \mid s_t, a_t).$$
Here, the summation is over all possible state sequences $S_{t+1:t+k-1}$. So we have
$$\begin{aligned}
\sum_{t=0}^{T-1} \sum_{k=1}^{T-t} \gamma^k \log P_{\hat\theta}(s_{t+k} \mid s_t, a_{t:t+k-1})
&= \sum_{t=0}^{T-1} \gamma \log P_{\hat\theta}(s_{t+1} \mid s_t, a_t) \\
&\quad + \sum_{t=0}^{T-1} \sum_{k=2}^{T-t} \gamma^k \log \sum_{S_{t+1:t+k-1}} Q_{t,k}(S_{t+1:t+k-1})\, \frac{P_{\hat\theta}(s_{t+k} \mid S_{t+k-1}, a_{t+k-1})\, P_{\hat\theta}(S_{t+k-1} \mid S_{t+k-2}, a_{t+k-2}) \cdots P_{\hat\theta}(S_{t+1} \mid s_t, a_t)}{Q_{t,k}(S_{t+1:t+k-1})} \\
&\geq \sum_{t=0}^{T-1} \gamma \log P_{\hat\theta}(s_{t+1} \mid s_t, a_t) \\
&\quad + \sum_{t=0}^{T-1} \sum_{k=2}^{T-t} \gamma^k \sum_{S_{t+1:t+k-1}} Q_{t,k}(S_{t+1:t+k-1}) \log \frac{P_{\hat\theta}(s_{t+k} \mid S_{t+k-1}, a_{t+k-1}) \cdots P_{\hat\theta}(S_{t+1} \mid s_t, a_t)}{Q_{t,k}(S_{t+1:t+k-1})}. \qquad (8)
\end{aligned}$$
Here, $Q_{t,k}$ is a probability distribution, and the inequality follows from Jensen's inequality
and the concavity of $\log(\cdot)$. As in [7], the EM algorithm optimizes Eqn. (8) by alternately
optimizing with respect to the distributions $Q_{t,k}$ (E-step), and the transition probabilities
$P_{\hat\theta}(\cdot \mid \cdot, \cdot)$ (M-step). Optimizing with respect to the $Q_{t,k}$ variables (E-step) is achieved by setting
$$Q_{t,k}(S_{t+1:t+k-1}) = P_{\hat\theta}(S_{t+1}, \ldots, S_{t+k-1} \mid S_t = s_t,\, S_{t+k} = s_{t+k},\, A_{t:t+k-1} = a_{t:t+k-1}). \qquad (9)$$
Optimizing with respect to the transition probabilities $P_{\hat\theta}(\cdot \mid \cdot, \cdot)$ (M-step) for $Q_{t,k}$
fixed as in Eqn. (9) is done by updating $\hat\theta$ to $\hat\theta_{\mathrm{new}}$ such that for all $i, j \in S$ and all $a \in A$ we have
$$P_{\hat\theta_{\mathrm{new}}}(j \mid i, a) = \mathrm{stats}(j, i, a) \Big/ \sum_{k \in S} \mathrm{stats}(k, i, a),$$
where
$$\mathrm{stats}(j, i, a) = \sum_{t=0}^{T-1} \sum_{k=1}^{T-t} \sum_{l=0}^{k-1} \gamma^k\, P_{\hat\theta}(S_{t+l+1} = j,\, S_{t+l} = i \mid S_t = s_t,\, S_{t+k} = s_{t+k},\, A_{t:t+k-1} = a_{t:t+k-1})\, \mathbf{1}\{a_{t+l} = a\}.$$
Note that only the pairwise marginals $P_{\hat\theta}(S_{t+l+1}, S_{t+l} \mid S_t, S_{t+k}, A_{t:t+k-1})$ are needed in the M-step, and so it is sufficient to
compute only these when optimizing with respect to the $Q_{t,k}$ variables in the E-step.
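To make the M-step concrete, here is a minimal sketch of the normalization above, assuming the discounted pairwise marginals have already been accumulated into a dictionary stats[(j, i, a)]; the data layout and the uniform fallback for unvisited pairs are our own choices, not the paper's:

    def m_step(stats, states, actions):
        """Turn accumulated stats(j, i, a) into normalized transition
        probabilities P_new(j | i, a), as in the update above."""
        P_new = {}
        for i in states:
            for act in actions:
                norm = sum(stats.get((k, i, act), 0.0) for k in states)
                for j in states:
                    if norm > 0:
                        P_new[(j, i, act)] = stats.get((j, i, act), 0.0) / norm
                    else:
                        # (i, act) never occurs in the data: fall back to uniform.
                        P_new[(j, i, act)] = 1.0 / len(states)
        return P_new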
Using A Translation-Invariant Neural Network
To Diagnose Heart Arrhythmia
Susan Ciarrocca Lee
The Johns Hopkins University
Applied Physics Laboratory
Laurel. Maryland 20707
ABSTRACT
Distinctive electrocardiogram (ECG) patterns are created when the heart
is beating normally and when a dangerous arrhythmia is present. Some
devices which monitor the ECG and react to arrhythmias parameterize
the ECG signal and make a diagnosis based on the parameters. The
author discusses the use of a neural network to classify the ECG signals
directly, without parameterization. The input to such a network must
be translation-invariant, since the distinctive features of the ECG may
appear anywhere in an arbitrarily-chosen ECG segment. The input
must also be insensitive to the episode-to-episode and patient-to-patient
variability in the rhythm pattern.
1 INTRODUCTION
Figure 1 shows internally-recorded transcardiac ECG signals for one patient. The top
trace is an example of normal sinus rhythm (NSR). The others are examples of two
arrhythmias: ventricular tachycardia (VT) and ventricular fibrillation (VF). Visually, the
patterns are quite distinctive. Two problems make recognition of these patterns with a
neural net interesting.
The first problem is illustrated in Figure 2. All traces in Figure 2 are one second samples
of NSR, but the location of the QRS complex relative to the start of the sample is
shifted. Ideally, one would like a neural network to recognize each of these presentations
as NSR, without preprocessing the data to "center" it. The second problem can be
discerned by examining the two VT traces in Figure 1. Although quite similar, the two
patterns are not exactly the same. Substantial variation in signal shape and repetition rate
for NSR and VT (VF is inherently random) can be expected, even among rhythms
generated by a single patient. Patient-to-patient variations are even greater. The neural
network must ignore variations within rhythm types, while retaining the distinctions
between rhythms. This paper discusses a simple transformation of the ECG time series
input which is both translation-invariant and fairly insensitive to rate and shape changes
within rhythm types.
[Traces plotted against time, 0 to 6 seconds.]
Figure 1: ECG Rhythm Examples
[Traces plotted against time, 0 to 0.8 seconds.]
Figure 2: Five Examples of NSR
2 DISCUSSION
If test input to a first order neural network is rescaled, rotated, or translated with respect to
the training data, it generally will not be recognized. A second or higher order network
can be made invariant to these transformations by constraining the weights to meet
certain requirements [Giles, 1988]. The input to the jth hidden unit in a second order
network with N inputs is:
$$\sum_{i=1}^{N} w_{ij}\, x_i \;+\; \sum_{i=1}^{N-1} \sum_{k=1}^{N-i} w_{(i,i+k)j}\, x_i x_{i+k} \qquad (1)$$
Translation invariance is introduced by constraining the weights on the first order inputs
to be independent of input position, and the second order weights to depend only on the
difference between indices (k), rather than on the index pairs (i, i+k) [Giles, 1988].
Rewriting equation (1) with these constraints gives:
$$w_j \sum_{i=1}^{N} x_i \;+\; \sum_{k=1}^{N-1} w_{kj} \sum_{i=1}^{N-k} x_i x_{i+k} \qquad (2)$$
This is equivalent to a first order neural network where the original inputs, x_i, have been
replaced by new inputs, y_i, consisting of the following sums:
$$y_k = \sum_{i=1}^{N-k} x_i x_{i+k}, \qquad k = 1, 2, \ldots, N-1 \qquad (3)$$
While a network with inputs in the form of equation (3) is translation invariant, it is
quite sensitive to shape and rate variations in the ECG input data. For ECG recognition,
a better function to compute is:
$$y_0 = \sum_{i=1}^{N} |x_i|, \qquad y_k = \sum_{i=1}^{N-k} |x_i - x_{i+k}|, \qquad k = 1, 2, \ldots, N-1 \qquad (4)$$
Both equations (3) and (4) produce translation-invariant outputs, as long as the input time
series contains a "shape" which occupies only part of the input window, for example, the
single cycle of the sine function in Figure 3a. A periodic time series, like the sine wave
in Figure 3b, will not produce a truly translation-invariant output. Fortunately, the
translation sensitivity introduced by applying equations (3) or (4) to periodic time series
is small for small k, and only becomes important when k becomes large. One can see
this by considering the extreme case, when k = N-1, and the final "sum" in equation (4)
becomes the absolute value of the difference between the first and the last point in the
input time series; clearly, this value will vary as the sine wave in Figure 3b is moved
through the input window. If the upper limit on the sum over k gets no larger than N/2,
Figure 3: Examples of signals which will (a) and will not (b) have invariant transforms
equations (3) and (4) provide a neural network input which is nearly translation-invariant
for realistic time series. Additionally, the output of equation (4) can be used to
discriminate among NSR, VT, and VF, but is not unduly sensitive to variations within
each rhythm type.
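As an illustration, the transform of equation (4) can be computed directly. A minimal sketch follows; the function name is ours, and the paper's specific choices (135-point windows and an upper limit of 50 on k) appear later in the text:

    def invariant_features(x, k_max):
        """Translation-invariant inputs of equation (4):
        y_0 = sum_i |x_i| and y_k = sum_i |x_i - x_{i+k}|, k = 1..k_max."""
        n = len(x)
        y = [sum(abs(v) for v in x)]                       # y_0
        for k in range(1, k_max + 1):
            y.append(sum(abs(x[i] - x[i + k]) for i in range(n - k)))
        return y

For a 135-point window with k_max = 50, this yields the 51 inputs described below.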
The ECG signals used in this experiment were drawn from a data set of internally recorded
transcardiac ECG signals digitized at 100 Hz. The data set comprised 203 10-45 second
segments obtained from 52 different patients. At least one segment of NSR and one
segment of an arrhythmia was available for each patient. In addition, an "exercise" NSR
at 150 BPM was artificially constructed by cutting baseline out of the natural resting
NSR segment. Arrhythmia detection systems which parameterize the ECG can have
difficulty distinguishing high rate NSR's from slow arrhythmias.
To obtain a training data set for the neural network, short pieces were extracted from the
original rhythm segments. Since the rhythms are basically periodic, it was possible to
chose the endpoints so that the short, extracted piece could be be repeated to produce a
facsimile of the original signal. The upper trace in Figure 4 shows an original VT
segment. The boxed area is the extracted piece. The lower trace shows the extracted piece
chained end-to-end to construct a segment as long as the original. The segments
[The full arrhythmia segment (top trace, with the extracted piece boxed) and the constructed training segment (bottom trace), both plotted against time in seconds.]
Figure 4: Original and Artificially-Constructed Training Segments
constructed from the short, extracted pieces were used as training input. Typically, the
training data segment contained less than 25% of the original data.
The length of the input window was arbitrarily set at 1.35 seconds (135 points); by
choosing this window, all NSR inputs were guaranteed to include at least one QRS
complex. The upper limit on the sum over k in equation (4) was set to 50. The
resulting 51 inputs were presented to a standard back propagation network with seven
hidden units and four outputs. Although one output is sufficient to discriminate between
NSR and an arrhythmia, the networks were trained to differentiate among two types of VT
(generally distinguished by rate), and VF as well.
A separate training set was constructed and a separate network was trained for each patient.
The weights thus derived for a given patient were then tested on that patient's original
rhythm segments. To test the translation invariance of the network, every possible
presentation of an input rhythm segment was tested. To do this, a sliding window of 135
points was moved through the input data stream one point (1/100th of a second) at a
time. At each point, the output of equation (4) (appropriately normalized) was presented
to the network, and the resulting diagnosis recorded.
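A sketch of this testing loop, using the invariant_features sketch above and assuming a trained classify function that maps the 51 features to a diagnosis; the normalization shown is one plausible reading of "appropriately normalized", which the paper does not spell out:

    def sliding_window_diagnoses(signal, classify, win=135, k_max=50):
        """Slide a 1.35 s window (135 samples at 100 Hz) through the record
        one sample at a time, transforming and classifying each position."""
        diagnoses = []
        for start in range(len(signal) - win + 1):
            y = invariant_features(signal[start:start + win], k_max)
            total = sum(y) or 1.0
            y = [v / total for v in y]      # assumed normalization step
            diagnoses.append(classify(y))
        return diagnoses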
3 RESULTS
A percentage of correct diagnoses was calculated for each segment of data. For a segment
T seconds long, there are 100 × (T - 1.35) different presentations of the rhythm.
Presentations which included countershock, burst pacing, gain changes on the recording
equipment, post-shock rhythms, etc. were excluded, since the network had not been
trained to recognize these phenomena. The percentage correct was then calculated for the
remaining presentations as:
100 × (Number of correct diagnoses)/(Number of presentations)
The percentage of correct diagnoses for each patient was calculated similarly, except that
all segments for a particular patient were included in the count. Table 1 presents these
results.
Table 1: Results
                         Patients   Segments
    100% Correct            29        163
    99%-90% Correct         19         23
    90%-80% Correct          3          6
    80%-70% Correct          0          4
    <70% Correct             0          1
    Could Not Be Trained     1          6
    Total                   52        203
The network could not be trained for one patient. This patient had two arrhythmia
segments, one identified as VT and the other as VF. Visually, the two traces were
extremely similar; after twenty thousand iterations, the network could not distinguish
them. The network could certainly have been trained to distinguish between NSR and
those two rhythms, but this was not attempted.
The number of segments for which all possible presentations of the rhythm were
diagnosed correctly clearly establishes the translation invariance of the input. The
network was also quite successful in distinguishing among NSR and various arrhythmias.
Unfortunately, for application in implantable defibrillators or even critical care
monitoring, the network must be more nearly perfect.
The errors the network made could be separated into two broad classes. First, short
segments of very erratic arrhythmias were misdiagnosed as NSR. Figure 5 illustrates this
type of error. The error occurs because NSR is mainly characterized by a lack of
correlation. Typically, the misdiagnosed segment is quite short, 1 second or less. This
type of error might be avoided by using longer (longer than 1.35 second) input windows
which could bridge the erratic segments. Also, a more responsive automatic gain control
on the signal might help, since the erratic segments generally had a smaller amplitude
[The transcardiac ECG (top trace) and the network diagnosis (bottom trace) plotted against time in seconds, with diagnosis levels VF, VT No. 2, VT No. 1, NSR, and Can't ID.]
Figure 5: Ventricular Fibrillation Segment Misdiagnosed as NSR
than the surrounding segments. The network response to input windows containing large
shifts in the amplitude of the input signal (for example, countershock and gain changes)
was usually NSR.
The second class of errors occurred when the network misdiagnosed rhythms which were
not included in the training set. For example, one patient had a few beats of a very slow
VT in his NSR segment. This slow VT was not extracted for training. Only a fast (200
BPM) VT and VF were presented to this network as possible arrhythmias. Consequently,
during testing, the network identified the slow VT as NSR. The network did identify
some rhythms it was not trained on, but only if these rhythms did not vary too much
from the training rhythms. Generally, the rate of the "unknown" rhythm had to be within
20 BPM of a training rhythm to be recognized. Morphology is also important, in that
very regular rhythms, such as the top trace in Figure 6, and noisier rhythms, like the
bottom trace, appear quite different to the network.
[Two traces plotted against time, 0 to 7.6 seconds.]
Figure 6: Ventricular Tachycardias with Significant Morphology Differences
The misdiagnosis of rhythms not included in the training set can only be corrected by
enlarging the training set. In the future, an attempt will be made to create a "generic" set
of typical arrhythmias drawn from the entire data set, rather than taking arrhythmia
samples from each patient only. Since the networks can generalize somewhat, it is
possible that a network trained on an individual patient's NSR and the "generic"
arrhythmia set may be able to recognize all arrhythmias, whether they are included in the
training set or not.
References
C. Giles, R. Griffin, T. Maxwell, "Encoding Geometric Invariances in Higher-Order
Neural Networks", Neural Information Processing Systems, American Institute of
Physics, New York, 1988, pp.301-309
Constraining a Bayesian Model of Human Visual
Speed Perception
Alan A. Stocker and Eero P. Simoncelli
Howard Hughes Medical Institute,
Center for Neural Science, and Courant Institute of Mathematical Sciences
New York University, U.S.A.
Abstract
It has been demonstrated that basic aspects of human visual motion perception are qualitatively consistent with a Bayesian estimation framework, where the prior probability distribution on velocity favors slow
speeds. Here, we present a refined probabilistic model that can account
for the typical trial-to-trial variabilities observed in psychophysical speed
perception experiments. We also show that data from such experiments
can be used to constrain both the likelihood and prior functions of the
model. Specifically, we measured matching speeds and thresholds in a
two-alternative forced choice speed discrimination task. Parametric fits
to the data reveal that the likelihood function is well approximated by
a LogNormal distribution with a characteristic contrast-dependent variance, and that the prior distribution on velocity exhibits significantly
heavier tails than a Gaussian, and approximately follows a power-law
function.
Humans do not perceive visual motion veridically. Various psychophysical experiments
have shown that the perceived speed of visual stimuli is affected by stimulus contrast,
with low contrast stimuli being perceived to move slower than high contrast ones [1, 2].
Computational models have been suggested that can qualitatively explain these perceptual
effects. Commonly, they assume the perception of visual motion to be optimal either within
a deterministic framework with a regularization constraint that biases the solution toward
zero motion [3, 4], or within a probabilistic framework of Bayesian estimation with a prior
that favors slow velocities [5, 6].
The solutions resulting from these two frameworks are similar (and in some cases identical), but the probabilistic framework provides a more principled formulation of the problem
in terms of meaningful probabilistic components. Specifically, Bayesian approaches rely
on a likelihood function that expresses the relationship between the noisy measurements
and the quantity to be estimated, and a prior distribution that expresses the probability of
encountering any particular value of that quantity. A probabilistic model can also provide a
richer description, by defining a full probability density over the set of possible "percepts",
rather than just a single value. Numerous analyses of psychophysical experiments have
made use of such distributions within the framework of signal detection theory in order to
model perceptual behavior [7].
Previous work has shown that an ideal Bayesian observer model based on Gaussian forms
[Panels (a, high contrast) and (b, low contrast) plot probability density against visual speed, each showing the prior, the likelihood, and the posterior; v̂ marks the posterior mean and Δ its shift.]
Figure 1: Bayesian model of visual speed perception. a) For a high contrast stimulus, the
likelihood has a narrow width (a high signal-to-noise ratio) and the prior induces only a
small shift Δ of the mean v̂ of the posterior. b) For a low contrast stimulus, the measurement
is noisy, leading to a wider likelihood. The shift Δ is much larger and the perceived speed
lower than under condition (a).
for both likelihood and prior is sufficient to capture the basic qualitative features of global
translational motion perception [5, 6]. But the behavior of the resulting model deviates
systematically from human perceptual data, most importantly with regard to trial-to-trial
variability and the precise form of interaction between contrast and perceived speed. A
recent article achieved better fits for the model under the assumption that human contrast
perception saturates [8]. In order to advance the theory of Bayesian perception and provide
significant constraints on models of neural implementation, it seems essential to constrain
quantitatively both the likelihood function and the prior probability distribution. In previous
work, the proposed likelihood functions were derived from the brightness constancy constraint [5, 6] or other generative principles [9]. Also, previous approaches defined the prior
distribution based on general assumptions and computational convenience, typically choosing a Gaussian with zero mean, although a Laplacian prior has also been suggested [4]. In
this paper, we develop a more general form of Bayesian model for speed perception that
can account for trial-to-trial variability. We use psychophysical speed discrimination data
in order to constrain both the likelihood and the prior function.
1 Probabilistic Model of Visual Speed Perception
1.1 Ideal Bayesian Observer
Assume that an observer wants to obtain an estimate for a variable v based on a measurement m that she/he performs. A Bayesian observer "knows" that the measurement device
is not ideal and therefore, the measurement m is affected by noise. Hence, this observer
combines the information gained by the measurement m with a priori knowledge about v.
Doing so (and assuming that the prior knowledge is valid), the observer will ? on average ?
perform better in estimating v than just trusting the measurements m. According to Bayes' rule
$$p(v \mid m) = \frac{1}{\alpha}\, p(m \mid v)\, p(v) \qquad (1)$$
the probability of perceiving v given m (posterior) is the product of the likelihood of v for
a particular measurement m and the a priori knowledge about the estimated variable v
(prior). α is a normalization constant independent of v that ensures that the posterior is a
proper probability distribution.
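For intuition, the posterior mean is easy to compute numerically on a grid of candidate speeds. The sketch below assumes a Gaussian likelihood centered on the measurement and an arbitrary prior; the slow-speed bias emerges whenever the prior decreases with speed. The grid limits and the example prior are illustrative choices, not the paper's fitted values:

    import numpy as np

    def posterior_mean(m, sigma, prior, v_grid):
        """Mean of p(v|m) proportional to p(m|v) p(v), for a Gaussian
        likelihood of width sigma, on a discrete grid of speeds."""
        likelihood = np.exp(-0.5 * ((v_grid - m) / sigma) ** 2)
        post = likelihood * prior(v_grid)
        post = post / post.sum()                  # the 1/alpha normalization
        return float((v_grid * post).sum())

    v = np.linspace(0.0, 20.0, 2001)
    slow_prior = lambda s: 1.0 / (1.0 + s) ** 2   # an illustrative slow-speed prior
    print(posterior_mean(5.0, 0.5, slow_prior, v))  # narrow likelihood: near 5
    print(posterior_mean(5.0, 2.0, slow_prior, v))  # wide likelihood: pulled below 5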
[Panel (a): the stimulus display. Panel (b): P(v̂2 > v̂1) plotted against v2, rising from 0 to 1, with Pcum = 0.5 reached at vmatch and Pcum = 0.875 at vthres.]
Figure 2: 2AFC speed discrimination experiment. a) Two patches of drifting gratings were
displayed simultaneously (motion without movement). The subject was asked to fixate
the center cross and decide after the presentation which of the two gratings was moving
faster. b) A typical psychometric curve obtained under such a paradigm. The dots represent
the empirical probability that the subject perceived stimulus2 moving faster than stimulus1.
The speed of stimulus1 was fixed while v2 is varied. The point of subjective equality, vmatch ,
is the value of v2 for which Pcum = 0.5. The threshold velocity vthresh is the velocity for
which Pcum = 0.875.
It is important to note that the measurement m is an internal variable of the observer and
is not necessarily represented in the same space as v. The likelihood embodies both the
mapping from v to m and the noise in this mapping. So far, we assume that there is a
monotonic function f(v): v → v_m that maps v into the same space as m (m-space).
Doing so allows us to analytically treat m and vm in the same space. We will later propose
a suitable form of the mapping function f (v).
An ideal Bayesian observer selects the estimate that minimizes the expected loss, given the
posterior and a loss function. We assume a least-squares loss function. Then, the optimal
estimate v̂ is the mean of the posterior in Equation (1). It is easy to see why this model
of a Bayesian observer is consistent with the fact that perceived speed decreases with contrast. The width of the likelihood varies inversely with the accuracy of the measurements
performed by the observer, which presumably decreases with decreasing contrast due to
a decreasing signal-to-noise ratio. As illustrated in Figure 1, the shift in perceived speed
towards slow velocities grows with the width of the likelihood, and thus a Bayesian model
can qualitatively explain the psychophysical results [1].
1.2 Two Alternative Forced Choice Experiment
We would like to examine perceived speeds under a wide range of conditions in order to
constrain a Bayesian model. Unfortunately, perceived speed is an internal variable, and it is
not obvious how to design an experiment that would allow subjects to express it directly 1 .
Perceived speed can only be accessed indirectly by asking the subject to compare the speed
of two stimuli. For a given trial, an ideal Bayesian observer in such a two-alternative forced
choice (2AFC) experimental paradigm simply decides on the basis of the two trial estimates
v̂1 (stimulus1) and v̂2 (stimulus2) which stimulus moves faster. Each estimate v̂ is based
on a particular measurement m. For a given stimulus with speed v, an ideal Bayesian
observer will produce a distribution of estimates p(v̂|v) because m is noisy. Over trials,
the observer's behavior can be described by classical signal detection theory based on the
distributions of the estimates, hence e.g. the probability of perceiving stimulus2 moving
1
Although see [10] for an example of determining and even changing the prior of a Bayesian
model for a sensorimotor task, where the estimates are more directly accessible.
faster than stimulus1 is given as the cumulative probability
$$P_{\mathrm{cum}}(\hat v_2 > \hat v_1) = \int_0^{\infty} \int_0^{\hat v_2} p(\hat v_2 \mid v_2)\, p(\hat v_1 \mid v_1)\, d\hat v_1\, d\hat v_2 \qquad (2)$$
Pcum describes the full psychometric curve. Figure 2b illustrates the measured psychometric curve and its fit from such an experimental situation.
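Equation (2) can also be checked by direct simulation over trials. A minimal Monte Carlo sketch follows; the samplers stand in for p(v̂|v) (e.g. drawn from the grid posterior above), and all names are ours:

    import numpy as np

    def p_cum(sample_v1_hat, sample_v2_hat, n_trials=100000, seed=0):
        """Monte Carlo estimate of P(v2_hat > v1_hat), i.e. one point on
        the psychometric function of equation (2)."""
        rng = np.random.default_rng(seed)
        v1 = sample_v1_hat(rng, n_trials)
        v2 = sample_v2_hat(rng, n_trials)
        return float(np.mean(v2 > v1))

    # Illustration with Gaussian estimate distributions:
    # p_cum(lambda r, n: r.normal(5.0, 1.0, n),
    #       lambda r, n: r.normal(6.0, 1.0, n))   # about 0.76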
2 Experimental Methods
We measured matching speeds (Pcum = 0.5) and thresholds (Pcum = 0.875) in a 2AFC
speed discrimination task. Subjects were presented simultaneously with two circular
patches of horizontally drifting sine-wave gratings for the duration of one second (Figure 2a). Patches were 3deg in diameter, and were displayed at 6deg eccentricity to either
side of a fixation cross. The stimuli had an identical spatial frequency of 1.5 cycle/deg. One
stimulus was considered to be the reference stimulus having one of two different contrast
values (c1 =[0.075 0.5]) and one of five different speed values (u1 =[1 2 4 8 12] deg/sec)
while the second stimulus (test) had one of five different contrast values (c2 =[0.05 0.1 0.2
0.4 0.8]) and a varying speed that was determined by an interleaved staircase procedure.
For each condition there were 96 trials. Conditions were randomly interleaved, including
a random choice of stimulus identity (test vs. reference) and motion direction (right vs.
left). Subjects were asked to fixate during stimulus presentation and select the faster moving stimulus. The threshold experiment differed only in that auditory feedback was given
to indicate the correctness of their decision. This did not change the outcome of the experiment but increased significantly the quality of the data and thus reduced the number of
trials needed.
3 Analysis
With the data from the speed discrimination experiments we could in principle apply a
parametric fit using Equation (2) to derive the prior and the likelihood, but the optimization
is difficult, and the fit might not be well constrained given the amount of data we have obtained. The problem becomes much more tractable given the following weak assumptions:
• We consider the prior to be relatively smooth.
• We assume that the measurement m is corrupted by additive Gaussian noise with
a variance whose dependence on stimulus speed and contrast is separable.
• We assume that there is a mapping function f(v): v → v_m that maps v into the
space of m (m-space). In that space, the likelihood is convolutional, i.e., the noise
in the measurement directly defines the width of the likelihood.
These assumptions allow us to relate the psychophysical data to our probabilistic model in
a simple way. The following analysis is in the m-space. The point of subjective equality
(Pcum = 0.5) is defined as where the expected values of the speed estimates are equal. We
write
$$E\hat v_{m,1} = E\hat v_{m,2} \quad\Longleftrightarrow\quad v_{m,1} + E\Delta_1 = v_{m,2} + E\Delta_2 \qquad (3)$$
where EΔ is the expected shift of the perceived speed compared to the veridical speed.
For the discrimination threshold experiment, the above assumptions imply that the variance
var v̂_m of the speed estimates v̂_m is equal for both stimuli. Then, (2) predicts that the
discrimination threshold is proportional to the standard deviation, thus
$$v_{m,2} - v_{m,1} = \beta \sqrt{\mathrm{var}\,\hat v_m} \qquad (4)$$
[The likelihood (a) and the piece-wise linear prior (b), plotted against v_m.]
Figure 3: Piece-wise approximation. We perform a parametric fit by assuming the prior to
be piece-wise linear and the likelihood to be LogNormal (Gaussian in the m-space).
where β is a constant that depends on the threshold criterion Pcum and the exact shape of
p(v̂_m | v_m).
3.1 Estimating the prior and likelihood
In order to extract the prior and the likelihood of our model from the data, we have to find
a generic local form of the prior and the likelihood and relate them to the mean and the
variance of the speed estimates. As illustrated in Figure 3, we assume that the likelihood is
Gaussian with a standard deviation σ(c, v_m). Furthermore, the prior is assumed to be well-approximated by a first-order Taylor series expansion over the velocity ranges covered by
the likelihood. We parameterize this linear expansion of the prior as p(v_m) = a v_m + b.
We now can derive a posterior for this local approximation of likelihood and prior and then
define the perceived speed shift Δ(m). With coordinates chosen so that the measurement m lies at the origin, the posterior can be written as
$$p(v_m \mid m) = \frac{1}{\alpha}\, p(m \mid v_m)\, p(v_m) = \frac{1}{\alpha} \left[ \exp\!\left( -\frac{v_m^2}{2\sigma(c, v_m)^2} \right) (a\, v_m + b) \right] \qquad (5)$$
where α is the normalization constant
$$\alpha = \int_{-\infty}^{\infty} p(m \mid v_m)\, p(v_m)\, dv_m = b\, \sqrt{2\pi\, \sigma(c, v_m)^2}. \qquad (6)$$
We can compute Δ(m) as the first order moment of the posterior for a given m. Exploiting
the symmetries around the origin, we find
$$\Delta(m) = \int_{-\infty}^{\infty} v\, p(v_m \mid m)\, dv_m \approx \frac{a(m)}{b(m)}\, \sigma(c, v_m)^2 \qquad (7)$$
The expected value of Δ(m) is equal to the value of Δ at the expected value of the measurement m (which is the stimulus velocity v_m), thus
$$E\Delta = \Delta(m)\big|_{m=v_m} = \frac{a(v_m)}{b(v_m)}\, \sigma(c, v_m)^2 \qquad (8)$$
Similarly, we derive var v̂_m. Because the estimator is deterministic, the variance of the
estimate only depends on the variance of the measurement m. For a given stimulus, the
variance of the estimate can be well approximated by
$$\mathrm{var}\,\hat v_m = \mathrm{var}\, m \left( \frac{\partial \hat v_m(m)}{\partial m} \Big|_{m = v_m} \right)^2 = \mathrm{var}\, m \left( 1 + \frac{\partial \Delta(m)}{\partial m} \Big|_{m = v_m} \right)^2 \approx \mathrm{var}\, m \qquad (9)$$
Under the assumption of a locally smooth prior, the perceived velocity shift remains locally
constant. The variance of the perceived speed v̂_m becomes equal to the variance of the
measurement m, which is the variance of the likelihood (in the m-space), thus
$$\mathrm{var}\,\hat v_m = \sigma(c, v_m)^2 \qquad (10)$$
With (3) and (4), the above derivations provide a simple dependency of the psychophysical
data on the local parameters of the likelihood and the prior.
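These closed-form approximations are easy to verify numerically: build the posterior of equation (5) on a grid and compare its mean shift with the prediction of equation (8), EΔ = (a(m)/b(m)) σ², where for a globally linear prior a(m) = a and b(m) = a·m + b. The parameter values and grid limits below are illustrative choices of ours:

    import numpy as np

    def shift_numeric(m, sigma, a, b, half_width=3.0, n=20001):
        """Mean shift of the posterior exp(-(v-m)^2 / (2 sigma^2)) (a v + b),
        computed on a grid where the linear prior stays positive."""
        v = np.linspace(m - half_width, m + half_width, n)
        post = np.exp(-0.5 * ((v - m) / sigma) ** 2) * (a * v + b)
        post = post / post.sum()
        return float((v * post).sum() - m)

    m, sigma, a, b = 2.0, 0.5, -0.3, 2.0       # a < 0: prior decreasing in v_m
    print(shift_numeric(m, sigma, a, b))        # numerical shift of the estimate
    print(a / (a * m + b) * sigma ** 2)         # prediction a(m)/b(m) * sigma^2

Both printed values agree closely, confirming the first-order approximation.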
3.2 Choosing a logarithmic speed representation
We now want to choose the appropriate mapping function f (v) that maps v to the m-space.
We define the m-space as the space in which the likelihood is Gaussian with a speedindependent width. We have shown that discrimination threshold is proportional to the
width of the likelihood (4), (10). Also, we know from the psychophysics literature that
visual speed discrimination approximately follows a Weber-Fechner law [11, 12], thus that
the discrimination threshold increases roughly proportionally with speed and so would the
likelihood. A logarithmic speed representation would be compatible with the data and our
choice of the likelihood. Hence, we transform the linear speed-domain v into a normalized
logarithmic domain according to
$$v_m = f(v) = \ln\!\left( \frac{v + v_0}{v_0} \right) \qquad (11)$$
where v0 is a small normalization constant. The normalization is chosen to account for
the expected deviation of equal variance behavior at the low end. Surprisingly, it has been
found that neurons in the Medial Temporal area (Area MT) of macaque monkeys have
speed-tuning curves that are very well approximated by Gaussians of constant width in
above normalized logarithmic space [13]. These neurons are known to play a central role
in the representation of motion. It seems natural to assume that they are strongly involved
in tasks such as our performed psychophysical experiments.
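The mapping of equation (11) makes the approximate Weber-Fechner behavior easy to see: a likelihood of fixed width in the m-space corresponds to a speed increment that grows roughly in proportion to speed. A small sketch, where v0 and the step size are illustrative values rather than the fitted ones:

    import numpy as np

    def to_m_space(v, v0=0.3):
        return np.log((v + v0) / v0)             # equation (11)

    def from_m_space(vm, v0=0.3):
        return v0 * (np.exp(vm) - 1.0)           # inverse of equation (11)

    # A constant step in m-space maps back to a speed increment whose
    # ratio to the speed is nearly constant, except at low speeds:
    for v in (1.0, 2.0, 4.0, 8.0, 12.0):
        dv = from_m_space(to_m_space(v) + 0.1) - v
        print(v, dv / v)

The deviation at low speeds, controlled by v0, matches the observation above that the Weber-Fechner law holds only approximately.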
4 Results
Figure 4 shows the contrast dependent shift of speed perception and the speed discrimination threshold data for two subjects. Data points connected with a dashed line represent
the relative matching speed (v2 /v1 ) for a particular contrast value c2 of the test stimulus
as a function of the speed of the reference stimulus. Error bars are the empirical standard deviation of fits to bootstrapped samples of the data. Clearly, low contrast stimuli
are perceived to move slower. The effect, however, varies across the tested speed range
and tends to become smaller for higher speeds. The relative discrimination thresholds for
two different contrasts as a function of speed show that the Weber-Fechner law holds only
approximately. The data are in good agreement with other data from the psychophysics
literature [1, 11, 8].
For each subject, data from both experiments were used to compute a parametric leastsquares fit according to (3), (4), (7), and (10). In order to test the assumption of a LogNormal likelihood we allowed the standard deviation to be dependent on contrast and speed,
thus σ(c, v_m) = g(c)h(v_m). We split the speed range into six bins (subject 2: five) and
parameterized h(v_m) and the ratio a/b accordingly. Similarly, we parameterized g(c) for
the seven contrast values. The resulting fits are superimposed as bold lines in Figure 4.
Figure 5 shows the fitted parametric values for g(c) and h(v) (plotted in the linear domain),
and the reconstructed prior distribution p(v) transformed back to the linear domain. The
approximately constant values for h(v) provide evidence that a LogNormal distribution
is an appropriate functional description of the likelihood. The resulting values for g(c)
suggest that the likelihood width has a roughly exponentially decaying dependence on contrast
with strong saturation for higher contrasts.
[One row of panels per subject. Panel (a): normalized matching speed versus speed of the reference stimulus [deg/sec], for the test contrasts c2 at reference contrasts c1 = 0.075 and 0.5. Panel (b): relative discrimination threshold versus stimulus speed [deg/sec] for contrasts 0.075 and 0.5.]
Figure 4: Speed discrimination data for two subjects. a) The relative matching speed of
a test stimulus with different contrast levels (c2 =[0.05 0.1 0.2 0.4 0.8]) to achieve subjective equality with a reference stimulus (two different contrast values c1 ). b) The relative
discrimination threshold for two stimuli with equal contrast (c1,2 =[0.075 0.5]).
[For each subject: the reconstructed prior p(v) (unnormalized) versus speed [deg/sec] with Gaussian and power-law fits (fitted exponents n = -1.41 and n = -1.35 for subjects 1 and 2), alongside the fitted likelihood parameters g(c) versus contrast and h(v) versus speed [deg/sec].]
Figure 5: Reconstructed prior distribution and parameters of the likelihood function. The
reconstructed prior for both subjects show much heavier tails than a Gaussian (dashed fit),
approximately following a power-law function with exponent n ≈ -1.4 (bold line).
5 Conclusions
We have proposed a probabilistic framework based on a Bayesian ideal observer and standard signal detection theory. We have derived a likelihood function and prior distribution
for the estimator, with a fairly conservative set of assumptions, constrained by psychophysical measurements of speed discrimination and matching. The width of the resulting likelihood is nearly constant in the logarithmic speed domain, and decreases approximately
exponentially with contrast. The prior expresses a preference for slower speeds, and approximately follows a power-law distribution, thus has much heavier tails than a Gaussian.
It would be interesting to compare the prior distributions derived here with measured true
distributions of local image velocities that impinge on the retina. Although a number of
authors have measured the spatio-temporal structure of natural images [14, e.g. ], it is
clearly difficult to extract therefrom the true prior distribution because of the feedback loop
formed through movements of the body, head and eyes.
Acknowledgments
The authors thank all subjects for their participation in the psychophysical experiments.
References
[1] P. Thompson. Perceived rate of movement depends on contrast. Vision Research, 22:377–380, 1982.
[2] L.S. Stone and P. Thompson. Human speed perception is contrast dependent. Vision Research, 32(8):1535–1549, 1992.
[3] A. Yuille and N. Grzywacz. A computational theory for the perception of coherent visual motion. Nature, 333(5):71–74, May 1988.
[4] Alan Stocker. Constraint Optimization Networks for Visual Motion Perception - Analysis and Synthesis. PhD thesis, Dept. of Physics, Swiss Federal Institute of Technology, Zürich, Switzerland, March 2002.
[5] Eero Simoncelli. Distributed analysis and representation of visual motion. PhD thesis, MIT, Dept. of Electrical Engineering, Cambridge, MA, 1993.
[6] Y. Weiss, E. Simoncelli, and E. Adelson. Motion illusions as optimal percepts. Nature Neuroscience, 5(6):598–604, June 2002.
[7] D.M. Green and J.A. Swets. Signal Detection Theory and Psychophysics. Wiley, New York, 1966.
[8] F. Hürlimann, D. Kiper, and M. Carandini. Testing the Bayesian model of perceived speed. Vision Research, 2002.
[9] Y. Weiss and D.J. Fleet. Probabilistic Models of the Brain, chapter Velocity Likelihoods in Biological and Machine Vision, pages 77–96. Bradford, 2002.
[10] K. Koerding and D. Wolpert. Bayesian integration in sensorimotor learning. Nature, 427(15):244–247, January 2004.
[11] Leslie Welch. The perception of moving plaids reveals two motion-processing stages. Nature, 337:734–736, 1989.
[12] S. McKee, G. Silvermann, and K. Nakayama. Precise velocity discrimination despite random variations in temporal frequency and contrast. Vision Research, 26(4):609–619, 1986.
[13] C.H. Anderson, H. Nover, and G.C. DeAngelis. Modeling the velocity tuning of macaque MT neurons. Journal of Vision/VSS abstract, 2003.
[14] D.W. Dong and J.J. Atick. Statistics of natural time-varying images. Network: Computation in Neural Systems, 6:345–358, 1995.
Using Machine Learning to Break Visual
Human Interaction Proofs (HIPs)
Kumar Chellapilla
Microsoft Research
One Microsoft Way
Redmond, WA 98052
[email protected]
Patrice Y. Simard
Microsoft Research
One Microsoft Way
Redmond, WA 98052
[email protected]
Abstract
Machine learning is often used to automatically solve human tasks.
In this paper, we look for tasks where machine learning algorithms
are not as good as humans with the hope of gaining insight into
their current limitations. We studied various Human Interactive
Proofs (HIPs) on the market, because they are systems designed to
tell computers and humans apart by posing challenges presumably
too hard for computers. We found that most HIPs are pure
recognition tasks which can easily be broken using machine
learning. The harder HIPs use a combination of segmentation and
recognition tasks. From this observation, we found that building
segmentation tasks is the most effective way to confuse machine
learning algorithms. This has enabled us to build effective HIPs
(which we deployed in MSN Passport), as well as design
challenging segmentation tasks for machine learning algorithms.
1 Introduction
The OCR problem for high resolution printed text has virtually been solved 10 years
ago [1]. On the other hand, cursive handwriting recognition today is still too poor
for most people to rely on. Is there a fundamental difference between these two
seemingly similar problems?
To shed more light on this question, we study problems that have been designed to
be difficult for computers. The hope is that we will get some insight on what the
stumbling blocks are for machine learning and devise appropriate tests to further
understand their similarities and differences.
Work on distinguishing computers from humans traces back to the original Turing
Test [2] which asks that a human distinguish between another human and a machine
by asking questions of both. Recent interest has turned to developing systems that
allow a computer to distinguish between another computer and a human. These
systems enable the construction of automatic filters that can be used to prevent
automated scripts from utilizing services intended for humans [4]. Such systems
have been termed Human Interactive Proofs (HIPs) [3] or Completely Automated
Public Turing Tests to Tell Computers and Humans Apart (CAPTCHAs) [4]. An
overview of the work in this area can be found in [5]. Construction of HIPs that are
of practical value is difficult because it is not sufficient to develop challenges at
which humans are somewhat more successful than machines. This is because the
cost of failure for an automatic attacker is minimal compared to the cost of failure
for humans. Ideally a HIP should be solved by humans more than 80% of the time,
while an automatic script with reasonable resource use should succeed less than
0.01% of the time. This latter ratio (1 in 10,000) is a function of the cost of an
automatic trial divided by the cost of having a human perform the attack.
This constraint of generating tasks that are failed 99.99% of the time by all
automated algorithms has generated various solutions which can easily be sampled
on the internet. Seven different HIPs, namely, Mailblocks, MSN (before April 28th,
2004), Ticketmaster, Yahoo, Yahoo v2 (after Sept '04), Register, and Google, will
be given as examples in the next section. We will show in Section 3 that machine-learning-based attacks are far more successful than 1 in 10,000. Yet, some of these
HIPs are harder than others and could be made even harder by identifying the
recognition and segmentation parts, and emphasizing the latter. Section 4 presents
examples of more difficult HIPs which are much more respectable challenges for
machine learning, and yet surprisingly easy for humans. The final section discusses
a (known) weakness of machine learning algorithms and suggests designing simple
artificial datasets for studying this weakness.
2 Examples of HIPs
The HIPs explored in this paper are made of characters (or symbols) rendered to an
image and presented to the user. Solving the HIP requires identifying all characters
in the correct order. The following HIPs can be sampled from the web:
Mailblocks: While signing up for free email service with
(www.mailblocks.com), you will find HIP challenges of the type:
mailblocks
MSN: While signing up for free e-mail with MSN Hotmail (www.hotmail.com), you
will find HIP challenges of the type:
Register.com: While requesting a whois lookup for a domain at www.register.com,
you will receive HIP challenges of the type:
Yahoo!/EZ-Gimpy (CMU): While signing up for free e-mail service with Yahoo!
(www.yahoo.com), you will receive HIP challenges of the type:
Yahoo! (version 2): Starting in August 2004, Yahoo! introduced their second
generation HIP. Three examples are presented below:
Ticketmaster: While looking for concert tickets at www.ticketmaster.com, you
will receive HIP challenges of the type:
Google/Gmail: While signing up for free e-mail with Gmail at www.google.com,
one will receive HIP challenges of the type:
While solutions to Yahoo HIPs are common English words, those for ticketmaster
and Google do not necessarily belong to the English dictionary. They appear to have
been created using a phonetic generator [8].
3 Using machine learning to break HIPs
Breaking HIPs is not new. Mori and Malik [7] have successfully broken the EZ-Gimpy (92% success) and Gimpy (33% success) HIPs from CMU. Our approach
aims at an automatic process for solving multiple HIPs with minimum human
intervention, using machine learning. In this paper, our main goal is to learn more
about the common strengths and weaknesses of these HIPs rather than to prove that
we can break any one HIP in particular with the highest possible success rate. We
have results for six different HIPs: EZ-Gimpy/Yahoo, Yahoo v2, mailblocks,
register, ticketmaster, and Google.
To simplify our study, we will not be using language models in our attempt to break
HIPs. For example, there are only about 600 words in the EZ-Gimpy dictionary [7],
which means that a random guess attack would get a success rate of 1 in 600 (more
than enough to break the HIP, i.e., greater than 0.01% success). HIPs become harder
when no language model is used. Similarly, when a HIP uses a language model to
generate challenges, success rate of attacks can be significantly improved by
incorporating the language model. Further, since the language model is not common
to all HIPs studied, it was not used in this paper.
Our generic method for breaking all of these HIPs is to write a custom algorithm to
locate the characters, and then use machine learning for recognition. Surprisingly,
segmentation, or finding the characters, is simple for many HIPs which makes the
process of breaking the HIP particularly easy. Gimpy uses a single constant
predictable color (black) for letters even though the background color changes. We
quickly realized that once the segmentation problem is solved, solving the HIP
becomes a pure recognition problem, and it can trivially be solved using machine
learning. Our recognition engine is based on neural networks [6][9]. It yielded a
0.4% error rate on the MNIST database, uses little memory, and is very fast for
recognition (important for breaking HIPs).
For each HIP, we have a segmentation step, followed by a recognition step. It
should be stressed that we are not trying to solve every HIP of a given type, i.e., our
goal is not 100% success rate, but something efficient that can achieve much better
than 0.01%.
In each of the following experiments, 2500 HIPs were hand labeled and used as
follows (a) recognition (1600 for training, 200 for validation, and 200 for testing),
and (b) segmentation (500 for testing segmentation). For each of the five HIPs, a
convolution neural network, identical to the one described in [6], was trained and
tested on gray level character images centered on the guessed character positions
(see below). The trained neural network became the recognizer.
3.1 Mailblocks
To solve the HIP, we select the red channel, binarize and erode it, extract the largest
connected components (CCs), and break up CCs that are too large into two or three
adjacent CCs. Further, vertically overlapping half character size CCs are merged.
The resulting rough segmentation works most of the time. Here is an example:
For instance, in the example above, the NN would be trained, and tested on the
following images:
The end-to-end success rate is 88.8% for segmentation, 95.9% for recognition
(given correct segmentation), and (0.888)*(0.959)^7 = 66.2% total. Note that most of
the errors come from segmentation, even though this is where all the custom
programming was invested.
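For concreteness, here is a minimal Python sketch of the pipeline just described, together with the end-to-end success computation used throughout this section. It assumes OpenCV is available; the binarization threshold and component-size limits are illustrative placeholders, and the CC splitting/merging heuristics are omitted.

    import cv2
    import numpy as np

    def segment_mailblocks(img_bgr, min_area=50, max_area=2000):
        """Rough segmentation sketch: red channel -> binarize -> erode -> CCs.
        Splitting over-large CCs and merging vertically overlapping half-size
        CCs (described above) are omitted for brevity."""
        red = img_bgr[:, :, 2]                       # select the red channel
        _, bw = cv2.threshold(red, 128, 255, cv2.THRESH_BINARY_INV)
        bw = cv2.erode(bw, np.ones((2, 2), np.uint8))
        n, _, stats, _ = cv2.connectedComponentsWithStats(bw)
        boxes = [tuple(stats[i, :4]) for i in range(1, n)
                 if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area]
        return sorted(boxes)                         # left-to-right by x

    def end_to_end(seg_rate, rec_rate, n_chars):
        # P(HIP solved) = P(correct segmentation) * P(char recognized)^n_chars
        return seg_rate * rec_rate ** n_chars

    print(end_to_end(0.888, 0.959, 7))               # ~0.662, as reported above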
3.2 Register
The procedure to solve HIPs is very similar. The image was smoothed, binarized,
and the largest 5 connected components were identified. Two examples are
presented below:
The end-to-end success rate is 95.4% for segmentation, 87.1% for recognition
(given correct segmentation), and (0.954)*(0.871)^5 = 47.8% total.
3.3 Yahoo/EZ-Gimpy
Unlike the mailblocks and register HIPs, the Yahoo/EZ-Gimpy HIPs are richer in
that a variety of backgrounds and clutter are possible. Though some amount of text
warping is present, the text color, size, and font have low variability. Three simple
segmentation algorithms were designed with associated rules to identify which
algorithm to use. The goal was to keep these simple yet effective:
a) No mesh: Convert to grayscale image, threshold to black and white, select
large CCs with sizes close to HIP char sizes. One example:
b) Black mesh: Convert to grayscale image, threshold to black and white,
remove vertical and horizontal line pixels that don't have neighboring
pixels, select large CCs with sizes close to HIP char sizes. One example:
c) White mesh: Convert to grayscale image, threshold to black and white, add
black pixels (in white line locations) if there exist neighboring pixels, select
large CCs with sizes close to HIP char sizes. One example:
Tests for black and white meshes were performed to determine which segmentation
algorithm to use. The end-to-end success rate was 56.2% for segmentation (38.2%
came from a), 11.8% from b), and 6.2% from c), 90.3% for recognition (given
correct segmentation), and (0.562)*(0.903)^4.8 = 34.4% total. The average length of a
Yahoo HIP solution is 4.8 characters.
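As an illustration, one possible reading of the black-mesh rule (b) above, as a Python sketch; the exact neighborhood test is not specified in the text, so the diagonal-neighbor criterion here is an assumption.

    import numpy as np

    def remove_black_mesh(bw):
        """bw: boolean array, True where ink. Drop ink pixels that have no
        diagonal neighbors: one-pixel-wide grid lines vanish while thick
        character strokes survive. (Assumed neighborhood test.)"""
        p = np.pad(bw, 1, constant_values=False)
        diag = p[:-2, :-2] | p[:-2, 2:] | p[2:, :-2] | p[2:, 2:]
        return bw & diag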
3.4 Ticketmaster
The procedure that solved the Yahoo HIP is fairly successful at solving some of the
Ticketmaster HIPs. These HIPs are characterized by criss-crossing lines at random
angles clustered around 0, 45, 90, and 135 degrees. A multipronged attack as in the
Yahoo case (section 3.3) has potential. In the interests of simplicity, a single attack
was developed: Convert to grayscale, threshold to black and white, up-sample
image, dilate first then erode, select large CCs with sizes close to HIP char sizes.
One example:
The dilate-erode combination causes the lines to be removed (along with any thin
objects) but retains solid thick characters. This single attack is successful in
achieving an end-to-end success rate of 16.6% for segmentation, the recognition rate
was 82.3% (in spite of interfering lines), and (0.166)*(0.823)^6.23 = 4.9% total. The
average HIP solution length is 6.23 characters.
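A sketch of the dilate-erode attack in Python, again assuming OpenCV; the scale factor and kernel size are illustrative.

    import cv2
    import numpy as np

    def dilate_erode_attack(gray, scale=2, ksize=3):
        """Up-sample, then dilate followed by erode (a morphological closing):
        thin clutter lines are absorbed while thick character strokes
        survive."""
        big = cv2.resize(gray, None, fx=scale, fy=scale,
                         interpolation=cv2.INTER_LINEAR)
        _, bw = cv2.threshold(big, 128, 255, cv2.THRESH_BINARY_INV)
        k = np.ones((ksize, ksize), np.uint8)
        return cv2.erode(cv2.dilate(bw, k), k)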
3.5 Yahoo version 2
The second generation HIP from Yahoo had several changes: a) it did not use words
from a dictionary or even use a phonetic generator, b) it uses only black and white
colors, c) uses both letters and digits, and d) uses connected lines and arcs as clutter.
The HIP is somewhat similar to the MSN/Passport HIP which does not use a
dictionary, uses two colors, uses letters and digits, and background and foreground
arcs as clutter. Unlike the MSN/Passport HIP, several different fonts are used. A
single segmentation attack was developed: Remove 6 pixel border, up-sample, dilate
first then erode, select large CCs with sizes close to HIP char sizes. The attack is
practically identical to that used for the ticketmaster HIP with different
preprocessing stages and slightly modified parameters. Two examples:
This single attack is successful in achieving an end-to-end success rate of 58.4% for
segmentation, the recognition rate was 95.2%, and (0.584)*(0.952)^5 = 45.7% total.
The average HIP solution length is 5 characters.
3.6 Google/GMail
The Google HIP is unique in that it uses only image warp as a means of distorting
the characters. Similar to the MSN/Passport and Yahoo version 2 HIPs, it is also
two color. The HIP characters are arranged close to one another (they often touch)
and follow a curved baseline. The following very simple attack was used to segment
Google HIPs: Convert to grayscale, up-sample, threshold and separate connected
components.
a)
b)
This very simple attack gives an end-to-end success rate of 10.2% for segmentation,
the recognition rate was 89.3%, giving (0.102)*(0.893)^6.5 = 4.89% total probability
of breaking a HIP. Average Google HIP solution length is 6.5 characters. This can
be significantly improved upon by judicious use of dilate-erode attack. A direct
application doesn't do as well as it did on the Ticketmaster and Yahoo HIPs (because
of the shear and warp of the baseline of the word). More successful and complicated
attacks might estimate and counter the shear and warp of the baseline to achieve
better success rates.
4 Lessons learned from breaking HIPs
From the previous section, it is clear that most of the errors come from incorrect
segmentations, even though most of the development time is spent devising custom
segmentation schemes. This observation raises the following questions: Why is
segmentation a hard problem? Can we devise harder HIPs and datasets? Can we
build an automatic segmentor? Can we compare classification algorithms based on
how useful they are for segmentation?
4.1 The segmentation problem
As a review, segmentation is difficult for the following reasons:
1. Segmentation is computationally expensive. In order to find valid patterns, a
recognizer must attempt recognition at many different candidate locations.
2. The segmentation function is complex. To segment successfully, the system
must learn to identify which patterns are valid among the set of all possible
valid and non-valid patterns. This task is intrinsically more difficult than
classification because the space of input is considerably larger. Unlike the space
of valid patterns, the space of non-valid patterns is typically too vast to sample.
This is a problem for many learning algorithms which yield too many false
positives when presented non-valid patterns.
3. Identifying valid characters among a set of valid and invalid candidates is a
combinatorial problem. For example, correctly identifying which 8 characters
among 20 candidates (assuming 12 false positives), has a 1 in 125,970 (20
choose 8) chance of success by random guessing.
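These counts are easy to verify:

    from math import comb

    # 8 true characters among 20 candidates (12 false positives):
    print(comb(20, 8))               # 125970 -> 1-in-125,970 random guess
    # Bounds quoted in Section 4.3 for the automatic segmentor:
    print(comb(16, 8), comb(32, 8))  # 12870 and 10518300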
4.2 Building better/harder HIPs
We can use what we have learned to build better HIPs. For instance the HIP below
was designed to make segmentation difficult and a similar version has been
deployed by MSN Passport for hotmail registrations (www.hotmail.com):
The idea is that the additional arcs are themselves good candidates for false
characters. The previous segmentation attacks would fail on this HIP. Furthermore,
simple change of fonts, distortions, or arc types would require extensive work for
the attacker to adjust to. We believe HIPs that emphasize the segmentation problem,
such as the above example, are much stronger than the HIPs we examined in this
paper, which rely on recognition being difficult. Pushing this to the extreme, we can
easily generate the following HIPs:
Despite the apparent difficulty of these HIPs, humans are surprisingly good at
solving these, indicating that humans are far better than computers at segmentation.
This approach of adding several competing false positives can in principle be used
to automatically create difficult segmentation problems or benchmarks to test
classification algorithms.
4.3 Building an automatic segmentor
To build an automatic segmentor, we could use the
following procedure. Label characters based on
their correct position and train a recognizer. Apply
the trained recognizer at all locations in the HIP
image. Collect all candidate characters identified
with high confidence by the recognizer. Compute
the probability of each combination of candidates
(going from left to right), and output the solution
string with the highest probability. This is better
illustrated with an example.
Consider the following HIP (to the right). The
trained neural network has these maps (warm
colors indicate recognition) that show that K, Y,
and so on are correctly identified. However, the
maps for 7 and 9 show several false positives. In
general, we would get the following color coded
map for all the different candidates:
[Figure: color-coded recognizer output maps for the HIP image and for the candidate characters K, Y, B, 7, and 9.]
With a threshold of 0.5 on the network's outputs, the map obtained is:
We note that there are several false positives for each true positive. The number of
false positives per true positive character was found to be between 1 and 4, giving a
1 in C(16,8) = 12,870 to 1 in C(32,8) = 10,518,300 random chance of guessing the
correct segmentation for the HIP characters. These numbers can be improved upon
by constraining solution strings to flow sequentially from left to right and by
restricting overlap. For each combination, we compute a probability by multiplying
the 8 probabilities of the classifier for each position. The combination with the
highest probability is the one proposed by the classifier. We do not have results for
such an automatic segmentor at this time. It is interesting to note that with such a
method a classifier that is robust to false positives would do far better than one that
is not. This suggests another axis for comparing classifiers.
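The left-to-right combination search can be written as a small dynamic program. The following sketch is not from the paper: it scores candidate strings by the product of classifier probabilities (in log space), with an assumed minimum horizontal gap standing in for the overlap restriction.

    import math
    from functools import lru_cache

    def best_string(cands, n_chars=8, min_gap=4):
        """cands: list of (x, char, prob) sorted by x. Return the most
        probable left-to-right string of n_chars candidates whose x
        positions advance by at least min_gap pixels (illustrative)."""
        @lru_cache(maxsize=None)
        def solve(i, left):
            if left == 0:
                return 0.0, ""
            if i >= len(cands):
                return float("-inf"), ""
            skip = solve(i + 1, left)                 # don't use candidate i
            x, ch, p = cands[i]
            j = i + 1
            while j < len(cands) and cands[j][0] - x < min_gap:
                j += 1                                # jump past overlaps
            score, rest = solve(j, left - 1)
            take = (score + math.log(p), ch + rest)
            return max(skip, take)
        return solve(0, n_chars)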
5 Conclusion
In this paper, we have successfully applied machine learning to the problem of
solving HIPs. We have learned that decomposing the HIP problem into
segmentation and recognition greatly simplifies analysis. Recognition on even
unprocessed images (given segmentation is solved) can be done automatically
using neural networks. Segmentation, on the other hand, is the difficulty
differentiator between weaker and stronger HIPs and requires custom intervention
for each HIP. We have used this observation to design new HIPs and new tests for
machine learning algorithms with the hope of improving them.
Acknowledgements
We would like to acknowledge Chau Luu and Eric Meltzer for their help with
labeling and segmenting various HIPs. We would also like to acknowledge Josh
Benaloh and Cem Paya for stimulating discussions on HIP security.
References
[1] Baird HS (1992), "Anatomy of a versatile page reader," Proc. IEEE, v. 80, pp. 1059-1065.
[2] Turing AM (1950), "Computing Machinery and Intelligence," Mind, 59:236, pp. 433-460.
[3] First Workshop on Human Interactive Proofs, Palo Alto, CA, January 2002.
[4] Von Ahn L, Blum M, and Langford J, The Captcha Project. http://www.captcha.net
[5] Baird HS and Popat K (2002), "Human Interactive Proofs and Document Image Analysis," Proc. IAPR 2002 Workshop on Document Analysis Systems, Princeton, NJ.
[6] Simard PY, Steinkraus D, and Platt J (2003), "Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis," Proc. International Conference on Document Analysis and Recognition (ICDAR), pp. 958-962, IEEE Computer Society, Los Alamitos.
[7] Mori G and Malik J (2003), "Recognizing Objects in Adversarial Clutter: Breaking a Visual CAPTCHA," Proc. Computer Vision and Pattern Recognition (CVPR) Conference, IEEE Computer Society, vol. 1, pp. I-134-I-141, June 18-20, 2003.
[8] Chew M and Baird HS (2003), "BaffleText: a Human Interactive Proof," Proc. 10th IS&T/SPIE Document Recognition & Retrieval Conf., Santa Clara, CA, Jan. 22.
[9] LeCun Y, Bottou L, Bengio Y, and Haffner P (1998), "Gradient-based learning applied to document recognition," Proceedings of the IEEE, Nov. 1998.
Blind one-microphone speech separation:
A spectral learning approach
Francis R. Bach
Computer Science
University of California
Berkeley, CA 94720
[email protected]
Michael I. Jordan
Computer Science and Statistics
University of California
Berkeley, CA 94720
[email protected]
Abstract
We present an algorithm to perform blind, one-microphone speech separation. Our algorithm separates mixtures of speech without modeling
individual speakers. Instead, we formulate the problem of speech separation as a problem in segmenting the spectrogram of the signal into
two or more disjoint sets. We build feature sets for our segmenter using
classical cues from speech psychophysics. We then combine these features into parameterized affinity matrices. We also take advantage of the
fact that we can generate training examples for segmentation by artificially superposing separately-recorded signals. Thus the parameters of
the affinity matrices can be tuned using recent work on learning spectral
clustering [1]. This yields an adaptive, speech-specific segmentation algorithm that can successfully separate one-microphone speech mixtures.
1 Introduction
The problem of recovering signals from linear mixtures, with only partial knowledge of the
mixing process and the signals, a problem often referred to as blind source separation,
is a central problem in signal processing. It has applications in many fields, including
speech processing, network tomography and biomedical imaging [2]. When the problem is
over-determined, i.e., when there are no more signals to estimate (the sources) than signals
that are observed (the sensors), generic assumptions such as statistical independence of the
sources can be used in order to demix successfully [2]. Many interesting applications,
however, involve under-determined problems (more sources than sensors), where more
specific assumptions must be made in order to demix. In problems involving at least two
sensors, progress has been made by appealing to sparsity assumptions [3, 4].
However, the most extreme case, in which there is only one sensor and two or more sources,
is a much harder and still-open problem for complex signals such as speech. In this setting,
simple generic statistical assumptions do not suffice. One approach to the problem involves
a return to the spirit of classical engineering methods such as matched filters, and estimating
specific models for specific sources, e.g., specific speakers in the case of speech [5, 6].
While such an approach is reasonable, it departs significantly from the desideratum of
"blindness." In this paper we present an algorithm that is a blind separation algorithm: our
algorithm separates speech mixtures from a single microphone without requiring models
of specific speakers.
We take a "discriminative" approach to the problem of speech separation.
That is, rather than building a complex model of speech, we instead focus directly on the
task of separation and optimize parameters that determine separation performance. We
work within a time-frequency representation (a spectrogram), and exploit the sparsity of
speech signals in this representation. That is, although two speakers might speak simultaneously, there is relatively little overlap in the time-frequency plane if the speakers are
different [5, 4]. We thus formulate speech separation as a problem in segmentation in the
time-frequency plane. In principle, we could appeal to classical segmentation methods
from vision (see, e.g. [7]) to solve this two-dimensional segmentation problem. Speech
segments are, however, very different from visual segments, reflecting very different underlying physics. Thus we must design features for segmenting speech from first principles.
It also proves essential to combine knowledge-based feature design with learning methods.
In particular, we exploit the fact that in speech we can generate "training examples" by
artificially superposing two separately-recorded signals. Making use of our earlier work
on learning methods for spectral clustering [1], we use the training data to optimize the
parameters of a spectral clustering algorithm. This yields an adaptive, "discriminative"
segmentation algorithm that is optimized to separate speech signals.
We highlight one other aspect of the problem here: the major computational challenge
involved in applying spectral methods to speech separation. Indeed, four seconds of speech sampled at 5.5 kHz yields 22,000 samples, and thus we need to manipulate affinity matrices of dimension at least 22,000 × 22,000. Thus a major part of our effort has involved the
of dimension at least 22, 000 ? 22, 000. Thus a major part of our effort has involved the
design of numerical approximation schemes that exploit the different time scales present in
speech signals.
The paper is structured as follows. Section 2 provides a review of basic methodology.
In Section 3 we describe our approach to feature design based on known cues for speech
separation [8, 9]. Section 4 shows how parameterized affinity matrices based on these cues
can be optimized in the spectral clustering setting. We describe our experimental results in
Section 5 and present our conclusions in Section 6.
2 Speech separation as spectrogram segmentation
In this section, we first review the relevant properties of speech signals in the time-frequency representation and describe how our training sets are constructed.
2.1 Spectrogram
The spectrogram is a two-dimensional (time and frequency) redundant representation of a
one-dimensional signal [10]. Let $f[t]$, $t = 0, \ldots, T-1$ be a signal in $\mathbb{R}^T$. The spectrogram is defined through windowed Fourier transforms and is commonly referred to as a short-time Fourier transform or as Gabor analysis [10]. The value $(Uf)_{mn}$ of the spectrogram at time window $n$ and frequency $m$ is defined as $(Uf)_{mn} = \frac{1}{\sqrt{M}} \sum_{t=0}^{T-1} f[t]\, w[t - na]\, e^{i 2\pi m t/M}$, where $w$ is a window of length $T$ with small support of length $c$. We assume that the number of samples $T$ is an integer multiple of $a$ and $c$. There are then $N = T/a$ different windows of length $c$. The spectrogram is thus an $N \times M$ image which provides a redundant time-frequency representation of time signals¹ (see Figure 1).
Inversion. Our speech separation framework is based on the segmentation of the spectrogram of a signal $f[t]$ into $S \geq 2$ disjoint subsets $A_i$, $i = 1, \ldots, S$, of $[0, N-1] \times [0, M-1]$.
¹ In our simulations, the sampling frequency is $f_0 = 5.5$ kHz and we use a Hanning window of length $c = 216$ (i.e., 43.2 ms). The spacing between windows is equal to $a = 54$ (i.e., 10.8 ms). We use a 512-point FFT ($M = 512$). For a speech sample of length 4 sec, we have $T = 22{,}000$ samples and then $N = 407$, which makes $\approx 2 \times 10^5$ spectrogram pixels.
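A minimal NumPy sketch of this transform with the footnote's parameters; the per-frame phase and sign conventions may differ from the definition above by unit-modulus factors.

    import numpy as np

    def spectrogram(f, a=54, c=216, M=512):
        """Windowed Fourier transform with the parameters of the footnote.
        Sketch only: frames are indexed locally, so phases differ from the
        global-time definition by a unit-modulus factor per frame."""
        w = np.hanning(c)
        N = (len(f) - c) // a + 1
        U = np.zeros((N, M), dtype=complex)
        for n in range(N):
            frame = np.zeros(M)
            frame[:c] = f[n * a : n * a + c] * w
            U[n] = np.fft.fft(frame) / np.sqrt(M)
        return U                      # N x M complex spectrogram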
Figure 1: Spectrogram of speech; (left) single speaker, (right) two simultaneous speakers. The gray intensity is proportional to the magnitude of the spectrogram. (Axes: time vs. frequency.)
This leads to $S$ spectrograms $U_i$ such that $(U_i)_{mn} = U_{mn}$ if $(m,n) \in A_i$ and zero otherwise; note that the phase is kept the same as that of the original mixed signal.
We now need to find $S$ speech signals $f_i[t]$ such that each $U_i$ is the spectrogram of $f_i$.
In general there are no exact solutions (because the representation is redundant), and a
classical technique is to find the minimum $L_2$-norm approximation, i.e., find $f_i$ such that $\|U_i - U f_i\|^2$ is minimal [10]. The solution of this minimization problem involves the pseudo-inverse of the linear operator $U$ [10] and is equal to $f_i = (U^* U)^{-1} U^* U_i$. By our choice of window (Hanning), $U^* U$ is proportional to the identity matrix, so that the solution to this problem can simply be obtained by applying the adjoint operator $U^*$.
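A sketch of this masking-and-inversion step as windowed overlap-add; the result is correct up to the constant by which $U^* U$ is proportional to the identity.

    import numpy as np

    def reconstruct(U_mixed, mask, a=54, c=216):
        """Keep the mixed spectrogram on one segment (mixed phase retained)
        and invert with the adjoint operator: windowed overlap-add."""
        Ui = U_mixed * mask                      # zero outside the segment
        N, M = Ui.shape
        w = np.hanning(c)
        f = np.zeros((N - 1) * a + c)
        for n in range(N):
            frame = np.real(np.fft.ifft(Ui[n] * np.sqrt(M)))
            f[n * a : n * a + c] += frame[:c] * w
        return f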
Normalization and subsampling There are several ways of normalizing a speech signal.
In this paper, we chose to rescale all speech signals as follows: for each time window $n$, we compute the total energy $e_n = \sum_m |(Uf)_{mn}|^2$ and its 20-point moving average. The signals are normalized so that the 80th percentile of those values is equal to one.
In order to reduce the number of spectrogram samples to consider, for a given pre-normalized speech signal, we threshold coefficients whose magnitudes are less than a value
that was chosen so that the distortion is inaudible.
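As a sketch, the normalization rule reads:

    import numpy as np

    def normalize(U):
        """Rescale so that the 80th percentile of the 20-point moving
        average of per-window energy equals one."""
        e = (np.abs(U) ** 2).sum(axis=1)                 # energy per window
        avg = np.convolve(e, np.ones(20) / 20, mode="same")
        return U / np.sqrt(np.percentile(avg, 80))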
2.2 Generating training samples
Our approach is based on a learning algorithm that optimizes a segmentation criterion. The
training examples that we provide to this algorithm are obtained by mixing separately-normalized speech signals. That is, given two volume-normalized speech signals $f_1, f_2$ of the same duration, with spectrograms $U_1$ and $U_2$, we build a training sample as $U^{\text{train}} = U_1 + U_2$, with a segmentation given by $z = \arg\min\{U_1, U_2\}$. In order to obtain better training partitions (and in particular to be more robust to the choice of normalization), we also search over all $\alpha \in [0,1]$ such that the least-squares reconstruction error of the waveform obtained from segmenting/reconstructing using $z = \arg\min\{\alpha U_1, (1-\alpha) U_2\}$ is minimized. An example of such a partition is shown in Figure 2 (left).
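A sketch of this construction; note that each time-frequency cell is labeled here with its dominant speaker, and the candidate alpha is scored by a spectrogram-domain error, whereas the text scores the least-squares error of the reconstructed waveform.

    import numpy as np

    def training_partition(U1, U2, alphas=np.linspace(0.05, 0.95, 19)):
        """Build a training sample U = U1 + U2 and a target segmentation,
        searching over the relative weight alpha (grid is illustrative)."""
        U = U1 + U2
        best = (np.inf, None)
        for a in alphas:
            z = a * np.abs(U1) >= (1 - a) * np.abs(U2)   # True -> speaker 1
            err = ((np.abs(U1 - U * z) ** 2).sum()
                   + (np.abs(U2 - U * ~z) ** 2).sum())
            best = min(best, (err, z), key=lambda t: t[0])
        return U, best[1]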
3 Features and grouping cues for speech separation
In this section we describe our approach to the design of features for the spectral segmentation. We base our design on classical cues suggested from studies of perceptual grouping [11]. Our basic representation is a "feature map," a two-dimensional representation that has the same layout as the spectrogram. Each of these cues is associated with a specific time scale, which we refer to as "small" (less than 5 frames), "medium" (10 to 20 frames), and "large" (across all frames). (These scales will be of particular relevance to the design
of numerical approximation methods in Section 4.3). Any given feature is not sufficient for
separating by itself; rather, it is the combination of several features that makes our approach
successful.
3.1 Non-harmonic cues
The following non-harmonic cues have counterparts in visual scenes and for these cues we
are able to borrow from feature design techniques used in image segmentation [7].
Continuity Two time-frequency points are likely to belong to the same segment if they
are close in time or frequency; we thus use time and frequency directly as features. This
cue acts at a small time scale.
Common fate cues Elements that exhibit the same time variation are likely to belong to
the same source. This takes several particular forms. The first is simply common offset and
common onset. We thus build an offset map and an onset map, with elements that are zero
when no variation occurs, and are large when there is a sharp decrease or increase (with
respect to time) for that particular time-frequency point. The onset and offset maps are
built using oriented energy filters as used in vision (with one vertical orientation). These
are obtained by convolving the spectrogram with derivatives of Gaussian windows [7].
Another form of the common fate cue is frequency co-modulation, the situation in which
frequency components of a single source tend to move in sync. To capture this cue we
simply use oriented filter outputs for a set of orientation angles (8 in our simulations).
Those features act mainly at a medium time scale.
3.2 Harmonic cues
This is the major cue for voiced speech [12, 9, 8], and it acts at all time scales (small,
medium and large): voiced speech is locally periodic and the local period is usually referred
to as the pitch.
Pitch estimation In order to use harmonic information, we need to estimate potentially
several pitches. We have developed a simple pattern matching framework for doing this
that we present in Appendix A. If $S$ pitches are sought, the output that we obtain from the pitch extractor is, for each time frame $n$, the $S$ pitches $\omega_{n1}, \ldots, \omega_{nS}$, as well as the strength $y_{nms}$ of the $s$-th pitch for each frequency $m$.
Timbre The pitch extraction algorithm presented in Appendix A also outputs the spectral envelope of the signal [12]. This can be used to design an additional feature related
to timbre which helps integrate information regarding speaker identification across time.
Timbre can be loosely defined as the set of properties of a voiced speech signal once the
pitch has been factored out [8]. We add the spectral envelope as a feature (reducing its
dimensionality using principal component analysis).
Building feature maps from pitch information
We build a set of features
from the pitch information. Given a time-frequency point $(m,n)$, let $s(m,n) = \arg\max_s \frac{y_{nms}}{(\sum_{m'} y_{nm's})^{1/2}}$ denote the highest-energy pitch, and define the features $\omega_{n\,s(m,n)}$, $y_{nm\,s(m,n)}$, $\sum_{m'} y_{nm'\,s(m,n)}$, $\frac{y_{nm\,s(m,n)}}{\sum_{m'} y_{nm'\,s(m,n)}}$, and $\frac{y_{nm\,s(m,n)}}{(\sum_{m'} y_{nm'\,s(m,n)})^{1/2}}$. We use a partial
normalization with the square root to avoid including very low energy signals, while allowing a significant difference between the local amplitude of the speakers.
Those features all come with some form of energy level and all features involving pitch
values $\omega$ should take this energy into account when the affinity matrix is built in Section 4.
Indeed, the values of the harmonic features have no meaning when no energy in that pitch
is present.
4 Spectral clustering and affinity matrices
Given the features described in the previous section, we now show how to build affinity
(i.e., similarity) matrices that can be used to define a spectral segmenter. In particular, our
approach builds parameterized affinity matrices, and uses a learning algorithm to adjust
these parameters.
4.1 Spectral clustering
Given $P$ data points to partition into $S \geq 2$ disjoint groups, spectral clustering methods use an affinity matrix $W$, symmetric of size $P \times P$, that encodes topological knowledge about the problem. Once $W$ is available, it is normalized and its first $S$ ($P$-dimensional) eigenvectors are computed. Then, forming a $P \times S$ matrix with these eigenvectors as columns, we cluster the $P$ rows of this matrix as points in $\mathbb{R}^S$ using K-means (or a weighted version thereof). These clusters define the final partition [7, 1].
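A minimal sketch of this procedure (symmetric normalization, top-$S$ eigenvectors, row normalization, then plain K-means; the weighted variant mentioned above is omitted):

    import numpy as np
    from scipy.cluster.vq import kmeans2

    def spectral_cluster(W, S):
        """Cluster P points from a P x P affinity matrix W into S groups."""
        d = W.sum(axis=1)
        L = W / np.sqrt(np.outer(d, d))          # D^{-1/2} W D^{-1/2}
        _, vecs = np.linalg.eigh(L)              # eigenvalues ascending
        V = vecs[:, -S:]                         # top S eigenvectors
        V = V / (np.linalg.norm(V, axis=1, keepdims=True) + 1e-12)
        _, labels = kmeans2(V, S, minit="++")
        return labels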
We prefer spectral clustering methods over other clustering algorithms such as K-means or
mixtures of Gaussians estimated by the EM algorithm because we do not have any reason
to expect the segments of interest in our problem to form convex shapes in the feature
representation.
4.2 Parameterized affinity matrices
The success of spectral methods for clustering depends heavily on the construction of the
affinity matrix W . In [1], we have shown how learning can play a role in optimizing
over affinity matrices. Our algorithm assumes that fully partitioned datasets are available,
and uses these datasets as training data for optimizing the parameters of affinity matrices.
As we have discussed in Section 2.2, such training data are easily obtained in the speech
separation setting. It remains for us to describe how we parameterize the affinity matrices.
From each of the features defined in Section 3, we define a basis affinity matrix $W_j = W_j(\theta_j)$, where $\theta_j$ is a (vector) parameter. We restrict ourselves to affinity matrices whose elements are between zero and one, and with unit diagonal. We distinguish between harmonic and non-harmonic features. For non-harmonic features, we use a radial basis function to define affinities. Thus, if $f_a$ is the value of the feature for data point $a$, we use a basis affinity matrix defined as $W_{ab} = \exp(-\|f_a - f_b\|^{\theta_1})$, where $\theta_1 \geq 1$.
For a harmonic feature, on the other hand, we need to take into account the strength of the feature: if $f_a$ is the value of the feature for data point $a$, with strength $y_a$, we use $W_{ab} = \exp(-|g(y_a, y_b) + \theta_3|^{\theta_4} \|f_a - f_b\|^{\theta_2})$, where $g(u,v) = (u e^{\theta_5 u} + v e^{\theta_5 v})/(e^{\theta_5 u} + e^{\theta_5 v})$ ranges from the minimum of $u$ and $v$ for $\theta_5 = -\infty$ to their maximum for $\theta_5 = +\infty$.
Given $m$ basis matrices, we use the following parameterization of $W$: $W = \sum_{k=1}^{K} \alpha_k\, W_1^{\gamma_{k1}} \cdots W_m^{\gamma_{km}}$, where the products are taken pointwise. Intuitively, if
we consider the values of affinity as soft boolean variables, taking the product of two affinity matrices is equivalent to considering the conjunction of two matrices, while taking the
sum can be seen as their disjunction: our final affinity matrix can thus be seen as a disjunctive normal form. For our application to speech separation, we consider a sum of K = 3
matrices, one matrix for each time scale. This has the advantage of allowing different
approximation schemes for each of the time scales, an issue we address in the following
section.
4.3 Approximations of affinity matrices
The affinity matrices that we consider are huge, of size at least 50,000 by 50,000. Thus a
significant part of our effort has involved finding computationally efficient approximations
of affinity matrices.
Let us assume that the time-frequency plane is vectorized by stacking one time frame after
the other. In this representation, the time scale of a basis affinity matrix W exerts an effect
on the degree of "bandedness" of $W$. The matrix $W$ is said to be band-diagonal with bandwidth $B$ if, for all $i, j$, $|i - j| > B \Rightarrow W_{ij} = 0$. On a small time scale, $W$ has a small bandwidth;
for a medium time scale, the band is larger but still small compared to the total size of the
matrix, while for large scale effects, the matrix W has no band structure. Note that the
bandwidth B can be controlled by the coefficient of the radial basis function involving the
time feature n.
For each of these three cases, we have designed a particular way of approximating the
matrix, while ensuring that in each case the time and space requirements are linear in the
number of time frames.
Small scale If the bandwidth B is very small, we use a simple direct sparse approximation. The complexity of such an approximation grows linearly in the number of time
frames.
Medium and large scale We use a low-rank approximation of the matrix W similar in
spirit to the algorithm of [13]. If we assume that the index set $\{1, \ldots, P\}$ is partitioned randomly into $I$ and $J$, and that $A = W(I, I)$ and $B = W(J, I)$, then $W(I, J) = B^\top$ (by symmetry), and we approximate $C = W(J, J)$ by a linear combination of the columns in $I$, i.e., $\hat{C} = BE$, where $E \in \mathbb{R}^{|I| \times |J|}$. The matrix $E$ is chosen so that when the linear combination defined by $E$ is applied to the columns in $I$, the error is minimal, which leads to an approximation of $W(J, J)$ by $B(A^2 + \lambda I)^{-1} A B^\top$.
If $G$ is the dimension of $J$, then the complexity of finding the approximation is $O(G^3 + G^2 P)$, and the complexity of a matrix-vector product with the low-rank approximation is $O(G^2 P)$. The storage requirement is $O(GP)$. For large bandwidths, we use a constant $G$,
i.e., we make the assumption that the rank that is required to encode a speaker is independent of the duration of the signals.
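A sketch of this low-rank completion; the small regularizer lam corresponds to the $\lambda I$ term above, and its value here is an illustrative assumption.

    import numpy as np

    def low_rank_completion(A, B, lam=1e-6):
        """Given A = W(I, I) and B = W(J, I), approximate W(J, J) by
        B (A^2 + lam*I)^{-1} A B^T."""
        G = A.shape[0]
        M = np.linalg.solve(A @ A + lam * np.eye(G), A)  # (A^2+lam I)^{-1} A
        return B @ M @ B.T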
For mid-range interactions, we need an approximation whose rank grows with time, but
whose complexity does not grow quadratically with time. This is done by using the banded
structure of $A$ and $W$. If $\kappa$ is the proportion of retained indices, then the complexity of storage and matrix-vector multiplication is $O(P \kappa^3 B)$.
5 Experiments
We have trained our segmenter using data from four different speakers, with speech signals
of duration 3 seconds. There were 28 parameters to estimate using our spectral learning
algorithm. For testing, we have use mixes from five speakers that were different from those
in the training set.
In Figure 2, for two speakers from the testing set, we show on the left part an example
of the segmentation that is obtained when the two speech signals are known in advance
(obtained as described in Section 2.2), and on the right side, the segmentation that is output
by our algorithm. Although some components of the "black" speaker are missing, the
segmentation performance is good enough to obtain audible signals of reasonable quality.
The speech samples for this example can be downloaded from www.cs.berkeley.edu/~fbach/speech/. On this web site, there are additional examples of speech separation,
with various speakers, in French and in English.
An important point is that our method does not require to know the speaker in advance in
order to demix successfully; rather, it just requires that the two speakers have distinct and
far enough pitches most of the time (another but less crucial condition is that one pitch is
not too close to twice the other one).
As mentioned earlier, there was a major computational challenge in applying spectral methods to single microphone speech separation. Using the techniques described in Section 4.3,
the separation algorithm has linear running time complexity and memory requirement and,
Figure 2: (Left) Optimal segmentation for the spectrogram in Figure 1 (right), where the two speakers are "black" and "grey"; this segmentation is obtained from the known separated signals. (Right) The blind segmentation obtained with our algorithm. (Axes: time vs. frequency.)
coded in Matlab and C, it takes 30 minutes to separate 4 seconds of speech on a 1.8 GHz
processor with 1GB of RAM.
6 Conclusions
We have presented an algorithm to perform blind source separation of speech signals from a
single microphone. To do so, we have combined knowledge of physical and psychophysical
properties of speech with learning methods. The former provide parameterized affinity
matrices for spectral clustering, and the latter make use of our ability to generate segmented
training data. The result is an optimized segmenter for spectrograms of speech mixtures.
We have successfully demixed speech signals from two speakers using this approach.
Our work thus far has been limited to the setting of ideal acoustics and equal-strength
mixing of two speakers. There are several obvious extensions that warrant investigation.
First, the mixing conditions should be weakened and should allow some form of delay or
echo. Second, there are multiple applications where speech has to be separated from a
non-stationary noise; we believe that our method can be extended to this situation. Third,
our framework is based on segmentation of the spectrogram and, as such, distortions are
inevitable since this is a ?lossy? formulation [6, 4]. We are currently working on postprocessing methods that remove some of those distortions. Finally, while running time
and memory requirements of our algorithm are linear in the duration of the signal to be
separated, the resource requirements remain a concern. We are currently working on further
numerical techniques that we believe will bring our method significantly closer to real-time.
Appendix A. Pitch estimation
Pitch estimation for one pitch In this paragraph, we assume that we are given one time
slice s of the spectrogram magnitude, s ? RM . The goal is to have a specific pattern match
s. Since the speech signals are real, the spectrogram is symmetric and we can consider only
M/2 samples.
If the signal is exactly periodic, then the spectrogram magnitude for that time frame is exactly a superposition of bumps at multiples of the fundamental frequency, The patterns we
are considering have thus the following parameters: a ?bump? function u 7? b(u), a pitch
? ? [0, M/2] and a sequence of harmonics x1 , . . . , xH at frequencies ?1 = ?, . . . , ?H =
H?, where H is the largest acceptable harmonic multiple, i.e., H = bM/2?c. The pattern
s? = s?(x, b, ?) is then built as a weighted sum of bumps.
By pattern matching, we mean to find the pattern s? as close to s in the L2 -norm sense. We
impose a constraint on the harmonic strengths (xh ), namely, that they are samples at h?
R M/2 (2)
of a function g with small second derivative norm 0
|g (?)|2 d?. The function g can
be seen as the envelope of the signal and is related to the ?timbre? of the speaker [8]. The
explicit consideration of the envelope and its smoothness is necessary for two reasons: (a)
it will provide a timbre feature helpful for separation, (b) it helps avoid pitch-halving, a
traditional problem of pitch extractors [12].
R M/2 (2)
Given b and ?, we minimize with respect to x, ||s ? s?(x)||2 + ? 0
|g (?)|2 d?, where
xh = g(h?). Since s?(x) is linear function of x, this is a spline smoothing problem, and the
solution can be obtained in closed form with complexity O(H 3 ) [14].
We now have to search over b and ?, knowing that the harmonic strengths x can be found
in closed form. We use exhaustive search on a grid for ?, while we take only a few bump
shapes. The main reason for several bump shapes is to account for the only approximate
periodicity of voiced speech. For further details and extensions, see [15].
Pitch estimation for several pitches If we are to estimate S pitches, we estimate them
recursively, by removing the estimated harmonic signals. In this paper, we assume that the
number of speakers and hence the maximum number of pitches is known. Note, however,
that since all our pitch features are always used with their strengths, our separation method
is relatively robust to situations where we try to look for too many pitches.
Acknowledgments
We wish to acknowledge support from a grant from Intel Corporation, and a graduate fellowship to Francis Bach from Microsoft Research.
References
[1] F. R. Bach and M. I. Jordan. Learning spectral clustering. In NIPS 16, 2004.
[2] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. John
Wiley & Sons, 2001.
[3] M. Zibulevsky, P. Kisilev, Y. Y. Zeevi, and B. A. Pearlmutter. Blind source separation
via multinode sparse representation. In NIPS 14, 2002.
[4] O. Yilmaz and S. Rickard. Blind separation of speech mixtures via time-frequency
masking. IEEE Trans. Sig. Proc., 52(7):1830-1847, 2004.
[5] S. T. Roweis. One microphone source separation. In NIPS 13, 2001.
[6] G.-J. Jang and T.-W. Lee. A probabilistic approach to single channel source separation. In NIPS 15, 2003.
[7] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE PAMI,
22(8):888-905, 2000.
[8] A. S. Bregman. Auditory Scene Analysis: The Perceptual Organization of Sound.
MIT Press, 1990.
[9] G. J. Brown and M. P. Cooke. Computational auditory scene analysis. Computer
Speech and Language, 8:297?333, 1994.
[10] S. Mallat. A Wavelet Tour of Signal Processing. Academic Press, 1998.
[11] M. Cooke and D. P. W. Ellis. The auditory organization of speech and other sources in
listeners and computational models. Speech Communication, 35(3-4):141-177, 2001.
[12] B. Gold and N. Morgan. Speech and Audio Signal Processing: Processing and Perception of Speech and Music. Wiley Press, 1999.
[13] S. Belongie, C. Fowlkes, F. Chung, and J. Malik. Spectral partitioning with indefinite
kernels using the Nyström extension. In ECCV, 2002.
[14] G. Wahba. Spline Models for Observational Data. SIAM, 1990.
[15] F. R. Bach and M. I. Jordan. Discriminative training of hidden Markov models for
multiple pitch tracking. In ICASSP, 2005.
Sub-Microwatt Analog VLSI
Support Vector Machine for
Pattern Classification and Sequence Estimation
Shantanu Chakrabartty and Gert Cauwenberghs
Department of Electrical and Computer Engineering
Johns Hopkins University, Baltimore, MD 21218
{shantanu,gert}@jhu.edu
Abstract
An analog system-on-chip for kernel-based pattern classification and sequence estimation is presented. State transition probabilities conditioned
on input data are generated by an integrated support vector machine. Dot
product based kernels and support vector coefficients are implemented
in analog programmable floating gate translinear circuits, and probabilities are propagated and normalized using sub-threshold current-mode
circuits. A 14-input, 24-state, and 720-support vector forward decoding kernel machine is integrated on a 3mm×3mm chip in 0.5µm CMOS
technology. Experiments with the processor trained for speaker verification and phoneme sequence estimation demonstrate real-time recognition
accuracy at par with floating-point software, at sub-microwatt power.
1 Introduction
The key to attaining autonomy in wireless sensory systems is to embed pattern recognition
intelligence directly at the sensor interface. Severe power constraints in wireless integrated
systems incur design optimization across device, circuit, architecture and system levels [1].
Although system-on-chip methodologies have been primarily digital, analog integrated systems are emerging as promising alternatives with higher energy efficiency and integration
density, exploiting the analog sensory interface and computational primitives inherent in
device physics [2]. Analog VLSI has been chosen, for instance, to implement Viterbi [3]
and HMM-based [4] sequence decoding in communications and speech processing.
Forward-Decoding Kernel Machines (FDKM) [5] provide an adaptive framework for general maximum a posteriori (MAP) sequence decoding that avoids the need for backward
recursion over the data in Viterbi and HMM-based sequence decoding [6]. At the core of
FDKM is a support vector machine (SVM) [7] for large-margin trainable pattern classification, performing noise-robust regression of transition probabilities in forward sequence
estimation. The achievable limits of FDKM power-consumption are determined by the
number of support vectors (i.e., regression templates), which in turn are determined by
the complexity of the discrimination task and the signal-to-noise ratio of the sensor interface [8].
[Figure 1: FDKM system architecture.]
In this paper we describe an implementation of FDKM in silicon, for use in adaptive sequence detection and pattern recognition. The chip is fully configurable with parameters
directly downloadable onto an array of floating-gate CMOS computational memory cells.
By means of calibration and chip-in-loop training, the effect of mismatch and non-linearity
in the analog implementation is significantly reduced.
Section 2 reviews FDKM formulation and notations. Section 3 describes the schematic
details of hardware implementation of FDKM. Section 4 presents results from experiments
conducted with the fabricated chip and Section 5 concludes with future directions.
2 FDKM Sequence Decoding
FDKM recognition and sequence decoding are formulated in the framework of MAP (maximum a posteriori) estimation, combining Markovian dynamics with kernel machines.
The MAP forward decoder receives the sequence X[n] = {x[1], x[2], . . . , x[n]} and produces an estimate of the conditional probability measure of state variables q[n] over all classes i ∈ 1, .., S, α_i[n] = P(q[n] = i | X[n]). Unlike hidden Markov models, the states directly encode the symbols, and the observations x modulate transition probabilities between states [6]. Estimates of the posterior probability α_i[n] are obtained from estimates of local transition probabilities using the forward-decoding procedure [6]

α_i[n] = Σ_{j=1}^S P_ij[n] α_j[n−1]        (1)
where P_ij[n] = P(q[n] = i | q[n−1] = j, x[n]) denotes the probability of making a transition from class j at time n−1 to class i at time n, given the current observation vector x[n]. Forward decoding (1) expresses first-order Markovian sequential dependence of state probabilities conditioned on the data.

The transition probabilities P_ij[n] in (1) attached to each outgoing state j are obtained by normalizing the SVM regression outputs f_ij(x):

P_ij[n] = [f_ij(x[n]) − z_j[n]]_+        (2)
[Figure 2: Schematic of the SVM stage. (a) Multiply accumulate cell and reference cell for the MVM blocks in Figure 1. (b) Combined input, kernel and MVM modules.]
where [·]_+ = max(·, 0). The normalization mechanism is subtractive rather than divisive, with normalization offset factor z_j[n] obtained using a reverse-waterfilling criterion with respect to a probability margin γ [10],

Σ_i [f_ij(x[n]) − z_j[n]]_+ = γ.        (3)
Besides improved robustness [8], the advantage of the subtractive normalization (3) is its amenability to current-mode implementation, as opposed to logistic normalization [11] which requires exponentiation of currents. The SVM outputs (margin variables) f_ij(x) are given by:

f_ij(x) = Σ_{s=1}^N α_ij^s K(x, x_s) + b_ij        (4)

where K(·,·) denotes a symmetric positive-definite kernel¹ satisfying the Mercer condition, such as a Gaussian radial basis function or a polynomial spline [7], and x_s[m], m = 1, .., N denote the support vectors. The parameters α_ij^s in (4) and the support vectors x_s[m] are determined by training on a labeled training set using a recursive FDKM procedure described in [5].
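As a point of reference for equations (1)-(4), the following software sketch mimics the decoding pipeline in floating point. It is a minimal model written for this text, not the chip's analog signal path: the bisection solver for the offsets z_j, the array shapes and all names are our own illustrative assumptions.

```python
import numpy as np

def margin_normalize(f, gamma=1.0, iters=60):
    """Solve sum_i [f_i - z]_+ = gamma for z by bisection (eq. 3),
    then return normalized transition probabilities (eq. 2)."""
    lo, hi = f.min() - gamma, f.max()
    for _ in range(iters):
        z = 0.5 * (lo + hi)
        if np.maximum(f - z, 0.0).sum() < gamma:
            hi = z          # offset too large, slack below the margin
        else:
            lo = z
    return np.maximum(f - z, 0.0) / gamma

def fdkm_decode(X, sv, alpha, b, gamma=1.0):
    """Forward decoding (eq. 1) driven by SVM margins (eq. 4).
    X: (T, d) inputs; sv: (N, d) support vectors;
    alpha: (S, S, N) coefficients; b: (S, S) biases."""
    S = alpha.shape[0]
    post = np.full(S, 1.0 / S)                 # alpha_i[0], uniform start
    trajectory = []
    for x in X:
        k = (sv @ x) ** 2                      # second-order polynomial kernel
        f = alpha @ k + b                      # margins f_ij(x)
        P = np.stack([margin_normalize(f[:, j], gamma) for j in range(S)],
                     axis=1)                   # P_ij[n]; columns sum to 1
        post = P @ post                        # recursion (1)
        post /= post.sum()                     # guard against numerical drift
        trajectory.append(post.copy())
    return np.array(trajectory)
```

Each column of P is a proper transition distribution because the water-filling step enforces the margin constraint (3) before division by γ.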
3 Hardware Implementation
A second-order polynomial kernel K(x, y) = (x·y)² was chosen for convenience of implementation. This inner-product based architecture directly maps onto an analog computational array, where storage and computation share common circuit elements.

¹ K(x, y) = Φ(x)·Φ(y). The map Φ(·) need not be computed explicitly, as it only appears in inner-product form.

[Figure 3: Schematic of the margin propagation block.]

The FDKM
system architecture is shown in Figure 1. It consists of several SVM stages that generate state transition probabilities P_ij[n] modulated by input data x[n], and a forward decoding block that performs maximum a posteriori (MAP) estimation of the state sequence α_i[n].
3.1 SVM Stage
The SVM stage implements (4) to generate unnormalized probabilities. It consists of a kernel stage computing kernels K(x_s, x) between input vector x and stored support vectors x_s, and a coefficient stage linearly combining kernels using stored training parameters α_ij^s.
Both kernel and coefficient blocks incorporate an analog matrix-vector multiplier (MVM)
with embedded storage of support vectors and coefficients. A single multiply-accumulate
cell, using floating-gate CMOS non-volative analog storage, is shown in Figure 2(a). The
floating gate node voltages (Vg ) of transistors M2 are programmed using hot-electron injection and tunneling [12]. The input stage comprising transistors M1, M3 and M4 forms
a key component in the design of the array and sets the voltage at node A as a function
of input current. By operating the array in weak-inversion, the output current through the
floating gate element M2 in terms of the input stage floating gate potential Vgref and memory element floating gate potential Vg is given by
I_out = I_in e^(−κ(V_g − V_gref)/U_T)        (5)

as a product of two pseudo-currents, leading to a single-quadrant multiplier (a numerical sketch follows the two observations below). Two observations can be directly made regarding Eqn. (5):
1. The input stage eliminates the effect of the bulk on the output current, making it
a function of the reference floating gate voltage which can be easily programmed
for the entire row.
2. The weight is differential in the floating gate voltages Vg ? Vgref , allowing to
increase or decrease the weight by hot electron injection only, without the need
for repeated high-voltage tunneling. For instance, the leakage current in unused
rows can be reduced significantly by programming the reference gate voltage to a
high value, leading to power savings.
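As a rough numerical illustration of Eqn. (5) and of observation 2 above, the sketch below models one row of weak-inversion multipliers; the values of κ, U_T and the programmed gate voltages are illustrative assumptions, not measured chip parameters.

```python
import numpy as np

KAPPA, UT = 0.7, 0.0258   # assumed subthreshold slope factor and thermal voltage (V)

def mvm_row(i_in, vg, vg_ref):
    """Eq. (5): each output current is the input pseudo-current scaled by a
    gain set by the differential floating-gate voltage Vg - Vgref."""
    return i_in * np.exp(-KAPPA * (vg - vg_ref) / UT)

vg = np.array([1.00, 1.03, 1.00])             # raise one gate by 30 mV
print(mvm_row(1e-9, vg, vg_ref=1.00))         # middle gain falls to ~0.44
```

Reprogramming V_gref shifts the gain of the entire row at once, which is how the row-wise power savings described above are realized.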
The feedback transistor in the input stage M3 reduces the output impedance of node A, given by r_o ≈ g_d1/(g_m1 g_m2). This makes the array scalable, as additional memory elements
can be added to the node without pulling the voltage down. An added benefit of keeping
the voltage at node A fixed is reduced variation in the back gate parameter κ in the floating
gate elements. The current from each memory element is summed on a low impedance
node established by two diode connected transistors M7-M10. This partially compensates
for large Early voltage effects implicit in floating gate transistors.
[Figure 4: Single input-output response of the SVM stage illustrating the square transfer function of the kernel block (log(I_out) vs. log(I_in)) where all the MVM elements are programmed for unity gain. (a) Before calibration, showing mismatch between rows. (b) After pre-distortion compensation of input and output coefficients.]
The array of elements M2 with peripheral circuits as shown in Figure 2(a) thus implements a
simple single quadrant matrix-vector multiplication module. The single quadrant operation
is adequate for unsigned inputs, and hence unsigned support vectors. A simple squaring
circuit M7-M10 is used to implement the non-linear kernel as shown in figure 2(b). The
requirement on the type of non-linearity is not stringent and can be easily incorporated
into the kernel in SVM training procedure [5]. The coefficient block consists of the same
matrix-vector multiplier given in figure 2(a). For the general probability model given by (2)
a single quadrant multiplication is sufficient to model any distribution. This can be easily
verified by observing that the distribution (2) is invariant to uniform offset in the coefficients
α_ij^s.
3.2 Forward Decoding Stage
The forward recursion decoding is implemented by a modified version of the sum-product
probability propagation circuit in [13], performing margin-based probability propagation
according to (1). In contrast to divisive normalization that relies on the translinear principle
using sub-threshold MOS or bipolar circuits in [13], the implementation of margin-based
subtractive normalization shown in figure 3 [10] is device operation independent. The
circuit consists of several normalization cells Aij along columns computing Pij = [fij ?
z]+ using transistors M1-M4. Transistors M5-M9 form a feedback loop that compares and
stabilizes the circuit to the normalization criterion (3). The currents through transistors
M4 are auto-normalized to the previous state value α_j[n−1] to produce a new estimate of α_i[n] based on recursion (1). The delay in equation (1) is implemented using a log-domain filter, and a fixed normalization current ensures that all output currents are properly
scaled to stabilize the continuous-time feedback loop.
4 Experimental Results
A 14-input, 24-state, and 24×30-support vector FDKM was integrated on a 3mm×3mm
FDKM chip, fabricated in a 0.5?m CMOS process, and fully tested. Figure 5(c) shows the
micrograph of the fabricated chip. Labeled training data pertaining to a certain task were
used to train an SVM, and the training coefficients thus obtained were programmed onto
the chip.
Table 1: FDKM Chip Summary

    Area:                         3mm×3mm
    Technology:                   0.5µm CMOS
    Supply Voltage:               4V
    System Parameters:
      Floating Cell Count:        28814
      Number of Support Vectors:  720
      Input Dimension:            14
      Number of States:           24
      Power Consumption:          80nW - 840nW
      Energy Efficiency:          1.6pJ/MAC

[Figure 5: (a) Transition-based sequence detection in a 13-state Markov model. (b) Experimental recording of α_7 = P(q_7), detecting one of two recurring sequences in inputs x_1-x_6 (x_1, x_3 and x_5 shown). (c) Micrograph of the FDKM chip.]
Programming of the trained coefficients was performed by programming respective cells
M2 along with the corresponding input stage M1, so as to establish the desired ratio of
currents. The values were established by continuing hot electron injection until the desired current was attained. During hot electron injection, the control gate Vc was adjusted
to set the injection current to a constant level for stable injection. All cells in the kernel
and coefficient modules of the SVM stage are randomly accessible for read, write and calibrate operations. The calibration procedure compensates for mismatch between different
input/output paths by adapting the floating gate elements in the MVM cells. This is illustrated in Figure 4 where the measured square kernel transfer function is shown before and
after calibration.
The chip is fully reconfigurable and can perform different recognition tasks by programming different training parameters, as demonstrated through three examples below. Depending on the number of active support vectors and the absolute level of currents (in
relation to decoding bandwidth), power dissipation is in the lower nanowatt to microwatt
range.
[Figure 6: (a) Measured and simulated ROC curve for the speaker verification experiment. (b) Experimental phoneme recognition by FDKM chip. The state probability shown is for consonant /t/ in words "torn," "rat," and "error." Two peaks are located as expected from the input sequence, shown on top.]
For the first set of experiments, parameters corresponding to a simple Markov chain shown
in figure 5(a) were programmed onto the chip to differentiate between two given sequences
of input features: one a sweep of active input components in rising order (x1 through
x6 ), and the other in descending order (x6 through x1 ). The output of state q7 in the
Markov chain is shown in figure 5(b). It can be clearly observed that state q_7 "fires" only
when a rising sequence of pulse trains arrives. The FDKM chip thereby demonstrates
probability propagation similar to that in the architecture of [4]. The main difference is that
the present architecture can be configured for detecting other, more complex sequences
through programming and training.
For the second set of experiments the FDKM chip was programmed to perform speaker verification using speech data from the YOHO corpus. For training we chose 480 utterances corresponding to 10 separate speakers (101-110). For each of these utterances 12 mel-cepstra coefficients were computed for every 25 ms frame. These coefficients were clustered using
k-means clustering to obtain 50 clusters per speaker which were then used for training the
SVM. For testing 480 utterances for those speakers were chosen, and confidence scores
returned by the SVMs were integrated over all frames of an utterance to obtain a final
decision. Verification results obtained from the chip demonstrate 97% true acceptance at
1% false positive rate, identical to the performance obtained through floating point software simulations as shown by the receiver operating characteristic shown in figure 6(a).
The total power consumption for this task is only 840nW, demonstrating its suitability for
autonomous sensor applications.
A third set of experiments aimed at detecting phone utterances in human speech. Mel-cepstra coefficients of six phone utterances (/t/, /n/, /r/, /ow/, /ah/, /eh/) selected from
the TIMIT corpus were transformed using singular value decomposition and thresholding.
Even though the recognition was demonstrated for the reduced set of features, the chip operates internally with analog inputs. Figure 6(b) illustrates correct detection of phonemes as
identified by the presence of phone /t/ at the expected time instances in the input sequence.
5 Discussion and Conclusion
We designed an FDKM based sequence recognition system on silicon and demonstrated
its performance on simple but general tasks. The chip is fully reconfigurable and different sequence recognition engines can be programmed using parameters obtained through
SVM training. FDKM decoding is performed in real-time and is ideally suited for sequence
recognition and verification problems involving speech features. All analog processing in
the chip is performed by transistors operating in weak-inversion resulting in power dissipation in the nanowatt to microwatt range. Non-volatile storage of training parameters further
reduces standby power dissipation.
We also note that while low power dissipation is a virtue in many applications, increased
power can be traded for increased bandwidth. For instance, the presented circuits could be
adapted using heterojunction bipolar junction transistors in a SiGe process for ultra-high
speed MAP decoding applications in digital communication, using essentially the same
FDKM architecture as presented here.
Acknowledgement: This work is supported by a grant from The Catalyst Foundation (http://www.catalyst-foundation.org), NSF IIS-0209289, ONR/DARPA N00014-00-C0315, and ONR N00014-99-1-0612. The chip was fabricated through the MOSIS service.
References
[1] Wang, A. and Chandrakasan, A.P., "Energy-Efficient DSPs for Wireless Sensor Networks," IEEE Signal Proc. Mag., vol. 19 (4), pp. 68-78, July 2002.
[2] Vittoz, E.A., "Low-Power Design: Ways to Approach the Limits," Dig. 41st IEEE Int. Solid-State Circuits Conf. (ISSCC), San Francisco CA, 1994.
[3] Shakiba, M.S., Johns, D.A., and Martin, K.W., "BiCMOS Circuits for Analog Viterbi Decoders," IEEE Trans. Circuits and Systems II, vol. 45 (12), Dec. 1998.
[4] Lazzaro, J., Wawrzynek, J., and Lippmann, R.P., "A Micropower Analog Circuit Implementation of Hidden Markov Model State Decoding," IEEE J. Solid-State Circuits, vol. 32 (8), Aug. 1997.
[5] Chakrabartty, S. and Cauwenberghs, G., "Forward Decoding Kernel Machines: A Hybrid HMM/SVM Approach to Sequence Recognition," IEEE Int. Conf. of Pattern Recognition: SVM workshop (ICPR'2002), Niagara Falls, 2002.
[6] Bourlard, H. and Morgan, N., Connectionist Speech Recognition: A Hybrid Approach, Kluwer Academic, 1994.
[7] Vapnik, V., The Nature of Statistical Learning Theory, New York: Springer-Verlag, 1995.
[8] Chakrabartty, S. and Cauwenberghs, G., "Power Dissipation Limits and Large Margin in Wireless Sensors," Proc. IEEE Int. Symp. Circuits and Systems (ISCAS2003), vol. 4, 25-28, May 2003.
[9] Bahl, L.R., Cocke, J., Jelinek, F. and Raviv, J., "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Transactions on Inform. Theory, vol. IT-20, pp. 284-287, 1974.
[10] Chakrabartty, S. and Cauwenberghs, G., "Margin Propagation and Forward Decoding in Analog VLSI," Proc. IEEE Int. Symp. Circuits and Systems (ISCAS2004), Vancouver, Canada, May 23-26, 2004.
[11] Jaakkola, T. and Haussler, D., "Probabilistic kernel regression models," Proc. Seventh Int. Workshop Artificial Intelligence and Statistics, 1999.
[12] C. Diorio, P. Hasler, B. Minch and C.A. Mead, "A Single-Transistor Silicon Synapse," IEEE Trans. Electron Devices, vol. 43 (11), Nov. 1996.
[13] H. Loeliger, F. Lustenberger, M. Helfenstein and F. Tarköy, "Probability Propagation and Decoding in Analog VLSI," Proc. IEEE ISIT, 1998.
1,732 | 2,574 | The Rescorla-Wagner algorithm and Maximum
Likelihood estimation of causal parameters.
Alan Yuille
Department of Statistics
University of California at Los Angeles
Los Angeles, CA 90095
[email protected]
Abstract
This paper analyzes generalizations of the classic Rescorla-Wagner (RW) learning algorithm and studies their relationship to Maximum Likelihood estimation of causal parameters. We prove that the parameters of two popular causal models, ΔP and PC, can be learnt by the same
generalized linear Rescorla-Wagner (GLRW) algorithm provided genericity conditions apply. We characterize the fixed points of these GLRW
algorithms and calculate the fluctuations about them, assuming that the
input is a set of i.i.d. samples from a fixed (unknown) distribution. We
describe how to determine convergence conditions and calculate convergence rates for the GLRW algorithms under these conditions.
1 Introduction
There has recently been growing interest in models of causal learning formulated as probabilistic inference [1,2,3,4,5]. There has also been considerable interest in relating this work
to the Rescorla-Wagner learning model [3,5,6] (also known as the delta rule). In addition,
there are studies of the equilibria of the Rescorla-Wagner model [6].
This paper proves mathematical results about these related topics. In Section (2), we describe two influential models, ΔP and PC, for causal inference and how their parameters can be learnt by maximum likelihood estimation from training data. Section (3) introduces the generalized linear Rescorla-Wagner (GLRW) algorithm, characterizes its fixed points and quantifies its fluctuations. We demonstrate that a simple GLRW can estimate the ML parameters for both the ΔP and PC models provided certain genericity conditions are
satisfied. But the experimental conditions studied by Cheng [2] require a non-linear generalization of Rescorla-Wagner (Yuille, in preparation). Section (4) gives a way to determine
convergence conditions and calculate the convergence rates of GLRW algorithms. Finally
Section (5) sketches how the results in this paper can be extended to allow for an arbitrary
number of causes.
2 Causal Learning and Probabilistic Inference
The task is to estimate the causal effect of variables. There is an observed event E and
two causes C1 , C2 . Observers are asked to determine the causal power of the two causes.
The variables are binary-valued. E = 1 means the event occurs, E = 0 means it does
not. Similarly for causes C1 and C2 . Much of the work in this section can be generalized
to cases where there are an arbitrary number of causes C1 , C2 , ..., CN , see section (5).
The training data {(E^µ, C_1^µ, C_2^µ)} is assumed to be samples from an unknown distribution P_emp(E, C_1, C_2).
Two simple models, ΔP [1] and PC [2,3], have been proposed to account for how people estimate causal power. There is also a more recent theory based on model selection [4]. The ΔP and PC theories are equivalent to assuming probability distributions for how the training data is generated. Then the power of the causes is given by the maximum likelihood estimation of the distribution parameters ω_1, ω_2. The two theories correspond to probability distributions P_ΔP(E|C_1, C_2, ω_1, ω_2) and P_PC(E|C_1, C_2, ω_1, ω_2) given by:

P_ΔP(E = 1|C_1, C_2, ω_1, ω_2) = ω_1 C_1 + ω_2 C_2.  (ΔP model)        (1)
P_PC(E = 1|C_1, C_2, ω_1, ω_2) = ω_1 C_1 + ω_2 C_2 − ω_1 ω_2 C_1 C_2.  (PC model)        (2)

The latter is a noisy-or model. The event E = 1 can be caused by C_1 = 1 with probability ω_1, by C_2 = 1 with probability ω_2, or caused by both. The model can be derived by setting P_PC(E = 0|C_1, C_2, ω_1, ω_2) = (1 − ω_1 C_1)(1 − ω_2 C_2).
We assume that there is also a distribution on the causes P(C_1, C_2 | ρ⃗) which the observers also learn from the training data. This is equivalent to maximizing (with respect to ω_1, ω_2, ρ⃗):

P({(E^µ, C⃗^µ)}; ω⃗, ρ⃗) = Π_µ P(E^µ, C⃗^µ; ω⃗, ρ⃗) = Π_µ P(E^µ | C⃗^µ; ω⃗) P(C⃗^µ; ρ⃗).        (3)

By taking logarithms, we see that estimating ω_1, ω_2 and ρ⃗ are independent problems. So we will concentrate on estimating ω_1, ω_2.
If the training data {E^µ, C⃗^µ} is consistent with the model, i.e. there exist parameters ω_1, ω_2 such that P_emp(E|C_1, C_2) = P(E|C_1, C_2, ω_1, ω_2), then we can calculate the solution directly.

For the ΔP model, we have:

ω_1 = P_emp(E = 1|C_1 = 1, C_2 = 0) = P_emp(E = 1|C_1 = 1),
ω_2 = P_emp(E = 1|C_1 = 0, C_2 = 1) = P_emp(E = 1|C_2 = 1).        (4)

For the P_PC model, we obtain Cheng's measures of causality [2,3]:

ω_1 = [P_emp(E = 1|C_1 = 1, C_2) − P_emp(E = 1|C_1 = 0, C_2)] / [1 − P_emp(E = 1|C_1 = 0, C_2)],
ω_2 = [P_emp(E = 1|C_1, C_2 = 1) − P_emp(E = 1|C_1, C_2 = 0)] / [1 − P_emp(E = 1|C_1, C_2 = 0)].        (5)
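Both sets of estimates can be read off directly from empirical conditional frequencies. A minimal sketch follows; the notation is ours, and for (5) the conditioning is evaluated at C_2 = 0 (respectively C_1 = 0), which under the PC model (2) yields the same ratio for any value of the other cause.

```python
import numpy as np

def causal_power(E, C1, C2):
    """Return (omega1, omega2) under the DeltaP model (eq. 4) and under
    the PC model (eq. 5), from arrays of binary trials."""
    E, C1, C2 = map(np.asarray, (E, C1, C2))
    p = lambda mask: E[mask].mean()                   # P_emp(E = 1 | mask)
    delta_p = (p(C1 == 1), p(C2 == 1))                # eq. (4)
    base = p((C1 == 0) & (C2 == 0))
    pc = ((p((C1 == 1) & (C2 == 0)) - base) / (1 - base),   # eq. (5)
          (p((C1 == 0) & (C2 == 1)) - base) / (1 - base))
    return delta_p, pc
```

On data actually generated by P_PC, both components of pc recover (ω_1, ω_2), while delta_p in general does not, mirroring the discussion of the two models above.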
3 Generalized Linear Rescorla-Wagner
The Rescorla-Wagner model [7] is an alternative way to account for human learning. This
iterative algorithm specifies an update rule for weights. These weights could measure the
strength of a cause, such as the parameters of the Maximum Likelihood estimation. Following recent work [3,6], we seek to find relationships between generalized linear Rescorla-Wagner (GLRW) algorithms and ML estimation.
3.1 GLRW and two special cases
The Rescorla-Wagner algorithm updates weights {V⃗} using training data {E^µ, C⃗^µ}. It is of the form:

V⃗^{t+1} = V⃗^t + ΔV⃗^t.        (6)

In this paper, we are particularly concerned with two special cases for the choice of the update ΔV:

ΔV_1 = ε_1 C_1 (E − C_1 V_1 − C_2 V_2),  ΔV_2 = ε_2 C_2 (E − C_1 V_1 − C_2 V_2),  (basic)        (7)
ΔV_1 = ε_1 C_1 (1 − C_2)(E − V_1),  ΔV_2 = ε_2 C_2 (1 − C_1)(E − V_2),  (variant)        (8)

The first (7) is the basic RW algorithm. The second (8) is a variant of RW with a natural interpretation: a weight V_1 is updated only if one cause is present, C_1 = 1, and the other cause is absent, C_2 = 0.
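A short simulation of the variant rule (8) on data generated by the PC model anticipates the fixed-point results derived below: the weights settle near (ω_1, ω_2), with fluctuations controlled by the learning rates. All parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
w1, w2, eps = 0.6, 0.3, 0.05          # true causal powers, learning rate
V1 = V2 = 0.0
for _ in range(20000):
    C1, C2 = rng.integers(0, 2, size=2)
    pE = w1 * C1 + w2 * C2 - w1 * w2 * C1 * C2     # PC model, eq. (2)
    E = float(rng.random() < pE)
    V1 += eps * C1 * (1 - C2) * (E - V1)           # variant RW, eq. (8)
    V2 += eps * C2 * (1 - C1) * (E - V2)
print(V1, V2)   # close to (0.6, 0.3), up to O(sqrt(eps)) fluctuations
```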
The most general GLRW is of the form:

ΔV_i^t = Σ_{j=1}^N V_j^t f_ij(E^t, C⃗^t) + g_i(E^t, C⃗^t),  ∀i,        (9)

where {f_ij(·,·) : i, j = 1, ..., N} and {g_i(·) : i = 1, ..., N} are functions of the data samples E^µ, C⃗^µ.
3.2 GLRW and Stochastic Samples
Our analysis assumes that the data samples {(E^µ, C⃗^µ)} are independent identically distributed (i.i.d.) samples from an unknown distribution P_emp(E|C⃗)P(C⃗).

In this case, the GLRW becomes stochastic. It defines a distribution on weights which is updated as follows:

P(V⃗^{t+1}|V⃗^t) = ∫ dE^t dC⃗^t Π_{i=1}^N δ(V_i^{t+1} − V_i^t − ΔV_i^t) P(E^t, C⃗^t).        (10)

This defines a Markov chain. If certain conditions are satisfied (see Section (4)), the chain will converge to a fixed distribution P*(V). This distribution can be characterized by its expected mean ⟨V⟩* = Σ_V V P*(V) and its expected covariance Σ* = Σ_V (V − ⟨V⟩*)(V − ⟨V⟩*)^T P*(V). In other words, even after convergence the weights will fluctuate about the expected mean ⟨V⟩*, and the magnitude of the fluctuations will be given by the expected covariance.
3.3 What Does GLRW Converge to?
We now compute the means and covariance of the fixed point distribution P*(V⃗). We first do this for the GLRW, equation (9), and then we restrict ourselves to the two special cases, equations (7,8).

Theorem 1. The means V⃗* and the covariance Σ* of the fixed point distribution P*(V⃗), using the GLRW equation (9) and any empirical distribution P_emp(E, C⃗), are given by the solutions to the linear equations

Σ_{j=1}^N V_j* Σ_{E,C⃗} f_ij(E, C⃗) P_emp(E, C⃗) + Σ_{E,C⃗} g_i(E, C⃗) P_emp(E, C⃗) = 0,  ∀i,        (11)

and, ∀i, k,

Σ*_{ik} = Σ_{jl} Σ*_{jl} Σ_{E,C⃗} A_ij(E, C⃗) A_kl(E, C⃗) P_emp(E, C⃗) + Σ_{E,C⃗} B_i(E, C⃗) B_k(E, C⃗) P_emp(E, C⃗),        (12)

where A_ij(E, C⃗) = δ_ij + f_ij(E, C⃗) and B_i(E, C⃗) = Σ_j V_j* f_ij(E, C⃗) + g_i(E, C⃗) (here δ_ij is the Kronecker delta function, defined by δ_ij = 1 if i = j and δ_ij = 0 otherwise).

The means have a unique solution provided the matrix Σ_{E,C⃗} P_emp(E, C⃗) f_ij(E, C⃗) is invertible.
Proof. We derive the formula for the means V⃗* by taking the expectation of the update rule, see equation (9), with respect to P*(V⃗) and P_emp(E, C⃗). To calculate the covariances, we express the update rule as:

V_i^{t+1} − V_i* = Σ_j (V_j^t − V_j*) A_ij(E, C⃗) + B_i(E, C⃗),  ∀i,        (13)

with A_ij(E, C⃗) and B_i(E, C⃗) defined as above. Then we multiply both sides of equation (13) by their transpose (e.g. the left-hand side by (V_k^{t+1} − V_k*)) and take the expectation with respect to P*(V⃗) and P_emp(E, C⃗), making use of the result that the expected value of V_j^t − V_j* is zero as t → ∞.
We can apply these results to study the behaviour of the two special cases, equations (7,8), when the data is generated by either the ΔP or PC model.

First consider the basic RW algorithm (7) when the data is generated by the P_ΔP model. We can use Theorem 1 to rederive the result that ⟨V⃗⟩* = ω⃗ [3,6], and so basic RW performs ML estimation for the P_ΔP model. It also follows directly that if the data is generated by the P_PC model, then ⟨V⃗⟩* ≠ ω⃗ (although they are related by a nonlinear equation).
Now consider the variant RW, equation (8).

Theorem 2. The expected means of the fixed points of the variant RW equation (8), when the data is generated by probability model P_PC(E|C⃗; ω⃗) or P_ΔP(E|C⃗; ω⃗), are given by:

V_1* = ω_1,  V_2* = ω_2,        (14)

provided P_emp(C⃗) satisfies genericity conditions so that ⟨C_1(1 − C_2)⟩⟨C_2(1 − C_1)⟩ ≠ 0.

The expected covariances are given by:

Σ_11 = ω_1(1 − ω_1) ε_1/(2 − ε_1),  Σ_22 = ω_2(1 − ω_2) ε_2/(2 − ε_2),  Σ_12 = Σ_21 = 0.        (15)
Proof. This is a direct calculation of the quantities specified in Theorem 1. For example, we calculate the expected value of ΔV_1 first with respect to P(E|C⃗) and then with respect to P*(V). This gives:

⟨ΔV_1⟩_{P(E|C⃗)P*(V)} = ε_1 C_1(1 − C_2)(ω_1 − V_1),
⟨ΔV_2⟩_{P(E|C⃗)P*(V)} = ε_2 C_2(1 − C_1)(ω_2 − V_2),        (16)

where we have used Σ_V P*(V)V = V*, Σ_E E P_PC(E|C⃗) = ω_1 C_1 + ω_2 C_2 − ω_1 ω_2 C_1 C_2, and logical relations to simplify the terms (e.g. C_1² = C_1, C_1(1 − C_1) = 0).

Taking the expectation of ⟨ΔV_1⟩_{P(E|C⃗)P*(V)} with respect to P(C⃗) gives

ε_1 ω_1 ⟨C_1(1 − C_2)⟩_{P(C)} − ε_1 V_1* ⟨C_1(1 − C_2)⟩ = 0,
ε_2 ω_2 ⟨C_2(1 − C_1)⟩_{P(C)} − ε_2 V_2* ⟨C_2(1 − C_1)⟩ = 0,        (17)

and the result follows directly, except for non-generic cases where ⟨C_1(1 − C_2)⟩ = 0 or ⟨C_2(1 − C_1)⟩ = 0. These degenerate cases are analyzed separately.
It is perhaps surprising that the same GLRW algorithm can perform ML estimation when the data is generated by either model P_ΔP or P_PC (and this can be generalized, see Section (5)). Moreover, the expected covariance is the same for both models. Observe that the covariance decreases if we make the update coefficients ε_1, ε_2 of the algorithm small. The convergence rates are given in the next section.

The non-generic cases include the situation studied in [2] where C_1 is a background cause that is assumed to be always present, so ⟨C_1⟩ = 1. In this case V_1* = ω_1, but V_2* is unspecified. It can be shown (Yuille, in preparation) that a nonlinear generalization of RW can perform ML on this problem (but it is easy to check that no GLRW can). An even more ambiguous case occurs when ω_1 = 1 (i.e. cause C_1 always causes event E): then there is no way to estimate ω_2, and Cheng's measure of causality, equation (5), becomes undefined.
4 Convergence of Rescorla-Wagner
algorithm to converge and give the convergence rates. For simplicity, the results will be
illustrated only on the simple models.
Our results are based on the following theorem for the convergence of the state vector
of a stochastic iterative equation. The theorem gives necessary and sufficient conditions
for convergence, shows what the expected state vector converges to, and gives the rate of
convergence.
Theorem 3. Let z⃗_{t+1} = A_t z⃗_t be an iterative update equation, where z⃗ is a state vector and the update matrices A_t are i.i.d. samples from P(A). The convergence properties as t → ∞ depend on ⟨A⟩ = Σ_A A P(A). If ⟨A⟩ has a unit eigenvalue with eigenvector z⃗* and the next largest eigenvalue has modulus ν < 1, then lim_{t→∞} ⟨z⃗_t⟩ → z⃗* and the rate of convergence is e^{t log ν}. If the moduli of the eigenvalues of ⟨A⟩ are all less than 1, then lim_{t→∞} ⟨z⃗_t⟩ = 0. If ⟨A⟩ has an eigenvalue with modulus greater than 1, then ⟨z⃗_t⟩ diverges as t → ∞.
Proof. This is a standard result. To obtain it, write z⃗_{t+1} = A_t A_{t−1} ... A_1 z⃗_1, where z⃗_1 is the initial condition. Now take the expectation of z⃗_{t+1} with respect to the samples {A_t}. By the i.i.d. assumption, this gives ⟨z⃗_{t+1}⟩ = ⟨A⟩^t z⃗_1. The result follows by linear algebra. Let the eigenvectors and eigenvalues of ⟨A⟩ be {(ν_i, e⃗_i)}. Express the initial condition as z⃗_1 = Σ_i β_i e⃗_i, where the {β_i} are coefficients. Then ⟨z⃗_t⟩ = Σ_i β_i ν_i^t e⃗_i, and the result follows.
We use Theorem 3 to obtain convergence results for the GLRW algorithm. To ensure convergence, we need both the expected covariance and the expected means to converge. Then
Markov's lemma can be used to bound the fluctuations. (If we just require the expected means to converge, then the fluctuations of the weights may be infinitely large.) This can be done by a suitable choice of the state vector z⃗.

For simplicity of algebra, we demonstrate this for a GLRW algorithm with a single weight. The update rule is V_{t+1} = a_t V_t + b_t, where a_t, b_t are random samples. We define the state vector to be z⃗ = (V_t², V_t, 1).
Theorem 4. Consider the stochastic update rule V_{t+1} = a_t V_t + b_t, where a_t and b_t are samples from distributions P(a) and P(b). Define γ_1 = Σ_a a² P(a), γ_2 = Σ_a a P(a), β_1 = Σ_b b² P(b), β_2 = Σ_b b P(b), and ψ = 2 Σ_{a,b} a b P(a, b). The algorithm converges if, and only if, γ_1 < 1, γ_2 < 1. If so, then

lim_{t→∞} ⟨V_t⟩ = ⟨V⟩ = β_2/(1 − γ_2),
lim_{t→∞} ⟨(V_t − ⟨V⟩)²⟩ = [β_1(1 − γ_2) + ψβ_2] / [(1 − γ_1)(1 − γ_2)] − β_2²/(1 − γ_2)².

The convergence rate is {max{γ_1, |γ_2|}}^t.
Proof. Define z⃗_t = (V_t², V_t, 1) and express the update rule in matrix form:

(V_{t+1}², V_{t+1}, 1)^T = [ a_t²  2a_t b_t  b_t² ; 0  a_t  b_t ; 0  0  1 ] (V_t², V_t, 1)^T

This is of the form analyzed in Theorem 3 provided we set

A = [ a²  2ab  b² ; 0  a  b ; 0  0  1 ]   and   ⟨A⟩ = [ γ_1  ψ  β_1 ; 0  γ_2  β_2 ; 0  0  1 ].

The eigenvalues {ν} and eigenvectors {e⃗} of ⟨A⟩ are:

ν_1 = 1,   e⃗_1 ∝ ( [β_1(1 − γ_2) + ψβ_2] / [(1 − γ_1)(1 − γ_2)],  β_2/(1 − γ_2),  1 ),
ν_2 = γ_1,  e⃗_2 = (1, 0, 0),
ν_3 = γ_2,  e⃗_3 ∝ ( ψ/(γ_2 − γ_1),  1,  0 ).        (18)

The result follows from Theorem 3.
Observe that if |γ_2| < 1 but γ_1 > 1, then ⟨V_t⟩ will converge but the expected variance does not. The fluctuations in the GLRW algorithm will be infinitely large.
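In practice the convergence test of Theorem 4 amounts to estimating the moments of a_t and b_t, forming ⟨A⟩, and inspecting its spectrum. A sketch with our own names follows (the moments are estimated from paired samples):

```python
import numpy as np

def convergence_report(a, b):
    """Build <A> for z = (V^2, V, 1) and report the Theorem-4 quantities."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    g1, g2 = (a ** 2).mean(), a.mean()        # gamma_1, gamma_2
    b1, b2 = (b ** 2).mean(), b.mean()        # beta_1, beta_2
    psi = 2 * (a * b).mean()
    A = np.array([[g1, psi, b1],
                  [0., g2,  b2],
                  [0., 0.,  1.]])
    ok = g1 < 1 and abs(g2) < 1               # moduli below 1 (Theorem 3)
    mean_V = b2 / (1 - g2) if ok else np.nan
    var_V = ((b1 * (1 - g2) + psi * b2) / ((1 - g1) * (1 - g2))
             - b2 ** 2 / (1 - g2) ** 2) if ok else np.nan
    return np.linalg.eigvals(A), ok, mean_V, var_V
```

The returned eigenvalues are {1, γ_1, γ_2}, so the slowest rate max{γ_1, |γ_2|} can be read off directly.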
We can extend Theorem 4 to the variant of RW, equation (8). Let P = P_emp; then

γ_12 = Σ_{E,C⃗} P(E|C⃗)P(C⃗) C_1(1 − C_2),  γ_21 = Σ_{E,C⃗} P(E|C⃗)P(C⃗) C_2(1 − C_1),
µ_12 = Σ_{E,C⃗} P(E|C⃗)P(C⃗) E C_1(1 − C_2),  µ_21 = Σ_{E,C⃗} P(E|C⃗)P(C⃗) E C_2(1 − C_1).        (19)

If the data is generated by P_ΔP or P_PC, then γ_12, γ_21, µ_12, µ_21 take the same values:

γ_12 = ⟨C_1(1 − C_2)⟩,  γ_21 = ⟨(1 − C_1)C_2⟩,
µ_12 = ω_1⟨C_1(1 − C_2)⟩,  µ_21 = ω_2⟨(1 − C_1)C_2⟩.        (20)
Theorem 5. The algorithm specified by equation (8) converges provided ν* = max{|ν_2|, |ν_3|, |ν_4|, |ν_5|} < 1, where ν_2 = 1 − (2ε_1 − ε_1²)γ_12, ν_3 = 1 − (2ε_2 − ε_2²)γ_21, ν_4 = 1 − ε_1 γ_12, ν_5 = 1 − ε_2 γ_21. The convergence rate is e^{t log ν*}. The expected means and covariances can be calculated from the first eigenvector.

Proof. We define the state vector z⃗ = (V_1², V_2², V_1, V_2, 1) and derive the update matrix A from equation (8). The eigenvectors and eigenvalues can be calculated (calculations omitted due to space constraints). The eigenvalues are 1, ν_2, ν_3, ν_4, ν_5. The convergence conditions and rates follow from Theorem 3. The expected means and covariances can be calculated from the first eigenvector, which is:

e⃗_1 = ( 2(ε_1 − ε_1²)µ_12²/[(2ε_1 − ε_1²)γ_12²] + ε_1²µ_12/[(2ε_1 − ε_1²)γ_12],
        2(ε_2 − ε_2²)µ_21²/[(2ε_2 − ε_2²)γ_21²] + ε_2²µ_21/[(2ε_2 − ε_2²)γ_21],
        µ_12/γ_12,  µ_21/γ_21,  1 ),        (21)

and they agree with the calculations given in Theorem 2.
5 Generalization
two causes. For example, we can use the generalization of the P C model to include multi~ and preventative causes L,
~ [5] extending [2].
ple generative causes C
The probability distribution for this generalized P C model is:
n
Y
~ L;
~ ?
~ = {1 ?
PP C (E = 1|C,
~ , ?)
i=0
(1 ? ?i Ci )}
m
Y
(1 ? ?j Lj ),
(22)
j=1
where there are n + 1 generative causes {Ci } and m preventative causes {Lj } specified in
terms of parameters {?i } and {?j } (constrained to lie between 0 and 1).
We assume that there is a single background cause C_0 which is always on (i.e. C_0 = 1) and whose strength ω_0 is known (for relaxing this constraint, see Yuille, in preparation). Then it can be shown that the following GLRW algorithm will converge to the ML estimates of the remaining parameters {ω_i : i = 1, ..., n} and {β_j : j = 1, ..., m} of the generalized PC model:

ΔV_k^t = C_k { Π_{i=1}^m (1 − L_i) Π_{j=1, j≠k}^n (1 − C_j) } (E − ω_0 − (1 − ω_0)V_k^t),
ΔU_l^t = L_l { Π_{k=1, k≠l}^m (1 − L_k) Π_{j=1}^n (1 − C_j) } (E − ω_0 − ω_0 U_l^t),        (23)

where {V_k : k = 1, ..., n} and {U_l : l = 1, ..., m} are weights.
The proof is straightforward algebra and is based on the following identity for binary variables: Π_j (1 − β_j L_j) Π_j (1 − L_j) = Π_j (1 − L_j).
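A direct simulation of the update (23) illustrates the claim for the generative weights; the learning rate ε, the parameter values and the data-generation loop are our own illustrative additions, and the printout checks the generative weights only.

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.array([0.2, 0.7, 0.4])        # w[0] is the known background strength
beta = np.array([0.5])
n, m, eps = 2, 1, 0.05               # eps: illustrative learning rate
V, U = np.zeros(n), np.zeros(m)

for _ in range(50000):
    C = np.concatenate(([1], rng.integers(0, 2, n)))        # C_0 = 1 always on
    L = rng.integers(0, 2, m)
    pE = (1 - np.prod(1 - w * C)) * np.prod(1 - beta * L)   # eq. (22)
    E = float(rng.random() < pE)
    for k in range(n):                                      # eq. (23), generative
        gate = C[k + 1] * np.prod(1 - L) * np.prod(np.delete(1 - C[1:], k))
        V[k] += eps * gate * (E - w[0] - (1 - w[0]) * V[k])
    for l in range(m):                                      # eq. (23), preventative
        gate = L[l] * np.prod(np.delete(1 - L, l)) * np.prod(1 - C[1:])
        U[l] += eps * gate * (E - w[0] - w[0] * U[l])
print(V)   # approaches the true generative strengths (0.7, 0.4)
```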
The GLRW algorithm (23) will also perform ML estimation for data generated by other
probability distributions which share the same linear terms as the generalized PC model (i.e. the terms linear in the {ω_i} and {β_j}). The convergence conditions and the convergence rates can be calculated using the techniques in Section (4).

These results all assume genericity conditions so that none of the generative or preventative causes is either always on or always off (i.e. ruling out cases like [2]).
6 Conclusion
This paper introduced and studied generalized linear Rescorla-Wagner (GLRW) algorithms. We showed that two influential theories, ΔP and PC, for estimating causal effects can be implemented by the same GLRW, see (8). We obtained convergence results for GLRW, including classifying the fixed points, calculating the asymptotic fluctuations, and the convergence rates. Our results assume that the inputs to the GLRW are i.i.d. samples from an unknown empirical distribution P_emp(E, C⃗). Observe that the fluctuations of GLRW can be removed by introducing damping coefficients which decrease over time. Stochastic approximation theory [8] can then be used to give conditions for convergence.
More recent work (Yuille, in preparation) clarifies the class of maximum likelihood inference problems that can be "solved" by GLRW and by non-linear GLRW. In particular, we
show that a non-linear RW can perform ML estimation for the non-generic case studied by
Cheng. We also investigate similarities to Kalman filter models [9].
Acknowledgements
I thank Patricia Cheng, Peter Dayan and Yingnian Wu for helpful discussions. Anonymous
referees gave useful feedback that has motivated a follow-up paper. This work was partially
supported by an NSF SLC catalyst grant ?Perceptual Learning and Brain Plasticity? NSF
SBE-0350356.
References
[1]. B.A. Spellman. "Conditioning Causality". In D.R. Shanks, K.J. Holyoak, and D.L. Medin (eds). Causal Learning: The Psychology of Learning and Motivation, Vol. 34. San Diego, California. Academic Press. pp 167-206. 1996.
[2]. P. Cheng. "From Covariance to Causation: A Causal Power Theory". Psychological Review, 104, pp 367-405. 1997.
[3]. M. Buehner and P. Cheng. "Causal Induction: The power PC theory versus the Rescorla-Wagner theory". In Proceedings of the 19th Annual Conference of the Cognitive Science Society. 1997.
[4]. J.B. Tenenbaum and T.L. Griffiths. "Structure Learning in Human Causal Induction". Advances in Neural Information Processing Systems 12. MIT Press. 2001.
[5]. D. Danks, T.L. Griffiths, J.B. Tenenbaum. "Dynamical Causal Learning". Advances in Neural Information Processing Systems 14. 2003.
[6]. D. Danks. "Equilibria of the Rescorla-Wagner Model". Journal of Mathematical Psychology. Vol. 47, pp 109-121. 2003.
[7]. R.A. Rescorla and A.R. Wagner. "A Theory of Pavlovian Conditioning: Variations in the Effectiveness of Reinforcement and Nonreinforcement". In A.H. Black and W.F. Prokasy, eds. Classical Conditioning II: Current Research and Theory. New York. Appleton-Century-Crofts, pp 64-99. 1972.
[8]. H.J. Kushner and D.S. Clark. Stochastic Approximation for Constrained and Unconstrained Systems. New York. Springer-Verlag. 1978.
[9]. P. Dayan and S. Kakade. "Explaining away in weight space". In Advances in Neural Information Processing Systems 13. 2001.
1,733 | 2,575 | Comparing Beliefs, Surveys and Random Walks
Erik Aurell
SICS, Swedish Institute of Computer Science
P.O. Box 1263, SE-164 29 Kista, Sweden
and Dept. of Physics,
KTH ? Royal Institute of Technology
AlbaNova ? SCFAB SE-106 91 Stockholm, Sweden
[email protected]
Uri Gordon and Scott Kirkpatrick
School of Engineering and Computer Science
Hebrew University of Jerusalem
91904 Jerusalem, Israel
{guri,kirk}@cs.huji.ac.il
Abstract
Survey propagation is a powerful technique from statistical physics that
has been applied to solve the 3-SAT problem both in principle and in
practice. We give, using only probability arguments, a common derivation of survey propagation, belief propagation and several interesting hybrid methods. We then present numerical experiments which use WSAT
(a widely used random-walk based SAT solver) to quantify the complexity of the 3-SAT formulae as a function of their parameters, both as randomly generated and after simplification guided by survey propagation. Some properties of WSAT which have not previously been reported make it an ideal tool for this purpose: its mean cost is proportional to the number of variables in the formula (at a fixed ratio of clauses to variables) in the easy-SAT regime and slightly beyond, and its behavior in the hard-SAT regime appears to reflect the underlying structure of the solution
space that has been predicted by replica symmetry-breaking arguments.
An analysis of the tradeoffs between the various methods of search for
satisfying assignments shows WSAT to be far more powerful than has
been appreciated, and suggests some interesting new directions for practical algorithm development.
1 Introduction
Random 3-SAT is a classic problem in combinatorics, at the heart of computational complexity studies and a favorite testing ground for both exactly analyzable and heuristic solution methods which are then applied to a wide variety of problems in machine learning
and artificial intelligence. It consists of an ensemble of randomly generated logical expressions, each depending on N Boolean variables x_i, and constructed by taking the AND of M clauses. Each clause a consists of the OR of 3 "literals" y_{i,a}. y_{i,a} is taken to be either x_i or ¬x_i at random with equal probability, and the three values of the index i in each clause are distinct. Conversely, the neighborhood of a variable x_i is V_i, the set of all clauses in which x_i or ¬x_i appear. For each such random formula, one asks whether there is some set of x_i values for which the formula evaluates to TRUE. The ratio α = M/N controls the difficulty of this decision problem, and predicts the answer with high accuracy, at least as both N and M tend to infinity with their ratio held constant. At small α, solutions are easily found, while for sufficiently large α there are almost certainly no satisfying configurations of the x_i, and compact proofs of this fact can be constructed. Between these limits
lies a complex, spin-glass-like phase transition, at which the cost of analyzing the problem
with either exact or heuristic methods explodes.
A recent series of papers drawing upon the statistical mechanics of disordered materials
has not only clarified the nature of this transition, but also led to a thousand-fold increase in the size of the concrete problems that can be solved [1, 2, 3]. This paper provides a
derivation of the new methods using nothing more complex than probabilities, suggests
some generalizations, and reports numerical experiments that disentangle the contributions
of the several component heuristics employed. For two related discussions, see [4, 5].
An iterative "belief propagation" [6] (BP) algorithm for K-SAT can be derived to evaluate the probability, or "belief," that a variable will take the value TRUE in variable configurations that satisfy the formula considered. To calculate this, we first define a message ("transport") sent from a variable to a clause:

• t_{i→a} is the probability that variable x_i satisfies clause a

In the other direction, we define a message ("influence") sent from a clause to a variable:

• i_{a→i} is the probability that clause a is satisfied by a variable other than x_i
In 3-SAT, where clause a depends on variables x_i, x_j and x_k, BP gives the following iterative update equation for its influence:

i_{a→i}^{(l)} = t_{j→a}^{(l)} + t_{k→a}^{(l)} − t_{j→a}^{(l)} t_{k→a}^{(l)}        (1)
The BP update equations for the transport t_{i→a} involve the products of influences acting on a variable from the clauses which surround x_i, forming its "cavity," V_i, sorted by which literal (x_i or ¬x_i) appears in the clause:

A_i^0 = Π_{b∈V_i, y_{i,b}=¬x_i} i_{b→i}   and   A_i^1 = Π_{b∈V_i, y_{i,b}=x_i} i_{b→i}        (2)
The update equations are then

t_{i→a}^{(l)} = i_{a→i}^{(l−1)} A_i^1 / ( i_{a→i}^{(l−1)} A_i^1 + A_i^0 )   if y_{i,a} = ¬x_i,
t_{i→a}^{(l)} = i_{a→i}^{(l−1)} A_i^0 / ( i_{a→i}^{(l−1)} A_i^0 + A_i^1 )   if y_{i,a} = x_i.        (3)
The superscripts (l) and (l−1) denote iteration. The probabilistic interpretation is the following: suppose we have i_{b→i}^{(l)} for all clauses b connected to variable i. Each of these clauses can either be satisfied by another variable (with probability i_{b→i}^{(l)}), or not be satisfied by another variable (with probability 1 − i_{b→i}^{(l)}) and also be satisfied by variable i itself. If we set variable x_i to 0, then some clauses are satisfied by x_i, and some have to be satisfied by other variables. The probability that they are all satisfied is Π_{b≠a, y_{i,b}=x_i} i_{b→i}^{(l)}. Similarly, if x_i is set to 1 then all these clauses b are satisfied with probability Π_{b≠a, y_{i,b}=¬x_i} i_{b→i}^{(l)}.
The products in (3) can therefore be interpreted as joint probabilities of independent events.
Variable x_i can be 0 or 1 in a solution if the clauses in which x_i appears are either satisfied directly by x_i itself, or by other variables. Hence

Prob(x_i) = A_i^0 / (A_i^0 + A_i^1)   and   Prob(¬x_i) = A_i^1 / (A_i^0 + A_i^1)        (4)

A BP-based decimation scheme results from fixing the variables with the largest probability to be either true or false. We then recalculate the beliefs for the reduced formula, and repeat.
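A compact software rendering of the BP equations (1)-(4) follows. The message initialization, sweep order and data structures are our own choices rather than part of the derivation; literals are signed integers, so (1, -2, 3) encodes x_1 OR ¬x_2 OR x_3.

```python
import numpy as np
from collections import defaultdict

def bp_3sat(clauses, n_vars, iters=100):
    V = defaultdict(list)                    # V_i: (clause index, negated?) pairs
    for a, cl in enumerate(clauses):
        for l in cl:
            V[abs(l) - 1].append((a, l < 0))
    t = {(abs(l) - 1, a): 0.5 for a, cl in enumerate(clauses) for l in cl}
    i_msg = {(a, abs(l) - 1): 0.75 for a, cl in enumerate(clauses) for l in cl}

    def A(i):                                # cavity products, eq. (2)
        A0 = np.prod([i_msg[(b, i)] for b, neg in V[i] if neg])
        A1 = np.prod([i_msg[(b, i)] for b, neg in V[i] if not neg])
        return A0, A1

    for _ in range(iters):
        for a, cl in enumerate(clauses):     # influences, eq. (1)
            for l in cl:
                tj, tk = (t[(abs(k) - 1, a)] for k in cl if k != l)
                i_msg[(a, abs(l) - 1)] = tj + tk - tj * tk
        for a, cl in enumerate(clauses):     # transports, eq. (3)
            for l in cl:
                i = abs(l) - 1
                A0, A1 = A(i)
                ia = i_msg[(a, i)]
                t[(i, a)] = (ia * A1 / (ia * A1 + A0) if l < 0
                             else ia * A0 / (ia * A0 + A1))
    return np.array([A(i)[0] / sum(A(i)) for i in range(n_vars)])  # eq. (4)

print(bp_3sat([(1, -2, 3), (-1, 2, 3), (1, 2, -3)], n_vars=3))
```

Decimation then fixes the variable whose belief is farthest from 1/2 and reruns the updates on the simplified formula.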
To arrive at SP we introduce a modified system of beliefs: every variable falls into one of three classes: TRUE in all solutions (1); FALSE in all solutions (0); and TRUE in some and FALSE in other solutions (free). The message from a clause to a variable (an influence) is then the same as in BP above. Although we will again only need to keep track of one message from a variable to a clause (a transport), it is convenient to first introduce three ancillary messages:

• T̂_{i→a}(1) is the probability that variable x_i is true in clause a in all solutions
• T̂_{i→a}(0) is the probability that variable x_i is false in clause a in all solutions
• T̂_{i→a}(free) is the probability that variable x_i is true in clause a in some solutions and false in others.
Note that there are here three transports for each directed link i → a, from a variable to a clause, in the graph. As in BP, these numbers will be functions of the influences from clauses to variables in the preceding update step. Taking again the incoming influences independent, we have

T̂_{i→a}^{(l)}(free) ∝ Π_{b∈V_i\a} i_{b→i}^{(l−1)},
T̂_{i→a}^{(l)}(0) + T̂_{i→a}^{(l)}(free) ∝ Π_{b∈V_i\a, y_{i,b}=x_i} i_{b→i}^{(l−1)},
T̂_{i→a}^{(l)}(1) + T̂_{i→a}^{(l)}(free) ∝ Π_{b∈V_i\a, y_{i,b}=¬x_i} i_{b→i}^{(l−1)}.        (5)
The proportionality indicates that the probabilities are to be normalized. We see that the
structure is quite similar to that in BP. But we can make it closer still by introducing t_{i→a} with the same meaning as in BP. In SP it will then, as the case may be, be equal to T_{i→a}(free) + T_{i→a}(0) or T_{i→a}(free) + T_{i→a}(1). That gives (compare (3)):
(l)
ti?a
=
?
?
?
?
?
?
?
(l?1)
ia?i A1i
(l?1) 1
ia?i Ai +A0i ?A1i A0i
if yi,a = ?xi
(6)
(l?1)
ia?i A0i
(l?1)
ia?i A0i +A1i ?A1i A0i
if yi,a = xi
The update equation for the influence i_{a→i} is the same in SP as in BP, i.e., one uses (1) in SP as well. Similarly to (4), decimation now removes the most fixed variable, i.e., the one with the largest absolute value of (A_i^0 − A_i^1)/(A_i^0 + A_i^1 − A_i^1 A_i^0). Given the complexity of the original derivation of SP [1, 2], it is remarkable that the SP scheme can be interpreted as a type of belief propagation in another belief system, and even more remarkable that the final iteration formulae differ so little.
A modification of SP which we will consider in the following is to interpolate between BP (ρ = 0) and SP (ρ = 1)¹ by considering the equations

$$t^{(l)}_{i\to a} = \begin{cases} \dfrac{i^{(l-1)}_{a\to i}\,A_i^1}{i^{(l-1)}_{a\to i}\,A_i^1 + A_i^0 - \rho A_i^1 A_i^0} & \text{if } y_{i,a}=\neg x_i \\[2ex] \dfrac{i^{(l-1)}_{a\to i}\,A_i^0}{i^{(l-1)}_{a\to i}\,A_i^0 + A_i^1 - \rho A_i^1 A_i^0} & \text{if } y_{i,a}=x_i \end{cases} \qquad (7)$$

[Figure 1: Dependence of decimation depth on the interpolation parameter ρ. The plot shows the fraction of sites remaining after decimation as a function of α = M/N (from 3.5 to 4.4), for ρ = 0, 0.95, 1 and 1.05.]
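Since (3), (6) and (7) differ only in the denominator, a single routine can cover BP, SP and the hybrids. The sketch below is our transcription, with the same conventions as the earlier BP sketch; it also includes the decimation score introduced just above.

```python
def transport_update(inf_ai, A0, A1, sign, rho):
    """Transport t_{i->a} under the interpolated update of eq. (7):
    rho = 0 reproduces BP (eq. (3)), rho = 1 reproduces SP (eq. (6))."""
    if sign < 0:   # y_{i,a} = not(x_i)
        return inf_ai * A1 / (inf_ai * A1 + A0 - rho * A1 * A0)
    return inf_ai * A0 / (inf_ai * A0 + A1 - rho * A1 * A0)   # y_{i,a} = x_i

def sp_decimation_score(A0, A1):
    """SP's 'how frozen is this variable' score; decimation fixes the
    variable with the largest absolute value of this quantity."""
    return (A0 - A1) / (A0 + A1 - A1 * A0)
```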
We do not have an interpretation of the intermediate cases of ρ as belief systems.
2 The Phase Diagram of 3-SAT
Early work on developing 3-SAT heuristics discovered that as α is increased, the problem changes from being easy to solve to extremely hard, then again relatively easy when the formulae are almost certainly UNSAT. It was natural to expect that a sharp phase boundary between SAT and UNSAT phases in the limit of large N accompanies this observed "easy-hard-easy" transition, and the finite-size scaling results of [7] confirmed this. Their work placed the transition at about α = 4.2. Monasson and Zecchina [8] soon showed, using the replica method from statistical mechanics, that the phase transition to be expected had unusual characteristics, including "frozen variables" and a highly nonuniform distribution of solutions, making search difficult. Recent technical advances have made it possible to use simpler cavity mean field methods to pinpoint the SAT/UNSAT boundary at α = 4.267 and suggest that the "hard-SAT" region in which the solution space becomes inhomogeneous begins at about α = 3.92. These calculations also predicted a specific solution structure (termed 1-RSB for "one-step replica symmetry-breaking") [1, 2] in which the satisfiable configurations occur in large clusters, maximally separated from each other. Two types of frozen variables are predicted, one set which take the same value in all clusters and a second set whose value is fixed within a particular cluster. The remaining variables are "paramagnetic" and can take either value in some of the states of a given cluster. A careful analysis of the 1-RSB solution has subsequently shown that this extreme structure is only stable above α = 4.15. Between 3.92 and 4.15 a wider range of cluster sizes, and a wide range of inter-cluster Hamming distances, are expected [9]. As a result, we expect the values α = 3.9, 4.15 and 4.267 to separate regions in which the nature of the 3-SAT decision problem is distinctly different.
¹ This interpolation has also been considered and implemented by R. Zecchina and co-workers.
"Survey-induced decimation" consists of using SP to determine the variable most likely to be frozen, then setting that variable to the indicated frozen value, simplifying the formula as a result, updating the SP calculation, and repeating the process. For α < 3.9 we expect SP to discover that all spins are free to take on more than one value in some ground state, so no spins will be decimated. Above 3.9, SP ideally should identify frozen spins until all that remain are paramagnetic. The depth of decimation, or fraction of spins remaining when SP sees only paramagnetic spins, is thus an important characteristic. We show in Fig. 1 the fraction of spins remaining after survey-induced decimation for values of α from 3.85 to 4.35 in hundreds of formulae with N = 10,000. The error bars show the standard deviation, which becomes quite large for large values of α. To the left of α = 4.2, on the descending part of the curves, SP reaches a paramagnetic state and halts. On the right, or ascending portion of the curves, SP stops by simply failing to converge.
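The loop just described is straightforward to write down. In the sketch below, `run_sp` and `formula.fix` are hypothetical placeholders for an SP solver returning per-variable scores (A_i^0 − A_i^1)/(A_i^0 + A_i^1 − A_i^1 A_i^0) and for the clause-simplification step; the sign convention used when fixing a variable is our assumption.

```python
def survey_induced_decimation(formula, run_sp, paramagnetic_tol=1e-3):
    """Sketch of survey-induced decimation: run SP, fix the most frozen
    variable, simplify, repeat until paramagnetic or non-convergent."""
    while True:
        scores = run_sp(formula)       # hypothetical: None if surveys diverge
        if scores is None:
            return formula, "no-convergence"   # surveys ceased to converge
        if max(abs(s) for s in scores.values()) < paramagnetic_tol:
            return formula, "paramagnetic"     # hand the rest to WSAT
        var = max(scores, key=lambda v: abs(scores[v]))
        value = scores[var] > 0        # assumed convention: score > 0 => TRUE
        formula = formula.fix(var, value)      # hypothetical simplification
```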
Fig. 1 also shows how different the behavior of BP and the hybrids between BP and SP are in their decimation behavior. We studied BP (ρ = 0), underrelaxed SP (ρ = 0.95), SP, and overrelaxed SP (ρ = 1.05). BP and underrelaxed SP do not reach a paramagnetic state, but continue until the formula breaks apart into clauses that have no variables shared between them. We see in Fig. 1 that BP stops working at roughly α = 3.9, the point at which SP begins to operate. The underrelaxed SP behaves like BP, but can be used well into the RSB region. On the rising parts of all four curves in Fig. 1, the scheme halted as the surveys ceased to converge. Overrelaxed SP in Fig. 1 may give reasonable recommendations for simplification even on formulae which are likely to be UNSAT.
3 Some Background on WSAT
Next we consider WSAT, the random walk-based search routine used to finish the job of exhibiting a satisfying configuration after SP (or some other decimation advisor) has simplified the formula. The surprising power exhibited by SP has to some extent obscured the fact that WSAT is itself a very powerful tool for solving constraint satisfaction problems, and has been widely used for this. Its running time, expressed in the number of walk steps required for a successful search, is also useful as an informal definition of the complexity of a logical formula. Its history goes back to Papadimitriou's [10] observation that a subtly biased random walk would with high probability discover satisfying solutions in the simpler 2-SAT problem after, at worst, O(N²) steps. His procedure was to start with an arbitrary assignment of values to the binary variables, then reverse the sign of one variable at a time using the following random process:
- select an unsatisfied clause at random
- select at random a variable that appears in the clause
- reverse that variable
This procedure, sometimes called RWalkSAT, works because changing the sign of a variable in an unsatisfied clause always satisfies that clause and, at first, has no net effect on other clauses. It is much more powerful than was proven initially. Two recent papers [12, 13] have argued analytically and shown experimentally that RWalkSAT finds satisfying configurations of the variables after a number of steps that is proportional to N for values of α up to roughly 2.7, after which this cost increases exponentially with N.
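As a reference point, the RWalkSAT procedure above fits in a few lines; this sketch reuses the clause conventions from the BP code and is ours, not the original implementation.

```python
import random

def rwalksat(clauses, signs, n_vars, max_steps):
    """Papadimitriou's biased random walk: pick an unsatisfied clause,
    pick one of its variables at random, reverse it."""
    assign = [random.random() < 0.5 for _ in range(n_vars)]
    for step in range(max_steps):
        unsat = [a for a, (vs, sg) in enumerate(zip(clauses, signs))
                 if not any(assign[v] == (s > 0) for v, s in zip(vs, sg))]
        if not unsat:
            return assign, step          # satisfying configuration found
        a = random.choice(unsat)
        v = random.choice(clauses[a])
        assign[v] = not assign[v]        # this always satisfies clause a
    return None, max_steps
```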
The second trick in WSAT was introduced by Kautz and Selman [11]. They also choose an unsatisfied clause at random, but then reverse one of the "best" variables, selected at random, where "best" is defined as causing the fewest satisfied clauses to become unsatisfied. For robustness, they mix this greedy move with random moves as used in RWalkSAT, recommending an equal mixture of the two types of moves. Barthel et al. [13] used these two moves in numerical experiments, but found little improvement over RWalkSAT.
[Figure 2: (a) Median of WSAT cost per variable in 3-SAT as a function of α, for N = 1000, 2000, 5000, 10000 and 20000. (b) Variance of WSAT cost per variable, scaled by N.]
There is a third trick in the most often used variant of WSAT, introduced slightly later [14]. If any variable in the selected unsatisfied clause can be reversed without causing any other clauses to become unsatisfied, this "free" move is immediately accepted and no further exploration is required. Since we shall show that WSAT works well above α = 2.7, this third move apparently gives WSAT its extra power. Although these moves were chosen by the authors of WSAT after considerable experiment, we have no insight into why they should be the best choices.
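Putting the three tricks together, one WSAT move can be sketched as follows. This is our reconstruction from the description above, not Kautz's code; the 50/50 noise parameter follows the equal-mixture recommendation.

```python
import random

def wsat_pick_variable(clauses, signs, assign, a, p_noise=0.5):
    """Choose the variable to flip in unsatisfied clause a: free move if
    possible, otherwise random vs. greedy (fewest newly broken clauses)."""
    def break_count(v):
        # clauses that v alone currently satisfies; flipping v breaks them
        count = 0
        for vs, sg in zip(clauses, signs):
            if v in vs:
                satisfiers = [u for u, s in zip(vs, sg) if assign[u] == (s > 0)]
                if satisfiers == [v]:
                    count += 1
        return count
    candidates = list(clauses[a])
    bc = {v: break_count(v) for v in candidates}
    free = [v for v in candidates if bc[v] == 0]
    if free:                              # third trick: accept a free move
        return random.choice(free)
    if random.random() < p_noise:         # RWalkSAT-style random move
        return random.choice(candidates)
    best = min(bc.values())               # greedy move of Kautz and Selman
    return random.choice([v for v in candidates if bc[v] == best])
```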
In Fig. 2a, we show the median number of random walk steps per variable taken by the standard version of WSAT to solve 3-SAT formulas at values of α ranging from 0.5 to 4.3 and for formulae of sizes ranging from N = 1000 to N = 20000. The cost of WSAT remains linear in N well above α = 3.9. WSAT cost distributions were collected on at least 1000 cases at each point. Since the distributions are asymmetric, with strong tails extending to higher cost, it is not obvious that WSAT cost is, in the statistical mechanics language, self-averaging, or concentrated about a well-defined mean value which dominates the distribution as N → ∞. To test this, we calculated higher moments of the WSAT cost distribution and found that they scale with simple powers of N. For example, in Fig. 2b, we show that the variance of the WSAT cost per variable, scaled up by N, is a well-defined function of α up to almost 4.2. The third and fourth moments of the distribution (not shown) also are constant when multiplied by N and by N², respectively. The WSAT cost per variable is thus given by a distribution which concentrates with increasing N in exactly the way that a process governed by the usual laws of large numbers is expected to behave, even though the typical cost increases by six orders of magnitude as we move from the trivial cases to the critical regime.
A detailed analysis of the cost distributions which we observed will be published elsewhere, but we conclude that the median cost of solving 3-SAT using the WSAT random walk search, as well as the mean cost if that is well-defined, remains linear in N up to α = 4.15, coincidentally the onset of 1-RSB. In the 1-RSB regime, the WSAT cost per variable distributions shift to higher values as N increases, and an exponential increase in cost with N is likely. Is 4.15 really the endpoint for WSAT's linearity, or will the search cost per variable converge at still larger values of N which we could not study? We define a rough estimate of N_onset(α) by study of the cumulative distributions of WSAT cost, as the value of N for a given α above which the distributions cross at a fixed percentile. Plotting log(N_onset) against log(4.15 − α) in Fig. 3, we find strong indication that 4.15 is indeed an asymptote for WSAT.
[Figure 3: Size N_onset at which WSAT cost is linear in N, plotted (log-log) against 4.15 − α = 4.15 − M/N, for N = 1000 to 20000.]
Figure 4: WSAT cost, before and after SP-guided decimation.
4 Practical Aspects of SP + WSAT
The power of SP comes from its use to guide decimation by identifying spins which can be frozen while minimally reducing the number of solutions that can be constructed. To assess the complexity of the reduced formulae that decimation guided in this way produces, we compare, in Fig. 4, the median number of WSAT steps required to find a satisfying configuration of the variables before and after decimation. To a rough approximation, we can say that SP caps the cost of finding a solution to what it would be at the entry to the critical regime. There are two factors: the reduction in the number of variables that have to be searched, and the reduction of the distance the random walk must traverse when it is
restricted to a single cluster of solutions. In Fig. 4 the solid lines show the WSAT costs divided by N, the original number of variables in each formula. If we instead divide the WSAT cost after decimation by the number of variables remaining, the complexity measure that we obtain is only a factor of two larger, as shown by the dotted lines. The relative cost of running WSAT without benefit of decimation is 3-4 decades larger.
We measured the actual compute time consumed in survey propagation and in WSAT. For this we used the Zecchina group's version 1.3 survey propagation code, and the copy of WSAT (H. Kautz's release 35, see [15]) that they have also employed. All programs were run on a Pentium IV Xeon 3 GHz dual-processor server with 4 GB of memory, and only one processor busy. We compare timings from runs on the same 100 formulas with N = 10000 and α = 4.1 and 4.2 (the formulas are simply extended slightly for the second case). In the first case, the 100 formulas were solved using WSAT alone in 921 seconds. Using SP to guide decimation one variable at a time, with the survey updates performed locally around each modified variable, the same 100 formulas required 6218 seconds to solve, of which only 31 sec was spent in WSAT.
When we increase α to 4.2, the situation is reversed. Running WSAT on 100 formulas with N = 10000 required 27771 seconds on the same servers, and would have taken even longer if about half of the runs had not been stopped by a cutoff without producing a satisfying configuration. In contrast, the same 100 formulas were solved by SP followed by WSAT in 10,420 sec, of which only 300 seconds were spent in WSAT. The cost of SP does not scale linearly with N, but appears to scale as N² in this regime. We solved 100 formulas with N = 20,000 using SP followed by WSAT in 39643 seconds, of which 608 sec was spent in WSAT. The cost of running SP to decimate roughly half the spins has quadrupled, while the cost of the final WSAT runs remained proportional to N.
Decimation must stop short of the paramagnetic state at the highest values of α, to avoid having SP fail to converge. In those cases we found that WSAT could sometimes find satisfying configurations if started slightly before this point. We also explored partial decimation as a means of reducing the cost of WSAT just below the 1-RSB regime, but found that decimation of small fractions of the variables caused the WSAT running times to be highly unpredictable, in many cases increasing strongly. As a result, partial decimation does not seem to be a useful approach.
5 Conclusions and future work
The SP and related algorithms are quite new, so programming improvements may modify the practical conclusions of the previous section. However, a more immediate target for future work could be the WSAT algorithm. Further directing its random choices to incorporate the insights gained from BP and SP might make it an effective algorithm even closer to the SAT/UNSAT transition.
Acknowledgments
We have enjoyed discussions of this work with members of the replica and cavity theory community, especially Riccardo Zecchina, Alfredo Braunstein, Marc Mezard, Remi
Monasson and Andrea Montanari. This work was performed in the framework of EU/FP6
Integrated Project EVERGROW (www.evergrow.org), and in part during a Thematic Institute supported by the EXYSTENCE EU/FP5 network of excellence. E.A. acknowledges
support from the Swedish Science Council. S.K. and U.G. are partially supported by a
US-Israeli Binational Science Foundation grant.
References
[1] Mézard M., Parisi G. & Zecchina R. (2002) Analytic and Algorithmic Solutions of Random Satisfiability Problems. Science, 297:812-815.
[2] Mézard M. & Zecchina R. (2002) The random K-satisfiability problem: from an analytic solution to an efficient algorithm. Phys. Rev. E 66: 056126.
[3] Braunstein A., Mézard M. & Zecchina R. (2002) Survey propagation: an algorithm for satisfiability. arXiv:cs.CC/0212002.
[4] Parisi G. (2003) On the probabilistic approach to the random satisfiability problem. Proc. SAT 2003 and arXiv:cs.CC/0308010v1.
[5] Braunstein A. & Zecchina R. (2004) Survey Propagation as Local Equilibrium Equations. arXiv:cond-mat/0312483 v5.
[6] Pearl J. (1988) Probabilistic Reasoning in Intelligent Systems, 2nd Edition, Kaufmann.
[7] Kirkpatrick S. & Selman B. (1994) Critical Behaviour in the Satisfiability of Random Boolean Expressions. Science 264: 1297-1301.
[8] Monasson R. & Zecchina R. (1997) Statistical mechanics of the random K-SAT problem. Phys. Rev. E 56: 1357-1361.
[9] Montanari A., Parisi G. & Ricci-Tersenghi F. (2003) Instability of one-step replica-symmetry-broken phase in satisfiability problems. arXiv:cond-mat/0308147.
[10] Papadimitriou C.H. (1991) In FOCS 1991, p. 163.
[11] Selman B. & Kautz H.A. (1993) In Proc. AAAI-93, pp. 46-51.
[12] Semerjian G. & Monasson R. (2003) Phys. Rev. E 67: 066103.
[13] Barthel W., Hartmann A.K. & Weigt M. (2003) Phys. Rev. E 67: 066104.
[14] Selman B., Kautz H. & Cohen B. (1996) Local Search Strategies for Satisfiability Testing. DIMACS Series in Discrete Mathematics and Theoretical Computer Science 26.
[15] http://www.cs.washington.edu/homes/kautz/walksat/
Object Classification from a Single Example
Utilizing Class Relevance Metrics
Michael Fink
Interdisciplinary Center for Neural Computation
The Hebrew University, Jerusalem 91904, Israel
[email protected]
Abstract
We describe a framework for learning an object classifier from a single example. This goal is achieved by emphasizing the relevant dimensions for classification using available examples of related classes. Learning to accurately classify objects from a single training example is often unfeasible due to overfitting effects. However, if the instance representation provides that the distance between each two instances of the same class is smaller than the distance between any two instances from different classes, then a nearest neighbor classifier could achieve perfect performance with a single training example. We therefore suggest a two-stage strategy. First, learn a metric over the instances that achieves the distance criterion mentioned above, from available examples of other related classes. Then, using the single examples, define a nearest neighbor classifier where distance is evaluated by the learned class relevance metric. Finding a metric that emphasizes the relevant dimensions for classification might not be possible when restricted to linear projections. We therefore make use of a kernel based metric learning algorithm. Our setting encodes object instances as sets of locality based descriptors and adopts an appropriate image kernel for the class relevance metric learning. The proposed framework for learning from a single example is demonstrated in a synthetic setting and on a character classification task.
1 Introduction
We describe a framework for learning to accurately discriminate between two target classes of objects (e.g. platypuses and opossums) using a single image of each class. In general, learning to accurately classify object images from a single training example is unfeasible due to overfitting effects of high dimensional data. However, if a certain distance function over the instances guarantees that all within-class distances are smaller than any between-class distance, then nearest neighbor classification could achieve perfect performance with a single training example. We therefore suggest a two stage method. First, learn from available examples of other related classes (like beavers, skunks and marmots) a metric over the instance space that satisfies the distance criterion mentioned above. Then, define a nearest neighbor classifier based on the single examples. This nearest neighbor classifier calculates distance using the class relevance metric.

The difficulty in achieving robust object classification emerges from the instance variety of object appearance. This variability results from both class relevant and class non-relevant dimensions. For example, adding a stroke crossing the digit 7 adds variability due to a class relevant dimension (better discriminating 7's from 1's), while italic writing adds variability in a class irrelevant dimension. Often certain non-relevant dimensions could be avoided by the designer's method of representation (e.g. incorporating translation invariance). Since such guiding heuristics may be absent or misleading, object classification systems often use numerous positive examples for training, in an attempt to manage within-class variability. We are guided by the observation that in many settings providing an extended training set of certain classes might be costly or impossible due to scarcity of examples, thus motivating methods that suffice with few training examples.
Categories' appearance variety seems to inherently entail severe overfitting effects when only a small sample is available for training. In the extreme case of learning from a single example it appears that the effects of overfitting might prevent any robust category generalization. These overfitting effects tend to exacerbate as a function of the representation dimensionality.

In the spirit of the learning to learn literature [17], we try to overcome the difficulties that entail training from a single example by using available examples from several other related objects. Recently, it has been demonstrated that objects share distribution densities on deformation transforms [13], shape or appearance [6], and that objects could be detected by a common set of reusable features [1, 18]. We suggest that in many visual tasks it is natural to assume that one common set of constraints characterizes a common set of relevant and non-relevant dimensions shared by a specific family of related classes [10].
Our paper is organized as follows. In Sec. 2 we start by formalizing the task of training from a single example. Sec. 3 describes a kernel over sets of local features. We then describe in Sec. 4 a kernel based method for learning a pseudo-metric that is capable of emphasizing the relevant dimensions and diminishing the overfitting effects of non-relevant dimensions. By projecting the single examples using this class relevance pseudo-metric, learning from a single example becomes feasible. Our experimental implementation, described in Sec. 5, adopts shape context descriptors [3] of Latin letters to demonstrate the feasibility of learning from a single example. We conclude with a discussion on the scope and limitations of the proposed method.
2 Problem Setting
Let X be our object instance space and let u and v indicate two classes defined over X. Our goal is to generate a classifier h(x) which discriminates between instances of the two object classes u and v. Formally, h : X → {u, v} so that ∀x in class u, h(x) = u and ∀x in class v, h(x) = v. We adopt a local features representation for encoding object images. Thus, every x in our instance space is characterized by the set {l_j^i, p_j^i}_{j=1}^k where l_j^i is a locality based descriptor calculated at location p_j^i of image i.¹ We assume that l_j^i is encoded as a vector of length n and that the same number of locations k are selected from each image.² Thus any x in our instance space X is defined by an n × k matrix.

Our method uses a single instance from classes u and v as well as instances from other related classes. We denote by q the total number of classes. An example is formally defined as a pair (x, y) where x ∈ X is an instance and y ∈ {1, . . . , q} is the index of the instance's class. The proposed setting postulates that two sets are provided for training h(x):

¹ p_j^i might be selected from image i either randomly, or by a specialized interest point detector.
² This assumption could be relaxed as demonstrated in [16, 19].
- A single example of class u, (x, u), and a single example of class v, (x′, v)
- An extended sample {(x₁, y₁), . . . , (x_m, y_m)} of m ≫ 1 examples where x_i ∈ X and y_i ∉ {u, v} for all 1 ≤ i ≤ m.
We say that a set of classes is γ > 0 separated with respect to a distance function d if for any pair of examples belonging to the same class {(x₁, c), (x′₁, c)}, the distance d(x₁, x′₁) is smaller than the distance between any pair of examples from different classes {(x₂, e), (x′₂, g)} by at least γ:

$$d(x_1, x'_1) \le d(x_2, x'_2) - \gamma .$$
Recall that our goal is to generate a classifier h(x) which discriminates between instances of the two object classes u and v. In general, learning from a single example is prone to overfitting, yet if a set of classes is γ separated, a single example is sufficient for a nearest neighbor classifier to achieve perfect performance. Therefore our proposed framework is composed of two stages (see the sketch after this list):

1. Learn from the extended sample a distance function d that achieves γ separation on classes y ∉ {u, v}.
2. Learn a nearest neighbor classifier h(x) from the single examples, where the classifier employs d for evaluating distances.
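In outline, the two stages reduce to very little code. In this sketch `learn_metric` is a hypothetical stand-in for the metric-learning step of Sec. 4; only the structure of the framework is being illustrated.

```python
def train_single_example_classifier(extended_sample, x_u, x_v, learn_metric):
    """Stage 1: learn d from classes other than u, v.
    Stage 2: a nearest neighbor rule over the two single examples."""
    d = learn_metric(extended_sample)     # hypothetical: returns d(., .)

    def h(x):
        return "u" if d(x, x_u) <= d(x, x_v) else "v"
    return h
```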
From the theory of large margin classifiers we know that if a classifier achieves a large margin separation on an i.i.d. sample then it is likely to generalize well. We informally state that, analogously, if we find a distance function d such that the q − 2 classes that form the extended sample are separated by a large γ with respect to d, then with high probability classes u and v should exhibit the separation characteristic as well. If these assumptions hold and d indeed induces γ separation on classes u and v, then a nearest neighbor classifier would generalize well from a single training example of the target classes. It should be noted that when training from a single example, nearest neighbor, max margin and naive Bayes algorithms all yield the same classification rule. For simplicity we choose to focus on a nearest neighbor formulation. We will later show how the distance d might be parameterized by measuring Euclidean distance after applying a linear projection W to the original instance space. Classifying instances in the original instance space by comparing them to the target classes' single examples x and x′ leads to overfitting. In contrast, our approach projects the instance space by W and only then applies a nearest neighbor distance measurement to the projected single examples Wx and Wx′. Our method relies on the distance d, parameterized by W, to achieve γ separation on classes u and v. In certain problems it is not possible to achieve γ separation by using a distance function which is based on a linear transformation of the instance space. We therefore propose to initially map the instance space X into an implicit feature space defined by a Mercer kernel [20].
3 A Principal Angles Image Kernel
We dedicate this section to describing a Mercer kernel between sets of locality based image features {l_j^i, p_j^i}_{j=1}^k encoded as n × k matrices. Although potentially advantageous in many applications, one shortcoming in adopting locality based feature descriptors lies in the vagueness of matching two sets of corresponding locations p_j^i, p_{j'}^{i'} selected from different object images i and i′ (see Fig. 1). Recently attempts have been made to tackle this problem [19]; we choose to follow [20] by adopting the principal angles kernel approach, which implicitly maps x of size n × k to a significantly higher $\binom{n}{k}$-dimensional feature space φ(x) ∈ F. The principal angles kernel is formally defined as:

$$K(x_i, x_{i'}) = \langle \phi(x_i), \phi(x_{i'}) \rangle = \det(Q_i^\top Q_{i'})^2$$
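The kernel is a one-liner once the orthonormal bases are available; the following NumPy sketch is a direct transcription of the formula (not the authors' code).

```python
import numpy as np

def principal_angles_kernel(x_i, x_j):
    """K(x_i, x_j) = det(Q_i^T Q_j)^2 for two n x k instance matrices,
    with Q from a reduced QR decomposition of each matrix."""
    Q_i, _ = np.linalg.qr(x_i)   # orthonormal basis for the columns of x_i
    Q_j, _ = np.linalg.qr(x_j)
    return np.linalg.det(Q_i.T @ Q_j) ** 2
```

Note that permuting the columns of x_i changes Q_i only by right-multiplication with an orthogonal matrix, so the squared determinant, and hence the kernel value, is unchanged.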
[Figure 1: The 40 columns in each matrix encode 60-dimensional descriptors (detailed in Sec. 5) of three instances of the letter e. Although the objects are similar, the random sequence of sampling locations p_j^i entails column permutation, leading to apparently different matrices. Ignoring selection permutation by reshaping the matrices as vectors would further obscure the relevant similarity. A kernel applied to matrices that is invariant to column permutation can circumvent this problem.]
The columns of Q_i and Q_{i'} are each an orthonormal basis resulting from a QR decomposition of the instances x_i and x_{i'} respectively. One advantage of the principal angles kernel emerges from its invariance to column permutations of the instance matrices x_i and x_{i'}, thus circumventing the location matching problem stated above. Extensions of the principal angles kernel that have the additional capacity to incorporate knowledge on the accurate location matching might enhance the kernel's descriptive power [16].
4 Learning a Class Relevance Pseudo-Metric
In this section we describe the two-stage framework for learning from a single example to accurately classify classes u and v. We focus on transferring information from the extended sample of classes y ∉ {u, v} in the form of a learned pseudo-metric over X. For sake of clarity we will start by temporarily referring to the instance space X as a vector space, but later return to our original definition of instances in X as being matrices whose columns encode a selected set of locality based descriptors {l_j^i, p_j^i}_{j=1}^k.

A pseudo-metric is a function d : X × X → ℝ which satisfies three requirements: (i) d(x, x′) ≥ 0, (ii) d(x, x′) = d(x′, x), and (iii) d(x₁, x₂) + d(x₂, x₃) ≥ d(x₁, x₃). Following [14], we restrict ourselves to learning pseudo-metrics of the form

$$d_A(x, x') \equiv \sqrt{(x - x')^\top A (x - x')} ,$$

where A ⪰ 0 is a symmetric positive semi-definite (PSD) matrix. Since A is PSD, there exists a matrix W such that

$$(x - x')^\top A (x - x') = \| Wx - Wx' \|_2^2 .$$
Therefore, d_A(x, x′) is the Euclidean distance between the images of x and x′ under a linear transformation W. We now restate our goal as that of using the extended sample of classes y ∉ {u, v} in order to find a linear projection W that achieves γ separation by emphasizing the relevant dimensions for classification and diminishing the overfitting effects of non-relevant dimensions.
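Given such a W, the single-example classifier of the previous section becomes a nearest neighbor rule in the projected space. A minimal sketch for the vector-space view (W, x_u and x_v are whatever stage one and the single examples provide):

```python
import numpy as np

def nn_with_learned_metric(W, x_u, x_v):
    """1-NN in the class relevance space: Euclidean distance after the
    projection W, i.e. d_A with A = W^T W."""
    wu, wv = W @ x_u, W @ x_v            # project the single examples once

    def h(x):
        wx = W @ x
        return "u" if np.linalg.norm(wx - wu) <= np.linalg.norm(wx - wv) else "v"
    return h
```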
Several linear methods exist for finding a class relevance projection [2, 9], some of which have a kernel based variant [12]. Our method of choice, proposed by [14], is an online algorithm characterized by its capacity to efficiently handle high dimensional input spaces. In addition, the method's margin based approach is directly aimed at achieving our γ separation goal. We convert the online algorithm for finding A to our batch setting by averaging the resulting A over the algorithm's ℓ iterations [4].
Fig. 2 demonstrates how a class relevance pseudo-metric enables training a nearest neighbor classifier from a single example of two classes in a synthetic two dimensional setting.
Figure 2: A synthetic sample of six obliquely oriented classes in a two dimensional space (left). A class relevance metric is calculated from the (m = 200) examples of the four classes y ∉ {u, v} marked in gray. The examples of the target classes u and v, indicated in black, are not used in calculating the metric. After learning the pseudo-metric, all the instances of the six classes are projected to the class relevance space. Here distance measurements are performed between the four classes y ∉ {u, v}. The results are displayed as a color coded distance matrix (center-top). Throughout the paper distance matrix indices are ordered by class, so γ separated classes should appear as block diagonal matrices. Although not participating in calculating the pseudo-metric, classes u and v exhibit γ separation (center-bottom). After the class relevance projection, a nearest neighbor classifier will generalize well from any single example of classes u and v (right).
In the primal setting of the pseudo-metric learning, we temporarily addressed our instances x as vectors, thus enabling subtraction and dot product operations. These operations have no clear interpretation when applied to our representation of objects as sets of locality based descriptors {l_j^i, p_j^i}_{j=1}^k. However, the adopted pseudo-metric learning algorithm has a dual version, where interface to the data is limited to inner products. In the dual mode A is implicitly represented by a set of support example pairs {x_j, x′_j}_{j=1}^ℓ and by learning two sets of scalar coefficients {θ_h}_{h=1}^f and {α_{j,h}}_{(j,h)=(1,1)}^{(ℓ,f)}. Thus, applying the dual representation of the pseudo-metric, distances between instances x and x′ can be calculated by:

$$d_A(x, x')^2 = \sum_{h=1}^{f} \theta_h \Bigg( \sum_{j=1}^{\ell} \alpha_{j,h} \big[ K(x_j, x) - K(x_j, x') - K(x'_j, x) + K(x'_j, x') \big] \Bigg)^2$$
d_A(x, x′)² in the above equation is therefore evaluated by calling upon the principal angles kernel previously described in Sec. 3. Fig. 3 demonstrates how a class relevance pseudo-metric enables training from a single example in a classification problem where a nonlinear projection of the instance space is required for achieving a γ margin.
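For completeness, the dual distance above can be evaluated with nothing but kernel calls. In this sketch the coefficient names mirror the (reconstructed) notation, and the nested-list layout is our assumption.

```python
def dual_distance_sq(x, x_prime, support_pairs, theta, alpha, kernel):
    """Kernelized d_A(x, x')^2 from the dual expansion: support_pairs is
    [(x_j, x'_j)], theta[h] and alpha[j][h] are learned coefficients."""
    total = 0.0
    for h, th in enumerate(theta):
        inner = 0.0
        for j, (xj, xjp) in enumerate(support_pairs):
            inner += alpha[j][h] * (kernel(xj, x) - kernel(xj, x_prime)
                                    - kernel(xjp, x) + kernel(xjp, x_prime))
        total += th * inner ** 2
    return total
```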
5 Experiments
Sets of six lowercase Latin letters (i.e. e, n, t, f, h and c) are selected as target classes for our experiment (see examples in Fig. 4). The Latin character database [7] includes 60 examples of each letter. Two representations are examined. The first is a pixel based representation resulting from column-wise encoding the raw 36 × 36 pixel images as a vector of length 1296. Our second representation adopts the shape context descriptors for object encoding. This representation relies on a set of 40 locations p_j randomly sampled from the object contour. The descriptor of each location p_j is based on a 60-bin histogram (5 radius × 12 orientation bins) summing the number of "lit" pixels falling in each specific radius and orientation bin (using p_j as the origin). Each example in our instance space is therefore encoded as a 60 × 40 matrix. Three shape context descriptors are depicted in Fig. 4.
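The descriptor itself is easy to compute; the sketch below follows the 5 × 12 log-radius × orientation layout described above, with bin-edge choices that are our assumptions for illustration rather than the exact ones used in the experiment.

```python
import numpy as np

def shape_context(points, origin, r_bins=5, theta_bins=12, r_max=None):
    """60-bin shape context at `origin`: histogram of the other contour
    points over log-spaced radius bins and uniform orientation bins."""
    points = np.asarray(points, float)
    origin = np.asarray(origin, float)
    rel = points - origin
    rel = rel[np.any(rel != 0, axis=1)]        # drop the origin itself
    r = np.linalg.norm(rel, axis=1)
    theta = np.mod(np.arctan2(rel[:, 1], rel[:, 0]), 2 * np.pi)
    if r_max is None:
        r_max = r.max()
    r_edges = np.logspace(np.log10(r_max / 2 ** r_bins),
                          np.log10(r_max), r_bins + 1)
    t_edges = np.linspace(0, 2 * np.pi, theta_bins + 1)
    hist, _, _ = np.histogram2d(r, theta, bins=[r_edges, t_edges])
    return hist.flatten()                      # length r_bins * theta_bins = 60
```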
Figure 3: A synthetic sample of six co-centric classes in a two dimensional space (left). Two class relevance metrics are calculated from the examples (m = 200) of the four classes y ∉ {u, v} marked in gray, using either a linear or a second degree polynomial kernel. The examples of the target classes u and v, indicated in black, are not used in calculating the metrics. After learning both metrics, all the instances of the six classes are projected using both class relevance metrics. Then distance measurements are performed between the four classes y ∉ {u, v}. The resulting linear distance matrix (center-top) and polynomial distance matrix (right-top) seem qualitatively different. Classes u and v, not participating in calculating the pseudo-metric, exhibit γ separation only when using an appropriate kernel (right-bottom). A linear kernel cannot accommodate γ separation between co-centric classes (center-bottom).
Shape context descriptors have proven to be robust in many classification tasks [3] and avoid the common reliance on a detection of (the often elusive) interest points. In many writing systems letters tend to share a common underlying set of class relevant and non-relevant dimensions (Fig. 5, left). We therefore expect letters to be a good candidate for exhibiting that a class relevance pseudo-metric achieving a large margin γ would induce the distance separation characteristic on two additional letter classes in the same system.

We randomly select a single example of two letters (i.e. e and n) for training and save the remaining examples for testing. A nearest neighbor classifier is defined by the two examples, in order to assess baseline performance of training from a single example. A linear kernel is applied for the pixel based representation while the principal angles kernel is used for the shape context representation. Performance is assessed by averaging the generalization accuracy (on the unseen testing examples) over 900 repetitions of random letter selection. Baseline results for the shape context and pixel representations are depicted in Fig. 5 A and D, respectively (letter references to Fig. 5 appear on the right bar plot).
We now make use of the 60 examples of each of the remaining letters (i.e. t, f, h and c) in order to learn a distance over letters. The dual formulation of the pseudo-metric learning algorithm (described in Sec. 4) is implemented and run for 1000 iterations over random pairs selected from the 240 training examples (m = 4 classes × 60 examples). The same 900 example pairs used in the baseline testing are now projected using the letter metric. It is observed that the learned pseudo-metric approximates the separation goal on the two unseen target classes u and v (center plot of Fig. 5). A nearest neighbor classifier is then trained using the projected examples (Wx, Wx′) from classes u and v. Performance is assessed as in the baseline test. Results for the shape context based representation are presented in Fig. 5B while performance of the pixel based representation is depicted in Fig. 5E.
When training from a single example the lower dimensional pixel representation (of size 1296) displays less of an overfitting effect than the shape context representation paired with a principal angles kernel (implicitly mapped by the kernel from size 60 × 40 to size $\binom{60}{40}$). This effect could be seen when comparing Fig. 5D and Fig. 5A. It is not surprising that although some dimensions in the high dimensional shape context feature
[Figure 4 panels: letter images with histogram bin boundaries, and shape context histograms plotted as log(r) vs. orientation θ.]
Figure 4: Examples of six character classes used in the letter classification experiment (left). The context descriptor at location p is based on a 60-bin histogram (5 radius × 12 orientation bins) of all surrounding pixels, using p as the origin. Three examples of the letter e, depicted with the histogram bin boundaries (top) and three derived shape context histograms plotted as log(radius) × orientation bins (bottom). Note the similarity of the two shape context descriptors sampled from analogous locations on two different examples of the letter e (two bottom-center plots). The shape context of a descriptor sampled from a distant location is evidently different (right).
Figure 5: Letters in many writing systems, like uppercase Latin, tend to share a common underlying set of class relevant and non-relevant dimensions (left plot adapted from [5]). A class relevance pseudo-metric was calculated from four letters (i.e. t, f, h and c). The central plot depicts the distance matrix of the two target letters (i.e. e and n) after the class relevance pseudo-metric projection. The right plot presents average accuracy (bars range from 0.5 to 1) of classifiers trained on a single example of lowercase letters (i.e. e and n) in the following conditions: A. Shape Context Representation; B. Shape Context Representation after class relevance projection; C. Shape Context Representation after a projection derived from uppercase letters; D. Pixel Representation; E. Pixel Representation after class relevance projection; F. Pixel Representation after a projection derived from uppercase letters.
representation might exhibit superior performance in classification, increasing the representation dimensionality introduces numerous non-relevant dimensions, thus causing the substantial overfitting effects displayed in Fig. 5A. However, it appears that by projecting the single examples using the class relevance pseudo-metric, the class relevant dimensions are emphasized and the hindering effects of the non-relevant dimensions are diminished (displayed in Fig. 5B). It should be noted that a simple linear pseudo-metric projection cannot achieve the desired margin on the extended sample, and therefore seems not to generalize well from the single trial training stage. This phenomenon is manifested by the decrease in performance when linearly projecting the pixel based representation (Fig. 5E).

Our second experiment is aimed at examining the underlying assumptions of the proposed method. Following the same setting as in the first experiment, we randomly selected two lowercase Latin letters for the single trial training task, while applying a pseudo-metric projection derived from uppercase Latin letters. It is observed that utilizing a less relevant pseudo-metric attenuates the benefit in the setting based on the shape context representation paired with the principal angles kernel (Fig. 5C). In the linear pixel based setting, projecting lowercase letters onto the uppercase relevance directions significantly deteriorates performance (Fig. 5F), possibly due to deemphasizing the curves that characterize lowercase letters.
6 Discussion
We proposed a two stage method for classifying object images using a single example. Our approach first attempts to learn, from available examples of other related classes, a class relevance metric where all within-class distances are smaller than between-class distances. We then define a nearest neighbor classifier for the two target classes, using the class relevance metric. Our high dimensional representation applied a principal angles kernel [20] to sets of local shape descriptors [3]. We demonstrated that the increased representational dimension aggravates overfitting when learning from a single example. However, by learning the class relevance metric from available examples of related objects, relevant dimensions for classification are emphasized and the overfitting effects of irrelevant dimensions are diminished. Our technique thereby generates a highly accurate classifier from only a single example of the target classes. Varying the choice of local feature descriptors [11, 15] and enhancing the image kernel [16] might further improve the proposed method's generalization capacity in other object classification settings. We assume that our examples represent a set of classes that originate from a common set of constraints, thus imposing that the classes tend to agree on the relevance and non-relevance of different dimensions. Our assumption holds well for objects like textual characters [5]. It has been recently demonstrated that feature selection mechanisms can enable real-world object detection by a common set of shared features [18, 8]. These mechanisms are closely related to our framework when considering the common features as a subset of directions in our class relevance pseudo-metric. We therefore aim our current research at learning to classify more challenging objects.
References
[1] S. Krempp, D. Geman and Y. Amit. Sequential learning of reusable parts for object detection. Technical report, CS Johns Hopkins, 2002.
[2] A. Bar-Hillel, T. Hertz, N. Shental and D. Weinshall. Learning Distance Functions Using Equivalence Relations. Proc. ICML03, 2003.
[3] S. Belongie, J. Malik and J. Puzicha. Matching Shapes. Proc. ICCV, 2001.
[4] N. Cesa-Bianchi, A. Conconi and C. Gentile. On the generalization ability of on-line learning algorithms. IEEE Transactions on Information Theory. To appear, 2004.
[5] M.A. Changizi and S. Shimojo. Complexity and redundancy of writing systems, and implications for letter perception. Under review, 2004.
[6] L. Fei-Fei, R. Fergus and P. Perona. Learning generative visual models from few training examples. CVPR04 Workshop on Generative-Model Based Vision, 2004.
[7] M. Fink. A Latin Character Database. www.cs.huji.ac.il/~fink, 2004.
[8] M. Fink and K. Levi. Encoding Reusable Perceptual Features Enables Learning Future Categories from Few Examples. Tech Report CS HUJI, 2004.
[9] K. Fukunaga. Statistical Pattern Recognition. San Diego: Academic Press, 2nd Ed., 1990.
[10] K. Levi and M. Fink. Learning From a Small Number of Training Examples by Exploiting Object Categories. LCVPR04 Workshop on Learning in Computer Vision, 2004.
[11] D. G. Lowe. Object recognition from local scale-invariant features. Proc. ICCV99, 1999.
[12] S. Mika, G. Rätsch, J. Weston, B. Schölkopf and K. R. Müller. Fisher Discriminant Analysis with Kernels. Neural Networks for Signal Processing IX, 1999.
[13] E. Miller, N. Matsakis and P. Viola. Learning from One Example through Shared Densities on Transforms. Proc. CVPR00(1), 2000.
[14] S. Shalev-Shwartz, Y. Singer and A. Ng. Online and Batch Learning of Pseudo-Metrics. Proc. ICML04, 2004.
[15] M. J. Swain and D. H. Ballard. Color Indexing. IJCV 7(1), 1991.
[16] A. Shashua and T. Hazan. Threading Kernel Functions: Localized vs. Holistic Representations and the Family of Kernels over Sets of Vectors with Varying Cardinality. NIPS04, under review.
[17] S. Thrun and L. Pratt. Learning to Learn. Kluwer Academic Publishers, 1997.
[18] A. Torralba, K. Murphy and W. Freeman. Sharing features: efficient boosting procedures for multiclass object detection. Proc. CVPR04, 2004.
[19] C. Wallraven, B. Caputo and A. Graf. Recognition with Local Features: the Kernel Recipe. ICCV, 2003.
[20] L. Wolf and A. Shashua. Learning over Sets using Kernel Principal Angles. JMLR 4, 2003.
1,735 | 2,577 | Maximum Likelihood Estimation of
Intrinsic Dimension
Elizaveta Levina
Department of Statistics
University of Michigan
Ann Arbor MI 48109-1092
[email protected]
Peter J. Bickel
Department of Statistics
University of California
Berkeley CA 94720-3860
[email protected]
Abstract
We propose a new method for estimating intrinsic dimension of a
dataset derived by applying the principle of maximum likelihood to
the distances between close neighbors. We derive the estimator by
a Poisson process approximation, assess its bias and variance theoretically and by simulations, and apply it to a number of simulated
and real datasets. We also show it has the best overall performance
compared with two other intrinsic dimension estimators.
1
Introduction
There is a consensus in the high-dimensional data analysis community that the only
reason any methods work in very high dimensions is that, in fact, the data are not
truly high-dimensional. Rather, they are embedded in a high-dimensional space,
but can be efficiently summarized in a space of a much lower dimension, such as a
nonlinear manifold. Then one can reduce dimension without losing much information for many types of real-life high-dimensional data, such as images, and avoid
many of the ?curses of dimensionality?. Learning these data manifolds can improve
performance in classification and other applications, but if the data structure is
complex and nonlinear, dimensionality reduction can be a hard problem.
Traditional methods for dimensionality reduction include principal component analysis (PCA), which only deals with linear projections of the data, and multidimensional scaling (MDS), which aims at preserving pairwise distances and traditionally
is used for visualizing data. Recently, there has been a surge of interest in manifold
projection methods (Locally Linear Embedding (LLE) [1], Isomap [2], Laplacian
and Hessian Eigenmaps [3, 4], and others), which focus on finding a nonlinear
low-dimensional embedding of high-dimensional data. So far, these methods have
mostly been used for exploratory tasks such as visualization, but they have also
been successfully applied to classification problems [5, 6].
The dimension of the embedding is a key parameter for manifold projection methods: if the dimension is too small, important data features are "collapsed" onto the
same dimension, and if the dimension is too large, the projections become noisy
and, in some cases, unstable. There is no consensus, however, on how this dimension should be determined. LLE [1] and its variants assume the manifold dimension
is provided by the user. Isomap [2] provides error curves that can be ?eyeballed? to
estimate dimension. The charting algorithm, a recent LLE variant [7], uses a heuristic estimate of dimension which is essentially equivalent to the regression estimator
of [8] discussed below. Constructing a reliable estimator of intrinsic dimension and
understanding its statistical properties will clearly facilitate further applications of
manifold projection methods and improve their performance.
We note that for applications such as classification, cross-validation is in principle
the simplest solution ? just pick the dimension which gives the lowest classification error. However, in practice the computational cost of cross-validating for the
dimension is prohibitive, and an estimate of the intrinsic dimension will still be
helpful, either to be used directly or to narrow down the range for cross-validation.
In this paper, we present a new estimator of intrinsic dimension, study its statistical
properties, and compare it to other estimators on both simulated and real datasets.
Section 2 reviews previous work on intrinsic dimension. In Section 3 we derive the
estimator and give its approximate asymptotic bias and variance. Section 4 presents
results on datasets and compares our estimator to two other estimators of intrinsic
dimension. Section 5 concludes with discussion.
2
Previous Work on Intrinsic Dimension Estimation
The existing approaches to estimating the intrinsic dimension can be roughly divided into two groups: eigenvalue or projection methods, and geometric methods.
Eigenvalue methods, from the early proposal of [9] to a recent variant [10] are based
on a global or local PCA, with intrinsic dimension determined by the number of
eigenvalues greater than a given threshold. Global PCA methods fail on nonlinear
manifolds, and local methods depend heavily on the precise choice of local regions
and thresholds [11]. The eigenvalue methods may be a good tool for exploratory
data analysis, where one might plot the eigenvalues and look for a clear-cut boundary, but not for providing reliable estimates of intrinsic dimension.
The geometric methods exploit the intrinsic geometry of the dataset and are most
often based on fractal dimensions or nearest neighbor (NN) distances. Perhaps the
most popular fractal dimension is the correlation dimension [12, 13]: given a set
$S_n = \{x_1, \ldots, x_n\}$ in a metric space, define
$$C_n(r) = \frac{2}{n(n-1)} \sum_{i=1}^{n} \sum_{j=i+1}^{n} \mathbf{1}\{\|x_i - x_j\| < r\}. \qquad (1)$$
The correlation dimension is then estimated by plotting log Cn (r) against log r and
estimating the slope of the linear part [12]. A recent variant [13] proposed plotting
this estimate against the true dimension for some simulated data and then using
this calibrating curve to estimate the dimension of a new dataset. This requires a
different curve for each n, and the choice of calibration data may affect performance.
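To make the estimate concrete, here is a minimal Python sketch of (1); the radius grid and the use of the whole grid as the "linear part" of the log-log plot are our simplifying assumptions, not choices prescribed by [12, 13].

```python
import numpy as np

def correlation_dimension(X, radii):
    """Correlation dimension via Eq. (1): the slope of log C_n(r) on log r.
    X is an (n, p) array; O(n^2) memory, so intended for small samples."""
    diffs = X[:, None, :] - X[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    i, j = np.triu_indices(len(X), k=1)                  # each pair counted once
    pair_d = dists[i, j]
    C = np.array([(pair_d < r).mean() for r in radii])   # C_n(r)
    keep = C > 0                                         # avoid log(0)
    slope, _ = np.polyfit(np.log(radii[keep]), np.log(C[keep]), 1)
    return slope
```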
The capacity dimension and packing numbers have also been used [14]. While
the fractal methods successfully exploit certain geometric aspects of the data, the
statistical properties of these methods have not been studied.
The correlation dimension (1) implicitly uses NN distances, and there are methods
that focus on them explicitly. The use of NN distances relies on the following fact: if
X1 , . . . , Xn are an independent identically distributed (i.i.d.) sample from a density
f (x) in Rm , and Tk (x) is the Euclidean distance from a fixed point x to its k-th
NN in the sample, then
$$\frac{k}{n} \approx f(x)\,V(m)\,[T_k(x)]^m, \qquad (2)$$
where $V(m) = \pi^{m/2}[\Gamma(m/2+1)]^{-1}$ is the volume of the unit sphere in $\mathbb{R}^m$. That is,
the proportion of sample points falling into a ball around x is roughly f (x) times
the volume of the ball.
The relationship (2) can be used to estimate the dimension by regressing $\log \bar{T}_k$ on $\log k$ over a suitable range of $k$, where $\bar{T}_k = n^{-1}\sum_{i=1}^{n} T_k(X_i)$ is the average of distances from each point to its k-th NN [8, 11]. A comparison of this method to
a local eigenvalue method [11] found that the NN method suffered more from underestimating dimension for high-dimensional datasets, but the eigenvalue method
was sensitive to noise and parameter settings. A more sophisticated NN approach
was recently proposed in [15], where the dimension is estimated from the length of
the minimal spanning tree on the geodesic NN distances computed by Isomap.
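A sketch of the regression estimator of [8, 11] described above; the k-range and the use of scikit-learn's NearestNeighbors are our own implementation choices.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def regression_dimension(X, k_min=10, k_max=20):
    """Regress log T-bar_k on log k; by (2), the slope is roughly 1/m."""
    dist, _ = NearestNeighbors(n_neighbors=k_max + 1).fit(X).kneighbors(X)
    T_bar = dist[:, 1:].mean(axis=0)             # T-bar_k for k = 1..k_max
    ks = np.arange(k_min, k_max + 1)
    slope, _ = np.polyfit(np.log(ks), np.log(T_bar[ks - 1]), 1)
    return 1.0 / slope
```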
While there are certainly existing methods available for estimating intrinsic dimension, there are some issues that have not been adequately addressed. The behavior
of the estimators as a function of sample size and dimension is not well understood
or studied beyond the obvious "curse of dimensionality"; the statistical properties
of the estimators, such as bias and variance, have not been looked at (with the
exception of [15]); and comparisons between methods are not always presented.
3
A Maximum Likelihood Estimator of Intrinsic Dimension
Here we derive the maximum likelihood estimator (MLE) of the dimension m from
i.i.d. observations X1 , . . . , Xn in Rp . The observations represent an embedding of a
lower-dimensional sample, i.e., Xi = g(Yi ), where Yi are sampled from an unknown
smooth density f on Rm , with unknown m ? p, and g is a continuous and sufficiently
smooth (but not necessarily globally isometric) mapping. This assumption ensures
that close neighbors in Rm are mapped to close neighbors in the embedding.
The basic idea is to fix a point $x$, assume $f(x) \approx \text{const}$ in a small sphere $S_x(R)$ of radius $R$ around $x$, and treat the observations as a homogeneous Poisson process in $S_x(R)$. Consider the inhomogeneous process $\{N(t,x),\ 0 \le t \le R\}$,
$$N(t,x) = \sum_{i=1}^{n} \mathbf{1}\{X_i \in S_x(t)\} \qquad (3)$$
which counts observations within distance $t$ from $x$. Approximating this binomial (fixed $n$) process by a Poisson process and suppressing the dependence on $x$ for now, we can write the rate $\lambda(t)$ of the process $N(t)$ as
$$\lambda(t) = f(x)V(m)mt^{m-1}. \qquad (4)$$
This follows immediately from the Poisson process properties since $V(m)mt^{m-1} = \frac{d}{dt}[V(m)t^m]$ is the surface area of the sphere $S_x(t)$. Letting $\theta = \log f(x)$, we can write the log-likelihood of the observed process $N(t)$ as (see e.g., [16])
$$L(m, \theta) = \int_0^R \log \lambda(t)\, dN(t) - \int_0^R \lambda(t)\, dt.$$
This is an exponential family for which MLEs exist with probability $\to 1$ as $n \to \infty$ and are unique. The MLEs must satisfy the likelihood equations
$$\frac{\partial L}{\partial \theta} = \int_0^R dN(t) - \int_0^R \lambda(t)\, dt = N(R) - e^{\theta} V(m) R^m = 0, \qquad (5)$$
$$\frac{\partial L}{\partial m} = \left(\frac{1}{m} + \frac{V'(m)}{V(m)}\right) N(R) + \int_0^R \log t\, dN(t) - e^{\theta} V(m) R^m \left(\log R + \frac{V'(m)}{V(m)}\right) = 0. \qquad (6)$$
Substituting (5) into (6) gives the MLE for $m$:
$$\hat{m}_R(x) = \left[ \frac{1}{N(R,x)} \sum_{j=1}^{N(R,x)} \log \frac{R}{T_j(x)} \right]^{-1}. \qquad (7)$$
In practice, it may be more convenient to fix the number of neighbors $k$ rather than the radius of the sphere $R$. Then the estimate in (7) becomes
$$\hat{m}_k(x) = \left[ \frac{1}{k-1} \sum_{j=1}^{k-1} \log \frac{T_k(x)}{T_j(x)} \right]^{-1}. \qquad (8)$$
Note that we omit the last (zero) term in the sum in (7). One could divide by $k-2$ rather than $k-1$ to make the estimator asymptotically unbiased, as we show below. Also note that the MLE of $\theta$ can be used to obtain an instant estimate of the entropy of $f$, which was also provided by the method used in [15].
For some applications, one may want to evaluate local dimension estimates at every
data point, or average estimated dimensions within data clusters. We will, however,
assume that all the data points come from the same "manifold", and therefore
average over all observations.
The choice of k clearly affects the estimate. It can be the case that a dataset has
different intrinsic dimensions at different scales, e.g., a line with noise added to it
can be viewed as either 1-d or 2-d (this is discussed in detail in [14]). In such a
case, it is informative to have different estimates at different scales. In general,
for our estimator to work well the sphere should be small and contain sufficiently
many points, and we have work in progress on choosing such a k automatically.
For this paper, though, we simply average over a range of small to moderate values
$k = k_1, \ldots, k_2$ to get the final estimates
$$\hat{m}_k = \frac{1}{n} \sum_{i=1}^{n} \hat{m}_k(X_i), \qquad \hat{m} = \frac{1}{k_2 - k_1 + 1} \sum_{k=k_1}^{k_2} \hat{m}_k. \qquad (9)$$
The choice of $k_1$ and $k_2$ and the behavior of $\hat{m}_k$ as a function of $k$ are discussed further
in Section 4. The only parameters to set for this method are k1 and k2 , and the
computational cost is essentially the cost of finding k2 nearest neighbors for every
point, which has to be done for most manifold projection methods anyway.
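The full estimator (8)-(9) fits in a few lines; this sketch uses scikit-learn's NearestNeighbors and the defaults $k_1 = 10$, $k_2 = 20$ adopted in this paper, with our own function name.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mle_dimension(X, k1=10, k2=20):
    """Maximum likelihood intrinsic dimension estimate, Eqs. (8)-(9)."""
    dist, _ = NearestNeighbors(n_neighbors=k2 + 1).fit(X).kneighbors(X)
    dist = dist[:, 1:]                           # drop the zero self-distance
    m_k = []
    for k in range(k1, k2 + 1):
        # Eq. (8): per-point estimate is the inverse mean log distance ratio
        log_ratio = np.log(dist[:, k - 1:k] / dist[:, :k - 1])
        m_k.append(np.mean(1.0 / log_ratio.mean(axis=1)))  # average over points
    return float(np.mean(m_k))                   # Eq. (9): average over k
```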
3.1
Asymptotic behavior of the estimator for m fixed, $n \to \infty$.
Here we give a sketchy discussion of the asymptotic bias and variance of our estimator, to be elaborated elsewhere. The computations here are under the assumption
that $m$ is fixed, $n \to \infty$, $k \to \infty$, and $k/n \to 0$.
As we remarked, for a given $x$, if $n \to \infty$ and $R \to 0$, the inhomogeneous binomial process $N(t,x)$ in (3) converges weakly to the inhomogeneous Poisson process with rate $\lambda(t)$ given by (4). If we condition on the distance $T_k(x)$ and assume the Poisson approximation is exact, then $\{m \log(T_k/T_j) : 1 \le j \le k-1\}$ are distributed as the order statistics of a sample of size $k-1$ from a standard exponential distribution. Hence $U = m \sum_{j=1}^{k-1} \log(T_k/T_j)$ has a Gamma$(k-1, 1)$ distribution, and $EU^{-1} = 1/(k-2)$. If we use $k-2$ to normalize, then under these assumptions, to a first order approximation
$$E(\hat{m}_k(x)) = m, \qquad \mathrm{Var}(\hat{m}_k(x)) = \frac{m^2}{k-3}. \qquad (10)$$
As this analysis is asymptotic in both $k$ and $n$, the factor $(k-1)/(k-2)$ makes no difference. There are, of course, higher order terms, since $N(t,x)$ is in fact a binomial process with $EN(t,x) = \lambda(t)\left(1 + O(t^2)\right)$, where the $O(t^2)$ term depends on $m$.
With approximations (10), we have $E\hat{m} = E\hat{m}_k = m$, but the computation of $\mathrm{Var}(\hat{m})$ is complicated by the dependence among the $\hat{m}_k(X_i)$. We have a heuristic argument (omitted for lack of space) that, by dividing the $\hat{m}_k(X_i)$ into $n/k$ roughly independent groups of size $k$ each, the variance can be shown to be of order $n^{-1}$, as it would be if the estimators were independent. Our simulations confirm that this approximation is reasonable: for instance, for $m$-d Gaussians the ratio of the theoretical SD $= C(k_1,k_2)m/\sqrt{n}$ (where $C(k_1,k_2)$ is calculated as if all the terms in (9) were independent) to the actual SD of $\hat{m}$ was between 0.7 and 1.3 for the range of values of $m$ and $n$ considered in Section 4. The bias, however, behaves worse than the asymptotics predict, as we discuss further in Section 5.
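A quick Monte Carlo check of (10) is sketched below, under the same idealized assumptions; the slight upward bias of the mean reflects the $(k-1)/(k-2)$ factor discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, reps = 5, 500, 20, 200
est = []
for _ in range(reps):
    X = rng.standard_normal((n, m))                  # sample from an m-d Gaussian
    d = np.sqrt(((X[0] - X[1:]) ** 2).sum(axis=1))   # distances from the point X[0]
    T = np.sort(d)[:k]                               # T_1, ..., T_k
    est.append((k - 1) / np.log(T[-1] / T[:-1]).sum())   # Eq. (8) at x = X[0]
print(np.mean(est), np.var(est), m ** 2 / (k - 3))   # Var should be near m^2/(k-3)
```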
4
Numerical Results
[Figure 1 appears here: curves of the estimate $\hat{m}_k$ versus $k$; panel (a) legend: $n$ = 2000, 1000, 500, 200; panel (b) legend: $m$ = 20, 10, 5, 2.]
Figure 1: The estimator $\hat{m}_k$ as a function of $k$. (a) 5-dimensional normal for several
sample sizes. (b) Various m-dimensional normals with sample size n = 1000.
We first investigate the properties of our estimator in detail by simulations, and
then apply it to real datasets. The first issue is the behavior of $\hat{m}_k$ as a function of $k$. The results shown in Fig. 1 are for $m$-d Gaussians $N_m(0, I)$, and a similar pattern holds for observations in a unit cube, on a hypersphere, and on the popular "Swiss roll" manifold. Fig. 1(a) shows $\hat{m}_k$ for a 5-d Gaussian as a function of $k$ for several sample sizes $n$. For very small $k$ the approximation does not work yet and $\hat{m}_k$ is unreasonably high, but for $k$ as small as 10, the estimate is near the true value $m = 5$. The estimate shows some negative bias for large $k$, which decreases with growing sample size $n$, and, as Fig. 1(b) shows, increases with dimension. Note, however, that it is the intrinsic dimension $m$ rather than the embedding dimension $p \ge m$ that matters; and as our examples below and many examples elsewhere
show, the intrinsic dimension for real data is frequently low.
The plots in Fig. 1 show that the "ideal" range $k_1 \ldots k_2$ is different for every combination of $m$ and $n$, but the estimator is fairly stable as a function of $k$, apart
from the first few values. While fine-tuning the range k1 . . . k2 for different n is
possible and would reduce the bias, for simplicity and reproducibility of our results
we fix k1 = 10, k2 = 20 throughout this paper. In this range, the estimates are not
affected much by sample size or the positive bias for very small k, at least for the
range of m and n under consideration.
Next, we investigate an important and often overlooked issue of what happens when
the data are near a manifold as opposed to exactly on a manifold. Fig. 2(a) shows
simulation results for a 5-d correlated Gaussian with mean 0, and covariance matrix
$[\sigma_{ij}] = [\rho + (1-\rho)\delta_{ij}]$, with $\delta_{ij} = \mathbf{1}\{i = j\}$. As $\rho$ changes from 0 to 1, the dimension changes from 5 (full spherical Gaussian) to 1 (a line in $\mathbb{R}^5$), with intermediate values of $\rho$ providing noisy versions.
[Figure 2 appears here: panel (a) plots the MLE of dimension against $1-\rho$ on a log scale for $n$ = 100, 500, 1000, 2000; panel (b) plots estimated versus true dimension for the MLE, regression, and correlation dimension estimators.]
Figure 2: (a) Data near a manifold: estimated dimension for correlated 5-d normal
as a function of $1 - \rho$. (b) The MLE, regression, and correlation dimension for
uniform distributions on spheres with $n = 1000$. The three lines for each method
show the mean $\pm 2$ SD (95% confidence intervals) over 1000 replications.
The plots in Fig. 2(a) show that the MLE of dimension does not drop unless $\rho$ is very close to 1, so the estimate is not affected by whether the data cloud is spherical or elongated. For $\rho$ close to 1, when the dimension really drops, the
estimate depends significantly on the sample size, which is to be expected: n = 100
highly correlated points look like a line, but n = 2000 points fill out the space
around the line. This highlights the fundamental dependence of intrinsic dimension
on the neighborhood scale, particularly when the data may be observed with noise.
The MLE of dimension, while reflecting this dependence, behaves reasonably and
robustly as a function of both $\rho$ and $n$.
A comparison of the MLE, the regression estimator (regressing $\log \bar{T}_k$ on $\log k$), and the correlation dimension is shown in Fig. 2(b). The comparison is shown on uniformly distributed points on the surface of an $m$-dimensional sphere, but a similar pattern held in all our simulations. The regression range was held at $k = 10 \ldots 20$ (the same as the MLE) for fair comparison, and the regression for correlation dimension was based on the first $10 \ldots 100$ distinct values of $\log C_n(r)$, to reflect the fact that there are many more points for the $\log C_n(r)$ regression than for the $\log \bar{T}_k$ regression. We found in general that the correlation dimension graph can
have more than one linear part, and is more sensitive to the choice of range than
either the MLE or the regression estimator, but we tried to set the parameters for
all methods in a way that does not give an unfair advantage to any and is easily
reproducible.
The comparison shows that, while all methods suffer from negative bias for higher
dimensions, the correlation dimension has the smallest bias, with the MLE coming
in close second. However, the variance of correlation dimension is much higher
than that of the MLE (the SD is at least 10 times higher for all dimensions). The
regression estimator, on the other hand, has relatively low variance (though always
higher than the MLE) but the largest negative bias. On the balance of bias and
variance, MLE is clearly the best choice.
Figure 3: Two image datasets: hand rotation and Isomap faces (example images).
Table 1: Estimated dimensions for popular manifold datasets. For the Swiss roll, the table gives mean (SD) over 1000 uniform samples.

Dataset      Data dim.   Sample size   MLE          Regression   Corr. dim.
Swiss roll   3           1000          2.1 (0.02)   1.8 (0.03)   2.0 (0.24)
Faces        64 x 64     698           4.3          4.0          3.5
Hands        480 x 512   481           3.1          2.5          3.9 (footnote 1)
Finally, we compare the estimators on three popular manifold datasets (Table 1): the Swiss roll, and two image datasets shown in Fig. 3: the Isomap face database (footnote 2), and the hand rotation sequence (footnote 3) used in [14]. For the Swiss roll, the MLE again
provides the best combination of bias and variance.
The face database consists of images of an artificial face under three changing conditions: illumination, and vertical and horizontal orientation. Hence the intrinsic
dimension of the dataset should be 3, but only if we had the full 3-d images of the
face. All we have, however, are 2-d projections of the face, and it is clear that one
needs more than one "basis" image to represent different poses (from casual inspection, front view and profile seem sufficient). The estimated dimension of about 4 is
therefore very reasonable.
The hand image data is a real video sequence of a hand rotating along a 1-d curve in
space, but again several basis 2-d images are needed to represent different poses (in
this case, front, back, and profile seem sufficient). The estimated dimension around
3 therefore seems reasonable. We note that the correlation dimension provides two
completely different answers for this dataset, depending on which linear part of the
curve is used; this is further evidence of its high variance, which makes it a less
reliable estimate than the MLE.
5
Discussion
In this paper, we have derived a maximum likelihood estimator of intrinsic dimension and some asymptotic approximations to its bias and variance. We have shown
Footnote 1: This estimate is obtained from the range 500...1000. For this dataset, the correlation dimension curve has two distinct linear parts, with the first part over the range we would normally use, 10...100, producing dimension 19.7, which is clearly unreasonable.
Footnote 2: http://isomap.stanford.edu/datasets.html
Footnote 3: http://vasc.ri.cmu.edu//idb/html/motion/hand/index.html
that the MLE produces good results on a range of simulated and real datasets
and outperforms two other dimension estimators. It does, however, suffer from a
negative bias for high dimensions, which is a problem shared by all dimension estimators. One reason for this is that our approximation is based on sufficiently many
observations falling into a small sphere, and that requires very large sample sizes in
high dimensions (we shall elaborate and quantify this further elsewhere). For some
datasets, such as points in a unit cube, there is also the issue of edge effects, which
generally become more severe in high dimensions. One can potentially reduce the
negative bias by removing the edge points by some criterion, but we found that
the edge effects are small compared to the sample size problem, and we have been
unable to achieve significant improvement in this manner. Another option used by
[13] is calibration on simulated datasets with known dimension, but since the bias
depends on the sampling distribution, and a different curve would be needed for
every sample size, calibration does not solve the problem either. One should keep in
mind, however, that for most interesting applications intrinsic dimension will not be
very high ? otherwise there is not much benefit in dimensionality reduction; hence
in practice the MLE will provide a good estimate of dimension most of the time.
References
[1] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323-2326, 2000.
[2] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319-2323, 2000.
[3] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in NIPS, volume 14. MIT Press, 2002.
[4] D. L. Donoho and C. Grimes. Hessian eigenmaps: New locally linear embedding techniques for high-dimensional data. Technical Report TR 2003-08, Department of Statistics, Stanford University, 2003.
[5] M. Belkin and P. Niyogi. Using manifold structure for partially labelled classification. In Advances in NIPS, volume 15. MIT Press, 2003.
[6] M. Vlachos, C. Domeniconi, D. Gunopulos, G. Kollios, and N. Koudas. Non-linear dimensionality reduction techniques for classification and visualization. In Proceedings of 8th SIGKDD, pages 645-651. Edmonton, Canada, 2002.
[7] M. Brand. Charting a manifold. In Advances in NIPS, volume 14. MIT Press, 2002.
[8] K. W. Pettis, T. A. Bailey, A. K. Jain, and R. C. Dubes. An intrinsic dimensionality estimator from near-neighbor information. IEEE Trans. on PAMI, 1:25-37, 1979.
[9] K. Fukunaga and D. R. Olsen. An algorithm for finding intrinsic dimensionality of data. IEEE Trans. on Computers, C-20:176-183, 1971.
[10] J. Bruske and G. Sommer. Intrinsic dimensionality estimation with optimally topology preserving maps. IEEE Trans. on PAMI, 20(5):572-575, 1998.
[11] P. Verveer and R. Duin. An evaluation of intrinsic dimensionality estimators. IEEE Trans. on PAMI, 17(1):81-86, 1995.
[12] P. Grassberger and I. Procaccia. Measuring the strangeness of strange attractors. Physica, D9:189-208, 1983.
[13] F. Camastra and A. Vinciarelli. Estimating the intrinsic dimension of data with a fractal-based approach. IEEE Trans. on PAMI, 24(10):1404-1407, 2002.
[14] B. Kegl. Intrinsic dimension estimation using packing numbers. In Advances in NIPS, volume 14. MIT Press, 2002.
[15] J. Costa and A. O. Hero. Geodesic entropic graphs for dimension and entropy estimation in manifold learning. IEEE Trans. on Signal Processing, 2004. To appear.
[16] D. L. Snyder. Random Point Processes. Wiley, New York, 1975.
1,736 | 2,578 | Co-Training and Expansion: Towards Bridging
Theory and Practice
Maria-Florina Balcan
Computer Science Dept.
Carnegie Mellon Univ.
Pittsburgh, PA 15213
[email protected]
Avrim Blum
Computer Science Dept.
Carnegie Mellon Univ.
Pittsburgh, PA 15213
[email protected]
Ke Yang
Computer Science Dept.
Carnegie Mellon Univ.
Pittsburgh, PA 15213
[email protected]
Abstract
Co-training is a method for combining labeled and unlabeled data when
examples can be thought of as containing two distinct sets of features. It
has had a number of practical successes, yet previous theoretical analyses
have needed very strong assumptions on the data that are unlikely to be
satisfied in practice.
In this paper, we propose a much weaker "expansion" assumption on the
underlying data distribution, that we prove is sufficient for iterative cotraining to succeed given appropriately strong PAC-learning algorithms
on each feature set, and that to some extent is necessary as well. This
expansion assumption in fact motivates the iterative nature of the original co-training algorithm, unlike stronger assumptions (such as independence given the label) that allow a simpler one-shot co-training to succeed. We also heuristically analyze the effect on performance of noise in
the data. Predicted behavior is qualitatively matched in synthetic experiments on expander graphs.
1
Introduction
In machine learning, it is often the case that unlabeled data is substantially cheaper and
more plentiful than labeled data, and as a result a number of methods have been developed
for using unlabeled data to try to improve performance, e.g., [15, 2, 6, 11, 16]. Co-training
[2] is a method that has had substantial success in scenarios in which examples can be
thought of as containing two distinct yet sufficient feature sets. Specifically, a labeled example takes the form $(\langle x_1, x_2 \rangle, \ell)$, where $x_1 \in X_1$ and $x_2 \in X_2$ are the two parts of the example, and $\ell$ is the label. One further assumes the existence of two functions $c_1, c_2$ over the respective feature sets such that $c_1(x_1) = c_2(x_2) = \ell$. Intuitively, this means that each example contains two "views," and each view contains sufficient information to determine
the label of the example. This redundancy implies an underlying structure of the unlabeled
data (since they need to be ?consistent?), and this structure makes the unlabeled data informative. In particular, the idea of iterative co-training [2] is that one can use a small labeled
sample to train initial classifiers $h_1, h_2$ over the respective views, and then iteratively bootstrap by taking unlabeled examples $\langle x_1, x_2 \rangle$ for which one of the $h_i$ is confident but the other is not, and using the confident $h_i$ to label such examples for the learning algorithm
on the other view, improving the other classifier. As an example for webpage classification given in [2], webpages contain text (x1 ) and have hyperlinks pointing to them (x2 ).
From a small labeled sample, we might learn a classifier h2 that says that if a link with
the words "my advisor" points to a page, then that page is probably a positive example
of faculty-member-home-page; so, if we find an unlabeled example with this property we
can use h2 to label the page for the learning algorithm that uses the text on the page itself.
This approach and its variants have been used for a variety of learning problems, including
named entity classification [3], text classification [10, 5], natural language processing [13],
large scale document classification [12], and visual detectors [8].
Co-training effectively requires two distinct properties of the underlying data distribution
in order to work. The first is that there should at least in principle exist low error classifiers
c1 , c2 on each view. The second is that these two views should on the other hand not be too
highly correlated: we need to have at least some examples where h1 is confident but h2
is not (or vice versa) for the co-training algorithm to actually do anything. Unfortunately,
previous theoretical analyses have needed to make strong assumptions of this second type in
order to prove their guarantees. These include "conditional independence given the label" used by [2] and [4], or the assumption of "weak rule dependence" used by [1]. The primary
contribution of this paper is a theoretical analysis that substantially relaxes the strength
of this second assumption to just a form of "expansion" of the underlying distribution (a
natural analog of the graph-theoretic notions of expansion and conductance) that we show
in some sense is a necessary condition for co-training to succeed as well. However, we will
need a fairly strong assumption on the learning algorithms: that the hi they produce are
never "confident but wrong" (formally, the algorithms are able to learn from positive data
only), though we give a heuristic analysis of the case when this does not hold.
One key feature of assuming only expansion on the data is that it specifically motivates the
iterative nature of the co-training algorithm. Previous assumptions that had been analyzed
imply such a strong form of expansion that even a "one-shot" version of co-training will
succeed (see Section 2.2). In fact, the theoretical guarantees given in [2] are exactly of
this type. However, distributions can easily satisfy our weaker condition without allowing
one-shot learning to work as well, and we describe several natural situations of this form.
An additional property of our results is that they are algorithmic in nature. That is, if we
have sufficiently strong efficient PAC-learning algorithms for the target function on each
feature set, we can use them to achieve efficient PAC-style guarantees for co-training as
well. However, as mentioned above, we need a stronger assumption on our base learning
algorithms than used by [2] (see section 2.1).
We begin by formally defining the expansion assumption we will use, connecting it to standard graph-theoretic notions of expansion and conductance. We then prove the statement
that $\epsilon$-expansion is sufficient for iterative co-training to succeed, given strong enough base
learning algorithms over each view, proving bounds on the number of iterations needed to
converge. In Section 4.1, we heuristically analyze the effect of imperfect feature sets on
co-training accuracy. Finally, in Section 4.2, we present experiments on synthetic expander
graph data that qualitatively bear out our analyses.
2
Notations, Definitions, and Assumptions
We assume that examples are drawn from some distribution $D$ over an instance space $X = X_1 \times X_2$, where $X_1$ and $X_2$ correspond to two different "views" of an example. Let $c$ denote the target function, and let $X^+$ and $X^-$ denote the positive and negative regions of $X$ respectively (for simplicity we assume we are doing binary classification). For most of this paper we assume that each view in itself is sufficient for correct classification; that is, $c$ can be decomposed into functions $c_1, c_2$ over each view such that $D$ has no probability mass on examples $x$ such that $c_1(x_1) \neq c_2(x_2)$. For $i \in \{1, 2\}$, let $X_i^+ = \{x_i \in X_i : c_i(x_i) = 1\}$, so we can think of $X^+$ as $X_1^+ \times X_2^+$, and let $X_i^- = X_i \setminus X_i^+$. Let $D^+$ and $D^-$ denote the marginal distribution of $D$ over $X^+$ and $X^-$ respectively.
In order to discuss iterative co-training, we need to be able to talk about a hypothesis
being confident or not confident on a given example. For convenience, we will identify
"confident" with "confident about being positive". This means we can think of a hypothesis $h_i$ as a subset of $X_i$, where $x_i \in h_i$ means that $h_i$ is confident that $x_i$ is positive, and $x_i \notin h_i$ means that $h_i$ has no opinion.
As in [2], we will abstract away the initialization phase of co-training (how labeled data is
used to generate an initial hypothesis) and assume we are given initial sets $S_1^0 \subseteq X_1^+$ and $S_2^0 \subseteq X_2^+$ such that $\Pr_{\langle x_1,x_2 \rangle \sim D}(x_1 \in S_1^0 \text{ or } x_2 \in S_2^0) \geq \rho_{init}$ for some $\rho_{init} > 0$. The
goal of co-training will be to bootstrap from these sets using unlabeled data.
Now, to prove guarantees for iterative co-training, we make two assumptions: that the
learning algorithms used in each of the two views are able to learn from positive data only,
and that the distribution D+ is expanding as defined in Section 2.2 below.
2.1 Assumption about the base learning algorithms on the two views
We assume that the learning algorithms on each view are able to PAC-learn from positive
data only. Specifically, for any distribution $D_i^+$ over $X_i^+$, and any given $\epsilon, \delta > 0$, given access to examples from $D_i^+$ the algorithm should be able to produce a hypothesis $h_i$ such that (a) $h_i \subseteq X_i^+$ (so $h_i$ only has one-sided error), and (b) with probability $1 - \delta$, the error of $h_i$ under $D_i^+$ is at most $\epsilon$. Algorithms of this type can be naturally thought of as predicting either "positive with confidence" or "don't know", fitting our framework. Examples of
concept classes learnable from positive data only include conjunctions, k-CNF, and axis-parallel rectangles; see [7]. For instance, for the case of axis-parallel rectangles, a simple
algorithm that achieves this guarantee is just to output the smallest rectangle enclosing the
positive examples seen.
If we wanted to consider algorithms that could be confident in both directions (rather than
just confident about being positive) we could instead use the notion of "reliable, useful"
learning due to Rivest and Sloan [14]. However, fewer classes of functions are learnable
in this manner. In addition, a nice feature of our assumption is that we will only need D+
to expand and not D? . This is especially natural if the positive class has a large amount
of cohesion (e.g, it consists of all documents about some topic Y ) but the negatives do not
(e.g., all documents about all other topics). Note that we are effectively assuming that our
algorithms are correct when they are confident; we relax this in our heuristic analysis in
Section 4.
2.2 The expansion assumption for the underlying distribution
For $S_1 \subseteq X_1$ and $S_2 \subseteq X_2$, let boldface $\mathbf{S}_i$ ($i = 1, 2$) denote the event that an example $\langle x_1, x_2 \rangle$ has $x_i \in S_i$. So, if we think of $S_1$ and $S_2$ as our confident sets in each view, then $\Pr(\mathbf{S}_1 \wedge \mathbf{S}_2)$ denotes the probability mass on examples for which we are confident about both views, and $\Pr(\mathbf{S}_1 \oplus \mathbf{S}_2)$ denotes the probability mass on examples for which we are confident about just one. In this section, all probabilities are with respect to $D^+$. We say:

Definition 1  $D^+$ is $\epsilon$-expanding if for any $S_1 \subseteq X_1^+$, $S_2 \subseteq X_2^+$, we have
$$\Pr(\mathbf{S}_1 \oplus \mathbf{S}_2) \geq \epsilon \min\left[\Pr(\mathbf{S}_1 \wedge \mathbf{S}_2),\ \Pr(\overline{\mathbf{S}}_1 \wedge \overline{\mathbf{S}}_2)\right].$$
We say that $D^+$ is $\epsilon$-expanding with respect to hypothesis class $H_1 \times H_2$ if the above holds for all $S_1 \in H_1 \cap X_1^+$, $S_2 \in H_2 \cap X_2^+$ (here we denote by $H_i \cap X_i^+$ the set $\{h \cap X_i^+ : h \in H_i\}$ for $i = 1, 2$).
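For intuition, expansion can be checked by brute force on a toy finite distribution over positive examples; the distribution below is a made-up illustration, and the enumeration is exponential in the support size, so this is only for tiny examples.

```python
from itertools import chain, combinations

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def expansion(dist):
    """Largest eps with Pr(S1 xor S2) >= eps * min(Pr(S1 and S2), Pr(~S1 and ~S2))
    over all S1, S2; dist maps (x1, x2) -> probability under D+."""
    x1s = {x1 for x1, _ in dist}
    x2s = {x2 for _, x2 in dist}
    eps = float('inf')
    for S1 in map(set, powerset(x1s)):
        for S2 in map(set, powerset(x2s)):
            both = sum(p for (a, b), p in dist.items() if a in S1 and b in S2)
            neither = sum(p for (a, b), p in dist.items()
                          if a not in S1 and b not in S2)
            xor = 1.0 - both - neither
            if min(both, neither) > 0:          # other pairs are vacuous
                eps = min(eps, xor / min(both, neither))
    return eps

# toy two-cluster example: views usually agree, occasionally cross over
print(expansion({(0, 0): 0.4, (1, 1): 0.4, (0, 1): 0.1, (1, 0): 0.1}))
```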
To get a feel for this definition, notice that $\epsilon$-expansion is in some sense necessary for iterative co-training to succeed, because if $S_1$ and $S_2$ are our confident sets and do not expand, then we might never see examples for which one hypothesis could help the other (see footnote 1). In Section 3 we show that Definition 1 is in fact sufficient. To see how much weaker this definition is than previously-considered requirements, it is helpful to consider a slightly stronger kind of expansion that we call "left-right expansion".
Definition 2  We say $D^+$ is $\epsilon$-right-expanding if for any $S_1 \subseteq X_1^+$, $S_2 \subseteq X_2^+$,
if $\Pr(\mathbf{S}_1) \leq 1/2$ and $\Pr(\mathbf{S}_2 | \mathbf{S}_1) \geq 1 - \epsilon$, then $\Pr(\mathbf{S}_2) \geq (1 + \epsilon) \Pr(\mathbf{S}_1)$.
Footnote 1: However, $\epsilon$-expansion requires every pair to expand and so it is not strictly necessary. If there
were occasional pairs (S1 , S2 ) that did not expand, but such pairs were rare and unlikely to be encountered as confident sets in the co-training process, we might still be OK.
We say $D^+$ is $\epsilon$-left-expanding if the above holds with indices 1 and 2 reversed. Finally, $D^+$ is $\epsilon$-left-right-expanding if it has both properties.
It is not immediately obvious but left-right expansion in fact implies Definition 1 (see Appendix A), though the converse is not necessarily true. We introduce this notion, however,
for two reasons. First, it is useful for intuition: if $S_i$ is our confident set in $X_i^+$ and this set is small ($\Pr(\mathbf{S}_i) \leq 1/2$), and we train a classifier that learns from positive data on the conditional distribution that $S_i$ induces over $X_{3-i}$ until it has error $\leq \epsilon$ on that distribution, then the definition implies the confident set on $X_{3-i}$ will have noticeably larger probability than $S_i$; so it is clear why this is useful for co-training, at least in the initial stages. Secondly, this notion helps clarify how our assumptions are much less restrictive than those
considered previously. Specifically,
Independence given the label: Independence given the label implies that for any $S_1 \subseteq X_1^+$ and $S_2 \subseteq X_2^+$ we have $\Pr(\mathbf{S}_2|\mathbf{S}_1) = \Pr(\mathbf{S}_2)$. So, if $\Pr(\mathbf{S}_2|\mathbf{S}_1) \geq 1 - \epsilon$, then $\Pr(\mathbf{S}_2) \geq 1 - \epsilon$ as well, even if $\Pr(\mathbf{S}_1)$ is tiny. This means that not only does $S_1$ expand by a $(1 + \epsilon)$ factor as in Def. 2, but in fact it expands to nearly all of $X_2^+$.

Weak dependence: Weak dependence [1] is a relaxation of conditional independence that requires only that for all $S_1 \subseteq X_1^+$, $S_2 \subseteq X_2^+$ we have $\Pr(\mathbf{S}_2|\mathbf{S}_1) \geq \gamma \Pr(\mathbf{S}_2)$ for some $\gamma > 0$. This seems much less restrictive. However, notice that if $\Pr(\mathbf{S}_2|\mathbf{S}_1) \geq 1 - \epsilon$, then $\Pr(\overline{\mathbf{S}}_2|\mathbf{S}_1) \leq \epsilon$, which implies by definition of weak dependence that $\Pr(\overline{\mathbf{S}}_2) \leq \epsilon/\gamma$ and therefore $\Pr(\mathbf{S}_2) \geq 1 - \epsilon/\gamma$. So, again (for sufficiently small $\epsilon$), even if $S_1$ is very small, it expands to nearly all of $X_2^+$. This
means that, as with conditional independence, if one has an algorithm over X2 that
PAC-learns from positive data only, and one trains it over the conditional distribution given by S1 , then by driving down its error on this conditional distribution
one can perform co-training in just one iteration.
2.2.1 Connections to standard graph-theoretic notions of expansion
Our definition of $\epsilon$-expansion (Definition 1) is a natural analog of the standard graph-theoretic notion of edge-expansion or conductance. A Markov chain is said to have high conductance if under the stationary distribution, for any set of states $S$ of probability at most 1/2, the probability mass on transitions exiting $S$ is at least $\epsilon$ times the probability of $S$. E.g., see [9]. A graph has high edge-expansion if the random walk on the graph has high conductance. Since the stationary distribution of this walk can be viewed as having equal probability on every edge, this is equivalent to saying that for any partition of the graph into two pieces $(S, V - S)$, the number of edges crossing the partition should be at least an $\epsilon$ fraction of the number of edges in the smaller half. To connect this to Definition 1, think of $S$ as $\mathbf{S}_1 \wedge \mathbf{S}_2$.
It is well-known that, for example, a random degree-3 bipartite graph with high probability
is expanding, and this in fact motivates our synthetic data experiments of Section 4.2.
2.2.2 Examples
We now give two simple examples that satisfy $\epsilon$-expansion but not weak dependence.

Example 1: Suppose $X = \mathbb{R}^d \times \mathbb{R}^d$ and the target function on each view is an axis-parallel rectangle. Suppose a random positive example from $D^+$ looks like a pair $\langle x_1, x_2 \rangle$ such that $x_1$ and $x_2$ are each uniformly distributed in their rectangles but in a highly-dependent way: specifically, $x_2$ is identical to $x_1$ except that a random coordinate has been "re-randomized" within the rectangle. This distribution does not satisfy weak dependence (for any sets $S$ and $T$ that are disjoint along all axes we have $\Pr(\mathbf{T}|\mathbf{S}) = 0$) but it is not hard to verify that $D^+$ is $\epsilon$-expanding for $\epsilon = \Theta(1/d)$.
Example 2: Imagine that we have a learning problem such that the data in X1 falls into n
different clusters: the positive class is the union of some of these clusters and the negative
class is the union of the others. Imagine that this likewise is true if we look at X2 and for
simplicity suppose that every cluster has the same probability mass. Independence given
the label would say that given that x1 is in some positive cluster Ci in X1 , x2 is equally
likely to be in any of the positive clusters Cj in X2 . But, suppose we have something much
weaker: each $C_i$ in $X_1$ is associated with only 3 $C_j$'s in $X_2$ (i.e., given that $x_1$ is in $C_i$, $x_2$ will only be in one of these $C_j$'s). This distribution clearly will not even have the weak
dependence property. However, say we have a learning algorithm that assumes everything
in the same cluster has the same label (so the hypothesis space H consists of all rules that
do not split clusters). Then if the graph of which clusters are associated with which is an
expander graph, then the distributions will be expanding with respect to H. In particular,
given a labeled example $x$, the learning algorithm will generalize to $x$'s entire cluster $C_i$,
then this will be propagated over to nodes in the associated clusters Cj in X2 , and so on.
3
The Main Result
We now present our main result. We assume that $D^+$ is $\epsilon$-expanding ($\epsilon > 0$) with respect to hypothesis class $H_1 \times H_2$, that we are given initial confident sets $S_1^0 \subseteq X_1^+$, $S_2^0 \subseteq X_2^+$ such that $\Pr(\mathbf{S}_1^0 \vee \mathbf{S}_2^0) \geq \rho_{init}$, that the target function can be written as $\langle c_1, c_2 \rangle$ with $c_1 \in H_1$, $c_2 \in H_2$, and that on each of the two views we have algorithms $A_1$ and $A_2$ for learning from positive data only.

The iterative co-training that we consider proceeds in rounds. Let $S_1^i \subseteq X_1$ and $S_2^i \subseteq X_2$ be the confident sets in each view at the start of round $i$. We construct $S_2^{i+1}$ by feeding into $A_2$ examples according to $D_2$ conditioned on $\mathbf{S}_1^i \vee \mathbf{S}_2^i$. That is, we take unlabeled examples from $D$ such that at least one of the current predictors is confident, and feed them into $A_2$ as if they were positive. We run $A_2$ with error and confidence parameters given in the theorem below. We simultaneously do the same with $A_1$, creating $S_1^{i+1}$.

After a pre-determined number of rounds $N$ (specified in Theorem 1), the algorithm terminates and outputs the predictor that labels examples $\langle x_1, x_2 \rangle$ as positive if $x_1 \in S_1^{N+1}$ or $x_2 \in S_2^{N+1}$ and negative otherwise.
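In schematic terms, the procedure is a short loop; the learner/confidence interfaces below (a fit method returning a confidence predicate) are our own abstraction of $A_1, A_2$, not an API from the paper.

```python
def cotrain(unlabeled, learners, init_confident, rounds, eps, delta):
    """Iterative co-training sketch.  learners[i].fit(examples, eps, delta)
    is assumed to return a one-sided-error confidence predicate S_i."""
    confident = list(init_confident)             # [S_1^0, S_2^0] as predicates
    for _ in range(rounds):
        # examples on which at least one current predictor is confident
        pool = [(x1, x2) for (x1, x2) in unlabeled
                if confident[0](x1) or confident[1](x2)]
        # train each view on the pool, treating it as positive-only data
        h1 = learners[0].fit([x1 for x1, _ in pool], eps, delta)
        h2 = learners[1].fit([x2 for _, x2 in pool], eps, delta)
        confident = [h1, h2]
    # final predictor: positive if either view is confident
    return lambda x1, x2: confident[0](x1) or confident[1](x2)
```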
We begin by stating two lemmas that will be useful in our analysis. For both of these
lemmas, let $S_1, T_1 \subseteq X_1^+$, $S_2, T_2 \subseteq X_2^+$, where $S_j, T_j \in H_j$. All probabilities are with
respect to D+ .
Lemma 1  Suppose $\Pr(\mathbf{S}_1 \wedge \mathbf{S}_2) \leq \Pr(\overline{\mathbf{S}}_1 \wedge \overline{\mathbf{S}}_2)$, $\Pr(\mathbf{T}_1 | \mathbf{S}_1 \vee \mathbf{S}_2) \geq 1 - \epsilon/8$ and $\Pr(\mathbf{T}_2 | \mathbf{S}_1 \vee \mathbf{S}_2) \geq 1 - \epsilon/8$. Then $\Pr(\mathbf{T}_1 \wedge \mathbf{T}_2) \geq (1 + \epsilon/2) \Pr(\mathbf{S}_1 \wedge \mathbf{S}_2)$.

Proof: From $\Pr(\mathbf{T}_1 | \mathbf{S}_1 \vee \mathbf{S}_2) \geq 1 - \epsilon/8$ and $\Pr(\mathbf{T}_2 | \mathbf{S}_1 \vee \mathbf{S}_2) \geq 1 - \epsilon/8$ we get that $\Pr(\mathbf{T}_1 \wedge \mathbf{T}_2) \geq (1 - \epsilon/4) \Pr(\mathbf{S}_1 \vee \mathbf{S}_2)$. Since $\Pr(\mathbf{S}_1 \wedge \mathbf{S}_2) \leq \Pr(\overline{\mathbf{S}}_1 \wedge \overline{\mathbf{S}}_2)$, it follows from the expansion property that
$$\Pr(\mathbf{S}_1 \vee \mathbf{S}_2) = \Pr(\mathbf{S}_1 \wedge \mathbf{S}_2) + \Pr(\mathbf{S}_1 \oplus \mathbf{S}_2) \geq (1 + \epsilon) \Pr(\mathbf{S}_1 \wedge \mathbf{S}_2).$$
Therefore, $\Pr(\mathbf{T}_1 \wedge \mathbf{T}_2) \geq (1 - \epsilon/4)(1 + \epsilon) \Pr(\mathbf{S}_1 \wedge \mathbf{S}_2)$, which implies that $\Pr(\mathbf{T}_1 \wedge \mathbf{T}_2) \geq (1 + \epsilon/2) \Pr(\mathbf{S}_1 \wedge \mathbf{S}_2)$.
Lemma 2  Suppose $\Pr(\mathbf{S}_1 \wedge \mathbf{S}_2) > \Pr(\overline{\mathbf{S}}_1 \wedge \overline{\mathbf{S}}_2)$ and let $\alpha = 1 - \Pr(\mathbf{S}_1 \wedge \mathbf{S}_2)$. If $\Pr(\mathbf{T}_1 | \mathbf{S}_1 \vee \mathbf{S}_2) \geq 1 - \epsilon\alpha/8$ and $\Pr(\mathbf{T}_2 | \mathbf{S}_1 \vee \mathbf{S}_2) \geq 1 - \epsilon\alpha/8$, then $\Pr(\mathbf{T}_1 \wedge \mathbf{T}_2) \geq (1 + \epsilon\alpha/8) \Pr(\mathbf{S}_1 \wedge \mathbf{S}_2)$.

Proof: From $\Pr(\mathbf{T}_1 | \mathbf{S}_1 \vee \mathbf{S}_2) \geq 1 - \epsilon\alpha/8$ and $\Pr(\mathbf{T}_2 | \mathbf{S}_1 \vee \mathbf{S}_2) \geq 1 - \epsilon\alpha/8$ we get that $\Pr(\mathbf{T}_1 \wedge \mathbf{T}_2) \geq (1 - \epsilon\alpha/4) \Pr(\mathbf{S}_1 \vee \mathbf{S}_2)$. Since $\Pr(\mathbf{S}_1 \wedge \mathbf{S}_2) > \Pr(\overline{\mathbf{S}}_1 \wedge \overline{\mathbf{S}}_2)$, it follows from the expansion property that $\Pr(\mathbf{S}_1 \oplus \mathbf{S}_2) \geq \epsilon \Pr(\overline{\mathbf{S}}_1 \wedge \overline{\mathbf{S}}_2)$. Therefore
$$\alpha = \Pr(\overline{\mathbf{S}}_1 \wedge \overline{\mathbf{S}}_2) + \Pr(\mathbf{S}_1 \oplus \mathbf{S}_2) \geq (1 + \epsilon) \Pr(\overline{\mathbf{S}}_1 \wedge \overline{\mathbf{S}}_2) = (1 + \epsilon)\left(1 - \Pr(\mathbf{S}_1 \vee \mathbf{S}_2)\right),$$
and so $\Pr(\mathbf{S}_1 \vee \mathbf{S}_2) \geq 1 - \frac{\alpha}{1+\epsilon}$. This implies $\Pr(\mathbf{T}_1 \wedge \mathbf{T}_2) \geq (1 - \frac{\epsilon\alpha}{4})(1 - \frac{\alpha}{1+\epsilon}) \geq (1 - \alpha)(1 + \frac{\epsilon\alpha}{8})$. So, we have $\Pr(\mathbf{T}_1 \wedge \mathbf{T}_2) \geq (1 + \epsilon\alpha/8) \Pr(\mathbf{S}_1 \wedge \mathbf{S}_2)$.
Theorem 1  Let $\epsilon_{fin}$ and $\delta_{fin}$ be the (final) desired accuracy and confidence parameters. Then we can achieve error rate $\epsilon_{fin}$ with probability $1 - \delta_{fin}$ by running co-training for $N = O\left(\frac{1}{\epsilon}\log\frac{1}{\epsilon_{fin}} + \frac{1}{\epsilon}\cdot\frac{1}{\rho_{init}}\right)$ rounds, each time running $A_1$ and $A_2$ with accuracy and confidence parameters set to $\frac{\epsilon\,\epsilon_{fin}}{8}$ and $\frac{\delta_{fin}}{2N}$ respectively.
Proof Sketch: Assume that, for $i \geq 1$, $S_1^i \subseteq X_1^+$ and $S_2^i \subseteq X_2^+$ are the confident sets in each view after step $i - 1$ of co-training. Define $p_i = \Pr(\mathbf{S}_1^i \wedge \mathbf{S}_2^i)$, $q_i = \Pr(\overline{\mathbf{S}}_1^i \wedge \overline{\mathbf{S}}_2^i)$, and $\alpha_i = 1 - p_i$, with all probabilities with respect to $D^+$. We are interested in bounding $\Pr(\mathbf{S}_1^i \vee \mathbf{S}_2^i)$, but since technically it is easier to bound $\Pr(\mathbf{S}_1^i \wedge \mathbf{S}_2^i)$, we will instead show that $p_N \geq 1 - \epsilon_{fin}$ with probability $1 - \delta_{fin}$, which obviously implies that $\Pr(\mathbf{S}_1^N \vee \mathbf{S}_2^N)$ is at least as good.

By the guarantees on $A_1$ and $A_2$, after each round we get that with probability $1 - \delta_{fin}/N$, we have $\Pr(\mathbf{S}_1^{i+1} | \mathbf{S}_1^i \vee \mathbf{S}_2^i) \geq 1 - \frac{\epsilon\,\epsilon_{fin}}{8}$ and $\Pr(\mathbf{S}_2^{i+1} | \mathbf{S}_1^i \vee \mathbf{S}_2^i) \geq 1 - \frac{\epsilon\,\epsilon_{fin}}{8}$. In particular, this implies that with probability $1 - \delta_{fin}/N$, we have $p_1 = \Pr(\mathbf{S}_1^1 \wedge \mathbf{S}_2^1) \geq (1 - \epsilon/4) \cdot \Pr(\mathbf{S}_1^0 \vee \mathbf{S}_2^0) \geq (1 - \epsilon/4)\rho_{init}$.

Consider now $i \geq 1$. If $p_i \leq q_i$, since with probability $1 - \delta_{fin}/N$ we have $\Pr(\mathbf{S}_1^{i+1} | \mathbf{S}_1^i \vee \mathbf{S}_2^i) \geq 1 - \epsilon/8$ and $\Pr(\mathbf{S}_2^{i+1} | \mathbf{S}_1^i \vee \mathbf{S}_2^i) \geq 1 - \epsilon/8$, using Lemma 1 we obtain that with probability $1 - \delta_{fin}/N$, we have $\Pr(\mathbf{S}_1^{i+1} \wedge \mathbf{S}_2^{i+1}) \geq (1 + \epsilon/2) \Pr(\mathbf{S}_1^i \wedge \mathbf{S}_2^i)$. Similarly, by applying Lemma 2, we obtain that if $p_i > q_i$ and $\alpha_i \geq \epsilon_{fin}$ then with probability $1 - \delta_{fin}/N$ we have $\Pr(\mathbf{S}_1^{i+1} \wedge \mathbf{S}_2^{i+1}) \geq (1 + \frac{\epsilon \alpha_i}{8}) \Pr(\mathbf{S}_1^i \wedge \mathbf{S}_2^i)$. Assume now that it is the case that the learning algorithms $A_1$ and $A_2$ were successful on all the $N$ rounds; note that this happens with probability at least $1 - \delta_{fin}$.

The above observations imply that so long as $p_i \leq 1/2$ (so $\alpha_i \geq 1/2$) we have $p_{i+1} \geq (1 + \epsilon/16)^i (1 - \epsilon/4)\rho_{init}$. This means that after $N_1 = O(\frac{1}{\rho_{init}} \cdot \frac{1}{\epsilon})$ iterations of co-training we get to a situation where $p_{N_1} > 1/2$. At this point, notice that every $8/\epsilon$ rounds, $\alpha$ drops by at least a factor of 2; that is, if $\alpha_i \leq \frac{1}{2^k}$ then $\alpha_{i + 8/\epsilon} \leq \frac{1}{2^{k+1}}$. So, after a total of $O(\frac{1}{\epsilon}\log\frac{1}{\epsilon_{fin}} + \frac{1}{\epsilon} \cdot \frac{1}{\rho_{init}})$ rounds, we have a predictor of the desired accuracy with the desired confidence.
4
Heuristic Analysis of Error propagation and Experiments
So far, we have assumed the existence of perfect classifiers on each view: there are no
examples $\langle x_1, x_2 \rangle$ with $x_1 \in X_1^+$ and $x_2 \in X_2^-$ or vice-versa. In addition, we have assumed that given correctly-labeled positive examples as input, our learning algorithms are able to generalize in a way that makes only one-sided error (i.e., they are never "confident but wrong"). In this section we give a heuristic analysis of the case when these assumptions
are relaxed, along with several synthetic experiments on expander graphs.
4.1 Heuristic Analysis of Error propagation
Given confident sets $S_1^i \subseteq X_1$ and $S_2^i \subseteq X_2$ at the $i$th iteration, let us define their purity (precision) as $pur_i = \Pr_D(c(x) = 1 \mid \mathbf{S}_1^i \vee \mathbf{S}_2^i)$ and their coverage (recall) to be $cov_i = \Pr_D(\mathbf{S}_1^i \vee \mathbf{S}_2^i \mid c(x) = 1)$. Let us also define their "opposite coverage" to be $opp_i = \Pr_D(\mathbf{S}_1^i \vee \mathbf{S}_2^i \mid c(x) = 0)$. Previously, we assumed $opp_i = 0$ and therefore $pur_i = 1$. However, if we imagine that there is an $\eta$ fraction of examples on which the two views disagree, and that positive and negative regions expand uniformly at the same rate, then even if initially $opp_0 = 0$, it is natural to assume the following form of increase in $cov$ and $opp$:
$$cov_{i+1} = \min\left(cov_i\,(1 + \epsilon(1 - cov_i)) + \eta \cdot (opp_{i+1} - opp_i),\ 1\right), \qquad (1)$$
$$opp_{i+1} = \min\left(opp_i\,(1 + \epsilon(1 - opp_i)) + \eta \cdot (cov_{i+1} - cov_i),\ 1\right). \qquad (2)$$
[Figure 1 appears here: three panels of accuracy versus iteration, one per noise rate; each shows accuracy on negatives, accuracy on positives, and overall accuracy.]
Figure 1: Co-training with noise rates 0.1, 0.01, and 0.001 respectively (n = 5000). Solid
line indicates overall accuracy; green (dashed, increasing) curve is accuracy on positives
($cov_i$); red (dashed, decreasing) curve is accuracy on negatives ($1 - opp_i$).
That is, this corresponds to both the positive and negative parts of the confident region
expanding in the way given in the proof of Theorem 1, with an $\eta$ fraction of the new edges going to examples of the other label. By examining (1) and (2), we can make a few simple observations. First, initially when coverage is low, every $O(1/\epsilon)$ steps we get roughly $cov \leftarrow 2 \cdot cov$ and $opp \leftarrow 2 \cdot opp + \eta \cdot cov$. So, we expect coverage to increase exponentially and purity to drop linearly. However, once coverage gets large and begins to saturate, if purity is still high at this time it will begin dropping rapidly as the exponential increase in $opp_i$ causes $opp_i$ to catch up with $cov_i$. In particular, a calculation (omitted) shows that if $D$ is 50/50 positive and negative, then overall accuracy increases up to the point when $cov_i + opp_i = 1$, and then drops from then on. This qualitative behavior is
borne out in our experiments below.
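The recurrences (1)-(2) are easy to iterate numerically; since each update references the other's new value, the sketch below resolves the coupling with a single forward pass (our simplification).

```python
def simulate(eps=0.5, eta=0.01, cov0=0.01, steps=15):
    """Iterate the cov/opp recurrences (1)-(2)."""
    cov, opp = cov0, 0.0
    history = [(cov, opp)]
    for _ in range(steps):
        g_cov = cov * (1 + eps * (1 - cov))      # pure expansion of positives
        g_opp = opp * (1 + eps * (1 - opp))      # pure expansion of negatives
        opp_next = min(g_opp + eta * (g_cov - cov), 1.0)     # Eq. (2)
        cov_next = min(g_cov + eta * (opp_next - opp), 1.0)  # Eq. (1)
        cov, opp = cov_next, opp_next
        history.append((cov, opp))
    return history

for cov, opp in simulate():
    # overall accuracy for a 50/50 class balance
    print(f"cov={cov:.3f}  opp={opp:.3f}  acc={(cov + 1 - opp) / 2:.3f}")
```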
4.2 Experiments
We performed experiments on synthetic data along the lines of Example 2, with noise added
as in Section 4.1. Specifically, we create a 2n-by-2n bipartite graph. Nodes 1 to n on each
side represent positive clusters, and nodes n + 1 to 2n on each side represent negative
clusters. We connect each node on the left to three nodes on the right: each neighbor is
chosen with probability $1 - \eta$ to be a random node of the same class, and with probability $\eta$ to be a random node of the opposite class. We begin with an initial confident set $S_1 \subseteq X_1^+$
and then propagate confidence through rounds of co-training, monitoring the percentage
of the positive class covered, the percent of the negative class mistakenly covered, and
the overall accuracy. Plots of three experiments are shown in Figure 1, for different noise
rates (0.1, 0.01, and 0.001). As can be seen, these qualitatively match what we expect:
coverage increases exponentially, but accuracy on negatives ($1 - opp_i$) drops exponentially too, though somewhat delayed. At some point there is a crossover where $cov_i = 1 - opp_i$,
which as predicted roughly corresponds to the point at which overall accuracy starts to
drop.
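A compact version of this experiment is sketched below; the degree-3 construction and the confidence-propagation rule follow the description above, while the seed size and parameter values are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def expander_cotrain(n=5000, eta=0.01, rounds=10, seed_frac=0.001):
    """Synthetic co-training on a random bipartite graph: nodes 0..n-1 are
    positive clusters, n..2n-1 negative, 3 right-neighbors per left node."""
    flip = rng.random((2 * n, 3)) < eta              # noisy edges cross classes
    base = rng.integers(0, n, (2 * n, 3))
    right = np.where(np.arange(2 * n)[:, None] < n,
                     np.where(flip, base + n, base),   # positive left nodes
                     np.where(flip, base, base + n))   # negative left nodes
    conf_L = np.zeros(2 * n, bool)
    conf_L[rng.choice(n, max(1, int(seed_frac * n)), replace=False)] = True
    conf_R = np.zeros(2 * n, bool)
    for _ in range(rounds):
        # confident left clusters make their right-neighbors confident, and back
        conf_R[right[conf_L].ravel()] = True
        conf_L |= conf_R[right].any(axis=1)
        cov, opp = conf_L[:n].mean(), conf_L[n:].mean()
        print(f"cov={cov:.3f}  1-opp={1 - opp:.3f}  acc={(cov + 1 - opp) / 2:.3f}")

expander_cotrain()
```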
5
Conclusions
Co-training is a method for using unlabeled data when examples can be partitioned into
two views such that (a) each view in itself is at least roughly sufficient to achieve good
classification, and yet (b) the views are not too highly correlated. Previous theoretical work
has required instantiating condition (b) in a very strong sense: as independence given the
label, or a form of weak dependence. In this work, we argue that the ?right? condition
is something much weaker: an expansion property on the underlying distribution (over
positive examples) that we show is sufficient and to some extent necessary as well.
The expansion property is especially interesting because it directly motivates the iterative
nature of many of the practical co-training based algorithms, and our work is the first
rigorous analysis of iterative co-training in a setting that demonstrates its advantages over
one-shot versions.
Acknowledgements: This work was supported in part by NSF grants CCR-0105488,
NSF-ITR CCR-0122581, and NSF-ITR IIS-0312814.
References
[1] S. Abney. Bootstrapping. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 360-367, 2002.
[2] A. Blum and T. M. Mitchell. Combining labeled and unlabeled data with co-training. In Proc. 11th Annual Conference on Computational Learning Theory, pages 92-100, 1998.
[3] M. Collins and Y. Singer. Unsupervised models for named entity classification. In SIGDAT Conf. Empirical Methods in NLP and Very Large Corpora, pages 189-196, 1999.
[4] S. Dasgupta, M. L. Littman, and D. McAllester. PAC generalization bounds for co-training. In Advances in Neural Information Processing Systems 14. MIT Press, 2001.
[5] R. Ghani. Combining labeled and unlabeled data for text classification with a large number of categories. In Proceedings of the IEEE International Conference on Data Mining, 2001.
[6] T. Joachims. Transductive inference for text classification using support vector machines. In Proceedings of the 16th International Conference on Machine Learning, pages 200-209, 1999.
[7] M. Kearns, M. Li, and L. Valiant. Learning Boolean formulae. JACM, 41(6):1298-1328, 1995.
[8] A. Levin, P. Viola, and Y. Freund. Unsupervised improvement of visual detectors using co-training. In Proc. 9th IEEE International Conf. on Computer Vision, pages 626-633, 2003.
[9] R. Motwani and P. Raghavan. Randomized Algorithms. Cambridge University Press, 1995.
[10] K. Nigam and R. Ghani. Analyzing the effectiveness and applicability of co-training. In Proc. ACM CIKM Int. Conf. on Information and Knowledge Management, pages 86-93, 2000.
[11] K. Nigam, A. McCallum, S. Thrun, and T. M. Mitchell. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2/3):103-134, 2000.
[12] S. Park and B. Zhang. Large scale unstructured document classification using unlabeled data and syntactic information. In PAKDD 2003, LNCS vol. 2637, pages 88-99. Springer, 2003.
[13] D. Pierce and C. Cardie. Limitations of co-training for natural language learning from large datasets. In Proc. Conference on Empirical Methods in NLP, pages 1-9, 2001.
[14] R. Rivest and R. Sloan. Learning complicated concepts reliably and usefully. In Proceedings of the 1988 Workshop on Computational Learning Theory, pages 69-79, 1988.
[15] D. Yarowsky. Unsupervised word sense disambiguation rivaling supervised methods. In Meeting of the Association for Computational Linguistics, pages 189-196, 1995.
[16] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In Proc. 20th International Conf. Machine Learning, pages 912-919, 2003.
A Relating the definitions
We show here how Definition 2 implies Definition 1.
Theorem 2 If D+ satisfies ε-left-right expansion (Definition 2), then it also satisfies ε′-expansion (Definition 1) for ε′ = ε/(1 + ε).
Proof: We will prove the contrapositive. Suppose there exist S1 ⊆ X1+ and S2 ⊆ X2+ such that Pr(S1 ⊕ S2) < ε′ min[Pr(S1 ∧ S2), Pr(¬S1 ∧ ¬S2)]. Assume without loss of generality that Pr(S1 ∧ S2) ≤ Pr(¬S1 ∧ ¬S2). Since Pr(S1 ∧ S2) + Pr(¬S1 ∧ ¬S2) + Pr(S1 ⊕ S2) = 1, it follows that Pr(¬S1 ∧ ¬S2) ≥ 1/2 − Pr(S1 ⊕ S2)/2. Assume Pr(S1) ≤ Pr(S2). This implies that Pr(S1) ≤ 1/2, since Pr(S1) + Pr(S2) = 2 Pr(S1 ∧ S2) + Pr(S1 ⊕ S2) and so Pr(S1) ≤ Pr(S1 ∧ S2) + Pr(S1 ⊕ S2)/2.

Now notice that

Pr(S2 | S1) = Pr(S1 ∧ S2)/Pr(S1) ≥ Pr(S1 ∧ S2)/(Pr(S1 ∧ S2) + Pr(S1 ⊕ S2)) > 1/(1 + ε′) ≥ 1 − ε.

But

Pr(S2) ≤ Pr(S1 ∧ S2) + Pr(S1 ⊕ S2) < (1 + ε′) Pr(S1 ∧ S2) ≤ (1 + ε) Pr(S1),

and so Pr(S2) < (1 + ε) Pr(S1). Similarly, if Pr(S2) ≤ Pr(S1) we get a failure of expansion in the other direction. This completes the proof.
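The two scalar inequalities used in the chain above can be checked numerically; the following trivial Python loop (our addition) verifies them for ε′ = ε/(1 + ε) over a grid of ε values in (0, 1).

```python
# Sanity check of the two inequalities used in the proof above (our addition).
for k in range(1, 1000):
    eps = k / 1000.0
    eps_p = eps / (1.0 + eps)                    # eps' = eps / (1 + eps)
    assert 1.0 / (1.0 + eps_p) >= 1.0 - eps      # middle step of the proof
    assert (1.0 + eps_p) <= (1.0 + eps)          # final step of the proof
print("both inequalities hold for eps in (0, 1)")
```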
Learning Preferences for Multiclass Problems
Fabio Aiolli
Dept. of Computer Science
University of Pisa, Italy
[email protected]
Alessandro Sperduti
Dept. of Pure and Applied Mathematics
University of Padova, Italy
[email protected]
Abstract
Many interesting multiclass problems can be cast in the general framework of label ranking defined on a given set of classes. The evaluation
for such a ranking is generally given in terms of the number of violated
order constraints between classes. In this paper, we propose the Preference Learning Model as a unifying framework to model and solve a large
class of multiclass problems in a large margin perspective. In addition,
an original kernel-based method is proposed and evaluated on a ranking
dataset with state-of-the-art results.
1 Introduction
The presence of multiple classes in a learning domain introduces interesting tasks besides
the one to select the most appropriate class for an object, the well-known (single-label)
multiclass problem. Many others, including learning rankings, multi-label classification,
hierarchical classification and ordinal regression, just to name a few, have not yet been
sufficiently studied even though they should not be considered less important. One of the
major problems when dealing with this large set of different settings is the lack of a single
universal theory encompassing all of them.
In this paper we focus on multiclass problems where labels are given as partial order constraints over the classes. Tasks naturally falling into this family include category ranking,
which is the task to infer full orders over the classes, binary category ranking, which is
the task to infer orders such that a given subset of classes are top-ranked, and any general
(q-label) classification problem.
Recently, efforts have been made in the direction to unify different ranking problems. In
particular, in [5, 7] two frameworks have been proposed which aim at inducing a label
ranking function from examples. Similarly, here we consider labels coded into sets of preference constraints, expressed as preference graphs over the set of classes. The multiclass
problem is then reduced to learning a good set of scoring functions able to correctly rank the
classes according to the constraints which are associated to the label of the examples. Each
preference graph disagreeing with the obtained ranking function will count as an error.
The primary contribution of this work is to try to make a further step towards the unification of different multiclass settings, and different models to solve them, by proposing the
Preference Learning Model, a very general framework to model and study several kinds of
multiclass problems. In addition, a kernel-based method particularly suited for this setting
is proposed and evaluated in a binary category ranking task with very promising results.
The Multiclass Setting Let Ω be a set of classes; we consider a multiclass setting where data are supposed to be sampled according to a probability distribution D over X × Y, X ⊆ R^d, and an hypothesis space of functions F = {f_θ : X × Ω → R} with parameters θ. Moreover, a cost function c(x, y|θ) defines the cost suffered by a given hypothesis on a pattern x ∈ X having label y ∈ Y. A multiclass learning algorithm searches for a set of parameters θ* such as to minimize the true cost, that is, the expected value of the cost according to the true distribution of the data, i.e. R_t[θ] = E_{(x,y)∼D}[c(x, y|θ)]. The distribution D is typically unknown, while a training set S = {(x_1, y_1), . . . , (x_n, y_n)} with examples drawn i.i.d. from D is available. An empirical approximation of the true cost, also referred to as the empirical cost, is defined by R_e[θ, S] = (1/n) Σ_{i=1}^{n} c(x_i, y_i|θ).
2 The Preference Learning Model
In this section, starting from the general multiclass setting described above, we propose a general technique to solve a large family of multiclass settings. The basic idea is to "code" the labels of the original multiclass problem as sets of ranking constraints given as preference graphs. Then, we introduce the Preference Learning Model (PLM) for the induction of optimal scoring functions that uses those constraints as supervision.

In the case of ranking-based multiclass settings, labels are given as partial orders over the classes (see [1] for a detailed taxonomy of multiclass learning problems). Moreover, as observed in [5], ranking problems can be generalized by considering labels given as preference graphs over a set of classes Ω = {ω_1, . . . , ω_m}, and trying to find a consistent ranking function f_R : X → Π(Ω), where Π(Ω) is the set of permutations over Ω. More formally, considering a set Ω, a preference graph or "p-graph" over Ω is a directed graph v = (N, A), where N ⊆ Ω is the set of nodes and A is the set of arcs of the graph, accessed by the function A(v). An arc a ∈ A is associated with its starting node ω_s = ω_s(a) and its ending node ω_e = ω_e(a), and represents the information that the class ω_s is preferred to, and should be ranked higher than, ω_e. The set of p-graphs over Ω will be denoted by G(Ω).

Given a set of scoring functions f : X × Ω → R with parameters θ, working as predictors of the relevance of the associated class to given instances, a definition of a ranking function naturally follows by taking the permutation of elements in Ω corresponding to the sorting of the values of these functions, i.e. f_R(x|θ) = argsort_{ω∈Ω} f(x, ω|θ). We say that a preference arc a = (ω_s, ω_e) is consistent with a ranking hypothesis f_R(x|θ), and we write a ⊑ f_R(x|θ), when f(x, ω_s|θ) ≥ f(x, ω_e|θ) holds. Generalizing to graphs, a p-graph g is said to be consistent with a hypothesis f_R(x|θ), and we write g ⊑ f_R(x|θ), if every arc compounding it is consistent, i.e. g ⊑ f_R(x|θ) ⇔ ∀a ∈ A(g), a ⊑ f_R(x|θ).
The PLM Mapping Let us start by considering the way a multiclass problem is transformed into a PLM problem. As seen before, to evaluate the quality of a ranking function f_R(x|θ) it is necessary to specify the nature of a cost function c(x, y|θ). Specifically, we consider cost definitions corresponding to associating penalties whenever incorrect decisions are made (e.g. a classification error for classification problems, or a wrong ordering for ranking problems). To this end, as in [5], we consider a label mapping G : y ↦ {g_1(y), . . . , g_{q_y}(y)}, where a set of subgraphs g_i(y) ∈ G(Ω) is associated to each label y ∈ Y. The total cost suffered by a ranking hypothesis f_R on the example x ∈ X labeled y ∈ Y is the number of p-graphs in G(y) not consistent with the ranking, i.e. c(x, y|θ) = Σ_{j=1}^{q_y} [[g_j(y) ⋢ f_R(x|θ)]], where [[b]] is 1 if the condition b holds, 0 otherwise.
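As a concrete illustration (our sketch, not the authors' code), a p-graph can be represented as a list of arcs (ω_s, ω_e) over class ids, and the cost evaluated directly from the scores f(x, ω|θ):

```python
# A p-graph is a list of preference arcs (winner, loser) over class ids.
def consistent(scores, pgraph):
    """True iff every arc of the p-graph agrees with the scores."""
    return all(scores[ws] >= scores[we] for ws, we in pgraph)

def cost(scores, label_pgraphs):
    """c(x, y | theta): number of p-graphs in G(y) violated by the ranking."""
    return sum(0 if consistent(scores, g) else 1 for g in label_pgraphs)

# Example: three classes, label mapping G(y) = {[(1,2)], [(1,3)]} (class 1
# preferred to both others, one single-arc subgraph per preference).
scores = {1: 0.2, 2: 0.9, 3: 0.1}
print(cost(scores, [[(1, 2)], [(1, 3)]]))   # -> 1: the arc (1, 2) is violated
```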
Let us describe three particular mappings proposed in [5] that seem worthwhile of note: (i) the identity mapping, denoted by G_I, where the label is mapped onto itself and every inconsistent graph will have a unitary cost; (ii) the disagreement mapping, denoted by G_d, where a simple (single-preference) subgraph is built for each arc in A(y); and (iii) the domination mapping, denoted by G_D, where for each node ω_r in y a subgraph consisting of ω_r plus the nodes of its outgoing set is built.

Figure 1: Examples of label mappings for 2-label classification (a-c) and ranking (d-f). [panels omitted]

To clarify, in Figure 1 a set of mapping examples is proposed. Considering Ω = {1, 2, 3, 4, 5}, in Figure 1-(a) the label y = [1, 2|3, 4, 5] for a 2-label classification setting is given. In particular, this corresponds to the mapping G(y) = G_I(y) = y, where a single wrong ranking of a class makes the predictor pay a unit of cost. Similarly, in Figure 1-(b) the label mapping G(y) = G_D(y) is presented for the same problem. Another variant is presented in Figure 1-(c), where the label mapping G(y) = G_d(y) is used and the target classes are independently evaluated and their errors cumulated. Note that all these graphs are subgraphs of the original label in 1-(a). As an additional example, we consider the three cases depicted in the right-hand side of Figure 1, which refer to a ranking problem with three classes Ω = {1, 2, 3}. In Figure 1-(d) the label y = [1|2|3] is given. As before, this also corresponds to the label mapping G(y) = G_I(y). Two alternative cost definitions can be obtained by using the p-graphs (sets of basic preferences, actually) depicted in Figure 1-(e) and 1-(f). Note that the cost functions in these cases are different. For example, assume f_R(x|θ) = [3|1|2]; the p-graph in (e) induces a cost c(x, y_b|θ) = 2 while the p-graph in (f) induces a cost c(x, y_c|θ) = 1.
The PLM Setting Once the label mapping G is fixed, the preference constraints of the original multiclass problem can be arranged into a set of preference constraints. Specifically, we consider the set V(S) = ∪_{(x_i, y_i)∈S} V(x_i, y_i), where V(x, y) = {(x, g_j(y))}_{j∈{1,..,q_y}} and each pair (x, g) ∈ X × G(Ω) is a preference constraint. Note that the same instance can be replicated in V(S). This can happen, for example, when multiple ranking constraints are associated to the same example of the original multiclass problem. Because of this, in the following, we prefer to use a different notation for the instances in preference constraints to avoid confusion with training examples.
Notions defined for the standard classification setting are easily extended to PLM. For a preference constraint (v, g) ∈ V, the constraint error incurred by the ranking hypothesis f_R(v|θ) is given by δ(v, g|θ) = [[g ⋢ f_R(v|θ)]]. The empirical cost is then defined as the cost over the whole constraint set, i.e. R_e[θ, V] = Σ_{i=1}^{N} δ(v_i, g_i|θ). In addition, we define the margin of a hypothesis on a pattern v for a preference arc a = (ω_s, ω_e), expressing how well the preference is satisfied, as the difference between the scores of the two linked nodes, i.e. ρ_A(v, a|θ) = f(v, ω_s|θ) − f(v, ω_e|θ). The margin for a p-graph constraint (v, g) is then defined as the minimum of the margins of the compounding preferences, ρ_G(v, g|θ) = min_{a∈A(g)} ρ_A(v, a|θ), and gives a measure of how well the hypothesis fulfills a given preference constraint. Note that, consistently with the classification setting, the margin is greater than 0 if and only if g ⊑ f_R(v|θ).
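Continuing the sketch above (again our illustration, not the paper's code), the arc and p-graph margins can be computed directly:

```python
def arc_margin(scores, arc):
    """rho_A(v, a | theta) = f(v, w_s) - f(v, w_e)."""
    ws, we = arc
    return scores[ws] - scores[we]

def graph_margin(scores, pgraph):
    """rho_G(v, g | theta): min over arcs; positive iff g is consistent."""
    return min(arc_margin(scores, a) for a in pgraph)

scores = {1: 0.8, 2: 0.5, 3: 0.1}
print(graph_margin(scores, [(1, 2), (1, 3)]))   # -> 0.3 (= min(0.3, 0.7))
```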
Learning in PLM In the PLM we try to learn a "simple" hypothesis able to minimize the empirical cost of the original multiclass problem or, equivalently, to satisfy the constraints in V(S) as much as possible. The learning setting of the PLM can be reduced to the following scheme. Given a set V of pairs (v_i, g_i) ∈ X × G(Ω), i ∈ {1, . . . , N}, N = Σ_{i=1}^{n} q_{y_i}, find a set of parameters for the ranking function f_R(v|θ) able to minimize a combination of a regularization and an empirical loss term, θ̂ = arg min_θ {R_e[θ, V] + λR(θ)}, with λ a given constant. However, since the direct minimization of this functional is hard due to the non-continuous form of the empirical error term, we use an upper bound on the true empirical error. To this end, let a monotonically decreasing loss function L be defined such that L(ρ) ≥ 0 and L(0) = 1; then, by defining a margin-based loss

L_C(v, g|θ) = L(ρ_G(v, g|θ)) = max_{a∈A(g)} L(ρ_A(v, a|θ))    (1)

for a p-graph constraint (v, g) ∈ V and recalling the margin definition, the condition δ(v, g|θ) ≤ L_C(v, g|θ) always holds, thus obtaining R_e[θ, V] ≤ Σ_{i=1}^{N} L_C(v_i, g_i|θ). The problem of learning with multiple classes (up to constant factors) is then reduced to the minimization of a (possibly regularized) loss functional

θ̂ = arg min_θ {L(V|θ) + λR(θ)}    (2)

where L(V|θ) = Σ_{i=1}^{N} max_{a∈A(g_i)} L(f(v_i, ω_s(a)|θ) − f(v_i, ω_e(a)|θ)).
Many different choices can be made for the function L(·); some well-known examples are given in the table below. Note that, if the function L(·) is convex with respect to the parameters θ, the minimization of the functional in Eq. (2) is quite easy given a convex regularization term.

Method               L(ρ)
β-margin Perceptron  [1 − β⁻¹ρ]_+
Logistic Regression  log₂(1 + exp(−ρ))
Soft margin          [1 − ρ]_+
Mod. Least Square    [1 − ρ]_+²
Exponential          exp(−ρ)

The only difficulty in this case is represented by the max term. A shortcut for this problem would consist in upper-bounding the max with the sum operator, though this would probably lead to a quite rough approximation of the indicator function when considering p-graphs with many arcs. It can be shown that a number of related works, e.g. [5, 7], after minor modifications, can be seen as PLM instances when using the sum approximation. Interestingly, PLM highlights that this approximation in fact corresponds to a change in the label mapping, obtained by decomposing a complex preference graph into a set of binary preferences, and thus changing the cost definition we are indeed minimizing. In this case, using either G_D or G_d is not going to make any difference at all.
Multiclass Prediction through PLM A multiclass prediction is a function H : X → Y mapping instances to their associated label. Let a label mapping be given, defined as G(y) = {g_1(y), . . . , g_{q_y}(y)}. Then, the PLM multiclass prediction is given as the label whose induced preference constraints mostly agree with the current hypothesis, i.e. H(x) = arg min_y L(V(x, y)|θ), where V(x, y) = {(x, g_j(y))}_{j∈{1,..,q_y}}. It can be shown that many of the most effective methods used for learning with multiple classes, including output coding (ECOC, OvA, OvO), boosting, least-squares methods, and all the methods in [10, 3, 7, 5], fit into the PLM setting. This issue is better discussed in [1].
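A direct transcription of this decision rule (our sketch), using the soft-margin loss as an example scorer and the arc representation introduced earlier:

```python
def hinge(rho):
    return max(0.0, 1.0 - rho)

def plm_predict(scores, candidate_labels):
    """H(x) = argmin over labels y of L(V(x, y) | theta), with the
    max-over-arcs loss of Eq. (1) applied to each p-graph g in G(y)."""
    def label_loss(pgraphs):
        return sum(max(hinge(scores[ws] - scores[we]) for ws, we in g)
                   for g in pgraphs)
    return min(candidate_labels, key=lambda y: label_loss(candidate_labels[y]))

scores = {1: 0.9, 2: 0.4, 3: 0.1}
labels = {"y=[1|2,3]": [[(1, 2)], [(1, 3)]],
          "y=[2|1,3]": [[(2, 1)], [(2, 3)]]}
print(plm_predict(scores, labels))   # -> "y=[1|2,3]"
```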
3 Preference Learning with Kernel Machines
In this section, we focus on a particular setting of the PLM framework consisting of a multivariate embedding h : X → R^s of linear functions parameterized by a set of vectors W_k ∈ R^d, k ∈ {1, . . . , s}, accommodated in a matrix W ∈ R^{s×d}, i.e. h(x) = [h_1(x), . . . , h_s(x)] = [⟨W_1, x⟩, . . . , ⟨W_s, x⟩]. Furthermore, we consider the set of classes Ω = {ω_1, . . . , ω_m} and M ∈ R^{m×s}, a matrix of codes of length s with as many rows as classes. This matrix has the same role as the coding matrix in multiclass coding, e.g. in ECOC. Finally, the scoring function for a given class is computed as the dot product between the embedding function and the class code vector:

f(x, ω_r|W, M) = ⟨h(x), M_r⟩ = Σ_{k=1}^{s} M_{rk} ⟨W_k, x⟩    (3)
Now, we are able to describe a kernel-based method for the effective solution of the PLM
problem. In particular, we present the problem formulation and the associated optimization
method for the task of learning the embedding function given fixed codes for the classes
(embedding problem). Another worthwhile task consists in the optimization of the codes
for the classes when the embedding function is kept fixed (coding problem), or even to
perform a combination of the two (see for example [8]). A deeper study of the embedding-coding version of PLM and a set of examples can be found in [1].
PLM Kesler's Construction As a first step, we generalize Kesler's construction, originally defined for single-label classification (see [6]), to the PLM setting, thus showing that the embedding problem can be formulated as a binary classification problem in a higher-dimensional space when new variables are appropriately defined. Specifically, consider the vector y(a) = (M_{ω_s(a)} − M_{ω_e(a)}) ∈ R^s defined for every preference arc in a given preference constraint, that is a = (ω_s, ω_e) ∈ A(g). For every instance v_i and preference (ω_s, ω_e), the preference condition ρ_A(v_i, a) ≥ 0 can be rewritten as

ρ_A(v_i, a) = f(v_i, ω_s) − f(v_i, ω_e) = ⟨y(a), h(v_i)⟩ = Σ_{k=1}^{s} y_k(a) ⟨W_k, v_i⟩
            = Σ_{k=1}^{s} ⟨W_k, y_k(a) v_i⟩ = Σ_{k=1}^{s} ⟨W_k, [z_i^a]_k^s⟩ = ⟨W, z_i^a⟩ ≥ 0    (4)

where [·]_k^s denotes the k-th chunk of an s-chunk vector, W ∈ R^{s·d} is the vector obtained by sequentially arranging the vectors W_k, and z_i^a = y(a) ⊗ v_i ∈ R^{s·d} is the embedded vector made of the s chunks defined by [z_i^a]_k^s = y_k(a) v_i, k ∈ {1, . . . , s}. From this derivation it turns out that each preference of a constraint in the set V can be viewed as an example of dimension s · d in a binary classification problem. Each pair (v_i, g_i) ∈ V then generates a number of examples in this extended binary problem equal to the number of arcs of the p-graph g_i, for a total of Σ_{i=1}^{N} |A(g_i)| examples. In particular, the set Z = {z_i^a} is linearly separable in the higher-dimensional problem if and only if there exists a consistent solution for the original PLM problem. Very similar considerations, omitted for space reasons, could be given for the coding problem as well.
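A small numeric check of this construction (our sketch, using numpy; the standard-basis class codes and the toy weights are arbitrary choices):

```python
import numpy as np

def kesler_embed(v, M, arc):
    """z_i^a = y(a) (x) v_i, with y(a) = M[ws] - M[we]; returns an (s*d,) vector."""
    ws, we = arc
    y = M[ws] - M[we]                    # y(a) in R^s
    return np.kron(y, v)                 # chunk k equals y_k(a) * v

# s = m = 3 classes with standard-basis codes, d = 2 features.
M = np.eye(3)
W = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])   # one W_k per row

def f(v, r):                             # f(x, w_r) = <h(x), M_r> = <W_r, v> here
    return M[r] @ (W @ v)

v = np.array([2.0, -1.0])
z = kesler_embed(v, M, (0, 1))
print(f(v, 0) - f(v, 1), W.flatten() @ z)   # both print 3.0: the margins agree
```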
The Kernel Preference Learning Optimization As pointed out before, the central task in PLM is to learn scoring functions in such a way as to be as consistent as possible with the set of constraints in V. This is done by finding a set of parameters minimizing a loss function that is an upper bound on the empirical error function. For the embedding problem, instantiating problem (2) and choosing the 2-norm of the parameters as regularizer, we obtain Ŵ = arg min_W (1/N) Σ_{i=1}^{N} L_C(v_i, g_i|W, M) + λ||W||², where, according to Eq. (1), the loss for each preference constraint is computed as the maximum over the losses of all the associated preferences, that is L_i = max_{a∈A(g_i)} L(⟨W, z_i^a⟩).

When the constraint set V contains basic preferences only (that is, p-graphs consisting of a single arc a_i = A(g_i)), the optimization problem can be simplified into the minimization of a standard functional combining a loss function with a regularization term. Specifically, all the losses presented before can be used and, for many of them, it is possible to give a kernel-based solution. See [11] for a set of examples of loss functions and the formulation of the associated problem with kernels.
The Kernel Preference Learning Machine For the general case of p-graphs possibly containing multiple arcs, we propose a kernel-based method (hereafter referred to as the Kernel Preference Learning Machine, or KPLM for brevity) for PLM optimization which adopts the max loss in Eq. (2). Borrowing the idea of soft margin [9], for each preference arc a linear loss is used, giving an upper bound on the indicator-function loss. Specifically, we use the SVM-like soft-margin loss L(ρ) = [1 − ρ]_+.

Summarizing, we require a set of small-norm predictors that fulfill the soft constraints of the problem. These requirements can be expressed by the following quadratic problem:

min_{W,ξ}  (1/2)||W||² + C Σ_i ξ_i    (5)
subject to:  ⟨W, z_i^a⟩ ≥ 1 − ξ_i,  i ∈ {1, .., N}, a ∈ A(g_i)
             ξ_i ≥ 0,  i ∈ {1, .., N}
Note that, differently from the SVM formulation for the binary classification setting, here the slack variables ξ_i are associated with multiple examples, one for each preference arc in the p-graph. Moreover, the optimal value of ξ_i corresponds to the loss value as defined by L_i. As is easily verified, this problem is convex and it can be solved in the usual way by resorting to the optimization of the Wolfe dual problem. Specifically, we have to find the saddle point (minimization w.r.t. the primal variables {W, ξ} and maximization w.r.t. the dual variables {α, λ}) of the following Lagrangian:

Q(W, ξ, α, λ) = (1/2)||W||² + C Σ_i ξ_i + Σ_i Σ_{a∈A(g_i)} α_i^a (1 − ξ_i − ⟨W, z_i^a⟩) − Σ_i λ_i ξ_i,  s.t. α_i^a, λ_i ≥ 0    (6)
By differentiating the Lagrangian with respect to the primal variables and imposing the optimality conditions, we obtain the set of constraints that the variables have to fulfill in order to be an optimal solution:

∂Q/∂W = W − Σ_i Σ_{a∈A(g_i)} α_i^a z_i^a = 0  ⇒  W = Σ_i Σ_{a∈A(g_i)} α_i^a z_i^a
∂Q/∂ξ_i = C − Σ_{a∈A(g_i)} α_i^a − λ_i = 0  ⇒  Σ_{a∈A(g_i)} α_i^a ≤ C    (7)
Substituting conditions (7) into (6) and omitting constants that do not change the solution, the problem can be restated as

max_α  Σ_{i,a} α_i^a − (1/2) Σ_{k=1}^{s} Σ_{i,a_i} Σ_{j,a_j} y_k(a_i) y_k(a_j) α_i^{a_i} α_j^{a_j} ⟨v_i, v_j⟩    (8)
subject to:  α_i^a ≥ 0,  i ∈ {1, .., N}, a ∈ A(g_i)
             Σ_a α_i^a ≤ C,  i ∈ {1, .., N}

Since W_k = Σ_{i,a} y_k(a) α_i^a v_i = Σ_{i,a} [M_{ω_s(a)} − M_{ω_e(a)}]_k^s α_i^a v_i, k = 1, .., s, we obtain h_k(x) = ⟨W_k, x⟩ = Σ_{i,a} [M_{ω_s(a)} − M_{ω_e(a)}]_k^s α_i^a ⟨v_i, x⟩. Note that any kernel k(·, ·) can be substituted in place of the linear dot product ⟨·, ·⟩ to allow for non-linear decision functions.
Embedding Optimization The problem in (8) recalls the one obtained for the single-label multiclass SVM [1, 2] and, in fact, its optimization can be performed in a similar way. Assuming a number of arcs for each preference constraint equal to q, the dual problem in (8) involves N · q variables, leading to a very large-scale problem. However, it can be noted that the independence of constraints among the different preference constraints allows for the separation of the variables into N disjoint sets of q variables each.
The algorithm we propose for the optimization of the overall problem consists in iteratively
selecting a preference constraint from the constraints set (a p-graph) and then optimizing
with respect to the variables associated with it, that is one for each arc of the p-graph. From
the convexity of the problem and the separation of the variables, since on each iteration we
optimize on a different subset of variables, this guarantees that the optimal solution for the
Lagrangian will be found when no new selections can lead to improvements.
The graph to optimize at each step is selected on the basis of a heuristic selection strategy. Let the preference constraint (v_i, g_i) ∈ V be selected at a given iteration; to enforce the constraint Σ_{a∈A(g_i)} α_i^a + λ_i = C, λ_i ≥ 0, two elements from the set of variables {α_i^a | a ∈ A(g_i)} ∪ {λ_i} will be optimized in pairs, while keeping the solution inside the feasible region α_i^a ≥ 0. In particular, let α_1 and α_2 be the two selected variables; we restrict the updates to the form α_1 ← α_1 − δ and α_2 ← α_2 + δ, with optimal choices for δ. The variables which most violate the constraints are iteratively selected until they reach the optimality (KKT) conditions. For this, we have devised a KKT-based procedure which is able to select these variables in time linear in the number of classes. For space reasons we omit the details, and we do not consider any implementation issue. Details and optimized versions of this basic algorithm can be found in [1].
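The paper's pairwise dual solver is only outlined above; as a simpler stand-in, the following sketch (our code, not the authors' algorithm) minimizes the primal objective of problem (5), namely (1/2)||W||² + C Σ_i max_a [1 − ⟨W, z_i^a⟩]_+, by subgradient descent on the Kesler-embedded vectors. The learning rate and epoch count are arbitrary.

```python
import numpy as np

def kplm_subgradient(Z, C=1.0, lr=0.01, epochs=200):
    """Z: list of constraints, each an array of shape (num_arcs, s*d) holding
    the embedded vectors z_i^a of one p-graph. Subgradient descent on the
    primal of (5); a stand-in for the SMO-style dual solver of the paper."""
    dim = Z[0].shape[1]
    W = np.zeros(dim)
    for _ in range(epochs):
        grad = W.copy()                       # gradient of (1/2)||W||^2
        for arcs in Z:
            margins = arcs @ W
            worst = int(np.argmin(margins))   # arc achieving the max loss
            if 1.0 - margins[worst] > 0.0:    # hinge is active
                grad -= C * arcs[worst]
        W -= lr * grad
    return W
```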
Generalization of KPLM As a first immediate result, we can give an upper bound on the leave-one-out error by exploiting the sparsity of a KPLM solution, namely LOO ≤ |V|/N, where V = {i ∈ {1, . . . , N} | max_{a∈A(g_i)} α_i^a > 0} is the set of support vectors. Another interesting result about the generalization ability of a KPLM is given in the following theorem.

Theorem 1 Consider a KPLM hypothesis θ = (W, M) with Σ_{r=1}^{s} ||W_r||² = 1 and ||M_r||² ≤ R_M such that min_{(v,g)∈V} ρ_G(v, g|θ) ≥ γ. Then, for any probability distribution D on X × Y with support in a ball of radius R_X around the origin, with probability 1 − δ over n random examples S, the following bound for the true cost holds:

R_t[θ] ≤ (2QA/n) ( (64R²/γ²) log(enγ/(8R²)) log(32n/γ²) + log(4/δ) )

where ∀y ∈ Y, q_y ≤ Q, |A(g_r(y))| ≤ A, r ∈ {1, . . . , q_y}, and R = 2 R_M R_X.

Proof. Similar to that of Theorem 4.11 in [7], noting that the size of the examples in Z is upper-bounded by R = 2 R_M R_X.
4 Experiments
Experimental Setting We performed experiments on the "ModApte" split of the Reuters-21578 dataset. We selected the 10 most popular categories, thus obtaining a reduced set of 6,490 training documents and a set of 2,545 test documents. The corpus was then preprocessed by discarding numbers and punctuation and converting letters to lowercase. We used a stop-list to remove very frequent words, and stemming was performed by means of Porter's stemmer. Term weights are calculated according to the tf-idf function. Term selection was not considered, thus obtaining a set of 28,006 distinct features.
multi-label problem on top. Five different well-known cost functions have been used. Let
x be an instance having ranking label y. IErr is the cost function indicating a non-perfect
ranking and corresponds to the identity mapping in Figure 1-(a). DErr is the cost defined
as the number of relevant classes uncorrectly ranked by the algorithm and corresponds to
the domination mapping in Figure 1-(b). dErr is the cost obtained counting the number of
uncorrect rankings and corresponds to the disagreement mapping in Figure 1-(c). Other two
well-known Information Retrieval (IR) based cost functions have been used. The OneErr
cost function that is 1 whenever the top ranked class is not a relevant class and the average
P
|{r 0 ?y:rank(x,r 0 )?rank(x,r)}|
1
precision cost function, which is AvgP = |y|
.
r?y
rank(x,r)
Results The model evaluation has been performed by comparing three different label mappings for KPLM and the baseline MMP algorithm [4], a variant of the Perceptron algorithm for ranking problems, with respect to the above-mentioned ranking losses. We used the configuration which gave the best results in the experiments reported in [4]. KPLM has been implemented setting s = m and using the standard basis vectors e_r ∈ R^m as the codes associated to the classes. A linear kernel k(x, y) = ⟨x, y⟩ + 1 was used. Model selection for the KPLM has been performed by means of a 5-fold cross-validation for different values of the parameter C. The optimal parameters have been chosen as the ones minimizing the mean of the values of the loss (the one used for training) over the different folds. In Table 1 we report the obtained results. It is clear that KPLM definitely outperforms the MMP method. This is probably due to the use of margins in KPLM. Moreover, using the identity and domination mappings seems to lead to models that outperform the ones obtained by using the disagreement mapping. Interestingly, this also happens when comparing with respect to its own corresponding cost. This can be due to a looser approximation (as a sum of approximations) of the true cost function. The same trend was confirmed by another set of experiments on artificial datasets that we are not able to report here due to space limitations.
Method       | IErr % | DErr % | dErr % | OneErr % | AvgP %
MMP          |  5.07  |  4.92  |  0.89  |   4.28   | 97.49
KPLM (G_I)   |  3.77  |  3.66  |  0.55  |   3.10   | 98.25
KPLM (G_D)   |  3.81  |  3.59  |  0.54  |   3.14   | 98.24
KPLM (G_d)   |  4.12  |  4.13  |  0.66  |   3.58   | 97.99

Table 1: Comparisons of ranking performance for different methods using different loss functions, according to different evaluation metrics. Best results are shown in bold.
5 Conclusions and Future Work
We have presented a common framework for the analysis of general multiclass problems
and proposed a kernel-based method as an instance of this setting which has shown very
good results on a binary category ranking task. Promising directions of research, which we are currently pursuing, include experimenting with coding optimization and extending the current setting to on-line learning, interdependent labels (e.g. hierarchical or any other structured classification), ordinal regression problems, and classification with costs.
References
[1] F. Aiolli. Large Margin Multiclass Learning: Models and Algorithms. PhD thesis, Dept. of Computer Science, University of Pisa, 2004. http://www.di.unipi.it/~aiolli/thesis.ps.
[2] F. Aiolli and A. Sperduti. Multi-prototype support vector machine. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2003.
[3] K. Crammer and Y. Singer. On the learnability and design of output codes for multiclass problems. In Proceedings of the Thirteenth Annual Conference on Computational Learning Theory, pages 35-46, 2000.
[4] K. Crammer and Y. Singer. A new family of online algorithms for category ranking. Journal of Machine Learning Research, 2003.
[5] O. Dekel, C.D. Manning, and Y. Singer. Log-linear models for label ranking. In Advances in Neural Information Processing Systems, 2003.
[6] R.O. Duda, P.E. Hart, and D.G. Stork. Pattern Classification, chapter 5, page 266. Wiley, 2001.
[7] S. Har-Peled, D. Roth, and D. Zimak. Constraint classification: A new approach to multiclass classification. In Proceedings of the 13th International Conference on Algorithmic Learning Theory (ALT-02), 2002.
[8] G. Rätsch, A. Smola, and S. Mika. Adapting codes and embeddings for polychotomies. In Advances in Neural Information Processing Systems, 2002.
[9] V. Vapnik. Statistical Learning Theory. Wiley, New York, NY, 1998.
[10] J. Weston and C. Watkins. Multiclass support vector machines. In M. Verleysen, editor, Proceedings of ESANN99. D. Facto Press, 1999.
[11] T. Zhang and F.J. Oles. Text categorization based on regularized linear classification methods. Information Retrieval, 1(4):5-31, 2001.
An Analog VLSI Model of Adaptation
in the Vestibulo-Ocular Reflex
Stephen P. DeWeerth and Carver A. Mead
California Institute of Technology
Pasadena, CA 91125
ABSTRACT
The vestibulo-ocular reflex (VOR) is the primary mechanism that
controls the compensatory eye movements that stabilize retinal images during rapid head motion. The primary pathways of this system are feed-forward, with inputs from the semicircular canals and
outputs to the oculomotor system. Since visual feedback is not
used directly in the VOR computation, the system must exploit
motor learning to perform correctly. Lisberger (1988) has proposed a model for adapting the VOR gain using image-slip information from the retina. We have designed and tested analog very large-scale integrated (VLSI) circuitry that implements a simplified version of Lisberger's adaptive VOR model.
1 INTRODUCTION
A characteristic commonly found in biological systems is their ability to adapt their
function based on their inputs. The combination of the need for precision and
the variability inherent in the environment necessitates such learning in organisms.
Sensorimotor systems present obvious examples of behaviors that require learning
to function correctly. Simple actions such as walking, jumping, or throwing a ball
are not performed correctly the first time they are attempted; rather, they require
motor learning throughout many iterations of the action.
When creating artificial systems that must execute tasks accurately in uncontrolled
environments, designers can exploit adaptive techniques to improve system performance. With this in mind, it is possible for the system designer to take inspiration
from systems already present in biology. In particular, sensorimotor systems, due to
their direct interfaces with the environment, can gather an immediate indication of
the correctness of an action, and hence can learn without supervision. The salient
characteristics of the environment are extracted by the adapting system and do not
need to be specified in a user-defined training set.
2 THE VESTIBULO-OCULAR REFLEX
The vestibulo-ocular reflex (VOR) is an example of a sensorimotor system that
requires adaptation to function correctly. The desired response of this system is
a gain of -1.0 from head movements to eye movements (relative to the head), so
that, as the head moves, the eyes remain fixed relative to the surroundings. Due
to the feed-forward nature of the primary VOR pathways, some form of adaptation
must be present to calibrate the gain of the response in infants and to maintain this
calibration during growth, disease, and aging (Robinson, 1976).
Lisberger (1988) demonstrated variable gain of the VOR by fitting magnifying spectacles onto a monkey. The monkey moved about freely, allowing the VOR to learn
the new relationship between head and eye movements. The monkey was then
placed on a turntable, and its eye velocity was measured while head motion was
generated. The eye-velocity response to head motion for three different lens magnifications is shown in Figure 1.
Figure 1: VOR data from Lisberger (1988). A monkey was fitted with magnifying
spectacles and allowed to learn the gain needed for an accurate VOR. The monkey's
head was then moved at a controlled velocity, and the eye velocity was measured.
Three experiments were performed with spectacle magnifications of 0.25, 1.0, and
2.0. The corresponding eye velocities showed VOR gains G of -0.32, -1.05, and
-1.57.
Lisberger has proposed a simple model for this adaptation that uses retinal-slip
information from the visual system, along with the head-motion information from
the vestibular system, to adapt the gain of the forward pathways in the VOR.
Figure 2 is a schematic diagram of the pathways subserving the VOR. There are
two parallel VOR pathways from the vestibular system to the motor neurons that
control eye movements (Snyder, 1988). One pathway consists of vestibular inputs,
VOR interneurons, and motor neurons. This pathway has been shown to exhibit an
unmodified gain of approximately -0.3. The second pathway consists of vestibular
inputs, floccular target neurons (FTN), and motor neurons. This pathway is the
site of the proposed gain adaptation.
Figure 2: A schematic diagram of the VOR (Lisberger, 1988). Two pathways exist connecting the vestibular neurons to the motor neurons driving the eye muscles. The unmodified pathway connects via the VOR interneurons. The modified pathway (the proposed site of gain adaptation) connects via the floccular target neurons (FTN). Outputs from the Purkinje cells (PC) in the flocculus mediate gain adaptation at the FTNs.
Lisberger's hypothesis is that feedback from the visual system through the flocculus
is used to facilitate the adaptation of the gain of the FTNs. Image slip on the
retina indicates that the total VOR gain is not adjusted correctly. The relationship
between the head motion and the image slip on the retina determines the direction
in which the gain must be changed. For example, if the head is turning to the right
and the retinal image slip is to the right, the eyes are turning too slowly and the
gain should be increased. The direction of the gain change can be considered to be
the sign of the product of head motion and retinal image slip.
3 THE ANALOG VLSI IMPLEMENTATION
We implemented a simplified version of Lisberger's VOR model using primarily
subthreshold analog very large-scale integrated (VLSI) circuitry (Mead, 1989). We
interpreted the Lisberger data to suggest that the gain of the modified pathway
varies from zero to some fixed upper limit. This assumption gives a minimum VOR
gain equal to the gain of the unmodified pathway, and a maximum VOR gain equal
to the sum of the unmodified pathway gain and the maximum modified pathway
gain. We designed circuitry for the unmodified pathway to give an overshoot response to a step function similar to that seen in Figure 1.
Figure 3: An analog VLSI sensorimotor framework. Each input circuit consists of a bias transistor and a differential pair. The voltage V_b sets a fixed current i_b through the bias transistor. This current is partitioned into currents i_1 and i_2 according to the differential voltage V_1 − V_2, and these currents are summed onto a pair of global wires. The global currents are used as inputs to two neuron circuits that convert the currents into pulse trains P_1 and P_2.
The VOR model was designed within the sensorimotor framework shown in Figure 3
(DeWeerth, 1987). The framework consists of a number of input circuits and two
output circuits. Each input circuit consists of a bias transistor and a differential pair.
The gain of the circuit is set by a fixed current through the bias transistor. This
current is partitioned according to the differential input voltage into two currents
that pass through the differential-pair transistors. Following the standard subthreshold analysis in Mead (1989), with voltages expressed in units of kT/q and κ the gate-coupling coefficient, the equations for these currents are

i_1 = i_b e^{κV_1} / (e^{κV_1} + e^{κV_2}),    i_2 = i_b e^{κV_2} / (e^{κV_1} + e^{κV_2}).
The two currents are summed onto a pair of global wires. Each of these global
currents is input to a neuron circuit (Mead, 1989) that converts the current linearly
into the duty cycle of a pulse train. The pulse trains can be used to drive a pair
of antagonistic actuators that can bidirectionally control the motion of a physical
plant. We implement a system (such as the VOR) within this framework by augmenting the differential pairs with circuitry that computes the function needed for
the particular application.
Figure 4: The VLSI implementation of the unmodified pathway. The left differential pair is used to convert proportionally the differential voltage representing head velocity (V_head − V_ref) into output currents. The right differential pair is used in conjunction with a first-order section to give output currents related to the derivative of the head velocity. The gains of the two differential pairs are set by the voltages V_p and V_o.
The unmodified pathway is implemented in the framework using two differential
pairs (Figure 4). One of these circuits proportionally converts the head motion into
output currents. This circuit generates a step in eye velocity when presented with
a step in head velocity. The other differential pair is combined with a first-order
section to generate output currents related to the derivative of the head motion.
This circuit generates a broad impulse in eye velocity when presented with a step
in head velocity. By setting the gains of the proportional and derivative circuits
correctly, we can make the overall response of this pathway similar to that of the
unmodified pathway seen when Lisberger's monkey was presented with a step in
head velocity.
We implement the modified pathway within the framework using a single differentialpair circuit that generates output currents proportional to the head velocity (Figure 5). The system adapts the gain of this pathway by integrating an error signal
with respect to time. The error signal is a current, which the circuitry computes
by multiplying the retinal image slip and the head velocity. This error current is
integrated onto a capacitor, and the voltage on the capacitor is then converted to
a current that sets the gain of the modified pathway.
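The adaptation loop can be summarized in a few lines of simulation (our sketch, not the authors' circuit equations). The slip model (slip = magnification · head + eye, zero when the total gain matches the lens magnification), the learning rate, the gain limits, and the trial count are all assumed values chosen for illustration; the asymptotic gains are therefore idealized relative to the measured data.

```python
# Discrete-time sketch of the adaptive VOR model described above (our code).
# eye = -(g_unmod + g_mod) * head, in head coordinates; the modified-pathway
# gain integrates head * slip, clipped to its [0, g_mod_max] range.
def adapt_vor(magnification, g_unmod=0.3, g_mod_max=1.4, lr=0.05, trials=400):
    g_mod = 0.7                                   # arbitrary starting gain
    for _ in range(trials):
        head = 1.0                                # unit head-velocity step
        eye = -(g_unmod + g_mod) * head           # eye velocity (head frame)
        slip = magnification * head + eye         # zero when gain == magnification
        g_mod = min(g_mod_max, max(0.0, g_mod + lr * head * slip))
    return -(g_unmod + g_mod)                     # total VOR gain

for m in (0.25, 1.0, 2.0):
    print(f"magnification {m:4.2f}: adapted VOR gain {adapt_vor(m):+.2f}")
# The gain is pinned at -0.30 for m=0.25 (unmodified-pathway floor) and at the
# upper limit for m=2.0, mirroring the qualitative behavior reported below.
```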
4 EXPERIMENTAL METHOD AND RESULTS
To test our VOR circuitry, we designed a simple electrical model of the head and
eye (Figure 6). The head motion is represented by a voltage that is supplied by a
function generator. The oculomotor plant (the eye and corresponding muscles) is
modeled by an RC circuit that integrates output pulses from the VOR circuitry into
a voltage that represents eye velocity in head coordinates. We model the magnifying
Figure 5: The VLSI implementation of the modified pathway. A differential pair is used to convert proportionally the differential voltage representing head velocity (V_head − V_ref) into output currents. Adaptive circuitry capacitively integrates the product of head velocity and retinal image slip as a voltage V_g. This voltage is converted to a current i_g that sets the gain of the differential pair. The voltage V_A sets the maximum gain of this pathway.
Figure 6: A simple model of the oculomotor plant. An RC circuit (bottom) integrates pulse trains P_1 and P_2 into a voltage V_eye that encodes eye velocity. The magnifying spectacles are modeled by an operational amplifier circuit (top), which has a magnification m = R_2/R_1. The retinal image slip is encoded by the difference between the output voltage of this circuit and the voltage V_head that encodes the head velocity.
spectacles using an operational amplifier circuit that multiplies the eye velocity by
a gain before the velocity is used to compute the slip information. We compute the
image slip by subtracting the head velocity from the magnified eye velocity.
Figure 7: Experimental data from the VOR circuitry. The system was allowed to
adapt to spectacle magnifications of 0.25, 1.0, and 2.0. After adaptation, the eye
velocities showed corresponding VOR gains of -0.32, -0.92, and -1.45.
We performed an experiment to generate data to compare to the data measured by
Lisberger (Figure 1). A head-velocity step was supplied by a function generator and
was used as input to the VOR circuitry. The VOR outputs were then converted to an
eye velocity by the model of the oculomotor plant. The proportional, derivative, and
maximum adaptive gains were set to give a system response similar to that observed
in the monkey. The system was allowed to adapt over a number of presentations
of the input for each spectacle magnification. The resulting eye velocity data are
displayed in Figure 7.
5 CONCLUSIONS AND FUTURE WORK
In this paper, we have presented an analog VLSI implementation of a model of a
biological sensorimotor system. The system performs unsupervised learning using
signals generated as the system interacts with its environment. This model can
be compared to traditional adaptive control schemes (Åström, 1987) for performing similar tasks. In the future, we hope to extend the model presented here to
incorporate more of the information known about the VOR.
We are currently designing and testing chips that use ultraviolet storage techniques
for gain adaptation. These chips will allow us to achieve adaptive time constants of
the same order as those found in biological systems (minutes to hours).
We are also combining our chips with a mechanical model of the head and eyes to
give more accurate environmental feedback. We can acquire true image-slip data
using a vision chip (Tanner, 1986) that computes global field motion.
Acknowledgments
We thank Steven Lisberger for his suggestions for improving our implementation of
the VOR model. We would also like to thank Massimo Sivilotti, Michelle Mahowald,
Michael Emerling, Nanette Boden, Richard Lyon, and Tobias Delbrück for their help
during the writing of this paper.
References
K.J. Åström, Adaptive feedback control. Proceedings of the IEEE, 75(2):185–217,
1987.
S.P. DeWeerth, An Analog VLSI Framework for Motor Control. M.S. Thesis, Department of Computer Science, California Institute of Technology, Pasadena, CA,
1987.
S.G. Lisberger, The neural basis for learning simple motor skills. Science, 242:728–735, 1988.
C.A. Mead, Analog VLSI and Neural Systems. Addison-Wesley, Reading, MA,
1989.
D.A. Robinson, Adaptive gain control of vestibulo-ocular reflex by the cerebellum.
J. Neurophysiology, 39:954–969, 1976.
L.H. Snyder and W.M. King, Vertical vestibuloocular reflex in cat: asymmetry and
adaptation. J. Neurophysiology, 59:279–298, 1988.
J.E. Tanner, Integrated Optical Motion Detection. Ph.D. Thesis, Department of
Computer Science, California Institute of Technology, S223:TR:86, Pasadena, CA,
1986.
Kernel Projection Machine: a New Tool for
Pattern Recognition?
Gilles Blanchard
Fraunhofer First (IDA),
Kékuléstr. 7, D-12489 Berlin, Germany
[email protected]
Régis Vert
LRI, Université Paris-Sud,
Bat. 490, F-91405 Orsay, France
Masagroup
24 Bd de l'Hôpital, F-75005 Paris, France
[email protected]
Pascal Massart
Département de Mathématiques,
Université Paris-Sud,
Bat. 425, F-91405 Orsay, France
[email protected]
Laurent Zwald
Département de Mathématiques,
Université Paris-Sud,
Bat. 425, F-91405 Orsay, France
[email protected]
Abstract
This paper investigates the effect of Kernel Principal Component Analysis (KPCA) within the classification framework, essentially the regularization properties of this dimensionality reduction method. KPCA has
been previously used as a pre-processing step before applying an SVM
but we point out that this method is somewhat redundant from a regularization point of view and we propose a new algorithm called Kernel Projection Machine to avoid this redundancy, based on an analogy
with the statistical framework of regression for a Gaussian white noise
model. Preliminary experimental results show that this algorithm reaches
the same performance as an SVM.
1 Introduction
Let $(x_i, y_i)_{i=1,\dots,n}$ be n given realizations of a random variable (X, Y) living in $\mathcal{X} \times \{-1, 1\}$.
Let P denote the marginal distribution of X. The $x_i$'s are often referred to as
inputs (or patterns), and the $y_i$'s as labels. Pattern recognition is concerned with finding a
classifier, i.e. a function that assigns a label to any new input $x \in \mathcal{X}$ and that makes as few
prediction errors as possible.
It is often the case with real world data that the dimension of the patterns is very large,
and some of the components carry more noise than information. In such cases, reducing
the dimension of the data before running a classification algorithm on it sounds reasonable.
One of the most famous methods for this kind of pre-processing is PCA, and its kernelized
version (KPCA), introduced in the pioneering work of Schölkopf, Smola and Müller [8].
?
This work was supported in part by the IST Programme of the European Community, under the
PASCAL Network of Excellence, IST-2002-506778.
Now, whether the quality of a given classification algorithm can be significantly improved
by using such pre-processed data still remains an open question. Some experiments have
already been carried out to investigate the use of KPCA for classification purposes, and
numerical results are reported in [8]. The authors considered the USPS handwritten digit
database and reported the test error rates achieved by the linear SVM trained on the data
pre-processed with KPCA: the conclusion was that the larger the number of principal components, the better the performance. In other words, the KPCA step was useless or even
counterproductive.
This conclusion might be explained by a redundancy arising in their experiments: there
is actually a double regularization, the first corresponding to the dimensionality reduction
achieved by KPCA, and the other to the regularization achieved by the SVM. With that in
mind it does not seem so surprising that KPCA does not help in that case: whatever the
dimensionality reduction, the SVM anyway achieves a (possibly strong) regularization.
Still, de-noising the data using KPCA seems relevant. The aforementioned experiments
suggest that KPCA should be used together with a classification algorithm that is not regularized (e.g. a simple empirical risk minimizer): in that case, it should be expected that the
KPCA is by itself sufficient to achieve regularization, the choice of the dimension being
guided by adequate model selection.
In this paper, we propose a new algorithm, called the Kernel Projection Machine (KPM),
that implements this idea: an optimal dimension is sought so as to minimize the test error
of the resulting classifier. A nice property is that the training labels are used to select the
optimal dimension ? optimal means that the resulting D-dimensional representation of the
data contains the right amount of information needed to classify the inputs. To sum up, the
KPM can be seen as a dimensionality-reduction-based classification method that takes into
account the labels for the dimensionality reduction step.
This paper is organized as follows: Section 2 gives some statistical background on regularized methods vs. projection methods. Its goal is to explain the motivation and the "Gaussian
intuition" that lies behind the KPM algorithm from a statistical point of view. Section 3
explicitly gives the details of the algorithm; experiments and results, which should be considered preliminary, are reported in Section 4.
2 Motivations for the Kernel Projection Machine
2.1 The Gaussian Intuition: a Statistician's Perspective
Regularization methods have been used for quite a long time in nonparametric statistics
since the pioneering works of Grace Wahba in the eighties (see [10] for a review). Even
if the classification context has its own specificity and offers new challenges (especially
when the explanatory variables live in a high dimensional Euclidean space), it is good to
remember what the essence of regularization is in the simplest nonparametric statistical
framework: the Gaussian white noise.
So let us assume that one observes a noisy signal $dY(x) = s(x)\,dx + \frac{1}{\sqrt{n}}\,dw(x)$, $Y(0) = 0$,
on [0,1] where dw(x) denotes standard white noise. To the reader not familiar with this
model, it should be considered as nothing more but an idealization of the well-known fixed
design regression problem $Y_i = s(i/n) + \varepsilon_i$ for $i = 1, \dots, n$, where $\varepsilon_i \sim \mathcal{N}(0,1)$, and where
the goal is to recover the regression function s. (The white noise model is actually simpler
to study from a mathematical point of view.) The least squares criterion is defined as
$$\gamma_n(f) = \|f\|^2 - 2\int_0^1 f(x)\,dY(x)$$
for every $f \in L^2([0,1])$.
Given a Mercer kernel k on $[0,1] \times [0,1]$, the regularized least squares procedure proposes
to minimize
$$\gamma_n(f) + \lambda_n \|f\|^2_{\mathcal{H}_k} \qquad (1)$$
where $(\lambda_n)$ is a conveniently chosen sequence and $\mathcal{H}_k$ denotes the RKHS induced by k.
This procedure can indeed be viewed as a model selection procedure, since minimizing
$\gamma_n(f) + \lambda_n \|f\|^2_{\mathcal{H}_k}$ amounts to minimizing
$$\inf_{\|f\|_{\mathcal{H}_k} \le R} \gamma_n(f) + \lambda_n R^2$$
over R > 0. In other words, regularization aims at selecting the "best" RKHS ball
$\{f : \|f\|_{\mathcal{H}_k} \le R\}$ to represent our data.
At this stage, it is interesting to realize that balls in the RKHS can be viewed as
ellipsoids in the original Hilbert space $L^2([0,1])$. Indeed, let $(\phi_i)_{i=1}^{\infty}$ be some orthonormal
basis of eigenfunctions for the compact and self-adjoint operator
$$T_k : f \longmapsto \int_0^1 k(x, y) f(x)\,dx.$$
Then, setting $\beta_j = \int_0^1 f(x)\phi_j(x)\,dx$, one has $\|f\|^2_{\mathcal{H}_k} = \sum_{j=1}^{\infty} \beta_j^2/\lambda_j$, where $(\lambda_j)_{j \ge 1}$ denotes
the non-increasing sequence of eigenvalues corresponding to $(\phi_j)_{j \ge 1}$. Hence
$$\{f : \|f\|_{\mathcal{H}_k} \le R\} = \Big\{ \sum_{j=1}^{\infty} \beta_j \phi_j \;:\; \sum_{j=1}^{\infty} \frac{\beta_j^2}{\lambda_j} \le R^2 \Big\}.$$
Now, due to the approximation properties of the finite-dimensional spaces $\{\phi_j, j \le D\}$,
$D \in \mathbb{N}^*$, with respect to the ellipsoids, one can think of penalized finite-dimensional
projection as an alternative method to regularization. More precisely, if $\hat{s}_D$ denotes the projection
estimator on $\mathrm{span}\{\phi_j, j \le D\}$, i.e. $\hat{s}_D = \sum_{j=1}^{D} \big(\int \phi_j\,dY\big)\,\phi_j$, and one considers the penalized
selection criterion $\hat{D} = \arg\min_D \big[\gamma_n(\hat{s}_D) + \tfrac{2D}{n}\big]$, then it is proved in [1] that the selected
estimator $\hat{s}_{\hat{D}}$ obeys the following oracle inequality:
$$E\big[\|s - \hat{s}_{\hat{D}}\|^2\big] \le C \inf_{D \ge 1} E\|s - \hat{s}_D\|^2,$$
where C is some absolute constant.
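As a concrete numerical illustration of this selection rule, consider the minimal sketch below. The direct-coefficient setup and all names are assumptions made for illustration; it uses the fact that $\gamma_n(\hat{s}_D)$ reduces to $-\sum_{j \le D} \hat{\beta}_j^2$ up to a constant.

```python
import numpy as np

def penalized_projection(beta_hat, n):
    """Minimal sketch of the penalized projection estimator: beta_hat[j]
    holds the empirical coefficient int phi_j dY. Since gamma_n(s_D)
    equals -sum_{j<=D} beta_hat[j]^2 up to a constant, the criterion
    gamma_n(s_D) + 2D/n is minimized over D via a cumulative sum."""
    D_range = np.arange(1, len(beta_hat) + 1)
    crit = -np.cumsum(beta_hat ** 2) + 2.0 * D_range / n
    D_hat = int(np.argmin(crit)) + 1
    s_hat = np.where(np.arange(len(beta_hat)) < D_hat, beta_hat, 0.0)
    return s_hat, D_hat
```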
The nice thing is that whenever s belongs to some ellipsoid
$$\mathcal{E}(c) = \Big\{ \sum_{j=1}^{\infty} \beta_j \phi_j \;:\; \sum_{j=1}^{\infty} \frac{\beta_j^2}{c_j^2} \le 1 \Big\},$$
where $(c_j)_{j \ge 1}$ is a decreasing sequence tending to 0 as $j \to \infty$, then
$$\inf_{D \ge 1} E\|s - \hat{s}_D\|^2 = \inf_{D \ge 1}\Big[ \inf_{t \in S_D} \|s - t\|^2 + \frac{D}{n} \Big] \le \inf_{D \ge 1}\Big[ c_D^2 + \frac{D}{n} \Big].$$
As shown in [5], $\inf_{D \ge 1}\big[c_D^2 + \tfrac{D}{n}\big]$ is (up to some absolute constant) of the order of
magnitude of the minimax risk over $\mathcal{E}(c)$.
As a consequence, the estimator $\hat{s}_{\hat{D}}$ is simultaneously minimax over the collection of all
ellipsoids $\mathcal{E}(c)$, which in particular includes the collection $\{\mathcal{E}(\sqrt{\lambda}\,R),\ R > 0\}$.
To conclude and summarize: from a statistical performance point of view, what we can
expect from a regularized estimator $\hat{s}$ (i.e. a minimizer of (1)) is that a convenient choice of $\lambda_n$ ensures that $\hat{s}$ is simultaneously minimax over the collection of ellipsoids
$\{\mathcal{E}(\sqrt{\lambda}\,R),\ R > 0\}$ (at least as far as asymptotic rates of convergence are concerned).
The alternative estimator $\hat{s}_{\hat{D}}$ actually achieves this goal, and even better, since it is also
adaptive over the collection of all ellipsoids and not only the family $\{\mathcal{E}(\sqrt{\lambda}\,R),\ R > 0\}$.
2.2 Extension to a general classification framework
In this section we go back to the classification framework described in the introduction. First
of all, it has been noted by several authors ([6],[9]) that the SVM can be seen as a regularized estimation method, where the regularizer is the squared norm of the function in $\mathcal{H}_k$.
Precisely, the SVM algorithm solves the following unconstrained optimization problem:
$$\min_{f \in \mathcal{H}_k^b} \;\frac{1}{n} \sum_{i=1}^{n} (1 - y_i f(x_i))_+ + \lambda \|f\|^2_{\mathcal{H}_k}, \qquad (2)$$
where $\mathcal{H}_k^b = \{f(x) + b,\ f \in \mathcal{H}_k,\ b \in \mathbb{R}\}$.
The above regularization can be viewed as a model selection process over RKHS balls,
similarly to the previous section. Now, the line of ideas developed there suggests that it
might actually be a better idea to consider a sequence of finite-dimensional estimators.
Additionally, it has been shown in [4] that the regularization term of the SVM is actually
too strong. We therefore transpose the ideas of the previous Gaussian case to the classification
framework. Consider a Mercer kernel k defined on $\mathcal{X} \times \mathcal{X}$ and let $T_k$ denote the operator
associated with kernel k in the following way:
$$T_k : f(\cdot) \in L^2(\mathcal{X}) \;\longmapsto\; \int_{\mathcal{X}} k(x, \cdot) f(x)\,dP(x) \in L^2(\mathcal{X}).$$
Let $\phi_1, \phi_2, \dots$ denote the eigenfunctions of $T_k$, ordered by decreasing associated eigenvalues
$(\lambda_i)_{i \ge 1}$. For each integer D, the subspace $F_D = \mathrm{span}\{\mathbb{1}, \phi_1, \dots, \phi_D\}$
(where $\mathbb{1}$ denotes the constant function equal to 1) corresponds to a subspace of $\mathcal{H}_k^b$ associated with kernel k, and $\mathcal{H}_k^b = \bigcup_{D=1}^{\infty} F_D$. Instead of selecting the "best" ball in the RKHS,
as the SVM does, we consider the analogue of the projection estimator $\hat{s}_D$:
$$\tilde{f}_D = \arg\min_{f \in F_D} \sum_{i=1}^{n} (1 - y_i f(x_i))_+, \qquad (3)$$
that is, more explicitly,
$$\tilde{f}_D(\cdot) = \sum_{j=1}^{D} \alpha_j^* \phi_j(\cdot) + b^*$$
with
$$(\alpha^*, b^*) = \arg\min_{(\alpha \in \mathbb{R}^D,\, b \in \mathbb{R})} \sum_{i=1}^{n} \Big(1 - y_i \Big(\sum_{j=1}^{D} \alpha_j \phi_j(x_i) + b\Big)\Big)_+. \qquad (4)$$
An appropriate D can then be chosen using an adequate model selection procedure such as
penalization; we do not address this point in detail in the present work but it is of course
the next step to be taken.
Unfortunately, since the underlying probability P is unknown, neither are the eigenfunctions $\phi_1, \dots$, and it is therefore not possible to implement this procedure directly. We thus
resort to considering empirical quantities, as will be explained in more detail in Section 3.
Essentially, the unknown vector space spanned by the first eigenfunctions of $T_k$ is replaced by the space spanned by the first eigenvectors of the normalized kernel Gram matrix
$\frac{1}{n}(k(x_i, x_j))_{1 \le i,j \le n}$. At this point the relation with Kernel PCA appears. We
next make this relation precise and give an interpretation of the resulting algorithm in terms of
dimensionality reduction.
2.3 Link with Kernel Principal Component Analysis
Principal Component Analysis (PCA) and its non-linear variant, KPCA, are widely used algorithms in data analysis. They extract from the input data space a basis $(v_i)_{i \ge 1}$ which is, in
They are often used as a pre-processing on the data in order to reduce the dimensionality
or to perform de-noising.
As will be made more explicit in the next section, the Kernel Projection Machine consists
in replacing the ideal projection estimator defined by (3) by
$$\hat{f}_D = \arg\min_{f \in S_D} \frac{1}{n} \sum_{i=1}^{n} (1 - y_i f(X_i))_+,$$
where $S_D$ is the space spanned by the first D principal components chosen by KPCA in
feature space. Hence, roughly speaking, in the KPM the SVM penalization is
replaced by dimensionality reduction.
Choosing D amounts to selecting the optimal D-dimensional representation of our data
for the classification task; in other words, to extracting the information needed for
this task by model selection, taking into account the relevance of each direction for
classification.
To conclude, the KPM is a method of dimensionality reduction that takes into account the
labels of the training data to choose the ?best? dimension.
3 The Kernel Projection Machine Algorithm
In this section, the empirical (and computable) version of the KPM algorithm is derived
from the previous theoretical arguments.
In practice the true eigenfunctions of the kernel operator are not computable. But since
only the values of the functions $\phi_1, \dots, \phi_D$ at the points $x_1, \dots, x_n$ are needed for minimizing
the empirical risk over $F_D$, the eigenvectors of the kernel matrix $K = (k(x_i, x_j))_{1 \le i,j \le n}$
will be enough for our purpose. Indeed, it is well known in numerical analysis (see [2])
that the eigenvectors of the kernel matrix approximate the eigenfunctions of the kernel
operator. This result has been pointed out in [7] in a more probabilistic language. More
precisely, if $V_1, \dots, V_D$ denote the D first eigenvectors of K with associated eigenvalues
$\hat{\lambda}_1 \ge \hat{\lambda}_2 \ge \dots \ge \hat{\lambda}_D$, then for each $V_i$
$$V_i = \big(V_i^{(1)}, \dots, V_i^{(n)}\big) \approx (\phi_i(x_1), \dots, \phi_i(x_n)). \qquad (5)$$
Hence, considering Equation (4), the empirical version of the algorithm described above
will first consist of solving, for each dimension D, the following optimization problem:
$$(\alpha^*, b^*) = \arg\min_{\alpha \in \mathbb{R}^D,\, b \in \mathbb{R}} \sum_{i=1}^{n} \Big(1 - y_i \Big(\sum_{j=1}^{D} \alpha_j V_j^{(i)} + b\Big)\Big)_+. \qquad (6)$$
Then the solution should be
$$\tilde{f}_D(\cdot) = \sum_{j=1}^{D} \alpha_j^* \phi_j(\cdot) + b^*. \qquad (7)$$
Once again the true functions $\phi_j$ are unknown. At this stage, we can do an expansion of
the solution in terms of the kernel, similarly to the SVM algorithm, in the following way:
$$\tilde{f}_D(\cdot) = \sum_{i=1}^{n} \tilde{\alpha}_i^* k(x_i, \cdot) + b^*. \qquad (8)$$
Figure 1: Left: KPM risk (solid) and empirical risk (dashed) versus dimension D. Right:
SVM risk and empirical risk versus C. Both on dataset "flare-solar".
Evaluating both expressions (7) and (8) at the points $x_1, \dots, x_n$ leads to the following equation:
$$\alpha_1^* V_1 + \dots + \alpha_D^* V_D = K \tilde{\alpha}^*, \qquad (9)$$
which has a straightforward solution: $\tilde{\alpha}^* = \sum_{j=1}^{D} \frac{\alpha_j^*}{\hat{\lambda}_j} V_j$ (provided the D first eigenvalues
are all strictly positive).
Now the KPM algorithm can be summed up as follows:
1. Given data $x_1, \dots, x_n \in \mathcal{X}$ and a positive kernel k defined on $\mathcal{X} \times \mathcal{X}$, compute
the kernel matrix K and its eigenvectors $V_1, \dots, V_n$ together with its eigenvalues
in decreasing order $\hat{\lambda}_1 \ge \hat{\lambda}_2 \ge \dots \ge \hat{\lambda}_n$.
2. For each dimension D such that $\hat{\lambda}_D > 0$, solve the linear optimization problem
$$(\alpha^*, b^*) = \arg\min_{\alpha, b, \xi} \sum_{i=1}^{n} \xi_i \qquad (10)$$
under the constraints, for all $i = 1 \dots n$: $\xi_i \ge 0$ and $y_i\big(\sum_{j=1}^{D} \alpha_j V_j^{(i)} + b\big) \ge 1 - \xi_i$. (11)
Next, compute $\tilde{\alpha}^* = \sum_{j=1}^{D} \frac{\alpha_j^*}{\hat{\lambda}_j} V_j$ and $\tilde{f}_D(\cdot) = \sum_{i=1}^{n} \tilde{\alpha}_i^* k(x_i, \cdot) + b^*$.
3. The last step is a model selection problem: choose a dimension $\hat{D}$ for which
$\tilde{f}_{\hat{D}}$ performs well. We do not address this point directly here; one can think of
applying cross-validation, or of penalizing the empirical loss by a penalty function
depending on the dimension.
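A minimal sketch of steps 1 and 2 could look like the following, assuming SciPy's linear-programming solver in place of the GLPK library used in the experiments; the function name and this exact LP encoding of (10)-(11) are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog

def kpm_fit(K, y, D):
    """Sketch of KPM steps 1-2 for one dimension D (names illustrative).
    K is the (n, n) kernel Gram matrix, y holds labels in {-1, +1}."""
    n = K.shape[0]
    lam, V = np.linalg.eigh(K)                 # ascending eigenvalues
    order = np.argsort(lam)[::-1][:D]          # keep the top-D pairs;
    lam, V = lam[order], V[:, order]           # assumes lam[D-1] > 0 (step 2)

    # variables z = [alpha (D), b (1), xi (n)]; minimize sum(xi), eq. (10)
    c = np.concatenate([np.zeros(D + 1), np.ones(n)])
    # constraints (11): -y_i (V[i] @ alpha + b) - xi_i <= -1, xi >= 0
    A_ub = np.hstack([-y[:, None] * V, -y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    bounds = [(None, None)] * (D + 1) + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    alpha, b = res.x[:D], res.x[D]
    alpha_tilde = V @ (alpha / lam)            # eq. (9) solution
    return alpha_tilde, b                      # f(x) = sum_i alpha_tilde[i] k(x_i, x) + b
```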
4 Experiments
The KPM was implemented in Matlab using the free library GLPK for solving the linear
optimization problem. Since the algorithm involves the eigendecomposition of the kernel
matrix, only small datasets have been considered for the moment.
In order to assess the performance of the KPM, we carried out experiments on benchmark
datasets available on Gunnar Rätsch's web site [3]. Several state-of-the-art algorithms have
already been applied to those datasets, among them the SVM. All results are reported on
the web site. To get a valid comparison with the SVM, on each classification task, we used
Table 1: Test errors of the KPM on several benchmark datasets, compared with SVM, using
G. Rätsch's parameter selection procedure (see text). As an indication, the best of the six
results presented in [3] are also reported.

Dataset         KPM             (selected D)   SVM             Best of 6
Banana          10.73 ± 0.42    15             11.53 ± 0.66    10.73 ± 0.43
Breast Cancer   26.51 ± 4.75    24             26.04 ± 4.74    24.77 ± 4.63
Diabetis        23.37 ± 1.92    11             23.53 ± 1.73    23.21 ± 1.63
Flare Solar     32.43 ± 1.85    6              32.43 ± 1.82    32.43 ± 1.82
German          23.59 ± 2.15    14             23.61 ± 2.07    23.61 ± 2.07
Heart           16.89 ± 3.53    10             15.95 ± 3.26    15.95 ± 3.26
Table 2: Test errors of the KPM on several benchmark datasets, compared with SVM, using
standard 5-fold cross-validation on each realization.

Dataset         KPM             SVM
Banana          11.14 ± 0.73    10.69 ± 0.67
Breast Cancer   26.55 ± 4.43    26.68 ± 5.23
Diabetis        24.14 ± 1.86    23.79 ± 2.01
Flare Solar     32.70 ± 1.97    32.62 ± 1.86
German          23.82 ± 2.23    23.79 ± 2.12
Heart           17.59 ± 3.30    16.23 ± 3.18
the same kernel parameters as those used for SVM, so as to work with exactly the same
geometry.
There is a subtle, but important point arising here. In the SVM performance reported by G.
Rätsch, the regularization parameter C was first determined by cross-validation on the first
5 realizations of each dataset; then the median of these values was taken as a fixed value
for the other realizations. This was done apparently for saving computation time, but this
might lead to an over-optimistic estimate of the performance since, in some sense, some
extraneous information is then available to the algorithm and the variation due to the choice
of C is reduced to almost zero. We first tried to mimic this methodology by applying it, in
our case, to the choice of D itself (the median of 5 D values obtained by cross-validation
on the first realizations was then used on the other realizations).
One might then argue that this way we are selecting a parameter by this method instead of
a meta-parameter for the SVM, so that the comparison is unfair. However, this distinction
being loose, this is a rather moot point. To avoid this kind of debate and obtain fair results, we
decided to re-run the SVM tests by selecting systematically the regularization parameter by
a 5-fold cross-validation on each training set, and for our method, apply the same procedure
to select D. Note that there is still extraneous information in the choice of the kernel
parameters, but at least it is the same for both algorithms.
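For completeness, the cross-validated choice of D could look like the sketch below, where kpm_fit is the hypothetical solver sketched in Section 3 and the kernel callable and all names are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import KFold

def select_D(X, y, kernel, D_grid, n_splits=5):
    """Hedged sketch of the 5-fold cross-validation used to pick D."""
    errs = {D: [] for D in D_grid}
    for tr, te in KFold(n_splits=n_splits, shuffle=True).split(X):
        K_tr = kernel(X[tr], X[tr])            # Gram matrix on the fold
        K_te = kernel(X[te], X[tr])            # cross-kernel for prediction
        for D in D_grid:
            alpha_tilde, b = kpm_fit(K_tr, y[tr], D)
            pred = np.sign(K_te @ alpha_tilde + b)
            errs[D].append(np.mean(pred != y[te]))
    return min(D_grid, key=lambda D: np.mean(errs[D]))
```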
Results relative to the first methodology are reported in Table 1, and those relative to the
second one are reported in Table 2. The globally worse performances exhibited in the second
table show that the first procedure may indeed be too optimistic. It should be mentioned
that the parameter C of the SVM was systematically sought on a grid of only 100 values,
ranging from 0 to three times the optimal value given in [3]. Hence those experimental
results are to be considered preliminary, and in no way should they be used to establish
a significant difference between the performances of the KPM and the SVM. Interestingly,
the graphic on the left in Figure 1 shows that our procedure is very different from the one
of [8]: when D is very large, our risk increases (leading to the existence of a minimum)
while the risk of [8] always decreases with D.
5 Conclusion and discussion
To summarize, one can see the KPM as an alternative to the regularization of the SVM: regularization using the RKHS norm can be replaced by finite dimensional projection. Moreover, this algorithm performs KPCA towards classification and thus offers a criterion to
decide what is the right order of expansion for the KPCA.
Dimensionality reduction can thus be used for classification but it is important to keep in
mind that it behaves like a regularizer. Hence, it is clearly useless to plug it into a classification algorithm that is already regularized: the effect of the dimensionality reduction may
be canceled as noted by [8].
Our experiments explicitly show the regularizing effect of KPCA: no other smoothness
control has been added in our algorithm and still it gives performance comparable to that
of the SVM, provided the dimension D is picked correctly. Here we only considered selection of D by cross-validation; other methods such as penalization will be studied in future
works. Moreover, with this algorithm, we obtain a D-dimensional representation of our
data which is optimal for the classification task. Thus the KPM can be seen as a de-noising
method that takes the labels into account.
This version of the KPM considers only one kernel and thus one vector space per dimension. A more advanced version of this algorithm would consider several kernels and thus
choose among a bigger family of spaces. This family then contains more than one space
per dimension and will allow us to directly compare the performance of different kernels on
a given task, thus improving the efficiency of the dimensionality reduction while taking the labels into account.
References
[1] A. Barron, L. Birgé, and P. Massart. Risk bounds for model selection via penalization.
Probab. Theory Relat. Fields, 113:301–413, 1999.
[2] C. Baker. The Numerical Treatment of Integral Equations. Oxford: Clarendon Press, 1977.
[3] http://ida.first.gmd.de/~raetsch/data/benchmarks.htm. Benchmark repository used in
several Boosting, KFD and SVM papers.
[4] G. Blanchard, O. Bousquet, and P. Massart. Statistical performance of support vector
machines. Manuscript, 2004.
[5] D.L. Donoho, R.C. Liu, and B. MacGibbon. Minimax risk over hyperrectangles, and
implications. Ann. Statist., 18:1416–1437, 1990.
[6] T. Evgeniou, M. Pontil, and T. Poggio. Regularization networks and support vector
machines. In A. J. Smola, P. L. Bartlett, B. Schölkopf, and D. Schuurmans, editors,
Advances in Large Margin Classifiers, pages 171–203, Cambridge, MA, 2000. MIT
Press.
[7] V. Koltchinskii. Asymptotics of spectral projections of some random matrices approximating integral operators. Progress in Probability, 43:191–227, 1998.
[8] B. Schölkopf, A. J. Smola, and K.-R. Müller. Nonlinear component analysis as a
kernel eigenvalue problem. Neural Computation, 10:1299–1319, 1998.
[9] A. J. Smola and B. Schölkopf. On a kernel-based method for pattern recognition,
regression, approximation and operator inversion. Algorithmica, 22:211–231, 1998.
[10] G. Wahba. Spline Models for Observational Data, volume 59 of CBMS-NSF Regional Conference Series in Applied Mathematics. Society for Industrial and Applied
Mathematics, Philadelphia, Pennsylvania, 1990.
Sparse Coding of Natural Images Using an
Overcomplete Set of Limited Capacity Units
Eizaburo Doi
Center for the Neural Basis of Cognition
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Michael S. Lewicki
Center for the Neural Basis of Cognition
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
It has been suggested that the primary goal of the sensory system is to
represent input in such a way as to reduce the high degree of redundancy. Given a noisy neural representation, however, solely reducing
redundancy is not desirable, since redundancy is the only clue to reduce
the effects of noise. Here we propose a model that best balances redundancy reduction and redundant representation. Like previous models, our
model accounts for the localized and oriented structure of simple cells,
but it also predicts a different organization for the population. With noisy,
limited-capacity units, the optimal representation becomes an overcomplete, multi-scale representation, which, compared to previous models,
is in closer agreement with physiological data. These results offer a new
perspective on the expansion of the number of neurons from retina to V1
and provide a theoretical model of incorporating useful redundancy into
efficient neural representations.
1 Introduction
Efficient coding theory posits that one of the primary goals of sensory coding is to eliminate
redundancy from raw sensory signals, ideally representing the input by a set of statistically
independent features [1]. Models for learning efficient codes, such as sparse coding [2] or
ICA [3], predict the localized, oriented, and band-pass characteristics of simple cells. In
this framework, units are assumed to be non-redundant and so the number of units should
be identical to the dimensionality of the data.
Redundancy, however, can be beneficial if it is used to compensate for inherent noise in
the system [4]. The models above assume that the system noise is low and negligible so
that redundancy in the representation is not necessary. This is equivalent to assuming that
the representational capacity of individual units is unlimited. Real neurons, however, have
limited capacity [5], and this should place constraints on how a neural population can best
encode a sensory signal. In fact, there are important characteristics of simple cells, such as
the multi-scale representation, that cannot be explained by efficient coding theory.
The aim of this study is to evaluate how the optimal representation changes when the system
is constrained by limited capacity units. We propose a model that best balances redundancy
reduction and redundant representation given the limited capacity units. In contrast to the
efficient coding models, it is possible to have a larger number of units than the intrinsic
dimensionality of the data. This further allows to introduce redundancy in the population,
enabling precise reconstruction using the imprecise representation of a single unit.
2 Model
Encoding
We assume that the encoding is a linear transform of the input x, followed by the additive
channel noise $n \sim \mathcal{N}(0, \sigma_n^2 I)$:
$$r = Wx + n \qquad (1)$$
$$r = u + n, \qquad (2)$$
where rows of W are referred to as the analysis vectors, r is the representation, and u is
the signal component of the representation. We will refer to u as coefficients because it is
a set of clean coefficients associated with the synthesis vectors in the decoding process, as
described below.
We define channel noise level as follows,
$$\text{(channel noise level)} = \frac{\sigma_n^2}{\sigma_t^2} \times 100\ [\%] \qquad (3)$$
where $\sigma_t^2$ is a constant target value of the coefficient variance. It is the inverse of the signal-to-noise ratio in the representation, and therefore we can control the information capacity
of a single unit by varying the channel noise variance. Note that in the previous models
[2, 3, 6] there is no channel noise; therefore r = u, where the signal-to-noise ratio of the
representation is infinite.
Decoding
The decoding process is assumed to be a linear transform of the representation,
$$\hat{x} = Ar, \qquad (4)$$
where the columns of A are referred to as the synthesis vectors¹, and $\hat{x}$ is the reconstruction
of the input. The reconstruction error e is then expressed as
$$e = x - \hat{x} \qquad (5)$$
$$\;\: = (I - AW)x - An. \qquad (6)$$
Note that no assumption on the reconstruction error is made, because eq. 4 is not a probabilistic data generative model, in contrast to the previous approaches [2, 6].
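To make the setup concrete, a minimal sketch of the encode/decode pipeline of eqs. (1)-(6) follows; the function name, the noise level argument, and the random draw are illustrative choices, not part of the paper.

```python
import numpy as np

def encode_decode(x, A, W, sigma_n, rng=np.random.default_rng()):
    """Minimal sketch of eqs. (1)-(6): linear encoding with additive
    channel noise, followed by linear decoding."""
    u = W @ x                                        # clean coefficients (eq. 2)
    r = u + sigma_n * rng.standard_normal(u.shape)   # noisy representation (eq. 1)
    x_hat = A @ r                                    # linear decoding (eq. 4)
    return x_hat, x - x_hat                          # reconstruction and error (eq. 5)
```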
Representation desiderata
We assume a two-fold goal for the representation. The first is to preserve input information
given noisy, limited-information-capacity units. The second is to make the representation
¹ In the noiseless and complete case, they are equivalent to the basis functions [2, 3]. In our
setting, however, they are in general no longer basis functions. To make this clear, we refer to
A and W as synthesis and analysis vectors.
Figure 1: Optimal codes for toy problems. Data (shown with small dots) is generated with
two i.i.d. Laplacians mixed via non-orthogonal basis functions (shown by gray bars). The
optimal synthesis vectors (top row) and analysis vectors (bottom row) are shown as black
bars. Plots of synthesis vectors are scaled for visibility. (a-c) shows the complete code with
0, 20, and 80% channel noise level. (d) shows the case of 80% channel noise using an 8x
overcomplete code. Reconstruction error is (a) 0.0%, (b) 13.6%, (c) 32.2%, (d) 6.8%.
as sparse as possible, which yields an efficient code. The cost function to be minimized is
therefore defined as follows:
$$C(A, W) = \text{(reconstruction error)} - \lambda_1\,\text{(sparseness)} + \lambda_2\,\text{(fixed variance)} \qquad (7)$$
$$= \langle \|e\|^2 \rangle - \lambda_1 \sum_{i=1}^{M} \langle \ln p(u_i) \rangle + \lambda_2 \sum_{i=1}^{M} \ln\!\left(\frac{\langle u_i^2 \rangle}{\sigma_t^2}\right), \qquad (8)$$
where $\langle \cdot \rangle$ represents an ensemble average over the samples, and M is the number of units.
The sparseness is measured by the log-likelihood of a sparse prior p, as in the previous
models [2, 3, 6]. The third, fixed-variance term penalizes the case in which the coefficient
variance of the i-th unit $\langle u_i^2 \rangle$ deviates from its target value $\sigma_t^2$. It serves to fix the signal-to-noise ratio in the representation, yielding a fixed information capacity. Without this
term, the coefficient variance could become trivially large so that the signal-to-noise ratio
is high, yielding smaller reconstruction error; or, the variance becomes small to satisfy only
the sparseness constraint, which is not desirable either.
Note that in order to introduce redundancy in the representation, we do not assume statistical independence of the coefficients. The second term in eq. 8 measures the sparseness of
coefficients individually but it does not impose their statistical independence. We illustrate
it with toy problems in Figure 1. If there is no channel noise, the optimal complete (1x) code
is identical to the ICA solution (a), since it gives the most sparse, non-Gaussian solution
with minimal error. As the channel noise increases (b and c), sparseness is compromised
for minimizing the reconstruction error by choosing correlated, redundant representation.
In an extreme case where the channel noise is high enough, the two units are almost completely redundant (c). It should be noted that in such a case two vectors represent the
direction of the first principal component of the data.
In addition to de-emphasizing sparseness, there is another way to introduce redundancy in
the representation. Since the goal of the representation is not the separation of independent
sources, we can set an arbitrarily large number of units in the representation. When the information capacity of a single unit is limited, the capacity of a population can be made large
by increasing the number of units. As shown in Figure 1c-d, the reconstruction error decreases as we increase the degree of overcompleteness. Note that the optimal overcomplete
code is not simply a duplication of the complete code.
Learning rule
The optimal code can be learned by gradient descent of the cost function (eq. 8) with
respect to A and W:
$$\Delta A \propto (I - AW)\, xx^T W^T - \sigma_n^2 A, \qquad (9)$$
$$\Delta W \propto A^T (I - AW)\, xx^T + \lambda_1 \frac{\partial \ln p(u)}{\partial u}\, x^T - \lambda_2\, \mathrm{diag}\!\left(\frac{\ln[\langle u^2 \rangle / \sigma_t^2]}{\langle u^2 \rangle}\right) W\, xx^T. \qquad (10)$$
In the limit of zero channel noise in the square case (e.g., Figure 1a) the solution is at
equilibrium when $W = A^{-1}$ (see eq. 9), where the learning rule becomes similar to
standard ICA (except for the third term in eq. 10). In all other cases, there is no reason to believe
that $W = A^{-1}$, if it exists, minimizes the cost function. This is why we need
to optimize A and W individually.
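A minimal batch implementation of one update could look like the sketch below. The Laplacian prior (so that $\partial \ln p(u)/\partial u = -\mathrm{sign}(u)$) and the learning rate are assumptions made for illustration, since the form of p is not fixed here.

```python
import numpy as np

def learning_step(A, W, X, sigma_n2, lam1, lam2, sigma_t2=1.0, lr=1e-3):
    """One batch gradient step for eqs. (9)-(10). X is (d, N) with one
    patch per column; A is (d, M) synthesis vectors, W is (M, d) analysis
    vectors. A Laplacian prior is assumed: d ln p(u)/du = -sign(u)."""
    N = X.shape[1]
    U = W @ X                                       # coefficients u = Wx
    Cxx = X @ X.T / N                               # empirical <x x^T>
    I = np.eye(A.shape[0])

    dA = (I - A @ W) @ Cxx @ W.T - sigma_n2 * A     # eq. (9)

    var_u = np.mean(U ** 2, axis=1)                 # <u_i^2> per unit
    sparse_term = (-np.sign(U)) @ X.T / N           # <(d ln p/du) x^T>
    var_term = np.diag(np.log(var_u / sigma_t2) / var_u) @ W @ Cxx
    dW = A.T @ (I - A @ W) @ Cxx + lam1 * sparse_term - lam2 * var_term  # eq. (10)
    return A + lr * dA, W + lr * dW
```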
3 Optimal representations for natural images
We examined optimal codes for natural image patches using the proposed model. The
training data is 8x8 pixel image patches, sampled from a data set of 62 natural images [7].
The data is not preprocessed except for the subtraction of DC components [8]. Accordingly,
the intrinsic dimensionality of the data is 63, and an N-times overcomplete code consists
of N×63 units. The training set is sequentially updated during the learning, and the order
is randomized to prevent any local structure in the sequence. A typical number of image
patches in a training is 5 ? 106 .
Here we first describe how the presence of channel noise changes the optimal code in the
complete case. Next, we examine the optimal code at different degree of overcompleteness
given a high channel noise level.
3.1 Optimal code at different channel noise levels
We varied the channel noise level as 10, 20, 40, and 80%. As shown in Figure 2, learned
synthesis and analysis vectors look somewhat similar to ICA (only 10 and 80% are shown
for clarity). The comparison to the receptive fields of simple cells should be made with the
analysis vectors [9, 10, 7]. They show localized and oriented structures and are well fitted
by the Gabor function, indicating the similarity to simple cells in V1. Now, an additional
characteristic to the Gabor-like structure is that the spatial-frequency tuning of the analysis
vectors shifts towards lower spatial-frequencies as the channel noise increases (Figure 2d).
The learned code is expected to be robust to the channel noise. The reconstruction error
with respect to the data variance turned out to be 6.5, 10.1, 15.7, and 23.8% for 10, 20, 40,
and 80% of channel noise level, respectively. The noise reduction is significant considering
the fact that any whitened representation including ICA should generate the reconstruction
error of exactly the same amount as the channel noise level². For the learned ICA code
shown in Figure 2a, the reconstruction error was 82.7% when 80% channel noise was
applied.
² Since the mean squared error is expressed as $\langle \|e\|^2 \rangle = \sigma_n^2 \cdot \mathrm{Tr}(AA^T) = \sigma_n^2 \cdot \mathrm{Tr}(\langle xx^T \rangle) = \sigma_n^2 \cdot \text{(data variance)}$, where W is the whitening filter matrix and $A\,(= W^{-1})$ holds the corresponding basis functions.
We used eq. (6) and $\langle xx^T \rangle = AW \langle xx^T \rangle W^T A^T = AA^T$.
[Figure 2: panels (a)-(c) show synthesis (top) and analysis (bottom) vectors; panel (d) plots count [#] vs. spatial frequency for ICA, 10%, and 80%.]
Figure 2: Optimal complete code at different channel noise levels. (a-c) Optimized synthesis
and analysis vectors. (a) ICA. (b) Proposed model at 10% channel noise level. (c) Proposed
model at 80% channel noise level. Here 40 vectors out of 63 are shown. (d) Distribution of
spatial-frequency tuning of the analysis vectors in conditions (a)-(c).
The robustness to channel noise can be explained by the shift of the representation towards lower spatial-frequencies. We analyzed the reconstruction error by projecting it to
the principal axes of the data. Figure 3a shows the error spectrum of the code for 80%
channel noise, along with the data spectrum (the percentage of the data variance along the
principal axes). Note that the data variance of natural images is mostly explained by the
first principal components, which correspond to lower spatial-frequencies. In the proposed
model, the ratio of the error to the data variance is relatively small around the first principal
components. This can be seen much more clearly in Figure 3b, where the reconstruction percentage at each principal component is replotted. The reconstruction is more precise for more
significant principal components (i.e., smaller index), and it drops down to zero for minor
components. For comparison, we analyzed the error for ICA code, where the synthesis and
analysis vectors are optimized without channel noise and its robustness to channel noise is
tested with 80% channel noise level. As shown in Figure 3, ICA reconstructs every component equally irrespective of their very different data variance3 , therefore the percentage
of reconstruction is flat. The proposed model can be robust to channel noise by primarily
representing the principal components.
Note that such a biased reconstruction depends on the channel noise level. In Figure 3b
we also show the reconstruction spectrum with 10% channel noise using the code for
10% channel noise level. Compared to the 80% case, the model comes to reconstruct the
data at relatively minor components as well. It means that the model can represent finer
information if the information capacity of a single unit is large enough. Such a shift of
representation is also demonstrated with the toy problems in Figure 1a-c.
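The error analysis of Figure 3 can be reproduced in closed form from eq. (6); the sketch below computes the per-component reconstruction percentage, with all names being illustrative assumptions.

```python
import numpy as np

def reconstruction_spectrum(X, A, W, sigma_n):
    """Sketch of the Figure 3 analysis: project the data and the error
    covariance (from eq. 6) onto the data's principal axes and return
    the percentage of variance reconstructed per component."""
    Cxx = X @ X.T / X.shape[1]
    d_var, E = np.linalg.eigh(Cxx)
    order = np.argsort(d_var)[::-1]
    d_var, E = d_var[order], E[:, order]
    P = np.eye(A.shape[0]) - A @ W
    Cee = P @ Cxx @ P.T + sigma_n ** 2 * (A @ A.T)   # error covariance
    e_var = np.diag(E.T @ Cee @ E)
    return 100.0 * (1.0 - e_var / d_var)             # percent reconstruction per PC
```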
3.2 Optimal code at different degrees of overcompleteness
Now we examine how the optimal representation changes with the different number of
available units. We fixed the channel noise level at 80% and vary the degree of overcompleteness as 1x, 2x, 4x, and 8x. Learned vectors for 8x are shown in Figure 4a, and those for
³ Since the error spectrum for a whitened representation is expressed as $\langle (E^T e)^2 \rangle = \sigma_n^2 \cdot \mathrm{Diag}(E^T \langle xx^T \rangle E) = \sigma_n^2 \cdot \mathrm{Diag}(D) = \sigma_n^2 \cdot \text{(data spectrum)}$, where $EDE^T = \langle xx^T \rangle$ is the eigenvalue decomposition of the data covariance matrix.
[Figure 3: panel (a) plots variance [%] vs. index of principal components for ICA, 80%, and DAT; panel (b) plots reconstruction [%] vs. index of principal components for ICA, 80%, 10%, and 8x.]
Figure 3: Error analysis. (a) Power spectrum of the data ("DAT") and the reconstruction
error with 80% channel noise. "80%" is the error of the 1x code for the 80% channel noise
level. "ICA" is the error of the ICA code. (b) Percentage of reconstruction at each principal
component. In addition to the conditions in (a), we also show the following (see text).
"10%": 1x code for 10% channel noise level. The error is measured with 10% channel
noise. "8x": 8x code for 80% channel noise level. The error is measured with 80% channel
noise.
1x are in Figure 2c. Compared to the 1x case, where the synthesis and analysis vectors look
uniform in shape, the 8x code shows more diversity. To be precise, as illustrated in Figure
4b, the spatial-frequency tuning of the analysis vectors becomes more broadly distributed
and covers a larger region as the degree of overcompleteness increases. Physiological data
at the central fovea shows that the spatial-frequency tuning of V1 simple cells spans three
[11] or two [12] octaves. Models for efficient coding, especially ICA which provides the
most efficient code, do not reproduce such a multi-scale representation; instead, the resulting analysis vectors tune only to the highest spatial-frequency (Figure 2a; [3, 9, 10, 7]).
It is important that the proposed model generates a broader tuning distribution under the
presence of the channel noise and with the high degree of overcompleteness.
An important property of the proposed model is that the reconstruction error decreases as
the degree of overcompleteness increases. The resulting error is 23.8, 15.5, 9.7, and 6.2%
for 1x, 2x, 4x, and 8x code. The noise analysis shows that the model comes to represent
minor components as the degree of overcompleteness increases (Figure 3b). There is an
interesting similarity between the error spectra of 8x code for 80% channel noise and 1x
code for 10% channel noise. It is suggested that the population of units can represent the
same amount and the same kind of information using N times larger number of units if the
information capacity of a single unit is decreased with N times larger channel noise level.
4 Discussion
A multi-scale representation is known to provide an approximately efficient representation,
although it is not optimal as there are known statistical dependencies between scales [13].
We conjecture these residual dependencies may be one reason why previous efficient coding models could not yield a broad multi-scale representation. In contrast, the proposed
model can introduce useful redundancies in the representation, which is consistent with the
emergence of a multi-scale representation. Although it can generate a broader distribution
of the spatial-frequency tuning, in these experiments, it covers only about one octave, not
two or three octaves as in the physiological data [11, 12]. This issue still remains to be
explained.
[Figure 4: panel (a) shows synthesis and analysis vectors for the 8x overcomplete code with 80% channel noise; panel (b) plots count [#, ×10²] vs. spatial frequency for 1x, 2x, 4x, and 8x.]
Figure 4: Optimal overcomplete code. (a) Optimized 8x overcomplete code for 80% channel noise level. Here only 176 out of 504 functions are shown. The functions are sorted
according to the spatial-frequency tuning of the analysis vectors. (b) Distribution of spatial-frequency tuning of the analysis vectors at different degrees of overcompleteness.
Another important characteristic of simple cells is the fact that more cells
are tuned to lower spatial frequencies [11, 12]. An explanation is that the high
data-variance components should be highly oversampled so that the reconstruction error is
minimized given the limited precision of a single unit [12]. As we described earlier, such
a biased representation for the high variance components is observed in our model (Figure
3b). However, the distribution of the spatial-frequency tuning of the analysis vectors does
not correspond to this trend; instead, it is bell-shaped (Figure 4b). This apparent inconsistency might be resolved by considering the synthesis vectors, because the reconstruction
error is determined by both synthesis and analysis vectors.
A related work is Atick & Redlich's model for retinal ganglion cells [14]. It also utilizes
redundancy in the representation but to compensate for sensory noise rather than channel
noise; therefore, the two models explain different phenomena. Another related work is
Olshausen & Field's sparse coding model for simple cells [2], but this again looks at the
effects of sensory noise (note that if the sensory noise is negligible this algorithm does not
learn a sparse representation, while the proposed model is appropriate for this condition; of
course such a condition might be unrealistic). Now, given a photopic environment where
the sensory noise can reasonably be regarded as small [14], it becomes important to examine how the constraint of noisy, limited-information-capacity units changes the
representation. It is reported that the information capacity is significantly decreased from
photoreceptors to spiking neurons [15], which supports our approach. In spite of its significance, to our knowledge the influence of channel noise on the representation had not been
explored.
5 Conclusion
We propose a model that both utilizes redundancy in the representation in order to compensate for the limited precision of a single unit and reduces unnecessary redundancy in order
to yield an efficient code. The noisy, overcomplete code for natural images generates a
distributed spatial-frequency tuning in addition to the Gabor-like analysis vectors, showing
a closer agreement with the physiological data than the previous efficient coding models.
The information capacity of a representation may be constrained either by the intrinsic
noise in a single unit or by the number of units. In either case, the proposed model can
adapt the parameters to primarily represent the high-variance, coarse information, yielding
a robust representation to channel noise. As the limitation is relaxed by decreasing the
channel noise level or by increasing the number of units, the model comes to represent
low-variance, fine information.
References
[1] H. B. Barlow. Possible principles underlying the transformation of sensory messages. In W. A.
Rosenblith, editor, Sensory Communication, pages 217–234. MIT Press, MA, 1961.
[2] B. A. Olshausen and D. J. Field. Sparse coding with an overcomplete basis set: A strategy
employed by V1? Vision Research, 37:3311–3325, 1997.
[3] A. J. Bell and T. J. Sejnowski. The independent components of natural scenes are edge filters.
Vision Research, 37:3327–3338, 1997.
[4] H. B. Barlow. Redundancy reduction revisited. Network: Comput. Neural Syst., 12:241–253,
2001.
[5] A. Borst and F. E. Theunissen. Information theory and neural coding. Nature Neuroscience,
2:947–957, 1999.
[6] M. S. Lewicki and B. A. Olshausen. Probabilistic framework for the adaptation and comparison
of image codes. J. Opt. Soc. Am. A, 16:1587–1601, 1999.
[7] E. Doi, T. Inui, T.-W. Lee, T. Wachtler, and T. J. Sejnowski. Spatiochromatic receptive field
properties derived from information-theoretic analyses of cone mosaic responses to natural
scenes. Neural Computation, 15:397–417, 2003.
[8] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. John Wiley & Sons,
NY, 2001.
[9] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images
compared with simple cells in primary visual cortex. Proc. R. Soc. Lond. B, 265:359–366,
1998.
[10] D. L. Ringach. Spatial structure and symmetry of simple-cell receptive fields in macaque primary visual cortex. Journal of Neurophysiology, 88:455–463, 2002.
[11] R. L. De Valois, D. G. Albrecht, and L. G. Thorell. Spatial frequency selectivity of cells in
macaque visual cortex. Vision Research, 22:545–559, 1982.
[12] C. H. Anderson and G. C. DeAngelis. Population codes and signal to noise ratios in primary
visual cortex. In Society for Neuroscience Abstract, page 822.3, 2004.
[13] E. P. Simoncelli. Modeling the joint statistics of images in the wavelet domain. In Proc. SPIE
44th Annual Meeting, pages 188–195, Denver, Colorado, 1999.
[14] J. J. Atick and A. N. Redlich. What does the retina know about natural scenes? Neural
Computation, 4:196–210, 1992.
[15] S. B. Laughlin and R. R. de Ruyter van Steveninck. The rate of information transfer at graded-potential synapses. Nature, 379:642–645, 1996.
1,741 | 2,582 | Chemosensory processing in a spiking model of the olfactory bulb: chemotopic convergence and center surround inhibition
Baranidharan Raman and Ricardo Gutierrez-Osuna
Department of Computer Science
Texas A&M University
College Station, TX 77840
{barani,rgutier}@cs.tamu.edu
Abstract
This paper presents a neuromorphic model of two olfactory signal-processing primitives: chemotopic convergence of olfactory
receptor neurons, and center on-off surround lateral inhibition in
the olfactory bulb. A self-organizing model of receptor
convergence onto glomeruli is used to generate a spatially
organized map, an olfactory image. This map serves as input to a
lattice of spiking neurons with lateral connections. The dynamics
of this recurrent network transforms the initial olfactory image into
a spatio-temporal pattern that evolves and stabilizes into odor- and
intensity-coding attractors. The model is validated using
experimental data from an array of temperature-modulated gas
sensors. Our results are consistent with recent neurobiological
findings on the antennal lobe of the honeybee and the locust.
1 Introduction
An artificial olfactory system comprises an array of cross-selective chemical
sensors followed by a pattern recognition engine. An elegant alternative for the
processing of sensor-array signals, normally performed with statistical pattern
recognition techniques [1], involves adopting solutions from the biological olfactory
system. The use of neuromorphic approaches provides an opportunity for
formulating new computational problems in machine olfaction, including mixture
segmentation, background suppression, olfactory habituation, and odor-memory
associations.
A biologically inspired approach to machine olfaction involves (1) identifying key
signal processing primitives in the olfactory pathway, (2) adapting these primitives
to account for the unique properties of chemical sensor signals, and (3) applying the
models to solving specific computational problems.
The biological olfactory pathway can be divided into three general stages: (i)
olfactory epithelium, where primary reception takes place, (ii) olfactory bulb (OB),
where the bulk of signal processing is performed, and (iii) olfactory cortex, where
odor associations are stored. A review of literature on olfactory signal processing
reveals six key primitives in the olfactory pathway that can be adapted for use in
machine olfaction. These primitives are: (a) chemical transduction into a
combinatorial code by a large population of olfactory receptor neurons (ORN), (b)
chemotopic convergence of ORN axons onto glomeruli (GL), (c) logarithmic
compression through lateral inhibition at the GL level by periglomerular
interneurons, (d) contrast enhancement through lateral inhibition of mitral (M)
projection neurons by granule interneurons, (e) storage and association of odor
memories in the piriform cortex, and (f) bulbar modulation through cortical
feedback [2, 3].
This article presents a model that captures the first three abovementioned
primitives: population coding, chemotopic convergence and contrast enhancement.
The model operates as follows. First, a large population of cross-selective pseudo-sensors is generated from an array of metal-oxide (MOS) gas sensors by means of
temperature modulation. Next, a self-organizing model of convergence is used to
cluster these pseudo-sensors according to their relative selectivity. This clustering
generates an initial spatial odor map at the GL layer. Finally, a lattice of spiking
neurons with center on-off surround lateral connections is used to transform the GL
map into identity- and intensity-specific attractors.
The model is validated using a database of temperature-modulated sensor patterns
from three analytes at three concentration levels. The model is shown to address the
first problem in biologically-inspired machine olfaction: intensity and identity
coding of a chemical stimulus in a manner consistent with neurobiology [4, 5].
2 Modeling chemotopic convergence
The projection of sensory signals onto the olfactory bulb is organized such that
ORNs expressing the same receptor gene converge onto one or a few GLs [3]. This
convergence transforms the initial combinatorial code into an organized spatial
pattern (i.e., an olfactory image). In addition, massive convergence improves the
signal to noise ratio by integrating signals from multiple receptor neurons [6].
When incorporating this principle into machine olfaction, a fundamental difference
between the artificial and biological counterparts must be overcome: the input
dimensionality at the receptor/sensor level. The biological olfactory system employs
a large population of ORNs (over 100 million in humans, replicated from 1,000
primary receptor types), whereas its artificial analogue uses a few chemical sensors
(commonly one replica of up to 32 different sensor types).
To bridge this gap, we employ a sensor excitation technique known as temperature
modulation [7]. MOS sensors are conventionally driven in an isothermal fashion by
maintaining a constant temperature. However, the selectivity of these devices is a
function of the operating temperature. Thus, capturing the sensor response at
multiple temperatures generates a wealth of additional information as compared to
the isothermal mode of operation. If the temperature is modulated slowly enough
(e.g., mHz), the behavior of the sensor at each point in the temperature cycle can
then be treated as a pseudo-sensor, and thus used to simulate a large population of
cross-selective ORNs (refer to Figure 1(a)).
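As a concrete illustration of this excitation scheme, the following minimal Python sketch (the function name and arguments are ours, chosen for illustration, not taken from the paper) slices a temperature-modulated response into one pseudo-sensor per phase point of the heater cycle:

import numpy as np

def pseudo_sensors(response, points_per_cycle):
    # response: 1-D array of sensor conductance sampled over many heater cycles.
    # points_per_cycle: samples per temperature cycle; each phase point is
    # treated as a distinct pseudo-sensor (a simulated ORN).
    n_cycles = len(response) // points_per_cycle
    return response[:n_cycles * points_per_cycle].reshape(n_cycles, points_per_cycle)

# With the settings used later in the paper (2.5 min cycle sampled at 10 Hz),
# one cycle spans 1500 samples, i.e. 1500 pseudo-sensors per physical sensor,
# so two sensors yield 3,000 pseudo-sensors.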
To model chemotopic convergence, these temperature-modulated pseudo-sensors
(referred to as ORNs in what follows) must be clustered according to their
selectivity [8]. As a first approximation, each ORN can be modeled by an affinity
vector [9] consisting of the responses across a set of C analytes:
\vec{K}_i = [K_i^1, K_i^2, \ldots, K_i^C]                                          (1)
where K_i^a is the response of the ith ORN to analyte a. The selectivity of this ORN
is then defined by the orientation of the affinity vector \vec{K}_i.
A close look at the OB also shows that neighboring GLs respond to similar odors
[10]. Therefore, we model the ORN-GL projection with a Kohonen self-organizing
map (SOM) [11]. In our model, the SOM is trained to model the distribution of
ORNs in chemical sensitivity space, defined by the affinity vector \vec{K}_i. Once the
training of the SOM is completed, each ORN is assigned to the closest SOM node (a
simulated GL) in affinity space, thereby forming a convergence map. The response
of each GL can then be computed as
G_j^a = \sigma\left( \sum_{i=1}^{N} W_{ij} \cdot ORN_i^a \right)                    (2)
where ORN_i^a is the response of pseudo-sensor i to analyte a, W_{ij} = 1 if pseudo-sensor i
converges to GL j and zero otherwise, and \sigma(\cdot) is a squashing sigmoidal function
that models saturation.
This convergence model works well under the assumption that the different sensory
inputs are reasonably uncorrelated. Unfortunately, most gas sensors are extremely
collinear. As a result, this convergence model degenerates into a few dominant GLs
that capture most of the sensory activity, and a large number of dormant GLs that do
not receive any projections. To address this issue, we employ a form of competition
known as conscience learning [12], which incorporates a habituation mechanism to
prevent certain SOM nodes from dominating the competition. In this scheme, the
fraction of times that a particular SOM node wins the competition is used as a bias
to favor non-winning nodes. This results in a spreading of the ORN projections to
neighboring units and, therefore, significantly reduces the number of dormant units.
We measure the performance of the convergence mapping with the entropy across
the lattice, H = -\sum_i P_i \log_2 P_i, where P_i is the fraction of ORNs that project to SOM
node i [13]. To compare Kohonen and conscience learning, we built convergence
mappings with 3,000 pseudo-sensors and 400 GL units (refer to section 4 for
details). The theoretical maximum of the entropy for this network, which
corresponds to a uniform distribution, is 8.6439. When trained with Kohonen's
algorithm, the entropy of the SOM is 7.3555. With conscience learning, the entropy
increases to 8.2280. Thus, conscience is an effective mechanism to improve the
spreading of ORN projections across the GL lattice.
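The convergence map and its entropy measure are straightforward to compute; the sketch below (illustrative Python, assuming the conscience SOM has already produced an index array mapping each pseudo-sensor to its GL) evaluates Eq. (2) and H. The entropy is taken in bits, so a uniform projection over 400 units gives the stated maximum log2(400) = 8.6439:

import numpy as np

def gl_responses(orn, assignment, n_gl):
    # orn: responses of the pseudo-sensors to one analyte, shape (n_orn,).
    # assignment: GL index each pseudo-sensor converges to, shape (n_orn,).
    g = np.array([orn[assignment == j].sum() for j in range(n_gl)])
    return np.tanh(g)  # tanh stands in for the squashing function sigma

def projection_entropy(assignment, n_gl):
    # H = -sum_i P_i log2 P_i over the fraction of ORNs projecting to node i.
    p = np.bincount(assignment, minlength=n_gl) / len(assignment)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())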
3 Modeling the olfactory bulb network
Mitral cells, which synapse ORNs at the GL level, transform the initial olfactory
image into a spatio-temporal code by means of lateral inhibition. Two roles have
been suggested for this lateral inhibition: (a) sharpening of the molecular tuning
range of individual M cells with respect to that of their corresponding ORNs [10],
and (b) global redistribution of activity, such that the bulb-wide representation of an
odorant, rather than the individual tuning ranges, becomes specific and concise over
time [3]. More recently, center on-off surround inhibitory connections have been
found in the OB [14]. These circuits have been suggested to perform pattern
normalization, noise reduction and contrast enhancement of the spatial patterns.
We model each M cell using a leaky integrate-and-fire spiking neuron [15]. The
input current I(t) and change in membrane potential u(t) of a neuron are given by:
I(t) = \frac{u(t)}{R} + C\,\frac{du}{dt}, \qquad
\tau\,\frac{du}{dt} = -u(t) + R \cdot I(t) \quad [\tau = RC]                    (3)
Each M cell receives current I_input from ORNs and current I_lateral from lateral
connections with other M cells:
I_{input}(j) = \sum_i W_{ij} \cdot ORN_i, \qquad
I_{lateral}(j, t) = \sum_k L_{kj} \cdot \alpha(k, t-1)                    (4)
where W_{ij} indicates the presence/absence of a synapse between ORN_i and M_j, as
determined by the chemotopic mapping, L_{kj} is the efficacy of the lateral connection
between M_k and M_j, and \alpha(k, t-1) is the post-synaptic current generated by a spike at M_k:
\alpha(k, t-1) = -g(k, t-1) \cdot [\,u(j, t-1)_+ - E_{syn}\,]                    (5)
g(k,t-1) is the conductance of the synapse between M_k and M_j at time t-1, u(j,t-1) is
the membrane potential of M_j at time t-1 and the + subscript indicates this value
becomes zero if negative, and E_{syn} is the reverse synaptic potential. The change in
conductance of post-synaptic membrane is:
\dot{g}(k, t) = \frac{dg(k, t)}{dt} = \frac{-g(k, t)}{\tau_{syn}} + z(k, t), \qquad
\dot{z}(k, t) = \frac{dz(k, t)}{dt} = \frac{-z(k, t)}{\tau_{syn}} + g_{norm} \cdot spk(k, t)                    (6)
where z(.) and g(.) are low pass filters of the form \exp(-t/\tau_{syn}) and t \cdot \exp(-t/\tau_{syn}),
respectively, \tau_{syn} is the synaptic time constant, g_{norm} is a normalization constant, and
spk(j,t) marks the occurrence of a spike in neuron j at time t:
spk(j, t) = \begin{cases} 1 & u(j, t) = V_{spike} \\ 0 & u(j, t) \neq V_{spike} \end{cases}                    (7)
Combining equations (3) and (4), the membrane potential can be expressed as:
\dot{u}(j, t) = \frac{du(j, t)}{dt} = \frac{-u(j, t)}{RC} + \frac{I_{lateral}(j, t)}{C} + \frac{I_{input}(j)}{C}                    (8)

u(j, t) = \begin{cases} u(j, t-1) + \dot{u}(j, t-1) \cdot dt & u(j, t) < V_{threshold} \\ V_{spike} & u(j, t) \geq V_{threshold} \end{cases}
When the membrane potential reaches V_{threshold}, a spike is generated, and the
membrane potential is reset to V_{rest}. Any further inputs to the neuron are ignored
during the subsequent refractory period.
Following [14], lateral interactions are modeled with a center on-off surround
matrix Lij. Each M cell makes excitatory synapses to nearby M cells (d<de), where
d is the Manhattan distance measured in the lattice, and inhibitory synapses with
distant M cells (de<d<di) through granule cells (implicit in our model). Excitatory
synapses are assigned uniform random weights between [0, 0.1]. Inhibitory
synapses are assigned negative weights in the same interval. Model parameters are
summarized in Table 1.
Table 1. Parameters of the OB spiking neuron lattice

    Parameter                                Value
    Peak synaptic conductance (G_peak)       0.01
    Capacitance (C)                          1 nF
    Resistance (R)                           10 MOhm
    Spike voltage (V_spike)                  70 mV
    Threshold voltage (V_threshold)          5 mV
    Synapse reverse potential (E_syn)        70 mV
    Excitatory distance (d_e)                d < sqrt(N)/6
    Synaptic time constant (tau_syn)         10 ms
    Total simulation time (t_tot)            500 ms
    Integration time step (dt)               1 ms
    Refractory period (t_ref)                3 ms
    Number of mitral cells (N)               400
    Normalization constant (g_norm)          0.0027
    Inhibitory distance (d_i)                sqrt(N)/6 < d < 2 sqrt(N)/6
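To make the dynamics of Eqs. (3)-(8) concrete, here is a minimal simulation sketch (illustrative Python, not the authors' code): the double low-pass synaptic kernel is collapsed to a single stage, currents are expressed directly in membrane-potential units with RC folded into tau_m, the defaults are read off Table 1, and the lateral matrix L is assumed to have been built with the center on-off surround weights described above:

import numpy as np

def simulate_ob(i_input, L, t_tot=500, dt=1.0, tau_m=10.0, tau_syn=10.0,
                v_thresh=5.0, v_spike=70.0, v_rest=0.0, t_ref=3):
    n = len(i_input)
    u = np.full(n, v_rest, dtype=float)   # membrane potentials
    g = np.zeros(n)                       # synaptic conductances
    refr = np.zeros(n, dtype=int)         # remaining refractory steps
    spike_trains = []
    for _ in range(int(t_tot / dt)):
        spk = (u >= v_thresh) & (refr == 0)
        u[spk] = v_spike                  # spike emission (cf. Eq. 7)
        refr[spk] = t_ref
        spike_trains.append(spk.copy())
        g += dt * (-g / tau_syn + spk)    # simplified synaptic filter (cf. Eq. 6)
        i_lat = L @ g                     # lateral current (cf. Eq. 4)
        active = refr == 0
        u[active] += dt * (-u[active] / tau_m + i_lat[active] + i_input[active])
        u[~active] = v_rest               # reset and hold during refractoriness
        refr[refr > 0] -= 1
    return np.array(spike_trains)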
4 Results
The proposed model is validated on an experimental dataset containing gas sensor
signals for three analytes: acetone (A), isopropyl alcohol (B) and ammonia (C), at
three different concentration levels per analyte. Two Figaro MOS sensors (TGS
2600, TGS 2620) were temperature modulated using a sinusoidal heater voltage (0-7
V; 2.5 min period; 10 Hz sampling frequency). The response of the two sensors to the
three analytes at the three concentration levels is shown in Figure 1(a). This
response was used to generate a population of 3,000 ORNs, which were then
mapped onto a GL layer with 400 units arranged as a 20x20 lattice.
Figure 1. (a) Temperature modulated response to the three analytes (A,B,C) at three
concentrations (A3: highest concentration of A), and (b) initial GL maps.
The sensor response to the highest concentration of each analyte was used to
generate the SOM convergence map. Figure 1(b) shows the initial odor map of the
three analytes following conscience learning of the SOM. These olfactory images
show that the identity of the stimulus is encoded by the spatial pattern across the
lattice, whereas the intensity is encoded by the overall amplitude of this pattern.
Analytes A and B, which induce similar responses on the MOS sensors, also lead to
very similar GL maps.
The GL maps are input to the lattice of spiking neurons for further processing. As a
result of the dynamics induced by the recurrent connections, these initial maps are
transformed into a spatio-temporal pattern. Figure 2 shows the projection of
membrane potential of the 400 M cells along their first three principal components.
Three trajectories are shown per analyte, which correspond to the sensor response to
the highest analyte concentration on three separate days of data collection. These
results show that the spatio-temporal pattern is robust to the inherent drift of
chemical sensors. The trajectories originate close to each other, but slowly migrate
and converge into unique odor-specific attractors. It is important to note that these
trajectories do not diverge indefinitely, but in fact settle into an attractor, as
illustrated by the insets in Figure 2.
Figure 2. Odor-specific attractors from experimental sensor data. Three trajectories
are shown per analyte, corresponding to the sensor response on three separate days.
These results show that the attractors are repeatable and robust to sensor drift.
To illustrate the coding of identity and intensity performed by the model, Figure 3
shows the trajectories of the three analytes at three concentrations. The OB network
activity evolves to settle into an attractor, where the identity of the stimulus is
encoded by the direction of the trajectory relative to the initial position, and the
intensity is encoded by the length along the trajectory. This emerging code is also
consistent with recent findings in neurobiology, as discussed next.
5 Discussion
A recent study of spatio-temporal activity in projection neurons (PN) of the
honeybee antennal lobe (analogous to M cells in mammalian OB) reveals evolution
and convergence of the network activity into odor-specific attractors [4]. Figure
4(a) shows the projection of the spatio-temporal response of the PNs along their
first three principal components. These trajectories begin close to each other, and
evolve over time to converge into odor specific regions. These experimental results
are consistent with the attractor patterns emerging from our model. Furthermore, an
experimental study of odor identity and intensity coding in the locust shows
hierarchical groupings of spatio-temporal PN activity according to odor identity,
followed by odor intensity [5]. Figure 4(b) illustrates this grouping in the activity
of 14 PNs when exposed to three odors at five concentrations. Again, these results
closely resemble the grouping of attractors in our model, shown in Figure 3.
Figure 3. Identity and intensity coding using dynamic attractors.
Previous studies by Pearce et al. [6] using a large population of optical micro-bead
chemical sensors have shown that massive convergence of sensory inputs can be
used to provide sensory hyperacuity by averaging out uncorrelated noise. In
contrast, the focus of our work is on the coding properties induced by chemotopic
convergence. Our model produces an initial spatial pattern or olfactory image,
whereby odor identity is coded by the spatial activity across the GL lattice, and odor
intensity is encoded by the amplitude of this pattern. Hence, the bulk of the
identity/intensity coding is performed by this initial convergence primitive.
Subsequent processing by a lattice of spiking neurons introduces time as an
additional coding dimension. The initial spatial maps are transformed into a spatiotemporal pattern by means of center on-off surround lateral connections. Excitatory
lateral connections allow the model to spread M cell activity, and are responsible for
moving the attractors away from their initial coordinates. In contrast, inhibitory
connections ensure that these trajectories eventually converge onto an attractor,
rather than diverge indefinitely. It is the interplay between excitatory and inhibitory
connections that allows the model to enhance the initial coding produced by the
chemotopic convergence mapping.
[Figure 4, panel (b) legend: octanol, hexanol, nonanol, isoamylacetate]
Figure 4. (a) Odor trajectories formed by spatio-temporal activity in the honeybee
AL (adapted from [4]). (b) Identity and intensity clustering of spatio-temporal
activity in the locust AL (adapted from [5]; arrows indicate the direction of
increasing concentration).
At present, our model employs a center on-off surround kernel that is constant
throughout the lattice. Further improvements can be achieved through adaptation of
these lateral connections by means of Hebbian and anti-Hebbian learning. These
extensions will allow us to investigate additional computational functions (e.g.,
pattern completion, orthogonalization, coding of mixtures) in the processing of
information from chemosensor arrays.
Acknowledgments
This material is based upon work supported by the National Science Foundation
under CAREER award 9984426/0229598. Takao Yamanaka, Alexandre Perera-Lluna and Agustin Gutierrez-Galvez are gratefully acknowledged for valuable
suggestions during the preparation of this manuscript.
References
[1] Gutierrez-Osuna, R. (2002) Pattern Analysis for Machine Olfaction: A Review. IEEE Sensors Journal 2(3): 189-202.
[2] Pearce, T. C. (1999) Computational parallels between the biological olfactory pathway and its analogue "The Electronic Nose": Part I. Biological olfaction. BioSystems 41: 43-67.
[3] Laurent, G. (1999) A Systems Perspective on Early Olfactory Coding. Science 286(22): 723-728.
[4] Galán, R. F., Sachse, S., Galizia, C. G., & Herz, A. V. (2003) Odor-driven attractor dynamics in the antennal lobe allow for simple and rapid olfactory pattern classification. Neural Computation 16(5): 999-1012.
[5] Stopfer, M., Jayaraman, V., & Laurent, G. (2003) Intensity versus Identity Coding in an Olfactory System. Neuron 39: 991-1004.
[6] Pearce, T. C., Verschure, P. F. M. J., White, J., & Kauer, J. S. (2001) Robust Stimulus Encoding in Olfactory Processing: Hyperacuity and Efficient Signal Transmission. In S. Wermter, J. Austin and D. Willshaw (Eds.), Emergent Neural Computation Architectures Based on Neuroscience, pp. 461-479. Springer-Verlag.
[7] Lee, A. P., & Reedy, B. J. (1999) Temperature modulation in semiconductor gas sensing. Sensors and Actuators B 60: 35-42.
[8] Vassar, R., Chao, S. K., Sitcheran, R., Nunez, J. M., Vosshall, L. B., & Axel, R. (1994) Topographic Organization of Sensory Projections to the Olfactory Bulb. Cell 79(6): 981-991.
[9] Gutierrez-Osuna, R. (2002) A Self-organizing Model of Chemotopic Convergence for Olfactory Coding. In Proceedings of the 2nd EMBS-BMES Conference, pp. 23-26. Texas.
[10] Mori, K., Nagao, H., & Yoshihara, Y. (1999) The Olfactory Bulb: Coding and Processing of Odor Molecule Information. Science 286: 711-715.
[11] Kohonen, T. (1982) Self-organized formation of topologically correct feature maps. Biological Cybernetics 43: 59-69.
[12] DeSieno, D. (1988) Adding a conscience to competitive learning. In Proceedings of International Conference on Neural Networks (ICNN), pp. 117-124. Piscataway, NJ.
[13] Laaksonen, J., Koskela, M., & Oja, E. (2003) Probability interpretation of distributions on SOM surfaces. In Proceedings of Workshop on Self-Organizing Maps. Hibikino, Japan.
[14] Aungst, J. L., et al. (2003) Center-surround inhibition among olfactory bulb glomeruli. Nature 426: 623-629.
[15] Gerstner, W., & Kistler, W. (2002) Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press.
1,742 | 2,583 | A Method for Inferring Label Sampling Mechanisms in Semi-Supervised Learning
Saharon Rosset
Data Analytics Research Group
IBM T.J. Watson Research Center
Yorktown Heights, NY 10598
[email protected]
Hui Zou
Department of Statistics
Stanford University
Stanford, CA 94305
[email protected]
Ji Zhu
Department of Statistics
University of Michigan
Ann Arbor, MI 48109
[email protected]
Trevor Hastie
Department of Statistics
Stanford University
Stanford, CA 94305
[email protected]
Abstract
We consider the situation in semi-supervised learning, where the "label
sampling" mechanism stochastically depends on the true response (as
well as potentially on the features). We suggest a method of moments
for estimating this stochastic dependence using the unlabeled data. This
is potentially useful for two distinct purposes: a. As an input to a supervised learning procedure which can be used to "de-bias" its results using
labeled data only and b. As a potentially interesting learning task in itself. We present several examples to illustrate the practical usefulness of
our method.
1 Introduction
In semi-supervised learning, we assume we have a sample (x_i, y_i, s_i), i = 1, ..., n, of i.i.d. draws
from a joint distribution on (X, Y, S), where:^1
• x_i \in R^p are p-vectors of features.
• y_i is a label, or response (y_i \in R for regression, y_i \in {0, 1} for 2-class classification).
• s_i \in {0, 1} is a "labeling indicator", that is y_i is observed if and only if s_i = 1,
  while x_i is observed for all i.
In this paper we consider the interesting case of semi-supervised learning, where the probability of observing the response depends on the data through the true response, as well as
^1 Our notation here differs somewhat from many semi-supervised learning papers, where the unlabeled part of the sample is separated from the labeled part and sometimes called "test set".
potentially through the features. Our goal is to model this unknown dependence:
l(x, y) = \Pr(S = 1 | x, y)                    (1)
Note that the dependence on y (which is unobserved when S = 0) prevents us from using
standard supervised modeling approaches to learn l. We show here that we can use the
whole data-set (labeled+unlabeled data) to obtain estimates of this probability distribution
within a parametric family of distributions, without needing to "impute" the unobserved
responses.^2
We believe this setup is of significant practical interest. Here are a couple of examples of
realistic situations:
1. The problem of learning from positive examples and unlabeled data is of significant
interest in document topic learning [4, 6, 8]. Consider a generalization of that problem,
where we observe a sample of positive and negative examples and unlabeled data, but we
believe that the positive and negative labels are supplied with different probabilities (in
the document learning example, positive examples are typically more likely to be labeled
than negative ones, which are much more abundant). These probabilities may also not be
uniform within each class, and depend on the features as well. Our methods allow us to
infer these labeling probabilities by utilizing the unlabeled data.
2. Consider a satisfaction survey, where clients of a company are requested to report their
level of satisfaction, but they can choose whether or not they do so. It is reasonable to
assume that their willingness to report their satisfaction depends on their actual satisfaction
level. Using our methods, we can infer the dependence of the reporting probability on
the actual satisfaction by utilizing the unlabeled data, i.e., the customers who declined to
respond.
Being able to infer the labeling mechanism is important for two distinct reasons. First,
it may be useful for "de-biasing" the results of supervised learning, which uses only the
labeled examples. The generic approach for achieving this is to use "inverse sampling"
weights (i.e. weigh labeled examples by 1/l(x, y)). The use of this for maximum likelihood estimation is well established in the literature as a method for correcting sampling
bias (of which semi-supervised learning is an example) [10]. We can also use the learned
mechanism to post-adjust the probabilities from a probability estimation method such as
logistic regression to attain "unbiasedness" and consistency [11]. Second, understanding
the labeling mechanism may be an interesting and useful learning task in itself. Consider,
for example, the "satisfaction survey" scenario described above. Understanding the way in
which satisfaction affects the customers? willingness to respond to the survey can be used
to get a better picture of overall satisfaction and to design better future surveys, regardless
of any supervised learning task which models the actual satisfaction.
Our approach is described in section 2, and is based on a method of moments. Observe
that for every function of the features g(x), we can get an unbiased estimate of its mean
as \frac{1}{n}\sum_{i=1}^{n} g(x_i). We show that if we know the underlying label sampling mechanism
l(x, y) we can get a different unbiased estimate of Eg(x), which uses only the labeled
examples, weighted by 1/l(x, y). We suggest inferring the unknown function l(x, y) by
requiring that we get identical estimates of Eg(x) using both approaches. We illustrate our
method's implementation on the California Housing data-set in section 3. In section 4 we
review related work in the machine learning and statistics literature, and we conclude with
a discussion in section 5.
^2 The importance of this is that we are required to hypothesize and fit a conditional probability
model for l(x, y) only, as opposed to the full probability model for (S, X, Y ) required for, say, EM.
2 The method
Let g(x) be any function of our features. We construct two different unbiased estimates of
Eg(x), one based on all n data points and one based on labeled examples only, assuming
P (S = 1|x, y) is known. Then, our method uses the equality in expectation of the two
estimates to infer P (S = 1|x, y). Specifically, consider g(x) and also:
f(x, y, s) = \begin{cases} \dfrac{g(x)}{P(S=1|x,y)} & \text{if } s = 1 \text{ (y observed)} \\ 0 & \text{otherwise} \end{cases}                    (2)
Then:
Theorem 1 Assume P(S = 1|x, y) > 0, \forall x, y. Then:
E(g(X)) = E(f(X, Y, S)).
Proof:
E(f(X, Y, S)) = \int_{X,Y,S} f(x, y, s)\, dP(x, y, s)
             = \int_X \int_Y g(x)\, \frac{P(S = 1|x, y)}{P(S = 1|x, y)}\, dP(y|x)\, dP(x)
             = \int_X g(x)\, dP(x) = Eg(X). \quad \Box
The empirical interpretation of this expectation result is:
\frac{1}{n} \sum_{i=1}^{n} f(x_i, y_i, s_i) = \frac{1}{n} \sum_{i: s_i = 1} \frac{g(x_i)}{P(S = 1|x_i, y_i)} \approx Eg(x) \approx \frac{1}{n} \sum_{i=1}^{n} g(x_i)                    (3)
which can be interpreted as relating an estimate of Eg(x) based on the complete data on
the right, to the one based on labeled data only, which requires weighting that is inversely
proportional to the probability of labeling, to compensate for ignoring the unlabeled data.
(3) is the fundamental result we use for our purpose, leading to a "method of moments"
approach to estimating l(x, y) = P (S = 1|x, y), as follows:
• Assume that l(x, y) = p_\theta(x, y), \theta \in R^k, belongs to a parametric family with k
  parameters.
• Select k different functions g_1(x), ..., g_k(x), and define f_1, ..., f_k correspondingly
  according to (2).
• Demand equality of the leftmost and rightmost sums in (3) for each of g_1, ..., g_k,
  and solve the resulting k equations to get an estimate of \theta (see the sketch below).
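A minimal sketch of this recipe in Python (illustrative only; p_theta and the g functions are user-supplied, and all names are ours, not from the paper):

import numpy as np

def moment_residuals(theta, X, y, s, g_list, p_theta):
    # h_k(theta): labeled-only estimate minus full-sample estimate of E g_k(x).
    # X: (n, p) features; y: labels (arbitrary where s == 0); s: 0/1 indicators;
    # g_list: k functions mapping rows of X to per-example values (vectorized);
    # p_theta(theta, x, y): parametric model for P(S = 1 | x, y).
    lab = s == 1
    w = 1.0 / p_theta(theta, X[lab], y[lab])   # inverse sampling weights
    return np.array([(g(X[lab]) * w).sum() - g(X).sum() for g in g_list])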
Many practical and theoretical considerations arise when we consider what "good" choices
of the representative functions g_1(x), ..., g_k(x) may be. Qualitatively we would like to
accomplish the standard desirable properties of inverse problems: uniqueness, stability and
robustness. We want the resulting equations to have a unique "correct" solution. We want
our functions to have low variance so the inaccuracy in (3) is minimal, and we want them
to be "different" from each other to get a stable solution in the k-dimensional space. It is of
course much more difficult to give concrete quantitative criteria for selecting the functions
in practical situations. What we can do in practice is evaluate how stable the results we get
are. We return to this topic in more detail in section 5.
A second set of considerations in selecting these functions is the computational one: can we
even solve the resulting inverse problems with a reasonable computational effort? In general, solving systems of more than one nonlinear equation is a very hard problem. We also
need to consider the possibility of non-unique solutions. These questions are sometimes
inter-related with the choice of gk (x).
Suppose we wish to solve a set of non-linear equations for \theta:
h_k(\theta) = \sum_{i: s_i = 1} \frac{g_k(x_i)}{p_\theta(x_i, y_i)} - \sum_i g_k(x_i) = 0, \qquad k = 1, \ldots, K                    (4)
The solution of (4) is similar to
\arg\min_\theta h(\theta) = \arg\min_\theta \sum_k h_k(\theta)^2                    (5)
Notice that every solution to (4) minimizes (5), but there may be local minima of (5) that
are not solutions to (4). Hence simply applying a Newton-Raphson method to (5) is not
a good idea: if we have a sufficiently good initial guess about the solution, the Newton-Raphson method converges quadratically fast; however, it can also fail to converge, if the
root does not exist nearby. In practice, we can combine the Newton-Raphson method with
a line search strategy that makes sure h(\theta) is reduced at each iteration (the Newton step is
always a descent direction of h(\theta)). While this method can still occasionally fail by landing
on a local minimum of h(\theta), this is quite rare in practice [1]. The remedy is usually to try
a new starting point. Other global algorithms based on the so called model-trust region
approach are also used in practice. These methods have a reputation for robustness even
when starting far from the desired zero or minimum [2].
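Under the assumptions of the previous sketch, the residuals can be handed to an off-the-shelf solver of this kind; scipy's fsolve wraps MINPACK's hybrid Powell method, a model-trust-region algorithm (theta0, X, y, s, g_list and p_theta are assumed defined as above):

from scipy.optimize import fsolve

theta_hat, info, ok, msg = fsolve(
    lambda th: moment_residuals(th, X, y, s, g_list, p_theta),
    x0=theta0, full_output=True)
if ok != 1:
    print("solver did not converge:", msg)  # usual remedy: retry from a new theta0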
In some cases we can employ simpler methods, since the equations we get can be manipulated algebraically to give more "friendly" formulations. We show two examples in the
next sub-section.
2.1 Examples of simplified calculations
We consider two situations where we can use algebra to simplify the solution of the equations our method gives. The first is the obvious application to two-class classification,
where the label sampling mechanism depends on the class label only. Our method then
reduces to the one suggested by [11]. The second is a more involved regression scenario,
with a logistic dependence between the sampling probability and the actual label.
First, consider a two-class classification scenario, where the sampling mechanism is independent of x:
P(S = 1|x, y) = \begin{cases} p_1 & \text{if } y = 1 \\ p_0 & \text{if } y = 0 \end{cases}
Then we need two functions of x to "de-bias" our classes. One natural choice is g(x) = 1,
which implies we are simply trying to invert the sampling probabilities. Assume we use
one of the features g(x) = xj as our second function. Plugging these into (3) we get that
to find p0 , p1 we should solve:
n = \frac{\#\{y_i = 1 \text{ observed}\}}{\hat{p}_1} + \frac{\#\{y_i = 0 \text{ observed}\}}{\hat{p}_0}, \qquad
\sum_i x_{ij} = \frac{\sum_{i: s_i = 1, y_i = 1} x_{ij}}{\hat{p}_1} + \frac{\sum_{i: s_i = 1, y_i = 0} x_{ij}}{\hat{p}_0}
which we can solve analytically to get:
\hat{p}_1 = \frac{r_1 n_0 - r_0 n_1}{r n_0 - r_0 n}, \qquad \hat{p}_0 = \frac{r_1 n_0 - r_0 n_1}{r_1 n - r n_1}
where n_k = \#\{y_i = k \text{ observed}\}, r_k = \sum_{i: s_i = 1, y_i = k} x_{ij}, k = 0, 1, and r = \sum_i x_{ij}.
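These closed-form estimates transcribe directly (illustrative Python; xj is one feature column and y is meaningful only where s == 1):

import numpy as np

def estimate_class_probs(xj, y, s):
    n, r = len(xj), xj.sum()
    lab1, lab0 = (s == 1) & (y == 1), (s == 1) & (y == 0)
    n1, n0 = lab1.sum(), lab0.sum()
    r1, r0 = xj[lab1].sum(), xj[lab0].sum()
    p1 = (r1 * n0 - r0 * n1) / (r * n0 - r0 * n)
    p0 = (r1 * n0 - r0 * n1) / (r1 * n - r * n1)
    return p1, p0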
As a second, more involved, example, consider a regression situation (like the satisfaction
survey mentioned in the introduction), where we assume the probability of observing the
response has a linear-logistic dependence on the actual response (again we assume for simplicity independence on x, although dependence on x poses no theoretical complications):
P(S = 1|x, y) = \frac{\exp(a + by)}{1 + \exp(a + by)} = \mathrm{logit}^{-1}(a + by)                    (6)
with a, b unknown parameters. Using the same two g functions as above gives us the
slightly less friendly set of equations:
n = \sum_{i: s_i = 1} \frac{1}{\mathrm{logit}^{-1}(\hat{a} + \hat{b} y_i)}, \qquad
\sum_i x_{ij} = \sum_{i: s_i = 1} \frac{x_{ij}}{\mathrm{logit}^{-1}(\hat{a} + \hat{b} y_i)}
which with some algebra we can re-write as:
0 = \sum_{i: s_i = 1} \exp(-\hat{b} y_i)(\bar{x}_{0j} - x_{ij})                    (7)

\exp(-\hat{a})\, m_0 = \sum_{i: s_i = 1} \exp(-\hat{b} y_i)                    (8)
where \bar{x}_{0j} is the empirical mean of the j-th feature over unlabeled examples and m_0 is the
number of unlabeled examples. We do not have an analytic solution for these equations.
However, the decomposition they offer allows us to solve them by searching first over b to
solve (7), then plugging the result into (8) to get an estimate of a. In the next section we
use this solution strategy on a real-data example.
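The search is one-dimensional, so it transcribes into a few lines (illustrative Python; the names are ours, and brentq assumes Eq. (7) changes sign on the search interval):

import numpy as np
from scipy.optimize import brentq

def estimate_logistic_sampling(xj, y, s, b_range=(1e-3, 3.0)):
    lab = s == 1
    x_bar_unl = xj[~lab].mean()      # mean of the feature over unlabeled examples
    m0 = (~lab).sum()                # number of unlabeled examples

    def eq7(b):                      # left-hand side of Eq. (7)
        return (np.exp(-b * y[lab]) * (x_bar_unl - xj[lab])).sum()

    b_hat = brentq(eq7, *b_range)    # 1-D root search over b
    a_hat = -np.log(np.exp(-b_hat * y[lab]).sum() / m0)   # from Eq. (8)
    return a_hat, b_hat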
3 Illustration on the California Housing data-set
To illustrate our method, we take a fully labeled regression data-set and hide some of the
labels based on a logistic transformation of the response, then examine the performance
of our method in recovering the sampling mechanism and improving resulting prediction
through de-biasing. We use the California Housing data-set [9], collected based on US
Census data. It contains 20640 observations about log( median house price) throughout
California regions. The eight features are: median income, housing median age, total
rooms, total bedrooms, population, households, latitude and longitude.
We use 3/4 of the data for modeling and leave 1/4 aside for evaluation. Of the training
data, we hide some of the labels stochastically, based on the "label sampling" model:
P(S = 1|y) = \mathrm{logit}^{-1}(1.5(y - \bar{y}) - 0.5)                    (9)
this scheme results in having 6027 labeled training examples, 9372 training examples with
the labels removed and 5241 test examples.
We use equations (7), (8) to estimate \hat{a}, \hat{b} based on each one of the 8 features. Figure
1 and Table 1 show the results of our analysis. In Figure 1 we display the value of
\sum_{i: s_i = 1} \exp(-\hat{b} y_i)(\bar{x}_{0j} - x_{ij}) for a range of possible values for b. We observe that all
features give us a zero crossing around the correct value of 1.5. In Table 1 we give details of the
8 models estimated by a search strategy as follows:
Figure 1: Value of RHS of (7) (vertical axis) vs value of b (horizontal axis) for the 8 different
features. The correct value is b = 1.5, and so we expect to observe "zero crossings" around that
value, which we indeed observe on all 8 graphs.
• Find \hat{b} by minimizing |\sum_{i: s_i = 1} \exp(-\hat{b} y_i)(\bar{x}_{0j} - x_{ij})| over the range b \in [0, 3].
• Find \hat{a} by plugging \hat{b} from above into (8).
The table also shows the results of using these estimates to "de-bias" the prediction model,
i.e. once we have \hat{a}, \hat{b} we use them to calculate \hat{P}(S = 1|y) and use the inverse of these
estimated probabilities as weights in a least squares analysis of the labeled data. The table
compares the predictive performance of the resulting models on the 1/4 evaluation set
(5241 observations) to that of the model built using labeled data only with no weighting
and that of the model built using the labeled data and the "correct" weighting based on
our knowledge of the true a, b. Most of the 8 features give reasonable estimates, and the
prediction models built using the resulting weighting schemes perform comparably to the
one built using the "correct" weights. They generally attain MSE about 20% smaller than
that of the non-weighted model built without regard to the label sampling mechanism.
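The de-biasing step itself reduces to inverse-probability-weighted least squares on the labeled data, as in this sketch (illustrative; a_hat and b_hat are the moment-equation estimates):

import numpy as np

def debias_fit(X, y, s, a_hat, b_hat):
    lab = s == 1
    p = 1.0 / (1.0 + np.exp(-(a_hat + b_hat * y[lab])))  # estimated P(S=1|y)
    w = 1.0 / p                                          # inverse sampling weights
    Xl = np.column_stack([np.ones(lab.sum()), X[lab]])   # add an intercept column
    WX = Xl * w[:, None]
    return np.linalg.solve(Xl.T @ WX, WX.T @ y[lab])     # solves (X'WX) beta = X'Wy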
The stability of the resulting estimates is related to the "reasonableness" of the selected
g(x) functions. To illustrate that, we also tried the function g(x) = x_3 \cdot x_4 \cdot x_5 / (x_1 \cdot x_2)
(still in combination with the identity function, so we can use (7,8)). The resulting estimates
were \hat{b} = 3.03, \hat{a} = 0.074. Clearly these numbers are way outside the reasonable range of
the results in Table 1. This is to be expected as this choice of g(x) gives a function with
very long tails. Thus, a few "outliers" dominate the two estimates of E(g(x)) in (3) which
we use to estimate a, b.
4 Related work
The surge of interest in semi-supervised learning in recent years has been mainly in the
context of text classification ([4, 6, 8] are several examples of many). There is also a
Table 1: Summary of estimates of sampling mechanism using each of 8 features

    Feature                  b      a        Prediction MSE
    median income            1.52   -0.519   0.1148
    housing median age       1.18   -0.559   0.1164
    total rooms              1.58   -0.508   0.1147
    total bedrooms           1.64   -0.497   0.1146
    population               1.7    -0.484   0.1146
    households               1.63   -0.499   0.1146
    latitude                 1.55   -0.514   0.1147
    longitude                1.33   -0.545   0.1155
    (no weighting)           N/A    N/A      0.1354
    (true sampling model)    1.5    -0.5     0.1148
wealth of statistical literature on missing data and biased sampling (e.g. [3, 7, 10]) where
methods have been developed that can be directly or indirectly applied to semi-supervised
learning. Here we briefly survey some of the interesting and popular approaches and relate
them to our method.
The EM approach to text classification is advocated by [8]. Some ad-hoc two-step variants
are surveyed by [6]. They consist of iterating between completing class labels and estimating the classification model. The main caveat of all these methods is that they ignore
the sampling mechanism, and thus implicitly assume it cancels out in the likelihood function, i.e., that the sampling is random and that l(x, y) is fixed. It is possible, in principle,
to remove this assumption, but that would significantly increase the complexity of the algorithms, as it would require specifying a likelihood model for the sampling mechanism
and adding its parameters to the estimation procedure. The methods described by [7] and
discussed below take this approach.
The use of re-weighted loss to account for unknown sampling mechanisms is suggested
by [4, 11]. Although they differ significantly in the details, both of these can account for
label-dependent sampling in two-class classification. They do not offer solutions for other
modeling tasks or for feature-dependent sampling, which our approach covers.
In the missing data literature, [7] (chapter 15) and references therein offer several methods for handling "nonignorable nonresponse". These are all based on assuming complete
probability models for (X, Y, S) and designing EM algorithms for the resulting problem.
An interesting example is the bivariate normal stochastic ensemble model, originally suggested by [3]. In our notation, they assume that there is an additional fully unobserved
"response" z_i, and that y_i is observed if and only if z_i > 0. They also assume that y_i, z_i
are bivariate normal, depending on the features xi , that is:
\begin{pmatrix} y_i \\ z_i \end{pmatrix} \sim N\!\left( \begin{pmatrix} x_i \beta_1 \\ x_i \beta_2 \end{pmatrix}, \begin{pmatrix} \sigma^2 & \rho\sigma^2 \\ \rho\sigma^2 & 1 \end{pmatrix} \right)
this leads to a complex, but manageable, EM algorithm for inferring the sampling mechanism and fitting the actual model at once. The main shortcoming of this approach, as
we see it, is in the need to specify a complete and realistic joint probability model engulfing both the sampling mechanism and the response function. This precludes completely
the use of non-probabilistic methods for the response model part (like trees or kernel methods), and seems to involve significant computational complications if we stray from normal
distributions.
5 Discussion
The method we suggest in this paper allows for the separate and unbiased estimation of
label-sampling mechanisms, even when they stochastically depend on the partially unobserved labels. We view this "de-coupling" of the sampling mechanism estimation from the
actual modeling task at hand as an important and potentially very useful tool, both because
it creates a new, interesting learning task and because the results of the sampling model can
be used to "de-bias" any black-box modeling tool for the supervised learning task through
inverse weighting (or sampling, if the chosen tool does not take weights).
Our method of moments suffers from the same problems all such methods (and inverse
problems in general) share, namely the uncertainty about the stability and validity of the
results. It is difficult to develop general theory for stable solutions to inverse problems.
What we can do in practice is attempt to validate the estimates we get. We have already
seen one approach for doing this in section 3, where we used multiple choices for g(x)
and compared the resulting estimates of the parameters determining l(x, y). Even if we
had not known the true values of a and b, the fact that we got similar estimates using
different features would reassure us that these estimates were reliable, and give us an idea
of their uncertainty. A second approach for evaluating the resulting estimates could be to
use bootstrap sampling, which can be used to calculate bootstrap confidence intervals of
the parameter estimates.
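Such a check could look like the sketch below (illustrative; it reuses estimate_logistic_sampling from the earlier sketch, and a resample that leaves Eq. (7) without a sign change on the interval would need guarding):

import numpy as np

def bootstrap_ci(xj, y, s, n_boot=200, level=0.95):
    rng = np.random.default_rng(0)
    n, est = len(xj), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)                  # resample all data
        est.append(estimate_logistic_sampling(xj[idx], y[idx], s[idx]))
    q = (1 - level) / 2
    return np.quantile(np.array(est), [q, 1 - q], axis=0)  # rows: lower, upper bounds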
The computational issues also need to be tackled if our method is to be applicable for large
scale problems with complex sampling mechanisms. We have suggested a general methodology in section 2, and some ad-hoc solutions for special cases in section 2.1. However
we feel that a lot more can be done to develop efficient and widely applicable methods for
solving the moment equations.
Acknowledgments
We thank John Langford and Tong Zhang for useful discussions.
References
[1] Acton, F. (1990). Numerical Methods That Work. Washington: Math. Assoc. of America.
[2] Dennis, J. & Schnabel, R. (1983). Numerical Methods for Unconstrained Optimization and Nonlinear Equations. New Jersey: Prentice-Hall.
[3] Heckman, J.J. (1976). The common structure of statistical models for truncation, sample selection and limited dependent variables, and a simple estimator for such models. Annals of Economic and Social Measurement 5:475-492.
[4] Lee, W.S. & Liu, B. (2003). Learning with Positive and Unlabeled Examples Using Weighted Logistic Regression. ICML-03.
[5] Lin, Y., Lee, Y. & Wahba, G. (2000). Support vector machines for classification in nonstandard situations. Machine Learning, 46:191-202.
[6] Liu, B., Dai, Y., Li, X., Lee, W.S. & Yu, P. (2003). Building Text Classifiers Using Positive and Unlabeled Examples. Proceedings ICDM-03.
[7] Little, R. & Rubin, D. (2002). Statistical Analysis with Missing Data, 2nd Ed. Wiley & Sons.
[8] Nigam, K., McCallum, A., Thrun, S. & Mitchell, T. (2000). Text Classification from Labeled and Unlabeled Documents using EM. Machine Learning 39(2/3):103-134.
[9] Pace, R.K. & Barry, R. (1997). Sparse Spatial Autoregressions. Stat. & Prob. Let., 33:291-297.
[10] Vardi, Y. (1985). Empirical Distributions in Selection Bias Models. Annals of Statistics, 13.
[11] Zou, H., Zhu, J. & Hastie, T. (2004). Automatic Bayes Carpentary in Semi-Supervised Classification. Unpublished.
1,743 | 2,584 | Worst-Case Analysis of Selective Sampling for
Linear-Threshold Algorithms*
Nicolò Cesa-Bianchi
DSI, University of Milan
[email protected]
Claudio Gentile
Università dell'Insubria
[email protected]
Luca Zaniboni
DTI, University of Milan
[email protected]
Abstract
We provide a worst-case analysis of selective sampling algorithms for
learning linear threshold functions. The algorithms considered in this
paper are Perceptron-like algorithms, i.e., algorithms which can be efficiently run in any reproducing kernel Hilbert space. Our algorithms exploit a simple margin-based randomized rule to decide whether to query
the current label. We obtain selective sampling algorithms achieving on
average the same bounds as those proven for their deterministic counterparts, but using much fewer labels. We complement our theoretical
findings with an empirical comparison on two text categorization tasks.
The outcome of these experiments is largely predicted by our theoretical results: Our selective sampling algorithms tend to perform as good
as the algorithms receiving the true label after each classification, while
observing in practice substantially fewer labels.
1 Introduction
In this paper, we consider learning binary classification tasks with partially labelled data
via selective sampling. A selective sampling algorithm (e.g., [3, 12, 7] and references
therein) is an on-line learning algorithm that receives a sequence of unlabelled instances,
and decides whether or not to query the label of the current instance based on instances
and labels observed so far. The idea is to let the algorithm determine which labels are most
useful to its inference mechanism, so that redundant examples can be discarded on the fly
and labels can be saved.
The overall goal of selective sampling is to fit real-world scenarios where labels are scarce
or expensive. As a by now classical example, in a web-searching task, collecting web pages
is a fairly automated process, but assigning them a label (a set of topics) often requires time-consuming and costly human expertise. In these cases, it is clearly important to devise
learning algorithms having the ability to exploit the label information as much as possible.
Furthermore, when we consider kernel-based algorithms [23, 9, 21], saving labels directly
implies saving support vectors in the currently built hypothesis, which, in turn, implies
saving running time in both training and test phases.
Many algorithms have been proposed in the literature to cope with the broad task of learning
with partially labelled data, working under both probabilistic and worst-case assumptions,
for either on-line or batch settings. These range from active learning algorithms [8, 22],
* The authors gratefully acknowledge partial support by the PASCAL Network of Excellence under EC grant no. 506778. This publication only reflects the authors' views.
to the query-by-committee algorithm [12], to the adversarial "apple tasting" and label-efficient algorithms investigated in [16] and [17, 6], respectively. In this paper we present
a worst-case analysis of two Perceptron-like selective sampling algorithms. Our analysis
relies on and contributes to a well-established way of studying linear-threshold algorithms
within the mistake bound model of on-line learning (e.g., [18, 15, 11, 13, 14, 5]). We
show how to turn the standard versions of the (first-order) Perceptron algorithm [20] and
the second-order Perceptron algorithm [5] into selective sampling algorithms exploiting a
randomized margin-based criterion (inspired by [6]) to select labels, while preserving in
expectation the same mistake bounds.
In a sense, this line of research complements an earlier work on selective sampling [7],
where a second-order kind of algorithm was analyzed under precise stochastic assumptions
about the way data are generated. This is exactly what we face in this paper: we avoid
any assumption whatsoever on the data-generating process, but we are still able to prove
meaningful statements about the label efficiency features of our algorithms.
In order to give some empirical evidence for our analysis, we made some experiments
on two medium-size text categorization tasks. These experiments confirm our theoretical
results, and show the effectiveness of our margin-based label selection rule.
2 Preliminaries, notation
An example is a pair $(x, y)$, where $x \in \mathbb{R}^n$ is an instance vector and $y \in \{-1, +1\}$ is the associated binary label. A training set $S$ is any finite sequence of examples $S = (x_1, y_1), \ldots, (x_T, y_T) \in (\mathbb{R}^n \times \{-1, +1\})^T$. We say that $S$ is linearly separable if there exists a vector $u \in \mathbb{R}^n$ such that $y_t u^\top x_t > 0$ for $t = 1, \ldots, T$.
We consider the following selective sampling variant of a standard on-line learning model (e.g., [18, 24, 19, 15] and references therein). This variant has been investigated in [6] for a version of Littlestone's Winnow algorithm [18, 15]. Learning proceeds on-line in a sequence of trials. In the generic trial $t$ the algorithm receives instance $x_t$ from the environment, outputs a prediction $\hat{y}_t \in \{-1, +1\}$ about the label $y_t$ associated with $x_t$, and decides whether or not to query the label $y_t$. No matter what the algorithm decides, we say that the algorithm has made a prediction mistake if $\hat{y}_t \neq y_t$. We measure the performance of the algorithm by the total number of mistakes it makes on $S$ (including the trials where the true label remains hidden). Given a comparison class of predictors, the goal of the algorithm is to bound the amount by which this total number of mistakes differs, on an arbitrary sequence $S$, from some measure of the performance of the best predictor in hindsight within the comparison class. Since we are dealing with (zero-threshold) linear-threshold algorithms, it is natural to assume the comparison class to be the set of all (zero-threshold) linear-threshold predictors, i.e., all (possibly normalized) vectors $u \in \mathbb{R}^n$. Given a margin value $\gamma > 0$, we measure the performance of $u$ on $S$ by its cumulative hinge loss¹ [11, 13] $\sum_{t=1}^T D_\gamma(u; (x_t, y_t))$, where $D_\gamma(u; (x_t, y_t)) = \max\{0, \gamma - y_t u^\top x_t\}$.
Broadly speaking, the goal of the selective sampling algorithm is to achieve the best bound
on the number of mistakes with as few queried labels as possible. As in [6], our algorithms
exploit a margin-based randomized rule to decide which labels to query. Thus, our mistake
bounds are actually worst-case over the training sequence and average-case over the internal randomization of the algorithms. All expectations occurring in this paper are w.r.t. this
randomization.
3 The algorithms and their analysis
As a simple example, we start by turning the classical Perceptron algorithm [20] into a worst-case selective sampling algorithm; the algorithm is described in Figure 1.
¹ The cumulative hinge loss measures to what extent hyperplane $u$ separates $S$ at margin $\gamma$. This is also called the soft margin in the SVM literature [23, 9, 21].
ALGORITHM: Selective sampling Perceptron algorithm
Parameter: $b > 0$.
Initialization: $v_0 = 0$; $k = 1$.
For $t = 1, 2, \ldots$ do:
1. Get instance vector $x_t \in \mathbb{R}^n$ and set $r_t = v_{k-1}^\top \bar{x}_t$, with $\bar{x}_t = x_t / \|x_t\|$;
2. predict with $\hat{y}_t = \mathrm{SGN}(r_t) \in \{-1, +1\}$;
3. draw a Bernoulli random variable $Z_t \in \{0, 1\}$ of parameter $\frac{b}{b + |r_t|}$;
4. if $Z_t = 1$ then:
   (a) ask for label $y_t \in \{-1, +1\}$,
   (b) if $\hat{y}_t \neq y_t$ then update as follows: $v_k = v_{k-1} + y_t \bar{x}_t$, $k \leftarrow k + 1$.
Figure 1: The selective sampling (first-order) Perceptron algorithm.
The algorithm has a real parameter $b > 0$ which might be viewed as a noise parameter, ruling the extent to which a linear threshold model fits the data at hand. The algorithm maintains a vector $v \in \mathbb{R}^n$ (whose initial value is zero). In each trial $t$ the algorithm observes an instance vector $x_t \in \mathbb{R}^n$ and predicts the binary label $y_t$ through the sign of the margin value $r_t = v_{k-1}^\top \bar{x}_t$. Then the algorithm decides whether to ask for the label $y_t$ through a simple randomized rule: a coin with bias $b/(b + |r_t|)$ is flipped; if the coin turns up heads ($Z_t = 1$ in Figure 1) then the label $y_t$ is revealed. Moreover, on a prediction mistake ($\hat{y}_t \neq y_t$) the algorithm updates vector $v_k$ according to the usual Perceptron additive rule. On the other hand, if either the coin turns up tails or $\hat{y}_t = y_t$, no update takes place. Notice that $k$ is incremented only when an update occurs. Thus, at the end of trial $t$, subscript $k$ counts the number of updates made so far (plus one). In the following theorem we prove that our selective sampling version of the Perceptron algorithm can achieve, in expectation, the same mistake bound as the standard Perceptron's using fewer labels. See Remark 1 for a discussion.
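As a concrete illustration, the following is a minimal NumPy sketch of the Figure 1 algorithm. It is not taken from the paper: the training-loop interface, the label-oracle callable, and all names are our own assumptions, but the prediction, the randomized query rule, and the mistake-driven update follow the pseudocode above.

```python
import numpy as np

def selective_perceptron(X, label_oracle, b=1.0, rng=None):
    """First-order selective sampling Perceptron (sketch of Figure 1).

    X            : (T, n) array of instance vectors.
    label_oracle : callable t -> y_t in {-1, +1}, invoked only when Z_t = 1.
    b            : parameter b > 0 of the query rule.
    Returns the final weight vector and the indices of the queried trials.
    """
    rng = np.random.default_rng() if rng is None else rng
    v = np.zeros(X.shape[1])
    queried = []
    for t, x in enumerate(X):
        x_bar = x / np.linalg.norm(x)        # normalized instance
        r = v @ x_bar                        # margin of current hypothesis
        y_hat = 1 if r >= 0 else -1          # prediction
        if rng.random() < b / (b + abs(r)):  # Bernoulli Z_t with bias b/(b+|r_t|)
            queried.append(t)
            y = label_oracle(t)              # ask for the true label
            if y_hat != y:                   # update only on a queried mistake
                v = v + y * x_bar
    return v, queried
```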
Theorem 1 Let $S = ((x_1, y_1), (x_2, y_2), \ldots, (x_T, y_T)) \in (\mathbb{R}^n \times \{-1, +1\})^T$ be any sequence of examples and $U_T$ be the (random) set of update trials for the algorithm in Figure 1 (i.e., the set of trials $t \le T$ such that $\hat{y}_t \neq y_t$ and $Z_t = 1$). Then the expected number of mistakes made by the algorithm in Figure 1 is upper bounded by
$$\inf_{\gamma > 0} \inf_{u \in \mathbb{R}^n} \left( \frac{2b+1}{2b}\, E\!\left[ \sum_{t \in U_T} \tfrac{1}{\gamma} D_\gamma(u; (\bar{x}_t, y_t)) \right] + \frac{(2b+1)^2}{8b} \cdot \frac{\|u\|^2}{\gamma^2} \right).$$
The expected number of labels queried by the algorithm is equal to $\sum_{t=1}^T E\!\left[ \frac{b}{b + |r_t|} \right]$.
Proof. Let $M_t$ be the Bernoulli variable which is one iff $\hat{y}_t \neq y_t$ and denote by $k(t)$ the value of the update counter $k$ in trial $t$ just before the update $k \leftarrow k + 1$. Our goal is then to bound $E[\sum_{t=1}^T M_t]$ from above. Consider the case when trial $t$ is such that $M_t Z_t = 1$. Then one can verify by direct inspection that choosing $r_t = v_{k(t-1)}^\top \bar{x}_t$ (as in Figure 1) yields
$$y_t u^\top \bar{x}_t - y_t r_t = \tfrac{1}{2}\|u - v_{k(t-1)}\|^2 - \tfrac{1}{2}\|u - v_{k(t)}\|^2 + \tfrac{1}{2}\|v_{k(t-1)} - v_{k(t)}\|^2,$$
holding for any $u \in \mathbb{R}^n$. On the other hand, if trial $t$ is such that $M_t Z_t = 0$ we have $v_{k(t-1)} = v_{k(t)}$. Hence we conclude that the equality
$$M_t Z_t \left( y_t u^\top \bar{x}_t - y_t r_t \right) = \tfrac{1}{2}\|u - v_{k(t-1)}\|^2 - \tfrac{1}{2}\|u - v_{k(t)}\|^2 + \tfrac{1}{2}\|v_{k(t-1)} - v_{k(t)}\|^2$$
actually holds for all trials $t$. We sum over $t = 1, \ldots, T$ while observing that $M_t Z_t = 1$ implies both $\|v_{k(t-1)} - v_{k(t)}\| = 1$ and $y_t r_t \le 0$. Recalling that $v_{k(0)} = 0$ and rearranging we obtain
$$\sum_{t=1}^T M_t Z_t \left( y_t u^\top \bar{x}_t + |r_t| - \tfrac{1}{2} \right) \le \tfrac{1}{2}\|u\|^2, \qquad \forall u \in \mathbb{R}^n. \tag{1}$$
Now, since the previous inequality holds for any comparison vector $u \in \mathbb{R}^n$, we stretch $u$ to $\frac{b + 1/2}{\gamma} u$, being $\gamma > 0$ a free parameter. Then, by the very definition of $D_\gamma(u; (\bar{x}_t, y_t))$, $\frac{b + 1/2}{\gamma} y_t u^\top \bar{x}_t \ge \frac{b + 1/2}{\gamma} \left( \gamma - D_\gamma(u; (\bar{x}_t, y_t)) \right)$ for all $\gamma > 0$. Plugging into (1) and rearranging,
$$\sum_{t=1}^T M_t Z_t (b + |r_t|) \le \left( b + \tfrac{1}{2} \right) \sum_{t \in U_T} \tfrac{1}{\gamma} D_\gamma(u; (\bar{x}_t, y_t)) + \frac{(2b+1)^2}{8\gamma^2}\|u\|^2. \tag{2}$$
ALGORITHM: Selective sampling second-order Perceptron algorithm
Parameter: $b > 0$.
Initialization: $A_0 = I$; $v_0 = 0$; $k = 1$.
For $t = 1, 2, \ldots$ do:
1. Get $x_t \in \mathbb{R}^n$ and set $r_t = v_{k-1}^\top (A_{k-1} + \bar{x}_t \bar{x}_t^\top)^{-1} \bar{x}_t$, with $\bar{x}_t = x_t / \|x_t\|$;
2. predict with $\hat{y}_t = \mathrm{SGN}(r_t) \in \{-1, +1\}$;
3. draw a Bernoulli random variable $Z_t \in \{0, 1\}$ of parameter
$$\frac{b}{b + |r_t| + \tfrac{1}{2} r_t^2 \left( 1 + \bar{x}_t^\top A_{k-1}^{-1} \bar{x}_t \right)}; \tag{3}$$
4. if $Z_t = 1$ then:
   (a) ask for label $y_t \in \{-1, +1\}$,
   (b) if $\hat{y}_t \neq y_t$ then update as follows: $v_k = v_{k-1} + y_t \bar{x}_t$, $A_k = A_{k-1} + \bar{x}_t \bar{x}_t^\top$, $k \leftarrow k + 1$.
Figure 2: The selective sampling second-order Perceptron algorithm.
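A minimal NumPy sketch of the Figure 2 algorithm follows; again the loop interface and names are our own assumptions. For clarity we solve a linear system at every trial, which costs $O(n^3)$; an efficient implementation would instead maintain $A^{-1}$ with rank-one (Sherman-Morrison) updates, a design choice we note as our own rather than anything prescribed by the paper.

```python
import numpy as np

def selective_second_order_perceptron(X, label_oracle, b=1.0, rng=None):
    """Second-order selective sampling Perceptron (sketch of Figure 2)."""
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[1]
    A = np.eye(n)                                    # A_0 = I
    v = np.zeros(n)
    queried = []
    for t, x in enumerate(X):
        x_bar = x / np.linalg.norm(x)
        r = v @ np.linalg.solve(A + np.outer(x_bar, x_bar), x_bar)
        y_hat = 1 if r >= 0 else -1
        q = x_bar @ np.linalg.solve(A, x_bar)        # x̄ᵗ A_{k-1}⁻¹ x̄
        p = b / (b + abs(r) + 0.5 * r * r * (1.0 + q))   # eq. (3)
        if rng.random() < p:                         # Bernoulli Z_t
            queried.append(t)
            y = label_oracle(t)
            if y_hat != y:                           # queried mistake: update
                v = v + y * x_bar
                A = A + np.outer(x_bar, x_bar)
    return v, A, queried
```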
From Figure 1 we see that $E[Z_t \mid Z_1, \ldots, Z_{t-1}] = \frac{b}{b + |r_t|}$. Therefore, taking expectations on both sides of (2),
$$E\!\left[ \sum_{t=1}^T M_t Z_t (b + |r_t|) \right] = \sum_{t=1}^T E\Big[ E\big[ M_t Z_t (b + |r_t|) \mid Z_1, \ldots, Z_{t-1} \big] \Big] = \sum_{t=1}^T E\Big[ M_t (b + |r_t|)\, E[Z_t \mid Z_1, \ldots, Z_{t-1}] \Big] = \sum_{t=1}^T E[M_t]\, b.$$
Replacing back into (2) and dividing by $b$ proves the claimed bound on $E[\sum_{t=1}^T M_t]$. The value of $E[\sum_{t=1}^T Z_t]$ (the expected number of queried labels) trivially follows from $E[\sum_{t=1}^T Z_t] = \sum_{t=1}^T E\big[ E[Z_t \mid Z_1, \ldots, Z_{t-1}] \big]$. $\square$
We now consider the selective sampling version of the second-order Perceptron algorithm, as defined in [5]. See Figure 2. Unlike the first-order algorithm, the second-order algorithm maintains a vector $v \in \mathbb{R}^n$ and a matrix $A \in \mathbb{R}^{n \times n}$ (whose initial value is the identity matrix $I$). The algorithm predicts through the sign of the margin quantity $r_t = v_{k-1}^\top (A_{k-1} + \bar{x}_t \bar{x}_t^\top)^{-1} \bar{x}_t$, and decides whether to ask for the label $y_t$ through a randomized rule similar to the one in Figure 1. The analysis follows the same pattern as the proof of Theorem 1. A key step in this analysis is a one-trial progress equation developed in [10] for a regression framework. See also [4]. Again, the comparison between the second-order Perceptron's bound and the one contained in Theorem 2 reveals that the selective sampling algorithm can achieve, in expectation, the same mistake bound (see Remark 1) using fewer labels.

Theorem 2 Using the notation of Theorem 1, the expected number of mistakes made by the algorithm in Figure 2 is upper bounded by
$$\inf_{\gamma > 0} \inf_{u \in \mathbb{R}^n} \left( \frac{1}{\gamma}\, E\!\left[ \sum_{t \in U_T} D_\gamma(u; (\bar{x}_t, y_t)) \right] + \frac{b}{2\gamma^2}\, u^\top E\!\left[ A_{k(T)} \right] u + \frac{1}{2b} \sum_{i=1}^n E\left[ \ln(1 + \lambda_i) \right] \right),$$
where $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of the (random) correlation matrix $\sum_{t \in U_T} \bar{x}_t \bar{x}_t^\top$ and $A_{k(T)} = I + \sum_{t \in U_T} \bar{x}_t \bar{x}_t^\top$ (thus $1 + \lambda_i$ is the $i$-th eigenvalue of $A_{k(T)}$). The expected number of labels queried by the algorithm is equal to $\sum_{t=1}^T E\!\left[ \frac{b}{b + |r_t| + \frac{1}{2} r_t^2 \left( 1 + \bar{x}_t^\top A_{k-1}^{-1} \bar{x}_t \right)} \right]$.
Proof sketch. The proof proceeds along the same lines as the proof of Theorem 1. Thus we only emphasize the main differences. In addition to the notation given there, we define $U_t$ as the set of update trials up to time $t$, i.e., $U_t = \{i \le t : M_i Z_i = 1\}$, and $R_t$ as the (random) function $R_t(u) = \tfrac{1}{2}\|u\|^2 + \sum_{i \in U_t} \tfrac{1}{2}(y_i - u^\top \bar{x}_i)^2$. When trial $t$ is such that $M_t Z_t = 1$ we can exploit a result contained in [10] for linear regression (proof of Theorem 3 therein), where it is essentially shown that choosing $r_t = v_{k(t-1)}^\top A_{k(t)}^{-1} \bar{x}_t$ (as in Figure 2) yields
$$\tfrac{1}{2}(y_t - r_t)^2 = \inf_{u \in \mathbb{R}^n} R_t(u) - \inf_{u \in \mathbb{R}^n} R_{t-1}(u) + \tfrac{1}{2}\, \bar{x}_t^\top A_{k(t)}^{-1} \bar{x}_t - \tfrac{1}{2} r_t^2\, \bar{x}_t^\top A_{k(t)-1}^{-1} \bar{x}_t. \tag{4}$$
On the other hand, if trial $t$ is such that $M_t Z_t = 0$ we have $U_t = U_{t-1}$, thus $\inf_{u \in \mathbb{R}^n} R_{t-1}(u) = \inf_{u \in \mathbb{R}^n} R_t(u)$. Hence the equality
$$\tfrac{1}{2} M_t Z_t \left( (y_t - r_t)^2 + r_t^2\, \bar{x}_t^\top A_{k(t)-1}^{-1} \bar{x}_t \right) = \inf_{u \in \mathbb{R}^n} R_t(u) - \inf_{u \in \mathbb{R}^n} R_{t-1}(u) + \tfrac{1}{2} M_t Z_t\, \bar{x}_t^\top A_{k(t)}^{-1} \bar{x}_t \tag{5}$$
holds for all trials $t$. We sum over $t = 1, \ldots, T$, and observe that by definition $R_T(u) = \tfrac{1}{2}\|u\|^2 + \sum_{t=1}^T \tfrac{M_t Z_t}{2}(y_t - u^\top \bar{x}_t)^2$ and $R_0(u) = \tfrac{1}{2}\|u\|^2$ (thus $\inf_{u \in \mathbb{R}^n} R_0(u) = 0$). After some manipulation one can see that (5) implies
$$\sum_{t=1}^T M_t Z_t \left( y_t u^\top \bar{x}_t + |r_t| + \tfrac{1}{2} r_t^2 \left( 1 + \bar{x}_t^\top A_{k(t)-1}^{-1} \bar{x}_t \right) \right) \le \tfrac{1}{2} u^\top A_{k(T)} u + \sum_{t=1}^T \tfrac{1}{2} M_t Z_t\, \bar{x}_t^\top A_{k(t)}^{-1} \bar{x}_t, \tag{6}$$
holding for any $u \in \mathbb{R}^n$. We continue by elaborating on (6). First, as in [4, 10, 5], we upper bound the quadratic terms $\bar{x}_t^\top A_{k(t)}^{-1} \bar{x}_t$ by² $\ln \frac{\det(A_{k(t)})}{\det(A_{k(t)-1})}$. This gives
$$\sum_{t=1}^T \tfrac{1}{2} M_t Z_t\, \bar{x}_t^\top A_{k(t)}^{-1} \bar{x}_t \le \tfrac{1}{2} \ln \frac{\det(A_{k(T)})}{\det(A_0)} = \tfrac{1}{2} \sum_{i=1}^n \ln(1 + \lambda_i).$$
Second, as in the proof of Theorem 1, we stretch the comparison vector $u \in \mathbb{R}^n$ to $\frac{b}{\gamma} u$ and introduce hinge loss terms. We obtain:
$$\sum_{t=1}^T M_t Z_t \left( b + |r_t| + \tfrac{1}{2} r_t^2 \left( 1 + \bar{x}_t^\top A_{k(t)-1}^{-1} \bar{x}_t \right) \right) \le b \sum_{t \in U_T} \tfrac{1}{\gamma} D_\gamma(u; (\bar{x}_t, y_t)) + \frac{b^2}{2\gamma^2} u^\top A_{k(T)} u + \tfrac{1}{2} \sum_{i=1}^n \ln(1 + \lambda_i). \tag{7}$$
The bounds on $E[\sum_{t=1}^T M_t]$ and $E[\sum_{t=1}^T Z_t]$ can now be obtained by following the proof of Theorem 1. $\square$
2
Remark 1 The bounds in Theorems 1 and 2 depend on the choice of parameter b. As a
matter of fact, the
optimal tuning of this parameter
is easily computed.
q Let us2 set for brevity
hP
i
1
1
3
? ? (u; S) = E
?
D
xt , yt )) . Choosing b = 2 1 + ||4?
t?UT ? D? (u; (?
u||2 D? (u; S) in
Theorem 1 gives the following bound on the expected number of mistakes:
q
2
? ? (u; S) + ||u||2 + ||u|| D
? ? (u; S) + ||u||2 2 .
inf u?Rn D
(8)
2?
2?
4?
This is an expectation version of the mistake bound for the standard (first-order) Perceptron
algorithm [14]. Notice, that in the special case when the data are linearly separable with
margin ? ? the optimal tuning simplifies to b = 1/2 and
r yields the familiar Perceptron
bound ||u||2 /(? ? )2 . On the other hand, if we set b = ?
are led to the bound
? ? (u; S) +
inf u?Rn D
2
1
?
Pn
E ln(1+? )
i
i=1
u> E[Ak(T ) ]u in Theorem 2 we
q
Pn
(u> E Ak(T ) u) i=1 E ln (1 + ?i ) ,
(9)
Here det denotes the determinant.
Clearly, this tuning relies on information not available ahead of time, since it depends on the
whole sequence of examples. The same holds for the choice of b giving rise to (9).
3
which is an expectation version of the mistake bound for the (deterministic) second-order
Perceptron algorithm, as proven in [5]. As it turns out, (8) and (9) might be even sharper
than their deterministic counterparts. In fact, the set of update trials UT is on average
significantly smaller than the one
for the deterministic algorithms. This tends to shrink the
? ? (u; S), u> E Ak(T ) u, and Pn E ln (1 + ?i ), the main ingredients
three terms D
i=1
of the selective sampling bounds.
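As a quick numeric sanity check of the tunings in Remark 1 (our own illustration; the two helper names are hypothetical, not from the paper):

```python
import math

def perceptron_bound(D_hat, s):
    """Expected-mistake bound (8), with s = ||u|| / gamma."""
    return D_hat + s * s / 2.0 + s * math.sqrt(D_hat + s * s / 4.0)

def optimal_b(D_hat, s):
    """Optimal tuning of b for Theorem 1, as given in Remark 1."""
    return 0.5 * math.sqrt(1.0 + 4.0 * D_hat / (s * s))

# Separable case: D_hat = 0 gives b = 1/2 and the bound (||u||/gamma)^2, as stated.
assert optimal_b(0.0, 3.0) == 0.5
assert perceptron_bound(0.0, 3.0) == 9.0
```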
Remark 2 Like any Perceptron-like algorithm, the algorithms in Figures 1 and 2 can be
efficiently run in any given reproducing kernel Hilbert space (e.g., [9, 21, 23]), just by
turning them into equivalent dual forms. This is actually what we did in the experiments
reported in the next section.
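The following is our own sketch of such a dual form for the Figure 1 algorithm; it is not the paper's implementation. We assume the Gram matrix is built from the already-normalized instances, so the normalization of step 1 is folded into $K$.

```python
import numpy as np

def kernel_selective_perceptron(K, label_oracle, b=1.0, rng=None):
    """Dual (kernel) form of the first-order algorithm (sketch, cf. Remark 2).

    K : (T, T) Gram matrix of the normalized instances, K[s, t] = k(x_s, x_t).
    The hypothesis is v = sum_s alpha_s x_s, so r_t is a kernel expansion.
    """
    rng = np.random.default_rng() if rng is None else rng
    T = K.shape[0]
    alpha = np.zeros(T)              # dual coefficients (y_t on queried mistakes)
    queried = []
    for t in range(T):
        r = alpha @ K[:, t]          # r_t = sum_s alpha_s k(x_s, x_t)
        y_hat = 1 if r >= 0 else -1
        if rng.random() < b / (b + abs(r)):
            queried.append(t)
            y = label_oracle(t)
            if y_hat != y:
                alpha[t] = y         # a support vector is added only on an update
    return alpha, queried
```

Note how saving labels directly saves support vectors here: only queried mistakes produce nonzero coefficients.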
4 Experiments
The empirical evaluation of our algorithms was carried out on two datasets of free-text documents. The first dataset is made up of the first (in chronological order) 40,000 newswire stories from Reuters Corpus Volume 1 (RCV1) [2]. The resulting set of examples was classified over 101 categories. The second dataset is a specific subtree of the OHSUMED corpus of medical abstracts [1]: the subtree rooted in "Quality of Health Care" (MeSH code N05.712). From this subtree we randomly selected a subset of 40,000 abstracts. The resulting number of categories was 94. We performed a standard preprocessing on the datasets; details will be given in the full paper.
Two kinds of experiments were made on each dataset. In the first experiment we compared the selective sampling algorithms in Figures 1 and 2 (for different values of b) with the standard second-order Perceptron algorithm (requesting all labels). Such a comparison was devoted to studying the extent to which a reduced number of label requests might lead to performance degradation. In the second experiment, we compared variable vs. constant label-request rate. That is, we fixed a few values for parameter b, ran the selective sampling algorithm in Figure 2, and computed the fraction of labels requested over the training set. Call this fraction $\hat{p} = \hat{p}(b)$. We then ran a second-order selective sampling algorithm with (constant) label request probability equal to $\hat{p}$ (independent of $t$). The aim of this experiment was to investigate the effectiveness of a margin-based selective sampling criterion, as opposed to a random one.
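For concreteness, here is a sketch of the constant-rate baseline just described. The function name is ours, and we show the first-order update for brevity; the baseline actually used in the paper runs the second-order rule.

```python
import numpy as np

def fixed_rate_perceptron(X, label_oracle, p_hat, rng=None):
    """Constant label-request baseline: query each label with fixed
    probability p_hat, independent of t and of the margin."""
    rng = np.random.default_rng() if rng is None else rng
    v = np.zeros(X.shape[1])
    queried = []
    for t, x in enumerate(X):
        x_bar = x / np.linalg.norm(x)
        y_hat = 1 if v @ x_bar >= 0 else -1
        if rng.random() < p_hat:          # rate matched to hat{p}(b), not margin-driven
            queried.append(t)
            y = label_oracle(t)
            if y_hat != y:
                v = v + y * x_bar
    return v, queried
```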
Figure 3 summarizes the results we obtained on RCV1 (the results on OHSUMED turned out to be similar, and are therefore omitted from this paper). For the purpose of this graphical representation, we selected the 50 most frequent categories from RCV1, those with frequency larger than 1%. The standard second-order algorithm is denoted by 2ND-ORDER-ALL-LABELS, the selective sampling algorithms in Figures 1 and 2 are denoted by 1ST-ORDER and 2ND-ORDER, respectively, whereas the second-order algorithm with constant label request is denoted by 2ND-ORDER-FIXED-BIAS.⁴ As evinced by Figure 3(a), there is a range of values for parameter b that makes 2ND-ORDER achieve almost the same performance as 2ND-ORDER-ALL-LABELS, but with a substantial reduction in the total number of queried labels.⁵ In Figure 3(b) we report the results of running 2ND-ORDER, 1ST-ORDER and 2ND-ORDER-FIXED-BIAS after choosing values for b that make the average F-measure achieved by 2ND-ORDER just slightly larger than those achieved by the other two algorithms. We then compared the resulting label request rates and found 2ND-ORDER largely the best among the three algorithms (its instantaneous label rate after 40,000 examples is less than 19%). We made similar experiments for specific categories in RCV1. On the most frequent ones (such as category 70, Figure 3(c)) this behavior gets emphasized. Finally, in Figure 3(d) we report a direct macroaveraged F-measure comparison between 2ND-ORDER and 2ND-ORDER-FIXED-BIAS for 5 values of b. On the x-axis are the resulting 5 values of the constant bias $\hat{p}(b)$. As expected, 2ND-ORDER outperforms 2ND-ORDER-FIXED-BIAS, though the difference between the two tends to shrink as b (or, equivalently, $\hat{p}(b)$) gets larger.
⁴ We omitted to report on the first-order algorithms 1ST-ORDER-ALL-LABELS and 1ST-ORDER-FIXED-BIAS, since they are always outperformed by their corresponding second-order algorithms.
⁵ Notice that the figures are plotting instantaneous label rates, hence the overall fraction of queried labels is obtained by integration.

[Figure 3 plots: four panels of instantaneous F-measure and label-request rate vs. number of training examples (4,000 to 24,000). Panel titles: "2ND-ORDER: parameter b variations"; "Selective sampling comparison on RCV1 dataset"; "Selective sampling comparison on category 70 of RCV1 dataset"; "2ND-ORDER: margin-based vs. fixed bias". Curves shown: 2ND-ORDER with b = 0.025, 0.05, 0.075; 1ST-ORDER with b = 1.0; 2ND-ORDER-FIXED-BIAS with p = 0.489. Panel (d) x-axis: label-request values 0.104, 0.211, 0.336, 0.425, 0.489.]
Figure 3: Instantaneous F-measure and instantaneous label-request rate on the RCV1 dataset. We solved a binary classification problem for each class and then (macro)averaged the results. All curves tend to flatten after about 24,000 examples (out of 40,000). (a) Instantaneous macroaveraged F-measure of 2ND-ORDER (for three values of b) and their corresponding label-request curves. For the sake of comparison, we also included the F-measure of 2ND-ORDER-ALL-LABELS. (b) Comparison among 2ND-ORDER, 1ST-ORDER and 2ND-ORDER-FIXED-BIAS. (c) Same comparison on a specific category. (d) F-measure of 2ND-ORDER vs. F-measure of 2ND-ORDER-FIXED-BIAS for 5 values of parameter b, after 40,000 examples.
5 Conclusions and open problems
We have introduced new Perceptron-like selective sampling algorithms for learning linear-threshold functions. We analyzed these algorithms in a worst-case on-line learning setting, providing bounds on both the expected number of mistakes and the expected number of labels requested. Our theoretical investigation naturally arises from the traditional way margin-based algorithms are analyzed in the mistake bound model of on-line learning [18, 15, 11, 13, 14, 5]. This investigation suggests that our worst-case selective sampling algorithms can achieve on average the same accuracy as that of their more standard relatives, while allowing a substantial label saving. These theoretical results are corroborated by our empirical comparison on textual data, where we have shown that: (1) the selective sampling algorithms tend to be unaffected by observing fewer and fewer labels; (2) if we fix ahead of time the total number of label observations, the margin-driven way of distributing these observations over the training set is largely more effective than a random one.
We close with two simple open questions. (1) Our selective sampling algorithms depend on a scale parameter b having a significant influence on their practical performance. Is there any principled way of adaptively tuning b so as to reduce the algorithms' sensitivity to tuning parameters? (2) Theorems 1 and 2 do not make any explicit statement about the number of weight updates/support vectors computed by our selective sampling algorithms. We would like to see a theoretical argument that enables us to combine the bound on the number of mistakes with that on the number of labels, giving rise to a meaningful upper bound on the number of updates.
References
[1] The OHSUMED test collection. URL: medir.ohsu.edu/pub/ohsumed/.
[2] Reuters corpus volume 1. URL: about.reuters.com/researchandstandards/corpus/.
[3] Atlas, L., Cohn, R., and Ladner, R. (1990). Training connectionist networks with queries and selective sampling. In NIPS 2. MIT Press.
[4] Azoury, K.S., and Warmuth, M.K. (2001). Relative loss bounds for on-line density estimation with the exponential family of distributions. Machine Learning, 43(3):211-246.
[5] Cesa-Bianchi, N., Conconi, A., and Gentile, C. (2002). A second-order Perceptron algorithm. In Proc. 15th COLT, pp. 121-137. LNAI 2375, Springer.
[6] Cesa-Bianchi, N., Lugosi, G., and Stoltz, G. (2004). Minimizing regret with label efficient prediction. In Proc. 17th COLT, to appear.
[7] Cesa-Bianchi, N., Conconi, A., and Gentile, C. (2003). Learning probabilistic linear-threshold classifiers via selective sampling. In Proc. 16th COLT, pp. 373-386. LNAI 2777, Springer.
[8] Campbell, C., Cristianini, N., and Smola, A. (2000). Query learning with large margin classifiers. In Proc. 17th ICML, pp. 111-118. Morgan Kaufmann.
[9] Cristianini, N., and Shawe-Taylor, J. (2001). An Introduction to Support Vector Machines. Cambridge University Press.
[10] Forster, J. (1999). On relative loss bounds in generalized linear regression. In Proc. 12th Int. Symp. FCT, pp. 269-280. Springer.
[11] Freund, Y., and Schapire, R.E. (1999). Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277-296.
[12] Freund, Y., Seung, S., Shamir, E., and Tishby, N. (1997). Selective sampling using the query by committee algorithm. Machine Learning, 28(2/3):133-168.
[13] Gentile, C., and Warmuth, M. (1998). Linear hinge loss and average margin. In NIPS 10, MIT Press, pp. 225-231.
[14] Gentile, C. (2003). The robustness of the p-norm algorithms. Machine Learning, 53(3):265-299.
[15] Grove, A.J., Littlestone, N., and Schuurmans, D. (2001). General convergence results for linear discriminant updates. Machine Learning, 43(3):173-210.
[16] Helmbold, D.P., Littlestone, N., and Long, P.M. (2000). Apple tasting. Information and Computation, 161(2):85-139.
[17] Helmbold, D.P., and Panizza, S. (1997). Some label efficient learning results. In Proc. 10th COLT, pp. 218-230. ACM Press.
[18] Littlestone, N. (1988). Learning quickly when irrelevant attributes abound: a new linear-threshold algorithm. Machine Learning, 2(4):285-318.
[19] Littlestone, N., and Warmuth, M.K. (1994). The weighted majority algorithm. Information and Computation, 108(2):212-261.
[20] Rosenblatt, F. (1958). The Perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65:386-408.
[21] Schölkopf, B., and Smola, A. (2002). Learning with Kernels. MIT Press.
[22] Tong, S., and Koller, D. (2000). Support vector machine active learning with applications to text classification. In Proc. 17th ICML. Morgan Kaufmann.
[23] Vapnik, V.N. (1998). Statistical Learning Theory. Wiley.
[24] Vovk, V. (1990). Aggregating strategies. In Proc. 3rd COLT, pp. 371-383. Morgan Kaufmann.
| 2584 |@word trial:17 determinant:1 version:6 norm:1 nd:39 open:2 reduction:1 initial:2 pub:1 document:1 elaborating:1 outperforms:1 current:2 com:1 assigning:1 mesh:1 additive:1 enables:1 atlas:1 update:14 v:3 fewer:4 selected:2 warmuth:3 inspection:1 num:1 dell:1 along:1 direct:2 prove:2 combine:1 symp:1 introduce:1 excellence:1 expected:9 behavior:1 brain:1 inspired:1 ohsumed:4 abound:1 notation:3 moreover:1 bounded:1 medium:1 what:4 kind:2 substantially:1 developed:1 whatsoever:1 hindsight:1 finding:1 dti:2 collecting:1 chronological:1 exactly:1 universit:1 classifier:2 grant:1 medical:1 appear:1 before:1 aggregating:1 tends:2 mistake:18 ak:25 subscript:1 lugosi:1 might:3 plus:1 therein:3 tasting:2 initialization:2 suggests:1 range:2 averaged:1 practical:1 practice:1 regret:1 differs:1 empirical:4 significantly:1 flatten:1 get:4 close:1 selection:1 storage:1 influence:1 equivalent:1 deterministic:4 yt:39 timeconsuming:1 helmbold:2 rule:6 insubria:1 searching:1 variation:1 pt:12 shamir:1 hypothesis:1 expensive:1 predicts:2 corroborated:1 observed:1 fly:1 solved:1 worst:8 counter:1 incremented:1 observes:1 substantial:2 principled:1 environment:1 seung:1 cristianini:2 depend:2 panizza:1 efficiency:1 easily:1 effective:1 query:8 outcome:1 choosing:4 whose:2 larger:3 say:2 ability:1 sequence:7 eigenvalue:2 frequent:2 macro:1 turned:1 iff:1 achieve:5 milan:2 olkopf:1 n05:1 exploiting:1 convergence:1 categorization:2 generating:1 rt2:3 progress:1 dividing:1 predicted:1 implies:4 saved:1 attribute:1 stochastic:1 human:1 sgn:2 fix:1 preliminary:1 randomization:2 investigation:2 stretch:2 hold:4 considered:1 predict:2 omitted:2 purpose:1 estimation:1 proc:8 outperformed:1 label:70 currently:1 by2:1 reflects:1 weighted:1 mit:3 clearly:2 always:1 aim:1 avoid:1 claudio:1 pn:4 publication:1 bernoulli:3 adversarial:1 sense:1 inference:1 a0:1 lnai:2 hidden:1 koller:1 selective:36 linearthreshold:3 overall:2 classification:5 dual:1 pascal:1 us2:1 denoted:3 among:2 colt:5 special:1 fairly:1 integration:1 equal:3 saving:4 having:2 sampling:36 flipped:1 broad:1 icml:2 report:3 connectionist:1 few:2 randomly:1 familiar:1 phase:1 recalling:1 organization:1 investigate:1 evaluation:1 analyzed:3 devoted:1 grove:1 partial:1 researchandstandards:1 stoltz:1 taylor:1 littlestone:5 theoretical:6 instance:7 earlier:1 soft:1 subset:1 predictor:3 tishby:1 reported:1 adaptively:1 st:7 density:1 randomized:5 sensitivity:1 probabilistic:3 receiving:1 quickly:1 again:1 cesa:5 opposed:1 possibly:1 b2:1 int:1 matter:2 depends:1 performed:1 view:1 observing:3 start:1 maintains:1 accuracy:1 macroaveraged:2 largely:3 efficiently:2 kaufmann:3 yield:3 expertise:1 apple:2 unaffected:1 classified:1 definition:2 frequency:1 pp:7 naturally:1 associated:2 proof:8 mi:1 dataset:6 ask:4 ut:14 hilbert:2 actually:3 campbell:1 shrink:2 though:1 furthermore:1 just:3 smola:2 correlation:1 working:1 receives:2 hand:5 web:2 replacing:1 sketch:1 cohn:1 quality:1 normalized:1 true:2 y2:1 counterpart:2 verify:1 hence:3 equality:2 rooted:1 criterion:2 generalized:1 instantaneous:5 mt:19 volume:2 tail:1 significant:1 cambridge:1 queried:6 tuning:5 rd:1 trivially:1 hp:7 newswire:1 gratefully:1 shawe:1 winnow:1 inf:13 driven:1 irrelevant:1 scenario:1 claimed:1 manipulation:1 inequality:1 zaniboni:2 binary:4 continue:1 yi:2 devise:1 preserving:1 morgan:3 gentile:6 care:1 r0:2 determine:1 redundant:1 ii:2 full:1 unlabelled:1 long:1 luca:1 plugging:1 prediction:4 variant:2 regression:3 essentially:1 expectation:7 kernel:4 achieved:2 addition:1 
whereas:1 sch:1 unlike:1 tend:3 effectiveness:2 call:1 revealed:1 automated:1 fit:2 zi:1 reduce:1 idea:1 simplifies:1 requesting:1 det:5 whether:5 distributing:1 url:2 speaking:1 remark:4 useful:1 amount:1 category:7 reduced:1 schapire:1 notice:3 sign:2 rosenblatt:1 broadly:1 key:1 threshold:7 achieving:1 ht:1 fraction:3 sum:2 run:4 place:1 almost:1 ruling:1 decide:2 draw:2 summarizes:1 bound:26 quadratic:1 evinced:1 ahead:2 x2:1 sake:1 argument:1 rcv1:7 separable:2 fct:1 according:1 request:20 smaller:1 slightly:1 ln:8 equation:1 remains:1 turn:5 count:1 mechanism:1 committee:2 end:1 studying:2 available:1 observe:1 generic:1 batch:1 coin:3 robustness:1 denotes:1 running:2 graphical:1 hinge:4 exploit:4 giving:2 prof:1 classical:2 question:1 quantity:1 occurs:1 strategy:1 costly:1 rt:34 usual:1 traditional:1 forster:1 separate:1 majority:1 topic:1 extent:3 discriminant:1 code:1 providing:1 minimizing:1 equivalently:1 statement:2 holding:2 sharper:1 rise:2 zt:27 perform:1 bianchi:5 upper:4 allowing:1 observation:2 ladner:1 datasets:2 discarded:1 acknowledge:1 finite:1 precise:1 head:1 y1:2 rn:21 reproducing:2 arbitrary:1 introduced:1 complement:2 pair:1 iand:1 z1:2 textual:1 established:1 nip:2 able:1 proceeds:2 pattern:1 built:1 including:1 max:1 natural:1 turning:2 scarce:1 axis:1 carried:1 psychol:1 health:1 text:4 review:1 literature:2 nicol:1 relative:3 freund:2 loss:6 dsi:3 proven:2 ingredient:1 plotting:1 story:1 pi:1 free:2 bias:15 side:1 perceptron:23 ber:1 face:1 taking:1 curve:2 world:1 cumulative:2 author:2 made:8 collection:1 preprocessing:1 far:2 ec:1 cope:1 emphasize:1 confirm:1 dealing:1 decides:5 active:2 reveals:1 corpus:4 conclude:1 rearranging:2 contributes:1 schuurmans:1 requested:2 investigated:2 did:1 main:2 linearly:2 azoury:1 whole:1 noise:1 reuters:3 x1:2 tong:1 wiley:1 explicit:1 exponential:1 theorem:14 xt:22 specific:3 emphasized:1 svm:1 evidence:1 exists:1 vapnik:1 subtree:3 occurring:1 margin:17 led:1 contained:2 conconi:2 partially:2 springer:3 relies:2 acm:1 goal:4 viewed:1 identity:1 labelled:2 included:1 unimi:3 hyperplane:1 vovk:1 degradation:1 total:4 called:1 meaningful:2 select:1 internal:1 support:5 arises:1 brevity:1 ohsu:1 |
1,744 | 2,585 | Hierarchical Clustering of a Mixture Model
Jacob Goldberger Sam Roweis
Department of Computer Science, University of Toronto
{jacob,roweis}@cs.toronto.edu
Abstract
In this paper we propose an efficient algorithm for reducing a large
mixture of Gaussians into a smaller mixture while still preserving the component structure of the original model; this is achieved
by clustering (grouping) the components. The method minimizes
a new, easily computed distance measure between two Gaussian
mixtures that can be motivated from a suitable stochastic model
and the iterations of the algorithm use only the model parameters,
avoiding the need for explicit resampling of datapoints. We demonstrate the method by performing hierarchical clustering of scenery
images and handwritten digits.
1 Introduction
The Gaussian mixture model (MoG) is a flexible and powerful parametric framework for unsupervised data grouping. Mixture models, however, are often involved
in other learning processes whose goals extend beyond simple density estimation to
hierarchical clustering, grouping of discrete categories or model simplification. In
many such situations we need to group the Gaussians components and re-represent
each group by a new single Gaussian density. This grouping results in a compact
representation of the original mixture of many Gaussians that respects the original
component structure in the sense that no original component is split in the reduced
representation. We can view the problem of Gaussian component clustering as general data-point clustering with side information that points belonging to the same
original Gaussian component should end up in the same final cluster. Several algorithms that perform clustering of data points given such constraints were recently
proposed [11, 5, 12]. In this study we extend these approaches to model-based
rather than datapoint based settings. Of course, one could always generate data by
sampling from the model, enforcing the constraint that any two samples generated
by the same mixture component must end up in the same final cluster. We show
that if we already have a parametric representation of the constraint via the MoG
density, there is no need for an explicit sampling phase to generate representative
datapoints and their associated constraints.
In other situations we want to collapse a MoG into a mixture of fewer components
in order to reduce computation complexity. One example is statistical inference
in switching dynamic linear models, where performing exact inference with a MoG
prior causes the number of Gaussian components representing the current belief
to grow exponentially in time. One common solution to this problem is grouping
the Gaussians according to their common history in recent timesteps and collapsing
Gaussians grouped together into a single Gaussian [1]. Such a reduction, however, is
not based on the parameters of the Gaussians. Other instances in which collapsing
MoGs is relevant are variants of particle filtering [10], non-parametric belief propagation [7] and fault detection in dynamical systems [3]. A straight-forward solution
for these situations is first to produce samples from the original MoG and then to
apply the EM algorithm to learn a reduced model; however this is computationally
inefficient and does not preserve the component structure of the original mixture.
2 The Clustering Algorithm
We assume that we are given a mixture density $f$ composed of $k$ $d$-dimensional Gaussian components:
$$f(y) = \sum_{i=1}^k \alpha_i N(y; \mu_i, \Sigma_i) = \sum_{i=1}^k \alpha_i f_i(y) \tag{1}$$
We want to cluster the components of $f$ into a reduced mixture of $m < k$ components. If we denote the set of all ($d$-dimensional) Gaussian mixture models with at most $m$ components by $\mathrm{MoG}(m)$, one way to formalize the goal of clustering is to say that we wish to find the element $g$ of $\mathrm{MoG}(m)$ "closest" to $f$ under some distance measure. A common proximity criterion is the cross-entropy from $f$ to $g$, i.e. $\hat{g} = \arg\min_g KL(f \| g) = \arg\max_g \int f \log g$, where $KL(\cdot \| \cdot)$ is the Kullback-Leibler divergence and the minimization is performed over all $g$ in $\mathrm{MoG}(m)$. This criterion leads to an intractable optimization problem; there is not even a closed-form expression for the KL-divergence between two MoGs, let alone an analytic minimizer of its second argument. Furthermore, minimizing a KL-based criterion does not preserve the original component structure of $f$. Instead, we introduce the following new distance measure between $f = \sum_{i=1}^k \alpha_i f_i$ and $g = \sum_{j=1}^m \beta_j g_j$:
$$d(f, g) = \sum_{i=1}^k \alpha_i \min_{j=1,\ldots,m} KL(f_i \| g_j) \tag{2}$$
which can be intuitively thought of as the cost of coding data generated by $f$ under the model $g$, if all points generated by component $i$ of $f$ must be coded under a single component of $g$. Unlike the KL-divergence between two MoGs, this distance can be analytically computed. In particular, each term is a KL-divergence between two Gaussian distributions $N(\mu_1, \Sigma_1)$ and $N(\mu_2, \Sigma_2)$, which is given by:
$$KL\big( N(\mu_1, \Sigma_1) \,\|\, N(\mu_2, \Sigma_2) \big) = \frac{1}{2} \left( \log \frac{|\Sigma_2|}{|\Sigma_1|} + \mathrm{Tr}\!\left( \Sigma_2^{-1} \Sigma_1 \right) + (\mu_1 - \mu_2)^\top \Sigma_2^{-1} (\mu_1 - \mu_2) - d \right).$$
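The two quantities above translate directly into code; the following NumPy sketch is our own illustration (names and the direct matrix inversion are our choices, not from the paper):

```python
import numpy as np

def kl_gauss(mu1, S1, mu2, S2):
    """KL divergence between two d-dimensional Gaussians, per the formula above."""
    d = mu1.shape[0]
    S2_inv = np.linalg.inv(S2)
    diff = mu1 - mu2
    return 0.5 * (np.log(np.linalg.det(S2) / np.linalg.det(S1))
                  + np.trace(S2_inv @ S1)
                  + diff @ S2_inv @ diff
                  - d)

def mixture_distance(alphas, mus, Ss, mus_g, Ss_g):
    """d(f, g) of eq. (2): weighted sum of each f_i's KL to its closest g_j."""
    return sum(a * min(kl_gauss(m, S, mg, Sg) for mg, Sg in zip(mus_g, Ss_g))
               for a, m, S in zip(alphas, mus, Ss))
```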
Under this distance, the optimal reduced MoG representation $\hat{g}$ is the solution to the minimization of (2) over $\mathrm{MoG}(m)$: $\hat{g} = \arg\min_g d(f, g)$. Although the minimization ranges over all of $\mathrm{MoG}(m)$, we prove that the optimal density $\hat{g}$ is a MoG obtained from grouping the components of $f$ into clusters and collapsing all Gaussians within a cluster into a single Gaussian. There is no closed-form solution for the minimization; rather, we propose an iterative algorithm to obtain a locally optimal solution. Denote the set of all the $m^k$ mappings from $\{1, \ldots, k\}$ to $\{1, \ldots, m\}$ by $S$. For each $\sigma \in S$ and $g \in \mathrm{MoG}(m)$ define:
$$d(f, g, \sigma) = \sum_{i=1}^k \alpha_i\, KL(f_i \| g_{\sigma(i)}). \tag{3}$$
For a given $g \in \mathrm{MoG}(m)$, we associate a matching function $\sigma^g \in S$:
$$\sigma^g(i) = \arg\min_{j=1,\ldots,m} KL(f_i \| g_j), \qquad i = 1, \ldots, k. \tag{4}$$
It can be easily verified that:
$$d(f, g) = d(f, g, \sigma^g) = \min_{\sigma \in S} d(f, g, \sigma), \tag{5}$$
i.e. $\sigma^g$ is the optimal mapping between the components of $f$ and $g$. Using (5) to define our main optimization, we obtain the optimal reduced model as a solution of the following double minimization problem:
$$\hat{g} = \arg\min_g \min_{\sigma \in S} d(f, g, \sigma). \tag{6}$$
For $m > 1$ the double minimization (6) cannot be solved analytically. Instead, we can use alternating minimization to obtain a local minimum. Given a matching function $\sigma \in S$, we define $g^\sigma \in \mathrm{MoG}(m)$ as follows. For each $j$ such that $\sigma^{-1}(j)$ is non-empty, define the following MoG density:
$$f_j^\sigma = \frac{\sum_{i \in \sigma^{-1}(j)} \alpha_i f_i}{\sum_{i \in \sigma^{-1}(j)} \alpha_i}. \tag{7}$$
The mean and variance of the set $f_j^\sigma$, denoted by $\mu'_j$ and $\Sigma'_j$, are:
$$\mu'_j = \frac{1}{\beta_j} \sum_{i \in \sigma^{-1}(j)} \alpha_i \mu_i, \qquad \Sigma'_j = \frac{1}{\beta_j} \sum_{i \in \sigma^{-1}(j)} \alpha_i \left( \Sigma_i + (\mu_i - \mu'_j)(\mu_i - \mu'_j)^\top \right),$$
where $\beta_j = \sum_{i \in \sigma^{-1}(j)} \alpha_i$. Let $g_j^\sigma = N(\mu'_j, \Sigma'_j)$ be the Gaussian distribution obtained by collapsing the set $f_j^\sigma$ into a single Gaussian. It satisfies:
$$g_j^\sigma = N(\mu'_j, \Sigma'_j) = \arg\min_g KL(f_j^\sigma \| g) = \arg\min_g d(f_j^\sigma, g),$$
such that the minimization is performed over all the $d$-dimensional Gaussian densities. Denote the collapsed version of $f$ according to $\sigma$ by $g^\sigma$, i.e.:
$$g^\sigma = \sum_{j=1}^m \beta_j g_j^\sigma. \tag{8}$$
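The moment-matched collapse of eq. (7) and the formulas above is a few lines of code; this sketch is ours, with hypothetical names:

```python
import numpy as np

def collapse(alphas, mus, Ss, members):
    """Collapse the components listed in `members` into one Gaussian,
    following eq. (7) and the mean/variance formulas above."""
    beta = sum(alphas[i] for i in members)
    mu = sum(alphas[i] * mus[i] for i in members) / beta
    S = sum(alphas[i] * (Ss[i] + np.outer(mus[i] - mu, mus[i] - mu))
            for i in members) / beta
    return beta, mu, S
```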
Lemma 1: Given a MoG $f$ and a matching function $\sigma \in S$, $g^\sigma$ is the unique minimum point of $d(f, g, \sigma)$. More precisely, $d(f, g^\sigma, \sigma) \le d(f, g, \sigma)$ for all $g \in \mathrm{MoG}(m)$, and if $d(f, g^\sigma, \sigma) = d(f, g, \sigma)$ then $g_j^\sigma = g_j$ for all $j = 1, \ldots, m$, where $g_j^\sigma$ and $g_j$ are the Gaussian components of $g^\sigma$ and $g$ respectively.

Proof: Denote $c = \sum_{i=1}^k \alpha_i \int f_i \log f_i$ (a constant independent of $g$).
$$c - d(f, g, \sigma) = \sum_{i=1}^k \alpha_i \int f_i \log(g_{\sigma(i)}) = \sum_{j=1}^m \sum_{i \in \sigma^{-1}(j)} \alpha_i \int f_i \log(g_j) = \sum_{j=1}^m \beta_j \int f_j^\sigma \log(g_j) = \sum_{j=1}^m \beta_j \int g_j^\sigma \log(g_j).$$
The Jensen inequality yields:
$$\le \sum_{j=1}^m \beta_j \int g_j^\sigma \log(g_j^\sigma) = \sum_{j=1}^m \beta_j \int f_j^\sigma \log(g_j^\sigma) = \sum_{i=1}^k \alpha_i \int f_i \log(g^\sigma_{\sigma(i)}) = c - d(f, g^\sigma, \sigma).$$
The equality $\int f_j^\sigma \log(g_j) = \int g_j^\sigma \log(g_j)$ is due to the fact that $\log(g_j)$ is a quadratic expression and the first two moments of $f_j^\sigma$ and its collapsed version $g_j^\sigma$ are equal. Jensen's inequality is saturated if and only if for all $j = 1, \ldots, m$ (such that $\sigma^{-1}(j)$ is not empty) the Gaussian densities $g_j$ and $g_j^\sigma$ are equal. $\square$
Using Lemma 1 we obtain a closed-form description of a single iteration of the alternating minimization algorithm, which can be viewed as a type of K-means operating at the meta-level of model parameters:
$$\sigma^g = \arg\min_\sigma d(f, g, \sigma) \qquad \text{(REGROUP)}$$
$$g^\sigma = \arg\min_g d(f, g, \sigma) \qquad \text{(REFIT)}$$
Above, $\sigma^g(i) = \arg\min_j KL(f_i \| g_j)$ and $g^\sigma$ is computed using (8). The iterative algorithm monotonically decreases the distance measure $d(f, g)$. Hence, since $S$ is finite, the algorithm converges to a local minimum point after a finite number of iterations. The next theorem ensures that once the iterative algorithm converges we obtain a clustering of the MoG components.
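Putting the two steps together, the following is a minimal sketch of the full procedure. It is our own: it reuses `kl_gauss` and `collapse` from the sketches above, and the initialization from the first $m$ components of $f$ is an arbitrary assumption, not something specified in this section.

```python
import numpy as np

def cluster_mog(alphas, mus, Ss, m, n_iter=50):
    """Alternating REGROUP/REFIT minimization of d(f, g)."""
    betas = list(alphas[:m])
    mus_g = [mu.copy() for mu in mus[:m]]
    Ss_g = [S.copy() for S in Ss[:m]]
    k = len(alphas)
    sigma = None
    for _ in range(n_iter):
        # REGROUP: match each f_i to its closest component of g, eq. (4)
        new_sigma = np.array([
            int(np.argmin([kl_gauss(mus[i], Ss[i], mus_g[j], Ss_g[j])
                           for j in range(m)]))
            for i in range(k)
        ])
        if sigma is not None and np.array_equal(new_sigma, sigma):
            break                # fixed point: g is a collapsed version of f
        sigma = new_sigma
        # REFIT: collapse each group into a single Gaussian, eqs. (7)-(8)
        for j in range(m):
            members = [i for i in range(k) if sigma[i] == j]
            if members:
                betas[j], mus_g[j], Ss_g[j] = collapse(alphas, mus, Ss, members)
    return sigma, betas, mus_g, Ss_g
```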
Definition 1: A MoG $g \in \mathrm{MoG}(m)$ is an $m$-mixture collapsed version of $f$ if there exists a matching function $\sigma \in S$ such that $g$ is obtained by collapsing $f$ according to $\sigma$, i.e. $g = g^\sigma$.

Theorem 1: If applying a single iteration (expressions (REGROUP) and (REFIT)) to a function $g \in \mathrm{MoG}(m)$ does not decrease the distance function (2), then necessarily $g$ is a collapsed version of $f$.

Proof: Let $g \in \mathrm{MoG}(m)$ and let $\sigma$ be a matching function such that $d(f, g) = d(f, g, \sigma)$. Let $g^\sigma$ be a collapsed version of $f$ according to $\sigma$. The MoG $g^\sigma$ is obtained as a result of applying a single iteration to $g$. Let $g$ be composed of the Gaussians $\{g_1, \ldots, g_m\}$ and similarly let $g^\sigma = \{g_1^\sigma, \ldots, g_m^\sigma\}$. According to Lemma 1, $d(f, g) = d(f, g, \sigma) \ge d(f, g^\sigma, \sigma) \ge d(f, g^\sigma)$. Assume that a single iteration does not decrease the distance, i.e. $d(f, g) = d(f, g^\sigma)$. Hence $d(f, g, \sigma) = d(f, g^\sigma, \sigma)$. According to Lemma 1, this implies that $g_j = g_j^\sigma$ for all $j = 1, \ldots, m$. Therefore $g$ is a collapsed version of $f$. $\square$

Theorem 1 implies that each local minimum of the proposed iterative algorithm is a collapsed version of $f$.

Given the optimal matching function $\sigma$, the last step of the algorithm is to set the weights of the reduced representation: $\beta_j^* = \sum_{\{i \mid \sigma(i) = j\}} \alpha_i$. These weights are automatically obtained via the collapsing process.
3 Experimental Results
In this section we evaluate the performance of our semi-supervised clustering algorithm and compare it to the standard "flat" clustering approach that does not respect the original component structure. We have applied both methods to clustering handwritten digits and natural scene images. In each case, a set of objects is organized in predefined categories. For each category $c$ we learn from a labeled training set a Gaussian distribution $f(x|c)$. A prior distribution over the categories $p(c)$ can also be extracted from the labeled training set. The goal is to cluster the objects into a small number of clusters (fewer than the number of class labels). The standard (flat) approach is to apply unsupervised clustering to the entire collection of original objects, ignoring their class labels. Alternatively, we can utilize the given categorization as side-information in order to obtain an improved reduced clustering which also respects the original labels, thus inducing a hierarchical structure.
Figure 1: (top) Means of 10 models of digit classes. (bottom) Means of two clusters after our algorithm has grouped 0, 2, 3, 5, 6, 8 and 1, 4, 7, 9.

method / cls          0     1     2     3     4     5     6     7     8     9
this paper
  Class A           100     4    99    99     3    99    99     0    94     1
  Class B             0    96     1     1    98     2     1   100     6    99
unsupervised EM
  Class A            93    16    93    87    22    66    96    16    23    25
  Class B             7    85     7    14    78    34     4    84    77    76

Table 1: Clustering results showing the purity of a 2-cluster reduced model learned from a training set of handwritten digits in 10 original classes. For each true label, the percentage of cases (from an unseen test set) falling into each of the two reduced classes is shown. The top two lines show the purity of assignments provided by our clustering algorithm; the bottom two lines show assignments from a flat unsupervised fitting of a two-component mixture.
Our first experiment used a database of handwritten digits. Each example is represented by an 8 × 8 grayscale pixel image; 700 cases are used to learn a 64-dimensional full-covariance Gaussian distribution for each class. In the next step we want to
divide the digits into two natural clusters, while taking into account their original
10-way structure. We applied our semi-supervised algorithm to reduce the mixture of 10 Gaussians into a mixture of two Gaussians. The minimal distance (2)
is obtained when the ten digits are divided into the two groups {0, 2, 3, 5, 6, 8} and
{1, 4, 7, 9}. The means of the two resulting clusters are shown in Figure 1.
To evaluate the purity of this clustering, the reduced MoG was used to label a test set consisting of 4000 previously unseen examples. The binary labels on the test set are
obtained by comparing the likelihood of the two components in the reduced mixture.
Table 1 (top) presents, for each digit, the percentage of images that were affiliated
with each of the two clusters. Alternatively we can apply a standard EM algorithm
to learn by maximum likelihood a flat mixture of 2 Gaussians directly from the 7000
training examples, without utilizing their class labels. Table 1 (bottom) shows the
results of such an unsupervised clustering, evaluated on the same test set. Although the likelihood of the unsupervised mixture model was significantly better than that of the semi-supervised model on both the train and test sets, the purity of the clusters it learns is clearly much worse, since it does not preserve the hierarchical class structure. Comparing the top and bottom of Table 1, we can see that by using the side information we obtain a clustering of the digit database which is much more correlated with the categorization of the set into ten digits than the unsupervised procedure.
In a second experiment, we evaluate the performance of our proposed algorithm on image category models. The database used consists of 1460 images selectively hand-picked from the COREL database to create 16 categories. The images within each category have similar color spatial layout, and are labeled with a high-level semantic description (e.g. fields, sunset).
[Figure 2 plots: (left) mutual information (y-axis, 0.4 to 1.8) vs. number of clusters (x-axis, 2 to 6.5) for semi-supervised and unsupervised clustering; (right) sample-image panels labeled A, B, C, D.]
Figure 2: Hierarchical clustering of natural image categories. (left) Mutual information between reduced cluster index and original class. (right) Sample images
from the sets A,B,C,D learned by hierarchical clustering.
For each pixel we extract a five-dimensional feature
vector (3 color features and x, y position). From all the pixels belonging to the same category we learn a single Gaussian. We have clustered the image
categories into k = 2, ..., 6 sets using our algorithm and compared the results to
unsupervised clustering obtained from an EM procedure that learned a mixture of
k Gaussians. In order to evaluate the quality of the clustering in terms of correlation
with the category information we computed the mutual information (MI) between
the clustering result (into k clusters) and the category affiliation of the images in a
test set. A high value of mutual information indicates a strong resemblance between
the content of the learned clusters and the hand-picked image categories. It can be
verified from the results summarized in Figure 2 that, as we can expect, the MI in
the case of semi-supervised clustering is consistently larger than the MI in the case
of completely unsupervised clustering. A semi-supervised clustering of the image
database yields clusters that are based on both low-level features and the available high-level categorization. Sample images from the clustering into 4 sets are presented in Figure 2.
4 A Stochastic Model for the Proposed Distance
In this section we describe a stochastic process that induces a likelihood function
which coincides with the distance measure d(f, g) presented in section 2. Suppose
we are given two MoGs:
$$f(y) = \sum_{i=1}^k \alpha_i f_i(y) = \sum_{i=1}^k \alpha_i N(y; \mu_i, \Sigma_i), \qquad g(y) = \sum_{j=1}^m \beta_j g_j(y) = \sum_{j=1}^m \beta_j N(y; \mu'_j, \Sigma'_j).$$
Consider an i.i.d. sample set of size $n$, drawn from $f(y)$. The samples can be arranged in $k$ blocks according to the Gaussian component that was selected to produce the sample. Assume that $n_i$ samples were drawn from the $i$-th component $f_i$ and denote these samples by $y_i = \{y_{i1}, \ldots, y_{i n_i}\}$. Next, we compute the likelihood of the sample set according to the model $g$, but under the constraint that samples within the same block must be assigned to the same mixture component of $g$. In other words, instead of having a hidden variable for each sample point we shall have one for each sample block. The likelihood of the sample set $y^n$ according to the MoG $g$ under this constraint is:
$$L_n(g) = g(y_1, \ldots, y_k) = \prod_{i=1}^k \sum_{j=1}^m \beta_j \prod_{t=1}^{n_i} N(y_{it}; \mu'_j, \Sigma'_j).$$
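Computing $\log L_n(g)$ is one block-wise log-sum-exp; the following sketch (ours, assuming SciPy is available) makes the constraint explicit by assigning one hidden component per block rather than per sample:

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.special import logsumexp

def constrained_log_likelihood(blocks, betas, mus_g, Ss_g):
    """log L_n(g) with one hidden assignment per block of samples.

    blocks : list of (n_i, d) arrays, the samples drawn from each f_i.
    """
    total = 0.0
    for y_block in blocks:
        # log( beta_j * prod_t N(y_it; mu'_j, S'_j) ) for each component j
        per_j = [np.log(b) + multivariate_normal.logpdf(y_block, m, S).sum()
                 for b, m, S in zip(betas, mus_g, Ss_g)]
        total += logsumexp(per_j)
    return total
```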
The main result is that as the number of points sampled grows large, the expected negative log-likelihood becomes equal to the distance $d(f, g)$ under the measure proposed above:

Theorem 2: For each $g \in \mathrm{MoG}(m)$,
$$\lim_{n \to \infty} \frac{1}{n} \log L_n(g) = c - d(f, g), \tag{9}$$
such that $c = \sum_i \alpha_i \int f_i \log f_i$ does not depend on $g$.

Surprisingly, as noted earlier, the mixture weights $\beta_j$ do not appear in the asymptotic likelihood function of the generative model presented in this section.
Proof: To prove the theorem we shall use the following lemma:

Lemma 2: Let $\{x_{jn}\}$, $j = 1, \ldots, m$, be a set of $m$ sequences of real positive numbers such that $x_{jn} \to x_j$, and let $\{\beta_j\}$ be a set of positive numbers. Then $\frac{1}{n} \log \sum_j \beta_j (x_{jn})^n \to \max_j \log x_j$. [This can be shown as follows: Let $a = \arg\max_j x_j$. Then for $n$ sufficiently large, $\beta_a (x_{an})^n \le \sum_j \beta_j (x_{jn})^n \le m \beta_a (x_{an})^n$. Hence $\log x_a \le \lim_{n \to \infty} \frac{1}{n} \log \sum_j \beta_j (x_{jn})^n \le \log x_a$.]

The points $\{y_{i1}, \ldots, y_{i n_i}\}$ are independently sampled from the Gaussian distribution $f_i$. Therefore, the law of large numbers implies: $\frac{1}{n_i} \log \prod_{t=1}^{n_i} N(y_{it}; \mu'_j, \Sigma'_j) \to \int f_i \log g_j$. Hence, substituting $x_{j n_i} = \left( \prod_{t=1}^{n_i} N(y_{it}; \mu'_j, \Sigma'_j) \right)^{1/n_i} \to \exp\left( \int f_i \log g_j \right) = x_j$ in Lemma 2, we obtain: $\frac{1}{n_i} \log \sum_{j=1}^m \beta_j \prod_{t=1}^{n_i} N(y_{it}; \mu'_j, \Sigma'_j) \to \max_{j=1,\ldots,m} \int f_i \log g_j$. In a similar manner, the law of large numbers, applied to the discrete distribution $(\alpha_1, \ldots, \alpha_k)$, yields $\frac{n_i}{n} \to \alpha_i$. Hence
$$\frac{1}{n} \log L_n(g) = \frac{1}{n} \log g(y_1, \ldots, y_k) = \sum_{i=1}^k \frac{n_i}{n} \cdot \frac{1}{n_i} \log \sum_{j=1}^m \beta_j \prod_{t=1}^{n_i} N(y_{it}; \mu'_j, \Sigma'_j) \to \sum_{i=1}^k \alpha_i \max_{j=1,\ldots,m} \int f_i \log g_j = c - \sum_{i=1}^k \alpha_i \min_{j=1,\ldots,m} KL(f_i \| g_j) = c - d(f, g). \qquad \square$$

5 Relations to Previous Approaches and Conclusions

Other authors have recently investigated the learning of Gaussian mixture models using various pieces of side information or constraints. Shental et al. [5] utilized the generative model described in the previous section, and the EM algorithm derived from it, to learn a MoG from a data set endowed with equivalence constraints that enforce equivalent points to be assigned to the same cluster. Vasconcelos and Lippman [9] proposed a similar EM-based clustering algorithm for constructing mixture hierarchies using a finite set of virtual samples.

Given the generative model presented above, we can apply the EM algorithm to learn the (locally) maximum likelihood parameters of the reduced MoG model $g(y)$. This EM-based approach, however, is not precisely suitable for our component clustering problem. The EM update rule for the weights of the reduced mixture density is based only on the number of the original components that are clustered into a single component, without taking into account the relative weights [9].

The problem discussed in this study is also related to the Information-Bottleneck (IB) principle [8]. In the case of a mixture of histograms $f = \sum_{i=1}^k \alpha_i f_i$, the IB principle yields the following iterative algorithm for finding a clustering of a mixture of histograms $g = \sum_{j=1}^m \beta_j g_j(y)$:
$$w_{ij} = \frac{\beta_j e^{-\lambda KL(f_i \| g_j)}}{\sum_l \beta_l e^{-\lambda KL(f_i \| g_l)}}, \qquad \beta_j = \sum_i w_{ij} \alpha_i, \qquad g_j = \frac{\sum_i w_{ij} \alpha_i f_i}{\sum_i w_{ij} \alpha_i}. \tag{10}$$
Assuming that the number of the (virtual) samples tends to $\infty$, we can derive, in a manner similar to the Gaussian case, a grouping algorithm for a mixture of histograms. Slonim and Weiss [6] showed that the clustering algorithm in this case can be either motivated from the EM algorithm applied to a suitable generative model [4] or from the (hard decision version of the) IB principle [8]. However, when we want to represent the clustering result as a mixture density there is a difference in the resulting mixture coefficients between the EM and the IB based algorithms. Unlike the IB updating equation (10) of the coefficients $w_{ij}$, the EM update equation is based only on the number of components that are collapsed into a single Gaussian. In the case of a mixture of Gaussians, applying the IB principle results only in a partitioning of the original components but does not deliver a reduced representation in the form of a smaller mixture [2]. If we modify $g_j$ in equation (10) by collapsing the mixture $g_j$ into a single Gaussian, we obtain a soft version of our algorithm. Setting the Lagrange multiplier $\lambda$ to $\infty$ we recover exactly the algorithm described in Section 2.

To conclude, we have presented an efficient Gaussian component clustering algorithm that can be used for object category clustering and for MoG collapsing. We have shown that our method optimizes the distance measure between two MoGs that we proposed. In this study we have assumed that the desired number of clusters is given as part of the problem setup, but if this is not the case, standard methods for model selection can be applied.
References
[1] Y. Bar-Shalom and X. Li. Estimation and tracking: principles, techniques and software. Artech House, 1993.
[2] S. Gordon, H. Greenspan, and J. Goldberger. Applying the information bottleneck principle to unsupervised clustering of discrete and continuous image representations. In ICCV, 2003.
[3] U. Lerner, R. Parr, D. Koller, and G. Biswas. Bayesian fault detection and diagnosis in dynamic systems. In AAAI/IAAI, pp. 531-537, 2000.
[4] J. Puzicha, T. Hofmann, and J. Buhmann. Histogram clustering for unsupervised segmentation and image retrieval. Pattern Recognition Letters, 20(9):899-909, 1999.
[5] N. Shental, A. Bar-Hillel, T. Hertz, and D. Weinshall. Computing Gaussian mixture models with EM using equivalence constraints. In Proc. of Neural Information Processing Systems, 2003.
[6] N. Slonim and Y. Weiss. Maximum likelihood and the information bottleneck. In Proc. of Neural Information Processing Systems, 2003.
[7] E. Sudderth, A. Ihler, W. Freeman, and A. Willsky. Non-parametric belief propagation. In CVPR, 2003.
[8] N. Tishby, F. Pereira, and W. Bialek. The information bottleneck method. In Proc. of the 37th Annual Allerton Conference on Communication, Control and Computing, pp. 368-377, 1999.
[9] N. Vasconcelos and A. Lippman. Learning mixture hierarchies. In Proc. of Neural Information Processing Systems, 1998.
[10] J. Vermaak, A. Doucet, and P. Perez. Maintaining multi-modality through mixture tracking. In Int. Conf. on Computer Vision, 2003.
[11] K. Wagstaff, C. Cardie, S. Rogers, and S. Schroedl. Constrained k-means clustering with background knowledge. In Proc. Int. Conf. on Machine Learning, 2001.
[12] E.P. Xing, A.Y. Ng, M.I. Jordan, and S. Russell. Distance metric learning. In Proc. of Neural Information Processing Systems, 2003.
| 2585 |@word version:9 bn:1 covariance:1 jacob:2 vermaak:1 moment:1 reduction:1 current:1 comparing:2 goldberger:2 must:3 hofmann:1 analytic:1 update:2 resampling:1 alone:1 generative:4 fewer:2 selected:1 yi1:2 toronto:2 allerton:1 five:1 prove:2 consists:2 fitting:1 manner:2 introduce:1 expected:1 multi:1 freeman:1 ming:2 automatically:1 becomes:1 provided:1 weinshall:1 minimizes:1 finding:1 exactly:1 partitioning:1 control:1 yn:1 appear:1 positive:2 local:3 modify:1 tends:1 slonim:2 switching:1 equivalence:2 collapse:1 range:1 regroup:2 unique:1 block:3 lippman:2 procedure:2 digit:10 thought:1 significantly:1 matching:6 word:1 selection:1 collapsed:8 applying:4 equivalent:1 layout:1 independently:1 rule:1 utilizing:1 datapoints:2 hierarchy:2 gm:2 suppose:1 exact:1 associate:1 element:1 recognition:1 utilized:1 updating:1 labeled:3 database:4 bottom:4 sunset:1 solved:1 ensures:1 decrease:3 russell:1 yk:2 complexity:1 dynamic:2 depend:1 deliver:1 completely:1 easily:2 represented:1 various:1 train:1 describe:1 hillel:1 whose:1 larger:1 cvpr:1 say:1 g1:2 unseen:2 yini:2 final:2 sequence:1 propose:3 relevant:1 roweis:2 description:2 inducing:1 qni:3 cluster:20 double:2 empty:2 produce:2 categorization:3 converges:2 object:4 derive:1 strong:1 c:1 implies:3 stochastic:3 virtual:2 rogers:1 clustered:2 sufficiently:1 exp:1 mapping:2 parr:1 substituting:1 estimation:2 proc:6 label:7 grouped:2 maxm:2 create:1 minimization:9 gaussian:27 always:1 rather:2 greenspan:1 og:8 derived:1 consistently:1 likelihood:10 indicates:1 sense:1 inference:2 entire:1 hidden:1 relation:1 koller:1 wij:5 pixel:3 arg:11 flexible:1 denoted:1 spatial:1 mutual:4 equal:3 once:1 field:1 having:1 vasconcelos:2 sampling:2 ng:1 unsupervised:12 gordon:1 composed:2 preserve:1 divergence:4 lerner:1 maxj:2 phase:1 n1:2 detection:2 saturated:1 mixture:37 perez:1 predefined:1 divide:1 re:1 desired:1 minimal:1 mk:1 instance:1 earlier:1 soft:1 assignment:2 cost:1 tishby:1 density:10 together:1 aaai:1 collapsing:8 worse:1 conf:2 inefficient:1 li:1 account:2 coding:1 summarized:1 coefficient:2 int:2 piece:1 performed:2 view:1 picked:1 closed:3 xing:1 recover:1 ni:6 variance:1 yield:4 handwritten:4 bayesian:1 iid:1 cardie:1 straight:1 history:1 minm:1 datapoint:1 minj:1 definition:1 pp:1 involved:1 obvious:1 associated:1 proof:3 mi:3 ihler:1 sampled:3 iaai:1 xan:2 color:2 lim:1 knowledge:1 organized:1 formalize:1 segmentation:1 supervised:6 improved:1 wei:2 arranged:1 evaluated:1 furthermore:1 xa:2 correlation:1 hand:1 propagation:2 quality:1 resemblance:1 grows:1 true:1 multiplier:1 biswas:1 analytically:2 equality:1 hence:5 alternating:2 assigned:2 leibler:1 semantic:1 noted:1 coincides:1 criterion:3 demonstrate:1 fj:9 image:16 recently:2 fi:25 common:3 corel:1 exponentially:1 extend:2 discussed:1 pm:3 similarly:1 particle:1 operating:1 gj:38 base:1 closest:1 recent:1 showed:1 optimizes:1 shalom:1 inequality:2 meta:1 binary:1 affiliation:1 fault:2 yi:1 preserving:3 minimum:4 purity:4 monotonically:1 artech:1 semi:6 full:1 cross:1 retrieval:1 divided:1 coded:1 variant:1 vision:1 mog:23 metric:1 iteration:6 represent:2 histogram:4 achieved:1 background:1 want:4 grow:1 sudderth:1 limn:1 modality:1 wilsky:1 unlike:2 n1i:1 jordan:1 split:1 xj:4 timesteps:1 reduce:2 bottleneck:4 motivated:2 expression:3 cause:1 locally:2 ten:2 induces:1 category:14 reduced:15 generate:2 percentage:2 diagnosis:1 discrete:3 shall:2 shental:2 group:3 falling:1 drawn:2 yit:5 verified:2 utilize:1 letter:1 powerful:1 decision:1 simplification:1 quadratic:1 nni:1 
annual:1 constraint:9 precisely:2 scene:1 flat:4 software:1 argument:1 min:9 performing:2 department:1 according:9 belonging:2 hertz:1 smaller:2 em:13 sam:1 intuitively:1 iccv:1 wagstaff:1 computationally:1 ln:3 equation:3 previously:1 end:2 available:1 gaussians:13 endowed:1 apply:4 hierarchical:7 enforce:1 mogs:4 original:16 top:4 clustering:40 maintaining:1 already:1 parametric:4 bialek:1 distance:14 enforcing:1 assuming:1 index:1 minimizing:1 setup:1 negative:1 affiliated:1 refit:2 perform:1 finite:3 situation:3 communication:1 y1:2 kl:14 learned:4 beyond:1 bar:2 dynamical:1 pattern:1 belief:3 suitable:3 natural:3 buhmann:1 representing:1 extract:1 prior:2 asymptotic:1 law:2 relative:1 expect:1 filtering:1 principle:6 pi:1 course:1 surprisingly:1 side:4 taking:2 qn:1 forward:1 collection:1 author:1 compact:1 kullback:1 doucet:1 conclude:1 assumed:1 thep:1 alternatively:2 grayscale:1 maxg:1 iterative:5 continuous:1 table:4 learn:7 ignoring:1 investigated:1 necessarily:1 cl:1 constructing:1 pk:6 main:2 representative:1 position:1 pereira:1 explicit:2 wish:1 house:1 ib:6 learns:1 theorem:5 showing:1 jensen:2 grouping:7 intractable:1 exists:1 entropy:1 xjn:5 lagrange:1 tracking:2 minimizer:1 satisfies:1 extracted:1 scenery:1 goal:3 viewed:1 content:1 hard:1 reducing:1 lemma:7 experimental:1 selectively:1 puzicha:1 evaluate:4 avoiding:1 correlated:1 |
1,745 | 2,586 | Variational minimax estimation of discrete
distributions under KL loss
Liam Paninski
Gatsby Computational Neuroscience Unit
University College London
[email protected]
http://www.gatsby.ucl.ac.uk/~liam
Abstract
We develop a family of upper and lower bounds on the worst-case expected KL loss for estimating a discrete distribution on a finite number m
of points, given N i.i.d. samples. Our upper bounds are approximation-theoretic, similar to recent bounds for estimating discrete entropy; the
lower bounds are Bayesian, based on averages of the KL loss under
Dirichlet distributions. The upper bounds are convex in their parameters
and thus can be minimized by descent methods to provide estimators with
low worst-case error; the lower bounds are indexed by a one-dimensional
parameter and are thus easily maximized. Asymptotic analysis of the
bounds demonstrates the uniform KL-consistency of a wide class of estimators as c = N/m → ∞ (no matter how slowly), and shows that
no estimator is consistent for c bounded (in contrast to entropy estimation). Moreover, the bounds are asymptotically tight as c → 0 or ∞,
and are shown numerically to be tight within a factor of two for all c.
Finally, in the sparse-data limit c → 0, we find that the Dirichlet-Bayes
(add-constant) estimator with parameter scaling like −c log(c) optimizes
both the upper and lower bounds, suggesting an optimal choice of the
"add-constant" parameter in this regime.
Introduction
The estimation of discrete distributions given finite data ("histogram smoothing") is a
canonical problem in statistics and is of fundamental importance in applications to language
modeling and informatics (1-3). In particular, estimation of discrete
distributions under Kullback-Leibler (KL) loss is of basic interest in the coding community, in the context of two-step universal codes (4, 5). The problem has received significant
attention from a variety of statistical viewpoints (see, e.g., (6) and references therein); in
this work, we will focus on the "minimax" approach, that is, on developing estimators
which work well even in the worst case, with the performance of an estimator measured by
the average KL loss. The recent work of (7) and (8) has answered many of the important
asymptotic questions in the heavily-sampled limit, where the number of data samples, N,
is much larger than the number of support points, m, of the unknown distribution; in particular, the optimal (minimax) error rate has been identified in closed form in the case that
m is fixed and N → ∞, and a simple estimator that asymptotically achieves this optimum
has been described. Our goal here is to analyze further the opposite case, when N/m is
bounded or even small (the sparse data case). It will turn out that the estimators which are
asymptotically optimal as N/m → ∞ are far from optimal in this sparse data case, which
may be considered more important for applications to modeling of large dictionaries.
Much of our approach is influenced by the similarities to the entropy estimation problem
(9-11), where the sparse data regime is also important for applications and of independent
mathematical interest: how do we decide how much probability to assign to bins for which
no samples, or very few samples, are observed? We will emphasize the similarities (and
important differences) between these two problems throughout.
Upper bounds
The basic idea is to find a simple upper bound on the worst-case expected loss, and then to
minimize this upper bound over some tractable class of possible estimators; the resulting
optimized estimator will then be guaranteed to possess good worst-case properties. Clearly
we want this upper bound to be as tight as possible, and the space of allowed estimators
to be as large as possible, while still allowing easy minimization. The approach taken here
is to develop bounds which are convex in the estimator, and to allow the estimators to
range over a large convex space; this implies that the minimization problem is tractable by
descent methods, since no non-global local minima exist.
We begin by defining the class of estimators we will be minimizing over: estimators of the form

$$\hat{p}_i = \frac{g(n_i)}{\sum_{i=1}^m g(n_i)},$$

with n_i defined as the number of samples observed in bin i and the constants g_j ≡ g(j)
taking values in the (N + 1)-dimensional convex space g_j ≥ 0; note that normalization
of the estimated distribution is automatically enforced. The "add-constant" estimators,
g_j = (j + α)/(N + mα), α > 0, are an important special case (7).
After some rearrangement, the expected KL loss for these estimators satisfies

$$E_{\vec p}\,(L(\vec p, \hat p)) = E_{\vec p}\left(\sum_{i=1}^m p_i \log \frac{p_i}{\hat p_i}\right)$$
$$= \sum_i \left[ -H(p_i) + \sum_{j=0}^N (-\log g_j)\, p_i\, B_{N,j}(p_i) \right] + E_{\vec p} \log\left(\sum_{k=1}^m g(n_k)\right)$$
$$\leq \sum_i \left[ -H(p_i) + \sum_j (-\log g_j)\, p_i\, B_{N,j}(p_i) \right] + E_{\vec p}\left(-1 + \sum_k g(n_k)\right)$$
$$= \sum_i f(p_i);$$

we have abbreviated by p the true underlying distribution, the entropy function

$$H(t) = -t \log t,$$

the binomial functions

$$B_{N,j}(t) = \binom{N}{j}\, t^j (1-t)^{N-j},$$

and

$$f(t) = -H(t) - t + \sum_j (g_j - t \log g_j)\, B_{N,j}(t).$$

Equality holds iff $\sum_k g(n_k)$ is constant almost surely (as is the case, e.g., for any add-constant estimator).
We have two distinct simple bounds on the above: first, the obvious

$$\sum_{i=1}^{m} f(p_i) \leq m \max_{0 \leq t \leq 1} f(t),$$

which generalizes the bound considered in (7) (where a similar bound was derived asymptotically as N → ∞ for m fixed, and applied only to the add-constant estimators), or

$$\sum_{i} f(p_i) \leq m \max_{0 \leq t \leq 1/m} f(t) + \max_{1/m \leq t \leq 1} \frac{f(t)}{t},$$

which follows easily from $\sum_i p_i = 1$; see (11) for a proof. The above maxima are always
achieved, by the compactness of the intervals and the continuity of the binomial and entropy
functions. Again, the key point is that these bounds are uniform over all possible underlying
p (that is, they bound the worst-case error).
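To make these bounds concrete, here is a small numerical sketch (our illustration, not code from the paper) that evaluates f(t) on a grid and computes both upper bounds for a given weight vector g; the grid resolution and the add-1/2 example weights are arbitrary choices.

```python
import numpy as np
from scipy.stats import binom

def f_curve(g, N, t):
    """f(t) = -H(t) - t + sum_j (g_j - t*log g_j) * B_{N,j}(t)."""
    j = np.arange(N + 1)
    B = binom.pmf(j[None, :], N, t[:, None])  # B_{N,j}(t), shape (len(t), N+1)
    H = -t * np.log(t)                        # entropy function
    return -H - t + B @ g - t * (B @ np.log(g))

def upper_bounds(g, N, m, grid=10000):
    t = np.linspace(1e-8, 1.0, grid)          # coarse grid; a sketch only
    f = f_curve(g, N, t)
    bound1 = m * f.max()
    lo, hi = t <= 1.0 / m, t >= 1.0 / m
    bound2 = m * f[lo].max() + (f[hi] / t[hi]).max()
    return bound1, bound2

# Example: the add-1/2 estimator, g_j = (j + 0.5) / (N + 0.5 * m).
N, m = 100, 1000
g = (np.arange(N + 1) + 0.5) / (N + 0.5 * m)
print(upper_bounds(g, N, m))
```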
Why two bounds? The first is nearly tight for N >> m (it is actually asymptotically
possible to replace m with m − 1 in this limit, due to the fact that the p_i must sum to one;
see (7, 8)), but grows linearly with m and thus cannot be tight for m comparable to or
larger than N. In particular, the optimizer doesn't depend on m, only N (and hence the
bound can't help but behave linearly in m). The second bound is much more useful (and,
as we show below, tight) in the data-sparse regime N << m.
The resulting minimization problems have a polynomial approximation flavor: we are trying to find an optimal set of weights gj such that the sum in the definition of f (t) (a
polynomial in t) will be as close to H(t) + t as possible. In this sense our approach is
nearly identical to that recently followed for bounding the bias in the entropy estimation
case (11, 12). There are three key differences, however: the term penalizing the variance
in the entropy case is missing here, the approximation only has to be good from above, not
from below as well (both making the problem easier), and the approximation is nonlinear,
instead of linear, in gj (making the problem harder). Indeed, we will see below that the entropy estimation problem is qualitatively easier than the estimation of the full distribution,
despite the entropic form of the KL loss.
Smooth minimization algorithm
In the next subsections, we develop methods for minimizing these bounds as a function of
g_j (that is, for choosing estimators with good worst-case properties). The first key point is
that the bounds involve maxima over a collection of convex functions in g_j, and hence the
bounds are convex in g_j; since the coefficients g_j take values in a convex set, no non-global
local minima exist, and the global minimum can be found by simple descent procedures.

One complicating factor is that the bounds are nondifferentiable in g_j: while methods
for direct minimization of this type of L-infinity error exist (13), they require that we track the
location in t of the maximal error; since this argmax can jump discontinuously as a function
of g_j, this interior maximization loop can be time-consuming. A more efficient solution
is given by approximating this nondifferentiable objective function by smooth functions
which retain the convexity of the original objective. We employ a Laplace approximation
(albeit in a different direction than usual): use the fact that

$$\max_{t \in A} h(t) = \lim_{q \to \infty} \frac{1}{q} \log \int_{t \in A} e^{q h(t)}\, dt$$

for continuous h(t) and compact A; thus, letting h(t) = f(t), we can minimize

$$U_q(\{g_j\}) \equiv \int_0^1 e^{q f(t)}\, dt,$$

or

$$V_q(\{g_j\}) \equiv \log\left( \int_0^{1/m} e^{q m f(t)}\, dt \right) + \log\left( \int_{1/m}^{1} e^{q f(t)/t}\, dt \right),$$
for q increasing; these new objective functions are smooth, with easily-computable gradients, and are still convex, since f (t) is convex in gj , convex functions are preserved under
convex, increasing maps (i.e., the exponential), and sums of convex functions are convex.
(In fact, since Uq is strictly convex in g for any q, the minima are unique, which to our
knowledge is not necessarily the case for the original minimax problem.) It is easy to show
that any limit point of the sequence of minimizers of the above problems will minimize
the original problem; applying conjugate gradient descent for each q, with the previous
minimizer as the seed for the minimization in the next largest q, worked well in practice.
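A sketch of this descent procedure (ours; the authors report conjugate gradient, and the step schedule and dimensions below are illustrative choices, not their settings): we minimize the smoothed objective U_q in log-space to enforce g_j > 0, annealing q upward and warm-starting each solve from the previous one.

```python
import numpy as np
from scipy.stats import binom
from scipy.optimize import minimize

def make_Uq(N, q, grid=2000):
    t = np.linspace(1e-6, 1.0, grid)
    dt = t[1] - t[0]
    j = np.arange(N + 1)
    B = binom.pmf(j[None, :], N, t[:, None])           # B_{N,j}(t)
    base = t * np.log(t) - t                           # -H(t) - t
    def Uq(log_g):                                     # parametrize g = exp(log_g) > 0
        g = np.exp(log_g)
        f = base + B @ g - t * (B @ log_g)             # log g is just log_g here
        return np.exp(q * f).sum() * dt                # Riemann sum for the integral
    return Uq

N = 50
log_g = np.log((np.arange(N + 1) + 0.5) / (N + 25.0)) # add-1/2 warm start
for q in [1.0, 10.0, 100.0]:                           # anneal the smoothing parameter
    log_g = minimize(make_Uq(N, q), log_g, method="CG").x
g_opt = np.exp(log_g)
```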
Initialization; connection to Laplace estimator
It is now useful to look for suitable starting points for the minimization. For example, for
the first bound, approximate the maximum by an integral, that is, find g_j to minimize

$$m \int_0^1 dt \left[ -H(t) - t + \sum_j (g_j - t \log g_j)\, B_{N,j}(t) \right].$$

(Note that this can be thought of as the limit of the above U_q minimization problem as q →
0, as can be seen by expanding the exponential.) The g_j that minimizes this approximation
to the upper bound is trivially derived as

$$g_j = \frac{\int_0^1 t\, B_{N,j}(t)\, dt}{\int_0^1 B_{N,j}(t)\, dt} = \frac{\beta(j+2,\, N-j+1)}{\beta(j+1,\, N-j+1)} = \frac{j+1}{N+2},$$

with $\beta(a, b) = \int_0^1 t^{a-1} (1-t)^{b-1}\, dt$ defined as usual. The resulting estimator agrees
exactly with "Laplace's estimator," the add-α estimator with α = 1. Note, though, that to
derive this g_j, we completely ignore the first two terms (−H(t) − t) in the upper bound,
and the resulting estimator can therefore be expected to be suboptimal (in particular, the
g_j will be chosen too large, since −H(t) − t is strictly decreasing for t < 1). Indeed,
we find that add-α estimators with α < 1 provide a much better starting point for the
optimization, as expected given (7, 8). (Of course, for N/m large enough an asymptotically
optimal estimator is given by the perturbed add-constant estimator of (8), and none of this
numerical optimization is necessary.) In the limit as c = N/m → 0, we will see below that
a better initialization point is the add-α estimator with parameter α ≈ H(c) = −c log c.
Fixed-point algorithm
On examining the gradient of the above problems with respect to g_j, a fixed-point algorithm
may be derived. We have, for example, that

$$\frac{\partial U}{\partial g_j} = \int_0^1 dt \left(1 - \frac{t}{g_j}\right) e^{q f(t)}\, B_{N,j}(t);$$

thus, analogously to the q → 0 case above, a simple update is given by

$$g_j = \frac{\int_0^1 t\, e^{q f(t)}\, B_{N,j}(t)\, dt}{\int_0^1 e^{q f(t)}\, B_{N,j}(t)\, dt},$$

which effectively corresponds to taking the mean of the binomial function B_{N,j}, weighted
by the "importance" term e^{qf(t)}, which in turn is controlled by the proximity of t to the
maximum of f(t) for q large. While this is an attractive strategy, conjugate gradient
descent proved to be a more stable algorithm in our hands.
Lower bounds
Once we have found an estimator with good worst-case error, we want to compare its
performance to some well-defined optimum. To do this, we obtain lower bounds on the
worst-case performance of any estimator (not just the class of estimators we minimized over in the
last section). Once again, we will derive a family of bounds indexed by some parameter α,
and then optimize over α.

Our lower bounds are based on the well-known fact that, for any proper prior distribution,
the average (Bayesian) loss is less than or equal to the maximum (worst-case) loss. The
most convenient class of priors to use here are the Dirichlet priors. Thus we will compute
the average KL error under any Dirichlet distribution (interesting in its own right), then
maximize over the possible Dirichlet priors (that is, find the "least favorable" Dirichlet
prior) to obtain the tightest lower bound on the worst-case error; importantly, the resulting
bounds will be nonasymptotic (that is, valid for all N and m). This approach therefore
generalizes the asymptotic lower bound used in (7), who examined the KL loss under the
special case of the uniform Dirichlet prior. See also (4) for direct application of this idea
to bound the average code length, and (14), who derived a lower bound on the average KL
loss, again in the uniform Dirichlet case.
We compute the Bayes error as follows. First, it is well-known (e.g., (9, 14)) that the
KL-Bayes estimate of p given count data n (under any prior, not just the Dirichlet) is the
posterior mean (interestingly, the KL loss shares this property with the squared error); for
the Dirichlet prior with parameter vector α, this conditional mean has the particularly simple form

$$E_{\mathrm{Dir}(\vec\alpha \mid \vec n)}\, \vec p = \frac{\vec\alpha + \vec n}{\sum_i (\alpha_i + n_i)},$$

with $\mathrm{Dir}(\vec\alpha \mid \vec n)$ denoting the $\mathrm{Dir}(\vec\alpha)$ density conditioned on data $\vec n$. Second, it is straightforward to show (14) that the conditional average KL error, given this estimate, has an
appealing form: the entropy at the conditional mean minus the conditional mean entropy
(one can easily check the strict positivity of this average error via the concavity of the vector
entropy function $H(\vec p) = -\sum_i p_i \log p_i$). Thus we can write the average loss as

$$E_{\mathrm{Dir}(\vec\alpha)} H\!\left(\frac{\vec\alpha + \vec n}{N + \sum_i \alpha_i}\right) - E_{\mathrm{Dir}(\vec\alpha \mid \vec n)} H(\vec p) = \sum_i \left[ E_{\mathrm{Dir}(\vec\alpha)} H\!\left(\frac{\alpha_i + n_i}{\sum_i \alpha_i + n_i}\right) - E_{\mathrm{Dir}(\vec\alpha + \vec n)} H(p_i) \right],$$

where the inner averages over p are under the Dirichlet distribution and the outer averages
over n and n_i are under the corresponding Dirichlet-multinomial or Dirichlet-binomial
mixtures (i.e., multinomials whose parameter p is itself Dirichlet distributed); we have
used linearity of the expectation, $\sum_i n_i = N$, and $\mathrm{Dir}(\vec\alpha \mid \vec n) = \mathrm{Dir}(\vec\alpha + \vec n)$. Evaluating
the right-hand side of the above, in turn, requires the formula

$$-E_{\mathrm{Dir}(\vec\alpha)} H(p_i) = \frac{\alpha_i}{\sum_i \alpha_i} \left( \psi(\alpha_i + 1) - \psi\!\left(1 + \sum_i \alpha_i\right) \right),$$

with $\psi(t) = \frac{d}{dt} \log \Gamma(t)$; recall that $\psi(t+1) = \psi(t) + \frac{1}{t}$. All of the above may thus be
easily computed numerically for any N, m, and α; to simplify, however, we will restrict α
to be constant, $\vec\alpha = (\alpha, \alpha, \ldots, \alpha)$. This symmetrizes the
above formulae; we can replace
the outer sum with multiplication by m, and substitute $\sum_i \alpha_i = m\alpha$. Finally, abbreviating
K = N + mα, we have that the worst-case error is bounded below by:

$$\frac{m}{K} \sum_{j=0}^{N} p_{\alpha,m,N}(j)\, (j+\alpha) \left( -\log\frac{j+\alpha}{K} + \psi(j+\alpha) + \frac{1}{j+\alpha} - \psi(K) - \frac{1}{K} \right), \quad (1)$$

with $p_{\alpha,m,N}(j)$ the beta-binomial distribution

$$p_{\alpha,m,N}(j) = \binom{N}{j} \frac{\Gamma(m\alpha)\, \Gamma(j+\alpha)\, \Gamma(K - (j+\alpha))}{\Gamma(K)\, \Gamma(\alpha)\, \Gamma(m\alpha - \alpha)}.$$

This lower bound is valid for all N, m, and α, and can be optimized numerically in the
(scalar) parameter α in a straightforward manner.
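A sketch of this numerical evaluation (our code; the alpha grid is an illustrative choice): compute the beta-binomial weights in log-space for stability, evaluate (1), and scan over α for the least-favorable prior.

```python
import numpy as np
from scipy.special import gammaln, psi

def lower_bound(alpha, m, N):
    K = N + m * alpha
    j = np.arange(N + 1)
    # log beta-binomial weights p_{alpha,m,N}(j)
    logp = (gammaln(N + 1) - gammaln(j + 1) - gammaln(N - j + 1)
            + gammaln(m * alpha) + gammaln(j + alpha) + gammaln(K - (j + alpha))
            - gammaln(K) - gammaln(alpha) - gammaln(m * alpha - alpha))
    term = (j + alpha) * (-np.log((j + alpha) / K) + psi(j + alpha)
                          + 1.0 / (j + alpha) - psi(K) - 1.0 / K)
    return (m / K) * np.sum(np.exp(logp) * term)

def least_favorable_alpha(m, N, grid=np.logspace(-4, 1, 200)):
    vals = [lower_bound(a, m, N) for a in grid]
    i = int(np.argmax(vals))                   # maximize over the Dirichlet parameter
    return grid[i], vals[i]

print(least_favorable_alpha(m=1000, N=100))
```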
Asymptotic analysis
In this section, we aim to understand some of the implications of the rather complicated
expressions above, by analyzing them in some simplifying limits. Due to space constraints,
we can only sketch the proof of each of the following statements.
Proposition 1. Any add-α estimator, α > 0, is uniformly KL-consistent if N/m → ∞.

This is a simple generalization of a result of (7), who proved consistency for the special
case of m fixed and N → ∞; the main point here is that N/m is allowed to tend to infinity
arbitrarily slowly. The result follows on utilizing our first upper bound (the main difference
between our analysis and that of (7) is that our bound holds for all m, N, whereas (7)
focuses on the asymptotic case) and noting that $\max_{0 \leq t \leq 1} f(t) = O(1/N)$ for f(t) defined
by any add-constant estimator; hence our upper bound is uniformly O(m/N). To obtain
the O(1/N) bound, we plug in the add-constant g_j = (j + α)/N:

$$f(t) = \frac{\alpha}{N} + t \left( \log t - \sum_j \log\frac{j+\alpha}{N}\, B_{N,j}(t) \right).$$

For t fixed, an application of the delta method implies that the sum looks like $\log(t + \frac{\alpha}{N}) - \frac{1-t}{2Nt}$;
an expansion of the logarithm, in turn, implies that the right-hand side converges to
$\frac{1}{2N}(1-t)$, for any fixed α > 0. On a 1/N scale, on the other hand, we have

$$N f\!\left(\frac{t}{N}\right) = \alpha + t \left( \log t - \sum_j \log(j+\alpha)\, B_{N,j}\!\left(\frac{t}{N}\right) \right),$$

which can be uniformly bounded above. In fact, as demonstrated by (7), the binomial sum
on the right-hand side converges to the corresponding Poisson sum; interestingly, a similar
Poisson sum plays a key role in the analysis of the entropy estimation case in (12).
A converse follows easily from the lower bounds developed above:

Proposition 2. No estimator is uniformly KL-consistent if lim sup N/m < ∞.

Of course, it is intuitively clear that we need many more than m samples to estimate a
distribution on m bins; our contribution here is a quantitative asymptotic lower bound on
the error in the data-sparse regime. (A simpler but slightly weaker asymptotic bound may
be developed from the lower bound given in (14).)

We let N, m → ∞, N/m → c, 0 < c < ∞. The beta-binomial distribution has mean N/m
and converges to a non-degenerate limit, which we'll denote $p_{\alpha,c}$, in this regime. Using
Fatou's lemma and $\psi(t) = \log(t) - \frac{1}{2t} + O(t^{-2})$, t → ∞, we obtain the asymptotic
lower bound

$$\frac{1}{c+\alpha} \sum_{j=0}^{\infty} p_{\alpha,c}(j)\, (\alpha+j) \left( -\log(\alpha+j) + \psi(\alpha+j) + \frac{1}{\alpha+j} \right) > 0.$$

Also interestingly, it is easy to see that our lower bound behaves as $\frac{m-1}{2N}(1 + o(1))$ as
N/m → ∞ for any fixed positive α (since in this case $\sum_{j=0}^{k} p_{\alpha,m,N}(j) \to 0$ for any fixed
finite k). Thus, comparing to the upper bound on the minimax error in (8), we have the
somewhat surprising fact that:
[Figure 1 plots appear here: panel (a) shows the optimal α and its approximation as a function of c = N/m; panel (b) shows the numerical lower bound together with the j = 0 and (m−1)/2N approximations; panel (c) shows the ratio of upper to lower bounds for the least-favorable Bayes, Braess-Sauer, and optimized estimators, all on log axes in N/m.]
Figure 1: Illustration of bounds and asymptotic results. N = 100, m varying. a.
Numerically- and theoretically-obtained optimal (least-favorable) α, as a function of c =
N/m; note close agreement. b. Numerical lower bounds and theoretical approximations;
note the log-linear growth as c → 0. The j = 0 approximation is obtained by retaining
only the j = 0 term of the sum in the lower bound (1); this approximation turns out to
be sufficiently accurate in the c → 0 limit, while the (m − 1)/2N approximation is tight
as c → ∞. c. Ratio comparison of upper to lower bounds. Dashed curve is the ratio
obtained by plugging the asymptotically optimal estimator due to Braess-Sauer (8) into
our upper bound; solid-dotted curve the numerically least-favorable Dirichlet estimator; black
solid curve the optimized estimator. Note that curves for optimized and Braess-Sauer estimators are in constant proportion, since bounds are independent of m for c large enough.
Most importantly, note that optimized bounds are everywhere tight within a factor of 2, and
asymptotically tight as c → ∞ or c → 0.
Proposition 3. Any fixed-α Dirichlet prior is asymptotically least-favorable as N/m → ∞.

This generalizes Theorem 2 of (7) (and in fact, an alternate proof can be constructed on
close examination of Krichevsky's proof of that result).

Finally, we examine the optimizers of the bounds in the data-sparse limit, c = N/m → 0.

Proposition 4. The least-favorable Dirichlet parameter is given by H(c) as c → 0; the
corresponding Bayes estimator also asymptotically minimizes the upper bound (and hence
the bounds are asymptotically tight in this limit). The maximal and average errors grow as
−log(c)(1 + o(1)), c → 0.

This is our most important asymptotic result. It suggests a simple and interesting rule of
thumb for estimating distributions in this data-sparse limit: use the add-α estimator with
α = H(c). When the data are very sparse (c sufficiently small) this estimator is optimal;
see Fig. 1 for an illustration. The proof, which is longer than those of the above results but
still fairly straightforward, has been omitted due to space constraints.
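As a sketch of this rule of thumb (our code; the counts are a toy example, not data from the paper):

```python
import numpy as np

def add_constant_estimate(counts, alpha):
    counts = np.asarray(counts, dtype=float)
    return (counts + alpha) / (counts.sum() + alpha * counts.size)

counts = np.zeros(10000)
counts[:50] = 2                          # toy sparse data: N = 100 samples, m = 10000 bins
c = counts.sum() / counts.size           # c = N/m = 0.01
alpha = -c * np.log(c)                   # alpha = H(c), about 0.046 here
p_hat = add_constant_estimate(counts, alpha)
```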
Discussion
We have omitted a detailed discussion of the form of the estimators which numerically
minimize the upper bounds developed here; these estimators were empirically found to
be perturbed add-constant estimators, with gj growing linearly for large j but perturbed
downward in the approximate range j < 10. Interestingly, in the heavily-sampled limit
N >> m, the minimizing estimator provided by (8) again turns out to be a perturbed
add-constant estimator. Further details will be provided elsewhere.
We note an interesting connection to the results of (9), who find that 1/m scaling of the
add-constant parameter α is empirically optimal for an entropy estimation application
with large m. This 1/m scaling bears some resemblance to the optimal H(c) scaling that
we find here, at least on a logarithmic scale (Fig. 1a); however, it is easy to see that the extra
−log(c) term included here is useful. As argued in (3), it is a good idea, in the data-sparse
limit N << m, to assign substantial probability mass to bins which have not seen any data
samples. Since the total probability assigned to these bins by any add-α estimator scales in
this limit as P(unseen) = mα/(N + mα), it is clear that the choice α ∝ 1/m decays too
quickly.

Finally, we note an important direction for future research: the upper bounds developed
here turn out to be least tight in the range N ≈ m, when the optimum in the bound occurs
near t = 1/m; in this case, our bounds can be loose by roughly a factor of two (exactly
the degree of looseness we found in Fig. 1c). Thus it would be quite worthwhile to explore
upper bounds which are tight in this N ≈ m range.
Acknowledgements: We thank Z. Ghahramani and D. Mackay for helpful conversations;
LP is supported by an International Research Fellowship from the Royal Society.
References
1. D. Mackay, L. Peto, Natural Language Engineering 1, 289 (1995).
2. N. Friedman, Y. Singer, NIPS (1998).
3. A. Orlitsky, N. Santhanam, J. Zhang, Science 302, 427 (2003).
4. T. Cover, IEEE Transactions on Information Theory 18, 216 (1972).
5. R. Krichevsky, V. Trofimov, IEEE Transactions on Information Theory 27, 199 (1981).
6. D. Braess, H. Dette, Sankhya 66, 707 (2004).
7. R. Krichevsky, IEEE Transactions on Information Theory 44, 296 (1998).
8. D. Braess, T. Sauer, Journal of Approximation Theory 128, 187 (2004).
9. T. Schurmann, P. Grassberger, Chaos 6, 414 (1996).
10. I. Nemenman, F. Shafee, W. Bialek, NIPS 14 (2002).
11. L. Paninski, Neural Computation 15, 1191 (2003).
12. L. Paninski, IEEE Transactions on Information Theory 50, 2200 (2004).
13. G. Watson, Approximation theory and numerical methods (Wiley, Boston, 1980).
14. D. Braess, J. Forster, T. Sauer, H. Simon, Algorithmic Learning Theory 13, 380 (2002).
1,746 | 2,587 | Integrating Topics and Syntax
Thomas L. Griffiths
[email protected]
Massachusetts Institute of Technology
Cambridge, MA 02139
Mark Steyvers
[email protected]
University of California, Irvine
Irvine, CA 92614
David M. Blei
[email protected]
University of California, Berkeley
Berkeley, CA 94720
Joshua B. Tenenbaum
[email protected]
Massachusetts Institute of Technology
Cambridge, MA 02139
Abstract
Statistical approaches to language learning typically focus on either
short-range syntactic dependencies or long-range semantic dependencies
between words. We present a generative model that uses both kinds of
dependencies, and can be used to simultaneously find syntactic classes
and semantic topics despite having no representation of syntax or semantics beyond statistical dependency. This model is competitive on tasks
like part-of-speech tagging and document classification with models that
exclusively use short- and long-range dependencies respectively.
1
Introduction
A word can appear in a sentence for two reasons: because it serves a syntactic function, or
because it provides semantic content. Words that play different roles are treated differently
in human language processing: function and content words produce different patterns of
brain activity [1], and have different developmental trends [2]. So, how might a language
learner discover the syntactic and semantic classes of words? Cognitive scientists have
shown that unsupervised statistical methods can be used to identify syntactic classes [3]
and to extract a representation of semantic content [4], but none of these methods captures
the interaction between function and content words, or even recognizes that these roles
are distinct. In this paper, we explore how statistical learning, with no prior knowledge of
either syntax or semantics, can discover the difference between function and content words
and simultaneously organize words into syntactic classes and semantic topics.
Our approach relies on the different kinds of dependencies between words produced by
syntactic and semantic constraints. Syntactic constraints result in relatively short-range dependencies, spanning several words within the limits of a sentence. Semantic constraints
result in long-range dependencies: different sentences in the same document are likely to
have similar content, and use similar words. We present a model that can capture the interaction between short- and long-range dependencies. This model is a generative model for
text in which a hidden Markov model (HMM) determines when to emit a word from a topic
model. The different capacities of the two components of the model result in a factorization
of a sentence into function words, handled by the HMM, and content words, handled by
the topic model. Each component divides words into finer groups according to a different
criterion: the function words are divided into syntactic classes, and the content words are
divided into semantic topics. This model can be used to extract clean syntactic and semantic classes and to identify the role that words play in a document. It is also competitive in
quantitative tasks, such as part-of-speech tagging and document classification, with models
specialized to detect short- and long-range dependencies respectively.
The plan of the paper is as follows. First, we introduce the approach, considering the
general question of how syntactic and semantic generative models might be combined,
and arguing that a composite model is necessary to capture the different roles that words
can play in a document. We then define a generative model of this form, and describe
a Markov chain Monte Carlo algorithm for inference in this model. Finally, we present
results illustrating the quality of the recovered syntactic classes and semantic topics.
2
Combining syntactic and semantic generative models
A probabilistic generative model specifies a simple stochastic procedure by which data
might be generated, usually making reference to unobserved random variables that express
latent structure. Once defined, this procedure can be inverted using statistical inference,
computing distributions over latent variables conditioned on a dataset. Such an approach is
appropriate for modeling language, where words are generated from the latent structure of
the speaker?s intentions, and is widely used in statistical natural language processing [5].
Probabilistic models of language are typically developed to capture either short-range or
long-range dependencies between words. HMMs and probabilistic context-free grammars [5] generate documents purely based on syntactic relations among unobserved word
classes, while ?bag-of-words? models like naive Bayes or topic models [6] generate documents based on semantic correlations between words, independent of word order. By
considering only one of the factors influencing the words that appear in documents, these
models assume that all words should be assessed on a single criterion: the posterior distribution for an HMM will group nouns together, as they play the same syntactic role even
though they vary across contexts, and the posterior distribution for a topic model will assign
determiners to topics, even though they bear little semantic content.
A major advantage of generative models is modularity. A generative model for text specifies a probability distribution over words in terms of other probability distributions over
words, and different models are thus easily combined. We can produce a model that expresses both the short- and long-range dependencies of words by combining two models
that are each sensitive to one kind of dependency. However, the form of combination must
be chosen carefully. In a mixture of syntactic and semantic models, each word would exhibit either short-range or long-range dependencies, while in a product of models (e.g. [7]),
each word would exhibit both short-range and long-range dependencies. Consideration of
the structure of language reveals that neither of these models is appropriate. In fact, only
a subset of words ? the content words ? exhibit long-range semantic dependencies, while
all words obey short-range syntactic dependencies. This asymmetry can be captured in a
composite model, where we replace one of the probability distributions over words used in
the syntactic model with the semantic model. This allows the syntactic model to choose
when to emit a content word, and the semantic model to choose which word to emit.
2.1
A composite model
We will explore a simple composite model, in which the syntactic component is an HMM
and the semantic component is a topic model. The graphical model for this composite is
shown in Figure 1(a). The model is defined in terms of three sets of variables: a sequence
of words w = {w_1, . . . , w_n}, with each w_i being one of W words, a sequence of topic
assignments z = {z_1, . . . , z_n}, with each z_i being one of T topics, and a sequence of
classes c = {c_1, . . . , c_n}, with each c_i being one of C classes. One class, say c_i = 1, is
designated the "semantic" class. The zth topic is associated with a distribution over words
φ^(z), each class c ≠ 1 is associated with a distribution over words φ^(c), each document
d has a distribution over topics θ^(d), and transitions between classes c_{i−1} and c_i follow a
distribution π^(c_{i−1}). A document is generated via the following procedure (a code sketch is given below):

1. Sample θ^(d) from a Dirichlet(α) prior
2. For each word w_i in document d
   (a) Draw z_i from θ^(d)
   (b) Draw c_i from π^(c_{i−1})
   (c) If c_i = 1, then draw w_i from φ^(z_i), else draw w_i from φ^(c_i)

[Figure 1 appears here: panel (a) shows the graphical model, with topic assignments z_i and class assignments c_i generating each word w_i; panel (b) shows a three-class HMM whose semantic class contains three topics (network/neural, image/object, and kernel/svm words), two syntactic classes (prepositions such as "in, with, for, on" and participles such as "used, trained, obtained, described"), transition probabilities on the arrows, and generated phrases such as "network used for images" and "image obtained with kernel".]
Figure 1: The composite model. (a) Graphical model. (b) Generating phrases.
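The following sketch (our code, not the authors') implements this generative procedure for small illustrative dimensions, with class 0 playing the role of the semantic class; all sizes and hyperparameter values are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
W, T, C, n = 50, 3, 4, 20                        # vocabulary, topics, classes, words
alpha = 0.1
phi_topic = rng.dirichlet(np.ones(W), size=T)    # phi^(z): topic-word distributions
phi_class = rng.dirichlet(np.ones(W), size=C)    # phi^(c): class-word (row 0 unused)
pi = rng.dirichlet(np.ones(C), size=C)           # pi^(c): class transition matrix

def generate_document():
    theta = rng.dirichlet(alpha * np.ones(T))    # document's topic distribution theta^(d)
    words, c = [], 0
    for _ in range(n):
        z = rng.choice(T, p=theta)               # draw z_i from theta^(d)
        c = rng.choice(C, p=pi[c])               # draw c_i from pi^(c_{i-1})
        if c == 0:                               # semantic class: emit from topic z_i
            words.append(rng.choice(W, p=phi_topic[z]))
        else:                                    # syntactic class: emit from phi^(c_i)
            words.append(rng.choice(W, p=phi_class[c]))
    return words

doc = generate_document()
```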
Figure 1(b) provides an intuitive representation of how phrases are generated by the composite model. The figure shows a three class HMM. Two classes are simple multinomial
distributions over words. The third is a topic model, containing three topics. Transitions
between classes are shown with arrows, annotated with transition probabilities. The topics in the semantic class also have probabilities, used to choose a topic when the HMM
transitions to the semantic class. Phrases are generated by following a path through the
model, choosing a word from the distribution associated with each syntactic class, and a
topic followed by a word from the distribution associated with that topic for the semantic
class. Sentences with the same syntax but different content would be generated if the topic
distribution were different. The generative model thus acts like it is playing a game of
Madlibs: the semantic component provides a list of topical words (shown in black) which
are slotted into templates generated by the syntactic component (shown in gray).
2.2
Inference
The EM algorithm can be applied to the graphical model shown in Figure 1, treating the
document distributions θ, the topics and classes φ, and the transition probabilities π as
parameters. However, EM produces poor results with topic models, which have many parameters and many local maxima. Consequently, recent work has focused on approximate
inference algorithms [6, 8]. We will use Markov chain Monte Carlo (MCMC; see [9]) to
perform full Bayesian inference in this model, sampling from a posterior distribution over
assignments of words to classes and topics.

We assume that the document-specific distributions over topics, θ, are drawn from a
Dirichlet(α) distribution, the topic distributions φ^(z) are drawn from a Dirichlet(β) distribution, the rows of the transition matrix for the HMM are drawn from a Dirichlet(γ)
distribution, the class distributions φ^(c) are drawn from a Dirichlet(δ) distribution, and all
Dirichlet distributions are symmetric. We use Gibbs sampling to draw iteratively a topic
assignment z_i and class assignment c_i for each word w_i in the corpus (see [8, 9]).
Given the words w, the class assignments c, the other topic assignments z_{−i}, and the
hyperparameters, each z_i is drawn from:

$$P(z_i \mid z_{-i}, c, w) \propto P(z_i \mid z_{-i})\, P(w_i \mid z, c, w_{-i}) = \begin{cases} n_{z_i}^{(d_i)} + \alpha & c_i \neq 1 \\[6pt] \left(n_{z_i}^{(d_i)} + \alpha\right) \dfrac{n_{w_i}^{(z_i)} + \beta}{n_{\cdot}^{(z_i)} + W\beta} & c_i = 1 \end{cases}$$

where $n_{z_i}^{(d_i)}$ is the number of words in document $d_i$ assigned to topic $z_i$, $n_{w_i}^{(z_i)}$ is the number
of words assigned to topic $z_i$ that are the same as $w_i$, and all counts include only words for
which $c_i = 1$ and exclude case $i$. We have obtained these conditional distributions by using
the conjugacy of the Dirichlet and multinomial distributions to integrate out the parameters
θ, φ. Similarly conditioned on the other variables, each $c_i$ is drawn from:

$$P(c_i \mid c_{-i}, z, w) \propto P(w_i \mid c, z, w_{-i})\, P(c_i \mid c_{-i}) = \begin{cases} \dfrac{n_{w_i}^{(c_i)} + \delta}{n_{\cdot}^{(c_i)} + W\delta} \cdot \dfrac{\left(n_{c_i}^{(c_{i-1})} + \gamma\right)\left(n_{c_{i+1}}^{(c_i)} + I(c_{i-1}{=}c_i)\, I(c_i{=}c_{i+1}) + \gamma\right)}{n_{\cdot}^{(c_i)} + I(c_{i-1}{=}c_i) + C\gamma} & c_i \neq 1 \\[12pt] \dfrac{n_{w_i}^{(z_i)} + \beta}{n_{\cdot}^{(z_i)} + W\beta} \cdot \dfrac{\left(n_{c_i}^{(c_{i-1})} + \gamma\right)\left(n_{c_{i+1}}^{(c_i)} + I(c_{i-1}{=}c_i)\, I(c_i{=}c_{i+1}) + \gamma\right)}{n_{\cdot}^{(c_i)} + I(c_{i-1}{=}c_i) + C\gamma} & c_i = 1 \end{cases}$$

where $n_{w_i}^{(z_i)}$ is as before, $n_{w_i}^{(c_i)}$ is the number of words assigned to class $c_i$ that are the
same as $w_i$, excluding case $i$, and $n_{c_i}^{(c_{i-1})}$ is the number of transitions from class $c_{i-1}$
to class $c_i$, and all counts of transitions exclude transitions both to and from $c_i$. $I(\cdot)$ is an
indicator function, taking the value 1 when its argument is true, and 0 otherwise. Increasing
the order of the HMM introduces additional terms into $P(c_i \mid c_{-i})$, but does not otherwise
affect sampling.
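As an illustration, here is a sketch (ours) of the collapsed Gibbs draw for z_i; the draw for c_i follows the same pattern with the transition terms above included. The count arrays and bookkeeping conventions are our own choices, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_z(i, d, w, z, c, ndz, nzw, nz, alpha, beta):
    """ndz[d,t]: topic counts in doc d; nzw[t,v]: word counts per topic;
    nz[t]: total words per topic. Counts cover only positions with c == 1."""
    if c[i] == 1:
        # exclude case i from the counts before computing the conditional
        ndz[d, z[i]] -= 1; nzw[z[i], w[i]] -= 1; nz[z[i]] -= 1
        p = (ndz[d] + alpha) * (nzw[:, w[i]] + beta) / (nz + nzw.shape[1] * beta)
    else:
        # syntactic word: only the document-topic term matters
        p = ndz[d] + alpha
    znew = rng.choice(len(p), p=p / p.sum())
    if c[i] == 1:
        ndz[d, znew] += 1; nzw[znew, w[i]] += 1; nz[znew] += 1
    z[i] = znew
    return znew
```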
3
Results
We tested the models on the Brown corpus and a concatenation of the Brown and TASA
corpora. The Brown corpus [10] consists of D = 500 documents and n = 1,137,466 word
tokens, with part-of-speech tags for each token. The TASA corpus is an untagged collection
of educational materials consisting of D = 37,651 documents and n = 12,190,931 word
tokens. Words appearing in fewer than 5 documents were replaced with an asterisk, but
punctuation was included. The combined vocabulary was of size W = 37,202.
We dedicated one HMM class to sentence start/end markers {.,?,!}. In addition to running
the composite model with T = 200 and C = 20, we examined two special cases: T = 200,
C = 2, being a model where the only HMM classes are the start/end and semantic classes,
and thus equivalent to Latent Dirichlet Allocation (LDA; [6]); and T = 1, C = 20, being
an HMM in which the semantic class distribution does not vary across documents, and
simply has a different hyperparameter from the other classes. On the Brown corpus, we
ran samplers for LDA and 1st, 2nd, and 3rd order HMM and composite models, with three
chains of 4000 iterations each, taking samples at a lag of 100 iterations after a burn-in of
2000 iterations. On Brown+TASA, we ran a single chain for 4000 iterations for LDA and
the 3rd order HMM and composite models. We used a Gaussian Metropolis proposal to
sample the hyperparameters, taking 5 draws of each hyperparameter for each Gibbs sweep.
3.1
Syntactic classes and semantic topics
The two components of the model are sensitive to different kinds of dependency among
words. The HMM is sensitive to short-range dependencies that are constant across documents, and the topic model is sensitive to long-range dependencies that vary across documents. As a consequence, the HMM allocates words that vary across contexts to the semantic class, where they are differentiated into topics. The results of the algorithm, taken
from the 4000th iteration of a 3rd order composite model on Brown+TASA, are shown in
Figure 2. The model cleanly separates words that play syntactic and semantic roles, in
sharp contrast to the results of the LDA model, also shown in the figure, where all words
are forced into topics. The syntactic categories include prepositions, pronouns, past-tense
verbs, and punctuation. While one state of the HMM, shown in the eighth column of the
figure, emits common nouns, the majority of nouns are assigned to the semantic class.
The designation of words as syntactic or semantic depends upon the corpus. For comparison, we applied a 3rd order composite model with 100 topics and 50 classes to a set
[Figure 2 content appears here. Upper rows: LDA topics, each mixing function words ("the", "of", "and") with content words (blood/heart, trees/forest, farmers/land, government/state, film/lens, water/matter, story, alcohol, ball/game). Lower rows: composite-model semantic topics on the same themes, now free of function words, together with syntactic classes for determiners, prepositions, pronouns, adjectives, verbs (main, past-tense, and modal), common nouns, and punctuation.]
Figure 2: Upper: Topics extracted by the LDA model. Lower: Topics and classes from the
composite model. Each column represents a single topic/class, and words appear in order
of probability in that topic/class. Since some classes give almost all probability to only a
few words, a list is terminated when the words account for 90% of the probability mass.
of D = 1713 NIPS papers from volumes 0-12. We used the full text, from the Abstract
to the Acknowledgments or References section, excluding section headers. This resulted
in n = 4,312,614 word tokens. We replaced all words appearing in fewer than 3 papers with an asterisk, leading to W = 17,268 types. We used the same sampling scheme
as Brown+TASA. A selection of topics and classes from the 4000th iteration are shown
in Figure 3. Words that might convey semantic information in another setting, such as
"model", "algorithm", or "network", form part of the syntax of NIPS: the consistent use of
these words across documents leads them to be incorporated into the syntactic component.
3.2
Identifying function and content words
Identifying function and content words requires using information about both syntactic
class and semantic context. In a machine learning paper, the word "control" might be an
innocuous verb, or an important part of the content of a paper. Likewise, "graph" could
refer to a figure, or indicate content related to graph theory. Tagging classes might indicate
that "control" appears as a verb rather than a noun, but deciding that "graph" refers to a
figure requires using information about the content of the rest of the document.

The factorization of words between the HMM and LDA components provides a simple
means of assessing the role that a given word plays in a document: evaluating the posterior
probability of assignment to the LDA component. The results of using this procedure to
identify content words in sentences excerpted from NIPS papers are shown in Figure 4.
Probabilities were evaluated by averaging over assignments from all 20 samples, and take
into account the semantic context of the whole document. As a result of combining short- and long-range dependencies, the model is able to pick out the words in each sentence that
concern the content of the document.
[Figure 3 content appears here: semantic topics for image/object recognition, Gaussian mixtures and Bayesian estimation, reinforcement learning, membrane/synaptic biophysics, analog VLSI hardware, mixtures of experts, kernels/SVMs, and network training, alongside syntactic classes for prepositions, forms of "to be", discourse verbs ("see, show, note"), past participles ("used, trained, obtained"), generic nouns ("model, algorithm, system"), connectives ("however, also, then"), and single-letter variables.]
Figure 3: Topics and classes from the composite model on the NIPS corpus.
1. In contrast to this approach, we study here how the overall network activity can control single cell
parameters such as input resistance, as well as time and space constants, parameters that are crucial for
excitability and spariotemporal (sic) integration. / The integrated architecture in this paper combines
feed forward control and error feedback adaptive control using neural networks.

2. In other words, for our proof of convergence, we require the softassign algorithm to return a doubly
stochastic matrix as *sinkhorn theorem guarantees that it will instead of a matrix which is merely close
to being doubly stochastic based on some reasonable metric. / The aim is to construct a portfolio with
a maximal expected return for a given risk level and time horizon while simultaneously obeying
*institutional or *legally required constraints.

3. The left graph is the standard experiment the right from a training with # samples. / The graph G is
called the *guest graph, and H is called the host graph.
Figure 4: Function and content words in the NIPS corpus. Graylevel indicates posterior
probability of assignment to LDA component, with black being highest. The boxed word
appears as a function word and a content word in one element of each pair of sentences.
Asterisked words had low frequency, and were treated as a single word type by the model.
Selecting the words that have high probability of being assigned to syntactic HMM classes produces templates for writing NIPS papers, into
which content words can be inserted. For example, replacing the content words that the
model identifies in the second sentence with content words appropriate to the topic of the
present paper, we could write: The integrated architecture in this paper combines simple
probabilistic syntax and topic-based semantics using generative models.
3.3
Marginal probabilities
We assessed the marginal probability of the data under each model, P (w), using the harmonic mean of the likelihoods over the last 2000 iterations of sampling, a standard method
for evaluating Bayes factors via MCMC [11]. This probability takes into account the complexity of the models, as more complex models are penalized by integrating over a latent
space with larger regions of low probability. The results are shown in Figure 5. LDA outperforms the HMM on the Brown corpus, but the HMM out-performs LDA on the larger
Brown+TASA corpus. The composite model provided the best account of both corpora,
being able to use whichever kind of dependency information was most predictive. Using
a higher-order transition matrix for either the HMM or the composite model produced little improvement in marginal likelihood for the Brown corpus, but the 3rd order models
performed best on Brown+TASA.

[Figure 5 plots appear here: log marginal likelihood for the composite, LDA, and HMM models on each corpus, for 1st, 2nd, and 3rd order models.]
Figure 5: Log marginal probabilities of each corpus under different models. Labels on
horizontal axis indicate the order of the HMM.

[Figure 6 plots appear here: Adjusted Rand Index for the HMM and composite models on Brown and Brown+TASA, using all tags and the top 10 tags, and, for the 1000 most frequent words, a comparison of distributional clustering (DC), HMM, and composite.]
Figure 6: Part-of-speech tagging for HMM, composite, and distributional clustering (DC).
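A sketch of the harmonic mean estimator used above (our code; the sample values are made-up placeholders), computed stably with logsumexp: given per-sample log likelihoods log P(w | z^(s)) from the chain, log P(w) is approximated as log S minus the logsumexp of their negations.

```python
import numpy as np
from scipy.special import logsumexp

def harmonic_mean_log_ml(log_liks):
    log_liks = np.asarray(log_liks)
    S = len(log_liks)
    # P(w) ~ [ (1/S) * sum_s 1/P(w|z^(s)) ]^(-1), computed in log space
    return np.log(S) - logsumexp(-log_liks)

log_liks = -4.0e6 + 1e3 * np.random.randn(20)   # e.g., 20 post-burn-in samples
print(harmonic_mean_log_ml(log_liks))
```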
3.4
Part-of-speech tagging
Part-of-speech tagging, that is, identifying the syntactic class of a word, is a standard task in
computational linguistics. Most unsupervised tagging methods use a lexicon that identifies
the possible classes for different words. This simplifies the problem, as most words belong
to a single class. However, genuinely unsupervised recovery of parts-of-speech has been
used to assess statistical models of language learning, such as distributional clustering [3].
We assessed tagging performance on the Brown corpus, using two tagsets. One set consisted of all Brown tags, excluding those for sentence markers, leaving a total of 297 tags.
The other set collapsed these tags into ten high-level designations: adjective, adverb, conjunction, determiner, foreign, noun, preposition, pronoun, punctuation, and verb. We evaluated tagging performance using the Adjusted Rand Index [12] to measure the concordance
between the tags and the class assignments of the HMM and composite models in the
4000th iteration. The Adjusted Rand Index ranges from −1 to 1, with an expectation of 0.
Results are shown in Figure 6. Both models produced class assignments that were strongly
concordant with part-of-speech, although the HMM gave a slightly better match to the full
tagset, and the composite model gave a closer match to the top-level tags. This is partly because all words that vary strongly in frequency across contexts get assigned to the semantic
class in the composite model, so it misses some of the fine-grained distinctions expressed in
the full tagset. Both the HMM and the composite model performed better than the distributional clustering method described in [3], which was used to form the 1000 most frequent
words in Brown into 19 clusters. Figure 6 compares this clustering with the classes for
those words from the HMM and composite models trained on Brown.
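For reference, this concordance measure can be computed with scikit-learn's implementation of the Adjusted Rand Index; the labelings below are placeholders (our sketch, not the paper's data).

```python
from sklearn.metrics import adjusted_rand_score

gold_tags = ["noun", "verb", "noun", "det", "prep", "noun"]
model_classes = [3, 7, 3, 1, 2, 3]    # class assignments from one Gibbs sample
print(adjusted_rand_score(gold_tags, model_classes))
```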
3.5
Document classification
The 500 documents in the Brown corpus are classified into 15 groups, such as editorial journalism and romance fiction. We assessed the quality of the topics recovered by the LDA
and composite models by training a naive Bayes classifier on the topic vectors produced
by the two models. We computed classification accuracy using 10-fold cross validation for
the 4000th iteration from a single chain. The two models perform similarly. Baseline accuracy, choosing classes according to the prior, was 0.09. Trained on Brown, the LDA model
gave a mean accuracy of 0.51(0.07), where the number in parentheses is the standard error. The 1st, 2nd, and 3rd order composite models gave 0.45(0.07), 0.41(0.07), 0.42(0.08)
respectively. Trained on Brown+TASA, the LDA model gave 0.54(0.04), while the 1st,
2nd, and 3rd order composite models gave 0.48(0.06), 0.48(0.05), 0.46(0.08) respectively.
The slightly lower accuracy of the composite model may result from having fewer data
in which to find correlations: it only sees the words allocated to the semantic component,
which account for approximately 20% of the words in the corpus.
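A sketch of this evaluation pipeline (ours; the arrays below are random placeholders standing in for per-document topic vectors and category labels):

```python
import numpy as np
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 20, size=(500, 200))   # 500 docs x 200 topic counts
y = rng.integers(0, 15, size=500)          # 15 Brown document categories
scores = cross_val_score(MultinomialNB(), X, y, cv=10)
print(scores.mean())
```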
4
Conclusion
The composite model we have described captures the interaction between short- and long-range dependencies between words. As a consequence, the posterior distribution over the
latent variables in this model picks out syntactic classes and semantic topics and identifies
the role that words play in documents. The model is competitive in part-of-speech tagging and classification with models that specialize in short- and long-range dependencies
respectively. Clearly, such a model does not do justice to the depth of syntactic or semantic
structure, or their interaction. However, it illustrates how a sensitivity to different kinds of
statistical dependency might be sufficient for the first stages of language acquisition, discovering the syntactic and semantic building blocks that form the basis for learning more
sophisticated representations.
Acknowledgements. The TASA corpus appears courtesy of Tom Landauer and Touchstone Applied
Science Associates, and the NIPS corpus was provided by Sam Roweis. This work was supported by
the DARPA CALO program and NTT Communication Science Laboratories.
References
[1] H. J. Neville, D. L. Mills, and D. S. Lawson. Fractionating language: Different neural subsystems with different sensitive periods. Cerebral Cortex, 2:244-258, 1992.
[2] R. Brown. A first language. Harvard University Press, Cambridge, MA, 1973.
[3] M. Redington, N. Chater, and S. Finch. Distributional information: A powerful cue for acquiring syntactic categories. Cognitive Science, 22:425-469, 1998.
[4] T. K. Landauer and S. T. Dumais. A solution to Plato's problem: the Latent Semantic Analysis theory of acquisition, induction, and representation of knowledge. Psychological Review,
104:211-240, 1997.
[5] C. Manning and H. Schütze. Foundations of statistical natural language processing. MIT Press,
Cambridge, MA, 1999.
[6] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet Allocation. Journal of Machine
Learning Research, 3:993-1022, 2003.
[7] N. Coccaro and D. Jurafsky. Towards better integration of semantic predictors in statistical
language modeling. In Proceedings of ICSLP-98, volume 6, pages 2403-2406, 1998.
[8] T. L. Griffiths and M. Steyvers. Finding scientific topics. Proceedings of the National Academy
of Science, 101:5228-5235, 2004.
[9] W. R. Gilks, S. Richardson, and D. J. Spiegelhalter, editors. Markov Chain Monte Carlo in
Practice. Chapman and Hall, Suffolk, 1996.
[10] H. Kucera and W. N. Francis. Computational analysis of present-day American English. Brown
University Press, Providence, RI, 1967.
[11] R. E. Kass and A. E. Raftery. Bayes factors. Journal of the American Statistical Association,
90:773-795, 1995.
[12] L. Hubert and P. Arabie. Comparing partitions. Journal of Classification, 2:193-218, 1985.
Spike-Timing Dependent Plasticity and Mutual
Information Maximization for a Spiking Neuron
Model
Taro Toyoizumi†∗, Jean-Pascal Pfister‡
Kazuyuki Aihara†§, Wulfram Gerstner‡
† Department of Complexity Science and Engineering,
The University of Tokyo, 153-8505 Tokyo, Japan
‡ Ecole Polytechnique Fédérale de Lausanne (EPFL),
School of Computer and Communication Sciences and
Brain-Mind Institute, 1015 Lausanne, Switzerland
§ Graduate School of Information Science and Technology,
The University of Tokyo, 153-8505 Tokyo, Japan
[email protected], [email protected]
[email protected], [email protected]
Abstract
We derive an optimal learning rule in the sense of mutual information
maximization for a spiking neuron model. Under the assumption of
small fluctuations of the input, we find a spike-timing dependent plasticity (STDP) function which depends on the time course of excitatory
postsynaptic potentials (EPSPs) and the autocorrelation function of the
postsynaptic neuron. We show that the STDP function has both positive
and negative phases. The positive phase is related to the shape of the
EPSP while the negative phase is controlled by neuronal refractoriness.
1
Introduction
Spike-timing dependent plasticity (STDP) has been intensively studied during the last
decade both experimentally and theoretically (for reviews see [1, 2]). STDP is a variant
of Hebbian learning that is sensitive not only to the spatial but also to the temporal correlations between pre- and postsynaptic neurons. While the exact time course of the STDP
function varies between different types of neurons, the functional consequences of these
differences are largely unknown. One line of modeling research takes a given STDP rule
and analyzes the evolution of synaptic efficacies [3–5]. In this article, we take a different approach and start from first principles. More precisely, we ask what the optimal synaptic update rule is for maximizing the mutual information between pre- and postsynaptic neurons.

∗ Alternative address: ERATO Aihara Complexity Modeling Project, JST, 45-18 Oyama, Shibuya-ku, 151-0065 Tokyo, Japan
Previously, information-theoretic approaches to neural coding have been used to quantify the amount of information that a neuron or a neural network is able to encode or transmit [6–8]. In particular, algorithms based on the maximization of the mutual information between the output and the input of a network, also called the infomax principle [9], have been used to detect the principal (or independent) components of the input signal, or to reduce the redundancy [10–12]. Although it is a matter of discussion whether neurons simply "transmit" information as opposed to performing classification or task-specific processing [13], strategies based on information maximization provide a reasonable starting point to construct neuronal networks in an unsupervised, but principled manner.
Recently, using a rate neuron, Chechik applied information maximization to detect static
input patterns from the output signal, and derived the optimal temporal learning window;
the learning window has a positive part due to the effect of the postsynaptic potential and
has flat negative parts with a length determined by the memory span [14].
In this paper, however, we employ a stochastic spiking neuron model to study not only
the effect of postsynaptic potentials generated by synaptic input but also the effect of the
refractory period of the postsynaptic neuron on the shape of the optimal learning window.
We discuss the relation of mutual information and Fisher information for small input variance in Sec. 2. Optimization of the Fisher information by gradient ascent yields an optimal learning rule, as shown in Sec. 3.
2
Model assumptions
2.1
Neuron model
The model we are considering is a stochastic neuron with refractoriness. The instantaneous
firing rate ρ at time t depends on the membrane potential u(t) and refractoriness R(t):

$$\rho(t) = g(\beta u(t))\,R(t), \qquad (1)$$

where g(βu) = g_0 log_2[1 + e^{βu}] is a smoothed piecewise linear function with a scaling variable β and a constant g_0 = 85 Hz. The refractory variable is

$$R(t) = \frac{(t - \hat{t} - \tau_{\mathrm{abs}})^2}{\tau_{\mathrm{refr}}^2 + (t - \hat{t} - \tau_{\mathrm{abs}})^2}\,\Theta(t - \hat{t} - \tau_{\mathrm{abs}})$$

and depends on the time elapsed since the last firing time t̂, the absolute refractory period τ_abs = 3 ms, and the time constant of relative refractoriness τ_refr = 10 ms. The Heaviside step function Θ takes a value of 1 for positive arguments and zero otherwise.
The postsynaptic potential depends on the input spike trains of N presynaptic neurons. A
presynaptic spike of neuron i ∈ {1, 2, …, N} emitted at time t_i^f evokes a postsynaptic potential with time course ε(t − t_i^f). The total membrane potential is

$$u(t) = \sum_{i=1}^{N} w_i \sum_{f} \varepsilon(t - t_i^f) = \sum_{i=1}^{N} w_i \int \varepsilon(s)\, x_i(t - s)\, ds, \qquad (2)$$

where x_i(t) = Σ_f δ(t − t_i^f) denotes the spike train of the presynaptic neuron i. The above model is a special case of the spike response model with escape noise [2]. For vanishing refractoriness, τ_refr → 0 and τ_abs → 0, the above model reduces to an inhomogeneous Poisson process.
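To make the dynamics of Eqs. (1)–(2) concrete, here is a minimal simulation sketch. The time step, the Gaussian stand-in for the fluctuating membrane potential u(t), and all variable names are our own illustrative assumptions, not the authors' code.

```python
import numpy as np

dt, T = 1e-4, 1.0                  # 0.1 ms steps, 1 s of simulated time
g0, beta = 85.0, 0.1               # baseline rate [Hz] and input scaling
tau_abs, tau_refr = 3e-3, 10e-3    # refractory time constants [s]

def g(z):
    return g0 * np.log2(1.0 + np.exp(z))   # smoothed piecewise linear gain

def R(s):                                   # refractory variable of Eq. (1)
    s = s - tau_abs
    return (s > 0) * s**2 / (tau_refr**2 + s**2)

rng = np.random.default_rng(1)
u = 0.5 * rng.standard_normal(int(T / dt))  # stand-in for u(t), Eq. (2)
t_hat, spikes = -1.0, []                    # last spike set in the distant past
for k, uk in enumerate(u):
    t = k * dt
    rho = g(beta * uk) * R(t - t_hat)       # instantaneous rate, Eq. (1)
    if rng.random() < rho * dt:             # escape-noise spike generation
        spikes.append(t)
        t_hat = t
print(f"{len(spikes)} spikes in {T:.0f} s")
```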
For a given set of presynaptic spikes in an interval [0, T ], hence for a given time course of
membrane potential {u(t) | t ∈ [0, T]}, the model generates an output spike train

$$y(t) = \sum_{f} \delta(t - t^f) \qquad (3)$$

with firing times {t^f | f = 1, …, n} with a probability density

$$P(y|u) = \exp\left[\int_0^T \big(y(t)\,\log\rho(t) - \rho(t)\big)\, dt\right], \qquad (4)$$

where ρ(t) is given by Eq. (1), i.e., ρ(t) = g(βu(t)) R(t). Since the refractory variable R depends on the firing time t̂ of the previous output spike, we sometimes write ρ(t|t̂) instead of ρ(t) in order to make this dependence explicit. Equation (4) can then be re-expressed in terms of the survivor function S(t|t̂) = e^{−∫_{t̂}^{t} ρ(s|t̂) ds} and the interval distribution Q(t|t̂) = ρ(t|t̂) S(t|t̂) in a more transparent form:

$$P(y|u) = \left[\prod_{f=1}^{n} Q(t^f \mid t^{f-1})\right] S(T \mid t^n), \qquad (5)$$

where t^0 = 0 and n is the number of postsynaptic spikes in [0, T]. In words, the probability that a specific output spike train y occurs can be calculated from the interspike intervals Q(t^f|t^{f−1}) and the probability that the neuron 'survives' from the last spike at time t^n to time T without further firing.
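Eq. (4) also gives a direct way to score a recorded output train numerically. The sketch below is a discretized reading of Eq. (4) under the same illustrative parameter choices as in the previous snippet; the grid and names are our own assumptions.

```python
import numpy as np

g0, beta, tau_abs, tau_refr = 85.0, 0.1, 3e-3, 10e-3

def g(z):
    return g0 * np.log2(1.0 + np.exp(z))

def R(s):
    s = s - tau_abs
    return (s > 0) * s**2 / (tau_refr**2 + s**2)

def log_likelihood(u, spike_times, dt):
    """Discretized log P(y|u) of Eq. (4): log rho(t) summed over spike
    times, minus the integral of rho(t) over [0, T]."""
    spike_bins = {int(round(ts / dt)) for ts in spike_times}
    t_hat, ll = -1.0, 0.0                   # last spike in the distant past
    for k, uk in enumerate(u):
        t = k * dt
        rho = g(beta * uk) * R(t - t_hat)   # rho(t | t_hat), Eq. (1)
        if k in spike_bins:
            ll += np.log(max(rho, 1e-12))   # y(t) log rho(t) term
            t_hat = t
        ll -= rho * dt                      # -rho(t) dt term
    return ll
```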
2.2
Fisher information and mutual information
Let us consider input spike trains with stationary statistics. These input spike trains generate an input potential u(t) with an average value u_0 and standard deviation σ. Assuming a weak dependence of g on the membrane potential u, i.e., for small β, we expand g around g_0 = g(0) to obtain g(βu(t)) = g_0 + g_0' βu(t) + g_0'' [βu(t)]^2/2 + O(β^3), where g_0 is the value of g in the absence of input and the next terms describe the influence of the input. Here and in the following, all calculations will be done to order β^2.
In the limit of small β, the mutual information is given by [15]

$$I(Y;X) = \frac{\beta^2}{2} \int_0^T \! dt \int_0^T \! dt'\; \Gamma(t - t')\, J_0(t - t') + O(\beta^3), \qquad (6)$$

with the autocovariance function of the membrane potential

$$\Gamma(t - t') = \langle \delta u(t)\, \delta u(t') \rangle_X, \qquad (7)$$

with δu(t) = u(t) − u_0, and Fisher information

$$J_0(t - t') = -\left\langle \left. \frac{\partial^2 \log P(y|u)}{\partial \beta u(t)\; \partial \beta u(t')} \right|_{\beta=0} \right\rangle_{Y|\beta=0}, \qquad (8)$$

with ⟨·⟩_{Y|β=0} = ∫ · P(y|β=0) dy and ⟨·⟩_X = ∫ · P(x) dx. Note that the Fisher information (8) is to be evaluated at the constant g_0, i.e., at the value βu = 0, whereas the autocovariance in Eq. (7) is calculated with respect to the mean membrane potential u_0 = ⟨u(t)⟩_X, which is in general different from zero. The derivation of (6) is based on the assumption that the variability of the output signal is small and g(βu) does not deviate much from g_0, i.e., it corresponds to the regime of small signal-to-noise ratio. It is well known that the information capacity of the Gaussian channel is given by the log of the signal-to-noise ratio [16], and the mutual information is proportional to the signal-to-noise ratio when it is small. The relation between the Fisher information, the mutual information, and optimal tuning curves has previously been established in the regime of large signal-to-noise ratio [17].

We introduce the following notation: let ν_0 = ⟨y(t)⟩_{Y|β=0} = ⟨ρ(t)⟩_{Y|β=0} be the spontaneous firing rate in the absence of input, and let ν_0^{−1} ⟨y(t) y(t')⟩_{Y|β=0} = δ(t − t') + ν_0[1 + φ(t − t')] be the postsynaptic firing probability at time t given a postsynaptic spike at t', i.e., the autocorrelation function of Y. From the theory of stationary renewal processes [2],

$$\nu_0 = \left[ \int s\, Q_0(s)\, ds \right]^{-1},$$
$$\nu_0 [1 + \phi(s)] = Q_0(|s|) + \int Q_0(s')\, \nu_0 [1 + \phi(|s| - s')]\, \Theta(|s| - s')\, ds', \qquad (9)$$

where Q_0(s) = g_0 R(s) e^{−g_0[(s−τ_abs) − τ_refr arctan((s−τ_abs)/τ_refr)]} is the interval distribution for constant g = g_0. The interval distribution vanishes during the absolute refractory time τ_abs; cf. Fig. 1.
[Figure 1: (A) Interspike interval distribution Q_0(s) and (B) normalized autocorrelation function φ(s), each plotted against s [ms]. The circles show numerical results, the solid line the theory.]
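As a numerical check on the renewal quantities behind Figure 1, the interval distribution Q_0 and the spontaneous rate ν_0 of Eq. (9) can be evaluated on a grid; the grid resolution and cutoff below are our own choices.

```python
import numpy as np

g0, tau_abs, tau_refr, dt = 85.0, 3e-3, 10e-3, 1e-4
s = np.arange(0.0, 0.2, dt)                    # interval grid [s]

x = np.maximum(s - tau_abs, 0.0)
R = x**2 / (tau_refr**2 + x**2)                # refractory variable
hazard = g0 * (x - tau_refr * np.arctan(x / tau_refr))  # integral of g0*R
Q0 = g0 * R * np.exp(-hazard)                  # interval distribution

nu0 = 1.0 / (np.sum(s * Q0) * dt)              # first line of Eq. (9)
print(f"normalization {np.sum(Q0) * dt:.3f}, nu0 = {nu0:.1f} Hz")
```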
The Fisher information of (8) is calculated from (4) to be

$$J_0(t - t') = \delta(t - t') \left(\frac{g_0'}{g_0}\right)^{\!2} \langle \rho_0(t) \rangle_{Y|\beta=0} \qquad (10)$$

with the instantaneous firing rate ρ_0(t) = g_0 R(t). Hence the mutual information is

$$I(Y;X) = \frac{\beta^2}{2}\left(\frac{g_0'}{g_0}\right)^{\!2} \int_0^T dt\; \nu_0\, \sigma^2 \qquad (11)$$
$$\phantom{I(Y;X)} = \frac{\beta^2}{2}\left(\frac{g_0'}{g_0}\right)^{\!2} T\, \nu_0\, \sigma^2. \qquad (12)$$
For an interpretation of Eq. (11) we note that σ^2 = Γ(0) is the variance of the membrane potential and depends on the statistics of the presynaptic input, whereas ν_0 is the spontaneous firing rate, which characterizes the output of the postsynaptic neuron. Hence, Equation (11) contains both pre- and postsynaptic factors.
3
Results: Optimal spike-timing dependent learning rule
In the previous section we have calculated the mutual information between presynaptic
input spike trains and the output of the postsynaptic neuron under the assumption of small
fluctuations of g. The mutual information depends on parameters of the model neuron, in
particular the synaptic weights that characterize the efficacy of the connections between
pre- and postsynaptic neurons. In this section, we will optimize the mutual information
by changing the synaptic weights in an appropriate fashion. To do so we will proceed in
several steps.
First, based on gradient ascent we derive a batch learning rule of synaptic weights that
maximizes the mutual information. In a second step, we transform the batch rule into an
online rule that reduces to the batch version when averaged. Finally, in subsection 3.2, we
will see that the online learning rule shares properties with STDP, in particular a biphasic
dependence upon the relative timing of pre- and postsynaptic spikes.
3.1
Learning rule for spiking model neuron
In order to keep the analysis as simple as possible, we suppose that the input spike trains are independent Poisson trains, i.e., ⟨δx_i(t) δx_j(t')⟩_X = ν_i δ(t − t') δ_ij, where δx_i(t) = x_i(t) − ν_i with rate ν_i = ⟨x_i(t)⟩_X. Then we obtain the variance of the membrane potential

$$\sigma^2 = \langle [\delta u(t)]^2 \rangle_X = \bar{\varepsilon}_2 \sum_j w_j^2\, \nu_j \qquad (13)$$

with $\bar{\varepsilon}_2 = \int \varepsilon^2(s)\, ds$.
Applying gradient ascent to (11) with an appropriate learning rate α, we obtain the batch learning rule for the synaptic weights as

$$\Delta w_i = \alpha \frac{\partial I(Y;X)}{\partial w_i} = \alpha \frac{\beta^2}{2}\left(\frac{g_0'}{g_0}\right)^{\!2} \int_0^T dt\; \nu_0\, \frac{\partial \sigma^2}{\partial w_i}. \qquad (14)$$

The derivative of ν_0 with respect to w_i vanishes, since ν_0 is the spontaneous firing rate in the absence of input. We note that both ν_0 and σ^2 are defined by ensemble averages, as is typical for a 'batch' rule.
While there are many candidate online learning rules that give (14) on average, we are interested in rules that depend directly on neuronal spikes rather than mean rates. To proceed it is useful to write σ^2 = ⟨[δu(t)]^2⟩_X with δu(t) = Σ_i w_i δε_i(t), where δε_i(t) = ∫ ε(s) δx_i(t − s) ds. In this notation, one simple form of an online learning rule that depends on both the postsynaptic firing statistics and the presynaptic autocorrelation is

$$\frac{dw_i}{dt} = \alpha \beta^2 \left(\frac{g_0'}{g_0}\right)^{\!2} y(t)\, \delta\varepsilon_i(t)\, \delta u(t). \qquad (15)$$

Hence weights are updated with each postsynaptic spike with an amplitude proportional to an online estimate of the membrane potential variance, calculated as the product of δu and δε_i. Indeed, to order β^0, the input and the output spikes are independent; ⟨y(t) δε_i(t) δu(t)⟩_{Y,X} = ⟨y(t)⟩_{Y|β=0} ⟨δε_i(t) δu(t)⟩_X, and the average of (15) leads back to (14).
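A discrete-time sketch of the online rule (15) might look like the following. The exponential EPSP and the parameter values follow the choices made later in Sec. 3.2, but the discretization and the stand-in postsynaptic spike train (drawn at the spontaneous rate) are our own simplifications, not the authors' implementation.

```python
import numpy as np

dt, T = 1e-4, 10.0
N, tau_u, nu = 100, 10e-3, 40.0          # inputs, EPSP constant [s], rate [Hz]
alpha, beta = 0.1, 1.0
gp_over_g = 1.0 / (2.0 * np.log(2.0))    # g0'/g0 for g = g0*log2(1+e^z) at z=0

rng = np.random.default_rng(2)
w = np.full(N, 1.0 / (N * tau_u * nu))   # initial weights as in Sec. 3.2
trace = np.zeros(N)
decay = np.exp(-dt / tau_u)
for k in range(int(T / dt)):
    x_k = rng.random(N) < nu * dt        # independent Poisson input spikes
    trace = trace * decay + x_k          # sum_f exp(-(t - t_f)/tau_u)
    d_eps = trace - nu * tau_u           # delta-eps_i(t), mean subtracted
    du = w @ d_eps                       # delta-u(t)
    y_k = rng.random() < 85.0 * dt       # stand-in output spike (spontaneous rate)
    if y_k:
        w += alpha * beta**2 * gp_over_g**2 * d_eps * du   # Eq. (15)
```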
3.2
STDP function as a spike-pair effect
Application of the online learning rule (15) during a trial of duration T yields a total change of the synaptic efficacy which depends on all the presynaptic spikes via the factor δε_i; on the postsynaptic potential via the factor δu; and on the postsynaptic spike train y(t). In order to extract the spike-pair effect evoked by a given presynaptic spike at t_i^pre and a postsynaptic spike at t^post, we average over x and y given the pair of spikes. The spike-pair effect up to second order in β is therefore described as

$$\Delta w_i(t^{post} - t_i^{pre}) = \alpha\beta^2 \left(\frac{g_0'}{g_0}\right)^{\!2} \int_0^T dt\; \langle y(t) \rangle_{Y|t^{post},\beta=0}\; \langle \delta\varepsilon_i(t)\, \delta u(t) \rangle_{X|t_i^{pre}}, \qquad (16)$$

where ⟨·⟩_{Y|t^post,β=0} = ∫ dy · P(y|t^post, β=0) and ⟨·⟩_{X|t_i^pre} = ∫ dx · P(x|t_i^pre).
Note that the leading factor of Eq. (16) is already of order β^2, so that all other factors have to be evaluated to order β^0. Suppressing all terms containing β, we obtain P(y|t^post, u) ≈ P(y|t^post, β=0) and, from the Bayes formula,

$$P(x \mid t_i^{pre}, t^{post}) = \frac{P(t^{post} \mid x, t_i^{pre})}{\langle P(t^{post} \mid x, t_i^{pre})\rangle_{X|t_i^{pre}}}\; P(x \mid t_i^{pre}) \approx P(x \mid t_i^{pre}).$$

In order to see the contribution of t_i^pre and t^post, we think of separating the effects caused by the spikes at t_i^pre, t^post from the mean weight evolution caused by all other spikes. Therefore we insert ⟨y(t)⟩_{Y|t^post,β=0} = δ(t − t^post) + ν_0[1 + φ(t − t^post)] and ⟨δε_i(t) δu(t)⟩_{X|t_i^pre} = w_i[ε^2(t − t_i^pre) + ε̄_2 ν_i] into Eq. (16) and decompose Δw_i(t^post − t_i^pre) into the following four terms: the drift term Δw_i^0 = αβ^2 (g_0'/g_0)^2 T ν_0 ε̄_2 w_i ν_i of the batch learning (14), which does not depend on t_i^pre or t^post; the presynaptic component Δw_i^pre = αβ^2 (g_0'/g_0)^2 ν_0 ε̄_2 w_i, which is triggered by the presynaptic spike at t_i^pre; the postsynaptic component Δw_i^post = αβ^2 (g_0'/g_0)^2 [1 + ν_0 ∫_0^T φ(t − t^post) dt] ε̄_2 w_i ν_i, which is triggered by the postsynaptic spike at t^post; and the correlation component

$$\Delta w_i^{corr} = \alpha\beta^2 \left(\frac{g_0'}{g_0}\right)^{\!2} w_i \left[ \varepsilon^2(t^{post} - t_i^{pre}) + \nu_0 \int_0^T \phi(t - t^{post})\, \varepsilon^2(t - t_i^{pre})\, dt \right] \qquad (17)$$

that depends on the difference of the pre- and postsynaptic spike timing.
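To visualize the resulting learning window, Eq. (17) can be evaluated on a grid of spike-time differences. The sketch below uses the exponential EPSP introduced just below; the autocorrelation function phi is a crude hypothetical stand-in (in the paper, φ comes from the renewal equations (9)).

```python
import numpy as np

def stdp_window(delta_ts, w_i, nu0, phi, tau_u=10e-3, dt=1e-4, T=0.3,
                alpha=0.1, beta=1.0):
    """Delta w_i^corr of Eq. (17) as a function of t_post - t_pre,
    with the exponential EPSP eps(s) = Theta(s) exp(-s / tau_u)."""
    gp_over_g = 1.0 / (2.0 * np.log(2.0))   # g0'/g0 for g0*log2(1+e^z) at z=0
    eps2 = lambda s: np.where(s > 0, np.exp(-2.0 * s / tau_u), 0.0)
    t = np.arange(0.0, T, dt)
    out = []
    for d in delta_ts:                      # d = t_post - t_pre
        t_post, t_pre = T / 2.0, T / 2.0 - d
        pair = eps2(d)                      # first term in the bracket
        refr = nu0 * np.sum(phi(t - t_post) * eps2(t - t_pre)) * dt
        out.append(alpha * beta**2 * gp_over_g**2 * w_i * (pair + refr))
    return np.array(out)

# Hypothetical negative autocorrelation standing in for phi of Eq. (9):
phi = lambda s: -np.exp(-np.abs(s) / 5e-3)
window = stdp_window(np.linspace(-0.05, 0.05, 101), w_i=1e-3, nu0=20.0, phi=phi)
```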
[Figure 2: (A) The effect from the EPSP: the first term in the square bracket of (17), ε^2(s) against s [ms]. (B) The effect from refractoriness: the second term, ν_0(φ ∗ ε^2)(s) against s [ms]. (C) Temporal learning window Δw_i^corr of (17) against t^post − t_i^pre [ms].]
In the following, we choose a simple exponential EPSP, ε(s) = Θ(s) e^{−s/τ_u}, with a time constant τ_u = 10 ms. The parameters are N = 100, ν_i = 40 Hz for all i, w_i = (N τ_u ν_i)^{−1}, β = 1 and α = 0.1.

Figure 2 shows Δw_i^corr of (17). The first term of (17) indicates the contribution of a presynaptic spike at t_i^pre to increasing the online estimate of the membrane potential variance at time t^post, whereas the second term represents the effect of the refractory period on the postsynaptic firing intensity, i.e., the normalized autocorrelation function convolved with the presynaptic contribution term. Due to the averaging of ⟨·⟩_{Y|t^post,β=0} and ⟨·⟩_{X|t_i^pre} in (16), this optimal temporal learning window is local in time; we do not need to impose a memory span [14] to restrict the negative part of the learning window.

Figure 3 compares Δw_i of (16) with numerical simulations of (15). We note a good agreement between theory and simulation. We recall that all calculations, and hence the STDP function of (17), are valid for small β, i.e., for small fluctuations of g.
[Figure 3: Comparison of the analytical result of (16) (solid line) and the numerical simulation of the online learning rule (15) (circles), plotting Δw_i against t^post − t_i^pre [ms]. For the simulation, the conditional average ⟨Δw_i⟩_{X,Y|t_i^pre,t^post} is evaluated by integrating dw_i/dt over 200 ms around spike pairs with the given interval t^post − t_i^pre.]
4
Conclusion
It is important for neurons, especially in primary sensory systems, to send information from previous processing circuits to neurons in other areas while capturing the essential features of their input. Mutual information is a natural quantity to be maximized from this perspective. We introduced an online learning rule for synaptic weights that increases information transmission for small input fluctuations. Introducing the temporal properties of the target neuron enables us to analyze the temporal properties of the learning rule required to
increase the mutual information. Consequently, the temporal learning window is given in
terms of the time course of EPSPs and the autocorrelation function of the postsynaptic neuron. In particular, neuronal refractoriness plays a major role and yields the negative part
of the learning window. Though we restrict our analysis here to excitatory synapses with
independent spike trains, it is straightforward to generalize the approach to a mixture of excitatory and inhibitory neurons with weakly correlated spike trains as long as the synaptic
weights are small enough. The analytically derived temporal learning window is similar to
the experimentally observed bimodal STDP window [1]. Since the effective time course
of EPSPs and the autocorrelation function of output spike trains vary from one part of
the brain to another, it is important to compare those functions with the temporal learning
window in biological settings.
Acknowledgments
T.T. is supported by the Japan Society for the Promotion of Science and a Grant-in-Aid for
JSPS Fellows; J.-P.P. is supported by the Swiss National Science Foundation. We thank Y.
Aviel for discussions.
References
[1] G. Bi and M. Poo. Synaptic modification of correlated activity: Hebb's postulate revisited. Annu. Rev. Neurosci., 24:139–166, 2001.
[2] W. Gerstner and W. M. Kistler. Spiking Neuron Models. Cambridge University Press, 2002.
[3] R. Kempter, W. Gerstner, and J. L. van Hemmen. Hebbian learning and spiking neurons. Phys. Rev. E, 59:4498–4514, 1999.
[4] W. Gerstner and W. M. Kistler. Mathematical formulations of Hebbian learning. Biol. Cybern., 87:404–415, 2002.
[5] R. Gütig, R. Aharonov, S. Rotter, and H. Sompolinsky. Learning input correlations through nonlinear temporally asymmetric Hebbian plasticity. J. Neurosci., 23(9):3697–3714, 2003.
[6] R. B. Stein. The information capacity of nerve cells using a frequency code. Biophys. J., 7:797–826, 1967.
[7] W. Bialek, F. Rieke, R. de Ruyter van Steveninck, and D. Warland. Reading a neural code. Science, 252:1854–1857, 1991.
[8] F. Rieke, D. Warland, R. de Ruyter van Steveninck, and W. Bialek. Spikes. MIT Press, 1997.
[9] R. Linsker. Self-organization in a perceptual network. Computer, 21:105–117, 1988.
[10] J.-P. Nadal and N. Parga. Nonlinear neurons in the low-noise limit: a factorial code maximizes information transfer. Network: Comput. Neural Syst., 5:565–581, 1994.
[11] J.-P. Nadal, N. Brunel, and N. Parga. Nonlinear feedforward networks with stochastic outputs: infomax implies redundancy reduction. Network: Comput. Neural Syst., 9:207–217, 1998.
[12] A. J. Bell and T. Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural Comput., 7(6):1004–1034, 1995.
[13] J. J. Hopfield. Encoding for computation: recognizing brief dynamical patterns by exploiting effects of weak rhythms on action-potential timing. Proc. Natl. Acad. Sci. USA, 101(16):6255–6260, 2004.
[14] G. Chechik. Spike-timing-dependent plasticity and relevant mutual information maximization. Neural Comput., 15:1481–1510, 2003.
[15] V. V. Prelov and E. C. van der Meulen. An asymptotic expression for the information and capacity of a multidimensional channel with weak input signals. IEEE Trans. Inform. Theory, 39(5):1728–1735, 1993.
[16] T. M. Cover and J. A. Thomas. Elements of Information Theory. New York: Wiley, 1991.
[17] N. Brunel and J.-P. Nadal. Mutual information, Fisher information, and population coding. Neural Comput., 10:1731–1757, 1998.
Reducing Spike Train Variability:
A Computational Theory Of
Spike-Timing Dependent Plasticity
Sander M. Bohte^{1,2}
[email protected]
^1 Dept. of Software Engineering, CWI, Amsterdam, The Netherlands

Michael C. Mozer^2
[email protected]
^2 Dept. of Computer Science, University of Colorado, Boulder, USA
Abstract
Experimental studies have observed synaptic potentiation when a
presynaptic neuron fires shortly before a postsynaptic neuron, and
synaptic depression when the presynaptic neuron fires shortly after. The dependence of synaptic modulation on the precise timing of the two action potentials is known as spike-timing dependent plasticity or STDP. We derive STDP from a simple computational principle: synapses adapt so as to minimize the postsynaptic neuron's variability to a given presynaptic input, causing
the neuron?s output to become more reliable in the face of noise.
Using an entropy-minimization objective function and the biophysically realistic spike-response model of Gerstner (2001), we simulate neurophysiological experiments and obtain the characteristic
STDP curve along with other phenomena including the reduction in
synaptic plasticity as synaptic efficacy increases. We compare our
account to other efforts to derive STDP from computational principles, and argue that our account provides the most comprehensive
coverage of the phenomena. Thus, reliability of neural response in
the face of noise may be a key goal of cortical adaptation.
1
Introduction
Experimental studies have observed synaptic potentiation when a presynaptic neuron fires shortly before a postsynaptic neuron, and synaptic depression when the
presynaptic neuron fires shortly after. The dependence of synaptic modulation on
the precise timing of the two action potentials, known as spike-timing dependent
plasticity or STDP, is depicted in Figure 1. Typically, plasticity is observed only
when the presynaptic and postsynaptic spikes (hereafter, pre and post) occur within
a 20–30 ms time window, and the transition from potentiation to depression is very
rapid. Another important observation is that synaptic plasticity decreases with increased synaptic efficacy. The effects are long lasting, and are therefore referred to
as long-term potentiation (LTP) and depression (LTD). For detailed reviews of the
evidence for STDP, see [1, 2].
Because these intriguing findings appear to describe a fundamental learning mechanism in the brain, a flurry of models have been developed that focus on different
aspects of STDP, from biochemical models that explain the underlying mechanisms
giving rise to STDP [3], to models that explore the consequences of STDP-like learning rules in an ensemble of spiking neurons [4, 5, 6, 7], to models that propose fundamental computational justifications for STDP.

[Figure 1: (a) Measuring STDP experimentally: pre-post spike pairs are repeatedly induced at a fixed interval Δt_pre−post, and the resulting change to the strength of the synapse is assessed; (b) change in synaptic strength after repeated spike pairing as a function of the difference in time between the pre and post spikes (data from Zhang et al., 1998). We have superimposed an exponential fit of LTP and LTD.]

Most commonly, STDP is viewed as a type of asymmetric Hebbian learning with a temporal dimension.
However, this perspective is hardly a fundamental computational rationale, and
one would hope that such an intuitively sensible learning rule would emerge from a
first-principle computational justification.
Several researchers have tried to derive a learning rule yielding STDP from first
principles. Rao and Sejnowski [8] show that STDP emerges when a neuron attempts
to predict its membrane potential at some time t from the potential at time t − Δt. However, STDP emerges only for a narrow range of Δt values, and the qualitative nature of the modeling makes it unclear whether a quantitative fit can be obtained. Dayan and Häusser [9] show that STDP can be viewed as an optimal noise-removal filter for certain noise distributions. However, even small variations from these noise
distributions yield quite different learning rules, and the noise statistics of biological
neurons are unknown. Eisele (private communication) has shown that an STDP-like
learning rule can be derived from the goal of maintaining the relevant connections
in a network. Chechik [10] is most closely related to the present work. He relates
STDP to information theory via maximization of mutual information between input
and output spike trains. This approach derives the LTP portion of STDP, but fails
to yield the LTD portion.
The computational approach of Chechik (as well as Dayan and Häusser) is premised
on a rate-coding neuron model that disregards the relative timing of spikes. It
seems quite odd to argue for STDP using rate codes: if spike timing is irrelevant
to information transmission, then STDP is likely an artifact and is not central to
understanding mechanisms of neural computation. Further, as noted in [9], because
STDP is not quite additive in the case of multiple input or output spikes that are
near in time [11], one should consider interpretations that are based on individual
spikes, not aggregates over spike trains.
Here, we present an alternative computational motivation for STDP. We conjecture that a fundamental objective of cortical computation is to achieve reliable neural responses; that is, neurons should produce the identical response, both in the number and timing of spikes, given a fixed input spike train. Reliability is an issue if neurons are affected by noise influences, because noise leads to variability in a neuron's dynamics and therefore in its response. Minimizing this variability will reduce the effect of noise and will therefore increase the informativeness of the neuron's output signal. The source of the noise is not important; it could be intrinsic to a neuron (e.g., a noisy threshold) or it could originate in unmodeled external sources causing fluctuations in the membrane potential uncorrelated with a particular input.
We are not suggesting that increasing neural reliability is the only learning objective.
If it were, a neuron would do well to give no response regardless of the input.
Rather, reliability is but one of many objectives that learning tries to achieve. This
form of unsupervised learning must, of course, be complemented by supervised and
reinforcement learning that allow an organism to achieve its goals and satisfy drives.
We derive STDP from the following computational principle: synapses adapt so as
to minimize the entropy of the postsynaptic neuron's output in response to a given
presynaptic input. In our simulations, we follow the methodology of neurophysiological experiments. This approach leads to a detailed fit to key experimental results.
We model not only the shape (sign and time course) of the STDP curve, but also
the fact that potentiation of a synapse depends on the efficacy of the synapse: it
decreases with increased efficacy. In addition to fitting these key STDP phenomena, the model allows us to make predictions regarding the relationship between
properties of the neuron and the shape of the STDP curve.
Before delving into the details of our approach, we attempt to give a basic intuition about the approach. Noise in spiking neuron dynamics leads to variability in
the number and timing of spikes. Given a particular input, one spike train might
be more likely than others, but the output is nondeterministic. By the entropy-minimization principle, adaptation should reduce the likelihood of these other possibilities. To be concrete, consider a particular experimental paradigm. In [12], a
pre neuron is identified with a weak synapse to a post neuron, such that the pre is
unlikely to cause the post to fire. However, the post can be induced to fire via a
second presynaptic connection. In a typical trial, the pre is induced to fire a single
spike, and with a variable delay, the post is also induced to fire (typically) a single
spike. To increase the likelihood of the observed post response, other response possibilities must be suppressed. With presynaptic input preceding the postsynaptic
spike, the most likely alternative response is no output spikes at all. Increasing
the synaptic connection weight should then reduce the possibility of this alternative
response. With presynaptic input following the postsynaptic spike, the most likely
alternative response is a second output spike. Decreasing the synaptic connection
weight should reduce the possibility of this alternative response. Because both of
these alternatives become less likely as the lag between pre and post spikes is increased, one would expect that the magnitude of synaptic plasticity diminishes with
the lag, as is observed in the STDP curve.
Our approach to reducing response variability given a particular input pattern involves computing the entropy gradient with respect to the synaptic weights in a differentiable model of spiking neuron behavior. We use the Spike Response Model (SRM) of [13]
with a stochastic threshold, where the stochastic threshold models fluctuations of
the membrane potential or the threshold outside of experimental control. For the
stochastic SRM, the response probability is differentiable with respect to the synaptic weights, allowing us to calculate the entropy gradient with respect to the weights
conditional on the presented input. Learning is presumed to take a gradient step
to reduce this conditional entropy. In modeling neurophysiological experiments, we
demonstrate that this learning rule yields the typical STDP curve. We can predict
the relationship between the exact shape of the STDP curve and physiologically
measurable parameters, and we show that our results are robust to the choice of
the few free parameters of the model.
Two papers in these proceedings are closely related to our work. They also find
STDP-like curves when attempting to maximize an information-theoretic measure (the mutual information between input and output) for a Spike Response Model
[14, 15]. Bell & Parra [14] use a deterministic SRM model which does not model the
LTD component of STDP properly. The derivation by Toyoizumi et al. [15] is valid
only for an essentially constant membrane potential with small fluctuations. Neither
of these approaches has succeeded in quantitatively modeling specific experimental
data with neurobiologically-realistic timing parameters, and neither explains the
saturation of LTD/LTP with increasing weights as we do. Nonetheless, these models
make an interesting contrast to ours by suggesting a computational principle of
optimization of information transmission, as contrasted with our principle of neural
noise reduction. Perhaps experimental tests can be devised to distinguish between
these competing theories.
2
The Stochastic Spike Response Model
The Spike Response Model (SRM), defined by Gerstner [13], is a generic integrate-and-fire model of a spiking neuron that closely corresponds to the behavior of a
biological spiking neuron and is characterized in terms of a small set of easily interpretable parameters [16]. The standard SRM formulation describes the temporal
evolution of the membrane potential based on past neuronal events, specifically as
a weighted sum of postsynaptic potentials (PSPs) modulated by reset and threshold effects of previous postsynaptic spiking events. Following [13], the membrane
potential of cell i at time t, u_i(t), is defined as:

$$u_i(t) = \eta(t - \hat{f}_i) + \sum_{j \in \Gamma_i} w_{ij} \sum_{f_j \in \mathcal{F}_j^t} \varepsilon(t - \hat{f}_i,\; t - f_j), \qquad (1)$$

where Γ_i is the set of inputs connected to neuron i, F_j^t is the set of times prior to t at which neuron j has spiked, f̂_i is the time of the last spike of neuron i, w_ij is the synaptic weight from neuron j to neuron i, ε(t − f̂_i, t − f_j) is the PSP in neuron i due to an input spike from neuron j at time f_j, and η(t − f̂_i) is the refractory response due to the postsynaptic spike at time f̂_i. Neuron i fires when the potential u_i(t) exceeds a threshold (ϑ) from below.
The postsynaptic potential ε is modeled as the differential alpha function in [13], defined with respect to two variables: the time since the most recent postsynaptic spike, x, and the time since the presynaptic spike, s:

$$\varepsilon(x, s) = \frac{1}{1 - \tau_s/\tau_m} \Big\{ \Big[\exp\!\big(-\tfrac{s}{\tau_m}\big) - \exp\!\big(-\tfrac{s}{\tau_s}\big)\Big] H(s)\,H(x - s) \;+\; \exp\!\big(-\tfrac{s - x}{\tau_s}\big) \Big[\exp\!\big(-\tfrac{x}{\tau_m}\big) - \exp\!\big(-\tfrac{x}{\tau_s}\big)\Big] H(x)\,H(s - x) \Big\}, \qquad (2)$$

where τ_s and τ_m are the rise and decay time constants of the PSP, and H is the Heaviside function. The refractory reset function is defined to be [13]:

$$\eta(x) = u_{abs}\, H(\tau_{abs} - x)\, H(x) + u_{abs} \exp\!\Big(-\frac{x + \tau_{abs}}{\tau_r^f}\Big) + u_{sr} \exp\!\Big(-\frac{x}{\tau_r^s}\Big), \qquad (3)$$

where u_abs is a large negative contribution to the potential that models the absolute refractory period, with duration τ_abs. We smooth this refractory response by a fast-decaying exponential with time constant τ_r^f. The third term in the sum represents the slowly decaying exponential recovery of an elevated threshold, u_sr, with time constant τ_r^s. (Graphs of these ε and η functions can be found in [13].) We made a minor modification to the SRM described in [13] by relaxing the constraint that τ_r^s = τ_m; smoothing the absolute refractory function is mentioned in [13] but not explicitly defined as we do here. In all simulations presented, τ_abs = 2 ms, τ_r^s = 4τ_m, and τ_r^f = 0.1τ_m.
The SRM we just described is deterministic. Gerstner [13] introduces a stochastic variant of the SRM (sSRM) by incorporating the notion of a stochastic firing threshold: given membrane potential u_i(t), the probability density of the neuron firing at time t is specified by ρ(u_i(t)). Herrmann & Gerstner [17] find that for a realistic escape-rate noise model the firing probability density as a function of the potential is initially small and constant, transitioning to an asymptotically linear increase around threshold ϑ. In our simulations, we use such a function:

$$\rho(v) = \frac{\alpha}{\delta}\big(\ln[1 + \exp(\delta(\vartheta - v))] - \delta(\vartheta - v)\big), \qquad (4)$$

where ϑ is the firing threshold in the absence of noise, δ determines the abruptness of the constant-to-linear probability density transition around ϑ, and α determines the slope of the increasing part. Experiments with sigmoidal and exponential density functions were found to not qualitatively affect the results.
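Pulling the pieces of the sSRM together, the sketch below implements the PSP kernel (2), the reset (3) as reconstructed above, and the firing density (4). The time constants follow the text; the magnitudes u_abs and u_sr and the density parameters theta, alpha, and delta are unspecified in this excerpt and are labeled as assumptions.

```python
import numpy as np

tau_s, tau_m = 1.5e-3, 12.25e-3            # PSP rise/decay constants (Sec. 5)
tau_abs = 2e-3
tau_rf, tau_rs = 0.1 * tau_m, 4.0 * tau_m  # as stated in the text
u_abs, u_sr = -10.0, -2.0                  # reset magnitudes: assumed values
theta, alpha, delta = 1.0, 5.0, 50.0       # threshold/slope/abruptness: assumed

def eps(x, s):
    """Differential alpha PSP of Eq. (2); x, s in seconds (scalars)."""
    pre = 1.0 / (1.0 - tau_s / tau_m)
    if 0.0 <= s <= x:      # input arrived after the last output spike
        return pre * (np.exp(-s / tau_m) - np.exp(-s / tau_s))
    if 0.0 <= x < s:       # input arrived before the last output spike
        return pre * np.exp(-(s - x) / tau_s) * (np.exp(-x / tau_m) - np.exp(-x / tau_s))
    return 0.0

def eta(x):
    """Refractory reset of Eq. (3), following the reconstruction above."""
    absolute = u_abs if 0.0 <= x <= tau_abs else 0.0
    return (absolute
            + u_abs * np.exp(-(x + tau_abs) / tau_rf)
            + u_sr * np.exp(-x / tau_rs))

def rho(v):
    """Stochastic firing density of Eq. (4), numerically stable form."""
    z = delta * (theta - v)
    return (alpha / delta) * (np.logaddexp(0.0, z) - z)
```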
3
Minimizing Conditional Entropy
We now derive the rule for adjusting the weight from a presynaptic neuron j to a postsynaptic sSRM neuron i, so as to minimize the entropy of i's response given a particular spike sequence from j. A spike sequence is described by the set of all times at which spikes have occurred within some interval between 0 and T, denoted F_j^T for neuron j. We assume the interval is wide enough that spikes outside the interval do not influence the state of the neuron within the interval (e.g., through threshold reset effects). We can then treat intervals as independent of each other. Let the postsynaptic neuron i produce a response ξ ∈ Ω_i, where Ω_i is the set of all possible responses given the input, ξ ≡ F_i^T, and g(ξ) is the probability density over responses. The differential conditional entropy h(Ω_i) of neuron i's response is then defined as:

$$h(\Omega_i) = -\int_{\Omega_i} g(\xi)\,\log g(\xi)\; d\xi. \qquad (5)$$

To minimize the differential conditional entropy by adjusting the neuron's weights, we compute the gradient of the conditional entropy with respect to the weights:

$$\frac{\partial h(\Omega_i)}{\partial w_{ij}} = -\int_{\Omega_i} g(\xi)\,\frac{\partial \log g(\xi)}{\partial w_{ij}}\,\big(\log g(\xi) + 1\big)\; d\xi. \qquad (6)$$
For a differentiable neuron model, ∂log g(ξ)/∂w_ij can be expressed as follows when neuron i fires once at time f̂_i [18]:

$$\frac{\partial \log g(\xi)}{\partial w_{ij}} = \int_{t=0}^{T} \frac{\partial \rho(u_i(t))}{\partial u_i(t)}\;\frac{\partial u_i(t)}{\partial w_{ij}}\;\frac{\delta(t - \hat{f}_i) - \rho(u_i(t))}{\rho(u_i(t))}\; dt, \qquad (7)$$

where δ(·) is the Dirac delta, and ρ(u_i(t)) is the firing probability density of neuron i at time t. (See [18] for the generalization to multiple postsynaptic spikes.) With the sSRM we can compute the partial derivatives ∂ρ(u_i(t))/∂u_i(t) and ∂u_i(t)/∂w_ij. Given the density function (4),

$$\frac{\partial \rho(u_i(t))}{\partial u_i(t)} = \frac{\alpha}{1 + \exp(\delta(\vartheta - u_i(t)))}, \qquad \frac{\partial u_i(t)}{\partial w_{ij}} = \varepsilon(t - \hat{f}_i,\; t - f_j).$$
To perform gradient descent on the conditional entropy, we use the weight update

$$\Delta w_{ij} \propto -\frac{\partial h(\Omega_i)}{\partial w_{ij}} = \int_{\Omega_i} g(\xi)\big(\log g(\xi) + 1\big) \int_{t=0}^{T} \frac{\alpha\,\varepsilon(t - \hat{f}_i, t - f_j)\,\big(\delta(t - \hat{f}_i) - \rho(u_i(t))\big)}{\big(1 + \exp(\delta(\vartheta - u_i(t)))\big)\,\rho(u_i(t))}\; dt\; d\xi. \qquad (8)$$
We can use numerical methods to evaluate Equation (8). However, it seems biologically unrealistic to suppose a neuron can integrate over all possible responses ξ. This dilemma can be circumvented in two ways. First, the resulting learning rule might be cached in some form through evolution so that the full computation is not necessary (e.g., in an STDP curve). Second, the specific response produced by a neuron on a single trial might be considered to be a sample from the distribution g(ξ), and the integration is performed by a sampling process over repeated trials; each trial would produce a stochastic gradient step.

[Figure 2: (a) Experimental setup of Zhang et al. and (b) their experimental STDP curve (small squares) vs. our model (solid line). Model parameters: τ_s = 1.5 ms, τ_m = 12.25 ms.]
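Under that sampling interpretation, each observed response contributes one stochastic step. A minimal sketch follows; log_g and dlog_g_dw are hypothetical helpers standing in for Eqs. (6)-(8), not functions defined in the paper.

```python
def stochastic_entropy_step(w, xi, lr, log_g, dlog_g_dw):
    """One sampled gradient step decreasing h(Omega_i), per Eqs. (6)-(8).

    w          : current weight vector
    xi         : the response (output spike train) observed on this trial
    log_g      : callable returning log g(xi) under weights w (assumed)
    dlog_g_dw  : callable returning d log g(xi) / dw via Eq. (7) (assumed)
    """
    ll = log_g(w, xi)
    grad = dlog_g_dw(w, xi)
    # Single-sample estimate of -dh/dw = E[(log g + 1) * dlog g/dw]
    return w + lr * (ll + 1.0) * grad
```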
4
Simulation Methodology
We model in detail the experiment of Zhang et al. [12] (Figure 2a). In this experiment, a post neuron is identified that has two neurons projecting to it, call them the pre and the driver. The pre is subthreshold: it produces depolarization but no spike. The driver is suprathreshold: it induces a spike in the post. Plasticity of the pre-post synapse is measured as a function of the timing between pre and post spikes (Δt_pre−post) by varying the timing between induced spikes in the pre and the driver (Δt_pre−driver). This measurement yields the well-known STDP curve (Figure 1b).¹ The experiment imposes several constraints on a simulation: the driver alone causes spiking > 70% of the time, the pre alone causes spiking < 10% of the time, synchronous firing of driver and pre causes LTP if and only if the post fires, and the time constants of the EPSPs (τ_s and τ_m in the sSRM) are in the range of 1–3 ms and 10–15 ms respectively. These constraints remove many free parameters from our simulation. We do not explicitly model the two input cells; instead, we model the EPSPs they produce. The magnitudes of these EPSPs are picked to satisfy the experimental constraints: the driver EPSP alone causes a spike in the post on 77.4% of trials, and the pre EPSP alone causes a spike on fewer than 0.1% of trials.
Free parameters of the simulation are δ and ϑ in the spike-probability function (α can be folded into the learning rate), and the magnitudes (u_sr, u_abs) and reset time constants (τ_r^s, τ_r^f, τ_abs).
The dependent variable of the simulation is Δt_pre−driver, and we measure the time of the post spike to determine Δt_pre−post. We estimate the weight update for a given Δt_pre−driver using Equation 8, approximating the integral by a summation over all time-discretized output responses consisting of 0, 1, or 2 spikes. Three or more spikes have a probability that is vanishingly small.
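The enumeration just described can be written down directly. In the sketch below, the probability and gradient helpers are hypothetical placeholders for the model quantities in Eq. (8), and the response set is capped at two spikes as in the text.

```python
def expected_weight_update(times, p_of, logp, dlogp_dw):
    """Sketch of the summation approximating Eq. (8): enumerate all
    time-discretized responses with 0, 1, or 2 output spikes.

    times            : the discrete time grid (sequence of spike times)
    p_of(xi)         : probability of response xi (assumed helper)
    logp, dlogp_dw   : log g(xi) and its weight gradient (assumed helpers)
    """
    responses = [()]                                   # the silent response
    responses += [(t1,) for t1 in times]               # one-spike responses
    responses += [(t1, t2) for i, t1 in enumerate(times)
                  for t2 in times[i + 1:]]             # two-spike responses
    total = 0.0
    for xi in responses:
        total += p_of(xi) * (logp(xi) + 1.0) * dlogp_dw(xi)
    return total
```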
5
Results
Figure 2b shows a typical STDP curve obtained from the model by plotting the estimated weight update of Equation 8 against Δt_pre−post. The model also explains
a key finding that has not been explained by any other account, namely, that the
magnitude of LTP or LTD decreases as the efficacy of the synapse between the pre
and the post increases [2]. Further, the dependence is stronger for LTP than LTD.
¹ In most experimental studies of STDP, the driver neuron is not used: the post is induced to spike by a direct depolarizing current injection. Modeling current injections requires additional assumptions. Consequently, we focus on the Zhang et al. experiment.

[Figure 3: (a) LTP and LTD plasticity as a function of synaptic efficacy of the subthreshold input. (b)–(d) STDP curves predicted by the model as τ_m, u_sr, and ϑ are manipulated.]

Figure 3a plots the magnitude of LTP for Δt_pre−post = −5 ms and the magnitude of LTD for Δt_pre−post = 7 ms as the amplitude of the pre's EPSP is increased. The magnitude of the weight change decreases as the weight increases, and this
effect is stronger for LTP than LTD. The model's explanation for this phenomenon
is simple: As the weight increases, its effect saturates, and a small change to the
weight does little to alter its influence. Consequently, the gradient of the entropy
with respect to the weight goes toward zero.
The qualitative shape of the STDP curve is robust to settings of the model's parameters, e.g., the EPSP decay time constant τ_m (Figure 3b), the strength of the threshold reset u_sr (Figure 3c), and the spiking threshold ϑ (Figure 3d). Additionally, the choice of spike-probability function (exponential, sigmoidal, or linear) is not critical.
The model makes two predictions relating the shape of the STDP curve to properties of a neuron. These predictions are empirically testable if a diverse population
of cells can be studied: (1) the width of the LTD and LTP windows should depend
on the EPSP decay time constant (Figure 3b), and (2) the relative strength of LTP to LTD
should depend on the strength of the threshold reset (Figure 3c), because stronger
resets lead to reduced LTD by reducing the probability of a second spike.
6
Discussion
In this paper, we explored a fundamental computational principle, that synapses
adapt so as to minimize the variability of a neuron's response in the face of noisy inputs, yielding more reliable neural representations. From this principle, instantiated as conditional entropy minimization, we derived the STDP learning
curve. Importantly, the simulation methodology we used to derive the curve closely
follows the procedure used in neurophysiological experiments [12]. Our simulations
obtain an STDP curve that is robust to model parameters and details of the noise
distribution.
Our results are critically dependent on the use of Gerstner's stochastic Spike Response Model, whose dynamics are a good approximation to those of a biological
spiking neuron. The sSRM has the virtue of being characterized by parameters that
are readily related to neural dynamics, and its dynamics are differentiable, allowing
us to derive a gradient-descent learning rule.
Our simulations are based on the classical STDP experiment in which a single
presynaptic spike is paired with a single postsynaptic spike. The same methodology
can be applied to the situation in which there are multiple presynaptic and/or
postsynaptic spikes, although the computation involved becomes nontrivial. We
are currently modeling the data from multi-spike experiments.
We modeled the Zhang et al. experiment in which a driver neuron is used to induce
the post to fire. To induce the post to fire, most other studies use a depolarizing
current injection. We are not aware of any established model for current injection
within the SRM framework, and we are currently elaborating such a model. We
expect to then be able to simulate experiments in which current injections are used,
allowing us to investigate the interesting issue of whether the two experimental
techniques produce different forms of STDP.
Acknowledgement: Work of SMB was supported by the Netherlands Organization for
Scientific Research (NWO), TALENT grant S-62 588.
References
[1] G.-q. Bi and M.-m. Poo. Synaptic modification by correlated activity: Hebb's postulate
revisited. Ann. Rev. Neurosci., 24:139-166, 2001.
[2] A. Kepecs, M.C.W. van Rossum, S. Song, and J. Tegner. Spike-timing-dependent
plasticity: common themes and divergent vistas. Biol. Cybern., 87:446-458, 2002.
[3] A. Saudargiene, B. Porr, and F. Wörgötter. How the shape of pre- and postsynaptic
signals can influence STDP: A biophysical model. Neural Comp., 16:595-625, 2004.
[4] W. Gerstner, R. Kempter, J. L. van Hemmen, and H. Wagner. A neural learning rule
for sub-millisecond temporal coding. Nature, 383:76-78, 1996.
[5] S. Song, K. Miller, and L. Abbott. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat. Neurosci., 3:919-926, 2000.
[6] R. van Rossum, G.-q. Bi, and G.G. Turrigiano. Stable Hebbian learning from spike
time dependent plasticity. J. Neurosci., 20:8812-8821, 2000.
[7] L.F. Abbott and W. Gerstner. Homeostasis and learning through STDP. In D. Hansel
et al. (eds), Methods and Models in Neurophysics, 2004.
[8] R.P.N. Rao and T.J. Sejnowski. Spike-timing-dependent plasticity as temporal difference learning. Neural Comp., 13:2221-2237, 2001.
[9] P. Dayan and M. Häusser. Plasticity kernels and temporal statistics. In S. Thrun,
L. Saul, and B. Schölkopf, editors, NIPS 16, 2004.
[10] G. Chechik. Spike-timing-dependent plasticity and relevant mutual information maximization. Neural Comp., 15:1481-1510, 2003.
[11] R.C. Froemke and Y. Dan. Spike-timing-dependent synaptic modification induced by
natural spike trains. Nature, 416:433-438, 2002.
[12] L.I. Zhang, H.W. Tao, C.E. Holt, W.A. Harris, and M.-m. Poo. A critical window
for cooperation and competition among developing retinotectal synapses. Nature,
395:37-44, 1998.
[13] W. Gerstner. A framework for spiking neuron models: The spike response model. In
F. Moss & S. Gielen (eds), The Handbook of Biol. Physics, vol. 4, pp. 469-516, 2001.
[14] A.J. Bell and L.C. Parra. Maximizing information yields spike timing dependent
plasticity. NIPS 17, 2005.
[15] T. Toyoizumi, J.-P. Pfister, K. Aihara, and W. Gerstner. Spike-timing dependent
plasticity and mutual information maximization for a spiking neuron model. NIPS 17, 2005.
[16] R. Jolivet, T.J. Lewis, and W. Gerstner. The spike response model: a framework to
predict neuronal spike trains. In Kaynak et al. (eds), Proc. ICANN/ICONIP 2003, pp.
846-853, 2003.
[17] A. Herrmann and W. Gerstner. Noise and the PSTH response to current transients:
I. J. Comp. Neurosci., 11:135-151, 2001.
[18] X. Xie and H.S. Seung. Learning in neural networks by reinforcement of irregular
spiking. Physical Review E, 69(041909), 2004.
1,749 | 259 |
Practical Characteristics of Neural Network
and Conventional Pattern Classifiers on
Artificial and Speech Problems*
Yuchun Lee
Digital Equipment Corp.
40 Old Bolton Road,
OGOl-2Ull
Stow, MA 01775-1215
Richard P. Lippmann
Lincoln Laboratory, MIT
Room B-349
Lexington, MA 02173-9108
ABSTRACT
Eight neural net and conventional pattern classifiers (Bayesian-unimodal Gaussian, k-nearest neighbor, standard back-propagation,
adaptive-stepsize back-propagation, hypersphere, feature-map, learning vector quantizer, and binary decision tree) were implemented
on a serial computer and compared using two speech recognition
and two artificial tasks. Error rates were statistically equivalent on
almost all tasks, but classifiers differed by orders of magnitude in
memory requirements, training time, classification time, and ease
of adaptivity. Nearest-neighbor classifiers trained rapidly but required the most memory. Tree classifiers provided rapid classification but were complex to adapt. Back-propagation classifiers typically required long training times and had intermediate memory
requirements. These results suggest that classifier selection should
often depend more heavily on practical considerations concerning
memory and computation resources, and restrictions on training
and classification times than on error rate.
*This work was sponsored by the Department of the Air Force and the Air Force Office of
Scientific Research.
1 Introduction
A shortcoming of much recent neural network pattern classification research has
been an overemphasis on back-propagation classifiers and a focus on classification
error rate as the main measure of performance. This research often ignores the many
alternative classifiers that have been developed (see e.g. [10]) and the practical
tradeoffs these classifiers provide in training time, memory requirements, classification time, complexity, and adaptivity. The purpose of this research was to explore
these tradeoffs and gain experience with many different classifiers. Eight neural net
and conventional pattern classifiers were used. These included Bayesian-unimodal
Gaussian, k-nearest neighbor (kNN), standard back-propagation, adaptive-stepsize
back-propagation, .hypersphere, feature-map (FM), learning vector quantizer (LVQ) ,
and binary decision tree classifiers.
Figure 1: Four problems used to test classifiers.
Bullseye: 2 input dimensions, 2 classes, 500 training / 500 testing patterns.
Disjoint: 2 input dimensions, 2 classes, 500 training / 500 testing patterns.
Digit: 22 cepstral input dimensions, 7 digit classes, talker-dependent, 70 training / 112 testing patterns, 16 training and 16 testing sets.
Vowel: 2 formant input dimensions, 10 vowel classes, talker-independent, 338 training / 330 testing patterns.
Classifiers were implemented on a serial computer and tested using the four problems shown in Fig. 1. The upper two artificial problems (Bullseye and Disjoint)
require simple two-dimensional convex or disjoint decision regions for minimum error classification. The lower digit recognition task (7 digits, 22 cepstral parameters,
16 talkers, 70 training and 112 testing patterns per talker) and vowel recognition
task (10 vowels, 2 formant parameters, 67 talkers, 338 training and 330 testing patterns) use real speech data and require more complex decision regions. These tasks
are described in [6, 11] and details of experiments are available in [9].
2 Training and Classification Parameter Selection
Initial experiments were performed to select sizes of classifiers that provided good
performance with limited training data and also to select high-performing versions
of each type of classifier. Experiments determined the number of nodes and hidden
layers in back-propagation classifiers, pruning techniques to use with tree and hypersphere classifiers, and numbers of exemplars or kernel nodes to use with feature-map
and LVQ classifiers.
2.1 Back-Propagation Classifiers
In standard back-propagation, weights typically are updated only after each trial
or cycle. A trial is defined as a single training pattern presentation and a cycle is
defined as a sequence of trials which sample all patterns in the training set. In group
updating, weights are updated every T trials while in trial-by-trial training, weights
are updated every trial. Furthermore, in trial-by-trial updating, training patterns
can be presented sequentially where a pattern is guaranteed to be presented every
T trials, or they can be presented randomly where patterns are randomly selected
from the training set. Initial experiments demonstrated that random trial-by-trial
training provided the best convergence rate and error reduction during training. It
was thus used whenever possible with all back-propagation classifiers.
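For concreteness, here is a minimal sketch of random trial-by-trial training for a single-hidden-layer network. The architecture, stepsize, and toy data below are our own illustrative assumptions, not the exact configurations used in these experiments; the point is only that one randomly chosen pattern is presented per trial and all weights are updated immediately after each presentation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class problem in two dimensions (illustrative data only).
X = rng.normal(size=(500, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(float)  # bullseye-like labels

n_hidden, eta = 8, 0.1
W1 = rng.normal(scale=0.5, size=(2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

for trial in range(20000):             # random trial-by-trial training
    i = rng.integers(len(X))           # pattern sampled at random each trial
    h = sigmoid(X[i] @ W1 + b1)
    out = sigmoid(h @ W2.ravel() + b2)
    err = out - y[i]                   # derivative of the squared error (up to 2)
    d_out = err * out * (1.0 - out)
    d_h = d_out * W2.ravel() * h * (1.0 - h)
    W2 -= eta * np.outer(h, d_out); b2 -= eta * d_out
    W1 -= eta * np.outer(X[i], d_h); b1 -= eta * d_h

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2.ravel() + b2) > 0.5
print("training accuracy:", (pred == y.astype(bool)).mean())
```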
All back-propagation classifiers used a single hidden layer and an output layer with
as many nodes as classes. The classification decision corresponded to the class of
the node in the output layer with the highest output value. During training, the
desired output pattern, D, was a vector with all elements set to 0 except for the
element corresponding to the correct class of the input pattern. This element of
D was set to 1. The mean-square difference between the actual output and this
desired output error is minimized when the output of each node is exactly the Bayes
a posteriori probability for each correct class [1, 10]. Back-propagation with this
"1 of m" desired output is thus well justified theoretically because it attempts to
estimate minimum-error Bayes probability functions. The number of hidden nodes
used in each back-propagation classifier was determined experimentally as described
in [6, 7, 9, 11].
Three "improved" back-propagation classifiers with the potential of reduced training
times were studied. The first, the adaptive-stepsize classifier, has a global stepsize
that is adjusted after every training cycle as described in [4]. The second, the
multiple-adaptive-stepsize classifier, has multiple stepsizes (one for each weight)
which are adjusted after every training cycle as described in [8]. The third classifier
uses the conjugate gradient method [9, 12] to minimize the output mean-square
error.
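A sketch of the per-weight stepsize adjustment in the spirit of Jacobs' rule [8] follows; the increment and decrement constants and the use of a simple quadratic objective in place of a network error surface are our own illustrative choices. Each weight carries its own stepsize, which grows when successive gradients agree in sign and shrinks when they disagree.

```python
import numpy as np

# Minimal quadratic objective standing in for the network error surface.
A = np.diag([1.0, 25.0])                  # deliberately poorly conditioned
grad = lambda w: A @ w

w = np.array([1.0, 1.0])
eta = np.full(2, 0.01)                    # one stepsize per weight
prev_g = np.zeros(2)
kappa, phi = 0.005, 0.7                   # additive increase, multiplicative decrease

for cycle in range(200):
    g = grad(w)
    agree = np.sign(g) == np.sign(prev_g)  # did the gradient keep its sign?
    eta = np.where(agree, eta + kappa, eta * phi)
    w -= eta * g
    prev_g = g

print("final w:", w, " final per-weight stepsizes:", eta)
```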
The goal of the three "improved" versions of back-propagation was to shorten the often lengthy training time observed with standard back-propagation. These improvements relied on fundamental assumptions about the error surfaces. However, only
the multiple-adaptive-stepsize algorithm was used for the final classifier comparison
due to the poor performance of the other two algorithms. The adaptive-stepsize
classifier often could not achieve adequately low error rates because the global stepsize (η) frequently converged too quickly to zero during training. The multiple-adaptive-stepsize classifier did not train faster than a standard back-propagation
classifier with carefully selected stepsize value. Nevertheless, it eliminated the need
for pre-selecting the stepsize parameter. The conjugate gradient classifier worked
well on simple problems but almost always rapidly converged to a local minimum
which provided high error rates on the more complex speech problems.
Figure 2: Decision regions formed by the hypersphere classifier (A) and by the binary decision tree classifier (B) on the test set for the vowel problem (axes: F1 and F2, in Hz). Inputs consist of the first two formants for ten vowels in the words who'd, hawed, hod, hud, had, heed, hid, head, heard, and hood, as described in [6, 9].
2.2 Hypersphere Classifier
Hypersphere classifiers build decision regions from nodes that form separate hypersphere decision regions. Many different types of hypersphere classifiers have been
developed [2, 13]. Experiments discussed in [9] led to the selection of a specific version of hypersphere classifier with "pruning". Each hypersphere can only shrink in
size, centers are not repositioned, an ambiguous response (positive outputs from hyperspheres corresponding to different classes) is mediated using a nearest-neighbor
rule, and hyperspheres that do not contribute to the classification performance are
pruned from the classifier for proper "fitting" of the data and to reduce memory
usage. Decision regions formed by a hypersphere classifier for the vowel classification problem are shown in the left side of Fig. 2. Separate regions in this figure
correspond to different vowels. Decision region boundaries contain arcs which are
segments of hyperspheres (circles in two dimensions) and linear segments caused by
the application of the nearest neighbor rule for ambiguous responses.
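A minimal sketch of this variant follows. The toy data, the initial radii, and the omission of the final pruning pass are our own simplifying assumptions: each training pattern seeds a hypersphere whose radius shrinks just enough to exclude patterns of other classes, and ambiguous or empty responses fall back on a nearest-neighbor rule.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 2))
y = (X[:, 0] > 0).astype(int)              # toy two-class labels

# One hypersphere per training pattern; each radius shrinks to exclude
# the nearest pattern of a different class (centers never move).
d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
radius = np.array([d[i, y != y[i]].min() * 0.999 for i in range(len(X))])

def classify(x):
    dist = np.linalg.norm(X - x, axis=1)
    inside = dist < radius                 # hyperspheres giving a positive output
    classes = np.unique(y[inside])
    if len(classes) == 1:                  # unambiguous response
        return classes[0]
    return y[dist.argmin()]                # ambiguous or empty: nearest neighbor

test = rng.normal(size=(200, 2))
preds = np.array([classify(x) for x in test])
print("test accuracy:", (preds == (test[:, 0] > 0)).mean())
```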
2.3 Binary Decision Tree Classifier
Binary decision tree classifiers from [3] were used in all experiments. Each node in a
tree has only two immediate offspring and the splitting decision is based on only one
of the input dimensions. Decision boundaries are thus overlapping hyper-rectangles
with sides parallel to the axes of the input space and decision regions become more
complex as more nodes are added to the tree. Decision trees for each problem were
grown until they classified all the training data exactly and then pruned back using
the test data to determine when to stop pruning. A complete description of the
decision tree classifier used is provided in [9] and decision regions formed by this
classifier for the vowel problem are shown in the right side of Fig. 2.
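As a sketch of how classification proceeds in such a tree (the splits below are hand-picked hypothetical values for illustration; actual trees are grown on training data and pruned back using test data as described above): each internal node compares a single input dimension against a threshold, so classifying a pattern costs only a few numerical comparisons.

```python
# Node: (dimension, threshold, left_child, right_child); a leaf is a class label.
# These formant thresholds and leaf labels are hypothetical, for illustration only.
tree = (0, 500.0,                      # split on F1 at 500 Hz
        (1, 1000.0, "heed", "hid"),    #   F1 <= 500: split on F2 at 1000 Hz
        (1, 1400.0, "hod", "had"))     #   F1 >  500: split on F2 at 1400 Hz

def classify(tree, x):
    while isinstance(tree, tuple):     # descend until a leaf is reached
        dim, thresh, left, right = tree
        tree = left if x[dim] <= thresh else right
    return tree

print(classify(tree, (450.0, 900.0)))   # -> "heed"
print(classify(tree, (700.0, 1600.0)))  # -> "had"
```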
2.4 Other Classifiers
The remaining four classifiers were tuned by selecting coarse sizing parameters to
"fit" the problem imposed. Some of these parameters include the number of exemplars in the LVQ and feature map classifiers and k in the k-nearest neighbor
classifier. Different types of covariance matrices (full, diagonal, and various types
of grand averaging) were also tried for the Bayesian-unimodal Gaussian classifier.
Best sizing parameter values for classifiers were almost always not those that that
best classified the training set. For the purpose of this study, training data was used
to determine internal parameters or weights in classifiers. The size of a classifier
and coarse sizing parameters were selected using the test data. In real applications
when a test set is not available, alternative methods, such as cross validation[3, 14]
would be used.
3 Classifier Comparison
All eight classifiers were evaluated on the four problems using simulations programmed in C on a Sun 3/110 workstation with a floating point accelerator. Classifiers were trained until their training error rate converged.
3.1 Error Rates
Error rates for all classifiers on all problems are shown in Fig. 3. The middle
solid lines in this figure correspond to the average error rate over all classifiers
for each problem. The shaded area is one binomial standard deviation above and
below this average. As can be seen, there are only three cases where the error
rate of anyone classifier is substantially different from the average error. These
exceptions are the Bayesian-unimodal Gaussian classifier on the disjoint problem
Figure 3: Error rates for all classifiers on all four problems (Bullseye, Disjoint, Digit, Vowel). The middle solid lines correspond to the average error rate over all classifiers for each problem. The shaded area is one binomial standard deviation above and below the average error rate.
and the decision tree classifier on the digit and the disjoint problem. The Bayesianunimodal Gaussian classifier performed poorly on the disjoint problem because it
was unable to form the required bimodal disjoint decision regions. The decision
tree classifier performed poorly on the digit problem because the small amount of
training data (10 patterns per class) was adequately classified by a minimal 13-node
tree which didn't generalize well and didn't even use all 22 input dimensions. The
decision tree classifier worked well for the disjoint problem because it forms decision
regions parallel to both input axes as required for this problem.
3.2 Practical Characteristics
In contrast to the small differences in error rate, differences between classifiers on
practical performance issues such as training and classification time, and memory
usage were large. Figure 4 shows that the classifiers differed by orders of magnitude
in training time. Shown in log-scale, the k-nearest neighbor stands out distinctively
Figure 4: Training time of all classifiers (kNN, multi-stepsize, hypersphere, back-propagation, Bayesian, feature map, LVQ, tree) on all four problems, plotted on a log scale.
as the fastest trained classifier by many orders of magnitude. Depending on the
problem, Bayesian-unimodal Gaussian, hypersphere, decision tree, and feature map
classifiers also have reasonably short training times. LVQ and back-propagation
classifiers often required the longest training time. It should be noted that alternative implementations, for example using parallel computers, would lead to different
results.
Adaptivity or the ability to adapt using new patterns after complete training also
differed across classifiers. The k-nearest neighbor and hypersphere classifiers are
able to incorporate new information most readily. Others such as back-propagation
and LVQ classifiers are more difficult to adapt and some, such as decision tree
classifiers, are not designed to handle further adaptation after training is complete.
The binary decision tree can classify patterns much faster than others. Unlike most
classifiers that depend on "distance" calculations between the input pattern and all
stored exemplars, the decision tree classifier requires only a few numerical comparisons. Therefore, the decision tree classifier was many orders of magnitude faster
Figure 5: Classification memory usage (bytes) versus training program complexity (lines of code) for all classifiers on all four problems.
in classification than other classifiers. However, decision tree classifiers require the
most complex training algorithm. As a rough measure of ease of implementation, gauged subjectively by the number of lines in the training program, the decision tree classifier is many times more complex than the simplest training program, that of the k-nearest neighbor classifier. However, the k-nearest neighbor
classifier is one of the slowest in classification when implemented serially without
complex search techniques such as k-d trees [5]. These techniques greatly reduce
classification time but make adaptation to new training data more difficult and
increase complexity.
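To make the serial classification-time gap concrete, here is a back-of-the-envelope operation count. The tree size below is an assumed value and the counts are rough estimates, not measurements from the study: a serial k-nearest-neighbor classifier touches every stored exemplar coordinate, while a balanced binary tree needs only about log2 of the number of leaves comparisons.

```python
import math

n_train, dims = 338, 2           # vowel-problem sizes quoted in the text
tree_leaves = 32                 # assumed size of a pruned tree

knn_ops = n_train * dims         # one distance term per stored coordinate
tree_ops = math.ceil(math.log2(tree_leaves))  # one comparison per tree level

print(f"serial kNN:  ~{knn_ops} multiply-adds per classification")
print(f"binary tree: ~{tree_ops} scalar comparisons per classification")
```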
4 Trade-Offs Between Performance Criteria
No one classifier outperformed the rest on all performance criteria. The selection
of a "best" classifier depends on practical problem constraints which differ across
problems. Without knowing these constraints or associating explicit costs with
various performance criteria, a classifier that is "best" can not be meaningfully
determined. Instead, there are numerous trade-off relationships between various
criteria.
One trade-off shown in Fig. 5 is classification memory usage versus the complexity
of the training algorithm. The far upper left corner, where training is very simple
and memory is not efficiently utilized, contains the k-nearest neighbor classifier. In
contrast, the binary decision tree classifier is in the lower right corner, where the
overall memory usage is minimized and the training process is very complex. Other
classifiers are intermediate.
Figure 6: Training time versus classification memory usage (bytes) of all classifiers on the vowel problem.
Figure 6 shows the relationship between training time and classification memory
usage for the vowel problem. The k-nearest neighbor classifier consistently provides
the shortest training time but requires the most memory. The hypersphere classifier optimizes these two criteria well across all four problems. Back-propagation
classifiers frequently require long training times and require intermediate amounts
of memory.
5 Summary
This study explored practical characteristics of neural net and conventional pattern
classifiers. Results demonstrate that classification error rates can be equivalent
across classifiers when classifiers are powerful enough to form minimum error decision regions, when they are rigorously tuned, and when sufficient training data
is provided. Practical characteristics such as training time, memory requirements,
and classification time, however, differed by orders of magnitude. In practice, these
factors are more likely to affect classifier selection. Selection will often be driven
by practical considerations concerning memory and computation resources, restrictions on training, test, and adaptation times, and ease of use and implementation.
The many existing neural net and conventional classifiers allow system designers to
trade these characteristics off. Trade-offs will vary with implementation hardware
(e.g. serial versus parallel, analog versus digital) and details of the problem (e.g.
dimension of the input vector, complexity of decision regions). Our current research
efforts are exploring these tradeoff's on more difficult problems and studying additional classifiers including radial-basis-function classifiers, high-order networks, and
Gaussian mixture classifiers.
References
[1] A. R. Barron and R. L. Barron. Statistical learning networks: A unifying view. In
1988 Symposium on the Interface: Statistics and Computing Science, Reston, Virginia, April 21-23 1988.
[2] B. G. Batchelor. Classification and data analysis in vector space. In B. G. Batchelor,
editor, Pattern Recognition, chapter 4, pages 67-116. Plenum Press, London, 1978.
[3] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. Classification and
Regression Trees. Wadsworth International Group, Belmont, CA, 1984.
[4] L. W. Chan and F. Fallside. An adaptive training algorithm for back propagation
networks. Computer Speech and Language, 2:205-218, 1987.
[5] J. H. Friedman, J. L. Bentley, and R. A. Finkel. An algorithm for finding best
matches in logarithmic expected time. ACM Transactions on Mathematical Software,
3(3):209-226, September 1977.
[6] W. M. Huang and R. P. Lippmann. Neural net and traditional classifiers. In D. Anderson, editor, Neural Information Processing Systems, pages 387-396, New York,
1988. American Institute of Physics.
[7] William Y. Huang and Richard P. Lippmann. Comparisons between conventional
and neural net classifiers. In 1st International Conference on Neural Networks, pages
IV-485. IEEE, June 1987.
[8] R. A. Jacobs. Increased rates of convergence through learning rate adaptation. Neural
Networks, 1:295-307, 1988.
[9] Yuchun Lee. Classifiers: Adaptive modules in pattern recognition systems. Master's
thesis, Massachusetts Institute of Technology, Department of Electrical Engineering
and Computer Science, Cambridge, MA, May 1989.
[10] R. P. Lippmann. Pattern classification using neural networks. IEEE Communications
Magazine, 27(11):47-54, November 1989.
[11] Richard P. Lippmann and Ben Gold. Neural classifiers useful for speech recognition.
In 1st International Conference on Neural Networks, pages IV-417. IEEE, June 1987.
[12] W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, editors. Numerical
Recipes. Cambridge University Press, New York, 1986.
[13] D. L. Reilly, L. N. Cooper, and C. Elbaum. A neural model for category learning.
Biological Cybernetics, 45:35-41, 1982.
[14] M. Stone. Cross-validation choice and assessment of statistical predictions. Journal
of the Royal Statistical Society, B-36:111-147, 1974.
1,750 | 2,590 | Log-concavity results on Gaussian process
methods for supervised and unsupervised
learning
Liam Paninski
Gatsby Computational Neuroscience Unit
University College London
[email protected]
http://www.gatsby.ucl.ac.uk/~liam
Abstract
Log-concavity is an important property in the context of optimization,
Laplace approximation, and sampling; Bayesian methods based on Gaussian process priors have become quite popular recently for classification,
regression, density estimation, and point process intensity estimation.
Here we prove that the predictive densities corresponding to each of these
applications are log-concave, given any observed data. We also prove
that the likelihood is log-concave in the hyperparameters controlling the
mean function of the Gaussian prior in the density and point process intensity estimation cases, and the mean, covariance, and observation noise
parameters in the classification and regression cases; this result leads to
a useful parameterization of these hyperparameters, indicating a suitably
large class of priors for which the corresponding maximum a posteriori
problem is log-concave.
Introduction
Bayesian methods based on Gaussian process priors have recently become quite popular
for machine learning tasks (1). These techniques have enjoyed a good deal of theoretical
examination, documenting their learning-theoretic (generalization) properties (2), and developing a variety of efficient computational schemes (e.g., (3?5), and references therein).
We contribute to this theoretical literature here by presenting results on the log-concavity of
the predictive densities and likelihood associated with several of these methods, specifically
techniques for classification, regression, density estimation, and point process intensity estimation. These results, in turn, imply that it is relatively easy to tune the hyperparameters
for, approximate the posterior distributions of, and sample from these models.
Our results are based on methods which we believe will be applicable more widely in machine learning contexts, and so we give all necessary details of the (fairly straightforward)
proof techniques used here.
Log-concavity background
We begin by discussing the log-concavity property: its uses, some examples of log-concave
(l.c.) functions, and the key theorem on which our results are based. Log-concavity is
perhaps most important in a maximization context: given a real function f of some vector parameter $\vec\theta$, if $g(f(\vec\theta))$ is concave for some invertible function g, and the parameters $\vec\theta$ live in some convex set, then f is unimodal, with no non-global local maxima. (Note that in
this case a global maximum, if one exists, is not necessarily unique, but maximizers of f
do form a convex set, and hence maxima are essentially unique in a sense.) Thus ascent
procedures for maximization can be applied without fear of being trapped in local maxima;
this is extremely useful when the space to be optimized over is high-dimensional. This
logic clearly holds for any arbitrary rescaling g; of course, we are specifically interested in
g(t) = log t, since logarithms are useful in the context of taking products (in a probabilistic
context, read conditional independence): log-concavity is preserved under multiplication,
since the logarithm converts multiplication into addition and concavity is preserved under
addition.
Log-concavity is also useful in the context of Laplace (central limit theorem - type) approximations (3), in which the logarithm of a function (typically a probability density or
likelihood function) is approximated via a second-order (quadratic) expansion about its
maximum or mean (6); this log-quadratic approximation is a reasonable approach for functions whose logs are known to be concave.
Finally, l.c. distributions are in general easier to sample from than arbitrary distributions,
as discussed in the context of adaptive rejection and slice sampling (7, 8) and the random-walk-based samplers analyzed in (9).
We should note that log-concavity is not a generic property: l.c. probability densities necessarily have exponential tails (ruling out power law tails, and more generally distributions
with any infinite moments). Log-concavity also induces a certain degree of smoothness;
for example, l.c. densities must be continuous on the interior of their support. See, e.g., (9)
for more detailed information on the various special properties implied by log-concavity.
A few simple examples of l.c. functions are as follows: the Gaussian density in any dimension; the indicator of any convex set (e.g., the uniform density over any convex, compact
set); the exponential density; the linear half-rectifier. More interesting well-known examples include the determinant of a matrix, or the inverse partition function of an energy-based probabilistic model (e.g., an exponential family), $Z^{-1}(\vec\theta) = (\int e^{f(\vec x,\vec\theta)}\, d\vec x)^{-1}$, l.c. in $\vec\theta$ whenever $f(\vec x,\vec\theta)$ is convex in $\vec\theta$ for all $\vec x$. Finally, log-concavity is preserved under taking
products (as noted above), affine translations of the domain, and/or pointwise limits, since
concavity is preserved under addition, affine translations, and pointwise limits, respectively.
Sums of l.c. functions are not necessarily l.c., as is easily shown (e.g., a mixture of Gaussians with widely-separated means, or the indicator of the union of disjoint convex sets).
However, a key theorem (10, 11) gives:
Theorem (Integrating out preserves log-concavity). If $f(\vec x, \vec y)$ is jointly l.c. in $(\vec x, \vec y)$, for $\vec x$ and $\vec y$ finite dimensional, then
$$f_0(\vec x) \equiv \int f(\vec x, \vec y)\, d\vec y$$
is l.c. in $\vec x$.
Think of ~y as a latent variable or hyperparameter we want to marginalize over. This
very useful fact has seen applications in various branches of statistics and operations research, but does not seem well-known in the machine learning community. The theorem
implies, for example, that convolutions of l.c. functions are l.c.; thus the random vectors
with l.c. densities form a vector space. Moreover, indefinite integrals of l.c. functions are
l.c.; hence the error function, and more generally the cumulative distribution function of
any l.c. density, is l.c., which is useful in the setting of generalized linear models (12) for
classification. Finally, the mass under a l.c. probability measure of a convex set which
is translated in a convex manner is itself a l.c. function of the convex translation parameter (11).
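The theorem is also easy to probe numerically. The following minimal sketch uses a correlated-Gaussian example density and grid of our own choosing: tabulate a jointly l.c. f(x, y), marginalize out y with a Riemann sum, and check that log f0 has nonpositive second differences (discrete concavity).

```python
import numpy as np

x = np.linspace(-3, 3, 301)
y = np.linspace(-3, 3, 301)
X, Y = np.meshgrid(x, y, indexing="ij")

# A jointly log-concave example density (correlated Gaussian).
f = np.exp(-0.5 * (X**2 + Y**2 + X * Y))

f0 = f.sum(axis=1) * (y[1] - y[0])       # marginalize out y (Riemann sum)
second_diff = np.diff(np.log(f0), n=2)   # discrete second derivative
print("max second difference of log f0:", second_diff.max())  # negative: l.c.
```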
Gaussian process methods background
We now give a brief review of Gaussian process methods. Our goals are modest; we will
do little more than define notation. See, e.g., (1) and references for further details. Gaussian process methods are based on a Bayesian "latent variable" approach: dependencies
between the observed input and output data {~ti } and {~yi } are modeled as arising through a
hidden (unobserved) Gaussian process G(~t). Recall that a Gaussian process is a stochastic
process whose finite-dimensional projections are all multivariate Gaussian, with means and
covariances defined consistently for all possible projections, and is therefore specified by
its mean $\mu(\vec t)$ and covariance function $C(\vec t_1, \vec t_2)$.
The applications we will consider may be divided into two settings: "supervised" and "unsupervised" problems. We discuss the somewhat simpler unsupervised case first (however,
it should be noted that the supervised cases have received significantly more attention in
the machine learning literature to date, and might be considered of more importance to this
community).
Density estimation: We are given unordered data {~ti }; the setup is valid for any sample
space, but assume $\vec t_i \in \Re^d$, $d < \infty$, for concreteness. We model the data as i.i.d. samples
from an unknown distribution p. The prior over these unknown distributions, in turn, is
modeled as a conditioned Gaussian process, $p \sim G(\vec t)$: p is drawn from a Gaussian process $G(\vec t)$ of mean $\mu(\vec t)$ and covariance C (to ensure that the resulting random measures are
well-defined, we will assume throughout that G is moderately well-behaved; almost-sure
local Lebesgue integrability is sufficient), conditioned to be nonnegative and to integrate
to one over some arbitrarily large compact set (the latter by an obvious limiting argument,
to prevent conditioning on a set of measure zero; the introduction of the compact set is to
avoid problems of the sort encountered when trying to define uniform probability measures
on unbounded spaces) with respect to some natural base measure on the sample space (e.g.,
Lebesgue measure in $\Re^d$) (13). It is worth emphasizing that this setup differs somewhat from some earlier proposals (5,14,15), which postulated that nonnegativity be enforced by, e.g., modeling $\log p$ or $\sqrt{p}$ as Gaussian, instead of the Gaussian p here; each approach has
its own advantages, and it is unclear at the moment whether our results can be extended to
this context (as will be clear below, the roadblock is in the normalization constraint, which
is transformed nonlinearly along with the density in the nonlinear warping setup).
Point process intensity estimation: A nearly identical setup can be used if we assume the
data {~ti } represent a sample from a Poisson process with an unknown underlying intensity
function (16?18); the random density above is simply replaced by the random intensity
function here (this type of model is known as a Cox, or doubly-stochastic, process in the
point-process literature). The only difference is that intensity functions are not required to
be normalized, so we need only condition the Gaussian process G(~t) from which we draw
the intensity functions to be nonnegative. It turns out we will be free to use any l.c. and
convex warping of the range space of the Gaussian process G(~t) to enforce positivity;
suitable warpings include exponentiation (corresponding to modeling the logarithm of the
intensity as Gaussian (17)) or linear half-rectification.
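As a concrete illustration of this unsupervised setup, here is a minimal sampling sketch; the squared-exponential prior covariance, grid, time window, and exponential warping are our own illustrative choices. Draw G on a grid, warp it into a nonnegative intensity, and simulate the resulting Cox process by thinning a homogeneous Poisson process.

```python
import numpy as np

rng = np.random.default_rng(2)
grid = np.linspace(0, 10, 400)

# Draw G on a grid from an assumed squared-exponential GP prior.
C = np.exp(-0.5 * (grid[:, None] - grid[None, :]) ** 2 / 1.0**2)
G = rng.multivariate_normal(np.zeros(len(grid)), C + 1e-8 * np.eye(len(grid)))

lam = np.exp(G)                     # convex, l.c. warping f = exp
lam_max = lam.max()

# Thinning: simulate a homogeneous Poisson process at rate lam_max,
# then keep each candidate point with probability lam(t)/lam_max.
n_cand = rng.poisson(lam_max * 10.0)
cand = rng.uniform(0, 10, size=n_cand)
keep = rng.uniform(size=n_cand) < np.interp(cand, grid, lam) / lam_max
events = np.sort(cand[keep])
print(len(events), "events from the doubly-stochastic (Cox) process")
```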
The supervised cases require a few extra ingredients. We are given paired data, inputs $\{\vec t_i\}$ with corresponding outputs $\{\vec y_i\}$. We model the outputs as noise-corrupted observations from the Gaussian process $\vec G(\vec t)$ at the points $\{\vec t_i\}$; denote the additional hidden "observation" noise process as $\{\vec n(\vec t_i)\}$. This noise process is not always taken to be Gaussian; for computational reasons, $\{\vec n(\vec t_i)\}$ is typically assumed i.i.d., and also independent of $\vec G(\vec t)$, but both of these assumptions will be unnecessary for the results stated below.
Regression: We assume $\vec y(\vec t_i) = \vec G(\vec t_i) + \sigma_i \vec n(\vec t_i)$; in words, draw $\vec G(\vec t)$ from a Gaussian process of mean $\vec\mu(\vec t)$ and covariance C; the outputs are then obtained by sampling this function $\vec G(\vec t)$ at $\vec t_i$ and adding noise $\vec n(\vec t_i)$ of scale $\sigma_i$.
Classification: $y(\vec t_i) = 1\big(G(\vec t_i) + \sigma_i n(\vec t_i) > 0\big)$, where 1(.) denotes the indicator function of an event. This case is as in the regression model, except we only observe a binary-thresholded version of the real output.
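For concreteness, here is a minimal generative sketch of the two supervised models; the squared-exponential covariance, zero mean function, and noise scales below are our own illustrative assumptions. Draw G at the inputs from the Gaussian prior, then emit noisy values (regression) or their thresholded signs (classification).

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 5, size=40))            # input points t_i

# Assumed squared-exponential covariance and zero mean function.
C = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / 0.5**2)
G = rng.multivariate_normal(np.zeros(len(t)), C + 1e-8 * np.eye(len(t)))

sigma = 0.1 * np.ones(len(t))                       # observation noise scales
n = rng.normal(size=len(t))

y_regression = G + sigma * n                        # y(t_i) = G(t_i) + sigma_i n(t_i)
y_classification = (G + sigma * n > 0).astype(int)  # 1(G(t_i) + sigma_i n(t_i) > 0)
print(y_regression[:5], y_classification[:5])
```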
Results
Our first result concerns the predictive densities associated with the above models: the posterior density of any continuous linear functional of G(~t), given observed data D = {~ti }
and/or {yi }, under the Gaussian process prior for G(~t). The simplest and most important case of such a linear projection is the projection onto a finite collection of coordinates, {~tpred }, say; in this special case, the predictive density is the posterior density
p({G(~tpred )}|D). It turns out that all we need to assume is the log-concavity of the distribution p(G, ~n); this is clearly more general than what is needed for the strictly Gaussian
cases considered above (for example, Laplacian priors on G are permitted, which could
lead to more robust performance). Also note that dependence of (G, ~n) is allowed; this
permits, for example, coupling of the effective scales of the observation noise ~ni = ~n(~ti )
for nearby points $\vec t_i$. Additionally, we allow nonstationarity and anisotropic correlations in
G. The result applies for any of the applications discussed above.
Proposition 1 (Predictive density). Given a l.c. prior on (G, ~n), the predictive density is
always l.c., for any data D.
In other words, conditioning on data preserves these l.c. processes (where an l.c. process,
like a Gaussian process, is defined by the log-concavity of its finite-dimensional projections). This represents a significant generalization of the obvious fact that in the regression
setup under Gaussian noise, conditioning preserves Gaussian processes.
Our second result applies to the likelihood of the hyperparameters corresponding to the
above applications: the mean function $\mu$, the covariance function C, and the observation noise scales $\{\sigma_i\}$. We first state the main result in some generality, then provide some useful examples and interpretation below. For each j > 0, let $A_{j,\vec\theta}$ denote a family of linear maps from some finite-dimensional vector space $G_j$ to $\Re^{N d_G}$, where $d_G = \dim(\vec G(\vec t_i))$ and N is the number of observed data points. Our main assumptions are as follows: first, assume $A_{j,\vec\theta}^{-1}$ may be written $A_{j,\vec\theta}^{-1} = \sum_k \theta_k K_{j,k}$, where $\{K_{j,k}\}$ is a fixed set of matrices and the inverse is defined as a map from $\mathrm{range}(A_{j,\vec\theta})$ to $G_j/\ker(A_{j,\vec\theta})$. Second, assume that $\dim(A_{j,\vec\theta}^{-1}(V))$ is constant in $\vec\theta$ for any set V. Finally, equip the (doubly) latent space $G_j \times \Re^{N d_G} = \{(G_L, \vec n)\}$ with a translation family of l.c. measures $p_{j,\mu_L}(G_L, \vec n)$ indexed by the mean parameter $\mu_L$, i.e., $p_{j,\mu_L}(G_L, \vec n) = p_j((G_L, \vec n) - \mu_L)$, for some fixed measure $p_j(.)$. Then if the sequence $p_j(G, \vec n)$ induced by $p_j$ and $A_j$ converges pointwise to the joint density $p(G, \vec n)$, then:
Proposition 2 (Likelihood). In the supervised cases, the likelihood is jointly l.c. in the latent mean function, covariance parameters, and inverse noise scales $(\mu_L, \vec\theta, \{\sigma_i^{-1}\})$, for all data D. In the unsupervised cases, the likelihood is l.c. in the mean function $\mu$.
Note that the mean function $\mu(\vec t)$ is induced in a natural way by $\mu_L$ and $A_{j,\vec\theta}$, and that we allow the noise scale parameters $\{\sigma_i\}$ to vary independently, increasing the robustness of the supervised methods (19) (since outliers can be "explained," without large perturbations of the underlying predictive distributions of $G(\vec t)$, by simply increasing the corresponding noise scale $\sigma_i$). Of course, in practice, it is likely that to avoid overfitting one would want to reduce the effective number of free parameters by representing $\mu(\vec t)$ and $\vec\theta$ in finite-dimensional spaces, and restricting the freedom of the inverse noise scales $\{\sigma_i\}$. The log-concavity in the mean function $\mu(\vec t)$ demonstrated here is perhaps most useful in the point process setting, where $\mu(\vec t)$ can model the effect of excitatory or inhibitory inputs on the intensity function, with spatially- or temporally-varying patterns of excitation, and/or self-excitatory interactions between observation sites $\vec t_i$ (by letting $\mu(\vec t)$ depend on the observed points $\vec t_i$ (16, 20)).
In the special case that the l.c. prior measure pj is taken to be Gaussian with covariance
C0 , the main assumption here is effectively on the parameterization of the covariance C;
ignoring the (technical) limiting operation in j for the moment, we are assuming roughly
that there exists a single basis in which, for all allowed $\vec\theta$, the covariance may be written $C = A_{\vec\theta} C_0 A_{\vec\theta}^t$, where $A_{\vec\theta}$ is of the special form described above.
We may simplify further by assuming that C0 is white and stationary. One important
example of a suitable two-parameter family of covariance kernels satisfying the conditions of Proposition 2 is provided by the Ornstein-Uhlenbeck kernels (which correspond to
exponentially-filtered one-dimensional white noise):
$$C(t_1, t_2) = \sigma^2 e^{-2|t_1 - t_2|/\tau}$$
For this kernel, one can parameterize $C = A_{\vec\theta} A_{\vec\theta}^t$, with $A_{\vec\theta}^{-1} = \theta_1 I - \theta_2 D$, where I and D denote the identity and differential operators, respectively, and $\theta_k > 0$ to ensure that C is positive-definite. (To derive this reparameterization, note that $C(|t_1 - t_2|)$ solves $(I - aD^2)C(|t_1 - t_2|) = b\delta(t)$, for suitable constants a, b.) Thus Proposition 2 generalizes a recent neuroscientific result: the likelihood for a certain neural model (the leaky
integrate-and-fire model driven by Gaussian noise, for which the corresponding covariance
is Ornstein-Uhlenbeck) is l.c. (21, 22) (of course, in this case the model was motivated by
biophysical instead of learning-theoretic concerns).
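A discretized sketch of this parameterization follows; the grid size, the θ values, the forward-difference stencil for D, and the neglect of boundary effects are all our own choices. Build $A_{\vec\theta}^{-1} = \theta_1 I - \theta_2 D$ on a grid, invert, and form $C = A_{\vec\theta} A_{\vec\theta}^t$; the off-diagonal decay is approximately exponential in |t1 - t2|, as for the Ornstein-Uhlenbeck kernel.

```python
import numpy as np

n, dt = 200, 0.05
theta1, theta2 = 1.0, 0.5

I = np.eye(n)
# Forward-difference discretization of the differential operator D
# (sign and boundary conventions here are illustrative assumptions).
D = (np.diag(np.ones(n - 1), 1) - I) / dt
A = np.linalg.inv(theta1 * I - theta2 * D)   # A_theta^{-1} = theta1 I - theta2 D
C = A @ A.T                                  # C = A_theta A_theta^t

mid = n // 2
row = C[mid] / C[mid, mid]                   # normalized covariance vs. lag
print(row[mid : mid + 4])   # roughly geometric, i.e. ~exp(-|t1-t2| theta1/theta2)
```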
In addition, multidimensional generalizations of this family are straightforward: corresponding kernels solve the Helmholtz problem,
$$(I - a\Delta)C(\vec t) = b\delta(\vec t),$$
with $\Delta$ denoting the Laplacian. Solutions to this problem are well-known: in the isotropic case, we obtain a family of radial Bessel functions, with a, b again setting the overall magnitude and correlation scale of $C(\vec t_1, \vec t_2) = C(||\vec t_1 - \vec t_2||_2)$. Generalizing in a different direction, we could let $A_{\vec\theta}$ include higher-order differential terms, $A_{\vec\theta}^{-1} = \sum_{k=0} \theta_k D^k$; the resulting covariance kernels correspond to higher-order autoregression process priors.
An even broader class of kernel parameterizations may be developed in the spectral domain:
still assuming stationary white noise inputs, we may diagonalize C in the Fourier basis, that is, $C(\vec\omega) = O^t P(\vec\omega) O$, with O the (unitary) Fourier transform operator and $P(\vec\omega)$ the power spectral density. Thus, comparing to the conditions above, if the spectral density may be written as $P(\vec\omega)^{-1} = |\sum_k \theta_k h_k(\vec\omega)|^2$ (where |.| denotes complex magnitude), for $\theta_k > 0$ and functions $h_k(\vec\omega)$ such that $\mathrm{sign}(\mathrm{real}(h_k(\vec\omega)))$ is constant in k for any $\vec\omega$, then the likelihood will be l.c. in $\vec\theta$; $A_{\vec\theta}$ here may be taken as the multiplication operator $O^t(\sum_k \theta_k h_k(\vec\omega))^{-1}O$. Remember that the smoothness of the sample paths of $G(\vec t)$ depends on the rate of decay of the spectral density (1,23); thus we may obtain smoother (or rougher) kernel families by choosing $\sum_k \theta_k h_k(\vec\omega)$ as more rapidly- (or slowly-) increasing.
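A sketch of this spectral construction on a periodic grid follows; the choice $h_k(\omega) = (i\omega)^k$ and the θ values are illustrative assumptions, and the grid periodization is a numerical convenience. Specify $P(\omega)^{-1} = |\sum_k \theta_k h_k(\omega)|^2$, then recover the stationary kernel as the inverse Fourier transform of $P(\omega)$.

```python
import numpy as np

n, dt = 512, 0.05
omega = 2 * np.pi * np.fft.fftfreq(n, d=dt)

# Assumed h_k(omega) = (i omega)^k for k = 0, 1 (illustrative choice).
theta = [1.0, 0.5]
H = theta[0] * (1j * omega) ** 0 + theta[1] * (1j * omega) ** 1
P = 1.0 / np.abs(H) ** 2                     # power spectral density

kernel = np.fft.ifft(P).real                 # stationary C(t1 - t2)
kernel /= kernel[0]
print(kernel[:5])   # exponential-type decay: this P is the OU (Lorentzian) spectrum
```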
Proofs
Predictive density. This proof is a straightforward application of the Prekopa theorem (10).
Write the predictive distributions as
$$p(\{L_k G\}|D) = K(D) \int p(\{L_k G\}, \{G(t_i), n(t_i)\})\, p(\{y_i, t_i\}|\{L_k G\}, \{G(t_i), n(t_i)\}),$$
where {Lk } is a finite set of continuous linear functionals of G, K(D) is a constant that
depends only on the data, the integral is over all {G(ti ), n(ti )}, and {ni , yi } is ignored in
the unsupervised case. Now we need only prove that the multiplicands on the right hand
side above are l.c. The log-concavity of the left term is assumed; the right term, in turn,
can be rewritten as
$$p(\{y_i, t_i\}|\{L_k G\}, \{G(t_i), n(t_i)\}) = p(\{y_i, t_i\}|\{G(t_i), n(t_i)\}),$$
by the Markovian nature of the models. We prove the log-concavity of the right individually
for each of our applications.
In the supervised cases, {ti } is given and so we only need to look at p({yi }|{G(ti ), n(ti )}).
In the classification case, this is simply an indicator of the set
$$\bigcap_i \Big\{ G(t_i) + \sigma_i n_i \;\begin{cases} \le 0, & y_i = 0 \\ > 0, & y_i = 1 \end{cases} \Big\},$$
which is jointly convex in {G(ti ), n(ti )}, completing the proof in this case.
The regression case is proven in a similar fashion: write $p(\{y_i\}|\{G(t_i), n(t_i)\})$ as the limit as $\epsilon \to 0$ of the indicator of the convex set
$$\bigcap_i \big( |G(t_i) + \sigma_i n_i - y_i| < \epsilon \big),$$
then use the fact that pointwise limits preserve log-concavity. (The predictive distributions
of {y(t)} will also be l.c. here, by a nearly identical argument.)
In the density estimation case, the term
$$p(\{t_i\}|\{G(t_i)\}) = \prod_i G(t_i)$$
is obviously l.c. in {G(ti )}. However, recall that we perturbed the distribution of G(t)
in this case as well, by conditioning G(t) to be positive and normalized. The fact that
p({Lk G}, {G(ti )}) is l.c. follows upon writing this term as a marginalization of densities
which are products of l.c. densities with indicators of convex sets (enforcing the linear
normalization and positivity constraints).
Finally, for the point process intensity case, write the likelihood term, as usual,
$$p(\{t_i\}|\{G(t_i)\}) = e^{-\int f(G(\vec t))\, d\vec t} \prod_i f(G(\vec t_i)),$$
where f is the scalar warping function that takes the original Gaussian function G(~t) into
the space of intensity functions. This term is clearly l.c. whenever f (s) is both convex and
l.c. in s; for more details on this class of functions, see e.g. (20).
Likelihood. We begin with the unsupervised cases. In the density estimation case, write
the likelihood as
$$L(\mu) = \int dp_\mu(G)\, 1_C(\{G(\vec t)\}) \prod_i G(\vec t_i),$$
with $p_\mu(G)$ the probability of G under $\mu$. Here $1_C$ is the (l.c.) indicator function of the convex set enforcing the linear constraints (positivity and normalization) on G. All three terms in the integrand on the right are clearly jointly l.c. in $(G, \mu)$. In the point process
case,
$$L(\mu) = \int dp_\mu(G)\, e^{-\int f(G(\vec t))\, d\vec t} \prod_i f(G(\vec t_i));$$
the joint log-concavity of the three multiplicands on the right is again easily demonstrated.
The Prekopa theorem cannot be directly applied here, since the functions $1_C(.)$ and $e^{-\int f(.)}$
depend in an infinite-dimensional way on G and ?; however, we can apply the Prekopa theorem to any finite-dimensional approximation of these functions (e.g., by approximating
the normalization condition and exponential integral by Riemann sums and the positivity
condition at a finite number of points), then obtain the theorem in the limit as the approximation becomes infinitely fine, using the fact that pointwise limits preserve log-concavity.
For the supervised cases, write

L(θ_L, σ, {σ^n}) = lim_j ∫ dp_j(G_L, n) 1( A_{j,ε}^(−1)(G_L + μ_L^G) + σ.(n + μ_L^n) ∈ V )

                 = lim_j ∫ dp_j(G_L, n) 1( (G_L, n) ∈ ( Σ_k σ_k K_{j,k}^(−1) V, σ^(−1).V ) + μ_L ),
with V an appropriate convex constraint set (or limit thereof) defined by the observed data {yi}, σ_L^G and σ_L^n the projections of σ_L into G_j or ℜ^(N dG), respectively, and . denoting pointwise operations on vectors. The result now follows immediately from Rinott's theorem on convex translations of sets under l.c. probability measures (11, 22).
Again, we have not assumed anything more about p(G_L, n) than log-concavity; as before, this allows dependence of G and n, anisotropic correlations, etc. It is worth noting, though, that the above result is somewhat stronger in the supervised case than the unsupervised; the proof of log-concavity in the covariance parameters σ does not seem to generalize easily to the unsupervised setup (briefly, because log(Σ_k σ_k y_k) is not jointly concave in (σ_k, y_k) for all (σ_k, y_k), σ_k y_k > 0, precluding a direct application of the Prekopa or Rinott theorems in the unsupervised case). Extensions to ensure that the unsupervised likelihood is l.c. in σ are possible, but require further restrictions on the form of p(G|σ) and will not be pursued here.
Discussion
We have provided some useful results on the log-concavity of the predictive densities and
likelihoods associated with several common Gaussian process methods for machine learning. In particular, our results preclude the existence of non-global local maxima in these
functions, for any observed data; moreover, Laplace approximations of these functions will
not, in general, be disastrous, and efficient sampling methods are available.
Perhaps the main practical implication of our results stems from our proposition on the
likelihood; we recommend a certain simple way to obtain parameterized families of kernels which respect this log-concavity property. Kernel families which may be obtained
in this manner can range from extremely smooth to singular, and may model anisotropies
flexibly. Finally, these results indicate useful classes of constraints (or more generally, regularizing priors) on the hyperparameters; any prior which is l.c. (or any constraint set which
is convex) in the parameterization discussed here will lead to l.c. a posteriori problems.
More generally, we have introduced some straightforward applications of a useful and interesting theorem. We expect that further applications in machine learning (e.g., in latent
variable models, marginalization of hyperparameters, etc.) will be easy to find.
Acknowledgements: We thank Z. Ghahramani and C. Williams for many helpful conversations. LP is supported by an International Research Fellowship from the Royal Society.
References
1. M. Seeger, International Journal of Neural Systems 14, 1 (2004).
2. P. Sollich, A. Halees, Neural Computation 14, 1393 (2002).
3. C. Williams, D. Barber, IEEE PAMI 20, 1342 (1998).
4. M. Gibbs, D. MacKay, IEEE Transactions on Neural Networks 11, 1458 (2000).
5. L. Csato, Gaussian processes - iterative sparse approximations, Ph.D. thesis, Aston U.
(2002).
6. T. Minka, A family of algorithms for approximate bayesian inference, Ph.D. thesis,
MIT (2001).
7. W. Gilks, P. Wild, Applied Statistics 41, 337 (1992).
8. R. Neal, Annals of Statistics 31, 705 (2003).
9. L. Lovasz, S. Vempala, The geometry of logconcave functions and an O? (n3 ) sampling
algorithm, Tech. Rep. 2003-04, Microsoft Research (2003).
10. A. Prekopa, Acad Sci. Math. 34, 335 (1973).
11. Y. Rinott, Annals of Probability 4, 1020 (1976).
12. P. McCullagh, J. Nelder, Generalized linear models (Chapman and Hall, London,
1989).
13. J. Oakley, A. O?Hagan, Biometrika under review (2003).
14. I. Good, R. Gaskins, Biometrika 58, 255 (1971).
15. W. Bialek, C. Callan, S. Strong, Physical Review Letters 77, 4693 (1996).
16. D. Snyder, M. Miller, Random Point Processes in Time and Space (Springer-Verlag,
1991).
17. J. Moller, A. Syversveen, R. Waagepetersen, Scandinavian Journal of Statistics 25,
451 (1998).
18. I. DiMatteo, C. Genovese, R. Kass, Biometrika 88, 1055 (2001).
19. R. Neal, Monte Carlo implementation of Gaussian process models for Bayesian regression and classification, Tech. Rep. 9702, University of Toronto (1997).
20. L. Paninski, Network: Computation in Neural Systems 15, 243 (2004).
21. J. Pillow, L. Paninski, E. Simoncelli, NIPS 17 (2003).
22. L. Paninski, J. Pillow, E. Simoncelli, Neural Computation 16, 2533 (2004).
23. H. Dym, H. McKean, Fourier Series and Integrals (Academic Press, New York, 1972).
Detecting Significant Multidimensional Spatial
Clusters
Daniel B. Neill, Andrew W. Moore, Francisco Pereira, and Tom Mitchell
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
{neill,awm,fpereira,t.mitchell}@cs.cmu.edu
Abstract
Assume a uniform, multidimensional grid of bivariate data, where each
cell of the grid has a count ci and a baseline bi . Our goal is to find
spatial regions (d-dimensional rectangles) where the ci are significantly
higher than expected given bi . We focus on two applications: detection of
clusters of disease cases from epidemiological data (emergency department visits, over-the-counter drug sales), and discovery of regions of increased brain activity corresponding to given cognitive tasks (from fMRI
data). Each of these problems can be solved using a spatial scan statistic
(Kulldorff, 1997), where we compute the maximum of a likelihood ratio
statistic over all spatial regions, and find the significance of this region
by randomization. However, computing the scan statistic for all spatial
regions is generally computationally infeasible, so we introduce a novel
fast spatial scan algorithm, generalizing the 2D scan algorithm of (Neill
and Moore, 2004) to arbitrary dimensions. Our new multidimensional
multiresolution algorithm allows us to find spatial clusters up to 1400x
faster than the naive spatial scan, without any loss of accuracy.
1 Introduction
One of the core goals of modern statistical inference and data mining is to discover patterns
and relationships in data. In many applications, however, it is important not only to discover
patterns, but to distinguish those patterns that are significant from those that are likely to
have occurred by chance. This is particularly important in epidemiological applications,
where a rise in the number of disease cases in a region may or may not be indicative
of an emerging epidemic. In order to decide whether further investigation is necessary,
epidemiologists must know not only the location of a possible outbreak, but also some
measure of the likelihood that an outbreak is occurring in that region. Similarly, when
investigating brain imaging data, we want to not only find regions of increased activity, but
determine whether these increases are significant or due to chance fluctuations.
More generally, we are interested in spatial data mining problems where the goal is detection of overdensities: spatial regions with high counts relative to some underlying baseline.
In the epidemiological datasets, the count is some quantity (e.g. number of disease cases,
or units of cough medication sold) in a given area, where the baseline is the expected value
of that quantity based on historical data. In the brain imaging datasets, our count is the
total fMRI activation in a given set of voxels under the experimental condition, while our
baseline is the total activation in that set of voxels under the null or control condition.
We consider the case in which data has been aggregated to a uniform, d-dimensional grid.
For the fMRI data, we have three spatial dimensions; for the epidemiological data, we have
two spatial dimensions but also use several other quantities (time, patients' age and gender) as "pseudo-spatial" dimensions; this is discussed in more detail below.
In the general case, let G be a d-dimensional grid of cells, with size N1 × N2 × ... × Nd. Each cell si ∈ G (where i is a d-dimensional vector) is associated with a count ci and a baseline bi. Our goal is to search over all d-dimensional rectangular regions S ⊆ G, and find regions where the total count C(S) = Σ_S ci is higher than expected, given the baseline B(S) = Σ_S bi. In addition to discovering these high-density regions, we must also perform
statistical testing to determine whether these regions are significant. As is necessary in
the scan statistics framework, we focus on finding the single, most significant region; the
method can be iterated (removing each significant cluster once it is found) to find multiple
significant regions.
1.1 Likelihood ratio statistics
Our basic model assumes that counts ci are generated by an inhomogeneous Poisson process with mean qbi , where q (the underlying ratio of count to baseline) may vary spatially.
We wish to detect hyper-rectangular regions S such that q is significantly higher inside S
than outside S. To do so, for a given region S, we assume that q = qin uniformly for cells
si ∈ S, and q = qout uniformly for cells si ∈ G − S. We then test the null hypothesis H0(S): qin ≤ (1 + ε)qout against the alternative hypothesis H1(S): qin > (1 + ε)qout. If ε = 0, this is equivalent to the classical spatial scan statistic [1-2]: we are testing for regions where qin is
greater than qout . However, in many real-world applications (including the epidemiological
and fMRI datasets discussed later) we expect some fluctuation in the underlying baseline;
thus, we do not want to detect all deviations from baseline, but only those where the amount
of deviation is greater than some threshold. For example, a 10% increase in disease cases
in some region may not be interesting to epidemiologists, even if the underlying population
is large enough to conclude that this is a "real" (statistically significant) increase in q. By increasing ε, we can focus the scan statistic on regions with larger ratios of count to baseline. For example, we can use the scan statistic with ε = 0.25 to test for regions where qin is more than 25% higher than qout. Following Kulldorff [1], our spatial scan statistic is the
maximum, over all regions S, of the ratio of the likelihoods under the alternative and null
hypotheses. Taking logs for convenience, we have:
D_ε(S) = log [ sup_{qin > (1+ε)qout} ∏_{si∈S} P(ci ∼ Po(qin bi)) ∏_{si∈G−S} P(ci ∼ Po(qout bi))
             / sup_{qin ≤ (1+ε)qout} ∏_{si∈S} P(ci ∼ Po(qin bi)) ∏_{si∈G−S} P(ci ∼ Po(qout bi)) ]

       = (sgn) [ C(S) log( C(S) / ((1+ε)B(S)) ) + (Ctot − C(S)) log( (Ctot − C(S)) / (Btot − B(S)) ) − Ctot log( Ctot / (Btot + εB(S)) ) ]

where C(S) and B(S) are the count and baseline of the region S under consideration, Ctot and Btot are the total count and baseline of the entire grid G, and sgn = +1 if C(S)/B(S) > (1+ε)(Ctot − C(S))/(Btot − B(S)) and −1 otherwise. Then the scan statistic D_{ε,max} is equal to the maximum D_ε(S)
statistical and computational methods are not limited to the Poisson model given here; any
model of null and alternative hypotheses such that the resulting statistic D(S) satisfies the
conditions given in [4] can be used for the fast spatial scan.
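As a sketch of how this statistic can be evaluated in practice (our rendering, not the authors' code), the following function computes D_ε(S) from a region's count and baseline and the grid totals; the x log x guards at x = 0 are our addition, and we assume 0 < B(S) < Btot.

import math

# Sketch (not the authors' code) of the generalized score for one region,
# given its count C and baseline B, the grid totals C_tot and B_tot, and the
# parameter eps. Assumes 0 < B < B_tot; the x log x guards at x = 0 are ours.
def d_score(C, B, C_tot, B_tot, eps=0.0):
    def xlog(x, y):  # x * log(x / y), taking the x -> 0 limit to be 0
        return x * math.log(x / y) if x > 0 else 0.0
    sgn = 1.0 if C / B > (1.0 + eps) * (C_tot - C) / (B_tot - B) else -1.0
    return sgn * (xlog(C, (1.0 + eps) * B)
                  + xlog(C_tot - C, B_tot - B)
                  - xlog(C_tot, B_tot + eps * B))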
1.2 Randomization testing
Once we have found the highest scoring region S* = arg max_S D(S) of grid G, we must still determine the statistical significance of this region. Since the exact distribution of the test statistic Dmax is only known in special cases, in general we must find the region's p-value by
randomization. To do so, we run a large number R of random replications, where a replica
has the same underlying baselines bi as G, but counts are randomly drawn from the null
hypothesis H0(S*). More precisely, we pick ci ∼ Po(q bi), where q = qin = (1 + ε) Ctot/(Btot + εB(S*)) for si ∈ S*, and q = qout = Ctot/(Btot + εB(S*)) for si ∈ G − S*. The number of replicas G′ with Dmax(G′) ≥ Dmax(G), divided by the total number of replications R, gives us the p-value
for our most significant region S*. If this p-value is less than α (where α is the false positive rate, typically chosen to be 0.05 or 0.1), we can conclude that the discovered region is statistically significant at level α.
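A schematic of this test (interfaces assumed, not the authors' code): scan(g) returns Dmax of a grid g, and replica(g) draws counts under the null H0(S*) fitted to the original grid's most significant region.

# Schematic randomization test (interfaces assumed): scan(g) returns D_max of
# grid g; replica(g) draws a grid with counts sampled under the null H0(S*).
def randomization_p_value(grid, scan, replica, R=100):
    d_obs = scan(grid)
    beats = sum(scan(replica(grid)) >= d_obs for _ in range(R))
    return beats / R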
1.3 The naive spatial scan
The simplest method of finding Dmax is to compute D(S) for all rectangular regions of sizes
k1 × k2 × ... × kd, where 1 ≤ kj ≤ Nj. Since there are a total of ∏_{j=1..d} (Nj − kj + 1) regions of each size, there are a total of O(∏_{j=1..d} Nj²) regions to examine. We can compute D(S) for any region S in constant time, by first finding the count C(S) and baseline B(S), then computing D.¹ This allows us to compute Dmax of a grid G in O(∏_{j=1..d} Nj²) time. However,
significance testing by randomization also requires us to find Dmax for each replica G′,
and compare this to Dmax (G); thus the total complexity is multiplied by the number of
replications R. When the size of the grid is large, as is the case for the epidemiological and
fMRI datasets we are considering, this naive approach is computationally infeasible.
Instead, we apply our "overlap-multiresolution partitioning" algorithm [3-4], generalizing this method from two-dimensional to d-dimensional datasets. This reduces the complexity to O(∏_{j=1..d} Nj log Nj) in cases where the most significant region S* has a sufficiently high ratio of count to baseline, and (as we show in Section 3) typically results in tens to thousands of times speedup over the naive approach. We note that this fast spatial scan algorithm is exact (always finds the correct value of Dmax and the corresponding region S*); the speedup
results from the observation that we do not need to search a given set of regions if we can
prove that none of them have score > Dmax . Thus we use a top-down, branch-and-bound
approach: we maintain the current maximum score of the regions we have searched so far,
calculate upper bounds on the scores of subregions contained in a given region, and prune
regions whose upper bounds are less than the current value of Dmax . When searching a
replica grid, we care only whether Dmax of the replica grid is greater than Dmax (G). Thus
we can use Dmax of the original grid for pruning on the replicas, and can stop searching a
replica if we find a region with score > Dmax (G).
2 Overlap-multiresolution partitioning
As in [4], we use a multiresolution search method which relies on an overlap-kd tree data
structure. The overlap-kd tree, like kd-trees [5] and quadtrees [6], is a hierarchical, space-partitioning data structure. The root node of the tree represents the entire space under
consideration (i.e. the entire grid G), and each other node represents a subregion of the
grid. Each non-leaf node of a d-dimensional overlap-kd tree has 2d children, an "upper" and a "lower" child in each dimension. For example, in three dimensions, a node has six
children: upper and lower children in the x, y, and z dimensions. The overlap-kd tree is
different from the standard kd-tree and quadtree in that adjacent regions overlap: rather
than splitting the region in half along each dimension, instead each child contains more
than half the area of the parent region. For example, a 64 × 64 × 64 grid will have six children: two of size 48 × 64 × 64, two of size 64 × 48 × 64, and two of size 64 × 64 × 48.
¹An old trick makes it possible to compute the count and baseline of any rectangular region in time constant in N: we first form a d-dimensional array of the cumulative counts, then compute each region's count by adding/subtracting at most 2^d cumulative counts. Note that because of the exponential dependence on d, these techniques suffer from the "curse of dimensionality": neither the naive spatial scan, nor the fast spatial scan discussed below, are appropriate for very high dimensional datasets.
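For concreteness, here is the d = 2 instance of the cumulative-count trick from the footnote above, written as a small Python sketch (our illustration):

import numpy as np

# The d = 2 instance of the trick (our illustration): one pass of prefix sums,
# then any rectangle's count costs at most four lookups.
def prefix_sums(counts):
    return counts.cumsum(axis=0).cumsum(axis=1)

def rect_sum(P, x1, y1, x2, y2):  # inclusive corners (x1, y1)..(x2, y2)
    s = P[x2, y2]
    if x1 > 0:
        s -= P[x1 - 1, y2]
    if y1 > 0:
        s -= P[x2, y1 - 1]
    if x1 > 0 and y1 > 0:
        s += P[x1 - 1, y1 - 1]
    return s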
In general, let region S have size k1 × k2 × ... × kd. Then the two children of S in dimension j (for j = 1...d) have size k1 × ... × k_{j−1} × f_j k_j × k_{j+1} × ... × kd, where 1/2 < f_j < 1. This partitioning (for the two-dimensional case, where f1 = f2 = 3/4) is illustrated in Figure 1.
Note that there is a region SC common to all of these children; we call this region the center
of S. When we partition region S in this manner, it can be proved that any subregion of S
either a) is contained entirely in (at least) one of S1 . . . S2d , or b) contains the center region
SC . Figure 1 illustrates each of these possibilities, for the simple case of d = 2.
[Figure 1: Overlap-multires partitioning of region S (for d = 2), showing children S_1..S_4 and center S_C. Any subregion of S either a) is contained in some Si, i = 1...4, or b) contains SC.]
Now we can search all subregions of S by recursively searching S1...S2d, then searching all of the regions contained in S which contain the center SC. There may be a large number of such "outer regions," but since we know that each such region contains the center, we can place very tight bounds on the score of these regions, often allowing us to prune most
or all of them. Thus the basic outline of our search procedure (ignoring pruning, for the
moment) is:
overlap-search(S)
{
    call base-case-search(S)
    define child regions S_1..S_2d, center S_C as above
    call overlap-search(S_i) for i = 1..2d
    for all S' such that S' is contained in S and contains S_C:
        call base-case-search(S')
}
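A Python rendering of the partitioning step (ours; the region representation and function name are assumptions) may make the geometry concrete:

# Sketch (ours) of one partitioning step. A region is a pair (lows, highs) of
# inclusive integer bounds; frac[j] is the fraction f_j for dimension j.
def partition(lows, highs, frac):
    children = []
    c_lo, c_hi = list(lows), list(highs)
    for j in range(len(lows)):
        k = highs[j] - lows[j] + 1
        span = int(round(frac[j] * k))     # child length f_j * k_j, 1/2 < f_j < 1
        lo, hi = list(lows), list(highs)
        hi[j] = lows[j] + span - 1         # "lower" child in dimension j
        children.append((lo, hi))
        lo, hi = list(lows), list(highs)
        lo[j] = highs[j] - span + 1        # "upper" child in dimension j
        children.append((lo, hi))
        c_lo[j] = highs[j] - span + 1      # center: overlap of the two children
        c_hi[j] = lows[j] + span - 1
    return children, (c_lo, c_hi)

# e.g. partition([0, 0], [63, 63], [0.75, 0.75]) returns two 48 x 64 children,
# two 64 x 48 children, and the 32 x 32 center [16..47] x [16..47].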
The fractions f_i are selected based on the current sizes k_i of the region being searched: if k_i = 2^m, then f_i = 3/4, and if k_i = 3·2^m, then f_i = 2/3. For simplicity, we assume that
all Ni are powers of two, and thus all region sizes ki will fall into one of these two cases.
Repeating this partitioning recursively, we obtain the overlap-kd tree structure. For d = 2,
the first two levels of the overlap-kd tree are shown in Figure 2.
Figure 2: The first two levels of the two-dimensional overlap-kd tree. Each node represents a gridded region (denoted by
a thick rectangle) of the entire dataset
(thin square and dots).
The overlap-kd tree has several useful properties, which we present here without proof.
First, for every rectangular region S ⊆ G, either S is a gridded region (contained in the overlap-kd tree), or there exists a unique gridded region S′ such that S is an outer region of S′ (i.e. S is contained in S′, and contains the center region of S′). This means that, if overlap-search is called exactly once for each gridded region², and no pruning is done, then base-case-search will be called exactly once for every rectangular region S ⊆ G. In practice, we will prune many regions, so base-case-search will be called at most once for every rectangular region, and every region will be either searched or pruned. The second nice property of our overlap-kd tree is that the total number of gridded regions is O(∏_{j=1..d} Nj log Nj). This implies that, if we are able to prune (almost) all outer regions, we can find Dmax of the grid in O(∏_{j=1..d} Nj log Nj) time rather than O(∏_{j=1..d} Nj²). In fact, we may not even need to
search all gridded regions, so in many cases the search will be even faster.
²As in [4], we use "lazy expansion" to ensure that gridded regions are not multiply searched.
2.1 Score bounds and pruning
We now consider which regions can be pruned (discarded without searching) during our
multiresolution search procedure. First, given some region S, we must calculate an upper
bound on the scores D(S′) for regions S′ ⊆ S. More precisely, we are interested in two upper bounds: a bound on the score of all subregions S′ ⊆ S, and a bound on the score of the outer subregions of S (those regions contained in S and containing its center SC). If the first bound is less than or equal to Dmax, we can prune region S completely; we do not need to search any (gridded or outer) subregion of S. If only the second bound is less than or equal to Dmax, we do not need to search the outer subregions of S, but we must recursively call overlap-search on the gridded children of S. If both bounds are greater than Dmax, we must both recursively call overlap-search and search the outer regions.

Score bounds are calculated based on various pieces of information about the subregions of S, including: upper and lower bounds bmax, bmin on the baseline of subregions S′; an upper bound dmax on the ratio C/B of S′; an upper bound dinc on the ratio C/B of S′ − SC; and a lower bound dmin on the ratio C/B of S − S′. We also know the count C and baseline B of region S, and the count ccenter and baseline bcenter of region SC. Let cin and bin be the count and baseline of S′. To find an upper bound on D(S′), we must calculate the values of cin and bin which maximize D subject to the given constraints: (cin − ccenter)/(bin − bcenter) ≤ dinc, cin/bin ≤ dmax, (C − cin)/(B − bin) ≥ dmin, and bmin ≤ bin ≤ bmax. The solution to this maximization problem is derived
in [4], and (since scores are based only on count and baseline rather than the size and shape
of the region) it applies directly to the multidimensional case. The bounds on baselines and
ratios C/B are first calculated using global values (as a fast, "first-pass" pruning technique).
For the remaining, unpruned regions, we calculate tighter bounds using the quartering
method of [4], and use these to prune more regions.
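Putting the pieces together, the pruning logic has roughly the following shape (a schematic with assumed interfaces, not the authors' implementation):

# Schematic of the branch-and-bound recursion (interfaces assumed, not the
# authors' implementation). bounds(S) returns (bound over all subregions of S,
# bound over the outer regions of S); anything that cannot beat d_max is pruned.
def overlap_search(S, d_max, score, bounds, gridded_children, outer_regions):
    d_max = max(d_max, score(S))
    all_bound, outer_bound = bounds(S)
    if all_bound <= d_max:
        return d_max                        # prune S and everything inside it
    if outer_bound > d_max:
        for R in outer_regions(S):          # regions of S containing the center
            d_max = max(d_max, score(R))
    for child in gridded_children(S):       # recurse on the 2d children
        d_max = overlap_search(child, d_max, score, bounds,
                               gridded_children, outer_regions)
    return d_max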
2.2 Related work
Our work builds most directly on the results of Kulldorff [1], who presents the two-dimensional spatial scan framework and the classical (ε = 0) likelihood ratio statistic. It
also extends [4], in which we present the two-dimensional fast spatial scan. Our major
extensions in the present work are twofold: the d-dimensional fast spatial scan, and the
generalized likelihood ratio statistics D_ε. A variety of other cluster detection techniques
exist in the literature on epidemiology [1-3, 7-8], brain imaging [9-11], and machine learning [12-15]. The machine learning literature focuses on heuristic or approximate cluster-finding techniques, which typically cannot deal with spatially varying baselines, and more
importantly, give no information about the statistical significance of the clusters found.
Our technique is exact (in that it calculates the maximum of the likelihood ratio statistic
over all hyper-rectangular spatial regions), and uses a powerful statistical test to determine
significance. Nevertheless, other methods in the literature have some advantages over the
present approach, such as applicability to high-dimensional data and fewer assumptions
on the underlying model. The fMRI literature generally tests significance on a per-voxel
basis (after applying some method of spatial smoothing); clusters must then be inferred
by grouping individually significant voxels, and (with the exception of [10]) no per-cluster
false positive rate is guaranteed. The epidemiological literature focuses on detecting significant circular, two-dimensional clusters, and thus cannot deal with multidimensional data
or elongated regions. Detection of elongated regions is extremely important in both epidemiology (because of the need to detect windborne or waterborne pathogens) and brain
imaging (because of the "folded sheet" structure of the brain); the present work, as well as
[4], allow detection of such clusters.
3 Results
We now describe results of our fast spatial scan algorithm on three sets of real-world data:
two sets of epidemiological data (from emergency department visits and over-the-counter
drug sales), and one set of fMRI data. Before presenting these results, we wish to emphasize three main points. First, the extension of scan statistics from two-dimensional to
d-dimensional datasets dramatically increases the scope of problems for which these techniques can be used. In addition to datasets with more than two spatial dimensions (for
example, the fMRI data, which consists of a 3D picture of the brain), we can also examine
data with a temporal component (as in the OTC dataset), or where we wish to take demographic information into account (as in the ED dataset). Second, in all of these datasets, the
use of the broader class of likelihood ratio statistics D_ε (instead of only the classical scan statistic ε = 0) allows us to focus our search on smaller, denser regions rather than slight
(but statistically significant) increases over a large area. Third, as our results here will
demonstrate, the fast spatial scan gains huge performance improvements over the naive
approach, making the use of the scan statistic feasible in these large, real-world datasets.
Our first test set was a database of (anonymized) Emergency Department data collected
from Western Pennsylvania hospitals in the period 1999-2002. This dataset contains a total
of 630,000 records, each representing a single ED visit and giving the latitude and longitude of the patient's home location to the nearest 1/3 mile (a sufficiently low resolution to ensure anonymity). Additionally, a record contains information about the patient's gender and age decile. Thus we map records into a four-dimensional grid, consisting of two spatial dimensions (longitude, latitude) and two "pseudo-spatial" dimensions (patient gender
and age decile). This has several advantages over the traditional (two-dimensional) spatial
scan. First, our test has higher power to detect syndromes which affect differing patient
demographics to different extents. For example, if a disease primarily strikes male infants,
we might find a cluster with gender = male and age decile = 0 in some spatial region, and
this cluster may not be detectable from the combined data. Second, our method accounts
correctly for multiple hypothesis testing. If we were to instead perform a separate test at
level ? on each combination of gender and age decile, the overall false positive rate would
be much higher than ?. We mapped the ED dataset to a 128 ? 128 ? 2 ? 8 grid, with the
first two coordinates corresponding to longitude and latitude, the third coordinate corresponding to the patient?s gender, and the fourth coordinate corresponding to the patient?s
age decile. We tested for spatial clustering of ?recent? disease cases: the count of a cell was
the number of ED visits in that spatial region, for patients of that age and gender, in 2002,
and the baseline was the total number of ED visits in that spatial region, for patients of that
age and gender, over the entire temporal period 1999-2002. We used the D? scan statistic
with values of ? ranging from 0 to 1.0. For the classical scan statistic (? = 0), we found a
region of size 35 ? 34 ? 2 ? 8; thus the most significant region was spatially localized but
cut across all genders and age groups. The region had C = 3570 and B = 6409, as compared
to CB = 0.05 outside the region, and thus this is clearly an overdensity. This was confirmed
by the algorithm, which found the region statistically significant (p-value 0/100). With
the three other values of ?, the algorithm found almost the same region (35 ? 33 ? 2 ? 8,
C = 3566, B = 6390) and again found it statistically significant (p-value 0/100). For all
values of ?, the fast scan statistic found the most significant region hundreds of times faster
than the naive spatial scan (see Table 1): while the naive approach required approximately
12 hours per replication, the fast scan searched each replica in approximately 2 minutes,
plus 100 minutes to search the original grid. Thus the fast algorithm achieved speedups of
235-325x over the naive approach for the entire run (i.e. searching the original grid and
100 replicas) on the ED dataset.
Our second test set was a nationwide database of retail sales of over-the-counter cough
and cold medication. Sales figures were reported by zip code; the data covered 5000 zip
codes across the U.S. In this case, our goal was to see if the spatial distribution of sales in
a given week (February 7-14, 2004) was significantly different than the spatial distribution
of sales during the previous week, and to identify a significant cluster of increased sales if
one exists. Since we wanted to detect clusters even if they were only present for part of the
week, we used the date (Feb. 7-14) as a third dimension. This is similar to the retrospective
Table 1: Performance of algorithm, real-world datasets

test                 ε      sec/orig   sec/rep   speedup   regions (orig)   regions (rep)
ED                   0      6140       126       x235      358M             622K
(128 × 128 × 2 × 8)  0.25   6035       100       x275      352M             339K
(7.35B regions)      0.5    5994       102       x272      348M             362K
                     1.0    5607       79.6      x325      334M             336K
OTC                  0      4453       195       x48       302M             2.46M
(128 × 128 × 8)      0.25   429        123       x90       12.2M            1.39M
(2.45B regions)      0.5    334        51        x210      8.65M            350K
                     1.0    229        5.9       x1400     4.40M            < 10
fMRI                 0      880        384       x7        39.9M            14.0M
(64 × 64 × 16)       0.01   597        285       x9        35.2M            10.4M
(588M regions)       0.02   558        188       x14       33.1M            6.65M
                     0.03   547        97.3      x27       32.3M            3.93M
                     0.04   538        30.0      x77       31.9M            1.44M
                     0.05   538        13.1      x148      31.7M            310K
space-time scan statistic of [16], which also uses time as a third dimension. However,
that algorithm searches over cylinders rather than hyper-rectangles, and thus cannot detect
spatially elongated clusters. The count of a cell was taken to be the number of sales in that
spatial region on that day; to adjust for day-of-week effects, the baseline of a cell was taken
to be the number of sales in that spatial region on the day one week prior (Jan. 31-Feb. 7).
Thus we created a 128 × 128 × 8 grid, where the first two coordinates were derived from
the longitude and latitude of that zip code, and the third coordinate was temporal, based on
the date. For this dataset, the classical scan statistic (ε = 0) found a region of size 123 × 76 from February 7-11. Unfortunately, since the ratio C/B was only 0.99 inside the region (as compared to 0.96 outside) this region would not be interesting to an epidemiologist.
Nevertheless, the region was found to be significant (p-value 0/100) because of the large
total baseline. Thus, in this case, the classical scan statistic finds a large region of very slight
overdensity rather than a smaller, denser region, and thus is not as useful for detecting
epidemics. For ε = 0.25 and ε = 0.5, the scan statistic found a much more interesting region: a 4 × 1 region on February 9 where C = 882 and B = 240. In this region, the number of sales of cough medication was 3.7x its expected value; the region's p-value was computed to be 0/100, so this is a significant overdensity. For ε = 1, the region found was
almost the same, consisting of three of these four cells, with C = 825 and B = 190. Again
the region was found to be significant (p-value 0/100). For this dataset, the naive approach
took approximately three hours per replication. The fast scan statistic took between six
seconds and four minutes per replication, plus ten minutes to search the original grid, thus
obtaining speedups of 48-1400x on the OTC dataset.
Our third and final test set was a set of fMRI data, consisting of two "snapshots" of a subject's brain under null and experimental conditions respectively. The experimental condition was from a test [9] where the subject is given words, one at a time; he must read these words and identify them as verbs or nouns. The null condition is the subject's average brain activity while fixating on a cursor, before any words are presented. Each snapshot consists of a 64 × 64 × 16 grid of voxels, with a reading of fMRI activation for the subset of the
voxels where brain activity is occurring. In this case, the count of a cell is the fMRI activation for that voxel under the experimental condition, and the baseline is the activation for
that voxel under the null condition. For voxels with no brain activity, we have ci = bi = 0. For the fMRI dataset, the amount of change between activated and non-activated regions is small, and thus we used values of ε ranging from 0 to 0.05.
For the classical scan statistic (ε = 0) our algorithm found a 23 × 20 × 11 region, and again found this region significant (p-value 0/100). However, this is another example where the classical scan statistic finds a region which is large (1/4 of the entire brain) and only slightly increased in count: C/B = 1.007 inside the region and C/B = 1.002 outside. For ε = 0.01, we find a more interesting cluster: a 5 × 10 × 1 region in the visual cortex containing four non-zero voxels.³ For this region C/B = 1.052, a large increase, and the region is significant at α = 0.1 (p-value 10/100) though not at α = 0.05. For ε = 0.02, we find the same region, but conclude that it is not significant (p-value 32/100). For ε = 0.03 and ε = 0.04, we find a 3 × 2 × 1 region with C/B = 1.065, but this region is not significant (p-values 61/100 and 89/100 respectively). Similarly, for ε = 0.05, we find a single voxel with C/B = 1.075, but again it is not significant (p-value 91/100). For this dataset, the naive approach took approximately 45 minutes per replication. The fast scan statistic took between
13 seconds and six minutes per replication, thus obtaining speedups of 7-148x on the fMRI
dataset.
Thus we have demonstrated (through tests on a variety of real-world datasets) that the
fast multidimensional spatial scan statistic has significant performance advantages over the
naive approach, resulting in speedups up to 1400x without any loss of accuracy. This makes
it feasible to apply scan statistics in a variety of application domains, including the spatial
and spatio-temporal detection of disease epidemics (taking demographic information into
account), as well as the detection of regions of increased brain activity in fMRI data. We
are currently examining each of these application domains in more detail, and investigating
which statistics are most useful for each domain. The generalized likelihood ratio statistics
presented here are a first step toward this: by adjusting the parameter ε, we can "tune" the
statistic to detect smaller and denser, or larger but less dense, regions as desired, and our
statistical significance test is adjusted accordingly. We believe that the combination of fast
computational algorithms and more powerful statistical tests presented here will enable the
multidimensional spatial scan statistic to be useful in these and many other applications.
References
[1] M. Kulldorff. 1997. A spatial scan statistic. Communications in Statistics: Theory and Methods 26(6), 1481-1496.
[2] M. Kulldorff. 1999. Spatial scan statistics: models, calculations, and applications. In Glaz and Balakrishnan, eds. Scan
Statistics and Applications. Birkhauser: Boston, 303-322.
[3] D. B. Neill and A. W. Moore. 2003. A fast multi-resolution method for detection of significant spatial disease clusters. In
Advances in Neural Information Processing Systems 16.
[4] D. B. Neill and A. W. Moore. 2004. Rapid detection of significant spatial clusters. To be published in Proc. 10th ACM
SIGKDD Intl. Conf. on Knowledge Discovery and Data Mining.
[5] J. L. Bentley. 1975. Multidimensional binary search trees used for associative searching. Comm. ACM 18, 509-517.
[6] R. A. Finkel and J. L. Bentley. 1974. Quadtrees: a data structure for retrieval on composite keys. Acta Informatica 4, 1-9.
[7] S. Openshaw, et al. 1988. Investigation of leukemia clusters by use of a geographical analysis machine. Lancet 1, 272-273.
[8] L. A. Waller, et al. 1994. Spatial analysis to detect disease clusters. In N. Lange, ed. Case Studies in Biometry. Wiley, 3-23.
[9] T. Mitchell et al. 2003. Learning to detect cognitive states from brain images. Machine Learning, in press.
[10] M. Perone Pacifico et al. 2003. False discovery rates for random fields. Carnegie Mellon University Dept. of Statistics,
Technical Report 771.
[11] K. Worsley et al. 2003. Detecting activation in fMRI data. Stat. Meth. in Medical Research 12, 401-418.
[12] R. Agrawal, et al. 1998. Automatic subspace clustering of high dimensional data for data mining applications. Proc.
ACM-SIGMOD Intl. Conference on Management of Data, 94-105.
[13] J. H. Friedman and N. I. Fisher. 1999. Bump hunting in high dimensional data. Statistics and Computing 9, 123-143.
[14] S. Goil, et al. 1999. MAFIA: efficient and scalable subspace clustering for very large data sets. Northwestern University,
Technical Report CPDC-TR-9906-010.
[15] W. Wang, et al. 1997. STING: a statistical information grid approach to spatial data mining. Proc. 23rd Conference on Very
Large Databases, 186-195.
[16] M. Kulldorff. 1998. Evaluating cluster alarms: a space-time scan statistic and brain cancer in Los Alamos. Am. J. Public
Health 88, 1377-1380.
³In a longer run on a different subject, where we iterate the scan statistic to pick out multiple significant regions, we found significant clusters in Broca's and Wernicke's areas in addition to the
visual cortex. This makes sense given the nature of the experimental task; however, more data is
needed before we can draw conclusive cross-subject comparisons.
Schema Learning: Experience-Based
Construction of Predictive Action Models
Michael P. Holmes
College of Computing
Georgia Institute of Technology
Atlanta, GA 30332-0280
[email protected]
Charles Lee Isbell, Jr.
College of Computing
Georgia Institute of Technology
Atlanta, GA 30332-0280
[email protected]
Abstract
Schema learning is a way to discover probabilistic, constructivist, predictive action models (schemas) from experience. It includes methods for finding and using hidden state to make predictions more accurate. We extend the original schema mechanism [1] to handle arbitrary
discrete-valued sensors, improve the original learning criteria to handle
POMDP domains, and better maintain hidden state by using schema predictions. These extensions show large improvement over the original
schema mechanism in several rewardless POMDPs, and achieve very low
prediction error in a difficult speech modeling task. Further, we compare
extended schema learning to the recently introduced predictive state representations [2], and find their predictions of next-step action effects to
be approximately equal in accuracy. This work lays the foundation for a
schema-based system of integrated learning and planning.
1 Introduction
Schema learning1 is a data-driven, constructivist approach for discovering probabilistic action models in dynamic controlled systems. Schemas, as described by Drescher [1], are
probabilistic units of cause and effect reminiscent of STRIPS operators [3]. A schema predicts how specific sensor values will change as different actions are executed from within
particular sensory contexts. The learning mechanism also discovers hidden state features
in order to make schema predictions more accurate.
In this work we have generalized and extended Drescher's original mechanism to learn
more accurate predictions by using improved criteria both for discovery and refinement of
schemas as well as for creation and maintenance of hidden state. While Drescher's work
included mechanisms for action selection, here we focus exclusively on the problem of
learning schemas and hidden state to accurately model the world. In several benchmark
POMDPs, we show that our extended schema learner produces significantly better action
models than the original. We also show that the extended learner performs well on a complex, noisy speech modeling task, and that its prediction accuracy is approximately equal
to that of predictive state representations [2] on a set of POMDPs, with faster convergence.
¹This use of the term schema derives from Piaget's usage in the 1950s; it bears no relation to database schemas or other uses of the term.
2 Schema Learning
Schema learning is a process of constructing probabilistic action models of the environment
so that the effects of agent actions can be predicted. Formally, a schema learner is fitted
with a set of sensors S = {s1 , s2 , . . .} and a set of actions A = {a1 , a2 , . . .} through
which it can perceive and manipulate the environment. Sensor values are discrete: sji
means that si has value j. As it observes the effects of its actions on the environment,
the learner constructs predictive units of sensorimotor cause and effect called schemas. A
ai
schema C ??
R essentially says, ?If I take action ai in situation C, I will see result R.?
Schemas thus have three components: (1) the context C = {c1 , c2 , . . . , cn } , which is a set
of sensor conditions ci, each of the form s_k^j, that must hold for the schema to be applicable, (2) the action
that is taken, and (3) the result, which is a set of sensor conditions R = {r1 , r2 , . . . , rm }
predicted to follow the action. A schema is said to be applicable if its context conditions are
satisfied, activated if it is applicable and its action is taken, and to succeed if it is activated
and the predicted result is observed. Schema quality is measured by reliability, which is the
ai
probability that activation culminates in success: Rel(C ??
R) = prob(Rt+1 |Ct , ai(t) ).
Note that schemas are not rules telling an agent what to do; rather, they are descriptions of
what will happen if the agent takes a particular action in a specific circumstance. Also note
that schema learning has no predefined states such as those found in a POMDP or HMM;
the set of sensor readings is the state. Because one schema's result can set up another schema's context, schemas fit naturally into a planning paradigm in which they are chained
from the current situation to reach sensor-defined goals.
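A minimal rendering of a schema as a data structure (ours, not the authors' code; names are illustrative):

from dataclasses import dataclass

# Minimal rendering (ours, not the authors' code) of a schema: context and
# result map sensor names to required/predicted values; reliability is
# estimated from activation and success counters.
@dataclass
class Schema:
    context: dict   # {} for a context-free schema, or e.g. {"s1": 2}
    action: str
    result: dict    # e.g. {"s3": 1}
    activations: int = 0
    successes: int = 0

    def applicable(self, sensors):
        return all(sensors.get(s) == v for s, v in self.context.items())

    def record(self, succeeded):
        self.activations += 1
        self.successes += int(succeeded)

    def reliability(self):
        return self.successes / self.activations if self.activations else 0.0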
2.1 Discovery and Refinement
Schema learning comprises two basic phases: discovery, in which context-free action/result
schemas are found, and refinement, in which context is added to increase reliability. In
discovery, statistics track the influence of each action ai on each sensor condition s_r^j. Drescher's original schema mechanism accommodated only binary-valued sensors, but we
have generalized it to allow a heterogeneous set of sensors that take on arbitrary discrete
values. In the present work, we assume that the effects of actions are observed on the
subsequent timestep, which leads to the following criterion for discovering action effects:
count(a_t, s^j_{r(t+1)}) > θ_d ,    (1)
where θ_d is a noise-filtering threshold. If this criterion is met, the learner constructs a schema ∅ --ai--> s_r^j, where the empty set, ∅, means that the schema is applicable in any situation. This works in a POMDP because it means that executing ai in some state has caused sensor s_r to give observation j, implying that such a transition exists in the underlying (but
unknown) system model. The presumption is that we can later learn what sensory context
makes this transition reliable. Drescher's original discovery criterion generalizes in the non-binary case to:

prob(s^j_{r(t+1)} | a_t) / prob(s^j_{r(t+1)} | ā_t) > θ_od ,    (2)

where θ_od > 1 and ā_t means a was not taken at time t. Experiments in worlds of known
structure show that this criterion misses many true action effects.
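A sketch of the extended discovery pass (our illustration; the transition format and threshold value are assumptions):

from collections import Counter

# Sketch of the extended discovery criterion: count (action at t, sensor value
# at t+1) co-occurrences and seed a context-free (action, result) pair once a
# count clears the noise threshold theta_d.
def discover(transitions, theta_d=5):
    counts = Counter()
    for action, next_obs in transitions:      # next_obs: sensor -> value at t+1
        for sensor, value in next_obs.items():
            counts[(action, sensor, value)] += 1
    return [(action, {sensor: value})
            for (action, sensor, value), n in counts.items() if n > theta_d]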
When a schema is discovered, it has no context, so its reliability may be low if the effect
occurs only in particular situations. Schemas therefore begin to look for context conditions
Criterion                    Extended Schema Learner                                Original Schema Learner
Discovery                    count(a_t, s^j_{r(t+1)}) > θ_d                         prob(s^j_{r(t+1)} | a_t) / prob(s^j_{r(t+1)} | ā_t) > θ_od;
                                                                                    binary sensors only
Refinement                   Rel(C ∪ {s_c^j} --ai--> R) / Rel(C --ai--> R) > θ_c;   Rel(C ∪ {s_c^j} --ai--> R) / Rel(C --ai--> R) > θ_c;
                             annealed threshold                                     static threshold; binary sensors only
Synthetic Item Creation      0 < Rel(C --ai--> R) < θ_r;                            0 < Rel(C --ai--> R) < θ_r;
                             no context refinement possible                         schema is locally consistent
Synthetic Item Maintenance   predicted by other schemas                             average duration

Table 1: Comparison of extended and original schema learners.
that increase reliability. The criterion for adding s_c^j to the context of C --ai--> R is:

Rel(C ∪ {s_c^j} --ai--> R) / Rel(C --ai--> R) > θ_c ,    (3)
where ?c > 1. In practice we have found it necessary to anneal ?c to avoid adding spurious
ai
context. Once the criterion is met, a child schema C ? ??
R is formed, where C ? = C ?sjc .
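A minimal sketch of this refinement step under criterion (3) might look as follows (our own code; `stats` is an assumed callable returning the empirical reliability of a context/action/result triple, and the caller is assumed to anneal `theta_c` toward 1):

    def refine(parent, candidate_conditions, theta_c, stats):
        """Spawn child schemas whose extra context condition multiplies the
        parent's reliability by more than theta_c (criterion 3)."""
        base = stats(parent.context, parent.action, parent.result)
        children = []
        for sensor, value in candidate_conditions:
            ctx = dict(parent.context)
            ctx[sensor] = value
            if base > 0 and stats(ctx, parent.action, parent.result) / base > theta_c:
                children.append(Schema(ctx, parent.action, dict(parent.result)))
        return children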
2.2 Synthetic Items
In addition to basic discovery and refinement of schemas, a schema learner also discovers
hidden state. Consider the case where no context conditions are found to make a schema
reliable. There must be unperceived environmental factors on which the schema's reliability depends (see [4]). The schema learner therefore creates a new binary-valued virtual
sensor, called a synthetic item, to represent the presence of conditions in the environment
that allow the schema to succeed. This addresses the state aliasing problem by splitting
the state space into two parts, one where the schema succeeds, and one where it does not.
Synthetic items are said to reify the host schemas whose success conditions they represent;
they have value 1 if the host schema would succeed if activated, and value 0 otherwise.
Upon creation, a synthetic item begins to act as a normal sensor, with one exception: the
agent has no way of directly perceiving its value. Creation and state maintenance criteria
thus emerge as the main problems associated with synthetic items.
Drescher originally posited two conditions for the creation of a synthetic item: (1) a schema
must be unreliable, and (2) the schema must be locally consistent, meaning that if it succeeds once, it has a high probability of succeeding again if activated soon afterward. The
second of these conditions formalizes the assumption that a well-behaved environment has
persistence and does not tend to radically change from moment to moment. This was motivated by the desire to capture Piagetian "conservation phenomena." While well-motivated,
we have found that the second condition is simply too restrictive. Our criterion for creating
synthetic items is $0 < \mathrm{Rel}(C \xrightarrow{a_i} R) < \theta_r$, subject to the constraint that the statistics
governing possible additional context conditions have converged. When this criterion is
met, a synthetic item is created and is thenceforth treated as a normal sensor, able to be
incorporated into the contexts and results of other schemas.
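The creation criterion can be sketched as a simple check (again our own code; `stats_converged` stands in for the convergence test on the context statistics, and the returned name is purely illustrative):

    def maybe_create_synthetic_item(schema, theta_r, stats_converged):
        """Create a binary virtual sensor for an unreliable-but-not-hopeless
        schema: 0 < Rel < theta_r, with converged context statistics. The item
        is 1 exactly when the host schema would succeed if activated."""
        rel = schema.reliability()
        if 0.0 < rel < theta_r and stats_converged(schema):
            return f"synthetic[{schema.action}->{sorted(schema.result)}]"
        return None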
A newly created synthetic item is grounded: it represents whatever conditions in the world
allow the host schema to succeed when activated. Thus, upon activation of the host schema,
we retroactively know the state of the synthetic item at the time of activation (1 if the
schema succeeded, 0 otherwise).

[Figure 1: Benchmark problems. (left) The flip system: all transitions are deterministic. (right) The float/reset system: dashed lines represent float transitions, which happen with probability 0.5; solid lines represent deterministic reset transitions.]

Because the synthetic item is treated as a sensor, we can
discover which previous actions led to each synthetic item state, and the synthetic item can
come to be included as a result condition in new schemas. Once we have reliable schemas
that predict the state of a synthetic item, we can begin to know its state non-retroactively,
without having to activate the host schema. The synthetic item's state can potentially be
known just as well as that of the regular sensors, and its addition expands the state representation in just such a way as to make sensory predictions more reliable. Predicted synthetic
item state implicitly summarizes the relevant preceding history: it indicates that one of the
schemas that predicts it was just activated. If the predicting schema also has a synthetic
item in its context, an additional step of history is implied. Such chaining allows synthetic
items to summarize arbitrary amounts of history without explicitly remembering any of it.
This use of schemas to predict synthetic item state is in contrast to [1], which relied on the
average duration of synthetic item states in order to predict them. Table 1 compares our
extended schema learning criteria with Drescher's original criteria.
3 Empirical Evaluation
In order to test the advantages of the extended learning criteria, we compared four versions of schema learning. The first two were basic learners that made no use of synthetic
items, but discovered and refined schemas using our extended criteria in one case, and the
direct generalizations of Drescher?s original criteria in the other. The second pair added the
extended and original synthetic item mechanisms, respectively, to the first pair.
Our first experimental domains are based on those used in [5]. They have a mixture of
transient and persistent hidden state and, though small, are non-trivial (see footnote 2). The flip system
is shown on the left in Figure 1; it features deterministic transitions, hidden state, and
a null action that confounds simplistic history approaches to handling hidden state. The
float/reset system is illustrated on the right side of Figure 1; it features both deterministic
and stochastic transitions, as well as a more complicated hidden state structure. Finally, we
use a modified float/reset system in which the f action from the two right-most states leads
deterministically to their left neighbor; this reveals more about the hidden state structure.
To test predictive power, each schema learner, upon taking an action, uses the most reliable
of all activated schemas to predict what the next value of each sensor will be. If there is
no activation of a reliable schema to predict the value of a particular sensor, its value is
predicted to stay constant. Error is measured as the fraction of incorrect predictions.
In these experiments, actions were chosen uniformly at random, and learning was allowed
to continue throughout (see footnote 3). No learning parameters are changed over time; schemas stop
being created when discovery and refinement criteria cease to generate them. Figure 2
shows the performance in each domain, while Table 2 summarizes the average error.
Footnote 2: E.g., [5] showed that flip is non-trivial because it cannot be modeled exactly by k-Markov models, and its EM-trained POMDP representations require far more than the minimum number of states.
Footnote 3: Note that because a prediction is made before each observation, the observation does not contribute to the learning upon which its predicted value is based.
[Figure 2: Prediction error in several domains. Four panels plot prediction error (y-axis) against timestep (x-axis, 0 to 10,000): flip, float/reset, and modified float/reset each compare the extended and original learners with their baselines, while the speech modeling panel compares 2-context and 3-context schema learners against a weather predictor. Each point represents average error over 100 timesteps. In the speech modeling graph, learning is stopped after approximately 4300 timesteps (shown by a vertical line), after which no schemas are added, though reliabilities continue to be updated.]
Table 2: Average error, calculated over 10 independent runs of 10,000 timesteps each.

  Learner            | flip  | float/reset | modified f/r
  Extended           | 0.020 | 0.136       | 0.00716
  Extended baseline  | 0.331 | 0.136       | 0.128
  Original           | 0.426 | 0.140       | 0.299
  Original baseline  | 0.399 | 0.139       | 0.315
3.1 Speech Modeling
The Japanese vowel dataset [6] contains time-series recordings of nine Japanese speakers
uttering the ae vowel combination 54-118 times. Each data point consists of 12 continuous-valued cepstral coefficients, which we transform into 12 sensors with five discrete values
each. The data is noisy and the dynamics are non-stationary between speakers. Each utterance is divided in half, with the first half treated as the action of speaking a and the latter
half as e. In order to more quickly adapt to discontinuity resulting from changes in speaker,
reliability was calculated using an exponential weighting of more recent observations; each
relevant probability p was updated according to:
$$p_{t+1} = \gamma\, p_t + (1-\gamma) \begin{cases} 1 & \text{if the event occurred at time } t \\ 0 & \text{otherwise} \end{cases} \qquad (4)$$
The parameter $\gamma$ is set equal to the current prediction accuracy so that decreased accuracy
causes faster adaptation. Several modifications were necessary for tractability: (1) schemas
whose reliability fell below a threshold of their parents' reliability were removed, (2) context sizes were, on separate experimental runs, restricted to two and three items, and (3)
the synthetic item mechanisms were deactivated. Figure 2 displays results for this learner
compared to a baseline weather predictor (see footnote 4).
3.2 Analysis
In each benchmark problem, the learners drop to minimum error after no more than 1000
timesteps. Large divergence in the curves corresponds to the creation of synthetic items and
the discovery of schemas that predict synthetic item state. Small divergence corresponds
to differences in discovery and refinement criteria. In flip and modified float/reset, the extended schema learner reaches zero error, having a complete model of the hidden state, and
outperforms all other learners, while the extended basic version outperforms both original
learners. In float/reset, all learners perform approximately equally, reflecting the fact that,
given the hidden stochasticity of this system, the best schema for action r is one that, without reference to synthetic items, gives a prediction of 1. Surprisingly, the original learner
never significantly outperformed its baseline, and even performed worse than the baseline
in flip. This is accounted for by the duration-based maintenance of synthetic items, which
causes the original learner to maintain transient synthetic item state longer than it should.
Prediction-based synthetic item maintenance overcomes this limitation.
The speech modeling results show that schema learning can induce high-quality action
models in a complex, noisy domain. With a maximum of three context conditions, it averaged only 1.2% error while learning, and 1.6% after learning stopped, a large improvement
over the 30.3% error of the baseline weather predictor. Note that allowing three instead
of two context conditions dropped the error from 4.6% to 1.2% and from 9.0% to 1.6% in
the training and testing phases, respectively, demonstrating the importance of incremental
specialization of schemas through context refinement.
All together, these results show that our extended schema learner produces better action
models than the original, and can handle more complex domains. Synthetic items are seen
to effectively model hidden state, and prediction-based maintenance of synthetic item state
is shown to be more accurate than duration-based maintenance in POMDPs. Discovery
of schemas is improved by our criterion, missing fewer legitimate schemas, and therefore
producing more accurate predictions. Refinement using the annealed generalization of the
original criterion performs correctly with a lower false positive rate.
4 Comparison to Predictive State Representations
Predictive state representations (PSRs; [2]), like schema learning, are based on grounded,
sensorimotor predictions that uncover hidden state. Instead of schemas, PSRs rely
on the notion of tests. A test $q$ is a series of alternating actions and observations
$a_0 o_0 a_1 o_1 \ldots a_n o_n$. In a PSR, the environment state is represented as the probabilities that
each of a set of core tests would yield its observations if its actions were executed. These
probabilities are updated at each timestep by combining the current state with the new action/observation pair. In this way, the PSR implicitly contains a sufficient history-based
statistic for prediction, and should overcome aliasing relative to immediate observations.
[2] shows that linear PSRs are at least as compact and general as POMDPs, while [5] shows
that PSRs can learn to accurately maintain their state in several POMDP problems.
A schema is similar to a one-step PSR test, and schema reliability roughly corresponds to
the probability of a PSR test. Schemas differ, however, in that they only specify context and
result incrementally, incorporating incremental history via synthetic items, while PSR tests
incorporate the complete history and full observations (i.e. all sensor readings at once) into a test probability.

Footnote 4: A weather predictor always predicts that values will stay the same as they are presently.

Table 3: Prediction error for PSRs and schema learning on several POMDPs. Error is averaged over 10 epochs of 10,000 timesteps each. Performance differs by less than 2% in every case.

  Problem     | PSR     | Schema Learner | Difference | Schema Learning Steps
  flip        | 0       | 0              | 0          | 10,000
  float/reset | 0.11496 | 0.13369        | 0.01873    | 10,000
  network     | 0.04693 | 0.06457        | 0.01764    | 10,000
  paint       | 0.20152 | 0.21051        | 0.00899    | 30,000

A multi-step test can say more about the current state than a schema, but
is not as useful for regression planning because there is no way to extract the probability
that a particular one of its observations will be obtained. Thus, PSRs are more useful as
Markovian state for reinforcement learning, while schemas are useful for explicit planning.
Note that synthetic items and PSR core test probabilities both attempt to capture a sufficient
history statistic without explicitly maintaining history. This suggests a deeper connection
between the two approaches, but the relationship has yet to be formalized.
We compared the predictive performance of PSRs with that of schema learning on some of
the POMDPs from [5]. One-step PSR core tests can be used to predict observations: as an
action is taken, the probability of each observation is the probability of the one-step core
test that uses the current action and terminates in that observation. We choose the most
probable observation as the PSR prediction. This allows us to evaluate PSR predictions
using the same error measure (fraction of incorrect predictions) as in schema learning (see footnote 5).
In our experiments, the extended schema learner was first allowed to learn until it reached
an asymptotic minimum error (no longer than 30,000 steps). Learning was then deactivated,
and the schema learner and PSR each made predictions over a series of randomly chosen
actions. Table 3 presents the average performance for each approach.
Learning PSR parameters required 1-10 million timesteps [5], while schema learning used
no more than 30,000 steps. Also, learning PSR parameters required access to the underlying POMDP [5], whereas schema learning relies solely on sensorimotor information.
5 Related Work
Aside from PSRs, schema learning is also similar to older work in learning planning operators, most notably that of Wang [7], Gil [8], and Shen [9]. These approaches use observations to learn classical, deterministic STRIPS-like operators in predicate logic environments. Unlike schema learning, they make the strong assumption that the environment
does not produce noisy observations. Wang and Gil further assume no perceptual aliasing.
Other work in this area has attempted to handle noise, but only in the problem of context
refinement. Benson [10] gives his learner prior knowledge about action effects, and the
learner finds conditions to make the effects reliable with some tolerance for noise. One
advantage of Benson?s formalism is that his operators are durational, rather than atomic
over a single timestep. Balac et al. [11] use regression trees to find regions of noisy,
continuous sensor space that cause a specified action to vary in the degree of its effect.
Finally, Shen [9] and McCallum [12] have mechanisms for handling state aliasing. Shen
uses differences in successful and failed predictions to identify pieces of history that reveal
hidden state. His approach, however, is completely noise intolerant. McCallum's UTree
algorithm selectively adds pieces of history in order to maximize prediction of reward.
Footnote 5: Unfortunately, not all the POMDPs from [5] had one-step core tests to cover the probability of
every observation given every action. We restricted our comparisons to the four systems that had at
least two actions for which the probability of all next-step observations could be determined.
This bears a strong resemblance to the history represented by chains of synthetic items, a
connection that should be explored more fully. Synthetic items, however, are for general
sensor prediction, which contrasts with UTree's task-specific focus on reward prediction.
Schema learning, PSRs, and the UTree algorithm are all highly related in this sense of
selectively tracking history information to improve predictive performance.
6 Discussion and Future Work
We have shown that our extended schema learner produces accurate action models for a
variety of POMDP systems and for a complex speech modeling task. The extended schema
learner performs substantially better than the original, and compares favorably in predictive
power to PSRs while appearing to learn much faster. Building probabilistic goal-regression
planning on top of the schemas is a logical next step; however, to succeed with real-world
planning problems, we believe that we need to extend the learning mechanism in several
ways. For example, the schema learner must explicitly handle actions whose effects occur
over an extended duration instead of after one timestep. The learner should also be able to
directly handle continuous-valued sensors. Finally, the current mechanism has no means
of abstracting similar schemas, e.g., to reduce $x_1^1 \xrightarrow{a} x_1^2$ and $x_1^2 \xrightarrow{a} x_1^3$ to $x_1^p \xrightarrow{a} x_1^{p+1}$.
Acknowledgements
Thanks to Satinder Singh and Michael R. James for providing POMDP PSR parameters.
References
[1] G. Drescher. Made-up minds: a constructivist approach to artificial intelligence. MIT Press, 1991.
[2] M. L. Littman, R. S. Sutton, and S. Singh. Predictive representations of state. In Advances in Neural Information Processing Systems, pages 1555-1561. MIT Press, 2002.
[3] R. E. Fikes and N. J. Nilsson. STRIPS: a new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2:189-208, 1971.
[4] C. T. Morrison, T. Oates, and G. King. Grounding the unobservable in the observable: the role and representation of hidden state in concept formation and refinement. In AAAI Spring Symposium on Learning Grounded Representations, pages 45-49. AAAI Press, 2001.
[5] S. Singh, M. L. Littman, N. K. Jong, D. Pardoe, and P. Stone. Learning predictive state representations. In International Conference on Machine Learning, pages 712-719. AAAI Press, 2003.
[6] M. Kudo, J. Toyama, and M. Shimbo. Multidimensional curve classification using passing-through regions. Pattern Recognition Letters, 20(11-13):1103-1111, 1999.
[7] X. Wang. Learning by observation and practice: An incremental approach for planning operator acquisition. In International Conference on Machine Learning, pages 549-557. AAAI Press, 1995.
[8] Y. Gil. Learning by experimentation: Incremental refinement of incomplete planning domains. In International Conference on Machine Learning, pages 87-95. AAAI Press, 1994.
[9] W.-M. Shen. Discovery as autonomous learning from the environment. Machine Learning, 12:143-165, 1993.
[10] S. Benson. Inductive learning of reactive action models. In International Conference on Machine Learning, pages 47-54. AAAI Press, 1995.
[11] N. Balac, D. M. Gaines, and D. Fisher. Using regression trees to learn action models. In IEEE Systems, Man and Cybernetics Conference, 2000.
[12] A. W. McCallum. Reinforcement Learning with Selective Perception and Hidden State. PhD thesis, University of Rochester, 1995.
1,753 | 2,593 | PAC-Bayes Learning of Conjunctions and
Classification of Gene-Expression Data
Mario Marchand
IFT-GLO, Universit?e Laval
Sainte-Foy (QC) Canada, G1K-7P4
[email protected]
Mohak Shah
SITE, University of Ottawa
Ottawa, Ont. Canada,K1N-6N5
[email protected]
Abstract
We propose a "soft greedy" learning algorithm for building small
conjunctions of simple threshold functions, called rays, defined on
single real-valued attributes. We also propose a PAC-Bayes risk
bound which is minimized for classifiers achieving a non-trivial
tradeoff between sparsity (the number of rays used) and the magnitude of the separating margin of each ray. Finally, we test the
soft greedy algorithm on four DNA micro-array data sets.
1 Introduction
An important challenge in the problem of classification of high-dimensional data
is to design a learning algorithm that can often construct an accurate classifier
that depends on the smallest possible number of attributes. For example, in the
problem of classifying gene-expression data from DNA micro-arrays, if one can find
a classifier that depends on a small number of genes and that can accurately predict
if a DNA micro-array sample originates from cancer tissue or normal tissue, then
there is hope that these genes, used by the classifier, may be playing a crucial role
in the development of cancer and may be of relevance for future therapies.
The standard methods used for classifying high-dimensional data are often characterized as either "filters" or "wrappers". A filter is an algorithm used to "filter out"
irrelevant attributes before using a base learning algorithm, such as the support
vector machine (SVM), which was not designed to perform well in the presence of
many irrelevant attributes. A wrapper, on the other hand, is used in conjunction
with the base learning algorithm: typically removing recursively the attributes that
have received a small "weight" by the classifier obtained from the base learner.
The recursive feature elimination method is an example of a wrapper that was used
by Guyon et al. (2002) in conjunction with the SVM for classification of micro-array
data. For the same task, Furey et al. (2000) have used a filter which consists of
ranking the attributes (gene expressions) as function of the difference between the
positive-example mean and the negative-example mean. Both filters and wrappers
have sometimes produced good empirical results but they are not theoretically justified. What we really need is a learning algorithm that has provably good guarantees
in the presence of many irrelevant attributes. One of the first learning algorithms
proposed by the COLT community has such a guarantee for the class of conjunctions:
if there exists a conjunction that depends on $r$ out of the $n$ input attributes
and that correctly classifies a training set of m examples, then the greedy covering
algorithm of Haussler (1988) will find a conjunction of at most r ln m attributes that
makes no training errors. Note the absence of dependence on the number n of input
attributes. In contrast, the mistake-bound of the Winnow algorithm (Littlestone,
1988) has a logarithmic dependence on n and will build a classifier on all the n
attributes.
Motivated by this theoretical result and by the fact that simple conjunctions of gene
expression levels seems an interesting learning bias for the classification of DNA
micro-arrays, we propose a ?soft greedy? learning algorithm for building small conjunctions of simple threshold functions, called rays, defined on single real-valued
attributes. We also propose a PAC-Bayes risk bound which is minimized for classifiers achieving a non-trivial tradeoff between sparsity (the number of rays used) and
the magnitude of the separating margin of each ray. Finally, we test the proposed
soft greedy algorithm on four DNA micro-array data sets.
2 Definitions
The input space $\mathcal{X}$ consists of all $n$-dimensional vectors $\mathbf{x} = (x_1, \ldots, x_n)$ where
each real-valued component $x_i \in [A_i, B_i]$ for $i = 1, \ldots, n$. Hence, $A_i$ and $B_i$ are,
respectively, the a priori lower and upper bounds on values for $x_i$. The output
space $\mathcal{Y}$ is the set of classification labels that can be assigned to any input vector
$\mathbf{x} \in \mathcal{X}$. We focus on binary classification problems, thus $\mathcal{Y} = \{0, 1\}$. Each
example $\mathbf{z} = (\mathbf{x}, y)$ is an input vector $\mathbf{x}$ with its classification label $y \in \mathcal{Y}$. In the
probably approximately correct (PAC) setting, we assume that each example $\mathbf{z}$ is
generated independently according to the same (but unknown) distribution $D$. The
(true) risk $R(f)$ of a classifier $f : \mathcal{X} \to \mathcal{Y}$ is defined to be the probability that $f$
misclassifies $\mathbf{z}$ on a random draw according to $D$:
$$R(f) \stackrel{\mathrm{def}}{=} \Pr_{(\mathbf{x},y)\sim D}\big(f(\mathbf{x}) \neq y\big) = \mathbf{E}_{(\mathbf{x},y)\sim D}\, I(f(\mathbf{x}) \neq y)$$
where $I(a) = 1$ if predicate $a$ is true and 0 otherwise. Given a training set
$S = (\mathbf{z}_1, \ldots, \mathbf{z}_m)$ of $m$ examples, the task of a learning algorithm is to construct
a classifier with the smallest possible risk without any information about $D$. To
achieve this goal, the learner can compute the empirical risk $R_S(f)$ of any given
classifier $f$ according to:
$$R_S(f) \stackrel{\mathrm{def}}{=} \frac{1}{m}\sum_{i=1}^{m} I(f(\mathbf{x}_i) \neq y_i) \stackrel{\mathrm{def}}{=} \mathbf{E}_{(\mathbf{x},y)\sim S}\, I(f(\mathbf{x}) \neq y)$$
We focus on learning algorithms that construct a conjunction of rays from a training
set. Each ray is just a threshold classifier defined on a single attribute (component)
$x_i$. More formally, a ray is identified by an attribute index $i \in \{1, \ldots, n\}$, a threshold
value $t \in [A_i, B_i]$, and a direction $d \in \{-1, +1\}$ (that specifies whether class 1 is on
the largest or smallest values of $x_i$). Given any input example $\mathbf{x}$, the output $r_{td}^i(\mathbf{x})$
of a ray is defined as:
$$r_{td}^i(\mathbf{x}) \stackrel{\mathrm{def}}{=} \begin{cases} 1 & \text{if } (x_i - t)\,d > 0 \\ 0 & \text{if } (x_i - t)\,d \le 0 \end{cases}$$
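In code, a ray is a one-line classifier; a minimal sketch of the definition above (our own, not code from the paper):

    def ray_output(x_i, t, d):
        """r^i_td(x): 1 iff (x_i - t) * d > 0, with d in {-1, +1} selecting
        which side of threshold t is labeled 1."""
        return 1 if (x_i - t) * d > 0 else 0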
To specify a conjunction of rays we need first to list all the attributes whose ray
is present in the conjunction. For this purpose, we use a vector $\mathbf{i} = (i_1, \ldots, i_{|\mathbf{i}|})$
of attribute indices $i_j \in \{1, \ldots, n\}$ such that $i_1 < i_2 < \ldots < i_{|\mathbf{i}|}$, where $|\mathbf{i}|$ is the
number of indices present in $\mathbf{i}$ (and thus the number of rays in the conjunction; see footnote 1).
To complete the specification of a conjunction of rays, we need a vector $\mathbf{t} = (t_{i_1}, t_{i_2}, \ldots, t_{i_{|\mathbf{i}|}})$ of threshold values and a vector $\mathbf{d} = (d_{i_1}, d_{i_2}, \ldots, d_{i_{|\mathbf{i}|}})$ of directions, where $i_j \in \{1, \ldots, n\}$ for $j \in \{1, \ldots, |\mathbf{i}|\}$. On any input example $\mathbf{x}$, the
output $C_{\mathbf{td}}^{\,\mathbf{i}}(\mathbf{x})$ of a conjunction of rays is given by:
$$C_{\mathbf{td}}^{\,\mathbf{i}}(\mathbf{x}) \stackrel{\mathrm{def}}{=} \begin{cases} 1 & \text{if } r_{t_j d_j}^j(\mathbf{x}) = 1 \ \ \forall j \in \mathbf{i} \\ 0 & \text{if } \exists j \in \mathbf{i} : r_{t_j d_j}^j(\mathbf{x}) = 0 \end{cases}$$
Finally, any algorithm that builds a conjunction can be used to build a disjunction
just by exchanging the role of the positive and negative labelled examples. Due to
lack of space, we describe here only the case of a conjunction.
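A conjunction of rays then outputs 1 only when every ray does; a sketch reusing `ray_output` above (`rays` holds one assumed (attribute index, threshold, direction) triple per ray):

    def conjunction_output(x, rays):
        """C^i_td(x) = 1 iff r^j(x) = 1 for every ray j in the conjunction."""
        return int(all(ray_output(x[i], t, d) for (i, t, d) in rays))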
3 A PAC-Bayes Risk Bound
The PAC-Bayes approach, initiated by McAllester (1999), aims at providing PAC
guarantees to "Bayesian" learning algorithms. These algorithms are specified in
terms of a prior distribution P over a space of classifiers that characterizes our
prior belief about good classifiers (before the observation of the data) and a posterior distribution Q (over the same space of classifiers) that takes into account
the additional information provided by the training data. A remarkable result that
came out from this line of research, known as the "PAC-Bayes theorem", provides
a tight upper bound on the risk of a stochastic classifier called the Gibbs classifier.
Given an input example x, the label GQ (x) assigned to x by the Gibbs classifier
is defined by the following process. We first choose a classifier h according to the
posterior distribution Q and then use h to assign the label h(x) to x. The risk of
$G_Q$ is defined as the expected risk of classifiers drawn according to $Q$:
$$R(G_Q) \stackrel{\mathrm{def}}{=} \mathbf{E}_{h\sim Q}\, R(h) = \mathbf{E}_{h\sim Q}\, \mathbf{E}_{(\mathbf{x},y)\sim D}\, I(h(\mathbf{x}) \neq y)$$
The PAC-Bayes theorem was first proposed by McAllester (2003). The version
presented here is due to Seeger (2002) and Langford (2003).
Theorem 1 Given any space $\mathcal{H}$ of classifiers. For any data-independent prior
distribution $P$ over $\mathcal{H}$ and for any (possibly data-dependent) posterior distribution
$Q$ over $\mathcal{H}$, with probability at least $1 - \delta$ over the random draws of training sets $S$
of $m$ examples:
$$\mathrm{kl}\big(R_S(G_Q)\,\big\|\,R(G_Q)\big) \;\le\; \frac{\mathrm{KL}(Q\|P) + \ln\frac{m+1}{\delta}}{m}$$
where $\mathrm{KL}(Q\|P)$ is the Kullback-Leibler divergence between distributions $Q$ and $P$ (see footnote 2):
$$\mathrm{KL}(Q\|P) \stackrel{\mathrm{def}}{=} \mathbf{E}_{h\sim Q} \ln \frac{Q(h)}{P(h)}$$
and where $\mathrm{kl}(q\|p)$ is the Kullback-Leibler divergence between the Bernoulli distributions with probabilities of success $q$ and $p$:
$$\mathrm{kl}(q\|p) \stackrel{\mathrm{def}}{=} q \ln\frac{q}{p} + (1-q)\ln\frac{1-q}{1-p} \qquad \text{for } q < p$$
Footnote 1: Although it is possible to use up to two rays on any attribute, we limit ourselves here to the case where each attribute can be used for only one ray.
Footnote 2: Here $Q(h)$ denotes the probability density function associated to $Q$, evaluated at $h$.
The bound given by the PAC-Bayes theorem for the risk of Gibbs classifiers can be
turned into a bound for the risk of Bayes classifiers in the following way. Given a
posterior distribution Q, the Bayes classifier BQ performs a majority vote (under
measure Q) of binary classifiers in H. When BQ misclassifies an example x, at least
half of the binary classifiers (under measure Q) misclassifies x. It follows that the
error rate of $G_Q$ is at least half of the error rate of $B_Q$. Hence $R(B_Q) \le 2R(G_Q)$.
In our case, we have seen that ray conjunctions are specified in terms of a mixture
of discrete parameters i and d and continuous parameters t. If we denote by Pi,d (t)
the probability density function associated with a prior P over the class of ray
conjunctions, we consider here priors of the form:
$$P_{\mathbf{i},\mathbf{d}}(\mathbf{t}) = \binom{n}{|\mathbf{i}|}^{-1} p(|\mathbf{i}|)\, \frac{1}{2^{|\mathbf{i}|}} \prod_{j\in\mathbf{i}} \frac{1}{B_j - A_j}\,; \qquad \forall t_j \in [A_j, B_j]$$
If $\mathcal{I}$ denotes the set of all $2^n$ possible attribute index vectors and $\mathcal{D}_\mathbf{i}$ denotes the set
of all $2^{|\mathbf{i}|}$ binary direction vectors $\mathbf{d}$ of dimension $|\mathbf{i}|$, we have that:
$$\sum_{\mathbf{i}\in\mathcal{I}} \sum_{\mathbf{d}\in\mathcal{D}_\mathbf{i}} \prod_{j\in\mathbf{i}} \int_{A_j}^{B_j} dt_j\; P_{\mathbf{i},\mathbf{d}}(\mathbf{t}) = 1 \qquad \text{whenever} \quad \sum_{e=0}^{n} p(e) = 1.$$
The reasons motivating this choice for the prior are the following. The first two
factors come from the belief that the final classifier, constructed from the group of
attributes specified by i, should depend only on the number |i| of attributes in this
group. If we have complete ignorance about the number of rays the final classifier is
likely to have, we should choose $p(e) = 1/(n+1)$ for $e \in \{0, 1, \ldots, n\}$. However, we
should choose a p that decreases as we increase e if we have reasons to believe that
the number of rays of the final classifier will be much smaller than n. The third
factor of Pi,d (t) gives equal prior probabilities for each of the two possible values of
direction dj . Finally, for each ray, every possible threshold value t should have the
same prior probability of being chosen if we do not have any prior knowledge that
would favor some values over the others. Since each attribute value xi is constrained,
a priori, to be in [Ai , Bi ], we have chosen a uniform probability density on [Ai , Bi ]
for each $t_i$ such that $i \in \mathbf{i}$. This explains the last factors of $P_{\mathbf{i},\mathbf{d}}(\mathbf{t})$.
Given a training set S, the learner will choose an attribute group i and a direction
vector $\mathbf{d}$. For each attribute $x_i \in [A_i, B_i]$, $i \in \mathbf{i}$, a margin interval $[a_i, b_i] \subseteq [A_i, B_i]$
will also be chosen by the learner. A deterministic ray-conjunction classifier is then
specified by choosing the threshold values $t_i \in [a_i, b_i]$. It is tempting at this point
to choose $t_i = (a_i + b_i)/2 \ \forall i \in \mathbf{i}$ (i.e., in the middle of each interval). However, we
will see shortly that the PAC-Bayes theorem offers a better guarantee for another
type of deterministic classifier.
The Gibbs classifier is defined with a posterior distribution $Q$ having all its weight
on the same $\mathbf{i}$ and $\mathbf{d}$ as chosen by the learner but where each $t_i$ is uniformly chosen
in $[a_i, b_i]$. The KL divergence between this posterior $Q$ and the prior $P$ is then
given by:
$$\mathrm{KL}(Q\|P) = \prod_{j\in\mathbf{i}} \int_{a_j}^{b_j} \frac{dt_j}{b_j - a_j}\, \ln\frac{\prod_{i\in\mathbf{i}}(b_i - a_i)^{-1}}{P_{\mathbf{i},\mathbf{d}}(\mathbf{t})}
= \ln\binom{n}{|\mathbf{i}|} + \ln\frac{1}{p(|\mathbf{i}|)} + |\mathbf{i}|\ln(2) + \sum_{i\in\mathbf{i}} \ln\left(\frac{B_i - A_i}{b_i - a_i}\right)$$
Hence, we see that the KL divergence between the "continuous components" of $Q$
and $P$ (given by the last term) vanishes when $[a_i, b_i] = [A_i, B_i] \ \forall i \in \mathbf{i}$. Furthermore,
the KL divergence between the "discrete components" of $Q$ and $P$ is small for small
values of $|\mathbf{i}|$ (whenever $p(|\mathbf{i}|)$ is not too small). Hence, this KL divergence between
our choices for $Q$ and $P$ exhibits a tradeoff between margins (large values of $b_i - a_i$)
and sparsity (small value of $|\mathbf{i}|$) for Gibbs classifiers. According to Theorem 1,
the Gibbs classifier with the smallest guarantee of risk $R(G_Q)$ should minimize a
non-trivial combination of $\mathrm{KL}(Q\|P)$ (margins-sparsity tradeoff) and empirical risk
$R_S(G_Q)$.
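The closed-form KL divergence above is cheap to evaluate; a sketch (our own; `p_of_size` is the assumed prior $p(e)$ on the number of rays):

    from math import comb, log

    def kl_q_p(n, intervals, p_of_size):
        """KL(Q||P) = ln C(n,|i|) + ln(1/p(|i|)) + |i| ln 2
                      + sum_i ln((B_i - A_i) / (b_i - a_i)).
        intervals: list of (a_i, b_i, A_i, B_i), one per chosen ray."""
        k = len(intervals)
        kl = log(comb(n, k)) - log(p_of_size(k)) + k * log(2.0)
        for a, b, A, B in intervals:
            kl += log((B - A) / (b - a))
        return kl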
Since the posterior $Q$ is identified by an attribute group vector $\mathbf{i}$, a direction vector
$\mathbf{d}$, and intervals $[a_i, b_i] \ \forall i \in \mathbf{i}$, we will refer to the Gibbs classifier $G_Q$ by $G_{\mathbf{ab}}^{\mathbf{id}}$,
where $\mathbf{a}$ and $\mathbf{b}$ are the vectors formed by the unions of $a_i$s and $b_i$s respectively.
We can obtain a closed-form expression for $R_S(G_{\mathbf{ab}}^{\mathbf{id}})$ by first considering the risk
$R_{(\mathbf{x},y)}(G_{\mathbf{ab}}^{\mathbf{id}})$ on a single example $(\mathbf{x}, y)$, since $R_S(G_{\mathbf{ab}}^{\mathbf{id}}) = \mathbf{E}_{(\mathbf{x},y)\sim S}\, R_{(\mathbf{x},y)}(G_{\mathbf{ab}}^{\mathbf{id}})$. From
our definition for $Q$, we find that:
$$R_{(\mathbf{x},y)}(G_{\mathbf{ab}}^{\mathbf{id}}) = (1 - 2y)\left[\prod_{i\in\mathbf{i}} \varphi_{a_i,b_i}^{d_i}(x_i) \;-\; y\right] \qquad (1)$$
where we have used the following piece-wise linear functions:
$$\varphi_{a,b}^{+}(x) \stackrel{\mathrm{def}}{=} \begin{cases} 0 & \text{if } x < a \\ \frac{x-a}{b-a} & \text{if } a \le x \le b \\ 1 & \text{if } b < x \end{cases}
\qquad
\varphi_{a,b}^{-}(x) \stackrel{\mathrm{def}}{=} \begin{cases} 1 & \text{if } x < a \\ \frac{b-x}{b-a} & \text{if } a \le x \le b \\ 0 & \text{if } b < x \end{cases} \qquad (2)$$
Hence we notice that $R_{(\mathbf{x},1)}(G_{\mathbf{ab}}^{\mathbf{id}}) = 1$ (and $R_{(\mathbf{x},0)}(G_{\mathbf{ab}}^{\mathbf{id}}) = 0$) whenever there exists
$i \in \mathbf{i} : \varphi_{a_i,b_i}^{d_i}(x_i) = 0$. This occurs iff there exists a ray which outputs 0 on $\mathbf{x}$. We
can also verify that the expression for $R_{(\mathbf{x},y)}(C_{\mathbf{td}}^{\,\mathbf{i}})$ is identical to the expression for
$R_{(\mathbf{x},y)}(G_{\mathbf{ab}}^{\mathbf{id}})$ except that the piece-wise linear functions $\varphi_{a_i,b_i}^{d_i}(x_i)$ are replaced by
the indicator functions $I((x_i - t_i)\,d_i > 0)$.
The PAC-Bayes theorem provides a risk bound for the Gibbs classifier $G_{\mathbf{ab}}^{\mathbf{id}}$. Since
the Bayes classifier $B_{\mathbf{ab}}^{\mathbf{id}}$ just performs a majority vote under the same posterior
distribution as the one used by $G_{\mathbf{ab}}^{\mathbf{id}}$, we have that $B_{\mathbf{ab}}^{\mathbf{id}}(\mathbf{x}) = 1$ iff the probability
that $G_{\mathbf{ab}}^{\mathbf{id}}$ classifies $\mathbf{x}$ as positive exceeds $1/2$. Hence, it follows that
$$B_{\mathbf{ab}}^{\mathbf{id}}(\mathbf{x}) = \begin{cases} 1 & \text{if } \prod_{i\in\mathbf{i}} \varphi_{a_i,b_i}^{d_i}(x_i) > 1/2 \\ 0 & \text{if } \prod_{i\in\mathbf{i}} \varphi_{a_i,b_i}^{d_i}(x_i) \le 1/2 \end{cases} \qquad (3)$$
Note that $B_{\mathbf{ab}}^{\mathbf{id}}$ has an hyperbolic decision surface. Consequently, $B_{\mathbf{ab}}^{\mathbf{id}}$ is not representable as a conjunction of rays. There is, however, no computational difficulty at
obtaining the output of $B_{\mathbf{ab}}^{\mathbf{id}}(\mathbf{x})$ for any $\mathbf{x} \in \mathcal{X}$.
From the relation between $B_{\mathbf{ab}}^{\mathbf{id}}$ and $G_{\mathbf{ab}}^{\mathbf{id}}$, it also follows that $R_{(\mathbf{x},y)}(B_{\mathbf{ab}}^{\mathbf{id}}) \le 2 R_{(\mathbf{x},y)}(G_{\mathbf{ab}}^{\mathbf{id}})$
for any $(\mathbf{x}, y)$. Consequently, $R(B_{\mathbf{ab}}^{\mathbf{id}}) \le 2 R(G_{\mathbf{ab}}^{\mathbf{id}})$. Hence, we have
our main theorem:
Theorem 2 Given all our previous definitions, for any $\delta \in (0, 1]$, and for any $p$
satisfying $\sum_{e=0}^{n} p(e) = 1$, we have:
$$\Pr_{S\sim D^m}\left( \forall\, \mathbf{i}, \mathbf{d}, \mathbf{a}, \mathbf{b} :\; R(G_{\mathbf{ab}}^{\mathbf{id}}) \le \sup\left\{ \epsilon : \mathrm{kl}\big(R_S(G_{\mathbf{ab}}^{\mathbf{id}})\,\big\|\,\epsilon\big) \le \frac{1}{m}\left[ \ln\binom{n}{|\mathbf{i}|} + \ln\frac{1}{p(|\mathbf{i}|)} + |\mathbf{i}|\ln(2) + \sum_{i\in\mathbf{i}} \ln\left(\frac{B_i - A_i}{b_i - a_i}\right) + \ln\frac{m+1}{\delta} \right] \right\} \right) \ge 1 - \delta$$
Furthermore: $R(B_{\mathbf{ab}}^{\mathbf{id}}) \le 2 R(G_{\mathbf{ab}}^{\mathbf{id}}) \quad \forall\, \mathbf{i}, \mathbf{d}, \mathbf{a}, \mathbf{b}$.
4 A Soft Greedy Learning Algorithm
Theorem 2 suggests that the learner should try to find the Bayes classifier $B_{\mathbf{ab}}^{\mathbf{id}}$ that
uses a small number of attributes (i.e., a small $|\mathbf{i}|$), each with a large separating
margin ($b_i - a_i$), while keeping the empirical Gibbs risk $R_S(G_{\mathbf{ab}}^{\mathbf{id}})$ at a low value.
To achieve this goal, we have adapted the greedy algorithm for the set covering
machine (SCM) proposed by Marchand and Shawe-Taylor (2002). It consists of
choosing the feature (here a ray) i with the largest utility Ui where:
$$U_i = |Q_i| - p\,|R_i|$$
where Qi is the set of negative examples covered (classified as 0) by feature i, Ri
is the set of positive examples misclassified by this feature, and p is a learning
parameter that gives a penalty p for each misclassified positive example. Once the
feature with the largest $U_i$ is found, we remove $Q_i$ and $R_i$ from the training set $S$
and then repeat (on the remaining examples) until either no more negative examples
are present or a maximum number $s$ of features has been reached.
In our case, however, we need to keep the Gibbs risk on $S$ low instead of the risk
of a deterministic classifier. Since the Gibbs risk is a "soft measure" that uses the
piece-wise linear functions $\varphi_{a,b}^{d}$ instead of the "hard" indicator functions, we need
a "softer" version of the utility function $U_i$. Indeed, a negative example that falls
in the linear region of a $\varphi_{a,b}^{d}$ is in fact partly covered. Following this observation,
let $\mathbf{k}$ be the vector of indices of the attributes that we have used so far for the
construction of the classifier. Let us first define the covering value $C(G_{\mathbf{ab}}^{\mathbf{kd}})$ of $G_{\mathbf{ab}}^{\mathbf{kd}}$
by the "amount" of negative examples assigned to class 0 by $G_{\mathbf{ab}}^{\mathbf{kd}}$:
$$C(G_{\mathbf{ab}}^{\mathbf{kd}}) \stackrel{\mathrm{def}}{=} \sum_{(\mathbf{x},y)\in S} (1-y)\left[1 - \prod_{j\in\mathbf{k}} \varphi_{a_j,b_j}^{d_j}(x_j)\right]$$
We also define the positive-side error $E(G_{\mathbf{ab}}^{\mathbf{kd}})$ of $G_{\mathbf{ab}}^{\mathbf{kd}}$ as the "amount" of positive
examples assigned to class 0:
$$E(G_{\mathbf{ab}}^{\mathbf{kd}}) \stackrel{\mathrm{def}}{=} \sum_{(\mathbf{x},y)\in S} y\left[1 - \prod_{j\in\mathbf{k}} \varphi_{a_j,b_j}^{d_j}(x_j)\right]$$
We now want to add another ray on another attribute, call it $i$, to obtain a new
vector $\mathbf{k}'$ containing this new attribute in addition to those present in $\mathbf{k}$. Hence, we
now introduce the covering contribution of ray $i$ as:
$$C_{\mathbf{ab}}^{\mathbf{kd}}(i) \stackrel{\mathrm{def}}{=} C(G_{\mathbf{a}'\mathbf{b}'}^{\mathbf{k}'\mathbf{d}'}) - C(G_{\mathbf{ab}}^{\mathbf{kd}}) = \sum_{(\mathbf{x},y)\in S} (1-y)\left[1 - \varphi_{a_i,b_i}^{d_i}(x_i)\right] \prod_{j\in\mathbf{k}} \varphi_{a_j,b_j}^{d_j}(x_j)$$
and the positive-side error contribution of ray $i$ as:
$$E_{\mathbf{ab}}^{\mathbf{kd}}(i) \stackrel{\mathrm{def}}{=} E(G_{\mathbf{a}'\mathbf{b}'}^{\mathbf{k}'\mathbf{d}'}) - E(G_{\mathbf{ab}}^{\mathbf{kd}}) = \sum_{(\mathbf{x},y)\in S} y\left[1 - \varphi_{a_i,b_i}^{d_i}(x_i)\right] \prod_{j\in\mathbf{k}} \varphi_{a_j,b_j}^{d_j}(x_j)$$
Typically, the covering contribution of ray $i$ should increase its "utility" and its
positive-side error should decrease it. Moreover, we want to decrease the "utility"
of ray $i$ by an amount which would become large whenever it has a small separating
margin. Our expression for $\mathrm{KL}(Q\|P)$ suggests that this amount should be proportional to $\ln((B_i - A_i)/(b_i - a_i))$. Furthermore, we should compare this margin
term with the fraction of the remaining negative examples that ray $i$ has covered
(instead of the absolute amount of negative examples covered). Hence the covering
contribution $C_{\mathbf{ab}}^{\mathbf{kd}}(i)$ of ray $i$ should be divided by the amount $N_{\mathbf{ab}}^{\mathbf{kd}}$ of negative
examples that remain to be covered before considering ray $i$:
$$N_{\mathbf{ab}}^{\mathbf{kd}} \stackrel{\mathrm{def}}{=} \sum_{(\mathbf{x},y)\in S} (1-y) \prod_{j\in\mathbf{k}} \varphi_{a_j,b_j}^{d_j}(x_j)$$
which is simply the amount of negative examples that have been assigned to class 1
by $G_{\mathbf{ab}}^{\mathbf{kd}}$. If $P$ denotes the set of positive examples, we define the utility $U_{\mathbf{ab}}^{\mathbf{kd}}(i)$ of
adding ray $i$ to $G_{\mathbf{ab}}^{\mathbf{kd}}$ as:
$$U_{\mathbf{ab}}^{\mathbf{kd}}(i) \stackrel{\mathrm{def}}{=} \frac{C_{\mathbf{ab}}^{\mathbf{kd}}(i)}{N_{\mathbf{ab}}^{\mathbf{kd}}} \;-\; p\,\frac{E_{\mathbf{ab}}^{\mathbf{kd}}(i)}{|P|} \;-\; \gamma \ln\left(\frac{B_i - A_i}{b_i - a_i}\right)$$
where parameter $p$ represents the penalty of misclassifying a positive example and
$\gamma$ is another parameter that controls the importance of having a large margin.
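Putting the three terms together in code (a sketch; the inputs are the quantities defined above, with `gamma` standing in for the margin-importance parameter):

    from math import log

    def utility(C_i, E_i, N_remaining, n_pos, p, gamma, A_i, B_i, a_i, b_i):
        """U(i) = C(i)/N - p * E(i)/|P| - gamma * ln((B_i - A_i)/(b_i - a_i))."""
        return (C_i / N_remaining
                - p * E_i / n_pos
                - gamma * log((B_i - A_i) / (b_i - a_i)))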
These learning parameters can be chosen by cross-validation. For fixed values of
these parameters, the "soft greedy" algorithm simply consists of adding, to the
current Gibbs classifier, a ray with maximum added utility until either the maximum
number $s$ of rays has been reached or all the negative examples have been
(totally) covered. It is understood that, during this soft greedy algorithm, we
can remove an example $(\mathbf{x}, y)$ from $S$ whenever it is totally covered. This occurs
whenever $\prod_{j\in\mathbf{k}} \varphi_{a_j,b_j}^{d_j}(x_j) = 0$.
5 Results for Classification of DNA Micro-Arrays
We have tested the soft greedy learning algorithm on the four DNA micro-array data
sets shown in Table 1. The colon tumor data set (Alon et al., 1999) provides the
expression levels of 40 tumor and 22 normal colon tissues measured for 6500 human
genes. The ALL/AML data set (Golub et al., 1999) provides the expression levels
of 7129 human genes for 47 samples of patients with acute lymphoblastic leukemia
(ALL) and 25 samples of patients with acute myeloid leukemia (AML). The B MD
and C MD data sets (Pomeroy et al., 2002) are micro-array samples containing
the expression levels of 6817 human genes. Data set B contains 25 classic and 9
desmoplastic medulloblastomas whereas data set C contains 39 medulloblastoma
survivors and 21 treatment failures (non-survivors).
We have compared the soft greedy learning algorithm with a linear-kernel soft-margin SVM trained both on all the attributes (gene expressions) and on a subset
of attributes chosen by the filter method of Golub et al. (1999). It consists of ranking
the attributes as function of the difference between the positive-example mean and
the negative-example mean and then use only the first ` attributes. The resulting
learning algorithm, named SVM+gs in Table 1, is basically the one used by Furey
et al. (2000) for the same task. Guyon et al. (2002) claimed obtaining better results
with the recursive feature elimination method but, as pointed out by Ambroise and
McLachlan (2002), their work contained a methodological flaw and, consequently,
the superiority of this wrapper method is questionable.
Each algorithm was tested with the 5-fold cross validation (CV) method. Each of
the five training sets and testing sets was the same for all algorithms. The learning
parameters of all algorithms and the gene subsets (for SVM+gs) were chosen from
the training sets only. This was done by performing a second (nested) 5-fold CV
on each training set. For the gene subset selection procedure of SVM+gs, we have
considered the first ` = 2i genes (for i = 0, 1, . . . , 12) ranked according to the
criterion of Golub et al. (1999) and have chosen the i value that gave the smallest
5-fold CV error on the training set.
Table 1: DNA micro-array data sets and results.

  Data Set         | SVM  | SVM+gs      | Soft Greedy
  Name      #exs   | errs | errs  size  | ratio  size  G-errs  B-errs  Bound
  Colon     62     | 12   | 11    256   | 0.42   1     12      9       18
  B MD      34     | 12   | 6     32    | 0.10   1     6       6       20
  C MD      60     | 29   | 21    1024  | 0.077  3     24      22      40
  ALL/AML   72     | 18   | 10    64    | 0.002  2     19      17      38
For each algorithm, the "errs" columns of Table 1 contain the 5-fold CV error
expressed as the sum of errors over the five testing sets and the "size" columns
contain the number of attributes used by the classifier averaged over the five testing
sets. The "G-errs" and "B-errs" columns refer to the Gibbs and Bayes error rates.
The "ratio" column refers to the average value of $(b_i - a_i)/(B_i - A_i)$ obtained for
the rays used by classifiers and the "Bound" column refers to the average risk bound
of Theorem 2 multiplied by the total number of examples. We see that the gene
selection filter generally improves the error rate of SVM and that the Bayes error
rate is slightly better than the Gibbs error rate. Finally, the error rates of Bayes
and SVM+gs are competitive but the number of genes selected by the soft greedy
algorithm is always much smaller.
References

U. Alon, N. Barkai, D. A. Notterman, K. Gish, S. Ybarra, D. Mack, and A. J. Levine. Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. PNAS USA, 96:6745-6750, 1999.

C. Ambroise and G. J. McLachlan. Selection bias in gene extraction on the basis of microarray gene-expression data. Proc. Natl. Acad. Sci. USA, 99:6562-6566, 2002.

T. S. Furey, N. Cristianini, N. Duffy, D. W. Bednarski, M. Schummer, and D. Haussler. Support vector machine classification and validation of cancer tissue samples using microarray expression data. Bioinformatics, 16:906-914, 2000.

T. R. Golub, D. K. Slonim, and Many More Authors. Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science, 286:531-537, 1999.

I. Guyon, J. Weston, S. Barnhill, and V. Vapnik. Gene selection for cancer classification using support vector machines. Machine Learning, 46:389-422, 2002.

D. Haussler. Quantifying inductive bias: AI learning algorithms and Valiant's learning framework. Artificial Intelligence, 36:177-221, 1988.

J. Langford. Tutorial on practical prediction theory for classification. http://hunch.net/~jl/projects/prediction_bounds/tutorial/tutorial.ps, 2003.

N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2(4):285-318, 1988.

M. Marchand and J. Shawe-Taylor. The set covering machine. Journal of Machine Learning Research, 3:723-746, 2002.

D. McAllester. Some PAC-Bayesian theorems. Machine Learning, 37:355-363, 1999.

D. McAllester. PAC-Bayesian stochastic model selection. Machine Learning, 51:5-21, 2003. A preliminary version appeared in proceedings of COLT'99.

S. L. Pomeroy, P. Tamayo, and Many More Authors. Prediction of central nervous system embryonal tumour outcome based on gene expression. Nature, 415:436-442, 2002.

M. Seeger. PAC-Bayesian generalization bounds for gaussian processes. Journal of Machine Learning Research, 3:233-269, 2002.
1,754 | 2,594 | Computing regularization paths
for learning multiple kernels
Francis R. Bach & Romain Thibaux
Computer Science
University of California
Berkeley, CA 94720
{fbach,thibaux}@cs.berkeley.edu
Michael I. Jordan
Computer Science and Statistics
University of California
Berkeley, CA 94720
[email protected]
Abstract
The problem of learning a sparse conic combination of kernel functions
or kernel matrices for classification or regression can be achieved via the
regularization by a block 1-norm [1]. In this paper, we present an algorithm that computes the entire regularization path for these problems.
The path is obtained by using numerical continuation techniques, and
involves a running time complexity that is a constant times the complexity of solving the problem for one value of the regularization parameter.
Working in the setting of kernel linear regression and kernel logistic regression, we show empirically that the effect of the block 1-norm regularization differs notably from the (non-block) 1-norm regularization
commonly used for variable selection, and that the regularization path is
of particular value in the block case.
1 Introduction
Kernel methods provide efficient tools for nonlinear learning problems such as classification or regression. Given a learning problem, two major tasks faced by practitioners are to
find an appropriate kernel and to understand how regularization affects the solution and its
performance. This paper addresses both of these issues within the supervised learning setting by combining three themes from recent statistical machine learning research, namely
multiple kernel learning [2, 3, 1], computation of regularization paths [4, 5], and the use of
path following methods [6].
The problem of learning the kernel from data has recently received substantial attention,
and several formulations have been proposed that involve optimization over the conic structure of the space of kernels [2, 1, 3]. In this paper we follow the specific formulation of [1],
who showed that learning a conic combination of basis kernels is equivalent to regularizing
the original supervised learning problem by a weighted block 1-norm (see Section 2.2 for
further details). Thus, by solving a single convex optimization problem, the coefficients
of the conic combination of kernels and the values of the parameters (the dual variables)
are obtained. Given the basis kernels and their coefficients, there is one free parameter
remaining?the regularization parameter.
Kernel methods are nonparametric methods, and thus regularization plays a crucial role in
their behavior. In order to understand a nonparametric method, in particular complex non-
parametric methods such as those considered in this paper, it is useful to be able to consider
the entire path of regularization, that is, the set of solutions for all values of the regularization parameter [7, 4]. Moreover, if it is relatively cheap computationally to compute this
path, then it may be of practical value to compute the path as standard practice in fitting a
model. This would seem particularly advisable in cases in which performance can display
local minima along the regularization path. In such cases, standard local search methods
may yield unnecessarily poor performance.
For least-squares regression with a 1-norm penalty or for the support vector machine, there
exist efficient computational techniques to explore the regularization path [4, 5]. These
techniques exploit the fact that for these problems the path is piecewise linear. In this paper
we consider the extension of these techniques to the multiple kernel learning problem. As
we will show (in Section 3), in this setting the path is no longer piecewise linear. It is,
however, piecewise smooth, and we are able to follow it by using numerical continuation
techniques [8, 6]. To do this in a computationally efficient way, we invoke logarithmic barrier techniques analogous to those used in interior point methods for convex optimization
(see Section 3.3). As we shall see, the complexity of our algorithms essentially depends
on the number of "kinks" in the path, i.e., the number of discontinuity points of the derivative. Our experiments suggest that the number of those kinks is always less than a small
constant times the number of basis kernels. The empirical complexity of our algorithm is
thus a constant times the complexity of solving the problem using interior point methods
for one value of the regularization parameter (see Section 3.4 for details).
In Section 4, we present simulation experiments for classification and regression problems,
using a large set of basis kernels based on the most widely used kernels (linear, polynomial,
Gaussian). In particular, we show empirically that the number of kernels in the conic combination is not a monotonic function of the amount of regularization. This contrasts with
the simpler non-block 1-norm case for variable selection (i.e., blocks of size one [4]), where
the number of variables is usually monotonic (or nearly so). Thus the need to compute full
regularization paths is particularly acute in our more complex (block 1-norm regularization)
case.
2 Block 1-norm regularization
In this section we review the block 1-norm regularization framework of [1] as it applies
to differentiable loss functions. To provide necessary background we begin with a short
review of classical 2-norm regularization.
2.1 Classical 2-norm regularization
Primal formulation We consider the general regularized learning optimization problem [7], where the data x_i, i = 1, . . . , n, belong to the input space X, and y_i, i = 1, . . . , n are the responses (lying either in {−1, 1} for classification or R for regression). We map the data into a feature space F through x ↦ Φ(x). The kernel associated with this feature map is denoted k(x, y) = Φ(x)ᵀΦ(y). The optimization problem is the following¹:

    min_{w∈R^p} Σ_{i=1}^n ℓ(y_i, wᵀΦ(x_i)) + (λ/2)‖w‖²,    (1)

where λ > 0 is a regularization parameter and ‖w‖ is the 2-norm of w, defined as ‖w‖ = (wᵀw)^{1/2}. The loss function ℓ is any function from R × R to R. In this paper, we focus on loss functions that are strictly convex and twice continuously differentiable in their second argument. Let ψ_i(v), v ∈ R, be the Fenchel conjugate [9] of the convex function φ_i(u) = ℓ(y_i, u), defined as ψ_i(v) = max_{u∈R}(vu − φ_i(u)). Since we have assumed that ℓ is strictly convex and differentiable, the maximum defining ψ_i(v) is attained at a unique point equal to ψ_i'(v) (possibly equal to +∞ or −∞). The function ψ_i(v) is then strictly convex and twice differentiable in its domain.

In particular, we have the following examples in mind: for least-squares regression, we have φ_i(u) = (1/2)(y_i − u)² and ψ_i(v) = (1/2)v² + vy_i, while for logistic regression, we have φ_i(u) = log(1 + exp(−y_i u)), where y_i ∈ {−1, 1}, and ψ_i(v) = (1 + vy_i) log(1 + vy_i) − vy_i log(−vy_i) if vy_i ∈ (−1, 0), +∞ otherwise.

¹ We omit the intercept as it can be included by adding the constant variable equal to 1 to each feature vector Φ(x_i).
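The following sketch (not from the paper) numerically verifies the two conjugate pairs just quoted, by comparing the stated closed forms against a brute-force maximization of vu − φ_i(u) over a grid; numpy is assumed.

```python
# Verify psi_ls and psi_logistic against sup_u (v*u - phi(u)) on a grid.
import numpy as np

def phi_ls(u, y):          # least-squares loss: phi_i(u) = (1/2)(y - u)^2
    return 0.5 * (y - u) ** 2

def psi_ls(v, y):          # stated conjugate: psi_i(v) = v^2/2 + v*y
    return 0.5 * v ** 2 + v * y

def phi_logistic(u, y):    # logistic loss, y in {-1, +1}
    return np.log1p(np.exp(-y * u))

def psi_logistic(v, y):    # stated conjugate, finite only for v*y in (-1, 0)
    vy = v * y
    if not (-1.0 < vy < 0.0):
        return np.inf
    return (1.0 + vy) * np.log(1.0 + vy) - vy * np.log(-vy)

u = np.linspace(-20, 20, 200001)
for y in (-1.0, 1.0):
    for v in (-0.3 * y, -0.7 * y):                 # points with v*y in (-1, 0)
        assert abs(np.max(v * u - phi_logistic(u, y)) - psi_logistic(v, y)) < 1e-3
        assert abs(np.max(v * u - phi_ls(u, y)) - psi_ls(v, y)) < 1e-3
```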
Dual formulation and optimality conditions The Lagrangian for problem (1) is

    L(w, u, α) = Σ_i φ_i(u_i) + (λ/2)‖w‖² − λ Σ_i α_i (u_i − wᵀΦ(x_i))

and is minimized with respect to u and w with w = −Σ_i α_i Φ(x_i). The dual problem is then

    max_{α∈R^n} −Σ_i ψ_i(λα_i) − (λ/2) αᵀKα,    (2)

where K ∈ R^{n×n} is the kernel matrix of the points, i.e., K_{ab} = k(x_a, x_b). The optimality condition for the dual variable α is then:

    ∀i, (Kα)_i + ψ_i'(λα_i) = 0.    (3)
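As a concrete sanity check of Eqs. (2)-(3) under the sign conventions reconstructed above: for the least-squares loss, ψ_i'(v) = v + y_i, so Eq. (3) reads (Kα)_i + λα_i + y_i = 0, i.e. α = −(K + λI)⁻¹y. The following sketch (an illustration, not code from the paper) checks that this α indeed maximizes the dual objective:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 2))
K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))  # RBF Gram matrix
y = rng.standard_normal(30)
lam = 0.1

def dual(alpha):
    v = lam * alpha
    psi = 0.5 * v ** 2 + v * y          # conjugate of the squared loss
    return -psi.sum() - 0.5 * lam * alpha @ K @ alpha

alpha_star = -np.linalg.solve(K + lam * np.eye(30), y)
for _ in range(100):
    assert dual(alpha_star) >= dual(alpha_star + 1e-3 * rng.standard_normal(30))
```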
2.2 Block 1-norm regularization
In this paper, we map the input space X to m different feature spaces F_1, . . . , F_m, through m feature maps Φ_1(x), . . . , Φ_m(x). We now have m different variables w_j ∈ F_j, j = 1, . . . , m. We use the notation Φ(x) = (Φ_1(x), . . . , Φ_m(x)) and w = (w_1, . . . , w_m), and from now on, we use the implicit convention that the index i ranges over data points (from 1 to n), while the index j ranges over kernels/feature spaces (from 1 to m).

Let d_j, j = 1, . . . , m, be weights associated with each kernel. We will see in Section 4 how these should be linked to the rank of the kernel matrices. Following [1], we consider the following problem with weighted block 1-norm regularization² (where ‖w_j‖ = (w_jᵀw_j)^{1/2} still denotes the 2-norm of w_j):

    min_{w∈F_1×···×F_m} Σ_i φ_i(wᵀΦ(x_i)) + λ Σ_j d_j ‖w_j‖.    (4)
The problem (4) is a convex problem, but not differentiable. In order to derive optimality conditions, we can reformulate it with conic constraints and derive the following dual problem (we omit details for brevity) [9, 1]:

    max_α −Σ_i ψ_i(λα_i)  such that  ∀j, αᵀK_jα ≤ d_j²,    (5)

where K_j is the kernel matrix associated with kernel k_j, i.e., defined as (K_j)_{ab} = k_j(x_a, x_b). From the KKT conditions for problem Eq. (5), we obtain that the dual variable α is optimal if and only if there exists η ∈ R^m such that η ≥ 0 and

    ∀i, (Σ_j η_j K_j α)_i + ψ_i'(λα_i) = 0
    ∀j, αᵀK_jα ≤ d_j², η_j ≥ 0, η_j(d_j² − αᵀK_jα) = 0.    (6)

We can go back and forth between optimal w and α by w = −Diag(η) Σ_i α_i Φ(x_i) (understood blockwise: w_j = −η_j Σ_i α_i Φ_j(x_i)) or α_i = (1/λ) φ_i'(wᵀΦ(x_i)).
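At the kernel level, this correspondence means predictions can be evaluated without ever forming w explicitly: f(x) = wᵀΦ(x) = −Σ_j η_j Σ_i α_i k_j(x_i, x). A hypothetical helper illustrating this (the name and signature are ours, and the sign follows the conventions reconstructed above):

```python
import numpy as np

def predictions(kernel_matrices, eta, alpha):
    """kernel_matrices: list of (n_train, n_query) cross-Gram matrices K_j,
    eta: (m,) conic weights, alpha: (n_train,) dual variables."""
    f = np.zeros(kernel_matrices[0].shape[1])
    for K_j, eta_j in zip(kernel_matrices, eta):
        f -= eta_j * (alpha @ K_j)     # w_j = -eta_j * sum_i alpha_i Phi_j(x_i)
    return f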
We see that the solution of Eq. (5) can be obtained by using only the kernel matrices K_j (i.e., this is indeed a kernel machine) and that the optimal solution of the block 1-norm problem in Eq. (5), with optimality conditions in Eq. (6), is the solution of the regular 2-norm problem in Eq. (2) with kernel K = Σ_j η_j K_j. Thus, with this formulation, we learn the coefficients of the conic combination of kernels as well as the dual variables α [1]. As shown in [1], the conic combination is sparse, i.e., many of the coefficients η_j are equal to zero.

² In [1], the square of the block 1-norm was used. However, when the entire regularization path is sought, it is easy to show that the two problems are equivalent. The advantage of the current formulation is that when the blocks are of size one the problem reduces to classical 1-norm regularization [4].

[Figure 1 here: (left) the target ξ/λ, the constraint ellipsoids, and the path of α; (right) a predictor step followed by corrector steps between (α_0, ν_0) and (α_1, ν_1) along the path.]

Figure 1: (Left) Geometric interpretation of the dual problem in Eq. (5) for linear regression; see text for details. (Right) Predictor-corrector algorithm.
2.3 Geometric interpretation of dual problem

Each function ψ_i is strictly convex, with a strict minimum at ξ_i defined by ψ_i'(ξ_i) = 0 (for least-squares regression we have ξ_i = −y_i, and for logistic regression we have ξ_i = −y_i/2). The negated dual objective Σ_i ψ_i(λα_i) is thus a metric between α and ξ/λ (for least-squares regression, this is simply the squared distance, while for logistic regression, this is an entropy distance). Therefore, the dual problem aims to minimize a metric between α and the target ξ/λ, under the constraint that α belongs to an intersection of m ellipsoids {α ∈ R^n : αᵀK_jα ≤ d_j²}.

When computing the regularization path from λ = +∞ to λ = 0, the target goes from 0 to ∞ in the direction ξ (see Figure 1). The geometric interpretation immediately implies that as long as (1/λ²) ξᵀK_jξ ≤ d_j², the active set is empty, the optimal α is equal to ξ/λ and the optimal w is equal to 0. We thus initialize the path following technique with λ = max_j (ξᵀK_jξ/d_j²)^{1/2} and α = ξ/λ.
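A minimal sketch of this initialization (symbols as reconstructed above; the loss argument is a hypothetical convenience):

```python
import numpy as np

def initialize_path(kernel_matrices, d, y, loss="ls"):
    xi = -y if loss == "ls" else -y / 2.0          # least-squares / logistic
    lam0 = max(np.sqrt(xi @ K_j @ xi) / d_j        # lam0 = max_j (xi'K_j xi / d_j^2)^(1/2)
               for K_j, d_j in zip(kernel_matrices, d))
    return lam0, xi / lam0                         # alpha starts at xi / lam0
```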
3 Building the regularization path

In this section, the goal is to vary λ from +∞ (no regularization) to 0 (full regularization) and obtain a representation of the path of solutions (α(λ), η(λ)). We will essentially approximate the path by a piecewise linear function of ν = log(λ).
3.1 Active set method

For the dual formulation Eq. (5)-Eq. (6), if the set of active kernels J(α) is known, i.e., the set of kernels that are such that αᵀK_jα = d_j², then the optimality conditions become

    ∀j ∈ J, αᵀK_jα = d_j²
    ∀i, (Σ_{j∈J} η_j K_j α)_i + ψ_i'(λα_i) = 0    (7)

and they are valid as long as ∀j ∉ J, αᵀK_jα ≤ d_j² and ∀j ∈ J, η_j ≥ 0.

The path is thus piecewise smooth, with "kinks" at each point where the active set J changes. On each of the smooth sections, only those kernels with index belonging to J are used to define α and η, through Eq. (7). When all blocks have size one, or equivalently when all kernel matrices have rank one, then the path is provably linear in 1/λ between each kink [4] and is thus easy to follow. However, when the kernel matrices have higher rank, this is not the case and additional numerical techniques are needed, which we now present. In the regularized formulation we present in Section 3.3, the optimal η is a function of α, and therefore we only have to follow the optimal α, as a function of ν = log(λ).
3.2 Following a smooth path using numerical continuation techniques

In this section, we provide a brief review of path following, focusing on predictor-corrector methods [8]. We assume that the function α(ν) ∈ R^d is defined implicitly by J(α, ν) = 0, where J is C^∞ from R^{d+1} to R^d and ν is a real variable. Starting from a point α_0, ν_0 such that J(α_0, ν_0) = 0, by the implicit function theorem, the solution is well defined and C^∞ if the differential ∂J/∂α ∈ R^{d×d} is invertible. The derivative at ν_0 is then equal to

    dα/dν(ν_0) = −(∂J/∂α(α_0, ν_0))⁻¹ ∂J/∂ν(α_0, ν_0).

In order to follow the curve α(ν), the most effective numerical method is the predictor-corrector method, which works as follows (see Figure 1):

• predictor step: from (α_0, ν_0), predict where α(ν_0 + h) should be using the first order expansion, i.e., take ν_1 = ν_0 + h, α_1 = α_0 + h dα/dν(ν_0) (note that h can be chosen positive or negative, depending on the direction we want to follow).

• corrector steps: (α_1, ν_1) might not satisfy J(α_1, ν_1) = 0, i.e., the tangent prediction might (and generally will) leave the curve α(ν). In order to return to the curve, Newton's method is used to solve the nonlinear system of equations (in α) J(α, ν_1) = 0, starting from α = α_1. If h is small enough, then the Newton steps will converge quadratically to a solution α_2 of J(α, ν_1) = 0 [8].

Methods that do only one of the two steps are not as efficient: doing only predictor steps is not stable and the algorithm leaves the path very quickly, whereas doing only corrector steps (with increasing ν) is essentially equivalent to seeding the optimizer for a given λ with the solution for a previous λ, which is very inefficient in sections where the path is close to linear. Predictor-corrector methods approximate the path by a sequence of points on that path, which can be joined to provide a piecewise linear approximation.

At first glance, in order to follow the piecewise smooth path all that is needed is to follow each piece and detect when the active set changes, i.e., when ∃j ∉ J with αᵀK_jα = d_j² or ∃j ∈ J with η_j = 0. However this approach can be tricky numerically [8]. We instead prefer to use a numerical regularization technique that will (a) make the entire path smooth, (b) make sure that the Newton steps are globally convergent, and (c) still enable us to use only a subset of the kernels to define the path locally.
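A generic predictor-corrector loop along the lines described above might look as follows (a sketch, not the authors' implementation; a finite-difference Jacobian stands in for the analytic one, and step-size control is omitted here):

```python
import numpy as np

def follow_path(J, alpha0, nu0, nu_end, h=-0.05, tol=1e-8, fd=1e-6):
    def jac_alpha(a, nu):                      # numerical dJ/dalpha
        n = a.size
        G = np.empty((n, n))
        for k in range(n):
            e = np.zeros(n); e[k] = fd
            G[:, k] = (J(a + e, nu) - J(a - e, nu)) / (2 * fd)
        return G

    path = [(nu0, alpha0.copy())]
    alpha, nu = alpha0.copy(), nu0
    while nu > nu_end:
        dJ_dnu = (J(alpha, nu + fd) - J(alpha, nu - fd)) / (2 * fd)
        d_alpha = -np.linalg.solve(jac_alpha(alpha, nu), dJ_dnu)
        alpha, nu = alpha + h * d_alpha, nu + h        # predictor step
        for _ in range(50):                            # corrector (Newton) steps
            step = np.linalg.solve(jac_alpha(alpha, nu), J(alpha, nu))
            alpha = alpha - step
            if np.linalg.norm(step) < tol:
                break
        path.append((nu, alpha.copy()))
    return path
```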
3.3 Numerical regularization

We borrow a classical regularization method from interior point methods, in which a constrained problem is made unconstrained by using a convex log-barrier [9]. In the dual formulation, we solve the following problem (note that we now use a min-problem and we have divided by λ², which leaves the problem unchanged), where ε is a fixed small constant:

    min_α F(α, λ)  where  F(α, λ) = Σ_i (1/λ²) ψ_i(λα_i) − (ε/(2λ)) Σ_j log(d_j² − αᵀK_jα).    (8)

For ε fixed, α ↦ F(α, λ) is C^∞ and strictly convex in its domain {α : ∀j, αᵀK_jα < d_j²}, and thus the global minimum is uniquely defined by ∂F/∂α = 0. If we define η_j(α) = ε/(d_j² − αᵀK_jα), then we have

    ∂F/∂α_i = (1/λ) [ψ_i'(λα_i) + Σ_j η_j(α)(K_jα)_i],

and thus the optimality condition for the problem with the log-barrier is exactly equivalent to the one in Eq. (6). But now instead of having η_j(d_j² − αᵀK_jα) = 0 (which would define an optimal solution of the numerically unregularized problem), we have η_j(d_j² − αᵀK_jα) = ε.
[Figure 2 here: two panels plotting the kernel weights η against −log(λ).]

Figure 2: Examples of variation of η along the regularization path for linear regression (left) and logistic regression (right).
Any dual-feasible variables α and η (not necessarily linked through a functional relationship) define primal-dual variables, and the quantity η_j(d_j² − αᵀK_jα) is exactly the duality gap [9], i.e., the difference between the primal and dual objectives. Thus the parameter ε holds fixed the duality gap we are willing to pay. In simulations, we used ε = 10⁻³.

We can apply the techniques of Section 3.2 to follow the path for a fixed ε, for the variables α only, since η is now a function of α. The corrector steps are not only Newton steps for solving a system of nonlinear equations, they are also Newton-Raphson steps to minimize a strictly convex function, and are thus globally convergent [9].
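A sketch of the regularized dual (8) and its gradient for the least-squares case, with η_j recovered as ε/(d_j² − αᵀK_jα) so that the per-kernel duality gap is exactly ε (the function name is ours; ε = 10⁻³ as in the text):

```python
import numpy as np

def barrier_objective_grad(alpha, lam, Ks, d, y, eps=1e-3):
    v = lam * alpha
    F = np.sum(0.5 * v ** 2 + v * y) / lam ** 2    # (1/lam^2) sum_i psi_i(lam a_i)
    grad = (v + y) / lam                           # (1/lam) psi_i'(lam a_i)
    eta = np.empty(len(Ks))
    for j, (K_j, d_j) in enumerate(zip(Ks, d)):
        slack = d_j ** 2 - alpha @ K_j @ alpha     # must stay strictly positive
        F -= eps / (2 * lam) * np.log(slack)
        eta[j] = eps / slack                       # eta_j(alpha)
        grad += eta[j] * (K_j @ alpha) / lam
    return F, grad, eta
```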
3.4 Path following algorithm

Our path following algorithm is simply a succession of predictor-corrector steps, described in Section 3.2, with J(α, ν) = ∂F/∂α(α, ν) defined in Section 3.3, where ν = log(λ). The initialization presented in Section 2.3 is used.

In Figure 2, we show simple examples of the values of the kernel weights η along the path for a toy problem with a small number of kernels, for kernel linear regression and kernel logistic regression. It is worth noting that the weights are not even approximately monotonic functions of λ; also the behavior of those weights as λ approaches zero is very specific: they become constant for linear regression and they grow up to infinity for logistic regression. In Section 4, we show (a) why these behaviors occur and (b) what the consequences are regarding the performance of the multiple kernel learning problem. In the remainder of this section, we review some important algorithmic issues³.
Step size selection A major issue in path following methods is the choice of the step h: if h is too big, the predictor will end up very far from the path and many Newton steps have to be performed, while if h is too small, progress is too slow. We chose a simple adaptive scheme where at each predictor step we select the biggest h so that the predictor step stays in the domain |J(α, ν)| ≤ κ. The precision parameter κ is itself adapted at each iteration: if the number of corrector steps at the previous iteration is greater than 8 then κ is decreased, whereas if this number is less than 4, it is increased.
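One possible reading of this rule in code (the backtracking factor, the initial h, and the adaptation factors of 2 are assumptions; only the thresholds 4 and 8 come from the text):

```python
import numpy as np

def select_h(J, alpha, d_alpha, nu, kappa, h_max=-1.0):
    h = h_max                      # negative h: we move towards smaller lambda
    while np.max(np.abs(J(alpha + h * d_alpha, nu + h))) > kappa:
        h *= 0.5                   # backtrack until the prediction stays near the path
    return h

def adapt_kappa(kappa, n_corrector_steps):
    if n_corrector_steps > 8:
        return kappa * 0.5         # many Newton steps last time: demand more precision
    if n_corrector_steps < 4:
        return kappa * 2.0         # correction was cheap: relax the threshold
    return kappa
```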
Running time complexity Between each kink, the path is smooth, thus there is a bounded number of steps [8, 9]. Each of those steps has complexity O(n³ + mn²). We have observed empirically that the overall number of those steps is O(m), thus the total empirical complexity is O(mn³ + m²n²). The complexity of solving the optimization problem in Eq. (5) using an interior point method for only one value of the regularization parameter is O(mn³) [2], thus if m ≤ n, the empirical complexity of our algorithm, which yields the entire regularization path, is a constant times the complexity of obtaining only one point in the path using an interior point method. This makes intuitive sense, as both methods follow a path, by varying the barrier parameter in the case of the interior point method, and by varying λ in our case. The difference is that every point along our path is meaningful, not just the destination.
³ A Matlab implementation can be downloaded from www.cs.berkeley.edu/~fbach.
[Figure 3 here: eight panels plotting error rate / mean square error (top) and number of kernels (bottom) against −log(λ).]

Figure 3: Varying the weights (d_j): (left) classification on the Liver dataset, (right) regression on the Boston dataset; for each dataset, two different values of γ, (left) γ = 0 and (right) γ = 1. (Top) training set accuracy in bold, testing set accuracy in dashed, (bottom) number of kernels in the conic combination.
Efficient implementation Because of our numerical regularization, none of the η_j's are equal to zero (in fact each η_j is lower bounded by ε/d_j²). We thus would have to use all kernels when computing the various derivatives. We circumvent this by truncating those η_j that are close to their lower bound to zero: we thus only use the kernels that are numerically present in the combination.

Second-order predictor step The implicit function theorem also allows us to compute derivatives of the path of higher orders. By using a second-order approximation of the path, we can reduce significantly the number of predictor-corrector steps required for the path.
4 Simulations

We have performed simulations on the Boston dataset (regression, 13 variables, 506 data points) and the Liver dataset (classification, 6 variables, 345 data points) from the UCI repository, with the following kernels: linear kernel on all variables, linear kernels on single variables, polynomial kernels (with 4 different orders), Gaussian kernels on all variables (with 7 different kernel widths), Gaussian kernels on subsets of variables (also with 7 different kernel widths), and the identity matrix. This makes 110 kernels for the Boston dataset and 54 for the Liver dataset. All kernel matrices were normalized to unit trace.

Intuitively, the regularization weight d_j for kernel K_j should be an increasing function of the rank of K_j, i.e., we should penalize more feature spaces of higher dimensions. In order to explore the effect of d_j on performance, we set d_j as follows: we compute the number p_j of eigenvalues of K_j that are greater than 1/(2n) (remember that because of the unit trace constraint, these n eigenvalues sum to 1), and we take d_j = p_j^γ. If γ = 0, then all d_j's are equal to one, and when γ increases, kernel matrices of high rank such as the identity matrix have relatively higher weights, noting that a higher weight implies a heavier regularization.
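A sketch of this weighting rule (assuming numpy; the unit-trace normalization mirrors the experimental setup described above):

```python
import numpy as np

def kernel_weights(Ks, gamma):
    n = Ks[0].shape[0]
    d = []
    for K_j in Ks:
        K_j = K_j / np.trace(K_j)                          # normalize to unit trace
        p_j = np.sum(np.linalg.eigvalsh(K_j) > 1.0 / (2 * n))
        d.append(p_j ** gamma)                             # d_j = p_j^gamma
    return np.asarray(d, dtype=float)
```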
In Figure 3, for the Boston and Liver datasets, we plot the number of kernels in the conic combination as well as the training and testing errors, for γ = 0 and γ = 1. We can make the following simple observations:

Number of kernels The number of kernels present in the sparse conic combination is a non-monotonic function of the regularization parameter. When the blocks are one-dimensional, a situation equivalent to variable selection with a 1-norm penalty, this number is usually a nearly monotonic function of the regularization parameter [4].

Local minima Validation set performance may exhibit local minima, and thus algorithms based on hill-climbing might exhibit poor performance by being trapped in a local minimum, whereas our approach, where we compute the entire path, would avoid that.

Behavior for small λ For all values of γ, as λ goes to zero, the number of kernels remains the same, the training error goes to zero, while the testing error remains constant. What changes when γ changes is the value of λ at which this behavior appears; in particular, for small values of γ, it happens before the testing error goes back up, leading to an unusual validation performance curve (a usual cross-validation curve would diverge to large values when the regularization parameter goes to zero). It is thus crucial to use weights d_j that grow with the "size" of the kernel, and not simply constant weights.

This behavior can be confirmed by a detailed analysis of the optimality conditions, which shows that if one of the kernels has a flat spectrum (such as the identity matrix), then, as λ goes to zero, α tends to a limit, and η tends to a limit for linear regression and goes to infinity as log(1/λ) for logistic regression. Also, once in that limiting regime, the training error goes to zero quickly, while the testing error remains constant.
5 Conclusion
We have presented an algorithm to compute entire regularization paths for the problem
of multiple kernel learning. Empirical results using this algorithm have provided us with
insight into the effect of regularization for such problems. In particular we showed that the
behavior of the block 1-norm regularization differs notably from traditional (non-block)
1-norm regularization.
As presented, the empirical results suggest that our algorithm scales quadratically in the
number of kernels, but cubically in the number of data points. Indeed, the main computational burden (for both predictor and corrector steps) is the inversion of a Hessian. In order
to make the computation of entire paths efficient for problems involving a large number of
data points, we are currently investigating inverse Hessian updating, a technique which is
commonly used in quasi-Newton methods [10].
Acknowledgments
We wish to acknowledge support from NSF grant 0412995, a grant from Intel Corporation,
and a graduate fellowship to Francis Bach from Microsoft Research.
References
[1] F. R. Bach, G. R. G. Lanckriet, and M. I. Jordan. Multiple kernel learning, conic duality, and
the SMO algorithm. In ICML, 2004.
[2] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. JMLR, 5:27-72, 2004.
[3] C. S. Ong, A. J. Smola, and R. C. Williamson. Hyperkernels. In NIPS 15, 2003.
[4] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Ann. Stat., 32(2):407-499, 2004.
[5] T. Hastie, S. Rosset, R. Tibshirani, and J. Zhu. The entire regularization path for the support
vector machine. In NIPS 17, 2005.
[6] A. Corduneanu and T. Jaakkola. Continuation methods for mixing heterogeneous sources. In
UAI, 2002.
[7] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer-Verlag,
2001.
[8] E. L. Allgower and K. Georg. Continuation and path following. Acta Numer., 2:1-64, 1993.
[9] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge Univ. Press, 2003.
[10] J. F. Bonnans, J. C. Gilbert, C. Lemaréchal, and C. A. Sagastizábal. Numerical Optimization: Theoretical and Practical Aspects. Springer, 2003.
1,755 | 2,595 | Learning Gaussian Process Kernels via
Hierarchical Bayes
Anton Schwaighofer
Fraunhofer FIRST
Intelligent Data Analysis (IDA)
Kekuléstrasse 7, 12489 Berlin
[email protected]
Volker Tresp, Kai Yu
Siemens Corporate Technology
Information and Communications
81730 Munich, Germany
{volker.tresp,kai.yu}@siemens.com
Abstract
We present a novel method for learning with Gaussian process regression in a hierarchical Bayesian framework. In a first step, kernel matrices on a fixed set of input points are learned from data using a simple
and efficient EM algorithm. This step is nonparametric, in that it does
not require a parametric form of covariance function. In a second step,
kernel functions are fitted to approximate the learned covariance matrix
using a generalized Nyström method, which results in a complex, data
driven kernel. We evaluate our approach as a recommendation engine
for art images, where the proposed hierarchical Bayesian method leads
to excellent prediction performance.
1 Introduction
In many real-world application domains, the available training data sets are quite small,
which makes learning and model selection difficult. For example, in the user preference
modelling problem we will consider later, learning a preference model would amount to
fitting a model based on only 20 samples of a user?s preference data. Fortunately, there
are situations where individual data sets are small, but data from similar scenarios can
be obtained. Returning to the example of preference modelling, data for many different
users are typically available. This data stems from clearly separate individuals, but we can
expect that models can borrow strength from data of users with similar tastes. Typically,
such problems have been handled by either mixed effects models or hierarchical Bayesian
modelling.
In this paper we present a novel approach to hierarchical Bayesian modelling in the context
of Gaussian process regression, with an application to recommender systems. Here, hierarchical Bayesian modelling essentially means to learn the mean and covariance function
of the Gaussian process.
In a first step, a common collaborative kernel matrix is learned from the data via a simple
and efficient EM algorithm. This circumvents the problem of kernel design, as no parametric form of kernel function is required here. Thus, this form of learning a covariance matrix
is also suited for problems with complex covariance structure (e.g. nonstationarity).
A portion of the learned covariance matrix can be explained by the input features and, thus,
generalized to new objects via a content-based kernel smoother. Thus, in a second step,
we generalize the covariance matrix (learned by the EM-algorithm) to new items using a
generalized Nyström method. The result is a complex content-based kernel which itself
is a weighted superposition of simple smoothing kernels. This second part could also be
applied to other situations where one needs to extrapolate a covariance matrix on a finite
set (e.g. a graph) to a continuous input space, as, for example, required in induction for
semi-supervised learning [14].
The paper is organized as follows. Sec. 2 casts Gaussian process regression in a hierarchical
Bayesian framework, and shows the EM updates to learn the covariance matrix in the first
step. Extrapolating the covariance matrix is shown in Sec. 3. We illustrate the function of
the EM-learning on a toy example in Sec. 4, before applying the proposed methods as a
recommender system for images in Sec. 4.1.
1.1 Previous Work
In statistics, modelling data from related scenarios is typically done via mixed effects models or hierarchical Bayesian (HB) modelling [6]. In HB, parameters of models for individual scenarios (e.g. users in recommender systems) are assumed to be drawn from a
common (hyper)prior distribution, allowing the individual models to interact and regularize each other. Recent examples of HB modelling in machine learning include [1, 2]. In
other contexts, this learning framework is called multi-task learning [4]. Multi-task learning with Gaussian processes has been suggested by [8], yet with the rather stringent assumption that one has observations on the same set of points in each individual scenario.
Based on sparse approximations of GPs, a more general GP multi-task learner with parametric covariance functions has been presented in [7]. In contrast, the approach presented
in this paper only considers covariance matrices (and is thus non-parametric) in the first
step. Only in a second extrapolation step, kernel smoothing leads to predictions based on a
covariance function that is a data-driven combination of simple kernel functions.
2 Learning GP Kernel Matrices via EM
The learning task we are concerned with can be stated as follows: The data are observations from M different scenarios. In the i-th scenario, we have observations y^i = (y_1^i, . . . , y_{N^i}^i) on a total of N^i points, X^i = {x_1^i, . . . , x_{N^i}^i}. In order to analyze this data in a hierarchical Bayesian way, we assume that the data for each scenario is a noisy sample of a Gaussian process (GP) with unknown mean and covariance function. We assume that mean and covariance function are shared across different scenarios.¹

In the first modelling step presented in this section, we consider transductive learning ("labelling a partially labelled data set"), that is, we are interested in the model's behavior only on points X, with X = ∪_{i=1}^M X^i and cardinality N = |X|. This situation is relevant for most collaborative filtering applications. Thus, test points are the unlabelled points in each scenario. This reduces the whole "infinite dimensional" Gaussian process to its finite dimensional projection on points X, which is an N-variate Gaussian distribution with covariance matrix K and mean vector m. For the EM algorithm to work, we also require that there is some overlap between scenarios, that is, X^i ∩ X^j ≠ ∅ for some i, j. Coming back to the user modelling problem mentioned above, this means that at least some items have been rated by more than one user.
Thus, our first modelling step focusses on directly learning the covariance matrix K and m from the data via an efficient EM algorithm. This may be of particular help in problems where one would need to specify a complex (e.g. nonstationary) covariance function.

¹ Alternative HB approaches for collaborative filtering, like that discussed in [5], assume that model weights are drawn from a shared Gaussian distribution.
Following the hierarchical Bayesian assumption, the data observed in each scenario is thus a partial sample from N(y | m, K + σ²1), where 1 denotes the unit matrix. The joint model is simply

    p(m, K) ∏_{i=1}^M p(y^i | f^i) p(f^i | m, K),    (1)

where p(m, K) denotes the prior distribution for mean and covariance. We assume a Gaussian likelihood p(y^i | f^i) with diagonal covariance matrix σ²1.
2.1 EM Learning
For the above hierarchical Bayesian model, Eq. (1), the marginal likelihood becomes

    p(m, K) ∏_{i=1}^M ∫ p(y^i | f^i) p(f^i | m, K) df^i.    (2)

To obtain simple and stable solutions when estimating m and K from the data, we consider point estimates of the parameters m and K, based on a penalized likelihood approach with conjugate priors.² The conjugate prior for mean m and covariance K of a multivariate Gaussian is the so-called Normal-Wishart distribution [6], which decomposes into the product of an inverse Wishart distribution for K and a Normal distribution for m,

    p(m, K) = N(m | μ, π⁻¹K) Wi⁻¹(K | τ, U).    (3)

That is, the prior for the Gram matrix K is given by an inverse Wishart distribution with scalar parameter τ > (1/2)(N − 1) and U being a symmetric positive-definite matrix. Given the covariance matrix K, m is Gaussian distributed with mean μ and covariance π⁻¹K, where π is a positive scalar. The parameters can be interpreted in terms of an equivalent data set for the mean (this data set has size A, with A = π, and mean μ) and a data set for the covariance that has size B, with τ = (B + N)/2, and covariance S, U = (B/2)S.
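A small sketch of this equivalent-sample-size parameterization (function and variable names are ours):

```python
import numpy as np

def make_prior(mu, S, A, B):
    """Translate equivalent sample sizes (A, B) into the prior of Eq. (3)."""
    N = mu.shape[0]
    pi_ = float(A)            # precision scalar of the conditional Normal on m
    tau = (B + N) / 2.0       # inverse-Wishart scalar parameter
    U = (B / 2.0) * S         # inverse-Wishart scale matrix
    return mu, pi_, tau, U
```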
In order to write down the EM algorithm in a compact way, we denote by I(i) the set of indices of those data points that have been observed in the i-th scenario, that is, I(i) = {j | j ∈ {1, . . . , N} and x_j ∈ X^i}. Keep in mind that in most applications of interest N^i ≪ N, such that most targets are missing in training. K_{I(i),I(i)} denotes the square submatrix of K that corresponds to points I(i), that is, the covariance matrix for points in the i-th scenario. By K_{·,I(i)} we denote the covariance matrix of all N points versus those in the i-th scenario.
2.1.1 E-step
In the E-step, one first computes f̂^i, the expected value of the functional values on all N points for each scenario i. The expected value is given by the standard equations for the predictive mean of Gaussian process models, where the covariance functions are replaced by corresponding sub-matrices of the current estimate for K:

    f̂^i = K_{·,I(i)} (K_{I(i),I(i)} + σ²1)⁻¹ (y^i − m_{I(i)}) + m,    i = 1, . . . , M.    (4)

Also, covariances between all pairs of points are estimated, based on the predictive covariance for the GP models (ᵀ denotes matrix transpose):

    C̃^i = K − K_{·,I(i)} (K_{I(i),I(i)} + σ²1)⁻¹ K_{·,I(i)}ᵀ,    i = 1, . . . , M.    (5)
² An efficient EM-based solution for the case σ = 0 is also given by [9].
2.1.2 M-step
In the M-step, the vector of mean values m, the covariance matrix K and the noise variance σ² are being updated. Denoting the updated quantities by m', K', and (σ²)', we get

    m' = (1/(M + A)) (Aμ + Σ_{i=1}^M f̂^i)

    K' = (1/(M + B)) (A(m' − μ)(m' − μ)ᵀ + BS + Σ_{i=1}^M [(f̂^i − m')(f̂^i − m')ᵀ + C̃^i])

    (σ²)' = (1/N) Σ_{i=1}^M (‖y^i − f̂^i_{I(i)}‖² + trace C̃^i_{I(i),I(i)}).
An intuitive explanation of the M-step is as follows: The new mean m' is a weighted combination of the prior mean, weighted by the equivalent sample size, and the predictive mean. The covariance update is a sum of four terms. The first term is typically irrelevant, it is a result of the coupling of the Gaussian and the inverse Wishart prior distributions via K. The second term contains the prior covariance matrix, again weighted by the equivalent sample size. As the third term, we get the empirical covariance, based on the estimated and measured functional values f^i. Finally, the fourth term gives a correction term to compensate for the fact that the functional values f^i are only estimates, thus the empirical covariance will be too small.
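Putting Eqs. (4)-(5) and the M-step together, one EM sweep might be sketched as follows (an illustration, not the authors' code; ratings are stored per scenario as {point index: value}, and the noise update normalizes by the total number of observed ratings, which is one reading of the 1/N factor above):

```python
import numpy as np

def em_step(m, K, sig2, ratings, mu, S, A, B):
    N, M = m.shape[0], len(ratings)
    f_hat, C_sum, sse, n_obs = [], np.zeros((N, N)), 0.0, 0
    for obs in ratings:                                  # E-step, Eqs. (4)-(5)
        I = np.fromiter(obs.keys(), dtype=int)
        y = np.fromiter(obs.values(), dtype=float)
        G = np.linalg.solve(K[np.ix_(I, I)] + sig2 * np.eye(len(I)), K[:, I].T)
        f = G.T @ (y - m[I]) + m                         # predictive mean on all N points
        C = K - K[:, I] @ G                              # predictive covariance
        f_hat.append(f)
        C_sum += C
        sse += np.sum((y - f[I]) ** 2) + np.trace(C[np.ix_(I, I)])
        n_obs += len(I)
    F = np.array(f_hat)                                  # M-step
    m_new = (A * mu + F.sum(axis=0)) / (M + A)
    D = F - m_new
    K_new = (A * np.outer(m_new - mu, m_new - mu) + B * S
             + D.T @ D + C_sum) / (M + B)
    sig2_new = sse / n_obs
    return m_new, K_new, sig2_new
```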
3 Learning the Covariance Function via Generalized Nyström
Using the EM algorithm described in Sec. 2.1, one can easily and efficiently learn a covariance matrix K and mean vector m from data obtained in different related scenarios. Once K is found, predictions within the set X can easily be made, by appealing to the same equations used in the EM algorithm (Eq. (4) for the predictive mean and Eq. (5) for the covariance). This would, for example, be of interest in a collaborative filtering application with a fixed set of items. In this section we describe how the covariance can be generalized to new inputs z ∉ X.

Note that, in all of the EM algorithm, the content features x_j^i do not contribute at all. In order to generalize the learned covariance matrix, we employ a kernel smoother with an auxiliary kernel function r(·, ·) that takes a pair of content features as input. As a constraint, we need to guarantee that the derived kernel is positive definite, such that straightforward interpolation schemes cannot readily be applied. Thus our strategy is to interpolate the eigenvectors of K instead and subsequently derive a positive definite kernel. This approach is related to the Nyström method, which is primarily a method for extrapolating eigenfunctions that are only known at a discrete set of points. In contrast to Nyström, the extrapolating smoothing kernel is not known in our setting and we employ a generic smoothing kernel r(·, ·) instead [12].
Let K = UΛUᵀ be the eigendecomposition of the covariance matrix K, with a diagonal matrix of eigenvalues Λ and orthonormal eigenvectors U. With V = UΛ^{1/2}, the columns of V are scaled eigenvectors. We now approximate the i-th scaled eigenvector v_i by a Gaussian process with covariance function r(·, ·) and obtain as an approximation of the scaled eigenfunction

    ṽ_i(w) = Σ_{j=1}^N r(w, x_j) b_{i,j}    (6)

with weights b_i = (b_{i,1}, . . . , b_{i,N})ᵀ = (R + λI)⁻¹ v_i. R denotes the Gram matrix for the smoothing kernel on all N points. An additional regularization term λI is introduced to stabilize the inverse. Based on the approximate scaled eigenfunctions, the resulting kernel function is simply

    l(w, z) = Σ_i ṽ_i(w) ṽ_i(z) = r(w)ᵀ (R + λI)⁻¹ K (R + λI)⁻¹ r(z),    (7)

with r(w)ᵀ = (r(x_1, w), . . . , r(x_N, w)). R (resp. L) are the Gram matrices at the training data points X for kernel function r (resp. l). λ is a tuning parameter that determines which proportion of K is explained by the content kernel. With λ = 0, L = K is reproduced, which means that all of K can be explained by the content kernel. With λ → ∞, l(w, z) → 0 and no portion of K is explained by the content kernel.³ Also, note that the eigenvectors are only required in the derivation, and do not need to be calculated when evaluating the kernel.⁴
Similarly, one can build a kernel smoother to extrapolate from the mean vector m to an approximate mean function m̂(·). The prediction for a new object v in scenario i thus becomes

    f^i(v) = m̂(v) + Σ_{j∈I(i)} l(v, x_j) α_j^i    (8)

with weights α^i given by α^i = (K_{I(i),I(i)} + σ²I)⁻¹ (y^i − m_{I(i)}).
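A sketch of the extrapolated kernel (7) and the prediction (8) (an RBF is an assumed choice for the auxiliary kernel r, and the smoothed mean m̂(v) is omitted for brevity; function names are ours):

```python
import numpy as np

def rbf(A, B, width=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / width ** 2)

def extrapolated_kernel(K, X, W, Z, lam=1e-2):
    """l(w, z) = r(w)^T (R + lam I)^-1 K (R + lam I)^-1 r(z), Eq. (7)."""
    R = rbf(X, X)
    P = R + lam * np.eye(len(X))
    M_ = np.linalg.solve(P, K)                    # (R + lam I)^-1 K
    M_ = np.linalg.solve(P.T, M_.T).T             # ... (R + lam I)^-1
    return rbf(W, X) @ M_ @ rbf(X, Z)

def predict(v_feats, X, K, m_vec, obs_idx, y_obs, sig2, lam=1e-2):
    """Eq. (8) without the smoothed-mean term; rows of X align with K."""
    L_vI = extrapolated_kernel(K, X, v_feats, X[obs_idx], lam)   # l(v, x_j)
    alpha = np.linalg.solve(K[np.ix_(obs_idx, obs_idx)]
                            + sig2 * np.eye(len(obs_idx)),
                            y_obs - m_vec[obs_idx])
    return L_vI @ alpha
```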
It is important to note that l has a much richer structure than the auxiliary kernel r. By expanding the expression for l, one can see that l amounts to a data-dependent covariance function that can be written as a superposition of kernels r,

    l(v, w) = Σ_{i=1}^N r(x_i, v) a_i^w,    (9)

with input-dependent weights a^w = (R + λI)⁻¹ K (R + λI)⁻¹ r_w.
4 Experiments
We first illustrate the process of covariance matrix learning on a small toy example: Data is generated by sampling from a Gaussian process with the nonstationary "neural network covariance function" [11]. Independent Gaussian noise of variance 10⁻⁴ is added. Input points X are 100 randomly placed points in the interval [−1, 1]. We consider M = 20 scenarios, where each scenario has observations on a random subset X^i of X, with N^i ≈ 0.1N. In Fig. 1(a), each scenario corresponds to one "noisy line" of points.

Using the EM-based covariance matrix learning (Sec. 2.1) on this data, the nonstationarity of the data no longer poses problems, as Fig. 1 illustrates. The (stationary) covariance matrix shown in Fig. 1(c) was used both as the initial value for K and for the prior covariance S in Eq. (3). While the learned covariance matrix Fig. 1(d) does not fully match the true covariance, it clearly captures the nonstationary effects.
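A sketch of such a toy-data generator, using Williams' neural network covariance function [11] for scalar inputs (the parameter values and Σ = diag(σ₀², σ²) are assumptions; for simplicity, full noisy functions are drawn rather than per-scenario subsets):

```python
import numpy as np

def nn_cov(x, z, s0=1.0, s=10.0):
    """Williams' 'neural network' covariance for 1-D input arrays x, z."""
    sxz = s0 ** 2 + s ** 2 * np.multiply.outer(x, z)
    sxx = s0 ** 2 + s ** 2 * x ** 2
    szz = s0 ** 2 + s ** 2 * z ** 2
    den = np.sqrt(np.multiply.outer(1.0 + 2.0 * sxx, 1.0 + 2.0 * szz))
    return (2.0 / np.pi) * np.arcsin(2.0 * sxz / den)

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(-1.0, 1.0, 100))           # 100 random input points
K = nn_cov(X, X) + 1e-4 * np.eye(100)              # noise variance 10^-4
samples = rng.multivariate_normal(np.zeros(100), K, size=20)   # 20 scenarios
```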
4.1 A Recommendation Engine
As a testbed for the proposed methods, we consider an information filtering task. The goal is to predict individual users' preferences for a large collection of art images⁵, where

³ Note that, also if the true interpolating kernel was known, i.e., r = k, and with λ = 0, we obtain l(w, z) = k(w)ᵀK⁻¹k(z) (with k(w) the vector of kernel evaluations at the training points), which is the approximate kernel obtained with Nyström.
⁴ A related form of kernel matrix extrapolation has been recently proposed by [10].
⁵ http://honolulu.dbs.informatik.uni-muenchen.de:8080/paintings/index.jsp
(a) Training data
(b) True covariance matrix
(c) Initial covariance matrix
(d) Covariance matrix learned via EM
Figure 1: Example to illustrate covariance matrix learning via EM. The data shown in (a) was drawn from a Gaussian process with a nonstationary "neural network" covariance function. When initialized with the stationary matrix shown in (c), EM learning resulted in the covariance matrix shown in (d). Comparing the learned matrix (d) with the true matrix (b) shows that the nonstationary structure is captured well.
each user rated a random subset out of a total of 642 paintings, with ratings "like" (+1), "dislike" (−1), or "not sure" (0). In total, ratings from M = 190 users were collected, where each user had rated 89 paintings on average. Each image is also described by a 275-dimensional feature vector (containing correlogram, color moments, and wavelet texture).

Fig. 2(a) shows ROC curves for collaborative filtering when preferences of unrated items within the set of 642 images are predicted. Here, our transductive approach (Eq. (4), "GP with EM covariance") is compared with a collaborative approach using Pearson correlation [3] ("Collaborative Filtering") and an alternative nonparametric hierarchical Bayesian approach [13] ("Hybrid Filter"). All algorithms are evaluated in a 10-fold cross validation scheme (repeated 10 times), where we assume that ratings for 20 items are known for each test user. Based on the 20 known ratings, predictions can be made for all unrated items. We obtain an ROC curve by computing sensitivity and specificity for the proportion of truly liked paintings among the N top ranked paintings, averaged over N. The figure shows that our approach is considerably better than collaborative filtering with Pearson correlation and even gains a (yet small) advantage over the hybrid filtering technique.

Note that the EM algorithm converged⁶ very quickly, requiring about 4-6 EM steps to learn the covariance matrix K. Also, we found that the performance is rather insensitive with respect to the hyperparameters, that is, the choice of μ, S and the equivalent sample sizes A and B.

Fig. 2(b) shows ROC curves for the inductive setting where predictions for items outside

⁶ S was set by learning a standard parametric GPR model from the preference data of one randomly chosen user, setting kernel parameters via marginal likelihood, and using this model to generate a full covariance matrix for all points.
(a) Transductive methods
(b) Inductive methods
Figure 2: ROC curves of different methods for predicting user preferences for art images
the training set are to be made (sometimes referred to as the "new item problem"). Shown is the performance obtained with the generalized Nyström method (Eq. (8), "GP with Generalized Nyström")⁷, and when predicting user preferences from image features via an SVM with squared exponential kernel ("SVM content-based filtering"). It is apparent that the new approach with the learned kernel is superior to the standard SVM approach. Still, the overall performance of the inductive approach is quite limited. The low-level content features are only very poor indicators for the high-level concept "liking an art image", and inductive approaches in general need to rely on content-dependent collaborative filtering. The purely content-independent collaborative effect, which is exploited in the transductive setting, cannot be generalized to new items; it can be viewed as correlated noise in our model.
5 Summary and Conclusions
This article introduced a novel method of learning Gaussian process covariance functions
from multi-task learning problems, using a hierarchical Bayesian framework. In the hierarchical framework, the GP models for individual scenarios borrow strength from each other
via a common prior for mean and covariance. The learning task was solved in two steps:
First, an EM algorithm was used to learn the shared mean vector and covariance matrix
on a fixed set of points. In a second step, the learned covariance matrix was generalized
to new points via a generalized form of Nyström method. Our initial experiments, where
we use the method as a recommender system for art images, showed very promising results. Also, in our approach, a clear distinction is made between content-dependent and
content-independent collaborative filtering.
We expect that our approach will be even more effective in applications where the content
features are more powerful (e.g. in recommender systems for textual items such as news
articles), and allow a even better prediction of user preferences.
Acknowledgements This work was supported in part by the IST Programme of the European Union, under the PASCAL Network of Excellence (EU # 506778).
⁷ To obtain the kernel r, we fitted GP user preference models for a few randomly chosen users,
with individual ARD weights for each input dimension in a squared exponential kernel. ARD weights
for r are taken to be the medians of the fitted ARD weights.
References
[1] Bakker, B. and Heskes, T. Task clustering and gating for Bayesian multitask learning. Journal of Machine Learning Research, 4:83-99, 2003.
[2] Blei, D. M., Ng, A. Y., and Jordan, M. I. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[3] Breese, J. S., Heckerman, D., and Kadie, C. Empirical analysis of predictive algorithms for collaborative filtering. Tech. Rep. MSR-TR-98-12, Microsoft Research, 1998.
[4] Caruana, R. Multitask learning. Machine Learning, 28(1):41-75, 1997.
[5] Chapelle, O. and Harchaoui, Z. A machine learning approach to conjoint analysis. In L. Saul, Y. Weiss, and L. Bottou, eds., Neural Information Processing Systems 17. MIT Press, 2005.
[6] Gelman, A., Carlin, J., Stern, H., and Rubin, D. Bayesian Data Analysis. CRC Press, 1995.
[7] Lawrence, N. D. and Platt, J. C. Learning to learn with the informative vector machine. In R. Greiner and D. Schuurmans, eds., Proceedings of ICML04. Morgan Kaufmann, 2004.
[8] Minka, T. P. and Picard, R. W. Learning how to learn is learning with point sets, 1999. Unpublished manuscript. Revised 1999.
[9] Schafer, J. L. Analysis of Incomplete Multivariate Data. Chapman & Hall, 1997.
[10] Vishwanathan, S., Guttman, O., Borgwardt, K. M., and Smola, A. Kernel extrapolation, 2005. Unpublished manuscript.
[11] Williams, C. K. Computation with infinite neural networks. Neural Computation, 10(5):1203-1216, 1998.
[12] Williams, C. K. I. and Seeger, M. Using the Nyström method to speed up kernel machines. In T. K. Leen, T. G. Dietterich, and V. Tresp, eds., Advances in Neural Information Processing Systems 13, pp. 682-688. MIT Press, 2001.
[13] Yu, K., Schwaighofer, A., Tresp, V., Ma, W.-Y., and Zhang, H. Collaborative ensemble learning: Combining collaborative and content-based information filtering via hierarchical Bayes. In C. Meek and U. Kjærulff, eds., Proceedings of UAI 2003, pp. 616-623, 2003.
[14] Zhu, X., Ghahramani, Z., and Lafferty, J. Semi-supervised learning using Gaussian fields and harmonic functions. In Proceedings of ICML03. Morgan Kaufmann, 2003.
Appendix

To derive an EM algorithm for Eq. (2), we treat the functional values f^i in each scenario i as the unknown variables. In each EM iteration t, the parameters to be estimated are θ^(t) = {m^(t), K^(t), σ²^(t)}. In the E-step, the sufficient statistics are computed,

    E[Σ_{i=1}^M f^i | y^i, θ^(t)] = Σ_{i=1}^M f̂^{i,(t)}    (10)

    E[Σ_{i=1}^M f^i (f^i)ᵀ | y^i, θ^(t)] = Σ_{i=1}^M (f̂^{i,(t)} (f̂^{i,(t)})ᵀ + C̃^i)    (11)

with f̂^i and C̃^i defined in Eq. (4) and (5). In the M-step, the parameters θ are re-estimated as θ^(t+1) = argmax_θ Q(θ | θ^(t)), with

    Q(θ | θ^(t)) = E[l_p(θ | f, y) | y, θ^(t)],    (12)

where l_p stands for the penalized log-likelihood of the complete data,

    l_p(θ | f, y) = log Wi⁻¹(K | τ, U) + log N(m | μ, π⁻¹K) + Σ_{i=1}^M log N(f^i | m, K) + Σ_{i=1}^M log N(y^i_{I(i)} | f^i_{I(i)}, σ²1).    (13)

Updated parameters are obtained by setting the partial derivatives of Q(θ | θ^(t)) to zero.
1,756 | 2,596 | Matrix Exponentiated Gradient Updates for On-line Learning and Bregman Projection
Koji Tsuda†‡, Gunnar Rätsch†§ and Manfred K. Warmuth¶
† Max Planck Institute for Biological Cybernetics, Spemannstr. 38, 72076 Tübingen, Germany
‡ AIST CBRC, 2-43 Aomi, Koto-ku, Tokyo, 135-0064, Japan
§ Fraunhofer FIRST, Kekuléstr. 7, 12489 Berlin, Germany
¶ University of California at Santa Cruz
{koji.tsuda,gunnar.raetsch}@tuebingen.mpg.de, [email protected]
Abstract
We address the problem of learning a symmetric positive definite matrix.
The central issue is to design parameter updates that preserve positive
definiteness. Our updates are motivated with the von Neumann divergence. Rather than treating the most general case, we focus on two key
applications that exemplify our methods: On-line learning with a simple
square loss and finding a symmetric positive definite matrix subject to
symmetric linear constraints. The updates generalize the Exponentiated
Gradient (EG) update and AdaBoost, respectively: the parameter is now
a symmetric positive definite matrix of trace one instead of a probability
vector (which in this context is a diagonal positive definite matrix with
trace one). The generalized updates use matrix logarithms and exponentials to preserve positive definiteness. Most importantly, we show how
the analysis of each algorithm generalizes to the non-diagonal case. We
apply both new algorithms, called the Matrix Exponentiated Gradient
(MEG) update and DefiniteBoost, to learn a kernel matrix from distance
measurements.
1 Introduction
Most learning algorithms have been developed to learn a vector of parameters from data.
However, an increasing number of papers are now dealing with more structured parameters. More specifically, when learning a similarity or a distance function among objects,
the parameters are defined as a symmetric positive definite matrix that serves as a kernel
(e.g. [14, 11, 13]). Learning is typically formulated as a parameter updating procedure to
optimize a loss function. The gradient descent update [6] is one of the most commonly used
algorithms, but it is not appropriate when the parameters form a positive definite matrix,
because the updated parameter is not necessarily positive definite. Xing et al. [14] solved
this problem by always correcting the updated matrix to be positive. However no bound
has been proven for this update-and-correction approach. In this paper, we introduce the
Matrix Exponentiated Gradient update which works as follows: First, the matrix logarithm
of the current parameter matrix is computed. Then a step is taken in the direction of the
steepest descent. Finally, the parameter matrix is updated to the exponential of the modified
log-matrix. Our update preserves symmetry and positive definiteness because the matrix
exponential maps any symmetric matrix to a positive definite matrix.
Bregman divergences play a central role in the motivation and the analysis of on-line learning algorithms [5]. A learning problem is essentially defined by a loss function, and a divergence that measures the discrepancy between parameters. More precisely, the updates
are motivated by minimizing the sum of the loss function and the Bregman divergence,
where the loss function is multiplied by a positive learning rate. Different divergences lead
to radically different updates [6]. For example, the gradient descent is derived from the
squared Euclidean distance, and the exponentiated gradient from the Kullback-Leibler divergence. We use the von Neumann divergence (also called quantum relative entropy) for
measuring the discrepancy between two positive definite matrices [8]. We derive a new
Matrix Exponentiated Gradient update from this divergence (which is a Bregman divergence for positive definite matrices). Finally we prove relative loss bounds using the von
Neumann divergence as a measure of progress.
Also the following related key problem has received a lot of attention recently [14, 11,
13]: Find a symmetric positive definite matrix that satisfies a number of symmetric linear
inequality constraints. The new DefiniteBoost algorithm greedily chooses the most violated
constraint and performs an approximated Bregman projection. In the diagonal case, we
recover AdaBoost [9]. We also show how the convergence proof of AdaBoost generalizes
to the non-diagonal case.
2 von Neumann Divergence or Quantum Relative Entropy
If F is a real convex differentiable function on the parameter domain (symmetric d × d positive definite matrices) and f(W) := ∇F(W), then the Bregman divergence between two parameters W̃ and W is defined as

    Δ_F(W̃, W) = F(W̃) − F(W) − tr[(W̃ − W) f(W)].

When choosing F(W) = tr(W log W − W), then f(W) = log W and the corresponding Bregman divergence becomes the von Neumann divergence [8]:

    Δ_F(W̃, W) = tr(W̃ log W̃ − W̃ log W − W̃ + W).    (1)

In this paper, we are primarily interested in the normalized case (when tr(W) = 1). In this case, the positive symmetric definite matrices are related to density matrices commonly used in Statistical Physics, and the divergence simplifies to Δ_F(W̃, W) = tr(W̃ log W̃ − W̃ log W).

If W = Σ_i λ_i v_i v_i^⊤ is our notation for the eigenvalue decomposition, then we can rewrite the normalized divergence as

    Δ_F(W̃, W) = Σ_i λ̃_i ln λ̃_i − Σ_{i,j} λ̃_i ln λ_j (ṽ_i^⊤ v_j)².

So this divergence quantifies the difference in the eigenvalues as well as the eigenvectors.
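As a quick illustration of the normalized divergence, the following sketch (ours, not from the paper; it assumes SciPy's matrix logarithm) computes the von Neumann divergence between two density matrices:

    import numpy as np
    from scipy.linalg import logm

    def von_neumann_divergence(W_tilde, W):
        """Normalized von Neumann divergence tr(W~ log W~ - W~ log W).
        Both arguments are assumed symmetric positive definite, trace one."""
        D = W_tilde @ logm(W_tilde) - W_tilde @ logm(W)
        return float(np.trace(D).real)  # drop tiny imaginary round-off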
3 On-line Learning
In this section, we present a natural extension of the Exponentiated Gradient (EG) update [6] to an update for symmetric positive definite matrices.
At the t-th trial, the algorithm receives a symmetric instance matrix X_t ∈ ℝ^{d×d}. It then produces a prediction ŷ_t = tr(W_t X_t) based on the algorithm's current symmetric positive definite parameter matrix W_t. Finally it incurs for instance¹ a quadratic loss (ŷ_t − y_t)², and updates its parameter matrix W_t. In the update we aim to solve the following problem:

    W_{t+1} = argmin_W Δ_F(W, W_t) + η (tr(W X_t) − y_t)²,    (2)

where the convex function F defines the Bregman divergence. Setting the derivative with respect to W to zero, we have

    f(W_{t+1}) − f(W_t) + η ∇[(tr(W_{t+1} X_t) − y_t)²] = 0.    (3)

The update rule is derived by solving (3) with respect to W_{t+1}, but it is not solvable in closed form. A common way to avoid this problem is to approximate tr(W_{t+1} X_t) by tr(W_t X_t) [5]. Then, we have the following update:

    W_{t+1} = f^{−1}(f(W_t) − 2η(ŷ_t − y_t) X_t).

In our case, F(W) = tr(W log W − W) and thus f(W) = log W and f^{−1}(W) = exp W. We also augment (2) with the constraint tr(W) = 1, leading to the following Matrix Exponentiated Gradient (MEG) update:

    W_{t+1} = (1/Z_t) exp(log W_t − 2η(ŷ_t − y_t) X_t),    (4)

where the normalization factor Z_t is tr[exp(log W_t − 2η(ŷ_t − y_t) X_t)]. Note that in the above update, the exponent log W_t − 2η(ŷ_t − y_t) X_t is an arbitrary symmetric matrix, and the matrix exponential converts this matrix back into a symmetric positive definite matrix. A numerically stable version of the MEG update is given in Section 3.2.

¹ For the sake of simplicity, we use the simple quadratic loss: L_t(W) = (tr(X_t W) − y_t)². For the general update, the gradient ∇L_t(W_t) is exponentiated in the update (4) and this gradient must be symmetric. Following [5], more general loss functions (based on Bregman divergences) are amenable to our techniques.
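A sketch of one step of update (4) (our own illustrative code, assuming SciPy's logm/expm; not an implementation provided by the authors):

    import numpy as np
    from scipy.linalg import logm, expm

    def meg_step(W, X, y, eta):
        """One MEG step, update (4): W' = exp(log W - 2*eta*(yhat - y) X) / Z."""
        y_hat = float(np.trace(W @ X))
        A = logm(W).real - 2.0 * eta * (y_hat - y) * X
        A = (A + A.T) / 2.0              # guard symmetry against round-off
        W_new = expm(A)
        return W_new / np.trace(W_new)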
3.1 Relative Loss Bounds
We now begin with the definitions needed for the relative loss bounds. Let S = (X_1, y_1), …, (X_T, y_T) denote a sequence of examples, where the instance matrices X_t ∈ ℝ^{d×d} are symmetric and the labels y_t ∈ ℝ. For any symmetric positive semi-definite matrix U with tr(U) = 1, define its total loss as L_U(S) = Σ_{t=1}^T (tr(U X_t) − y_t)². The total loss of the on-line algorithm is L_MEG(S) = Σ_{t=1}^T (tr(W_t X_t) − y_t)². We prove a bound on the relative loss L_MEG(S) − L_U(S) that holds for any U. The proof generalizes a similar bound for the Exponentiated Gradient update (Lemmas 5.8 and 5.9 of [6]). The relative loss bound is derived in two steps: Lemma 3.1 bounds the relative loss for an individual trial and Lemma 3.2 for a whole sequence (proofs are given in the full paper).

Lemma 3.1 Let W_t be any symmetric positive definite matrix. Let X_t be any symmetric matrix whose smallest and largest eigenvalues satisfy λ_max − λ_min ≤ r. Assume W_{t+1} is produced from W_t by the MEG update and let U be any symmetric positive semi-definite matrix. Then for any constants a and b such that 0 < a ≤ 2b/(2 + r²b) and any learning rate η = 2b/(2 + r²b), we have

    a (y_t − tr(W_t X_t))² − b (y_t − tr(U X_t))² ≤ Δ(U, W_t) − Δ(U, W_{t+1}).    (5)

In the proof, we use the Golden–Thompson inequality [3], i.e., tr[exp(A + B)] ≤ tr[exp(A) exp(B)] for symmetric matrices A and B. We also needed to prove the following generalization of Jensen's inequality to matrices: exp(ρ₁ A + ρ₂ (I − A)) ⪯ exp(ρ₁) A + exp(ρ₂)(I − A) for finite ρ₁, ρ₂ ∈ ℝ and any symmetric matrix A with 0 ≺ A ⪯ I. These two key inequalities will also be essential for the analysis of DefiniteBoost in the next section.

Lemma 3.2 Let W_1 and U be arbitrary symmetric positive definite initial and comparison matrices, respectively. Then for any c such that η = 2c/(r²(2 + c)),

    L_MEG(S) ≤ (1 + c/2) L_U(S) + (1/2 + 1/c) r² Δ(U, W_1).    (6)

Proof  For the maximum tightness of (5), a should be chosen as a = η = 2b/(2 + r²b). Let b = c/r², and thus a = 2c/(r²(2 + c)). Then (5) is rewritten as

    (2c/(2 + c)) (y_t − tr(W_t X_t))² − c (y_t − tr(U X_t))² ≤ r² (Δ(U, W_t) − Δ(U, W_{t+1})).

Adding the bounds for t = 1, ⋯, T, we get

    (2c/(2 + c)) L_MEG(S) − c L_U(S) ≤ r² (Δ(U, W_1) − Δ(U, W_{T+1})) ≤ r² Δ(U, W_1),

which is equivalent to (6).

Assuming L_U(S) ≤ ℓ_max and Δ(U, W_1) ≤ d_max, the bound (6) is tightest when c = r √(2 d_max / ℓ_max). Then we have L_MEG(S) − L_U(S) ≤ r √(2 ℓ_max d_max) + (r²/2) Δ(U, W_1).
3.2 Numerically stable MEG update
The MEG update is numerically unstable when the eigenvalues of W_t are around zero. However we can "unwrap" W_{t+1} as follows:

    W_{t+1} = (1/Z̃_t) exp( c_t I + log W_1 − 2η Σ_{s=1}^t (ŷ_s − y_s) X_s ),    (7)

where the constant Z̃_t normalizes the trace of W_{t+1} to one. As long as the eigenvalues of W_1 are not too small, the computation of log W_1 is stable. Note that the update is independent of the choice of c_t ∈ ℝ. We incrementally maintain an eigenvalue decomposition of the matrix in the exponent (O(n³) per iteration):

    V_t Λ_t V_t^⊤ = c_t I + log W_1 − 2η Σ_{s=1}^t (ŷ_s − y_s) X_s,

where the constant c_t is chosen so that the maximum eigenvalue of the above is zero. Now W_{t+1} = V_t exp(Λ_t) V_t^⊤ / tr(exp(Λ_t)).
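A sketch of this stable variant (ours; the class name is illustrative, and the eigendecomposition is recomputed from scratch rather than maintained incrementally as suggested above):

    import numpy as np
    from scipy.linalg import logm, eigh

    class StableMEG:
        """Stable MEG via the unwrapped form (7): keep the running exponent
        matrix and shift its spectrum so the top eigenvalue is zero (the
        free constant c_t) before exponentiating."""
        def __init__(self, W1, eta):
            self.S = logm(W1).real   # log W1; assumes eigenvalues of W1 not too small
            self.eta = eta
            self.W = W1 / np.trace(W1)

        def update(self, X, y):
            y_hat = float(np.trace(self.W @ X))
            self.S -= 2.0 * self.eta * (y_hat - y) * X
            lam, V = eigh((self.S + self.S.T) / 2.0)   # full recompute, O(d^3)
            lam -= lam.max()                            # choose c_t: top eigenvalue -> 0
            w = np.exp(lam)
            self.W = (V * w) @ V.T / w.sum()
            return y_hat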
4 Bregman Projection and DefiniteBoost
In this section, we address the following Bregman projection problem²:

    W* = argmin_W Δ_F(W, W_1),  subject to  tr(W) = 1,  tr(W C_j) ≤ 0  for j = 1, …, n,    (8)

where the symmetric positive definite matrix W_1 of trace one is the initial parameter matrix, and C_1, …, C_n are arbitrary symmetric matrices. Prior knowledge about W is encoded in the constraints, and the matrix closest to W_1 is chosen among the matrices satisfying all constraints. Tsuda and Noble [13] employed this approach for learning a kernel matrix among graph nodes, and this method can potentially be applied to learn a kernel matrix in other settings (e.g. [14, 11]).

The problem (8) is a projection of W_1 onto the intersection of convex regions defined by the constraints. It is well known that the Bregman projection onto the intersection of convex regions can be solved by sequential projections onto each region [1]. In the original papers only asymptotic convergence was shown. More recently a connection [4, 7] was made to the AdaBoost algorithm, which has an improved convergence analysis [2, 9]. We generalize the latter algorithm and its analysis to symmetric positive definite matrices and call the new algorithm DefiniteBoost. As in the original setting, only approximate projections (Figure 1) are required to show fast convergence.

[Figure 1 diagram omitted; panel labels: Approximate Projection, Exact Projection.]
Figure 1: In (exact) Bregman projections, the intersection of convex sets (i.e., two lines here) is found by iterating projections onto each set. We project only approximately, so the projected point does not satisfy the current constraint. Nevertheless, global convergence to the optimal solution is guaranteed via our proofs.

Before presenting the algorithm, let us derive the dual problem of (8) by means of Lagrange multipliers γ:

    γ* = argmin_{γ ≥ 0} log tr[ exp( log W_1 − Σ_{j=1}^n γ_j C_j ) ].    (9)

See [13] for a detailed derivation of the dual problem. When (8) is feasible, the optimal solution is described as W* = (1/Z(γ*)) exp(log W_1 − Σ_{j=1}^n γ_j* C_j), where Z(γ*) = tr[exp(log W_1 − Σ_{j=1}^n γ_j* C_j)].

² Note that if η is large then the on-line update (2) becomes a Bregman projection subject to a single equality constraint tr(W X_t) = y_t.
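A small sketch (ours, assuming SciPy) for evaluating the dual objective (9) numerically:

    import numpy as np
    from scipy.linalg import logm, expm

    def dual_objective(gammas, W1, Cs):
        """Evaluate (9): log tr exp(log W1 - sum_j gamma_j C_j), gammas >= 0."""
        A = logm(W1).real - sum(g * C for g, C in zip(gammas, Cs))
        return float(np.log(np.trace(expm((A + A.T) / 2.0)).real))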
4.1 Exact Bregman Projections
First, let us present the exact Bregman projection algorithm to solve (8). We start from the initial parameter W_1. At the t-th step, the most unsatisfied constraint is chosen, j_t = argmax_{j=1,…,n} tr(W_t C_j). Let us use C_t as the short notation for C_{j_t}. Then, the following Bregman projection with respect to the chosen constraint is solved:

    W_{t+1} = argmin_W Δ(W, W_t),  subject to  tr(W) = 1,  tr(W C_t) ≤ 0.    (10)

By means of a Lagrange multiplier α, the dual problem is described as

    α_t = argmin_{α ≥ 0} tr[ exp(log W_t − α C_t) ].    (11)

Using the solution of the dual problem, W_t is updated as

    W_{t+1} = (1/Z_t(α_t)) exp(log W_t − α_t C_t),    (12)

where the normalization factor is Z_t(α_t) = tr[exp(log W_t − α_t C_t)]. Note that we can use the same numerically stable update as in the previous section.
4.2 Approximate Bregman Projections
The solution of (11) cannot be obtained in closed form. However, one can use the following approximate solution:

    α_t = (1/(λ_t^max − λ_t^min)) · log( (1 + r_t/λ_t^max) / (1 + r_t/λ_t^min) ),    (13)

when the eigenvalues of C_t lie in the interval [λ_t^min, λ_t^max] and r_t = tr(W_t C_t). Since the most unsatisfied constraint is chosen, r_t ≥ 0 and thus α_t ≥ 0. Although the projection is done only approximately³, the convergence of the dual objective (9) can be shown using the following upper bound.

³ The approximate Bregman projection (with α_t as in (13)) can also be motivated as an online algorithm based on an entropic loss and learning rate one (following Section 3 and [4]).
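A compact sketch of the resulting DefiniteBoost loop (our own code under these assumptions; it recomputes the matrix exponential each iteration rather than maintaining an incremental eigendecomposition):

    import numpy as np
    from scipy.linalg import logm, expm, eigvalsh

    def definiteboost(W1, Cs, n_iter=100):
        """Pick the most violated constraint and apply update (12) with the
        closed-form step size (13). Assumes each C_t has lmin < 0 < lmax (as
        for the constraint matrices in Section 5). Normalization only shifts
        the exponent by multiples of I, so alpha*C_t can be accumulated."""
        S = logm(W1).real                # exponent of the current W
        W = W1 / np.trace(W1)
        for _ in range(n_iter):
            r = np.array([float(np.trace(W @ C)) for C in Cs])
            t = int(np.argmax(r))
            if r[t] <= 0:                # all constraints tr(W C_j) <= 0 hold
                break
            lmin, lmax = eigvalsh(Cs[t])[[0, -1]]
            alpha = np.log((1 + r[t] / lmax) / (1 + r[t] / lmin)) / (lmax - lmin)
            S -= alpha * Cs[t]
            W = expm((S + S.T) / 2.0)
            W /= np.trace(W)
        return W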
Theorem 4.1 The dual objective (9) is bounded as

    tr[ exp( log W_1 − Σ_{j=1}^n γ_j C_j ) ] ≤ Π_{t=1}^T φ(r_t),    (14)

where

    φ(r_t) = (1 − r_t/λ_t^max)^{λ_t^max/(λ_t^max − λ_t^min)} · (1 − r_t/λ_t^min)^{−λ_t^min/(λ_t^max − λ_t^min)}.

The dual objective is monotonically decreasing, because φ(r_t) ≤ 1. Also, since r_t corresponds to the maximum value among all constraint violations {r_j}_{j=1}^n, we have φ(r_t) = 1 only if r_t = 0. Thus the dual objective continues to decrease until all constraints are satisfied.
4.3 Relation to Boosting
When all matrices are diagonal, DefiniteBoost degenerates to AdaBoost [9]: Let {x_i, y_i}_{i=1}^d be the training samples, where x_i ∈ ℝ^m and y_i ∈ {−1, +1}. Let h_1(x), …, h_n(x) ∈ [−1, 1] be the weak hypotheses. For the j-th hypothesis h_j(x), let us define C_j = diag(y_1 h_j(x_1), …, y_d h_j(x_d)). Since |y h_j(x)| ≤ 1, we have λ_t^max = 1 and λ_t^min = −1 for any t. Setting W_1 = I/d, the dual objective (14) is rewritten as

    (1/d) Σ_{i=1}^d exp( −y_i Σ_{j=1}^n γ_j h_j(x_i) ),

which is equivalent to the exponential loss function used in AdaBoost. Since C_j and W_1 are diagonal, the matrix W_t stays diagonal after the update. If w_{ti} = [W_t]_{ii}, the updating formula (12) becomes the AdaBoost update: w_{t+1,i} = w_{ti} exp(−α_t y_i h_t(x_i)) / Z_t(α_t). The approximate solution of α_t in (13) becomes α_t = (1/2) log((1 + r_t)/(1 − r_t)), where r_t is the weighted training error of the t-th hypothesis, i.e. r_t = Σ_{i=1}^d w_{ti} y_i h_t(x_i).
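The diagonal case in code (our sketch; the vector w plays the role of the diagonal of W_t):

    import numpy as np

    def adaboost_step(w, margins):
        """Diagonal special case of update (12). margins[i] = y_i * h_t(x_i)
        in [-1, 1] with |r| < 1; w is the current distribution on examples."""
        r = float(np.dot(w, margins))               # weighted edge r_t
        alpha = 0.5 * np.log((1 + r) / (1 - r))     # alpha_t from (13)
        w_new = w * np.exp(-alpha * np.asarray(margins))
        return w_new / w_new.sum(), alpha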
5 Experiments on Learning Kernels
In this section, our technique is applied to learning a kernel matrix from a set of distance
measurements. This application is not on-line per se, but it shows nevertheless that the
theoretical bounds can be reasonably tight on natural data.
When K is a d × d kernel matrix among d objects, then K_ij characterizes the similarity between objects i and j. In the feature space, K_ij corresponds to the inner product between objects i and j, and thus the Euclidean distance can be computed from the entries of the kernel matrix [10]. In some cases, the kernel matrix is not given explicitly, but only a set
of distance measurements is available. The data are represented either as (i) quantitative
distance values (e.g., the distance between i and j is 0.75), or (ii) qualitative evaluations
(e.g., the distance between i and j is small) [14, 13]. Our task is to obtain a positive definite
kernel matrix which fits well to the given distance data.
On-line kernel learning In the first experiment, we consider the on-line learning scenario
in which only one distance example is shown to the learner at each time step. The distance
example at time t is described as {at , bt , yt }, which indicates that the squared Euclidean
distance between objects a_t and b_t is y_t. Let us define a time-developing sequence of kernel matrices as {W_t}_{t=1}^T, and the corresponding points in the feature space as {x_{ti}}_{i=1}^d (i.e. [W_t]_{ab} = x_{ta}^⊤ x_{tb}). Then, the total loss incurred by this sequence is

    Σ_{t=1}^T ( ‖x_{t a_t} − x_{t b_t}‖² − y_t )² = Σ_{t=1}^T ( tr(W_t X_t) − y_t )²,
[Figure 2 plots omitted: left panel axes Total Loss vs. Iterations (×10⁵); right panel axes Classification Error vs. Iterations (×10⁵).]
Figure 2: Numerical results of on-line learning. (Left) Total loss against the number of iterations; the dashed line shows the loss bound. (Right) Classification error of the nearest neighbor classifier using the learned kernel; the dashed line shows the error of the target kernel.
where X_t is a symmetric matrix whose (a_t, a_t) and (b_t, b_t) elements are 0.5, whose (a_t, b_t) and (b_t, a_t) elements are −0.5, and whose other elements are zero. We consider a controlled experiment in which the distance examples are created from a known target kernel matrix. We used a 52 × 52 kernel matrix among gyrB proteins of bacteria (d = 52). This data contains three bacteria species (see [12] for details). Each distance example is created by randomly choosing one element of the target kernel. The initial parameter was set as W_1 = I/d. When the comparison matrix U is set to the target matrix, L_U(S) = 0 and ℓ_max = 0, because all the distance examples are derived from the target matrix. Therefore we choose learning rate η = 2, which minimizes the relative loss bound of Lemma 3.2. The total loss of the kernel matrix sequence obtained by the matrix exponentiated gradient update is shown in Figure 2 (left). In the plot, we have also shown the relative loss bound. The bound seems to give a reasonably tight performance guarantee: it is about twice the actual total loss. To evaluate the learned kernel matrix, the prediction accuracy of bacteria species by the nearest neighbor classifier is calculated (Figure 2, right), where the 52 proteins are randomly divided into 50% training and 50% testing data. The value shown in the plot is the test error averaged over 10 different divisions. It took a large number of iterations (≈ 2 × 10⁵) for the error rate to converge to the level of the target kernel. In practice one can often increase the learning rate for faster convergence, but here we chose the small rate suggested by our analysis to check the tightness of the bound.
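The construction of the instance matrices X_t, with a hypothetical driver for this experiment (ours; it reuses the StableMEG sketch from Section 3.2 and assumes the target kernel K_target is available):

    import numpy as np

    def distance_instance(a, b, d):
        """Symmetric X_t whose trace product with W gives ||x_a - x_b||^2."""
        X = np.zeros((d, d))
        X[a, a] = X[b, b] = 0.5
        X[a, b] = X[b, a] = -0.5
        return X

    # Hypothetical driver (K_target: the known d x d target kernel):
    # d = 52
    # learner = StableMEG(np.eye(d) / d, eta=2.0)
    # for t in range(200000):
    #     a, b = np.random.choice(d, size=2, replace=False)
    #     y = K_target[a, a] + K_target[b, b] - 2 * K_target[a, b]
    #     learner.update(distance_instance(a, b, d), y)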
Kernel learning by Bregman projection  Next, let us consider a batch learning scenario where we have a set of qualitative distance evaluations (i.e. inequality constraints). Given n pairs of similar objects {a_j, b_j}_{j=1}^n, the inequality constraints are constructed as ‖x_{a_j} − x_{b_j}‖² ≤ γ, j = 1, …, n, where γ is a predetermined constant. If X_j is defined as in the previous section and C_j = X_j − γI, the inequalities are then rewritten as tr(W C_j) ≤ 0, j = 1, …, n. The largest and smallest eigenvalues of any C_j are 1 − γ and −γ, respectively. As in the previous section, distance examples are generated from the target kernel matrix between gyrB proteins. Setting γ = 0.2/d, we collected all object pairs whose distance in the feature space is less than γ to yield 980 inequalities (n = 980). Figure 3 (left) shows the convergence of the dual objective function as proven in Theorem 4.1. The convergence was much faster than in the previous experiment because, in the batch setting, one can choose the most unsatisfied constraint and optimize the step size as well. Figure 3 (right) shows the classification error of the nearest neighbor classifier. As opposed to the previous experiment, the error rate is higher than that of the target kernel matrix, because a substantial amount of information is lost by the conversion to inequality constraints.
[Figure 3 plots omitted: left panel axes Dual Obj vs. Iterations; right panel axes Classification Error vs. Iterations.]
Figure 3: Numerical results of Bregman projection. (Left) Convergence of the dual objective function. (Right) Classification error of the nearest neighbor classifier using the learned kernel.
6 Conclusion
We motivated and analyzed a new update for symmetric positive matrices using the von
Neumann divergence. We showed that the standard bounds for on-line learning and Boosting generalize to the case when the parameters are a symmetric positive definite matrix (of
trace one) instead of a probability vector. As in quantum physics, the eigenvalues act as
probabilities.
Acknowledgment  We would like to thank B. Schölkopf, M. Kawanabe, J. Liao and
W.S. Noble for fruitful discussions. M.W. was supported by NSF grant CCR 9821087 and
UC Discovery grant LSIT02-10110. K.T. and G.R. gratefully acknowledge partial support
from the PASCAL Network of Excellence (EU #506778). Part of this work was done while
all three authors were visiting the National ICT Australia in Canberra.
References
[1] L.M. Bregman. Finding the common point of convex sets by the method of successive projections. Dokl. Akad. Nauk SSSR, 165:487–490, 1965.
[2] Y. Freund and R.E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.
[3] S. Golden. Lower bounds for the Helmholtz function. Phys. Rev., 137:B1127–B1128, 1965.
[4] J. Kivinen and M.K. Warmuth. Boosting as entropy projection. In Proc. 12th Annu. Conference on Comput. Learning Theory, pages 134–144. ACM Press, New York, NY, 1999.
[5] J. Kivinen and M.K. Warmuth. Relative loss bounds for multidimensional regression problems. Machine Learning, 45(3):301–329, 2001.
[6] J. Kivinen and M.K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1–63, 1997.
[7] J. Lafferty. Additive models, boosting, and inference for generalized divergences. In Proc. 12th Annu. Conf. on Comput. Learning Theory, pages 125–133, New York, NY, 1999. ACM Press.
[8] M.A. Nielsen and I.L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2000.
[9] R.E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37:297–336, 1999.
[10] B. Schölkopf and A.J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[11] I.W. Tsang and J.T. Kwok. Distance metric learning with kernels. In Proceedings of the International Conference on Artificial Neural Networks (ICANN'03), pages 126–129, 2003.
[12] K. Tsuda, S. Akaho, and K. Asai. The EM algorithm for kernel matrix completion with auxiliary data. Journal of Machine Learning Research, 4:67–81, May 2003.
[13] K. Tsuda and W.S. Noble. Learning kernels from biological networks by maximizing entropy. Bioinformatics, 2004. To appear.
[14] E.P. Xing, A.Y. Ng, M.I. Jordan, and S. Russell. Distance metric learning with application to clustering with side-information. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 505–512. MIT Press, Cambridge, MA, 2003.
1,757 | 2,597 | Coarticulation in Markov Decision Processes
Khashayar Rohanimanesh
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
[email protected]
Robert Platt
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
[email protected]
Sridhar Mahadevan
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
[email protected]
Roderic Grupen
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
[email protected]
Abstract
We investigate an approach for simultaneously committing to multiple activities, each modeled as a temporally extended action in
a semi-Markov decision process (SMDP). For each activity we define a set of admissible solutions consisting of the redundant set of
optimal policies, and those policies that ascend the optimal statevalue function associated with them. A plan is then generated by
merging them in such a way that the solutions to the subordinate
activities are realized in the set of admissible solutions satisfying
the superior activities. We present our theoretical results and empirically evaluate our approach in a simulated domain.
1 Introduction
Many real-world planning problems involve concurrent optimization of a set of prioritized subgoals of the problem by dynamically merging a set of (previously learned)
policies optimizing the subgoals. A familiar example of this type of problem would
be a driving task which may involve subgoals such as safely navigating the car, talking on the cell phone, and drinking coffee, with the first subgoal taking precedence
over the others. In general this is a challenging problem, since activities often have
conflicting objectives and compete for limited amount of resources in the system.
We refer to the behavior of an agent that simultaneously commits to multiple objectives as Coarticulation, inspired by the coarticulation phenomenon in speech.
In this paper we investigate a framework based on semi-Markov decision processes
(SMDPs) for studying this problem. We assume that the agent has access to a set
of learned activities modeled by a set of SMDP controllers 𝒞 = {C1, C2, …, Cn}, each achieving a subgoal ωi from a set of subgoals Ω = {ω1, ω2, …, ωn}. We further assume that the agent–environment interaction is an episodic task where at the beginning of each episode a subset of subgoals Ω′ ⊆ Ω is introduced to the agent,
where subgoals are ranked according to some priority ranking system. The agent
is to devise a global policy by merging the policies associated with the controllers
into a global policy that simultaneously commits to them according to their degree
of significance. In general optimal policies of controllers do not offer flexibility required for the merging process. Thus for every controller we also compute a set of
admissible suboptimal policies that reflect the degree of flexibility we can afford in
it. Given a controller, an admissible policy is either an optimal policy, or it is a policy that ascends the optimal state-value function associated with the controller (i.e.,
in average leads to states with higher values), and is not too off from the optimal
policy. To illustrate this idea, consider Figure 1(a) that shows a two dimensional
[Figure 1 diagrams omitted: panel (a) shows actions a, b, c, d on a state-value surface for a controller C with start state S; panel (b) shows surfaces for controllers C1 and C2.]
Figure 1: (a) Actions a, b, and c are ascending on the state-value function associated with the controller C, while action d is descending; (b) actions a and c ascend the state-value functions of C1 and C2 respectively, while each descends the state-value function of the other controller. However, action b ascends the state-value functions of both controllers.
state-value function. Regions with darker colors represent states with higher values. Assume that the agent is currently in the state marked s. The arrows show the direction of state transitions resulting from executing different actions, namely actions a, b, c, and d. The first three actions lead the agent to states with higher values, in other words they ascend the state-value function, while action d descends it. Figure 1(b) shows how introducing admissible policies enables simultaneously solving multiple subgoals. In this figure, actions a and c are optimal in controllers C1 and C2 respectively, but they both descend the state-value function of the other controller. However, if we allow actions such as action b, we are guaranteed to ascend both value functions, with a slight degeneracy in optimality.
Most of the related work in the context of MDPs assume that the subprocesses
modeling the activities are additive utility independent [1, 2] and do not address
concurrent planning with temporal activities. In contrast we focus on problems that
involve temporal abstraction where the overall utility function may be expressed as
a non-linear function of sub-utility functions that have different priorities. Our approach is also similar in spirit to the redundancy utilization formalism in robotics
[4, 3, 6]. Most of these ideas, however, have been investigated in continuous domains
and have not been extended to discrete domains. In contrast we focus on discrete
domains modeled as MDPs.
In this paper we formally introduce the framework of redundant controllers in terms
of the set of admissible policies associated with them and present an algorithm for
merging such policies given a coarticulation task. We also present a set of theoretical results analyzing various properties of such controllers, and also the performance
of the policy merging algorithm. The theoretical results are complemented by an
experimental study that illustrates the trade-offs between the degree of flexibility
of controllers and the performance of the policy generated by the merging process.
2 Redundant Controllers
In this section we introduce the framework of redundant controllers and formally
define the set of admissible policies in them. For modeling controllers, we use
the concept of subgoal options [7]. A subgoal option can be viewed as a closed
loop controller that achieves a subgoal of some kind. Formally, a subgoal option
of an MDP M = ⟨S, A, P, R⟩ is defined by a tuple C = ⟨M_C, I, β⟩. The MDP M_C = ⟨S_C, A_C, P_C, R_C⟩ is the option MDP induced by the option C, in which S_C ⊆ S, A_C ⊆ A, P_C is the transition probability function induced by P, and R_C is chosen to reflect the subgoal of the option. The policy components of such options are the solutions to the option MDPs M_C associated with them. For generality,
throughout this paper we refer to subgoal options simply as controllers.
For theoretical reasons, in this paper we assume that each controller optimizes a
minimum cost-to-goal problem. An MDP M modeling a minimum cost-to-goal
problem includes a set of goal states S_G ⊆ S. We also represent the set of non-goal states by S_Ḡ = S − S_G. Every action in a non-goal state incurs some negative
reward and the agent receives a reward of zero in goal states. A controller C is a
minimum cost-to-goal controller, if MC optimizes a minimum cost-to-goal problem.
The controller also terminates with probability one in every goal state. We are
now ready to formally introduce the concept of ascending policies in an MDP:
Definition 1: Given an MDP M = ⟨S, A, P, R⟩, a function L : S → ℝ, and a deterministic policy π : S → A, let Δ_π(s) = E_{s′∼P_s^{π(s)}}[L(s′)] − L(s), where E_{s′∼P_s^{π(s)}}[·] is the expectation with respect to the distribution over next states given the current state and the policy π. Then π is ascending on L if, for every state s (except for the goal states if the MDP models a minimum cost-to-goal problem), we have Δ_π(s) > 0.
For an ascending policy π on a function L, the function Δ_π : S → ℝ⁺ gives a strictly positive value that measures how much the policy π ascends on L in state s. A deterministic policy π is descending on L if, for some state s, Δ_π(s) < 0. In general we would like to study how a given policy behaves with respect to the optimal value function in a problem. Thus we choose the function L to be the optimal state value function (i.e., V*). The above condition can then be interpreted as follows: we are interested in policies that in average lead to states with higher values, or in other words ascend the state-value function surface. Note that Definition 1 is closely related to the Lyapunov functions introduced in [5]. The minimum and maximum rates at which an ascending policy in average ascends V* are given by:
Definition 2: Assume that the policy π is ascending on the optimal state value function V*. Then π ascends on V* with a factor at least μ if, for all non-goal states s ∈ S_Ḡ, Δ_π(s) ≥ μ > 0. We also define the guaranteed expected ascend rate of π as μ_π = min_{s∈S_Ḡ} Δ_π(s). The maximum possible achievable expected ascend rate of π is given by μ^π = max_{s∈S_Ḡ} Δ_π(s).
One problem with ascending policies is that Definition 1 ignores the immediate reward which the agent receives. For example it could be the case that as a result
of executing an ascending policy, the agent transitions to some state with a higher
value, but receives a huge negative reward. This can be counterbalanced by adding
a second condition that keeps the ascending policies close to the optimal policy:
Definition 3: Given a minimum cost-to-goal problem modeled by an MDP M = ⟨S, A, P, R⟩, a deterministic policy π is ε-ascending on M if: (1) π is ascending on V*, and (2) ε is the maximum value in the interval (0, 1] such that ∀s ∈ S we have Q^π(s, π(s)) ≥ (1/ε) V*(s).

Here, ε measures how close the ascending policy π is to the optimal policy. For any ε, the second condition assures that ∀s ∈ S, Q^π(s, π(s)) ∈ [(1/ε) V*(s), V*(s)] (note that because M models a minimum cost-to-goal problem, all values are negative). Naturally we often prefer policies that are ε-ascending for values of ε close to 1. In section 3 we derive a lower bound on ε such that no policy is ascending on V* for values of ε smaller than this bound (in other words, ε cannot be arbitrarily small). Similarly, a deterministic policy π is called ε-ascending on C if π is ε-ascending on M_C.
Next, we introduce the framework of redundant controllers:
Definition 4: A minimum cost-to-goal controller C is an ε-redundant controller if there exist multiple deterministic policies that are either optimal, or ε-ascending on C. We represent the set of such admissible policies by Π_C^ε. Also, the minimum ascend rate of C is defined as μ̄ = min_{π∈Π_C^ε} μ_π, where μ_π is the ascend rate of a policy π ∈ Π_C^ε (see Definition 2).

We can compute the ε-redundant set of policies for a controller C as follows. Using the reward model, the state transition model, V* and Q*, in every state s ∈ S we compute the set of actions that are ε-ascending on C, represented by A_C^ε(s) = {a ∈ A | a = π(s), π ∈ Π_C^ε}, i.e., the actions that satisfy both conditions of Definition 3.
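A small sketch (ours, with hypothetical table-based data structures) of this computation for a single state:

    def epsilon_ascending_actions(s, eps, V, Q, P, goal_states):
        """A_C^eps(s): actions satisfying both conditions of Definition 3.
        V and Q are optimal value tables, P[s][a] maps next states to
        probabilities; all values are negative (minimum cost-to-goal)."""
        if s in goal_states:
            return set()
        admissible = set()
        for a in Q[s]:
            delta = sum(p * V[s2] for s2, p in P[s][a].items()) - V[s]  # Delta_pi(s)
            if delta > 0 and Q[s][a] >= V[s] / eps:  # ascend, and stay eps-close
                admissible.add(a)
        return admissible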
Next, we present an algorithm for merging policies associated with a set of prioritized redundant controllers that run in parallel. For specifying the order of priority
relation among the controllers we use the expression Cj / Ci , where the relation
?/? expresses the subject-to relation (taken from [3]). This equation should read:
controller Cj subject-to controller Ci . A priority ranking system is then specified
by a set of relations {Cj / Ci }. Without loss of generality we assume that the
controllers are prioritized based on the following ranking system: {Cj / Ci |i < j}.
Algorithm MergeController summarizes the policy merging process.

Algorithm 1 Function MergeController(s, C1, C2, …, Cm)
1: Input: current state s; the set of controllers Ci; the redundant sets A_{Ci}^{εi}(s) for every controller Ci.
2: Initialize: Φ1(s) = A_{C1}^{ε1}(s).
3: For i = 2, 3, …, m perform:
   Φi(s) = {a | a ∈ A_{Ci}^{εi}(s) ∧ a ∈ Φ_{f(i)}(s)}, where f(i) = max j < i such that Φj(s) ≠ ∅ (initially f(1) = 1).
4: Return an action a ∈ Φ_{f(m+1)}(s).
In this algorithm, Φi(s) represents the ordered intersection of the redundant sets A_{Cj}^{εj} up to the controller Ci (i.e., 1 ≤ j ≤ i), constrained by the order of priority. In other words, each set Φi(s) contains a set of actions in state s that are all εj-ascending with respect to the superior controllers C1, C2, …, Ci. Due to the limited amount of redundancy in the system, it is possible that the system may not be able to commit to some of the subordinate controllers. This happens when none of the actions with respect to some controller Cj (i.e., a ∈ A_{Cj}^{εj}(s)) are ε-ascending with respect to the superior controllers. In this case the algorithm skips the controller Cj and continues the search in the redundant sets of the remaining subordinate controllers. The complexity of the above algorithm consists of the following costs: (1) the cost of computing the redundant sets A_{Ci}^{εi} for a controller, which is linear in the number of states and actions, O(|S| |A|); and (2) the cost of performing Algorithm MergeController in every state s, which is O((m − 1) |A|²), where m is the number of subgoals. In the next section, we theoretically analyze redundant controllers and the performance of the policy merging algorithm in various situations.
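A minimal sketch of Algorithm 1 (ours; redundant_sets holds the per-controller sets A_{Ci}^{εi}(s) in decreasing priority order):

    def merge_controller(s, redundant_sets):
        """Algorithm 1. Phi tracks the last non-empty ordered intersection
        Phi_{f(i)}(s); controllers whose set would empty it are skipped."""
        phi = set(redundant_sets[0])
        for A_i in redundant_sets[1:]:
            candidates = phi & set(A_i)
            if candidates:        # we can also commit to this controller
                phi = candidates
            # otherwise skip C_i: keep phi = Phi_{f(i)} unchanged
        return next(iter(phi)) if phi else None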
3
Theoretical Results
In this section we present some of our theoretical results characterizing ε-redundant controllers, in terms of bounds on the number of time steps it takes for a controller to complete its task, and the performance of the policy merging algorithm. For lack of space, we have left out the proofs and refer the readers to [8]. In section 2 we stated that there is a lower bound on ε such that there exists no ε-ascending policy for values smaller than this bound. In the first theorem we compute this lower bound:
Theorem 1 Let M = ⟨S, A, P, R⟩ be a minimum cost-to-goal MDP and let π be an ε-ascending policy defined on M. Then ε is bounded by ε > |V*_max| / |V*_min|, where V*_min = min_{s∈S_Ḡ} V*(s) and V*_max = max_{s∈S_Ḡ} V*(s).
Such a lower bound characterizes the maximum flexibility we can afford in a redundant controller and gives us insight into the range of values we can choose for it. In the second theorem we derive an upper bound on the expected number of steps that a minimum cost-to-goal controller takes to complete when executing an ε-ascending policy:
Theorem 2 Let C be an ε-ascending minimum cost-to-goal controller and let s denote the current state of the controller. Then any ε-ascending policy π on C will terminate the controller in some goal state with probability one. Furthermore, termination occurs in average in at most ⌈−V*(s)/μ_π⌉ steps, where μ_π is the guaranteed expected ascend rate of the policy π.
This result assures that the controller arrives in a goal state and will achieve its
goal in a bounded number of steps. We use this result when studying performance
of running multiple redundant controllers in parallel. Next, we study how concurrent execution of two controllers using Algorithm MergeController impacts each
controller (this result can be trivially extended to the case when a set of m > 2
controllers are executed concurrently):
Theorem 3 Given an MDP M = ⟨S, A, P, R⟩ and any two minimum cost-to-goal redundant controllers {C1, C2} defined over M, the policy π obtained by Algorithm MergeController based on the ranking system {C2 ◁ C1} is ε1-ascending on C1. Moreover, if ∀s ∈ S, A_{C1}^{ε1}(s) ∩ A_{C2}^{ε2}(s) ≠ ∅, policy π will be ascending on both controllers with ascend rate at least min{μ̄1, μ̄2}.
This theorem states that merging the policies of two controllers using Algorithm MergeController generates a policy that remains ε1-ascending on the superior controller. In other words, it does not negatively impact the superior controller. In the next theorem, we establish bounds on the expected number of steps it takes for the policy obtained by Algorithm MergeController to achieve a set of prioritized subgoals Ω = {ω1, …, ωm} by concurrently executing the associated controllers {C1, …, Cm}:
Theorem 4 Assume 𝒞 = {C1, C2, …, Cm} is a set of minimum cost-to-goal εi-redundant (i = 1, …, m) controllers defined over the MDP M. Let π denote the policy obtained by Algorithm MergeController based on the ranking system {Cj ◁ Ci | i < j}. Let λ_π(s) denote the expected number of steps for the policy π to achieve all the subgoals {ω1, ω2, …, ωm} associated with the set of controllers, assuming that the current state of the system is s. Then the following expression holds:

    max_i ⌈ −V_i*(s) / μ^{πi} ⌉ ≤ λ_π(s) ≤ Σ_{h∈H} P(h) Σ_{i=1}^m ⌈ −V_i*(h(i)) / μ̄_i ⌉,    (1)

where μ^{πi} is the maximum possible achievable expected ascend rate for the controller Ci (see Definition 2), H is the set of sequences h = ⟨s, g1, g2, …, gm⟩ in which gi is a goal state of controller Ci (i.e., gi ∈ S_{Gi}), and h(i) denotes the i-th element of h (so h(1) = s and h(i) = g_{i−1} for i ≥ 2, the state in which controller Ci is initiated). The probability distribution P(h) = P^{C1}_{s g1} Π_{i=2}^m P^{Ci}_{g_{i−1} gi} over sequences h ∈ H gives the probability of executing the set of controllers in sequence based on the order of priority, starting in state s and observing the goal state sequence ⟨g1, …, gm⟩.
Based on Theorem 3, when Algorithm MergeController always finds a policy π that optimizes all controllers (i.e., ∀s ∈ S, ∩_{i=1}^m A_{Ci}^{εi}(s) ≠ ∅), policy π will ascend on all controllers. Thus in average the total time for all controllers to terminate equals the time required for the controller that takes the most time to complete, which has the lower bound max_i ⌈−V_i*(s)/μ^{πi}⌉. The worst case happens when the policy π generated by Algorithm MergeController cannot optimize more than one controller at a time. In this case π always optimizes the controller with the highest priority until its termination, then optimizes the controller with the second highest priority, and continues this process to the end in a sequential manner. The right hand side of the inequality given by Equation 1 gives an upper bound on the expected time required for all controllers to complete when they are executed sequentially. The above theorem implicitly states that when Algorithm MergeController generates a policy that in average commits to more than one subgoal, it potentially takes fewer steps to achieve all the subgoals, compared to a policy that sequentially achieves them according to their degree of significance.
4 Experiments
In this section we present our experimental results analyzing redundant controllers
and the policy merging algorithm described in section 2. Figure 2(a) shows a
10 × 10 grid world where an agent is to visit a set of prioritized locations marked
by G1, …, Gm (in this example m = 4). The agent's goal is to achieve all of the
subgoals by focusing on superior subgoals and coarticulating with the subordinate
ones. Intuitively, when the agent is navigating to some subgoal Gi of higher priority,
if some subgoal of lower priority Gj is en route to Gi , or not too off from the optimal
path to Gi , the agent may choose to visit Gj . We model this problem by an MDP
[Figure 2 policy diagrams omitted: panels (a)-(d) over the grid with subgoal locations G1-G4.]
Figure 2: (a) A 10 × 10 grid world where an agent is to visit a set of prioritized subgoal locations; (b) the optimal policy associated with the subgoal G1; (c) the ε-ascending policy for ε = 0.95; (d) the ε-ascending policy for ε = 0.90.
M = ⟨S, A, P, R⟩, where S is the set of states consisting of the 100 locations in the room, and A is the set of actions consisting of eight stochastic navigation actions (four actions in the compass directions, and four diagonal actions). Each action moves the agent in the corresponding direction with probability p and fails with probability (1 − p) (in all of the experiments we used success probability p = 0.9). Upon failure the agent is randomly placed in one of the eight neighboring locations with equal probability. If a movement would take the agent into a wall, then the agent will remain in the same location. The agent also receives a reward of −1 for every action executed. We assume that the agent has access to a set of controllers C1, …, Cm associated with the set of subgoal locations G1, …, Gm. A controller Ci is a minimum cost-to-goal subgoal option Ci = ⟨M_{Ci}, I, β⟩, where M_{Ci} = M, the initiation set I includes every location except for the subgoal location, and β forces the option to terminate only in the subgoal location. Figures 2(b)-(d) show examples of admissible policies for subgoal G1: Figure 2(b) shows the optimal policy of the controller C1 (navigating the agent to the location G1). Figures 2(c) and 2(d) show the ε-redundant policies for ε = 0.95 and ε = 0.90 respectively. Note that by reducing ε, we obtain a larger set of admissible policies, although less optimal ones.
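A hypothetical wiring of these pieces for the grid-world task (ours; it reuses the epsilon_ascending_actions and merge_controller sketches above, and the per-controller value tables are assumed precomputed):

    def concurrent_policy(s, controllers, eps=0.9):
        """controllers: list of (V, Q, P, goal_states) tuples in priority
        order (G1's controller first). Builds each admissible set as the
        eps-ascending actions plus an optimal action, then merges."""
        sets = []
        for V, Q, P, goals in controllers:
            A = epsilon_ascending_actions(s, eps, V, Q, P, goals)
            A.add(max(Q[s], key=Q[s].get))   # Definition 4 also admits optimal actions
            sets.append(A)
        return merge_controller(s, sets)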
We use two different planning methods: (1) sequential planning, where we achieve
the subgoals sequentially by executing the controllers one at a time according to
the order of priority of subgoals, (2) concurrent planning, where we use Algorithm
MergeController for merging the policies associated with the controllers. In the
first set of experiments, we fix the number of subgoals. At the beginning of each
episode the agent is placed in a random location, and a fixed number of subgoals
(in our experiments m = 4) are randomly selected. Next, the set of admissible
policies (using = 0.9) for every subgoal is computed. Figure 3(a) shows the performance of both planning methods, for every starting location in terms of number
of steps for completing the overall task. The concurrent planning method consistently outperforms the sequential planning in all starting locations. Next, for the
[Figure 3 plots omitted: left panel axes Average (steps) vs. State; right panel axes Average (steps) vs. Epsilon; legends Concurrent, Sequential.]
Figure 3: (a) Performance of both planning methods in terms of the average number of steps from every starting state; (b) performance of the concurrent method for different values of ε.
same task, we measure how the performance of the concurrent method varies with ε when computing the set of ε-ascending policies for every subgoal. Figure 3(b) shows the performance of the concurrent method, and Figure 4(a) shows the average number of subgoals coarticulated by the agent, averaged over all states, for different values of ε. We varied ε from 0.6 to 1.0 using 0.05 intervals. All of these results are also averaged over 100 episodes, each consisting of 10 trials. Note that for ε = 1, the only admissible policy is the optimal policy, and thus it does not offer much flexibility with respect to the other subgoals. This can be seen in Figure 3(b), in which the policy generated by the merging algorithm for ε = 1.0 has the minimum commitment to the other subgoals. As we reduce ε, we obtain a larger set of admissible policies, and thus we observe improvement in the performance. However, the more we reduce ε, the less optimal the admissible policies we obtain become, and the performance degrades (here we can observe this for values below ε = 0.85). Figure 4(a) also shows that by relaxing optimality (reducing ε), the policy generated by the merging algorithm commits to more subgoals simultaneously.

In the final set of experiments, we fixed ε at 0.9 and varied the number of subgoals from m = 2 to m = 50 (all of these results are averaged over 100 episodes, each consisting of 10 trials). Figure 4(b) shows the performance of both planning methods. It can be observed that the concurrent method consistently outperforms the sequential method as the number of subgoals increases (the top curve shows the performance of the sequential method and the bottom curve that of the concurrent method). This is because when there are many subgoals, the concurrent planning
[Figure 4 plots omitted: left panel axes Number of subgoals committed vs. Epsilon; right panel axes Average (steps) vs. Number of subgoals; legends Concurrent, Sequential.]
Figure 4: (a) Average number of subgoals coarticulated using the concurrent planning method for different values of ε; (b) performance of the planning methods in terms of the average number of steps from every starting state.
method is able to visit multiple subgoals of lower priority en route to the primary subgoals, and thus it can save more time.
5 Concluding Remarks
There are a number of questions and open issues that remain to be addressed and
many interesting directions in which this work can be extended. In many problems,
the strict order of priority of subtasks may be violated: in some situations we may
want to be sub-optimal with respect to the superior subtasks in order to improve
the overall performance. One other interesting direction is to study situations when
actions are structured. We are currently investigating compact representation of
the set of admissible policies by exploiting the structure of actions.
Acknowledgements
This research is supported in part by a grant from the National Science Foundation
#ECS-0218125.
References
[1] C. Boutilier, R. Brafman, and C. Geib. Prioritized goal decomposition of Markov decision processes: Towards a synthesis of classical and decision theoretic planning. In Martha Pollack, editor, Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence, pages 1156–1163, San Francisco, 1997. Morgan Kaufmann.
[2] C. Guestrin and G. Gordon. Distributed planning in hierarchical factored MDPs. In Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, pages 197–206, Edmonton, Canada, 2002.
[3] M. Huber. A Hybrid Architecture for Adaptive Robot Control. PhD thesis, University of Massachusetts, Amherst, 2000.
[4] Y. Nakamura. Advanced Robotics: Redundancy and Optimization. Addison-Wesley Pub. Co., 1991.
[5] Theodore J. Perkins and Andrew G. Barto. Lyapunov-constrained action sets for reinforcement learning. In Proc. 18th International Conf. on Machine Learning, pages 409–416. Morgan Kaufmann, San Francisco, CA, 2001.
[6] R. Platt, A. Fagg, and R. Grupen. Nullspace composition of control laws for grasping. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2002.
[7] D. Precup. Temporal Abstraction in Reinforcement Learning. PhD thesis, Department of Computer Science, University of Massachusetts, Amherst, 2000.
[8] K. Rohanimanesh, R. Platt, S. Mahadevan, and R. Grupen. A framework for coarticulation in Markov decision processes. Technical Report 04-33 (www.cs.umass.edu/~khash/coarticulation04.pdf), Department of Computer Science, University of Massachusetts, Amherst, Massachusetts, USA, 2004.
Triangle Fixing Algorithms for the Metric
Nearness Problem
Inderjit S. Dhillon
Suvrit Sra
Dept. of Computer Sciences
The Univ. of Texas at Austin
Austin, TX 78712.
{inderjit,suvrit}@cs.utexas.edu
Joel A. Tropp
Dept. of Mathematics
The Univ. of Michigan at Ann Arbor
Ann Arbor, MI, 48109.
[email protected]
Abstract
Various problems in machine learning, databases, and statistics involve
pairwise distances among a set of objects. It is often desirable for these
distances to satisfy the properties of a metric, especially the triangle inequality. Applications where metric data is useful include clustering,
classification, metric-based indexing, and approximation algorithms for
various graph problems. This paper presents the Metric Nearness Problem: Given a dissimilarity matrix, find the ?nearest? matrix of distances
that satisfy the triangle inequalities. For `p nearness measures, this paper develops efficient triangle fixing algorithms that compute globally
optimal solutions by exploiting the inherent structure of the problem.
Empirically, the algorithms have time and storage costs that are linear
in the number of triangle constraints. The methods can also be easily
parallelized for additional speed.
1 Introduction
Imagine that a lazy graduate student has been asked to measure the pairwise distances
among a group of objects in a metric space. He does not complete the experiment, and
he must figure out the remaining numbers before his adviser returns from her conference.
Obviously, all the distances need to be consistent, but the student does not know very much
about the space in which the objects are embedded. One way to solve his problem is to find
the "nearest" complete set of distances that satisfy the triangle inequalities. This procedure
respects the measurements that have already been taken while forcing the missing numbers
to behave like distances.
More charitably, suppose that the student has finished the experiment, but, measurements
being what they are, the numbers do not satisfy the triangle inequality. The student knows
that they must represent distances, so he would like to massage the data so that it corresponds with his a priori knowledge. Once again, the solution seems to require the "nearest"
set of distances that satisfy the triangle inequalities.
Matrix nearness problems [6] offer a natural framework for developing this idea. If there
are n points, we may collect the measurements into an n × n symmetric matrix whose
(j, k) entry represents the dissimilarity between the j-th and k-th points. Then, we seek to
approximate this matrix by another whose entries satisfy the triangle inequalities. That is,
m_ik ≤ m_ij + m_jk for every triple (i, j, k). Any such matrix will represent the distances
among n points in some metric space. We calculate approximation error with a distortion
measure that depends on how the corrected matrix should relate to the input matrix. For
example, one might prefer to change a few entries significantly or to change all the entries
a little.
We call the problem of approximating general dissimilarity data by metric data the Metric
Nearness (MN) Problem. This simply stated problem has not previously been studied, although the literature does contain some related topics (see Section 1.1). This paper presents
a formulation of the Metric Nearness Problem (Section 2), and it shows that every locally
optimal solution is globally optimal. To solve the problem we present triangle-fixing algorithms that take advantage of its structure to produce globally optimal solutions. It can
be computationally prohibitive, both in time and storage, to solve the MN problem without
these efficiencies.
1.1 Related Work
The Metric Nearness (MN) problem is novel, but the literature contains some related work.
The most relevant research appears in a recent paper of Roth et al. [11]. They observe
that machine learning applications often require metric data, and they propose a technique
for metrizing dissimilarity data. Their method, constant-shift embedding, increases all the
dissimilarities by an equal amount to produce a set of Euclidean distances (i.e., a set of
numbers that can be realized as the pairwise distances among an ensemble of points in a
Euclidean space). The size of the translation depends on the data, so the relative and absolute changes to the dissimilarity values can be large. Our approach to metrizing data is
completely different. We seek a consistent set of distances that deviates as little as possible from the original measurements. In our approach, the resulting set of distances can
arise from an arbitrary metric space; we do not restrict our attention to obtaining Euclidean
distances. In consequence, we expect metric nearness to provide superior denoising. Moreover, our techniques can also learn distances that are missing entirely.
There is at least one other method for inferring a metric. An article of Xing et al. [12]
proposes a technique for learning a Mahalanobis distance for data in R^s, that is, a metric
dist(x, y) = √((x − y)^T G (x − y)), where G is an s × s positive semi-definite matrix.
The user specifies that various pairs of points are similar or dissimilar. Then the matrix
G is computed by minimizing the total squared distances between similar points while
forcing the total distances between dissimilar points to exceed one. The article provides
explicit algorithms for the cases where G is diagonal and where G is an arbitrary positive
semi-definite matrix. In comparison, the metric nearness problem is not restricted to Mahalanobis distances; it can learn a general discrete metric. It also allows us to use specific
distance measurements and to indicate our confidence in those measurements (by means of
a weight matrix), rather than forcing a binary choice of "similar" or "dissimilar."
The Metric Nearness Problem may appear similar to metric Multi-Dimensional Scaling
(MDS) [8], but we emphasize that the two problems are distinct. The MDS problem endeavors to find an ensemble of points in a prescribed metric space (usually a Euclidean
space) such that the distances between these points are close to the set of input distances.
In contrast, the MN problem does not seek to find an embedding. In fact MN does not
impose any hypotheses on the underlying space other than requiring it to be a metric space.
The outline of the rest of the paper is as follows. Section 2 formally describes the MN problem.
In Section 3, we present algorithms that allow us to solve MN problems with ℓp nearness
measures. Some applications and experimental results follow in Section 4. Section 5 discusses our results, some interesting connections, and possibilities for future research.
2 The Metric Nearness Problem
We begin with some basic definitions. We define a dissimilarity matrix to be a nonnegative,
symmetric matrix with zero diagonal. Meanwhile, a distance matrix is defined to be a
dissimilarity matrix whose entries satisfy the triangle inequalities. That is, M is a distance
matrix if and only if it is a dissimilarity matrix and m_ik ≤ m_ij + m_jk for every triple of
distinct indices (i, j, k). Distance matrices arise from measuring the distances among n
points in a pseudo-metric space (i.e., two distinct points can lie at zero distance from each
other). A distance matrix contains N = n(n − 1)/2 free parameters, so we denote the
collection of all distance matrices by M_N. The set M_N is a closed, convex cone.
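As a concrete illustration (ours, not part of the original paper), membership in M_N can be tested by enumerating the triangle inequalities directly; the sketch below counts the violations in a symmetric dissimilarity matrix.

```python
import numpy as np

def count_triangle_violations(D, tol=1e-12):
    """Count violated triangle inequalities in a symmetric dissimilarity
    matrix D; D lies in the cone M_N exactly when the count is zero."""
    n = D.shape[0]
    violations = 0
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                # each unordered triple gives three inequalities
                for a, b, c in ((i, j, k), (j, k, i), (k, i, j)):
                    if D[a, b] > D[a, c] + D[c, b] + tol:
                        violations += 1
    return violations
```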
The metric nearness problem requests a distance matrix M that is closest to a given dissimilarity matrix D with respect to some measure of "closeness." In this work, we restrict
our attention to closeness measures that arise from norms. Specifically, we seek a distance
matrix M so that

$M \in \operatorname{argmin}_{X \in \mathcal{M}_N} \| W \odot (X - D) \|$,    (2.1)

where ‖·‖ is a norm, W is a symmetric non-negative weight matrix, and ⊙ denotes the
elementwise (Hadamard) product of two matrices. The weight matrix reflects our confidence in the entries of D. When each d_ij represents a measurement with variance σ_ij², we
might set w_ij = 1/σ_ij². If an entry of D is missing, one can set the corresponding weight
to zero.
Theorem 2.1. The function X ↦ ‖W ⊙ (X − D)‖ always attains its minimum on M_N.
Moreover, every local minimum is a global minimum. If, in addition, the norm is strictly
convex and the weight matrix has no zeros or infinities off its diagonal, then there is a
unique global minimum.
Proof. The main task is to show that the objective function has no directions of recession,
so it must attain a finite minimum on M_N. Details appear in [4].
It is possible to use any norm in the metric nearness problem. We further restrict our
attention to the ℓp norms. The associated Metric Nearness Problems are

$\min_{X \in \mathcal{M}_N} \Big( \sum_{j \neq k} w_{jk} \, |x_{jk} - d_{jk}|^p \Big)^{1/p}$  for 1 ≤ p < ∞,    (2.2)

and

$\min_{X \in \mathcal{M}_N} \; \max_{j \neq k} \; w_{jk} \, |x_{jk} - d_{jk}|$  for p = ∞.    (2.3)

Note that the ℓp norms are strictly convex for 1 < p < ∞, and therefore the solution to (2.2)
is unique. There is a basic intuition for choosing p. The ℓ1 norm gives the absolute sum
of the (weighted) changes to the input matrix, while the ℓ∞ norm reflects only the maximum
absolute change. The other ℓp norms interpolate between these extremes. Therefore, a
small value of p typically results in a solution that makes a few large changes to the original
data, while a large value of p typically yields a solution with many small changes.
3 Algorithms
This section describes efficient algorithms for solving the Metric Nearness Problems (2.2)
and (2.3). For ease of exposition, we assume that all weights equal one. At first, it may
appear that one should use quadratic programming (QP) software when p = 2, linear programming (LP) software when p = 1 or p = ∞, and convex programming software for
the remaining p. It turns out that the time and storage requirements of this approach can
be prohibitive. An efficient algorithm must exploit the structure of the triangle inequalities.
In this paper, we develop one such approach, which may be viewed as a triangle-fixing
algorithm. This method examines each triple of points in turn and optimally enforces any
triangle inequality that fails. (The definition of "optimal" depends on the ℓp nearness measure.)
converges to a globally optimal solution of MN.
Notation. We must introduce some additional notation before proceeding. To each matrix
X of dissimilarities or distances, we associate the vector x formed by stacking the columns
of the lower triangle, left to right. We use x_ij to refer to the (i, j) entry of the matrix as
well as the corresponding component of the vector. Define a constraint matrix A so that
M is a distance matrix if and only if Am ≤ 0. Note that each row of A contains three
nonzero entries: +1, −1, and −1.
3.1 MN for the ℓ2 norm
We first develop a triangle-fixing algorithm for solving (2.2) with respect to the ℓ2 norm.
This case turns out to be the simplest and most illuminating. It also plays a pivotal
role in the algorithms for the ℓ1 and ℓ∞ MN problems.
Given a dissimilarity vector d, we wish to find its orthogonal projection m onto the cone
M_N. Let us introduce an auxiliary variable e = m − d that represents the changes to the
original distances. We also define b = −Ad. The negative entries of b indicate how much
each triangle inequality is violated. The problem becomes

$\min_{e} \|e\|_2$  subject to  $Ae \leq b$.    (3.1)

After finding the minimizer e*, we can use the relation m* = d + e* to recover the optimal
distance vector.
Here is our approach. We initialize the vector of changes to zero (e = 0), and then we
begin to cycle through the triangles. Suppose that the (i, j, k) triangle inequality is violated,
i.e., e_ij − e_jk − e_ki > b_ijk. We wish to remedy this violation by making an ℓ2-minimal
adjustment of e_ij, e_jk, and e_ki. In other words, the vector e is projected orthogonally
onto the constraint set {e′ : e′_ij − e′_jk − e′_ki ≤ b_ijk}. This is tantamount to solving

$\min_{e'} \tfrac{1}{2}\big((e'_{ij} - e_{ij})^2 + (e'_{jk} - e_{jk})^2 + (e'_{ki} - e_{ki})^2\big)$  subject to  $e'_{ij} - e'_{jk} - e'_{ki} = b_{ijk}$.    (3.2)

It is easy to check that the solution is given by

$e'_{ij} \leftarrow e_{ij} - \theta_{ijk}, \quad e'_{jk} \leftarrow e_{jk} + \theta_{ijk}, \quad e'_{ki} \leftarrow e_{ki} + \theta_{ijk}$,    (3.3)

where $\theta_{ijk} = \tfrac{1}{3}(e_{ij} - e_{jk} - e_{ki} - b_{ijk}) > 0$. Only three components of the vector e
need to be updated. The updates in (3.3) show that the largest edge weight in the triangle
is decreased, while the other two edge weights are increased.
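In code, the update (3.3) is only a few lines. The sketch below is our illustration of the single-triangle ℓ2 fix; the argument names are ours.

```python
def fix_one_triangle(e_ij, e_jk, e_ki, b_ijk):
    """Apply the l2 projection (3.3) to a single triangle inequality
    e_ij - e_jk - e_ki <= b_ijk; returns the three values unchanged
    when the inequality already holds."""
    theta = (e_ij - e_jk - e_ki - b_ijk) / 3.0
    if theta <= 0.0:                       # no violation, nothing to fix
        return e_ij, e_jk, e_ki
    return e_ij - theta, e_jk + theta, e_ki + theta
```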
In turn, we fix each violated triangle inequality using (3.3). We must also introduce a
correction term to guide the algorithm to the global minimum. The corrections have a
simple interpretation in terms of the dual of the minimization problem (3.1). Each dual
variable corresponds to the violation in a single triangle inequality, and each individual
correction results in a decrease in the violation. We continue until no triangle receives a
significant update.
Algorithm 3.1 displays the complete iterative scheme that performs triangle fixing along
with appropriate corrections.
Algorithm 3.1: Triangle Fixing for the ℓ2 norm.

TriangleFixing(D, κ)
Input: dissimilarity matrix D, tolerance κ
Output: M = argmin_{X ∈ M_N} ‖X − D‖_2

for 1 ≤ i < j < k ≤ n
    (z_ijk, z_jki, z_kij) ← 0          {initialize correction terms}
for 1 ≤ i < j ≤ n
    e_ij ← 0                           {initial error values for each dissimilarity d_ij}
δ ← 1 + κ                              {parameter for testing convergence}
while (δ > κ)                          {convergence test}
    foreach triangle (i, j, k)
        b ← d_ki + d_jk − d_ij
        μ ← (1/3)(e_ij − e_jk − e_ki − b)                       (*)
        θ ← min{−μ, z_ijk}             {stay within half-space of constraint}
        e_ij ← e_ij + θ,  e_jk ← e_jk − θ,  e_ki ← e_ki − θ     (**)
        z_ijk ← z_ijk − θ              {update correction term}
    end foreach
    δ ← sum of changes in the e values
end while
return M = D + E
Remark: Algorithm 3.1 is an efficient adaptation of Bregman's method [1]. By itself,
Bregman's method would suffer the same storage and computation costs as a general convex optimization algorithm. Our triangle-fixing operations allow us to compactly represent
and compute the intermediate variables required to solve the problem. The correctness and
convergence properties of Algorithm 3.1 follow from those of Bregman's method. Furthermore, our algorithms are very easy to implement.
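The following Python sketch renders Algorithm 3.1 under our reading of the pseudocode above, using dense numpy storage, a dictionary of correction terms, and a fixed cap on the number of passes; it is an illustration rather than the authors' implementation.

```python
import numpy as np

def triangle_fixing_l2(D, kappa=1e-5, max_passes=500):
    """Sketch of Algorithm 3.1: cyclic triangle fixing with Bregman
    correction terms for the l2 Metric Nearness Problem."""
    n = D.shape[0]
    E = np.zeros((n, n))        # symmetric matrix of changes e_ij
    Z = {}                      # one correction term per triangle inequality
    for _ in range(max_passes):
        delta = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                for k in range(j + 1, n):
                    # three inequalities per triple; (a, c) is the long edge
                    for a, c, m in ((i, j, k), (j, k, i), (k, i, j)):
                        b = D[a, m] + D[m, c] - D[a, c]
                        mu = (E[a, c] - E[a, m] - E[m, c] - b) / 3.0
                        z = Z.get((a, c, m), 0.0)
                        theta = min(-mu, z)   # stay within the half-space
                        E[a, c] += theta; E[c, a] = E[a, c]
                        E[a, m] -= theta; E[m, a] = E[a, m]
                        E[m, c] -= theta; E[c, m] = E[m, c]
                        Z[(a, c, m)] = z - theta
                        delta += abs(theta)
        if delta <= kappa:      # no significant updates in this pass
            break
    return D + E
```

The triple enumeration dominates the cost, which is consistent with the linear-in-constraints behavior reported in Section 4.1.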
3.2 MN for the ℓ1 and ℓ∞ norms
The basic triangle-fixing algorithm succeeds only when the norm used in (2.2) is strictly
convex. Hence, it cannot be applied directly to the ℓ1 and ℓ∞ cases. These require a more
sophisticated approach.
First, observe that the problem of minimizing the ℓ1 norm of the changes can be written as
an LP:

$\min_{e,f} \; 0^T e + \mathbf{1}^T f$  subject to  $Ae \leq b, \;\; -e - f \leq 0, \;\; e - f \leq 0$.    (3.4)

The auxiliary variable f can be interpreted as the absolute value of e. Similarly, minimizing
the ℓ∞ norm of the changes can be accomplished with the LP

$\min_{e,\omega} \; 0^T e + \omega$  subject to  $Ae \leq b, \;\; -e - \omega\mathbf{1} \leq 0, \;\; e - \omega\mathbf{1} \leq 0$.    (3.5)

We interpret ω = ‖e‖_∞.
Solving these linear programs using standard software can be prohibitively expensive because of the large number of constraints. Moreover, the solutions are not unique because
the ℓ1 and ℓ∞ norms are not strictly convex. Instead, we replace the LP by a quadratic
program (QP) that is strictly convex and returns the solution of the LP that has minimum
ℓ2-norm. For the ℓ1 case, we have the following result.
Theorem 3.1 (ℓ1 Metric Nearness). Let z = [e; f] and c = [0; 1] be partitioned conformally. If (3.4) has a solution, then there exists a λ_0 > 0 such that for all λ ≥ λ_0,

$\operatorname{argmin}_{z \in Z} \|z + \lambda^{-1} c\|_2 \;=\; \operatorname{argmin}_{z \in Z^*} \|z\|_2$,    (3.6)

where Z is the feasible set for (3.4) and Z* is the set of optimal solutions to (3.4). The
minimizer of (3.6) is unique.
Theorem 3.1 follows from a result of Mangasarian [9, Theorem 2.1-a-i]. A similar theorem
may be stated for the ℓ∞ case.
The QP (3.6) can be solved using an augmented triangle-fixing algorithm, since the majority of the constraints in (3.6) are triangle inequalities. As in the ℓ2 case, the triangle
constraints are enforced using (3.3). Each remaining constraint is enforced by computing
an orthogonal projection onto the corresponding halfspace. We refer the reader to [5] for
the details.
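For contrast, the LP (3.4) can also be handed to an off-the-shelf solver. The sketch below, assuming scipy.optimize.linprog is available, builds the constraint matrix explicitly; with 3·C(n,3) triangle rows this is workable only for very small n, which is precisely the blow-up that triangle fixing avoids.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def l1_metric_nearness_lp(D):
    """Solve the l1 MN problem via the LP (3.4) with a generic solver.
    Only sensible for very small n: there are 3*C(n,3) triangle rows."""
    n = D.shape[0]
    pairs = list(combinations(range(n), 2))
    col = {p: t for t, p in enumerate(pairs)}
    key = lambda a, b: col[(a, b)] if a < b else col[(b, a)]
    N = len(pairs)
    A_tri, b_tri = [], []
    for i, j, k in combinations(range(n), 3):
        # one row per triangle inequality: e_long - e_s1 - e_s2 <= b
        for long_, s1, s2 in (((i, j), (i, k), (k, j)),
                              ((j, k), (j, i), (i, k)),
                              ((k, i), (k, j), (j, i))):
            row = np.zeros(2 * N)          # variables are [e; f]
            row[key(*long_)] = 1.0
            row[key(*s1)] = -1.0
            row[key(*s2)] = -1.0
            A_tri.append(row)
            b_tri.append(D[s1] + D[s2] - D[long_])
    I = np.eye(N)
    A_abs = np.block([[I, -I], [-I, -I]])  # e - f <= 0 and -e - f <= 0
    A_ub = np.vstack([np.array(A_tri), A_abs])
    b_ub = np.concatenate([np.array(b_tri), np.zeros(2 * N)])
    c = np.concatenate([np.zeros(N), np.ones(N)])   # minimize 1^T f
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * N))
    M = D.astype(float).copy()
    for (i, j), t in col.items():
        M[i, j] += res.x[t]
        M[j, i] = M[i, j]
    return M
```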
3.3 MN for ℓp norms (1 < p < ∞)
Next, we explain how to use triangle fixing to solve the MN problem for the remaining ℓp
norms, 1 < p < ∞. The computational costs are somewhat higher because the algorithm
requires solving a nonlinear equation. The problem may be phrased as

$\min_{e} \; \tfrac{1}{p}\|e\|_p^p$  subject to  $Ae \leq b$.    (3.7)

To enforce a triangle constraint optimally in the ℓp norm, we need to compute a projection
of the vector e onto the constraint set. Define $\varphi(x) = \tfrac{1}{p}\|x\|_p^p$, and note that $(\nabla\varphi(x))_i = \operatorname{sgn}(x_i)\,|x_i|^{p-1}$. The projection of e onto the (i, j, k) violating constraint is the solution of

$\min_{e'} \; \varphi(e') - \varphi(e) - \langle \nabla\varphi(e), \, e' - e \rangle$  subject to  $a_{ijk}^T e' = b_{ijk}$,

where a_ijk is the row of the constraint matrix corresponding to the triangle inequality
(i, j, k). The projection may be determined by solving

$\nabla\varphi(e') = \nabla\varphi(e) + \theta_{ijk}\, a_{ijk}$  so that  $a_{ijk}^T e' = b_{ijk}$.    (3.8)

Since a_ijk has only three nonzero entries, we see that e only needs to be updated in three
components. Therefore, in Algorithm 3.1 we may replace (*) by an appropriate numerical
computation of the parameter θ_ijk and replace (**) by the computation of the new value
of e. Further details are available in [5].
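For a single triangle, (3.8) reduces to a monotone scalar equation in θ_ijk, so any one-dimensional root finder will do. The sketch below is our illustration, assuming scipy is available; for p = 2 it reproduces the closed-form update (3.3).

```python
import numpy as np
from scipy.optimize import brentq

def lp_triangle_fix(e_ij, e_jk, e_ki, b, p):
    """Solve (3.8) for one triangle under the l_p norm, 1 < p < inf.
    Returns the projected values (e'_ij, e'_jk, e'_ki)."""
    g = lambda t: np.sign(t) * np.abs(t) ** (p - 1)           # (grad phi)(t)
    ginv = lambda y: np.sign(y) * np.abs(y) ** (1.0 / (p - 1))
    a = np.array([1.0, -1.0, -1.0])                           # constraint row a_ijk
    grad = g(np.array([e_ij, e_jk, e_ki], dtype=float))
    resid = lambda th: float(a @ ginv(grad + th * a) - b)
    if resid(0.0) <= 0.0:           # inequality already satisfied
        return (e_ij, e_jk, e_ki)
    # resid is increasing in theta and positive at 0 on a violated
    # triangle, so the root lies at some theta < 0; expand the bracket
    lo = -1.0
    while resid(lo) > 0.0:
        lo *= 2.0
    theta = brentq(resid, lo, 0.0)
    return tuple(ginv(grad + theta * a))
```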
4 Applications and Experiments
Replacing a general graph (dissimilarity matrix) by a metric graph (distance matrix) can
enable us to use efficient approximation algorithms for NP-hard graph problems (e.g., MAX-CUT clustering) that have guaranteed error for metric data; for example, see [7]. The error
from MN will carry over to the graph problem, while retaining the bounds on the total error
incurred. As an example, constant-factor approximation algorithms for MAX-CUT exist
for metric graphs [3] and can be used for clustering applications. See [4] for more details.
Applications that use dissimilarity values, such as clustering, classification, searching, and
indexing, could potentially be sped up if the data is metric. MN is a natural candidate for
enforcing metric properties on the data to permit these speedups.
We were originally motivated to formulate and solve MN by a problem that arose in connection with biological databases [13]. This problem involves approximating mPAM matrices,
which are a derivative of mutation probability matrices [2] that arise in protein sequencing.
They represent a certain measure of dissimilarity for an application in protein sequencing.
Owing to the manner in which these matrices are formed, they tend not to be distance matrices. Query operations in biological databases have the potential to be dramatically sped
up if the data were metric (using a metric based indexing scheme). Thus, one approach is
to find the nearest distance matrix to each mPAM matrix and use that approximation in the
metric based indexing scheme.
We approximated various mPAM matrices by their nearest distance matrices. The relative
errors of the approximations, ‖D − M‖/‖D‖, are reported in Table 1.

Table 1: Relative errors for the mPAM dataset (ℓ1, ℓ2, and ℓ∞ nearness, respectively)

Dataset     ‖D−M‖1/‖D‖1   ‖D−M‖2/‖D‖2   ‖D−M‖∞/‖D‖∞
mPAM50         0.339         0.402         0.278
mPAM100        0.142         0.231         0.206
mPAM150        0.054         0.121         0.151
mPAM250        0.004         0.025         0.042
mPAM300        0.002         0.017         0.056
4.1 Experiments
The MN problem has an input of size N = n(n − 1)/2, and the number of constraints is
roughly N^{3/2}. We ran experiments to ascertain the empirical behavior of the algorithms.
Figure 1 shows log-log plots of the running time of our algorithms for solving the ℓ1 and
ℓ2 Metric Nearness Problems.
[Figure 1 appears here: log-log plots of running time in seconds against input size N, for the ℓ1 MN problem (fitted line y = 1.6x − 6.3) and the ℓ2 MN problem (fitted line y = 1.5x − 6.1).]
Figure 1: Running time for ℓ1 and ℓ2 norm solutions (plots have different scales).
Note that the time cost appears to be O(N^{3/2}), which
is linear in the number of constraints. The results plotted in the figure were obtained
by executing the algorithms on random dissimilarity matrices. The procedure was halted
when the distance values changed by less than 10^{-3} from one iteration to the next. For both
problems, the results were obtained with a simple MATLAB implementation. Nevertheless,
this basic version outperforms MATLAB's optimization package by one or two orders of
magnitude (depending on the problem), while achieving numerically similar results. A
more sophisticated (C or parallel) implementation could improve the running time even
more, which would allow us to study larger problems.
5 Discussion
In this paper, we have introduced the Metric Nearness Problem, and we have developed algorithms for solving it under ℓp nearness measures. The algorithms proceed by fixing violated
triangles in turn, while introducing correction terms that guide the algorithm to the global optimum. Our experiments suggest that the algorithms require O(N^{3/2}) time, where N is the
total number of distances, so the cost is linear in the number of constraints. An open problem is
to obtain an algorithm with better computational complexity.
Metric Nearness is a rich problem. It can be shown that a special case (allowing only
decreases in the dissimilarities) is identical to the All Pairs Shortest Paths (APSP) problem [10].
Thus one may check whether the N distances satisfy the metric properties in O(APSP) time.
However, we are not aware whether this is a lower bound.
It is also possible to incorporate other types of linear and convex constraints into the Metric
Nearness Problem. Some other possibilities include putting box constraints on the distances
(ℓ ≤ m ≤ u), allowing λ-triangle inequalities (m_ij ≤ λ_1 m_ik + λ_2 m_kj), or enforcing order
constraints (d_ij < d_kl implies m_ij < m_kl).
We plan to further investigate the application of MN to other problems in data mining,
machine learning, and database query retrieval.
Acknowledgments
This research was supported by NSF grant CCF-0431257, NSF Career Award ACI0093404, and NSF-ITR award IIS-0325116.
References
[1] Y. Censor and S. A. Zenios. Parallel Optimization: Theory, Algorithms, and Applications. Numerical Mathematics and Scientific Computation. Oxford University Press, 1997.
[2] M. O. Dayhoff, R. M. Schwarz, and B. C. Orcutt. A model of evolutionary change in
proteins. Atlas of Protein Sequence and Structure, 5(Suppl. 3), 1978.
[3] W. F. de la Vega and C. Kenyon. A randomized approximation scheme for Metric
MAX-CUT. J. Comput. Sys. and Sci., 63:531–541, 2001.
[4] I. S. Dhillon, S. Sra, and J. A. Tropp. The Metric Nearness Problems with Applications. Tech. Rep. TR-03-23, Comp. Sci., Univ. of Texas at Austin, 2003.
[5] I. S. Dhillon, S. Sra, and J. A. Tropp. Triangle Fixing Algorithms for the Metric
Nearness Problem. Tech. Rep. TR-04-22, Comp. Sci., Univ. of Texas at Austin, 2004.
[6] N. J. Higham. Matrix nearness problems and applications. In M. J. C. Gower and
S. Barnett, editors, Applications of Matrix Theory, pages 1–27. Oxford University
Press, 1989.
[7] P. Indyk. Sublinear time algorithms for metric space problems. In 31st Symposium
on Theory of Computing, pages 428–434, 1999.
[8] J. B. Kruskal and M. Wish. Multidimensional Scaling. Number 07-011. Sage Publications, 1978. Series: Quantitative Applications in the Social Sciences.
[9] O. L. Mangasarian. Normal solutions of linear programs. Mathematical Programming
Study, 22:206–216, 1984.
[10] C. G. Plaxton. Personal communication, 2003–2004.
[11] V. Roth, J. Laub, J. M. Buhmann, and K.-R. Müller. Going metric: Denoising pairwise data. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural
Information Processing Systems (NIPS) 15, 2003.
[12] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance metric learning, with
application to clustering with side-information. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems (NIPS) 15, 2003.
[13] W. Xu and D. P. Miranker. A metric model of amino acid substitution. Bioinformatics,
20(0):1–8, 2004.
Economic Properties of Social Networks
Sham M. Kakade
Michael Kearns
Luis E. Ortiz
Robin Pemantle
Siddharth Suri
University of Pennsylvania
Philadelphia, PA 19104
Abstract
We examine the marriage of recent probabilistic generative models
for social networks with classical frameworks from mathematical economics. We are particularly interested in how the statistical structure of
such networks influences global economic quantities such as price variation. Our findings are a mixture of formal analysis, simulation, and
experiments on an international trade data set from the United Nations.
1 Introduction
There is a long history of research in economics on mathematical models for exchange markets, and the existence and properties of their equilibria. The work of Arrow and Debreu
[1954], who established equilibrium existence in a very general commodities exchange
model, was certainly one of the high points of this continuing line of inquiry. The origins
of the field go back at least to Fisher [1891].
While there has been relatively recent interest in network models for interaction in economics (see Jackson [2003] for a good review), it was only quite recently that a network or
graph-theoretic model that generalizes the classical Arrow-Debreu and Fisher models was
introduced (Kakade et al. [2004]). In this model, the edges in a network over individual
consumers (for example) represent those pairs of consumers that can engage in direct trade.
As such, the model captures the many real-world settings that can give rise to limitations on
the trading partners of individuals (regulatory restrictions, social connections, embargoes,
and so on). In addition, variations in the price of a good can arise due to the topology of
the network: certain individuals may be relatively favored or cursed by their position in the
graph.
In a parallel development over the last decade or so, there has been an explosion of interest
in what is broadly called social network theory: the study of apparently "universal"
properties of natural networks (such as small diameter, local clustering of edges, and heavy-tailed distribution of degree), and statistical generative models that explain such properties.
When viewed as economic networks, the assumptions of individual rationality in these
works are usually either non-existent, or quite weak, compared to the Arrow-Debreu or
Fisher models.
In this paper we examine classical economic exchange models in the modern light of social
network theory. We are particularly interested in the interaction between the statistical
structure of the underlying network and the variation in prices at equilibrium. We quantify
the intuition that increased levels of connectivity in the network result in the equalization of
prices, and establish that certain generative models (such as the the preferential attachment
model of network formation (Barabasi and Albert [1999]) are capable of explaining the
heavy-tailed distribution of wealth first observed by Pareto. Closely related work to ours is
that of Kranton and Minehart [2001], which also considers networks of buyers and sellers,
though they focus more on the economics of network formation.
Many of our results are based on a powerful new local approximation method for global
equilibrium prices: we show that in the preferential attachment model, prices computed
from only local regions of a network yield strikingly good estimates of the global prices.
We exploit this method theoretically and computationally. Our study concludes with an
application of our model to United Nations international trade data.
2 Market Economies on Networks
We first describe the standard Fisher model, which consists of a set of consumers and a set
of goods. We assume that there are g_j units of good j in the market, and that each good j is
to be sold at some price p_j. Each consumer i has a cash endowment e_i, to be used to purchase
goods in a manner that maximizes the consumers' utility. In this paper we make the well-studied assumption that the utility function of each consumer is linear in the amount of
goods consumed (see Gale [1960]), and leave the more general case to future research. Let
u_ij ≥ 0 denote the utility derived by i on obtaining a single unit of good j. If i consumes
x_ij units of good j, then the utility i derives is Σ_j u_ij x_ij.
A set of prices {p_j} and consumption plans {x_ij} constitutes an equilibrium if the following two conditions hold:
1. The market clears, i.e., supply equals demand. More formally, for each j, Σ_i x_ij = g_j.
2. For each consumer i, the consumption plan {x_ij}_j is optimal. By this we mean that
the consumption plan maximizes the linear utility function of i, subject to the constraint
that the total cost of the goods purchased by i is not more than the endowment e_i.
It turns out that such an equilibrium always exists if each good j has a consumer that
derives nonzero utility for good j, that is, u_ij > 0 for some i (see Gale [1960]). Furthermore,
the equilibrium prices are unique.
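For intuition, equilibria of the linear Fisher model can be computed by convex programming via the Eisenberg-Gale program (a standard result, not established in this paper): maximize the endowment-weighted sum of log utilities subject to supply, and read the equilibrium prices off the dual variables of the supply constraints. The sketch below assumes the cvxpy package is available; the graphical model introduced below is recovered by setting u_ij = 0 for missing buyer-seller edges.

```python
import numpy as np
import cvxpy as cp

def fisher_equilibrium(U, e, g):
    """Equilibrium of a linear Fisher market by the Eisenberg-Gale convex
    program: maximize sum_i e_i log u_i subject to supply.  Seller prices
    are the dual variables of the supply constraints."""
    nb, ng = U.shape                         # buyers x goods
    X = cp.Variable((nb, ng), nonneg=True)   # consumption plan x_ij
    utilities = cp.sum(cp.multiply(U, X), axis=1)
    supply = cp.sum(X, axis=0) <= g          # market-clearing constraint
    problem = cp.Problem(cp.Maximize(e @ cp.log(utilities)), [supply])
    problem.solve()
    return X.value, supply.dual_value        # allocations and prices p_j
```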
We now consider the graphical Fisher model, so named because of the introduction of a
graph-theoretic or network structure to exchange. In the basic Fisher model, we implicitly
assume that all goods are available in a centralized exchange, and all consumers have equal
access to these goods. In the graphical Fisher model, we desire to capture the fact that each
good may have multiple vendors or sellers, and that individual buyers may have access
only to some, but not all, of these sellers. There are innumerable settings where such asymmetries arise. Examples include the fact that consumers generally purchase their groceries
from local markets, that social connections play a major role in business transactions, and
that securities regulations prevent certain pairs of parties from engaging in stock trades.
Without loss of generality, we assume that each seller j sells only one of the available
goods. (Each good may have multiple competing sellers.) Let G be a bipartite graph,
where buyers and sellers are represented as vertices, and all edges are between a buyerseller pair. The semantics of the graph are as follows: if there is an edge from buyer i to
seller j, then buyer i is permitted to purchase from seller j. Note that if buyer i is connected
to two sellers of the same good, he will always choose to purchase from the cheaper source,
since his utility is identical for both sellers (they sell the same good).
The graphical Fisher model is a special case of a more general and recently introduced
framework (Kakade et al. [2004]). One of the most interesting features of this model is the
fact that at equilibrium, significant price variations can appear solely due to structural properties of the underlying network. We now describe some generative models of economies.
3 Generative Models for Social Networks
For simplicity, in the sequel we will consider economies in which the numbers of buyers
and sellers are equal. We will also restrict attention to the case in which all sellers sell the
same good.¹
¹From a mathematical and computational standpoint, this restriction is rather weak: when considered in the graphical setting, it already contains the setting of multiple goods with binary utility
values, since additional goods can be encoded in the network structure.
The simplest generative model for the bipartite graph G might be the random graph, in
which each edge between a buyer i and a seller j is included independently with probability
p. This is simply the bipartite version of the classical Erdos-Renyi model (Bollobas [2001]).
Many researchers have sought more realistic models of social network formation, in order
to explain observed phenomena such as heavy-tailed degree distributions. We now describe
a slight variant of the preferential attachment model (see Mitzenmacher [2003]) for the case
of a bipartite graph. We start with a graph in which one buyer is connected to one seller. At
each time step, we add one buyer and one seller as follows. With probability α, the buyer
is connected to a seller in the existing graph chosen uniformly at random; with probability
1 − α, the buyer is connected to a seller chosen in proportion to the degree of the seller
(preferential attachment). Simultaneously, a seller is attached in a symmetric manner: with
probability α the seller is connected to a buyer chosen uniformly at random, and with
probability 1 − α the seller is connected under preferential attachment. The parameter α in
this model thus allows us to move between a pure preferential attachment model (α = 0)
and a model closer to classical random graph theory (α = 1), in which new parties are
connected to random extant parties.²
²We note that α = 1 still does not exactly produce the Erdos-Renyi model due to the incremental
nature of the network generation: early buyers and sellers are still more likely to have higher degree.
Note that the above model always produces trees, since the degree of a new party is
1 upon its introduction to the graph. We thus also consider a variant of this model in
which, at each time step, a new seller is still attached to exactly one extant buyer, while
each new buyer is connected to ν > 1 extant sellers. The procedure for edge selection is as
outlined above, with the modification that the ν new edges of the buyer are added without
replacement, meaning that we resample so that each buyer is attached to exactly ν
distinct sellers. In a forthcoming long version, we provide results on the statistics of these
networks.
The main purpose of the introduction of ν is to have a model capable of generating highly
cyclical (non-tree) networks, while having just a single parameter that can "tune" the asymmetry between the (number of) opportunities for buyers and sellers. There are also economic motivations: it is natural to imagine that new sellers of the good arise only upon
obtaining their first customer, but that new buyers arrive already aware of several alternative sellers.
In the sequel, we shall refer to the generative model just described as the bipartite (α, ν)-model. We will use n to denote the number of buyers and the number of sellers, so the
network has 2n vertices. Figure 1 and its caption provide an example of a network generated by this model, along with a discussion of its equilibrium properties.
Economics of the Network: Theory
We now summarize our theoretical findings. The proofs will be provided in a forthcoming
long version. We first present a rather intuitive ?frontier? theorem, which implies a scheme
in which we can find upper and lower bounds on the equilibrium prices using only local
computations. To state the theorem we require some definitions. First, note that any subset
V 0 of buyers and sellers defines a natural induced economy, where the induced graph G 0
1
From a mathematical and computational standpoint, this restriction is rather weak: when considered in the graphical setting, it already contains the setting of multiple goods with binary utility
values, since additional goods can be encoded in the network structure.
2
We note that ? = 1 still does not exactly produce the Erdos-Renyi model due to the incremental
nature of the network generation: early buyers and sellers are still more likely to have higher degree.
[Figure 1 appears here: a bipartite network of buyers B0–B19 and sellers S0–S19, with each seller labeled by its equilibrium price (e.g., S0: 1.50, S6: 0.67, S9: 0.75, S13: 1.00).]
Figure 1: Sample network generated by the bipartite (α = 0, ν = 2)-model. Buyers and sellers
are labeled by "B" or "S" respectively, followed by an index indicating the time step at which they
were introduced to the network. The solid edges in the figure show the exchange subgraph, i.e., those
pairs of buyers and sellers who actually exchange currency and goods at equilibrium. The dotted
edges are edges of the network that are unused at equilibrium because they represent inferior prices
for the buyers, while the dashed edges are edges of the network that have competitive prices but are
unused at equilibrium due to the specific consumption plan required for market clearance. Each seller
is labeled with the price it charges at equilibrium. The example exhibits non-trivial price variation
(from 2.00 down to 0.33 per unit good). Note that while there appears to be a correlation between
seller degree and price, it is far from a deterministic relation, a topic we shall examine later.
First, note that any subset V′ of buyers and sellers defines a natural induced economy, where the induced graph G′
consists of all edges between buyers and sellers in V′ that are also in G. We say that G′
has a buyer (respectively, seller) frontier if, on every (simple) path in G from a node in V′
to a node outside of V′, the last node in V′ on this path is a buyer (respectively, seller).
Theorem 1 (Frontier Bound) If V′ has a subgraph G′ with a seller (respectively, buyer)
frontier, then the equilibrium price of any good j in the induced economy on V′ is a lower
bound (respectively, upper bound) on the equilibrium price of j in G.
Theorem 1 implies a simple price upper bound: the price commanded by any seller j is
bounded by its degree d. Although the same upper bound can be seen from first principles,
it is instructive to apply Theorem 1. Let G′ be the immediate neighborhood of j (which is j
and its d buyers); then the equilibrium price in G′ is just d, since all d buyers are forced to
buy from seller j. This provides an upper bound since G′ has a buyer frontier. Since it can
be shown that the degree distribution obeys a power law in the bipartite (α, ν)-model, we
have an upper bound on the cumulative price distribution. We use λ = (1 − α)ν/(1 + ν).
Theorem 2 In the bipartite (α, ν)-model, the proportion of sellers with price greater than
w is O(w^{−1/λ}). For example, if α = 0 (pure preferential attachment) and ν = 1, the
proportion falls off as 1/w².
We do not yet have such a closed-form lower bound on the cumulative price distribution.
However, as we shall see in Section 5, the price distributions seen in large simulation results
do indeed show power-law behavior. Interestingly, this occurs despite the fact that degree
is a poor predictor of individual seller price.
[Figure 2 appears here, four panels: (1) log-log cumulative degree and wealth distributions (fraction of sellers with degree/wealth above a given value); (2) average error of the local equilibrium computations against n, one curve per radius k = 1, 2, 3, 4; (3) log-log maximum-to-minimum wealth against n, one curve per ν = 1, 2, 3, 4; (4) scatter plot of maximum-to-minimum wealth against α.]
Figure 2: See text for descriptions.
Another quantity of interest is what we might call price variation: the ratio of the price
of the richest seller to that of the poorest seller. The following theorem addresses this.
Theorem 3 In the bipartite (α, ν)-model, if α(ν² + 1) < 1, then the ratio of the maximum
price to the minimum price scales with the number of buyers n as Ω(n^{(2 − α(ν² + 1))/(1 + ν)}). For the
simplest case in which α = 0 and ν = 1, this lower bound is just Ω(n).
We conclude our theoretical results with a remark on the price variation in the Erdos-Renyi
(random graph) model. First, let us present a condition for there to be no price variation.
Theorem 4 A necessary and sufficient condition for there to be no price variation, i.e., for
all prices to equal 1, is that for all sets of vertices S, |N(S)| ≥ |S|, where N(S) is
the set of vertices connected by an edge to some vertex in S.
This can be viewed as an extremely weak version of the standard expansion properties well-studied in graph theory and theoretical computer science: rather than demanding that
neighbor sets be strictly larger, we simply ask that they not be smaller. One can further show
that for large n, the probability that a random graph (for any edge probability p > 0) obeys
this weak expansion property approaches 1. In other words, in the Erdos-Renyi model,
there is no variation in price, in stark contrast to the preferential attachment results.
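The condition of Theorem 4 can be checked in polynomial time: by Hall's theorem, |N(S)| ≥ |S| holds for every buyer set S if and only if the graph has a perfect matching, and with equal numbers of buyers and sellers a perfect matching forces the condition for seller sets (and hence mixed sets) as well. A sketch assuming scipy's matching routine:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

def no_price_variation(B):
    """Check the condition of Theorem 4 for an n x n buyer-seller
    adjacency matrix B.  By Hall's theorem the condition holds if and
    only if the bipartite graph has a perfect matching."""
    match = maximum_bipartite_matching(csr_matrix(B), perm_type='column')
    return bool(np.all(match != -1))
```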
5 Economics of the Network: Simulations
We now present a number of studies on simulated networks (generated according to the
bipartite (α, ν)-model). Equilibrium computations were done using the algorithm of Devanur et al. [2002] (or via the application of this algorithm to local subgraphs). We note that
it was only the recent development of this algorithm and related ones that made possible
the simulations described here (involving hundreds of buyers and sellers in highly cyclical
graphs). However, even the speed of this algorithm limits our experiments to networks with
n = 250 if we wish to run repeated trials to reduce variance. Many of our results suggest
that the local approximation schemes discussed below may be far more effective.
Price and Degree Distributions: The first (leftmost) panel of Figure 2 shows empirical
cumulative price and degree distributions on a log-log scale, averaged over 25 networks
drawn according to the bipartite (α = 0.4, ν = 1)-model with n = 250. The cumulative
degree distribution is shown as a dotted line, where the y-axis gives the fraction of
sellers with degree greater than or equal to d, and the degree d is plotted on the x-axis.
Similarly, the solid curve plots the fraction of sellers with price greater than some value w,
where the price w is shown on the x-axis. The thin solid line has our theoretically predicted
slope of −1/λ = −3.33, which shows that the degree distribution is quite consistent with our
expectations, at least in the tails. Though a natural conjecture from the plots is that the
price of a seller is essentially determined by its degree, below we will see that degree
is a rather poor predictor of an individual seller's price, while more complex (but still local)
properties are extremely accurate predictors.
Perhaps the most interesting finding is that the tail of the price distribution looks linear, i.e.,
it also exhibits power-law behavior. Our theory provided an upper bound, which is precisely
the cumulative degree distribution. We do not yet have a formal lower bound. This plot
(and other experiments we have done) further confirms the robustness of the power-law
behavior in the tail, for α < 1 and ν = 1.
As discussed in the Introduction, Pareto's original observation was that the wealth distribution in societies (wealth corresponds to seller price in our model) obeys a power law, which
has been borne out in many studies of western economies. Since Pareto's original observation, there have been too many explanations of this phenomenon to recount here. However,
to our knowledge, all of these explanations are more dynamic in nature (e.g., a dynamical
system of wealth exchange) and do not capture microscopic properties of individual rationality. Here we have a power-law wealth distribution arising from the combination of certain
natural statistical properties of the network and classical theories of economic equilibrium.
Bounds via Local Computations: Recall that Theorem 1 suggests a scheme by which we
can use only local computations to approximate the global equilibrium price of any seller.
More precisely, for some seller j, consider the subgraph which contains all nodes that are
within distance k of j. In our bipartite setting, for k odd, this subgraph has a buyer frontier,
and for k even, this subgraph has a seller frontier, since we start from a seller. Hence,
the equilibrium computation on the odd-k (respectively, even-k) subgraph provides an
upper (respectively, lower) bound.
This yields a heuristic by which one can examine the equilibrium properties of small
regions of the graph without having to perform expensive global equilibrium computations.
The effectiveness of this heuristic depends on how quickly the upper and lower
bounds tighten. In general, it is possible to create specific graphs in which these bounds
are arbitrarily poor until k is large enough to encompass the entire graph. As we shall see,
the performance of this heuristic is dramatically better in the bipartite (α, ν)-model.
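The extraction step of the heuristic is a plain breadth-first search; the sketch below is our illustration, with a hypothetical adjacency representation in which nodes are tagged by side.

```python
from collections import deque

def k_neighborhood(adj, seller, k):
    """All nodes within graph distance k of a given seller, by BFS.
    Nodes are tagged tuples such as ('s', 3) or ('b', 7); adj maps each
    node to its neighbors.  Solving the induced economy on the result
    gives an upper bound on the seller's price for odd k (buyer frontier)
    and a lower bound for even k (seller frontier), per Theorem 1."""
    root = ('s', seller)
    dist = {root: 0}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if dist[node] == k:
            continue                  # do not expand past the frontier
        for nbr in adj[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return set(dist)
```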
The second panel in Figure 2 shows how rapidly the local equilibrium computations converge to the true global equilibrium prices as a function of k, and also how this convergence is influenced by n. In these experiments, graphs were generated by the bipartite
(α = 0, ν = 1)-model. The value of n is given on the x-axis; the average errors (over
5 trials for each value of k and n) in the local equilibrium computations are given on the
y-axis; and there is a separate plot for each of 4 values of k. It appears that for each value
of k, the quality of the approximation obtained has either mild or no dependence on n.
Furthermore, the regular spacing of the four plots on the logarithmic scaling of the y-axis
establishes that the error of the local approximations decays exponentially
with increasing k: indeed, by examining only neighborhoods of 3 steps from a seller in an
economy of hundreds, we already compute approximations to the global equilibrium
prices with errors in the second decimal place. Since the diameter for n = 250 was often
about 17, this local graph is considerably smaller than the global one. However, for the crudest
approximation k = 1, which corresponds exactly to using seller degree as a proxy for
price, we can see that this performs rather poorly. Computationally, we found that the time
required to do all 250 local computations for k = 3 was about 60% less than the global
computation, and would presumably result in greater savings at much larger values of n.
Parameter Dependencies: We now provide a brief examination of how price variation
depends on the parameters of the bipartite (α, ν)-model. We first experimentally evaluate
the lower bounds provided in Theorem 3. The third panel of Figure 2 shows the maximum
to minimum price as function of n (averaged over 25 trials) on a loglog scale. Each line is
for a fixed value of ν, and the values of ν range from 1 to 4 (α = 0).
Recall from Theorem 3 that our lower bound on the ratio is Ω(n^{2/(1+ν)}) (using α = 0). We conjecture that this is tight, and, if so, the slopes of the lines (in the loglog plot) should be 2/(1+ν), which would be (1, 0.67, 0.5, 0.4). The estimated slopes are somewhat close: (1.02, 0.71, 0.57, 0.53). The overall message is that for small values of ν, price variation
increases rapidly with the economy size n in preferential attachment.
The rightmost panel of Figure 2 is a scatter plot of α vs. the maximum to minimum price in a graph (where n = 250). Here, each point represents the maximum to minimum price ratio in a specific network generated by our model. The circles are for economies generated with ν = 1 and the x's are for economies generated with ν = 3. Here we see that in general, increasing α dramatically decreases price variation (note that the price ratio is plotted on a log scale). This justifies the intuition that as α is increased, more "economic equality" is introduced in the form of less preferential bias in the formation of new edges. Furthermore, the data for ν = 1 shows much larger variation, suggesting that a larger value of ν also has the effect of equalizing buyer opportunities and therefore prices.
6 An Experimental Illustration on International Trade Data
We conclude with a brief experiment exemplifying some of the ideas discussed so far.
The statistics division of the United Nations makes available extensive data sets detailing the amounts of trade between major sovereign nations (see
http://unstats.un.org/unsd/comtrade). We used a data set indicating, for each pair of nations, the total amount of trade in U.S. dollars between that pair in the year 2002.
For our purposes, we would like to extract a discrete network structure from this numerical
data. There are many reasonable ways this could be done; here we describe just one.
For each of the 70 largest nations (in terms of total trade), we include connections from
that nation to each of its top k trading partners, for some integer k > 1. We are thus
including the more ?important? edges for each nation. Note that each nation will have
degree at least k, but as we shall see, some nations will have much higher degree, since
they frequently occur as a top k partner of other nations. To further cast this extracted
network into the bipartite setting we have been considering, we ran many trials in which
each nation is randomly assigned a role as either a buyer or seller (which are symmetric
roles), and then computed the equilibrium prices of the resulting network economy. We
have thus deliberately created an experiment in which the only economic asymmetries are
those determined by the undirected network structure.
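A minimal sketch of this extraction and role-assignment step follows (Python; the `trade` dictionary of pairwise 2002 trade volumes and the tie-breaking by sort order are assumptions, not the authors' code).

```python
import random

def build_top_k_network(trade, nations, k=3):
    """trade[(a, b)]: total 2002 trade in U.S. dollars between nations a, b
    (symmetric lookup). Each nation contributes undirected edges to its
    top k trading partners; a nation may still end up with degree > k by
    appearing as a top partner of many other nations."""
    def volume(a, b):
        return trade.get((a, b), trade.get((b, a), 0.0))
    edges = set()
    for a in nations:
        partners = sorted((b for b in nations if b != a),
                          key=lambda b: volume(a, b), reverse=True)
        edges.update(frozenset((a, b)) for b in partners[:k])
    return edges

def random_roles(nations):
    """One trial: assign each nation a buyer or seller role at random.
    Edges joining two nations of the same role carry no exchange once
    the network is treated as bipartite."""
    buyers = {n for n in nations if random.random() < 0.5}
    return buyers, set(nations) - buyers
```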
The leftmost panel of Figure 3 shows results for 1000 trials under the choice k = 3. The
upper plot shows the average equilibrium price for each nation, where the nations have been
sorted by this average price. We can immediately see that there is dramatic price variation
due to the network structure; while many nations suffer equilibrium prices well under $1,
the most topologically favored nations command prices of $4.42 (U.S.), $4.01 (Germany),
$3.67 (Italy), $3.16 (France), $2.27 (Japan), and $2.09 (Netherlands). The lower plot of the
leftmost panel shows a scatterplot of a nation's degree (x-axis) and its average equilibrium
price (y-axis). We see that while there is generally a monotonic relationship, at smaller
degree values there can be significant price variation (on the order of $0.50).
The center panel of Figure 3 shows identical plots for the choice k = 10. As suggested
by the theory and simulations, increasing the overall connectivity of each party radically
reduces price variation, with the highest price being just $1.10 and the lowest just under $1.
Interestingly, the identities of the nations commanding the highest prices (in order, U.S.,
France, Switzerland, Germany, Italy, Spain, Netherlands) overlap significantly with the
k = 3 case, suggesting a certain robustness in the relative economic status predicted by
the model. The lower plot shows that the relationship between degree and price divides the
population into "have" (degree above 10) and "have not" (degree below 10) components.
Figure 3: See text for descriptions. [Three panels omitted: "UN data network, top 3 links, full set of nations"; "UN data network, top 10 links, full set of nations"; "UN data network, top 3 links, EU collapsed nation set". Each panel plots average price vs. price rank (top) and average price vs. average degree (bottom).]
The preponderance of European nations among the top prices suggests our final experiment, in which we modified the k = 3 network by merging the 15 current members of the
European Union (E.U.) into a single economic nation. This merged vertex has much higher
degree than any of its original constituents and can be viewed as an (extremely) idealized
experiment in the economic power that might be wielded by a truly unified Europe.
The rightmost panel of Figure 3 provides the results, where we show the relative prices and
the degree-price scatterplot for the 35 largest nations. The top prices are now commanded
by the E.U. ($7.18), U.S. ($4.50), Japan ($2.96), Turkey ($1.32), and Singapore ($1.22).
The scatterplot shows a clear example in which the highest degree (held by the U.S.) does
not command the highest price.
Acknowledgments
We are grateful to Tejas Iyer and Vijay Vazirani for providing their software implementing
the Devanur et al. [2002] algorithm. Siddharth Suri acknowledges the support of NIH grant
T32HG0046. Robin Pemantle acknowledges the support of NSF grant DMS-0103635.
References
Kenneth J. Arrow and Gerard Debreu. Existence of an equilibrium for a competitive economy. Econometrica, 22(3):265–290, July 1954.
A. Barabasi and R. Albert. Emergence of scaling in random networks. Science, 286:509–512, 1999.
B. Bollobas. Random Graphs. Cambridge University Press, 2001.
Nikhil R. Devanur, Christos H. Papadimitriou, Amin Saberi, and Vijay V. Vazirani. Market equilibrium via a primal-dual-type algorithm. In FOCS, 2002.
Irving Fisher. PhD thesis, Yale University, 1891.
D. Gale. Theory of Linear Economic Models. McGraw Hill, N.Y., 1960.
Matthew Jackson. A survey of models of network formation: Stability and efficiency. In Group
Formation in Economics: Networks, Clubs and Coalitions. Cambridge University Press, 2003.
S. Kakade, M. Kearns, and L. Ortiz. Graphical economics. COLT, 2004.
R. Kranton and D. Minehart. A theory of buyer-seller networks. American Economic Review, 2001.
M. Mitzenmacher. A brief history of generative models for power law and lognormal distributions.
Internet Mathematics, 1, 2003.
Neural Net and Traditional Classifiers1
William Y. Huang and Richard P. Lippmann
MIT Lincoln Laboratory
Lexington, MA 02173, USA
Abstract. Previous work on nets with continuous-valued inputs led to generative
procedures to construct convex decision regions with two-layer perceptrons (one hidden
layer) and arbitrary decision regions with three-layer perceptrons (two hidden layers).
Here we demonstrate that two-layer perceptron classifiers trained with back propagation
can form both convex and disjoint decision regions. Such classifiers are robust, train
rapidly, and provide good performance with simple decision regions. When complex
decision regions are required, however, convergence time can be excessively long and
performance is often no better than that of k-nearest neighbor classifiers. Three neural
net classifiers are presented that provide more rapid training under such situations.
Two use fixed weights in the first one or two layers and are similar to classifiers that
estimate probability density functions using histograms. A third "feature map classifier"
uses both unsupervised and supervised training. It provides good performance with
little supervised training in situations such as speech recognition where much unlabeled
training data is available. The architecture of this classifier can be used to implement
a neural net k-nearest neighbor classifier.
1. INTRODUCTION
Neural net architectures can be used to construct many different types of classifiers [7]. In particular, multi-layer perceptron classifiers with continuous valued inputs trained with back propagation are robust, often train rapidly, and provide performance similar to that provided by Gaussian classifiers when decision regions are convex
[12,7,5,8]. Generative procedures demonstrate that such classifiers can form convex decision regions with two-layer perceptrons (one hidden layer) and arbitrary decision regions
with three-layer perceptrons (two hidden layers) [7,2,9]. More recent work has demonstrated that two-layer perceptrons can form non-convex and disjoint decision regions.
Examples of hand crafted two-layer networks which generate such decision regions are
presented in this paper along with Monte Carlo simulations where complex decision
regions were generated using back propagation training. These and previous simulations [5,8] demonstrate that convergence time with back propagation can be excessive
when complex decision regions are desired and performance is often no better than that
obtained with k-nearest neighbor classifiers [4]. These results led us to explore other
neural net classifiers that might provide faster convergence. Three classifiers, called "fixed weight," "hypercube," and "feature map" classifiers, were developed and evaluated. All classifiers were tested on illustrative problems with two continuous-valued
inputs and two classes (A and B). A more restricted set of classifiers was tested with
vowel formant data.
2. CAPABILITIES OF TWO LAYER PERCEPTRONS
Multi-layer perceptron classifiers with hard-limiting nonlinearities (node outputs
of 0 or 1) and continuous-valued inputs can form complex decision regions. Simple
constructive proofs demonstrate that a three-layer perceptron (two hidden layers) can
1 This work was sponsored by the Defense Advanced Research Projects Agency and the Department
of the Air Force. The views expressed are those of the authors and do not reflect the policy or position
of the U. S. Government.
© American Institute of Physics 1988
FIG. 1. [Figure omitted: the decision region for class A in the (x1, x2) plane, with labeled hyperplanes b1 through b6.] A two-layer perceptron that forms disjoint decision regions for class A (shaded areas). Connection weights and node offsets are shown on the left. Hyperplanes formed by all hidden nodes are drawn as dashed lines with node labels. Arrows on these lines point to the half plane where the hidden node output is "high".
form arbitrary decision regions and a two-layer perceptron (one hidden layer) can form
single convex decision regions [7,2,9]. Recently, however, it has been demonstrated that
two-layer perceptrons can form decision regions that are not simply convex [14]. Fig. 1,
for example, shows how disjoint decision regions can be generated using a two-layer
perceptron. The two disjoint shaded areas in this Fig. represent the decision region
for class A (output node has a "high" output, y = 1). The remaining area represents
the decision region for class B (output node has a "low" output, y = 0). Nodes in
this Fig. contain hard-limiting nonlinearities. Connection weights and node offsets are
indicated in the left diagram. Ten other complex decision regions formed using two-layer
perceptrons are presented in Fig. 2.
The above examples suggest that two-layer perceptrons can form decision regions
with arbitrary shapes. We, however, know of no general proof of this capability. A
1965 book by Nilsson discusses this issue and contains a proof that two-layer nets can
divide a finite number of points into two arbitrary sets ([10] page 89). This proof
involves separating M points using at most M - 1 parallel hyperplanes formed by first-layer nodes where no hyperplane intersects two or more points. Proving that a given
decision region can be formed in a two-layer net involves testing to determine whether
the Boolean representations at the output of the first layer for all points within the
decision region for class A are linearly separable from the Boolean representations for
class B. One test for linear separability was presented in 1962 [13].
A problem with forming complex decision regions with two-layer perceptrons is that
weights and offsets must be adjusted carefully because they interact extensively to form
decision regions. Fig. 1 illustrates this sensitivity problem. Here it can be seen that
weights to one hidden node form a hyperplane which influences decision regions in
an entire half-plane. For example, small errors in first-layer weights that result in a change in the slopes of hyperplanes b5 and b6 might only slightly extend the A1 region but completely eliminate the A2 region. This interdependence can be eliminated in
three layer perceptrons.
It is possible to train two-layer perceptrons to form complex decision regions using
back propagation and sigmoidal nonlinearities despite weight interactions. Fig. 3, for
example, shows disjoint decision regions formed using back propagation for the problem
of Fig. 1. In this and all other simulations, inputs were presented alternately from
classes A and B and selected from a uniform distribution covering the desired decision
region. In addition, the back propagation rate of descent term, η, was set equal to the momentum gain term, α, with η = α = .01. Small values for η and α were necessary to guarantee convergence for the difficult problems in Fig. 2. Other simulation details are as in [5,8].
FIG. 2. [Ten small panels omitted.] Ten complex decision regions formed by two-layer perceptrons. The numbers assigned to each case are the "case" numbers used in the rest of this paper.
Also shown in Fig. 3 are hyperplanes formed by those first-layer nodes with
the strongest connection weights to the output node. These hyperplanes and weights
are similar to those in the networks created by hand except for sign inversions, the
occurrence of multiple similar hyperplanes formed by two nodes, and the use of node
offsets with values near zero.
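The simulation setup just described can be summarized in a short sketch (Python with NumPy; the samplers `sample_A` and `sample_B` are hypothetical stand-ins for uniform sampling over the desired decision regions of Fig. 2, and sigmoidal units are used as in the back propagation experiments).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_two_layer(sample_A, sample_B, n_hidden=32, eta=0.01, alpha=0.01,
                    n_trials=250_000, seed=0):
    """Online back propagation with momentum for a two-layer perceptron
    (one hidden layer) on 2-D inputs; classes alternate A, B, A, B, ..."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(n_hidden, 3))    # hidden weights + offsets
    W2 = rng.normal(scale=0.1, size=(n_hidden + 1,))  # output weights + offset
    dW1, dW2 = np.zeros_like(W1), np.zeros_like(W2)
    for t in range(n_trials):
        x, target = (sample_A(), 1.0) if t % 2 == 0 else (sample_B(), 0.0)
        xb = np.append(x, 1.0)              # input plus constant bias unit
        h = sigmoid(W1 @ xb)
        hb = np.append(h, 1.0)
        y = sigmoid(W2 @ hb)
        # Backward pass: squared-error gradient through sigmoid units.
        d_out = (y - target) * y * (1.0 - y)
        d_hid = d_out * W2[:-1] * h * (1.0 - h)
        dW2 = -eta * d_out * hb + alpha * dW2          # momentum updates
        dW1 = -eta * np.outer(d_hid, xb) + alpha * dW1
        W2 += dW2
        W1 += dW1
    return W1, W2
```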
3. COMPARATIVE RESULTS OF TWO-LAYERS VS. THREE-LAYERS
Previous results [5,8], as well as the weight interactions mentioned above, suggest
that three-layer perceptrons may be able to form complex decision regions faster with
back propagation than two-layer perceptrons. This was explored using Monte Carlo
simulations for the first nine cases of Fig. 2. All networks have 32 nodes in the first
hidden layer. The number of nodes in the second hidden layer was twice the number
of convex regions needed to form the decision region (2, 4, 6, 4, 6, 6, 8, 6 and 6 for
Cases 1 through 9 respectively). Ten runs were typically averaged together to obtain
a smooth curve of percentage error vs. time (number of training trials) and enough
trials were run (to a limit of 250,000) until the curve appeared to flatten out with little
improvement over time. The error curve was then low-pass filtered to determine the
convergence time. Convergence time was defined as the time when the curve crossed a
value 5 percentage points above the final percentage error. This definition provides a
framework for comparing the convergence time of the different classifiers. It, however, is
not the time after which error rates do not improve. Fig. 4 summarizes results in terms
of convergence time and final percentage error. In those cases with disjoint decision
regions, back propagation sometimes failed to form separate regions after 250,000 trials.
FIG. 3. [Figure omitted.] Decision regions formed using back propagation for Case 2 of Fig. 2. Thick solid lines represent decision boundaries. Dashed lines and arrows have the same meaning as in Fig. 1. Only hyperplanes for hidden nodes with large weights to the output node are shown. Over 300,000 training trials were required to form separate regions.
For example, the two disjoint regions required in Case 2 were never fully separated with a two-layer perceptron but were separated with a three-layer perceptron. This is noted
by the use of filled symbols in Fig. 4.
Fig. 4 shows that there is no significant performance difference between two and
three layer perceptrons when forming complex decision regions using back propagation
training. Both types of classifiers take an excessively long time (> 100,000 trials) to
form complex decision regions. A minor difference is that in Cases 2 and 7 the two-layer
network failed to separate disjoint regions after 250,000 trials whereas the three-layer
network was able to do so. This, however, is not significant in terms of convergence time
and error rate. Problems that are difficult for the two-layer networks are also difficult
for the three-layer networks, and vice versa.
4. ALTERNATIVE CLASSIFIERS
Results presented above and previous results [5,8] demonstrate that multi-layer perceptron classifiers can take very long to converge for complex decision regions. Three
alternative classifiers were studied to determine whether other types of neural net classifiers could provide faster convergence.
4.1. FIXED WEIGHT CLASSIFIERS
Fixed weight classifiers attempt to reduce training time by adapting only weights
between upper layers of multi-layer perceptrons. Weights to the first layer are fixed
before training and remain unchanged. These weights form fixed hyperplanes which
can be used by upper layers to form decision regions. Performance will be good if the
fixed hyperplanes are near the decision region boundaries that are required in a specific
problem. Weights between upper layers are trained using back propagation as described
above. Two methods were used to adjust weights to the first layer. Weights were
adjusted to place hyperplanes randomly or in a grid in the region (-1 < x1, x2 < 10). All decision regions in Fig. 2 fall within this region.
FIG. 4. [Plots omitted.] Percentage error (top) and convergence time (bottom) for Cases 1 through 9 of Fig. 2 for two- and three-layer perceptron classifiers trained using back propagation. Filled symbols indicate that separate disjoint regions were not formed after 250,000 trials.
Hyperplanes formed by first-layer nodes for "fixed random" and "fixed grid" classifiers for Case 2 of Fig. 2 are shown as dashed lines in Fig. 5. Also shown in that figure are decision regions (shaded areas) formed using back propagation to train only the upper network layers. These regions illustrate
how fixed hyperplanes are combined to form decision regions. It can be seen that decision
boundaries form along the available hyperplanes. A good solution is possible for the
fixed grid classifier where desired decision region boundaries are near hyperplanes. The
random grid classifier provides a poor solution because hyperplanes are not near desired
decision boundaries. The performance of a fixed weight classifier depends both on the
placement of hyperplanes and on the number of hyperplanes provided.
4.2. HYPERCUBE CLASSIFIER
Many traditional classifiers estimate probability density functions of input variables
for different classes using histogram techniques [4]. Hypercube classifiers use this technique by fixing weights in the first two layers to break the input space into hypercubes
(squares in the case of two inputs). Hypercube classifiers are similar to fixed weight
classifiers, except weights to the first two layers are fixed, and only weights to output
nodes are trained. Hypercube classifiers are also similar in structure to the CMAC
model described by Albus [1]. The output of a second layer node is "high" only if the
input is in the hypercube corresponding to that node. This is illustrated in Fig. 6 for a
network with two inputs.
The top layer of a hypercube classifier can be trained using back propagation. A
maximum likelihood approach, however, suggests a simpler training algorithm which
consists of counting. The output of second layer node Hi is connected to the output
node corresponding to that class with greatest frequency of occurrence of training inputs
in hypercube H_i. That is, if a sample falls in hypercube H_i, then it is classified as class θ* where

N_{i,θ*} > N_{i,θ}  for all θ ≠ θ*    (1)

In this equation, N_{i,θ} is the number of training tokens in hypercube H_i which belong to class θ. This will be called maximum likelihood (ML) training. It can be implemented by connecting second-layer node H_i only to the output node corresponding to the class θ* in Eq. (1). In all simulations hypercubes covered the area (-1 < x1, x2 < 10).
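The counting form of ML training in Eq. (1) amounts to building a class histogram over hypercubes. A minimal sketch follows (Python; the grid covering -1 < x1, x2 < 10 follows the text, while the bin count and the integer class labels are assumptions).

```python
import numpy as np

class HypercubeClassifier:
    """Histogram classifier over a fixed grid of squares covering
    -1 < x1, x2 < 10, trained by counting as in Eq. (1)."""

    def __init__(self, n_classes, n_bins=11, lo=-1.0, hi=10.0):
        self.counts = np.zeros((n_bins, n_bins, n_classes), dtype=int)
        self.lo, self.hi, self.n_bins = lo, hi, n_bins

    def _bin(self, x):
        scale = self.n_bins / (self.hi - self.lo)
        clip = lambda v: min(max(v, 0), self.n_bins - 1)
        return clip(int((x[0] - self.lo) * scale)), clip(int((x[1] - self.lo) * scale))

    def train(self, samples, labels):
        for x, theta in zip(samples, labels):
            i, j = self._bin(x)
            self.counts[i, j, theta] += 1   # accumulate N_{i,theta}

    def classify(self, x):
        i, j = self._bin(x)
        # theta* maximizing N_{i,theta}; empty squares default to class 0.
        return int(np.argmax(self.counts[i, j]))
```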
FIG. 5. [Two panels omitted: GRID and RANDOM.] Decision regions formed with "fixed random" and "fixed grid" classifiers for Case 2 from Fig. 2 using back propagation training. Lines shown are hyperplanes formed by the first-layer nodes. Shaded areas represent the decision region for class A.
FIG. 6. [Diagram omitted: four bins created by fixed layers over inputs x1 and x2, with a trained output layer above two fixed layers.] A hypercube classifier (left) is a three-layer perceptron with fixed weights to the first two layers, and trainable weights to output nodes. Weights are initialized such that outputs of nodes H1 through H4 (left) are "high" only when the input is in the corresponding hypercube (right).
FIG. 7. Feature map classifier. [Block diagram omitted; from bottom to top: input; unsupervised Kohonen feature map learning (calculate correlation to stored exemplars; select top k exemplars); supervised associative learning (select class with majority in top k); output (only one high).]
4.3. FEATURE MAP CLASSIFIER
In many speech and image classification problems a large quantity of unlabeled
training data can be obtained, but little labeled data is available. In such situations
unsupervised training with unlabeled training data can substantially reduce the amount
of supervised training required [3]. The feature map classifier shown in Fig. 7 uses combined supervised/unsupervised training, and is designed for such problems. It is similar
to histogram classifiers used in discrete observation hidden Markov models [11] and the
classifier used in [6]. The first layer of this classifier forms a feature map using a self
organizing clustering algorithm described by Kohonen [6]. In all simulations reported in
this paper 10,000 trials of unsupervised training were used. After unsupervised training, first-layer feature nodes sample the input space with node density proportional to
the combined probability density of all classes. First layer feature map nodes perform a
function similar to that of second layer hypercube nodes except each node has maximum
output for input regions that are more general than hypercubes and only the output of
the node with a maximum output is fed to the output nodes. Weights to output nodes
are trained with supervision after the first layer has been trained. Back propagation, or
maximum likelihood training can be used. Maximum likelihood training requires N_{i,θ} (Eq. 1) to be the number of times first-layer node i has maximum output for inputs from class θ. In addition, during classification, the outputs of nodes with N_{i,θ} = 0 for all θ (untrained nodes) are not considered when the first-layer node with the maximum
output is selected. The network architecture of a feature map classifier can be used
to implement a k-nearest neighbor classifier. In this case, the feedback connections in
Fig. 7 (large circular summing nodes and triangular integrators) used to select those
k nodes with the maximum outputs must be slightly modified. K is 1 for a feature
map classifier and must be adjusted to the desired value of k for a k-nearest neighbor
classifier.
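A sketch of the combined unsupervised/supervised scheme follows (Python with NumPy). For simplicity the unsupervised stage below is a plain winner-take-all exemplar update without a neighborhood function, and winners are chosen by Euclidean distance rather than correlation; both are simplifications of the Kohonen algorithm of [6].

```python
import numpy as np

def train_feature_map(x_unlabeled, x_labeled, labels, n_classes,
                      n_nodes=100, n_unsup=10_000, lr=0.05, seed=0):
    """Two-stage training: unsupervised exemplar placement followed by
    maximum likelihood counting of winning nodes per class."""
    rng = np.random.default_rng(seed)
    W = x_unlabeled[rng.integers(len(x_unlabeled), size=n_nodes)].copy()
    for _ in range(n_unsup):                  # unsupervised stage
        x = x_unlabeled[rng.integers(len(x_unlabeled))]
        win = np.argmin(((W - x) ** 2).sum(axis=1))
        W[win] += lr * (x - W[win])           # move winner toward the input
    N = np.zeros((n_nodes, n_classes), dtype=int)
    for x, theta in zip(x_labeled, labels):   # supervised (counting) stage
        win = np.argmin(((W - x) ** 2).sum(axis=1))
        N[win, theta] += 1                    # N_{i,theta} of Eq. (1)
    return W, N

def classify(W, N, x):
    """Pick the closest trained node, then its majority class; nodes with
    N = 0 for all classes (untrained nodes) are skipped."""
    for node in np.argsort(((W - x) ** 2).sum(axis=1)):
        if N[node].sum() > 0:
            return int(np.argmax(N[node]))
    return 0
```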
5. COMPARISON BETWEEN CLASSIFIERS
The results of Monte Carlo simulations using all classifiers for Case 2 are shown in
Fig. 8. Error rates and convergence times were determined as in Section 3.
FIG. 8. [Bar charts omitted: percent correct and convergence time for fixed weight, conventional, hypercube, and feature map classifiers versus number of hidden nodes, with k-nearest neighbor and Gaussian classifiers for reference.] Comparative performance of classifiers for Case 2. Training time of the feature map classifiers does not include the 10,000 unsupervised training trials.
All alternative classifiers had shorter convergence times than multi-layer perceptron classifiers
trained with back propagation. The feature map classifier provided best performance.
With 1,600 nodes, its error rate was similar to that of the k-nearest neighbor classifiers
but it required fewer than 100 supervised training tokens. The larger fixed weight and
hypercube classifiers performed well but required more supervised training than the
feature map classifiers. These classifiers will work well when the combined probability
density function of all classes varies smoothly and the domain where this function is
non-zero is known. In this case weights and offsets can be set such that hyperplanes and
hypercubes cover the domain and provide good performance. The feature map classifier
automatically covers the domain. Fixed weight "random" classifiers performed substantially worse than fixed weight "grid" classifiers. Back propagation training (BP) was
generally much slower than maximum likelihood training (ML).
6. VOWEL CLASSIFICATION
Multi layer perceptron, feature map, and traditional classifiers were tested with
vowel formant data from Peterson and Barney [11]. These data had been obtained
by spectrographic analysis of vowels in /hVd/ context spoken by 67 men, women and
children. First and second formant data of ten vowels was split into two sets, resulting
in a total of 338 training tokens and 333 testing tokens. Fig. 9 shows the test data
and the decision regions formed by a two-layer perceptron classifier trained with back
propagation. The performance of classifiers is presented in Table I. All classifiers had
similar error rates. The feature map classifier with only 100 nodes required less than 50
supervised training tokens (5 samples per vowel class) for convergence. The perceptron
classifier trained with back propagation required more than 50,000 training tokens. The
first stage of the feature map classifier and the multi-layer perceptron classifier were
trained by randomly selecting entries from the 338 training tokens after labels had been
removed and using tokens repetitively.
FIG. 9. [Scatter plot omitted: first formant F1 (Hz) vs. second formant F2 (Hz); legend vowels: head, hid, hod, had, hawed, heard, heed, hud, who'd, hood.] Decision regions formed by a two-layer network using BP after 200,000 training tokens from Peterson's steady-state vowel data [Peterson, 1952]. Also shown are samples of the testing set. The legend shows an example of the pronunciation of the 10 vowels and the error within each vowel.
TABLE I. Performance of classifiers on steady-state vowel data. [Table entries omitted; the columns were Algorithm, Training Tokens, and % Error.]
7. CONCLUSIONS
Neural net architectures form a flexible framework that can be used to construct
many different types of classifiers. These include Gaussian, k-nearest neighbor, and
multi-layer perceptron classifiers as well as classifiers such as the feature map classifier
which use unsupervised training. Here we first demonstrated that two-layer perceptrons
(one hidden layer) can form non-convex and disjoint decision regions. Back propagation
training, however, can be extremely slow when forming complex decision regions with
multi-layer perceptrons. Alternative classifiers were thus developed and tested. All
provided faster training and many provided improved performance. Two were similar to
traditional classifiers. One (hypercube classifier) can be used to implement a histogram
classifier, and another (feature map classifier) can be used to implement a k-nearest
neighbor classifier. The feature map classifier provided best overall performance. It
used combined supervised/unsupervised training and attained the same error rate as a
k-nearest neighbor classifier, but with fewer supervised training tokens. Furthermore,
it required fewer nodes than a k-nearest neighbor classifier.
REFERENCES
[1] J. S. Albus, Brains, Behavior, and Robotics. McGraw-Hill, Peterborough, N.H., 1981.
[2] D. J. Burr, "A neural network digit recognizer," in Proceedings of the International Conference
on Systems, Man, and Cybernetics, IEEE, 1986.
[3] D. B. Cooper and J. H. Freeman, "On the asymptotic improvement in the outcome of supervised
learning provided by additional nonsupervised learning," IEEE Transactions on Computers,
vol. C-19, pp. 1055-63, November 1970.
[4] R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis. John Wiley & Sons, New
York, 1973.
[5] W. Y. Huang and R. P. Lippmann, "Comparisons between conventional and neural net classifiers,"
in 1st International Conference on Neural Network, IEEE, June 1987.
[6] T. Kohonen, K. Makisara, and T. Saramaki, "Phonotopic maps - insightful representation of
phonological features for speech recognition," in Proceedings of the 7th International Conference on Pattern Recognition, IEEE, August 1984.
[7] R. P. Lippmann, "An introduction to computing with neural nets," IEEE ASSP Magazine, vol. 4,
pp. 4-22, April 1987.
[8] R. P. Lippmann and B. Gold, "Neural classifiers useful for speech recognition," in 1st International
Conference on Neural Network, IEEE, June 1987.
[9] I. D. Longstaff and J. F. Cross, "A pattern recognition approach to understanding the multi-layer
perceptron," Mem. 3936, Royal Signals and Radar Establishment, July 1986.
[10] N. J. Nilsson, Learning Machines. McGraw Hill, N.Y., 1965.
[11] T. Parsons, Voice and Speech Processing. McGraw-Hill, New York, 1986.
[12] F. Rosenblatt, Perceptrons and the Theory of Brain Mechanisms. Spartan Books, 1962.
[13] R. C. Singleton, "A test for linear separability as applied to self-organizing machines," in Self-Organizing Systems 1962, (M. C. Yovits, G. T. Jacobi, and G. D. Goldstein, eds.), pp. 503–524, Spartan Books, Washington, 1962.
[14] A. Wieland and R. Leighton, "Geometric analysis of neural network capabilities," in 1st International Conference on Neural Networks, IEEE, June 1987.
modules
Conrad C. Galland and Geoffrey E. Hinton
Physics Dept. and Computer Science Dept.
University of Toronto
Toronto, Canada
M5S 1A4
ABSTRACT
A new form of the deterministic Boltzmann machine (DBM) learning procedure is presented which can efficiently train network modules to discriminate between input vectors according to some criterion. The new technique directly utilizes the free energy of these
"mean field modules" to represent the probability that the criterion
is met, the free energy being readily manipulated by the learning
procedure. Although conventional deterministic Boltzmann learning fails to extract the higher order feature of shift at a network
bottleneck, combining the new mean field modules with the mutual information objective function rapidly produces modules that
perfectly extract this important higher order feature without direct
external supervision.
1 INTRODUCTION
The Boltzmann machine learning procedure (Hinton and Sejnowski, 1986) can be
made much more efficient by using a mean field approximation in which stochastic
binary units are replaced by deterministic real-valued units (Peterson and Anderson,
1987). Deterministic Boltzmann learning can be used for "multicompletion" tasks
in which the subsets of the units that are treated as input or output are varied
from trial to trial (Peterson and Hartman, 1988). In this respect it resembles other
learning procedures that also involve settling to a stable state (Pineda, 1987). Using
the multicompletion paradigm, it should be possible to force a network to explicitly
extract important higher order features of an ensemble of training vectors by forcing
the network to pass the information required for correct completions through a
narrow bottleneck. In back-propagation networks with two or three hidden layers,
the use of bottlenecks sometimes allows the learning to explicitly discover important underlying features (Hinton, 1986) and our original aim was to demonstrate that
the same idea could be used effectively in a DBM with three hidden layers. The
initial simulations using conventional techniques were not successful, but when we
combined a new type of DBM learning with a new objective function, the resulting
network extracted the crucial higher order features rapidly and perfectly.
2 THE MULTI-COMPLETION TASK
Figure 1 shows a network in which the input vector is divided into 4 parts. A1 is a random binary vector. A2 is generated by shifting A1 either to the right or to the left by one "pixel", using wraparound. B1 is also a random binary vector, and B2 is generated from B1 by using the same shift as was used to generate A2 from A1. This means that any three of A1, A2, B1, B2 uniquely specify the fourth (we filter out the ambiguous cases where this is not true). To perform correct completion, the network must explicitly represent the shift in the single unit that connects its two halves. Shift is a second order property that cannot be extracted without hidden units.
Figure 1. [Network diagram omitted: input groups A1, A2 (one half) and B1, B2 (other half), hidden units, and a single connecting central unit.]
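A sketch of how such training cases can be generated (Python; the ambiguity filter removes the four period-2 patterns whose left and right shifts coincide, which leaves exactly the 288 shift-matched cases mentioned in Section 3):

```python
import random

def shift(v, direction):
    """Shift a binary tuple by one pixel with wraparound."""
    return v[-1:] + v[:-1] if direction == 'right' else v[1:] + v[:1]

def unambiguous(v):
    """The direction is determined by (v, shifted v) only when the two
    possible shifts of v differ; this excludes the period-2 patterns."""
    return shift(v, 'left') != shift(v, 'right')

def make_case(n_bits=4):
    """Return one shift-matched training case (A1, A2, B1, B2)."""
    while True:
        a1 = tuple(random.randint(0, 1) for _ in range(n_bits))
        b1 = tuple(random.randint(0, 1) for _ in range(n_bits))
        if unambiguous(a1) and unambiguous(b1):
            d = random.choice(['left', 'right'])  # same shift for both halves
            return a1, shift(a1, d), b1, shift(b1, d)
```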
3 SIMULATIONS USING STANDARD DETERMINISTIC BOLTZMANN LEARNING
The following discussion assumes familiarity with the deterministic Boltzmann learning procedure, details of which can be obtained from Hinton (1989). During the
positive phase of learning, each of the 288 possible sets of shift-matched four-bit vectors was clamped onto inputs A1, A2 and B1, B2, while in the negative phase, one of the four was allowed to settle unclamped. The weights were changed after
each training case using the on-line version of the DBM learning procedure. The
choice of which input not to clamp changed systematically throughout the learning process so that each was left unclamped equally often. This technique, although
successful in problems with only one hidden layer, could not train the network to
correctly perform the multicompletion task where any of the four input layers would
settle to the correct state when the other three were clamped. As a result, the single
central unit failed to extract shift. In general, the DBM learning procedure, like its
stochastic predecessor, seems to have difficulty learning tasks in multi-hidden layer
nets. This failure led to the development of the new procedure which, in one form,
manages to correctly extract shift without the need for many hidden layers or direct
external supervision.
4 A NEW LEARNING PROCEDURE FOR MEAN FIELD MODULES
A DBM with unit states in the range [-1,1] has free energy

F = -Σ_{i<j} w_ij y_i y_j + T Σ_i [ ((1+y_i)/2) log((1+y_i)/2) + ((1-y_i)/2) log((1-y_i)/2) ]    (1)

The DBM settles to a free energy minimum, F*, at a non-zero temperature, where the states of the units are given by

y_i = tanh( (1/T) Σ_j y_j w_ij )    (2)

At the minimum, the derivative of F* with respect to a particular weight (assuming T = 1) is given by (Hinton, 1989)

∂F*/∂w_ij = -y_i y_j    (3)
Suppose that we want a network module to discriminate between input vectors that
"fit" some criterion and input vectors that don't. Instead of using a net with an
output unit that indicates the degree of fit, we could view the negative of the mean
field free energy of the whole module as a measure of how happy it is with the
clamped input vector. From this standpoint, we can define the probability that
input vector α fits the criterion as

p_α = 1 / (1 + e^{F*_α})    (4)

where F*_α is the equilibrium free energy of the module with vector α clamped on the inputs.
Supervised training can be performed by using the cross-entropy error function
(Hinton, 1987):
C = -Σ_{α=1}^{N_+} log(p_α) - Σ_{β=1}^{N_-} log(1 - p_β)    (5)

where the first sum is over the N_+ input cases that fit the criterion, and the second is over the N_- cases that don't. The cross-entropy expression is used to specify error
derivatives for p_α and hence for F*_α. Error derivatives for each weight can then be
obtained by using equation (3), and the module is trained by gradient descent to
have high free energy for the "negative" training cases and low free energy for the
"positive" cases.
Thus, for each positive case
∂log(p_α)/∂w_ij = -(1/(1 + e^{-F*_α})) ∂F*_α/∂w_ij = (1/(1 + e^{-F*_α})) y_i y_j

For each negative case,

∂log(1 - p_β)/∂w_ij = (1/(1 + e^{F*_β})) ∂F*_β/∂w_ij = -(1/(1 + e^{F*_β})) y_i y_j
To test the new procedure, we trained a shift detecting module, composed of the input units A1 and A2 and the hidden units HA from figure 1, to have low free energy for all and only the right shifts. Each weight was changed in an on-line fashion according to

Δw_ij = ε (1/(1 + e^{-F*_α})) y_i y_j

for each right shifted case, and

Δw_ij = -ε (1/(1 + e^{F*_α})) y_i y_j

for each left shifted case. Only 10 sweeps through the 24 possible training cases
were required to successfully train the module to detect shift. The training was
particularly easy because the hidden units only receive connections from the input
units which are always clamped, so the network settles to a free energy minimum
in one iteration. Details of the simulations are given in Galland and Hinton (1990).
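The whole module-training loop fits in a short sketch (Python with NumPy; since the hidden units receive input only from clamped units, a single tanh update settles the network, as noted above; the learning rate eps and the entropy term restricted to hidden units are assumptions).

```python
import numpy as np

def settle_and_free_energy(W, x, T=1.0):
    """One tanh update settles the hidden units (they receive input only
    from clamped units); returns the hidden states and the mean field
    free energy F* of Eq. (1), with the entropy term over hidden units
    (clamped +/-1 inputs contribute no entropy)."""
    h = np.tanh((W @ x) / T)                       # Eq. (2)
    E = -h @ (W @ x)                               # energy of the settled state
    p = np.clip((1 + h) / 2, 1e-12, 1 - 1e-12)
    S = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    return h, E + T * S

def train_step(W, x, positive, eps=0.1):
    """Cross-entropy gradient step: lower F* for cases that fit the
    criterion, raise it for cases that don't (the two rules above)."""
    h, F = settle_and_free_energy(W, x)
    grad = np.outer(h, x)                          # y_i y_j = -dF*/dw_ij
    if positive:
        W += eps * grad / (1.0 + np.exp(-F))
    else:
        W -= eps * grad / (1.0 + np.exp(F))
    return W
```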
5 MAXIMIZING MUTUAL INFORMATION BETWEEN MEAN FIELD MODULES
At first sight, the new learning procedure is inherently supervised, so how can it
be used to discover that shift is an important underlying feature? One method
is to use two modules that each supervise the other. The most obvious way of
implementing this idea quickly creates modules that always agree because they are
always "on". If, however, we try to maximize the mutual information between the
stochastic binary variables represented by the free energies of the modules, there is
a strong pressure for each binary variable to have high entropy across cases because
the mutual information between binary variables A and B is:
I(A;B) = H_A + H_B - H_AB    (6)
where HAB is the entropy of the joint distribution of A and B over the training
cases, and H A and H B are the entropies of the individual distributions.
Consider two mean field modules with associated stochastic binary variables A, B ∈ {0, 1}. For a given case α,

p(A_α = 1) = 1 / (1 + e^{F*_{A,α}})    (7)

where F*_{A,α} is the free energy of the A module with the training case α clamped on the input. We can compute the probability that the A module is on or off by averaging over the input sample distribution, with p^α being the prior probability of an input case α:
p(A=1) = Σ_α p^α p(A_α=1),    p(A=0) = 1 - p(A=1)

Similarly, we can compute the four possible values in the joint probability distribution of A and B:

p(A=1, B=1) = Σ_α p^α p(A_α=1) p(B_α=1)
p(A=0, B=1) = p(B=1) - p(A=1, B=1)
p(A=1, B=0) = p(A=1) - p(A=1, B=1)
p(A=0, B=0) = 1 - p(B=1) - p(A=1) + p(A=1, B=1)
Using equation (3), the partial derivatives of the various individual and joint probability functions with respect to a weight w_ik in the A module are readily calculated:

∂p(A_α=1)/∂w_ik = p(A_α=1) (1 - p(A_α=1)) y_i y_k    (8)

∂p(A=1, B=1)/∂w_ik = Σ_α p^α (∂p(A_α=1)/∂w_ik) p(B_α=1)    (9)
The entropy of the stochastic binary variable A is

H_A = -⟨log p(A)⟩ = -Σ_{a=0,1} p(A=a) log p(A=a)

The entropy of the joint distribution is given by

H_AB = -⟨log p(A,B)⟩ = -Σ_{a,b} p(A=a, B=b) log p(A=a, B=b)
The partial derivative of I(A;B) with respect to a single weight w_ik in the A module can now be computed; since H_B does not depend on w_ik, we need only differentiate H_A and H_AB. As shown in Galland and Hinton (1990), the derivative is given by

∂I(A;B)/∂w_ik = ∂H_A/∂w_ik - ∂H_AB/∂w_ik
  = Σ_α p^α (p(A_α=1) - 1) p(A_α=1) (y_i y_k) [ log (p(A=1)/p(A=0))
    - p(B_α=1) log (p(A=1,B=1)/p(A=0,B=1)) - p(B_α=0) log (p(A=1,B=0)/p(A=0,B=0)) ]
The above derivation is drawn from Becker and Hinton (1989) who show that mutual
information can be used as a learning signal in back-propagation nets. We can now
perform gradient ascent in I(A; B) for each weight in both modules using a two-pass
procedure, the probabilities across cases being accumulated in the first pass.
This approach was applied to a system of two mean field modules (the left and
right halves of figure 1 without the connecting central unit) to detect shift. As in
the multi-completion task, random binary vectors were clamped onto inputs A1, A2 and B1, B2 related only by shift. Hence, the only way the two modules can
provide mutual information to each other is by representing the shift. Maximizing
the mutual information between them created perfect shift detecting modules in
only 10 two-pass sweeps through the 288 training cases. That is, after training,
each module was found to have low free energy for either left or right shifts, and
high free energy for the other. Details of the simulations are again given in Galland and Hinton (1990).
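For completeness, here is a sketch of the statistics accumulated in the first pass of the two-pass procedure (Python with NumPy; `FA` and `FB` are arrays of the two modules' equilibrium free energies over the training cases, and a uniform prior p^α is assumed). It evaluates I(A;B) of equation (6) from equation (7).

```python
import numpy as np

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def mutual_information(FA, FB):
    """I(A;B) for the stochastic binary variables defined by the two
    modules' free energies over the training cases (Eqs. 6 and 7)."""
    pA = 1.0 / (1.0 + np.exp(FA))        # p(A_alpha = 1) for each case
    pB = 1.0 / (1.0 + np.exp(FB))
    pa1, pb1 = pA.mean(), pB.mean()      # uniform prior p^alpha assumed
    p11 = (pA * pB).mean()
    joint = np.array([[1 - pa1 - pb1 + p11, pb1 - p11],
                      [pa1 - p11, p11]])
    H_A, H_B = binary_entropy(pa1), binary_entropy(pb1)
    H_AB = -np.sum(joint * np.log(np.clip(joint, 1e-12, None)))
    return H_A + H_B - H_AB
```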
6 SUMMARY
Standard deterministic Boltzmann learning failed to extract high order features
in a network bottleneck. We then explored a variant of DBM learning in which
the free energy of a module represents a stochastic binary variable. This variant
can efficiently discover that shift is an important feature without using external
supervision, provided we use an architecture and an objective function that are
designed to extract higher order features which are invariant across space.
Acknowledgements
We would like to thank Sue Becker for many helpful comments. This research was
supported by grants from the Ontario Information Technology Research Center and
the National Science and Engineering Research Council of Canada. Geoffrey Hinton
is a fellow of the Canadian Institute for Advanced Research.
References
Becker, S. and Hinton, G. E. (1989). Spatial coherence as an internal teacher for a
neural network. Technical Report CRG-TR-89-7, University of Toronto.
Galland, C. C. and Hinton, G. E. (1990). Experiments on discovering high order
features with mean field modules. University of Toronto Connectionist Research
Group Technical Report, forthcoming.
Hinton, G. E. (1986) Learning distributed representations of concepts. Proceedings
of the Eighth Annual Conference of the Cognitive Science Society, Amherst, Mass.
Hinton, G. E. (1987) Connectionist learning procedures. Technical Report CMU-CS-87-115, Carnegie Mellon University.
Hinton, G. E. (1989) Deterministic Boltzmann learning performs steepest descent
in weight-space. Neural Computation, 1.
Hinton, G. E. and Sejnowski, T. J. (1986) Learning and relearning in Boltzmann
machines. In Rumelhart, D. E., McClelland, J. L., and the PDP group, Parallel
Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1:
Foundations, MIT Press, Cambridge, MA.
Hopfield, J. J. (1984) Neurons with graded response have collective computational
properties like those of two-state neurons. Proceedings of the National Academy of
Sciences U.S.A., 81, 3088-3092.
Peterson, C. and Anderson, J. R. (1987) A mean field theory learning algorithm for
neural networks. Complex Systems, 1, 995-1019.
Peterson, C. and Hartman, E. (1988) Explorations of the mean field theory learning
algorithm. Technical Report ACA-ST/HI-065-88, Microelectronics and Computer
Technology Corporation, Austin, TX.
Pineda, F. J. (1987) Generalization of backpropagation to recurrent neural networks. Phys. Rev. Lett., 59, 2229-2232.
Following Curved Regularized Optimization
Solution Paths
Saharon Rosset
IBM T.J. Watson Research Center
Yorktown Heights, NY 10598
[email protected]
Abstract
Regularization plays a central role in the analysis of modern data, where
non-regularized fitting is likely to lead to over-fitted models, useless for
both prediction and interpretation. We consider the design of incremental algorithms which follow paths of regularized solutions, as the regularization varies. These approaches often result in methods which are
both efficient and highly flexible. We suggest a general path-following
algorithm based on second-order approximations, prove that under mild
conditions it remains "very close" to the path of optimal solutions and
illustrate it with examples.
1 Introduction
Given a data sample $(x_i, y_i)_{i=1}^n$ (with $x_i \in \mathbb{R}^p$ and $y_i \in \mathbb{R}$ for regression, $y_i \in \{\pm 1\}$ for classification), the generic regularized optimization problem calls for fitting models to the
data while controlling complexity by solving a penalized fitting problem:
$$\hat{\beta}(\lambda) = \arg\min_{\beta}\; \sum_i C(y_i,\, \beta^T x_i) + \lambda J(\beta) \qquad (1)$$
where C is a convex loss function and J is a convex model complexity penalty (typically taken to be the $\ell_q$ norm of $\beta$, with $q \ge 1$).¹
Many commonly used supervised learning methods can be cast in this form, including
regularized 1-norm and 2-norm support vector machines [13, 4], regularized linear and
logistic regression (i.e. Ridge regression, lasso and their logistic equivalents) and more. In
[8] we show that boosting can also be described as approximate regularized optimization,
with an l1 -norm penalty.
Detailed discussion of the considerations in selecting penalty and loss functions for regularized fitting is outside the scope of this paper. In general, there are two main areas we
need to consider in this selection:
1. Statistical considerations: robustness (which affects selection of loss), sparsity (the $\ell_1$-norm penalty encourages sparse solutions) and identifiability are among the questions we should keep in mind when selecting our formulation.

¹We assume a linear model in (1), but this is much less limiting than it seems, as the model can be linear in basis expansions of the original predictors, and so our approach covers kernel methods, wavelets, boosting and more.
2. Computational considerations: we should be able to solve the problems we pose with
the computational resources at our disposal. Kernel methods and boosting are examples
of computational tricks that allow us to solve very high dimensional problems (exactly or approximately) with a relatively small cost. In this paper we suggest a new computational
approach.
Once we have settled on a loss and penalty, we are still faced with the problem of selecting a "good" regularization parameter $\lambda$, in terms of prediction performance. A common approach is to solve (1) for several values of $\lambda$, then use holdout data (or theoretical approaches, like AIC or SRM) to select a good value. However, if we view the regularized optimization problem as a family of problems, parameterized by the regularization parameter $\lambda$, it allows us to define the "path" of optimal solutions $\{\hat\beta(\lambda) : 0 \le \lambda \le \infty\}$, which is a 1-dimensional curve through $\mathbb{R}^p$. Path following methods attempt to utilize the mathematical properties of this curve to devise efficient procedures for "following" it and generating the full set of regularized solutions with a (relatively) small computational cost.
As it turns out, there is a family of well known and interesting regularized problems for
which efficient exact path following algorithms can be devised. These include the lasso [3],
1- and 2-norm support vector machines [13, 4] and many others [9]. The main property of
these problems which makes them amenable to such methods is the piecewise linearity of
the regularized solution path in Rp . See [9] for detailed exposition of these properties and
the resulting algorithms.
However, the path following idea can stretch beyond these exact piecewise linear algorithms. The "first order" approach is to use gradient-based methods. In [8] we have described boosting as an approximate gradient-based algorithm for following $\ell_1$-norm regularized solution paths. [6] suggest a gradient descent algorithm for finding an optimal solution for a fixed value of $\lambda$ and are seemingly unaware that the path they are going through is of independent interest, as it consists of approximate (alas very approximate) solutions to $\ell_1$-regularized problems. Gradient-based methods, however, can only follow regularized paths under strict and non-testable conditions, and theoretical "closeness" results to the optimal path are extremely difficult to prove for them (see [8] for details).
In this paper, we suggest a general second-order algorithm for following "curved" regularized solution paths (i.e. ones that cannot be followed exactly by piecewise-linear algorithms). It consists of iteratively changing the regularization parameter, while making a single Newton step at every iteration towards the optimal penalized solution, for the current value of $\lambda$. We prove that if both the loss and penalty are "nice" (in terms of bounds on their relevant derivatives in the relevant region), then the algorithm is guaranteed to stay "very close" to the true optimal path, where "very close" is defined as:

If the change in the regularization parameter at every iteration is $\delta$, then the solution path we generate is guaranteed to be within $O(\delta^2)$ of the true path of penalized optimal solutions.
In section 2 we present the algorithm, and we then illustrate it on l1 - and l2 -regularized
logistic regression in section 3. Section 4 is devoted to a formal statement and proof outline
of our main result. We discuss possible extensions and future work in section 5.
2 Path following algorithm
We assume throughout that the loss function C is twice differentiable. Assume for now
also that the penalty J is twice differentiable (this assumption does not apply to the l1 norm penalty which is of great interest and we address this point later). The key to our
method is the set of normal equations for (1):
$$\nabla C(\hat\beta(\lambda)) + \lambda\,\nabla J(\hat\beta(\lambda)) = 0 \qquad (2)$$
Our algorithm iteratively constructs an approximate solution $\beta_t^{(\delta)}$ by taking "small" Newton-Raphson steps trying to maintain (2) as the regularization changes. Our main result in this paper is to show, both empirically and theoretically, that for small $\delta$, the difference $\|\beta_t^{(\delta)} - \hat\beta(\lambda_0 + \delta t)\|$ is small, and thus that our method successfully tracks the path of optimal solutions to (1).
Algorithm 1 gives a formal description of our quadratic tracking method. We start from a solution to (1) for some fixed $\lambda_0$ (e.g. $\hat\beta(0)$, the non-regularized solution). At each iteration we increase $\lambda$ by $\delta$ and take a single Newton-Raphson step towards the solution to (2) with the new $\lambda$ value in step 2(b).
Algorithm 1 Approximate incremental quadratic algorithm for regularized optimization
1. Set $\beta_0^{(\delta)} = \hat\beta(\lambda_0)$, set $t = 0$.
2. While ($\lambda_t < \lambda_{\max}$):
   (a) $\lambda_{t+1} = \lambda_t + \delta$
   (b) $\beta_{t+1}^{(\delta)} = \beta_t^{(\delta)} - \left[\nabla^2 C(\beta_t^{(\delta)}) + \lambda_{t+1} \nabla^2 J(\beta_t^{(\delta)})\right]^{-1} \left[\nabla C(\beta_t^{(\delta)}) + \lambda_{t+1} \nabla J(\beta_t^{(\delta)})\right]$
   (c) $t = t + 1$
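A minimal Python sketch of Algorithm 1 follows (not the authors' implementation; the starting solution and the derivative callables are assumed to be supplied by the caller):

```python
import numpy as np

def follow_path(beta0, lam0, lam_max, delta, grad_C, hess_C, grad_J, hess_J):
    """Follow the regularized solution path with one Newton-Raphson step
    per increment of lambda (Algorithm 1). grad_C/hess_C and grad_J/hess_J
    are callables returning the gradient vector and Hessian matrix of the
    loss C and penalty J at a given beta."""
    beta, lam = np.asarray(beta0, dtype=float).copy(), lam0
    path = [(lam, beta.copy())]
    while lam < lam_max:
        lam += delta                              # step 2(a)
        g = grad_C(beta) + lam * grad_J(beta)     # gradient of (1) at beta
        H = hess_C(beta) + lam * hess_J(beta)     # Hessian of (1) at beta
        beta = beta - np.linalg.solve(H, g)       # step 2(b): one Newton step
        path.append((lam, beta.copy()))           # step 2(c)
    return path
```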
2.1 The l1 -norm penalty
The $\ell_1$-norm penalty, $J(\beta) = \|\beta\|_1$, is of special interest because of its favorable statistical properties (e.g. [2]) and its widespread use in popular methods, such as the lasso [10] and the 1-norm SVM [13]. However, it is not differentiable, and so our algorithm does not apply to $\ell_1$-penalized problems directly.
To understand how we can generalize Algorithm 1 to this situation, we need to consider the Karush-Kuhn-Tucker (KKT) conditions for optimality of the optimization problem implied by (1). It is easy to verify that the normal equations (2) can be replaced by the following KKT-based conditions for the $\ell_1$-norm penalty:
$$|\nabla C(\hat\beta(\lambda))_j| < \lambda \;\Rightarrow\; \hat\beta(\lambda)_j = 0 \qquad (3)$$
$$\hat\beta(\lambda)_j \neq 0 \;\Rightarrow\; |\nabla C(\hat\beta(\lambda))_j| = \lambda \qquad (4)$$
These conditions hold for any differentiable loss and tell us that at each point on the path we have a set $A$ of non-zero coefficients which corresponds to the variables whose current "generalized correlation" $|\nabla C(\hat\beta(\lambda))_j|$ is maximal and equal to $\lambda$. All variables with smaller generalized correlation have a zero coefficient at the optimal penalized solution for this $\lambda$. Note that the $\ell_1$-norm penalty is twice differentiable everywhere except at 0. So if we carefully manage the set of non-zero coefficients according to these KKT conditions, we can still apply our algorithm in the lower-dimensional subspace spanned by the non-zero coefficients only.
Thus we get Algorithm 2, which employs the Newton approach of Algorithm 1 for a twice differentiable penalty, limited to the sub-space of "active" coefficients denoted by $A$. It adds to Algorithm 1 updates for the "add variable to active set" and "remove variable from active set" events, when a variable becomes "highly correlated" as defined in (4) and when a coefficient hits 0, respectively.²
Algorithm 2 Approximate incremental quadratic algorithm for regularized optimization with lasso penalty
1. Set $\beta_0^{(\delta)} = \hat\beta(\lambda_0)$, set $t = 0$, set $A = \{j : \hat\beta(\lambda_0)_j \neq 0\}$.
2. While ($\lambda_t < \lambda_{\max}$):
   (a) $\lambda_{t+1} = \lambda_t + \delta$
   (b) $\beta_{t+1}^{(\delta)} = \beta_t^{(\delta)} - \left[\nabla^2 C(\beta_t^{(\delta)})_A\right]^{-1} \left[\nabla C(\beta_t^{(\delta)})_A + \lambda_{t+1}\,\mathrm{sgn}(\beta_t^{(\delta)})_A\right]$
   (c) $A = A \cup \{j \notin A : |\nabla C(\beta_{t+1}^{(\delta)})_j| > \lambda_{t+1}\}$
   (d) $A = A \setminus \{j \in A : |\beta_{t+1,j}^{(\delta)}| < \epsilon\}$
   (e) $t = t + 1$
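A sketch of one iteration of Algorithm 2, restricted to the active set; the zero-threshold eps is our assumption for detecting a coefficient hitting 0, and we assume the active set is nonempty:

```python
import numpy as np

def l1_path_step(beta, lam_next, active, grad_C, hess_C, eps=1e-8):
    """One iteration of Algorithm 2: a Newton step in the subspace of active
    coefficients, then active-set updates from the KKT conditions (3)-(4)."""
    beta = beta.copy()
    A = sorted(active)
    g = grad_C(beta)[A] + lam_next * np.sign(beta[A])   # step 2(b) gradient
    H = hess_C(beta)[np.ix_(A, A)]                      # restricted Hessian
    beta[A] -= np.linalg.solve(H, g)
    corr = np.abs(grad_C(beta))                         # generalized correlations
    active = set(A) | {j for j in range(beta.size)
                       if j not in active and corr[j] > lam_next}  # step (c)
    active -= {j for j in A if abs(beta[j]) < eps}                 # step (d)
    for j in range(beta.size):
        if j not in active:
            beta[j] = 0.0
    return beta, active
```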
2.2 Computational considerations
For a fixed $\lambda_0$ and $\lambda_{\max}$, Algorithms 1 and 2 take $O(1/\delta)$ steps. At each iteration they need to calculate the Hessians of both the loss and the penalty at a typical computational cost of $O(n p^2)$; invert the resulting $p \times p$ matrix at a cost of $O(p^3)$; and perform the gradient calculation and multiplication, which are $o(n p^2)$ and so do not affect the complexity calculation. Since we implicitly assume throughout that $n > p$, we get an overall complexity of $O(n p^2 / \delta)$. The choice of $\delta$ represents a tradeoff between computational complexity and accuracy (in section 4 we present theoretical results on the relationship between $\delta$ and the accuracy of the path approximation we get). In practice, our algorithm is practical for problems with up to several hundred predictors and several thousand observations. See the example in section 3.
It is interesting to compare this calculation to the obvious alternative, which is to solve $O(1/\delta)$ regularized problems (1) separately, using a Newton-Raphson approach, resulting in the same complexity (assuming the number of Newton-Raphson iterations for finding each solution is bounded). There are several reasons why our approach is preferable:
- The number of iterations until convergence of Newton-Raphson may be large even if it does converge. Our algorithm guarantees we stay very close to the optimal solution path with a single Newton step at each new value of $\lambda$.
- Empirically we observe that in some cases our algorithm is able to follow the path while direct solution for some values of $\lambda$ fails to converge. We assume this is related to various numeric properties of the specific problems being solved.
- For the interesting case of the $\ell_1$-norm penalty and a "curved" loss function (like the logistic log-likelihood), there is no direct Newton-Raphson algorithm. Re-formulating the problem into differentiable form requires doubling the dimensionality. Using our Algorithm 2, we can still utilize the same Newton method, with significant computational savings when many coefficients are 0 and we work in a lower-dimensional subspace.
²When a coefficient hits 0 it not only hits a non-differentiability point in the penalty, it also ceases to be maximally correlated as defined in (4). A detailed proof of this fact and the rest of the "accounting" approach can be found in [9].
On the flip side, our results in section 4 below indicate that to guarantee successful tracking we require $\delta$ to be small, meaning the number of steps we take in the algorithm may be significantly larger than the number of distinct problems we would typically solve to select $\lambda$ using a non-path approach.
2.3 Connection to path following methods from numerical analysis
There is extensive literature on path-following methods for solution paths of general parametric problems. A good survey is given in [1]. In this context, our method can be described
as a ?predictor-corrector? method with a redundant first order predictor step. That is, the
corrector step starts from the previous approximate solution. These methods are recognized
as attractive options when the functions defining the path (in our case, the combination of
loss and penalty) are ?smooth? and ?far from linear?. These conditions for efficacy of our
approach are reflected in the regularity conditions for the closeness result in Section 4.
3 Example: l2 - and l1 -penalized logistic regression
Regularized logistic regression has been successfully used as a classification and probability estimation approach [11, 12]. We first illustrate applying our quadratic method to this regularized problem using a small subset of the "spam" data-set, available from the UCI repository (http://www.ics.uci.edu/~mlearn/MLRepository.html), which allows us to present some detailed diagnostics. Next, we apply it to the full "spam" data-set, to demonstrate its time complexity on bigger problems.
We first choose five variables and 300 observations and track the solution paths to two
regularized logistic regression problems with the l2 -norm and the l1 -norm penalties:
$$\hat\beta(\lambda) = \arg\min_{\beta} \sum_i \log\left(1 + \exp\{-y_i \beta^T x_i\}\right) + \lambda \|\beta\|_2^2 \qquad (5)$$
$$\hat\beta(\lambda) = \arg\min_{\beta} \sum_i \log\left(1 + \exp\{-y_i \beta^T x_i\}\right) + \lambda \|\beta\|_1 \qquad (6)$$
Figure 1 shows the solution paths $\beta^{(\delta)}(t)$ generated by running Algorithms 1 and 2 on this data using $\delta = 0.02$ and starting at $\lambda = 0$, i.e. from the non-regularized logistic regression solution. The interesting graphs for our purpose are the ones on the right. They represent the "optimality gap":
$$e_t = \frac{\nabla C(\beta_t^{(\delta)})}{\nabla J(\beta_t^{(\delta)})} + \lambda_t$$
where the division is done componentwise (and so the five curves in each plot correspond to the five variables we are using). Note that the optimal solution $\hat\beta(t\delta)$ is uniquely defined by the fact that (2) holds, and therefore the "optimality gap" is equal to zero componentwise at $\hat\beta(t\delta)$. By convexity and regularity of the loss and the penalty, there is a correspondence between small values of $e$ and small distance $\|\beta^{(\delta)}(t) - \hat\beta(t\delta)\|$. In our example we observe that the components of $e$ seem to be bounded in a small region around 0 for both paths (note the small scale of the y axis in both plots; the maximal error is under $10^{-3}$). We conclude that on this simple example our method tracks the optimal solution paths well, both for the $\ell_1$- and $\ell_2$-regularized problems. The plots on the left show the actual coefficient paths: the curve in $\mathbb{R}^5$ is shown as five coefficient traces in $\mathbb{R}$, each corresponding to one variable, with the non-regularized solution (identical for both problems) on the extreme left.
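For concreteness, here is a sketch (ours) of the derivative computations that step 2(b) needs for the logistic problems (5)-(6); the labels are assumed to be in {-1, +1}:

```python
import numpy as np

def logistic_derivatives(beta, X, y):
    """Gradient and Hessian of C(beta) = sum_i log(1 + exp(-y_i * x_i'beta)),
    with y in {-1, +1}: exactly the loss used in (5) and (6)."""
    m = y * (X @ beta)                  # per-case margins y_i * x_i'beta
    p = 1.0 / (1.0 + np.exp(m))        # note d/dm log(1 + exp(-m)) = -p
    grad = -X.T @ (y * p)
    w = p * (1.0 - p)                   # per-case curvature (y_i^2 = 1)
    hess = X.T @ (X * w[:, None])
    return grad, hess

# For the l2 penalty J(beta) = ||beta||_2^2: grad_J = 2*beta, hess_J = 2*I.
```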
Next, we run our algorithm on the full "spam" data-set, containing p = 57 predictors and n = 4601 observations. For both the $\ell_1$- and $\ell_2$-penalized paths we used
Figure 1: Solution paths (left) and optimality criterion (right) for $\ell_1$-penalized logistic regression (top) and $\ell_2$-penalized logistic regression (bottom). These result from running Algorithms 2 and 1, respectively, using $\delta = 0.02$ and starting from the non-regularized logistic regression solution (i.e. $\lambda = 0$).
$\lambda_0 = 0$, $\lambda_{\max} = 50$, $\delta = 0.02$, and the whole path was generated in under 5 minutes using a Matlab implementation on an IBM T-30 laptop. As in the small-scale example, the "optimality criterion" was uniformly small throughout the two paths, with none of its 57 components exceeding $10^{-3}$ at any point.
4 Theoretical closeness result
In this section we prove that our algorithm can track the path of true solutions to (1). We show that under regularity conditions on the loss and penalty (which hold for all the candidates we have examined), if we run Algorithm 1 with a specific step size $\delta$, then we remain within $O(\delta^2)$ of the true path of optimal regularized solutions.

Theorem 1 Assume $\lambda_0 > 0$; then for $\delta$ small enough and under regularity conditions on the derivatives of C and J,
$$\forall\; 0 < c < \lambda_{\max} - \lambda_0, \quad \|\beta^{(\delta)}(c/\delta) - \hat\beta(\lambda_0 + c)\| = O(\delta^2)$$
So there is a uniform bound $O(\delta^2)$ on the error which does not depend on c.
Proof We give the details of the proof in Appendix A of [7]. Here we give a brief review
of the main steps.
Similar to section 3 we define the "optimality gap":
$$e_t^j = \left(\frac{\nabla C(\beta_t^{(\delta)})_j}{\nabla J(\beta_t^{(\delta)})_j}\right) + \lambda_t \qquad (7)$$
Also define a "regularity constant" $M$, which depends on $\lambda_0$ and the first, second and third derivatives of the loss and penalty.
The proof is presented as a succession of lemmas:
Lemma 2 Let $u_1 = M p \delta^2$, $u_t = M(u_{t-1} + \sqrt{p}\,\delta)^2$; then $\|e_t\|_2 \le u_t$.

This lemma gives a recursive expression bounding the error in the optimality gap (7) as the algorithm proceeds. The proof is based on separate Taylor expansions of the numerator and denominator of the ratio $\nabla C / \nabla J$ in the optimality gap and some tedious algebra.

Lemma 3 If $\sqrt{p}\,\delta M \le 1/4$ then
$$u_t \nearrow \frac{1 - 2\sqrt{p}\,\delta M - \sqrt{1 - 4\sqrt{p}\,\delta M}}{2M} = O(\delta^2), \quad \forall t$$

This lemma shows that the recursive bound translates to a uniform $O(\delta^2)$ bound, if $\delta$ is small enough. The proof consists of analytically finding the fixed point of the increasing series $u_t$.
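For intuition, the limit in Lemma 3 is the smaller fixed point of the recursion of Lemma 2; filling in the algebra (our own derivation, not in the paper):
$$u = M(u + \sqrt{p}\,\delta)^2 \;\Longleftrightarrow\; Mu^2 + (2\sqrt{p}\,\delta M - 1)u + Mp\delta^2 = 0,$$
$$u^* = \frac{1 - 2\sqrt{p}\,\delta M - \sqrt{(1 - 2\sqrt{p}\,\delta M)^2 - 4M^2 p \delta^2}}{2M} = \frac{1 - 2\sqrt{p}\,\delta M - \sqrt{1 - 4\sqrt{p}\,\delta M}}{2M},$$
which is real whenever $\sqrt{p}\,\delta M \le 1/4$ and expands to $M p \delta^2 + O(\delta^3) = O(\delta^2)$.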
Lemma 4 Under regularity conditions on the penalty and loss functions in the neighborhood of the solutions to (1), the $O(\delta^2)$ uniform bound of Lemma 3 translates to an $O(\delta^2)$ uniform bound on $\|\beta^{(\delta)}(c/\delta) - \hat\beta(\lambda_0 + c)\|$.

Finally, this lemma translates the optimality gap bound to an actual closeness result. This is proven via a Lipschitz argument.
4.1 Required regularity conditions
Regularity in the loss and the penalty is required in the definition of the regularity constant $M$ and in the translation of the $O(\delta^2)$ bound on the "optimality gap" into one on the distance from the path in Lemma 4. The exact derivation of the regularity conditions is highly technical and lengthy. They require us to bound the norm of third-derivative "hyper-matrices" for the loss and the penalty, as well as the norms of various functions of the gradients and Hessians of both (the boundedness is required only in the neighborhood of the optimal path where our approximate path can venture, obviously). We also need to have $\lambda_0 > 0$ and $\lambda_{\max} < \infty$. Refer to Appendix A of [7] for details. Assuming that $\lambda_0 > 0$ and $\lambda_{\max} < \infty$, these conditions hold for every interesting example we have encountered, including:
- Ridge regression and the lasso (that is, $\ell_2$- and $\ell_1$-regularized squared error loss).
- $\ell_1$- and $\ell_2$-penalized logistic regression; also Poisson regression and other exponential family models.
- $\ell_1$- and $\ell_2$-penalized exponential loss.
Note that in our practical examples above we started from $\lambda_0 = 0$ and our method still worked well. We observe in Figure 1 that the tracking algorithm indeed suffers the biggest inaccuracy for the small values of $\lambda$, but manages to "self-correct" as $\lambda$ increases.
5 Extensions
We have described our method in the context of linear models for supervised learning.
There are several natural extensions and enhancements to consider.
Basis expansions and Kernel methods
Our approach obviously applies, as is, to models that are linear in basis expansions of the
original variables (like wavelets or kernel methods) as long as p < n is preserved. However,
the method can easily be applied to high (including infinite) dimensional kernel versions
of regularized models where RKHS theory applies. We know that the solution path is fully
within the span of the representer functions, that is the columns of the Kernel matrix. With
a kernel matrix K with columns k1 , ..., kn and the standard l2 -norm penalty, the regularized
problem becomes:
$$\hat\beta(\lambda) = \arg\min_{\beta} \sum_i C(y_i, \beta^T k_i) + \lambda\, \beta^T K \beta$$
so the penalty now also contains the kernel matrix, but this poses no complications in using Algorithm 1. The only consideration we need to keep in mind is the computational one, as our complexity is $O(n^3/\delta)$. So our method is fully applicable and practical for kernel methods, as long as the number of observations, and the resulting kernel matrix, are not too large (up to several hundreds).
Unsupervised learning
There is no reason to limit the applicability of this approach to supervised learning. Thus,
for example, adaptive density estimation using negative log-likelihood as a loss can be
regularized and the solution path be tracked using our algorithm.
Computational tricks
The computational complexity of our algorithm limits its applicability to large problems.
To improve its scalability we primarily need to reduce the effort in the Hessian calculation
and inversion. The obvious suggestion here would be to keep the Hessian part of step 2(b)
in Algorithm 1 fixed for many iterations and change the gradient part only, then update
the Hessian occasionally. The clear disadvantage would be that the "closeness" guarantees
would no longer hold. We have not tried this in practice but believe it is worth pursuing.
Acknowledgements. The author thanks Noureddine El Karoui for help with the proof and
Jerome Friedman, Giles Hooker, Trevor Hastie and Ji Zhu for helpful discussions.
References
[1] Allgower, E. L. & Georg, K. (1993). Continuation and path following. Acta Numerica, 2:1-64.
[2] Donoho, D., Johnstone, I., Kerkyacharian, G. & Picard, D. (1995). Wavelet shrinkage: Asymptopia? Annals of Statistics.
[3] Efron, B., Hastie, T., Johnstone, I. & Tibshirani, R. (2004). Least Angle Regression. Annals of Statistics.
[4] Hastie, T., Rosset, S., Tibshirani, R. & Zhu, J. (2004). The Entire Regularization Path for the Support Vector Machine. Journal of Machine Learning Research, 5(Oct):1391-1415.
[5] Hastie, T., Tibshirani, R. & Friedman, J. (2001). The Elements of Statistical Learning. Springer-Verlag.
[6] Kim, Y. & Kim, J. (2004). Gradient LASSO for feature selection. ICML-04, to appear.
[7] Rosset, S. (2003). Topics in Regularization and Boosting. PhD thesis, Dept. of Statistics, Stanford University. http://www-stat.stanford.edu/~saharon/papers/PhDThesis.pdf
[8] Rosset, S., Zhu, J. & Hastie, T. (2003). Boosting as a regularized path to a maximum margin classifier. Journal of Machine Learning Research, 5(Aug):941-973.
[9] Rosset, S. & Zhu, J. (2003). Piecewise linear regularized solution paths. Submitted.
[10] Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. JRSSB.
[11] Wahba, G., Gu, C., Wang, Y. & Chappell, R. (1995). Soft Classification, a.k.a. Risk Estimation, via Penalized Log Likelihood and Smoothing Spline Analysis of Variance. In D.H. Wolpert, editor, The Mathematics of Generalization.
[12] Zhu, J. & Hastie, T. (2003). Classification of Gene Microarrays by Penalized Logistic Regression. Biostatistics, to appear.
[13] Zhu, J., Hastie, T., Rosset, S. & Tibshirani, R. (2004). 1-norm support vector machines. Neural Information Processing Systems, 16.
The Correlated Correspondence Algorithm for
Unsupervised Registration of Nonrigid Surfaces
Dragomir Anguelov¹, Praveen Srinivasan¹, Hoi-Cheung Pang¹,
Daphne Koller¹, Sebastian Thrun¹, James Davis²*
¹Stanford University, Stanford, CA 94305
²University of California, Santa Cruz, CA 95064
e-mail: {drago,praveens,hcpang,koller,thrun,jedavis}@cs.stanford.edu
Abstract
We present an unsupervised algorithm for registering 3D surface scans of
an object undergoing significant deformations. Our algorithm does not
need markers, nor does it assume prior knowledge about object shape, the
dynamics of its deformation, or scan alignment. The algorithm registers
two meshes by optimizing a joint probabilistic model over all point-to-point correspondences between them. This model enforces preservation
of local mesh geometry, as well as more global constraints that capture
the preservation of geodesic distance between corresponding point pairs.
The algorithm applies even when one of the meshes is an incomplete
range scan; thus, it can be used to automatically fill in the remaining surfaces for this partial scan, even if those surfaces were previously only
seen in a different configuration. We evaluate the algorithm on several
real-world datasets, where we demonstrate good results in the presence
of significant movement of articulated parts and non-rigid surface deformation. Finally, we show that the output of the algorithm can be used for
compelling computer graphics tasks such as interpolation between two
scans of a non-rigid object and automatic recovery of articulated object
models.
1 Introduction
The construction of 3D object models is a key task for many graphics applications. It is
becoming increasingly common to acquire these models from a range scan of a physical
object. This paper deals with an important subproblem of this acquisition task ? the
problem of registering two deforming surfaces corresponding to different configurations of
the same non-rigid object.
The main difficulty in the 3D registration problem is determining the correspondences of
points on one surface to points on the other. Local regions on the surface are rarely distinctive enough to determine the correct correspondence, whether because of noise in the scans,
or because of symmetries in the object shape. Thus, the set of candidate correspondences to
a given point is usually large. Determining the correspondence for all object points results
in a combinatorially large search problem. The existing algorithms for deformable surface
*A results video is available at http://robotics.stanford.edu/~drago/cc/video.mp4
Figure 1: A) Registration results for two meshes. Nonrigid ICP and its variant augmented with spin
images get stuck in local maxima. Our CC algorithm produces a largely correct registration, although
with an artifact in the right shoulder (inset). B) Illustration of the link deformation process C) The
CC algorithm which uses only deformation potentials can violate mesh geometry. Near regions can
map to far ones (segment AB) and far regions can map to near ones (points C,D).
registration make the problem tractable by assuming significant prior knowledge about the
objects being registered. Some rely on the presence of markers on the object [1, 20], while
others assume prior knowledge about the object dynamics [16], or about the space of nonrigid deformations [15, 5]. Algorithms that make neither restriction [18, 12] simplify the
problem by decorrelating the choice of correspondences for the different points in the scan.
However, this approximation is only good in the case when the object deformation is small;
otherwise, it results in poor local maxima as nearby points in one scan are allowed to map
to far-away points in the other.
Our algorithm defines a joint probabilistic model over all correspondences, which explicitly models the correlations between them: specifically, that nearby points in one mesh should map to nearby points in the other. Importantly, the notion of "nearby" used in our
model is defined in terms of geodesic distance over the mesh. We define a probabilistic
model over the set of correspondences, that encodes these geodesic distance constraints as
well as penalties for link twisting and stretching, and high-level local surface features [14].
We then apply loopy belief propagation [21] to this model, in order to solve for the entire
set of correspondences simultaneously. The result is a registration that respects the surface
geometry. To the best of our knowledge, the algorithm we present in this paper is the first
algorithm which allows the registration of 3D surfaces of an object where the object configurations can vary significantly, there is no prior knowledge about object shape or dynamics
of deformation, and nothing whatsoever is known about the object alignment. Moreover,
unlike many methods, our algorithm can be used to register a partial scan to a complete
model, greatly increasing its applicability.
We apply our approach to three datasets containing 3D scans of a wooden puppet, a
human arm and entire human bodies in different configurations. We demonstrate good
registration results for scan pairs exhibiting articulated motion, non-rigid deformations, or
both. We also describe three applications of our method. In our first application, we show
how a partial scan of an object can be registered onto a fully specified model in a different configuration. The resulting registration allows us to use the model to "complete" the partial scan in a way that preserves the local surface geometry. In the second, we use
the correspondences found by our algorithm to smoothly interpolate between two different
poses of an object. In our final application, we use a set of registered scans of the same
object in different positions to recover a decomposition of the object into approximately
rigid parts, and recover an articulated skeleton linking the parts. All of these applications
are done in an unsupervised way, using only the output of our Correlated Correspondence
algorithm applied to pairs of poses with widely varying deformations, and unknown initial
alignments. These results demonstrate the value of a high-quality solution to the registration problem to a range of graphics tasks.
2 Previous Work
Surface registration is a fundamental building block in computer graphics. The classical solution for registering rigid surfaces is the Iterative Closest Point algorithm (ICP) [4, 6, 17].
Recently, there has been work extending ICP to non-rigid surfaces [18, 8, 12, 1]. These
algorithms treat one of the scans (usually a complete model of the surface) as a deformable
template. The links between adjacent points on the surface can be thought of as springs,
which are allowed to deform at a cost. Similarly to ICP, these algorithms iterate between
two subproblems ? estimating the non-rigid transformation ? and estimating the set of
correspondences C between the scans. The step estimating the correspondences assumes
that a good estimate of the nonrigid transformation ? is available. Under this assumption,
the assignments to the correspondence variables become decorrelated: each point in the
second scan is associated with the nearest point (in the Euclidean distance sense) in the
deformed template scan. However, the decomposition also induces the algorithm?s main
limitation. By assigning points in the second scan to points on the deformed model independently, nearby points in the scan can get associated to remote points in the model if the
estimate of $\tau$ is poor (Fig. 1A). While several approaches have been proposed to address
this problem of incorrect correspondences, their applicability is largely limited to problems
where the deformation is local, and the initial alignment is approximately correct.
Another line of related work is the work on deformable template matching in the computer vision community. In the 3D case, this framework is used for detection of articulated
object models in images [13, 22, 19]. The algorithms assume the decomposition of the
object into a relatively small number of parts is known, and that a detector for each object
part is available. Template matching approaches have also been applied to deformable 2D
objects, where very efficient solutions exist [9, 11]. However, these methods do not extend
easily to the case of 3D surfaces.
3 The Correlated Correspondence Algorithm
The input to the algorithm is a set of two meshes (surfaces tessellated into polygons). The model mesh $X = (V^X, E^X)$ is a complete model of the object, in a particular pose. $V^X = (x_1, \ldots, x_N)$ denotes the mesh points, while $E^X$ is the set of links between adjacent points on the mesh surface. The data mesh $Z = (V^Z, E^Z)$ is either a complete model or a partial view of the object in a different configuration. Each data mesh point $z_k$ is associated with a correspondence variable $c_k$, specifying the corresponding model mesh point. The task of registration is one of estimating the set of all correspondences $C$ and a non-rigid transformation $\tau$ which aligns the corresponding points.
3.1 Probabilistic Model
We formulate the registration problem as one of finding an embedding of the data mesh
Z into the model mesh X, which is encoded as an assignment to all correspondence variables C = (c1 , . . . , cK ). The main idea behind our approach is to preserve the consistency of the embedding by explicitly correlating the assignments to the correspondence
variables. We define a joint distribution over the correspondence variables $c_1, \ldots, c_K$, represented as a Markov network. For each pair of adjacent data mesh points $z_k, z_l$, we want to define a probabilistic potential $\psi(c_k, c_l)$ that constrains this pair of correspondences to be reasonable and consistent. This gives rise to a joint probability distribution of the form
$$p(C) = \frac{1}{Z} \prod_k \psi(c_k) \prod_{k,l} \psi(c_k, c_l)$$
which contains only single and pairwise potentials.
Performing probabilistic inference to find the most likely joint assignment to the entire set
of correspondence variables C should yield a good and consistent registration.
Deformation Potentials. We want our model to encode a preference for embeddings
of mesh Z into mesh X, which minimize the amount of deformation $\tau$ induced by the embedding. In order to quantify the amount of deformation $\tau$ applied to the model, we will follow the ideas of Hähnel et al. [12] and treat the links in the set $E^X$ as springs, which
resist stretching and twisting at their endpoints. Stretching is easily quantified by looking at
changes in the link length induced by the transformation $\tau$. Link twisting, however, is ill-specified by looking only at the Cartesian coordinates of the points alone. Following [12], we attach an imaginary local coordinate system to each point on the model. This local coordinate system allows us to quantify the "twist" of a point $x_j$ relative to a neighbor $x_i$. A non-rigid transformation $\tau$ defines, for each point $x_i$, a translation of its coordinates and a rotation of its local coordinate system.
To evaluate the deformation penalty, we parameterize each link in the model in terms
of its length and its direction relative to its endpoints (see Fig. 1B). Specifically, we define $l_{i,j}$ to be the distance between $x_i$ and $x_j$; $d_{i \to j}$ is a unit vector denoting the direction of the point $x_j$ in the coordinate system of $x_i$ (and vice versa). We use $e_{i,j}$ to denote the set of edge parameters $(l_{i,j}, d_{i \to j}, d_{j \to i})$. It is now straightforward to specify the penalty for model deformations. Let $\tau$ be a transformation, and let $\tilde{e}_{i,j}$ denote the triple of parameters associated with the link between $x_i$ and $x_j$ after applying $\tau$. Our model penalizes twisting and stretching, using a separate zero-mean Gaussian noise model for each:
$$P(\tilde{e}_{i,j} \mid e_{i,j}) = P(\tilde{l}_{i,j} \mid l_{i,j})\, P(\tilde{d}_{i \to j} \mid d_{i \to j})\, P(\tilde{d}_{j \to i} \mid d_{j \to i}) \qquad (1)$$
In the absence of prior information, we assume that all links are equally likely to deform.
In order to quantify the deformation induced by an embedding $C$, we need to include a potential $\psi_d(c_k, c_l)$ for each link $e^Z_{k,l} \in E^Z$. Every probability $\psi_d(c_k = i, c_l = j)$ corresponds to the deformation penalty incurred by deforming model link $e_{i,j}$ to generate link $e^Z_{k,l}$, and is defined in (1). We do not restrict ourselves to the set of links in $E^X$, since the original mesh tessellation is sparse and local. Any two points in $X$ are allowed to implicitly define a link.
Unfortunately, we cannot directly estimate the quantity $P(e^Z_{k,l} \mid e_{i,j})$, since the link parameters $e^Z_{k,l}$ depend on knowing the nonrigid transformation, which is not given as part of the input. The key issue is estimating the (unknown) relative rotation of the link endpoints. In effect, this rotation is an additional latent variable, which must also be part of the probabilistic model. To remain within the realm of discrete Markov networks, allowing the application of standard probabilistic inference algorithms, we discretize the space of possible rotations, and fold it into the domains of the correspondence variables. For each possible value of the correspondence variable $c_k = i$ we select a small set of candidate rotations, consistent with local geometry. We do this by aligning local patches around the points $x_i$ and $z_k$ using rigid ICP. We extend the domain of each correspondence variable $c_k$, so that each value encodes a matching point and a particular rotation from the precomputed set for that point. Now the edge parameters $e^Z_{k,l}$ are fully determined, and so is the probabilistic potential.
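As an illustration, a sketch of evaluating $\psi_d$ for one candidate pair, with assumed noise scales sigma_l and sigma_d (the paper does not report the values it uses):

```python
import numpy as np

def deformation_potential(e_model, e_data, sigma_l=0.1, sigma_d=0.1):
    """Evaluate psi_d(c_k=i, c_l=j) via the factored Gaussian model (1).
    e_model = (l, d_ij, d_ji) holds the model-link parameters; e_data holds
    the corresponding data-link parameters after the candidate rotations
    have been applied. sigma_l and sigma_d are assumed noise scales."""
    l0, d0a, d0b = e_model
    l1, d1a, d1b = e_data
    stretch = (l1 - l0) ** 2 / sigma_l ** 2                  # length change
    twist = (np.sum((np.asarray(d1a) - np.asarray(d0a)) ** 2) +
             np.sum((np.asarray(d1b) - np.asarray(d0b)) ** 2)) / sigma_d ** 2
    return np.exp(-0.5 * (stretch + twist))                  # unnormalized
```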
Geodesic Distances. Our proposed approach raises the question as to what constitutes
the best constraint between neighboring correspondence variables. The literature on scan
registration, for rigid and non-rigid models alike, relies on preserving Euclidean distance. While Euclidean distance is meaningful for rigid objects, it is very sensitive to deformations, especially those induced by moving parts. For example, in Fig. 1C, we see that
the two legs in one configuration of our puppet are fairly close together, allowing the algorithm to map two adjacent points in the data mesh to the two separate legs, with minimal
deformation penalty. In the complementary situation, especially when object symmetries
are present, two distant yet similar points in one scan might get mapped to the same region
in the other. For example, in the same figure, we see that points in both an arm and a leg in
the data mesh get mapped to a single leg in the model mesh.
We therefore want to enforce constraints preserving distance along the mesh surface
(geodesic distance). Our probabilistic framework easily incorporates such constraints as
correlations between pairs of correspondence variables. We encode a nearness preservation
Figure 2: A) Automatic interpolation between two scans of an arm and a wooden puppet. B) Registration results on two scans of the same man sitting and standing up (select points were displayed)
C) Registration results on scans of a larger man and a smaller woman. The algorithm is robust to
small changes in object scale.
constraint which prevents adjacent points in mesh Z from being mapped to distant points in X in the geodesic distance sense. For adjacent points $z_k, z_l$ in the data mesh, we define the following potential:
$$\psi_n(c_k = i, c_l = j) = \begin{cases} 0 & \mathrm{dist}_{\mathrm{Geodesic}}(x_i, x_j) > \alpha\rho \\ 1 & \text{otherwise} \end{cases} \qquad (2)$$
where $\rho$ is the data mesh resolution and $\alpha$ is some constant, chosen to be 3.5.

The farness preservation potentials encode the complementary constraint. For every pair of points $z_k, z_l$ whose geodesic distance is more than $5\rho$ on the data mesh, we have a potential:
$$\psi_f(c_k = i, c_l = j) = \begin{cases} 0 & \mathrm{dist}_{\mathrm{Geodesic}}(x_i, x_j) < \beta\rho \\ 1 & \text{otherwise} \end{cases} \qquad (3)$$
where $\beta$ is also a constant, chosen to be 2 in our implementation. The intuition behind this constraint is fairly clear: if $z_k, z_l$ are far apart on the data mesh, then their corresponding points must be far apart on the model mesh.
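A sketch of how the two potentials could be tabulated, assuming a precomputed matrix of model-mesh geodesic distances (e.g. from Dijkstra's algorithm on the mesh graph):

```python
def nearness_potential(geo_model, i, j, rho, alpha=3.5):
    """psi_n from (2): adjacent data-mesh points may not map to model points
    more than alpha*rho apart in geodesic distance. geo_model is an assumed
    precomputed geodesic distance matrix; rho is the data mesh resolution."""
    return 0.0 if geo_model[i, j] > alpha * rho else 1.0

def farness_potential(geo_model, i, j, rho, beta=2.0):
    """psi_f from (3): data-mesh points more than 5*rho apart on the data
    mesh must map to model points at least beta*rho apart."""
    return 0.0 if geo_model[i, j] < beta * rho else 1.0
```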
Local Surface Signatures. Finally, we encode a set of potentials that correspond to
the preservation of local surface properties between the model mesh and data mesh. The
use of local surface signatures is important, because it helps to guide the optimization in
the exponential space of assignments. We use spin images [14] compressed with principal component analysis to produce a low-dimensional signature $s_x$ of the local surface geometry around a point $x$. When data and model points correspond, we expect their local signatures to be similar. We introduce a potential whose values $\psi_s(c_k = i)$ enforce a zero-mean Gaussian penalty for discrepancies between $s_{x_i}$ and $s_{z_k}$.
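A minimal sketch of the signature compression (our own; the spin-image computation itself is from [14] and omitted here):

```python
import numpy as np

def compress_signatures(spin_images, n_components=10):
    """Compress raw spin images (rows of an (N, D) array) into
    low-dimensional signatures s_x via PCA (SVD of the centered data)."""
    X = np.asarray(spin_images, dtype=float)
    Xc = X - X.mean(axis=0)                       # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T               # (N, n_components) signatures
```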
3.2 Optimization
In the previous section, we defined a Markov network, which encodes a joint probability
distribution over the correspondence variables as a product of single and pairwise potentials. Our goal is to find a joint assignment to these variables that maximizes this probability. This problem is one of standard probabilistic inference over the Markov network.
However, the Markov network is quite large, and contains a large number of loops, so that
exact inference is computationally infeasible. We therefore apply an approximate inference
method known as loopy belief propagation (LBP)[21], which has been shown to work in a
wide variety of applications. Running LBP until convergence results in a set of probabilistic assignments to the different correspondence variables, which are locally consistent. We
then simply extract the most likely assignment for each variable to obtain a correspondence.
One remaining complication arises from the form of our farness preservation constraints.
In general, most pairs of points in the mesh are not close, so that the total number of
such potentials grows as $O(M^2)$, where $M$ is the number of points in the data mesh. However, rather than introducing all these potentials into the Markov net from the start, we introduce them as needed. First, we run LBP without any farness preservation potentials. If the solution violates a set of farness preservation constraints, we add them and rerun BP. In
practice, this approach adds a very small number of such constraints.
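Schematically, the lazy constraint-generation loop looks as follows; every function name here is hypothetical, since the paper does not spell out an interface:

```python
def register(model_mesh, data_mesh):
    """Lazy farness-constraint scheme: run loopy BP, add only the violated
    farness potentials, and rerun until no violations remain. (Pseudocode
    sketch: build_mrf, loopy_bp, find_farness_violations and
    add_farness_potential are hypothetical helpers, not a real API.)"""
    mrf = build_mrf(model_mesh, data_mesh)  # deformation, nearness, surface terms
    while True:
        assignment = loopy_bp(mrf)          # approximate MAP over c_1..c_K
        violated = find_farness_violations(assignment, model_mesh, data_mesh)
        if not violated:
            return assignment               # consistent embedding found
        for k, l in violated:               # add psi_f only where needed
            mrf.add_farness_potential(k, l)
```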
4 Experimental Results
Basic Registration. We applied our registration algorithm to three different datasets,
containing meshes of a human arm, wooden puppet and the CAESAR dataset of whole
human bodies [1], all acquired by a 3D range scanner. The meshes were not complete
surfaces, but several techniques exist for filling the holes (e.g., [10]).
We ran the Correlated Correspondence algorithm using the same probabilistic model and
the same parameters on all data sets. We use a coarse-to-fine strategy, using the result of a
coarse sub-sampling of the mesh surface to constrain the correspondences at a finer-grained
level. The resulting set of correspondences were used as markers to initialize the non-rigid
ICP algorithm of H?ahnel et al. [12].
The Correlated Correspondence algorithm successfully aligned all mesh pairs in our human arm data set containing 7 arms. In the puppet data set we registered one of the meshes
to the remaining 6 puppets. The algorithm correctly registered 4 out of 6 data meshes to the
model mesh. In the two remaining cases, the algorithm produced a registration where the
torso was flipped, so that the front was mapped to the back. This problem arises from ambiguities induced by the puppet symmetry, whose front and back are almost identical. Importantly, our probabilistic model assigns a higher likelihood score to the correct solution,
so that the incorrect registration is a consequence of local maxima in the LBP algorithm.
This fact allows us to address the issue in an unsupervised way simply by running loopy
BP several times, with different initialization. For details on the unsupervised initialization
scheme we used, please refer to our technical report [2]. We ran the modified algorithm
to register one puppet mesh to the remaining 6 meshes in the dataset, obtaining the correct
registration in all cases. In particular, as shown in Fig. 1A, we successfully deal with the
case on which the straightforward nonrigid ICP algorithm failed. The modified algorithm
was applied to the CAESAR dataset and produced very good registration for challenging
cases exhibiting both articulated motion and deformation (Fig. 2B), or exhibiting deformation and a (small) change in object scale (Fig. 2C).
Overall, the algorithm performed robustly, producing a close-to-optimal registrations
even for pairs of meshes that involve large deformations, articulated motion or both. The
registration is accomplished in an unsupervised way, without any prior knowledge about
object shape, dynamics, or alignment.
Partial view completion. The Correlated Correspondence algorithm allows us to register
a data mesh containing only a partial scan of an object to a known complete surface model
of the object, which serves as a template. We can then transform the template mesh to the
partial scan, a process which leaves undisturbed the links that are not involved in the partial
mesh. The result is a mesh that matches the data on the observed points, while completing
the unknown portion of the surface using the template.
We take a partial mesh, which is missing the entire back part of the puppet in a particular
pose. The resulting partial model is displayed in Fig. 3B-1; for comparison, the correct
complete model in this configuration (which was not available to the algorithm), is shown in
Fig. 3B-2. We register the partial mesh to models of the object in a different pose (Fig. 3B-3), and compare the completions we obtain (Fig. 3B-4) to the ground truth represented in
Fig. 3B-2. The result demonstrates a largely correct reconstruction of the complete surface
geometry from the partial scan and the deformed template. We report additional shape
completion results in [2].
Figure 3: A) The results produced by the CC algorithm were used for unsupervised recovery of articulated models. 15 puppet parts and 4 arm parts, as well as the articulated object skeletons, were recovered. B) Partial view completion results. The missing parts of the surface were estimated by registering the partial view to a complete model of the object in a different configuration.

Interpolation. Current research [20] shows that if a nonrigid transformation $\tau$ between the poses is available, believable animation can be produced by linear interpolation between the model mesh and the transformed model mesh. The interpolation is performed in the space of local link parameters $(l_{i,j}, d_{i \to j}, d_{j \to i})$. We demonstrate that transformation estimates produced by our algorithm can be used to automatically generate believable animation sequences between fairly different poses, as shown in Fig. 2A.
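A sketch of the interpolation step; renormalizing the blended direction vectors is our own assumption about how the unit vectors are handled:

```python
import numpy as np

def interpolate_links(links_a, links_b, t):
    """Blend two poses in link-parameter space: each link is a triple
    (length, d_ij, d_ji). Lengths are blended linearly; the blended
    direction vectors are renormalized back to unit length."""
    blended = []
    for (l0, p0, q0), (l1, p1, q1) in zip(links_a, links_b):
        l = (1.0 - t) * l0 + t * l1
        p = (1.0 - t) * np.asarray(p0) + t * np.asarray(p1)
        q = (1.0 - t) * np.asarray(q0) + t * np.asarray(q1)
        blended.append((l, p / np.linalg.norm(p), q / np.linalg.norm(q)))
    return blended
```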
Recovering Articulated Models. Articulated object models have a number of applications in animation and motion capture, and there has been work on recovering them
automatically from 3D data [7, 3]. We show that our unsupervised registration capability
can greatly assist articulated model recovery. In particular, the algorithm in [3] requires
an estimate of the correspondences between a template mesh and the remaining meshes in
the dataset. We supplied it with registration computed with the Correlated Correspondence
algorithm. As a result we managed to recover in a completely unsupervised way all 15
rigid parts of the puppet, as well as the joints between them (Fig. 3A). We demonstrate
successful articulation recovery even for objects which are not purely rigid, as is the case
with the human arm (see Fig. 3A).
5 Conclusion
The contribution of this paper is an algorithm for unsupervised registration of non-rigid 3D
surfaces in significantly different configurations. Our results show that the algorithm can
deal with articulated objects subject to large joint movements, as well as with non-rigid surface deformations. The algorithm was not provided with markers or other cues regarding
correspondence, and makes no assumptions about object shape, dynamics, or alignment.
We show the quality and the utility of the registration results we obtain by using them as a
starting point for compelling computer graphics applications: partial view completion, interpolation between scans, and recovery of articulated object models. Importantly, all these
results were generated in a completely unsupervised manner from a set of input meshes.
The main limitation of our approach is the fact that it makes the assumption of (approximate) preservation of geodesic distance. Although this assumption is desirable in many
cases, it is not always warranted. In some cases, the mesh topology may change drastically,
for example, when an arm touches the body. We can try to extend our approach to handle
these cases by trying to detect when they arise, and eliminating the associated constraints.
However, even this solution is likely to fail on some cases. A second limitation of our approach is that it assumes that the data mesh is a subset of the model mesh. If the data mesh
contains clutter, our algorithm will attempt to embed the clutter into the model. We feel that
the general nonrigid registration problem becomes underspecified when significant clutter
and occlusion are present simultaneously. In this case, additional assumptions about the
surfaces will be needed.
Despite the fact that our algorithm performs quite well, there are limitations to what
can be accurately inferred about the object from just two scans. Given more scans of the
same object, we can try to learn the deformation penalty associated with different links,
and bootstrap the algorithm. Such an extension would be a step toward the goal of learning
models of object shape and dynamics from raw data.
Acknowledgments. This work has been supported by the ONR Young Investigator (PECASE) grant
N00014-99-1-0464, and ONR Grant N00014-00-1-0637 under the DoD MURI program.
References
[1] B. Allen, B. Curless, and Z. Popovic. The space of human body shapes: reconstruction and parameterization from range scans. In Proc. SIGGRAPH, 2003.
[2] D. Anguelov, D. Koller, P. Srinivasan, S. Thrun, H. Pang, and J. Davis. The correlated correspondence algorithm for unsupervised registration of nonrigid surfaces. TR-SAIL-2004-100, at http://robotics.stanford.edu/~drago/cc/tr100.pdf, 2004.
[3] D. Anguelov, D. Koller, H. Pang, P. Srinivasan, and S. Thrun. Recovering articulated object models from 3D range data. In Proc. UAI, 2004.
[4] P. Besl and N. McKay. A method for registration of 3D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):239-256, 1992.
[5] V. Blanz and T. Vetter. A morphable model for the synthesis of 3D faces. In Proc. SIGGRAPH, 1999.
[6] Y. Chen and G. Medioni. Object modeling by registration of multiple range images. In Proc. IEEE Conf. on Robotics and Automation, 1991.
[7] K. Cheung, S. Baker, and T. Kanade. Shape-from-silhouette of articulated objects and its use for human body kinematics estimation and motion capture. In Proc. IEEE CVPR, 2003.
[8] H. Chui and A. Rangarajan. A new point matching algorithm for non-rigid registration. In Proc. CVPR, 2000.
[9] J. Coughlan and S. Ferreira. Finding deformable shapes using loopy belief propagation. In Proc. ECCV, volume 3, pages 453-468, 2002.
[10] J. Davis, S. Marschner, M. Garr, and M. Levoy. Filling holes in complex surfaces using volumetric diffusion. In Symposium on 3D Data Processing, Visualization, and Transmission, 2002.
[11] P. Felzenszwalb. Representation and detection of shapes in images. PhD thesis, Massachusetts Institute of Technology, 2003.
[12] D. Haehnel, S. Thrun, and W. Burgard. An extension of the ICP algorithm for modeling nonrigid objects with mobile robots. In Proc. IJCAI, Acapulco, Mexico, 2003.
[13] D. Huttenlocher and P. Felzenszwalb. Efficient matching of pictorial structures. In Proc. CVPR, 2003.
[14] A. Johnson. Spin-Images: A Representation for 3-D Surface Matching. PhD thesis, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, August 1997.
[15] M. Leventon. Statistical models in medical image analysis. PhD thesis, Massachusetts Institute of Technology, 2000.
[16] M. Lin. Tracking articulated objects in real-time range image sequences. In Proc. ICCV (1), pages 648-653, 1999.
[17] S. Rusinkiewicz and M. Levoy. Efficient variants of the ICP algorithm. In Proc. 3DIM, Quebec City, Canada, 2001. IEEE Computer Society.
[18] C. Shelton. Morphable surface models. International Journal of Computer Vision, 2000.
[19] L. Sigal, M. Isard, B. Sigelman, and M. Black. Attractive people: Assembling loose-limbed models using non-parametric belief propagation. In Proc. NIPS, 2003.
[20] R. Sumner and J. Popovic. Deformation transfer for triangle meshes. In Proc. SIGGRAPH, 2004.
[21] J. Yedidia, W. Freeman, and Y. Weiss. Understanding belief propagation and its generalizations. In Exploring Artificial Intelligence in the New Millennium. Science & Technology Books, 2003.
[22] S. Yu, R. Gross, and J. Shi. Concurrent object recognition and segmentation with graph partitioning. In Proc. NIPS, 2002.
Maximum Margin Clustering
Linli Xu*†  James Neufeld†  Bryce Larson†  Dale Schuurmans†
*University of Waterloo  †University of Alberta
Abstract
We propose a new method for clustering based on finding maximum margin hyperplanes through data. By reformulating the problem in terms
of the implied equivalence relation matrix, we can pose the problem as
a convex integer program. Although this still yields a difficult computational problem, the hard-clustering constraints can be relaxed to a
soft-clustering formulation which can be feasibly solved with a semidefinite program. Since our clustering technique only depends on the data
through the kernel matrix, we can easily achieve nonlinear clusterings in
the same manner as spectral clustering. Experimental results show that
our maximum margin clustering technique often obtains more accurate
results than conventional clustering methods. The real benefit of our approach, however, is that it leads naturally to a semi-supervised training
method for support vector machines. By maximizing the margin simultaneously on labeled and unlabeled training data, we achieve state of the
art performance by using a single, integrated learning principle.
1 Introduction
Clustering is one of the oldest forms of machine learning. Nevertheless, it has received a
significant amount of renewed attention with the advent of nonlinear clustering methods
based on kernels. Kernel based clustering methods continue to have a significant impact on
recent work in machine learning [14, 13], computer vision [16], and bioinformatics [9].
Although many variations of kernel-based clustering have been proposed in the literature,
most of these techniques share a common "spectral clustering" framework that follows a
generic recipe: one first builds the kernel ("affinity") matrix, normalizes the kernel, performs dimensionality reduction, and finally clusters (partitions) the data based on the resulting representation [17].
In this paper, our primary focus will be on the final partitioning step where the actual
clustering occurs. Once the data has been preprocessed and a kernel matrix has been constructed (and its rank possibly reduced), many variants have been suggested in the literature
for determining the final partitioning of the data. The predominant strategies include using
k-means clustering [14], minimizing various forms of graph cut cost [13] (relaxations of
which amount to clustering based on eigenvectors [17]), and finding strongly connected
components in a Markov chain defined by the normalized kernel [4]. Some other recent
alternatives are correlation clustering [12] and support vector clustering [1].
What we believe is missing from this previous work however, is a simple connection to
other types of machine learning, such as semisupervised and supervised learning. In fact,
one of our motivations is to seek unifying machine learning principles that can be used
to combine different types of learning problems in a common framework. For example, a
useful goal for any clustering technique would be to find a way to integrate it seamlessly
with a supervised learning technique, to obtain a principled form of semisupervised learning. A good example of this is [18], which proposes a general random field model based
on a given kernel matrix. They then find a soft cluster assignment on unlabeled data that
minimizes a joint loss with observed labels on supervised training data. Unfortunately, this
technique actually requires labeled data to cluster the unlabeled data. Nevertheless, it is a
useful approach.
Our goal in this paper is to investigate another standard machine learning principle,
maximum margin classification, and modify it for clustering, with the goal of achieving a
simple, unified way of solving a variety of problems, including clustering and semisupervised learning.
Although one might be skeptical that clustering based on large margin discriminants can
perform well, we will see below that, combined with kernels, this strategy can often be
more effective than conventional spectral clustering. Perhaps more significantly, it also immediately suggests a simple semisupervised training technique for support vector machines
(SVMs) that appears to improve the state of the art.
The remainder of this paper is organized as follows. After establishing the preliminary
ideas and notation in Section 2, we tackle the problem of computing a maximum margin
clustering for a given kernel matrix in Section 3. Although it is not obvious that this problem can be solved efficiently, we show that the optimal clustering problem can in fact be
formulated as a convex integer program. We then propose a relaxation of this problem
which yields a semidefinite program that can be used to efficiently compute a soft clustering. Section 4 gives our experimental results for clustering. Then, in Section 5 we extend
our approach to semisupervised learning by incorporating additional labeled training data
in a seamless way. We then present experimental results for semisupervised learning in
Section 6 and conclude.
2 Preliminaries
Since our main clustering idea is based on finding maximum margin separating hyperplanes, we first need to establish the background ideas from SVMs as well as the
notation we will use.
For SVM training, we assume we are given labeled training examples
$(x^1, y^1), \ldots, (x^N, y^N)$, where each example is assigned to one of two classes
$y^i \in \{-1, +1\}$. The goal of an SVM of course is to find the linear discriminant
$f_{w,b}(x) = w^\top \phi(x) + b$ that maximizes the minimum misclassification margin

$$\gamma^* \;=\; \max_{w,b,\gamma} \gamma \quad \text{subject to} \quad y^i(w^\top \phi(x^i) + b) \ge \gamma, \ \forall_{i=1}^{N}, \quad \|w\|_2 = 1 \qquad (1)$$

Here the Euclidean normalization constraint on $w$ ensures that the Euclidean distance between the data and the separating hyperplane (in $\phi(x)$ space) determined by $w^*, b^*$ is
maximized. It is easy to show that this same $w^*, b^*$ is a solution to the quadratic program

$$\gamma^{*-2} \;=\; \min_{w,b} \|w\|^2 \quad \text{subject to} \quad y^i(w^\top \phi(x^i) + b) \ge 1, \ \forall_{i=1}^{N} \qquad (2)$$

Importantly, the minimum value of this quadratic program, $\gamma^{*-2}$, is just the inverse square
of the optimal solution value $\gamma^*$ to (1) [10].
To cope with potentially inseparable data, one normally introduces slack variables to reduce
the dependence on noisy examples. This leads to the so-called soft margin SVM (and its
dual), which is controlled by a tradeoff parameter $C$

$$\gamma^{*-2} \;=\; \min_{w,b,\xi} \|w\|^2 + C\,\xi^\top e \quad \text{subject to} \quad y^i(w^\top \phi(x^i) + b) \ge 1 - \xi_i, \ \forall_{i=1}^{N}, \ \xi \ge 0$$
$$\;=\; \max_{\alpha} 2\alpha^\top e - \langle K \circ \alpha\alpha^\top,\, yy^\top \rangle \quad \text{subject to} \quad 0 \le \alpha \le C, \ \alpha^\top y = 0 \qquad (3)$$

The notation we use in this dual formulation requires some explanation, since we will use it
below: Here $K$ denotes the $N \times N$ kernel matrix formed from the inner products of feature
vectors $\Phi = [\phi(x^1), \ldots, \phi(x^N)]$ such that $K = \Phi^\top \Phi$. Thus $k_{ij} = \phi(x^i)^\top \phi(x^j)$. The
vector $e$ denotes the vector of all 1 entries. We let $A \circ B$ denote componentwise matrix
multiplication, and let $\langle A, B \rangle = \sum_{ij} a_{ij} b_{ij}$. Note that (3) is derived from the standard dual
SVM by using the fact that $\alpha^\top (K \circ yy^\top)\alpha = \langle K \circ yy^\top, \alpha\alpha^\top \rangle = \langle K \circ \alpha\alpha^\top, yy^\top \rangle$.

To summarize: for supervised maximum margin training, one takes a given set of labeled
training data $(x^1, y^1), \ldots, (x^N, y^N)$, forms the kernel matrix $K$ on data inputs, forms the
kernel matrix $yy^\top$ on target outputs, sets the slack parameter $C$, and solves the quadratic
program (3) to obtain the dual solution $\alpha^*$ and the inverse square maximum margin value
$\gamma^{*-2}$. Once these are obtained, one can then recover a classifier directly from $\alpha^*$ [15].
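As a concrete check of this notation (a toy sketch with random data in place of a real kernel; all names here are ours), the following verifies numerically that $\alpha^\top (K \circ yy^\top)\alpha = \langle K \circ \alpha\alpha^\top, yy^\top \rangle$ and evaluates the dual objective of (3) at an arbitrary $\alpha$:

import numpy as np

rng = np.random.default_rng(0)
N = 8
X = rng.normal(size=(N, 3))
K = X @ X.T                              # toy linear kernel, K = Phi^T Phi
y = rng.choice([-1.0, 1.0], size=N)      # a candidate labeling
alpha = rng.uniform(0.0, 1.0, size=N)    # an arbitrary dual point
e = np.ones(N)

# alpha^T (K o yy^T) alpha  ==  <K o alpha alpha^T, yy^T>
lhs = alpha @ ((K * np.outer(y, y)) @ alpha)
rhs = np.sum(K * np.outer(alpha, alpha) * np.outer(y, y))
assert np.isclose(lhs, rhs)

# Dual objective of (3) at this alpha (constraints ignored here).
print(2.0 * alpha @ e - rhs)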
Of course, our main interest initially is not to find a large margin classifier given labels on
the data, but instead to find a labeling that results in a large margin classifier.
3 Maximum margin clustering
The clustering principle we investigate is to find a labeling so that if one were to subsequently run an SVM, the margin obtained would be maximal over all possible labellings. That is, given data $x^1, \ldots, x^N$, we wish to assign the data points to two classes
$y^i \in \{-1, +1\}$ so that the separation between the two classes is as wide as possible.
Unsurprisingly, this is a hard computational problem. However, with some reformulation
we can express it as a convex integer program, which suggests that there might be some
hope of obtaining practical solutions. More usefully, we can relax the integer
constraint to obtain a semidefinite program that yields soft cluster assignments which approximately maximize the margin. Therefore, one can obtain soft clusterings efficiently
using widely available software. However, before proceeding with the main development,
there are some preliminary issues we need to address.
First, we clearly need to impose some sort of constraint on the class balance, since otherwise one could simply assign all the data points to the same class and obtain an unbounded
margin. A related issue is that we would also like to avoid the problem of separating a single outlier (or very small group of outliers) from the rest of the data. Thus, to mitigate these
effects we will impose a constraint that the difference in class sizes be bounded. This will
turn out to be a natural constraint for semisupervised learning and is very easy to enforce.
Second, we would like the clustering to behave gracefully on noisy data where the classes
may in fact overlap, so we adopt the soft margin formulation of the maximum margin criterion. Third, although it is indeed possible to extend our approach to the multiclass case
[5], the extension is not simple and for ease of presentation we focus on simple two class
clustering in this paper. Finally, there is a small technical complication that arises with one
of the SVM parameters: It turns out that an unfortunate nonconvexity problem arises when
we include the use of the offset b in the underlying large margin classifier. We currently
do not have a way to avoid this nonconvexity, and therefore we currently set b = 0 and
therefore only consider homogeneous linear classifiers. The consequence of this restriction
is that the constraint ?> y = 0 is removed from the dual SVM quadratic program (3). Although it would seem like this is a harsh restriction, the negative effects are mitigated by
centering the data at the origin, which can always be imposed. Nevertheless, dropping this
restriction remains an important question for future research. With these caveats in mind,
we proceed to the main development.
We wish to solve for a labeling $y \in \{-1, +1\}^N$ that leads to a maximum (soft) margin. Straightforwardly, one could attempt to tackle this optimization problem by directly
formulating

$$\min_{y \in \{-1,+1\}^N} \gamma^{*-2}(y) \quad \text{subject to} \quad -\ell \le e^\top y \le \ell$$

where

$$\gamma^{*-2}(y) \;=\; \max_{\alpha} 2\alpha^\top e - \langle K \circ \alpha\alpha^\top,\, yy^\top \rangle \quad \text{subject to} \quad 0 \le \alpha \le C$$

Unfortunately, $\gamma^{*-2}(y)$ is not a convex function of $y$, and this formulation does not lead to
an effective algorithmic approach. In fact, to obtain an efficient technique for solving this
problem we need two key insights.
The first key step is to re-express this optimization, not directly in terms of the cluster labels
$y$, but instead in terms of the label kernel matrix $M = yy^\top$. The main advantage of doing
so is that the inverse soft margin $\gamma^{*-2}$ is in fact a convex function of $M$

$$\gamma^{*-2}(M) \;=\; \max_{\alpha} 2\alpha^\top e - \langle K \circ \alpha\alpha^\top,\, M \rangle \quad \text{subject to} \quad 0 \le \alpha \le C \qquad (4)$$

The convexity of $\gamma^{*-2}$ with respect to $M$ is easy to establish since this quantity is just a
maximum over linear functions of $M$ [3]. This observation parallels one of the key insights
of [10], here applied to $M$ instead of $K$.
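Because the inner problem in (4) is a box-constrained concave quadratic program whenever $K \circ M$ is positive semidefinite, $\gamma^{*-2}(M)$ can be evaluated for a fixed $M$ by, for example, projected gradient ascent. The sketch below is our own illustration of such an evaluation, not the solver used in this paper; the step size and iteration count are arbitrary choices.

import numpy as np

def inverse_soft_margin(K, M, C, steps=2000, lr=1e-3):
    """Approximate gamma^{*-2}(M) = max_{0 <= alpha <= C}
    2 alpha^T e - alpha^T (K o M) alpha by projected gradient ascent.
    The objective is concave whenever K o M is positive semidefinite."""
    Q = K * M                                       # componentwise product
    alpha = np.zeros(len(K))
    for _ in range(steps):
        grad = 2.0 - 2.0 * Q @ alpha                # gradient of the objective
        alpha = np.clip(alpha + lr * grad, 0.0, C)  # project back onto the box
    return 2.0 * alpha.sum() - alpha @ Q @ alpha

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 2))
K = X @ X.T
y = rng.choice([-1.0, 1.0], size=10)
print(inverse_soft_margin(K, np.outer(y, y), C=10.0))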
Unfortunately, even though we can pose a convex objective, it does not allow us to immediately solve our problem because we still have to relate $M$ to $y$, and $M = yy^\top$ is not
a convex constraint. Thus, the main challenge is to find a way to constrain $M$ to ensure
$M = yy^\top$ while respecting the class balance constraints $-\ell \le e^\top y \le \ell$. One obvious
way to enforce $M = yy^\top$ would be to impose the constraint that $\mathrm{rank}(M) = 1$, since
combined with $M \in \{-1, +1\}^{N \times N}$ this forces $M$ to have a decomposition $yy^\top$ for some
$y \in \{-1, +1\}^N$. Unfortunately, $\mathrm{rank}(M) = 1$ is not a convex constraint on $M$ [7].
Our second key idea is to realize that one can indirectly enforce the desired relationship
$M = yy^\top$ by imposing a different set of linear constraints on $M$. To do so, notice that any
such $M$ must encode an equivalence relation over the training points. That is, if $M = yy^\top$
for some $y \in \{-1, +1\}^N$ then we must have

$$m_{ij} = \begin{cases} 1 & \text{if } y_i = y_j \\ -1 & \text{if } y_i \ne y_j \end{cases}$$

Therefore to enforce the constraint $M = yy^\top$ for $y \in \{-1, +1\}^N$ it suffices to impose
the set of constraints: (1) $M$ encodes an equivalence relation, namely that it is transitive,
reflexive and symmetric; (2) $M$ has at most two equivalence classes; and (3) $M$ has at
least two equivalence classes. Fortunately we can enforce each of these requirements by
imposing a set of linear constraints on $M \in \{-1, +1\}^{N \times N}$ respectively:

$L_1$: $m_{ii} = 1$; $m_{ij} = m_{ji}$; $m_{ik} \ge m_{ij} + m_{jk} - 1$; $\forall ijk$
$L_2$: $m_{jk} \ge -m_{ij} - m_{ik} - 1$; $\forall ijk$
$L_3$: $\sum_i m_{ij} \le N - 2$; $\forall j$
The result is that with only linear constraints on $M$ we can enforce the condition $M = yy^\top$.¹ Finally, we can enforce the class balance constraint $-\ell \le e^\top y \le \ell$ by imposing the
additional set of linear constraints:

$L_4$: $-\ell \le \sum_i m_{ij} \le \ell$; $\forall j$

which obviates $L_3$.

¹Interestingly, for $M \in \{-1, +1\}^{N \times N}$ the first two sets of linear constraints can be replaced by
the compact set of convex constraints $\mathrm{diag}(M) = e$, $M \succeq 0$ [7, 11]. However, when we relax the
integer constraint below, this equivalence is no longer true and we realize some benefit in keeping the
linear equivalence relation constraints.
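As a brute-force sanity check (our own illustration), one can verify on a toy labeling that $M = yy^\top$ satisfies $L_1$, $L_2$, and $L_4$; the $O(N^3)$ loops are written for clarity, not speed.

import numpy as np
from itertools import product

y = np.array([1, 1, -1, -1, 1])
N = len(y)
M = np.outer(y, y)
ell = 3  # class-balance bound: |e^T y| <= ell

assert np.all(np.diag(M) == 1) and np.all(M == M.T)  # L1: reflexive, symmetric
for i, j, k in product(range(N), repeat=3):
    assert M[i, k] >= M[i, j] + M[j, k] - 1          # L1: transitivity
    assert M[j, k] >= -M[i, j] - M[i, k] - 1         # L2: at most two classes
assert np.all(np.abs(M.sum(axis=0)) <= ell)          # L4: class balance
print("M = yy^T satisfies L1, L2, L4")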
The combination of these two steps leads to our first main result: One can solve for a
hard clustering $y$ that maximizes the soft margin by solving a convex integer program. To
accomplish this, one first solves for the equivalence relation matrix $M$ in

$$\min_{M \in \{-1,+1\}^{N \times N}} \max_{\alpha} 2\alpha^\top e - \langle K \circ \alpha\alpha^\top,\, M \rangle \quad \text{subject to} \quad 0 \le \alpha \le C, \ L_1, L_2, L_4 \qquad (5)$$

Then, from the solution $M^*$ recover the optimal cluster assignment $y^*$ simply by setting
$y^*$ to any column vector in $M^*$.
Unfortunately, the formulation (5) is still not practical because convex integer programming
is still a hard computational problem. Therefore, we are compelled to take one further step
and relax the integer constraint on $M$ to obtain a convex optimization problem over a
continuous parameter space

$$\min_{M \in [-1,+1]^{N \times N}} \max_{\alpha} 2\alpha^\top e - \langle K \circ \alpha\alpha^\top,\, M \rangle \quad \text{subject to} \quad 0 \le \alpha \le C, \ L_1, L_2, L_4, \ M \succeq 0 \qquad (6)$$

This can be turned into an equivalent semidefinite program using essentially the same
derivation as in [10], yielding

$$\min_{M,\delta,\mu,\nu} \delta \quad \text{subject to} \quad L_1, L_2, L_4, \ \mu \ge 0, \ \nu \ge 0, \ M \succeq 0, \ \begin{bmatrix} M \circ K & e + \mu - \nu \\ (e + \mu - \nu)^\top & \delta - 2C\nu^\top e \end{bmatrix} \succeq 0 \qquad (7)$$

This gives us our second main result: To solve for a soft clustering $y$ that approximately
maximizes the soft margin, first solve the semidefinite program (7), and then from the
solution matrix $M^*$ recover the soft cluster assignment $y$ by setting $y = \sqrt{\lambda_1}\, v_1$, where
$\lambda_1, v_1$ are the maximum eigenvalue and corresponding eigenvector of $M^*$.²
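The recovery step itself reduces to one eigendecomposition once an SDP solver has produced $M^*$. The sketch below (our own; it fakes $M^*$ as a noisy rank-one matrix rather than actually solving (7)) recovers the soft assignment and takes hard labels by its sign; note the usual global sign ambiguity of eigenvectors.

import numpy as np

def soft_labels(M_star):
    """Recover the soft assignment y = sqrt(lambda_1) * v_1 from the
    largest eigenpair of the (symmetric) relaxed solution M*."""
    vals, vecs = np.linalg.eigh(M_star)   # eigenvalues in ascending order
    lam, v = vals[-1], vecs[:, -1]
    return np.sqrt(max(lam, 0.0)) * v

# Toy M*: a symmetrized, noisy version of yy^T for y = (1, 1, 1, -1, -1).
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0])
rng = np.random.default_rng(2)
noise = 0.1 * rng.normal(size=(5, 5))
M_star = np.outer(y, y) + (noise + noise.T) / 2.0
print(np.sign(soft_labels(M_star)))       # hard labels, up to a sign flip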
4 Experimental results
We implemented the maximum margin clustering algorithm based on the semidefinite programming formulation (7), using the SeDuMi library, and tested it on various data sets.
In these experiments we compared the performance of our maximum margin clustering
technique to the spectral clustering method of [14] as well as straightforward k-means
clustering. Both maximum margin clustering and spectral clustering were run with the
same radial basis function kernel and matching width parameters. In fact, in each case, we
chose the best width parameter for spectral clustering by searching over a small set of five
widths related to the scale of the problem. In addition, the slack parameter for maximum
margin clustering was simply set to an arbitrary value.3
To assess clustering performance we first took a set of labeled data, removed the labels,
ran the clustering algorithms, labeled each of the resulting clusters with the majority class
according to the original training labels, and finally measured the number of misclassifications made by each clustering.
Our first experiments were conducted on the synthetic data sets depicted in Figure 1. Table 1 shows that for the first three sets of data (Gaussians, Circles, AI) maximum margin
and spectral clustering obtained identical small error rates, which were in turn significantly
smaller than those obtained by k-means. However, maximum margin clustering demonstrates a substantial advantage on the fourth data set (Joined Circles) over both spectral and
k-means clustering.

²One could also employ randomized rounding to choose a hard class assignment $y$.
³It turns out that the slack parameter $C$ did not have a significant effect on any of our preliminary
investigations, so we just set it to $C = 100$ for all of the experiments reported here.
We also conducted clustering experiments on the real data sets, two of which are depicted
in Figures 2 and 3: a database of images of handwritten digits of twos and threes (Figure 2),
and a database of face images of two people (Figure 3). The last two columns of Table 1
show that maximum margin clustering obtains a slight advantage on the handwritten digits
data, and a significant advantage on the faces data.
5
Semi-supervised learning
Although the clustering results are reasonable, we have an additional goal of adapting
the maximum margin approach to semisupervised learning. In this case, we assume we
are given a labeled training set $(x^1, y^1), \ldots, (x^n, y^n)$ as well as an unlabeled training set
$x^{n+1}, \ldots, x^N$, and the goal is to combine the information in these two data sets to produce
a more accurate classifier.
In the context of large margin classifiers, many techniques have been proposed for incorporating unlabeled data in an SVM, most of which are intuitively based on ensuring that large
margins are also preserved on the unlabeled training data [8, 2], just as in our case. However, none of these previous proposals have formulated a convex optimization procedure
that was guaranteed to directly maximize the margin, as we propose in Section 3.
For our procedure, extending the maximum margin clustering approach of Section 3 to
semisupervised training is easy: We simply add constraints on the matrix M to force it
to respect the observed equivalence relations among the labeled training data. In addition,
we impose the constraint that each unlabeled example belongs to the same class as at least
one labeled training example. These conditions can be enforced with the simple set of
additional linear constraints
$S_1$: $m_{ij} = y^i y^j$ for labeled examples $i, j \in \{1, \ldots, n\}$
$S_2$: $\sum_{i=1}^{n} m_{ij} \ge 2 - n$ for unlabeled examples $j \in \{n+1, \ldots, N\}$

Note that the observed training labels $y^i$ for $i \in \{1, \ldots, n\}$ are constants, and therefore the
new constraints are still linear in the parameters of $M$ that are being optimized.
The resulting training procedure is similar to that of [6], with the addition of the constraints
$L_1$-$L_4$, $S_2$, which enforce two classes and facilitate the ability to perform clustering on the
unlabeled examples.
6 Experimental results
We tested our approach to semisupervised learning on various two class data sets from
the UCI repository. We compared the performance of our technique to the semisupervised
SVM technique of [8]. In each case, we evaluated the techniques transductively. That is,
we split the data into a labeled and unlabeled part, held out the labels of the unlabeled
portion, trained the semisupervised techniques, reclassified the unlabeled examples using
the learned results, and measured the misclassification error on the held out labels.
Here we see that the maximum margin approach based on semidefinite programming can
often outperform the approach of [8]. Table 2 shows that our maximum margin method
is effective at exploiting unlabeled data to improve the prediction of held out labels. In
every case, it significantly reduces the error of plain SVM, and obtains the best overall
performance of the semisupervised learning techniques we have investigated.
Figure 1: Four artificial data sets used in the clustering experiments. Each data set consists
of eighty two-dimensional points. The points and stars show the two classes discovered by
maximum margin clustering.
Figure 2: A sampling of the handwritten digits (twos and threes). Each row shows a random
sampling of images from a cluster discovered by maximum margin clustering. Maximum
margin made very few misclassifications on this data set, as shown in Table 1.
Figure 3: A sampling of the face data (two people). Each row shows a random sampling of
images from a cluster discovered by maximum margin clustering. Maximum margin made
no misclassifications on this data set, as shown in Table 1.
                      Gaussians   Circles   AI     Joined Circles   Digits   Faces
Maximum Margin        1.25        0         0      1                3        0
Spectral Clustering   1.25        0         0      24               6        16.7
K-means               5           50        38.5   50               7        24.4

Table 1: Percentage misclassification errors of the various clustering algorithms on the
various data sets.
             HWD 1-7   HWD 2-3   UCI Austra.   UCI Flare   UCI Vote   UCI Diabet.
Max Marg     3.3       4.7       32            34          14         35.55
Spec Clust   4.2       6.4       48.7          40.7        13.8       44.67
TSVM         4.6       5.4       38.7          33.3        17.5       35.89
SVM          4.5       10.9      37.5          37          20.4       39.44

Table 2: Percentage misclassification errors of the various semisupervised learning algorithms on the various data sets. SVM uses no unlabeled data. TSVM is due to [8].
7 Conclusion
We have proposed a simple, unified principle for clustering and semisupervised learning
based on the maximum margin principle popularized by supervised SVMs. Interestingly,
this criterion can be approximately optimized using an efficient semidefinite programming
formulation. The results on both clustering and semisupervised learning are competitive
with, and sometimes exceed the state of the art. Overall, margin maximization appears to
be an effective way to achieve a unified approach to these different learning problems.
For future work we plan to address the restrictions of the current method, including the
omission of an offset $b$ and the restriction to two-class problems. We note that a multiclass
extension to our approach is possible, but it is complicated by the fact that it cannot be
conveniently based on the standard multiclass SVM formulation of [5].
Acknowledgements
Research supported by the Alberta Ingenuity Centre for Machine Learning, NSERC, MITACS, IRIS and the Canada Research Chairs program.
References
[1] A. Ben-Hur, D. Horn, H. Siegelmann, and V. Vapnik. Support vector clustering. Journal of Machine Learning Research, 2, 2001.
[2] K. Bennett and A. Demiriz. Semi-supervised support vector machines. In Advances in Neural Information Processing Systems 11 (NIPS-98), 1998.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[4] C. Chennubhotla and A. Jepson. EigenCuts: Half-lives of eigenflows for spectral clustering. In Advances in Neural Information Processing Systems, 2002.
[5] K. Crammer and Y. Singer. On the algorithmic interpretation of multiclass kernel-based vector machines. Journal of Machine Learning Research, 2, 2001.
[6] T. De Bie and N. Cristianini. Convex methods for transduction. In Advances in Neural Information Processing Systems 16 (NIPS-03), 2003.
[7] C. Helmberg. Semidefinite programming for combinatorial optimization. Technical Report ZIB-Report ZR-00-34, Konrad-Zuse-Zentrum Berlin, 2000.
[8] T. Joachims. Transductive inference for text classification using support vector machines. In International Conference on Machine Learning (ICML-99), 1999.
[9] Y. Kluger, R. Basri, J. Chang, and M. Gerstein. Spectral biclustering of microarray cancer data: co-clustering genes and conditions. Genome Research, 13, 2003.
[10] G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5, 2004.
[11] M. Laurent and S. Poljak. On a positive semidefinite relaxation of the cut polytope. Linear Algebra and its Applications, 223/224, 1995.
[12] N. Bansal, A. Blum, and S. Chawla. Correlation clustering. In Conference on Foundations of Computer Science (FOCS-02), 2002.
[13] N. Cristianini, J. Shawe-Taylor, and J. Kandola. Spectral kernel methods for clustering. In Advances in Neural Information Processing Systems, 2001.
[14] A. Ng, M. Jordan, and Y. Weiss. On spectral clustering: analysis and an algorithm. In Advances in Neural Information Processing Systems 14 (NIPS-01), 2001.
[15] B. Schoelkopf and A. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.
[16] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. PAMI, 22(8), 2000.
[17] Y. Weiss. Segmentation using eigenvectors: a unifying view. In International Conference on Computer Vision (ICCV-99), 1999.
[18] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In International Conference on Machine Learning (ICML-03), 2003.
Co-Validation: Using Model Disagreement on
Unlabeled Data to Validate Classification
Algorithms
Omid Madani, David M. Pennock, Gary W. Flake
Yahoo! Research Labs
3rd floor, Pasadena Ave.
Pasadena, CA 91103
{madani|pennockd|flakeg}@yahoo-inc.com
Abstract
In the context of binary classification, we define disagreement as a measure of how often two independently-trained models differ in their classification of unlabeled data. We explore the use of disagreement for error
estimation and model selection. We call the procedure co-validation,
since the two models effectively (in)validate one another by comparing
results on unlabeled data, which we assume is relatively cheap and plentiful compared to labeled data. We show that per-instance disagreement
is an unbiased estimate of the variance of error for that instance. We also
show that disagreement provides a lower bound on the prediction (generalization) error, and a tight upper bound on the "variance of prediction
error", or the variance of the average error across instances, where variance is measured across training sets. We present experimental results on
several data sets exploring co-validation for error estimation and model
selection. The procedure is especially effective in active learning settings, where training sets are not drawn at random and cross validation
overestimates error.
1 Introduction
Balancing hypothesis-space generality with predictive power is one of the central tasks in
inductive learning. The difficulties that arise in seeking an appropriate tradeoff go by a
variety of names (overfitting, data snooping, memorization, no free lunch, bias-variance
tradeoff, etc.) and lead to a number of known solution techniques or philosophies, including regularization, minimum description length, model complexity penalization (e.g., BIC,
AIC), Ockham's razor, training with noise, ensemble methods (e.g., boosting), structural
risk minimization (e.g., SVMs), cross validation, hold-out validation, etc.
All of these methods in some way attempt to estimate or control the prediction (generalization) error of an induced function on unseen data. In this paper, we explore a method
of error estimation that we call co-validation. The method trains two independent functions that in a sense validate (or invalidate) one another by examining their mutual rate of
disagreement across a set of unlabeled data. In Section 2, we formally define disagreement. The measure simultaneously reflects notions of algorithm stability, model capacity,
and problem complexity. For example, empirically we find that disagreement goes down
when we increase the training set size, reduce the model?s capacity (complexity), or reduce
the inherent difficulty of the learning problem. Intuitively, the higher the disagreement
rate, the higher the average error rate of the learner, where the average is taken over both
test instances and training subsets. Therefore disagreement is a measure of the fitness of
the learner to the learning task. However, as researchers have noted in relation to various
measures of learner stability in general [Kut02], while robust learners (i.e., algorithms
with low prediction error) are stable, a stable learning algorithm does not necessarily have
low prediction error. In the same vein, we show and explain that the disagreement measure provides only lower bounds on error. Still, our empirical results give evidence that
disagreement can be a useful estimate in certain circumstances.
Since we require a source of unlabeled data (preferably a large source, in order to accurately measure disagreement), we assume a semi-supervised setting where unlabeled data
is relatively cheap and plentiful while labeled data is scarce or expensive. This scenario is
often realistic, most notably for text classification. We focus on the binary classification
setting and analyze 0/1 error.
In practice, cross validation, especially leave-one-out cross validation, often provides an
accurate and reliable error estimate. In fact, under the usual assumption that training and
test data both arise from the same distribution, k-fold cross validation provides an unbiased
estimate of prediction error (for functions trained on $m(1 - 1/k)$ many instances, $m$ being
the total number of labeled instances). However, in many situations, training data may
actually arise from a different distribution than test data. One extreme example of this is
active learning, where training samples are explicitly chosen to be maximally informative,
using a process that is neither independent nor reflective of the test distribution. Even
beyond active learning, in practice the process of gathering data and obtaining labels often
may bias the training set, for example because some inputs are cheaper or easier to label,
or are more readily available or obvious to the data collector, etc. In these cases, the error
estimate obtained from cross validation may not yield an accurate measure of the prediction
error of the learned function, and model selection based on cross validation may suffer.
Empirically we find that in active learning settings, disagreement often provides a more
accurate estimate of prediction error and is more useful as a guide for model selection.
Related to the problem of (average) error estimation is the problem of error variance estimation: both variance across test instances and variance across functions (i.e., training
sets). Even if a learning algorithm exhibits relatively low average error, if it exhibits high
variance, the algorithm may be undesirable depending on the end-user?s risk tolerance.
Variance is also useful for algorithm comparison, to determine whether observed error differences are statistically significant. For variance estimation, cross validation is on much
less solid footing: in fact, Bengio and Grandvalet [BG03] recently proved an impossibility
result showing that no method exists for producing an unbiased estimate of the variance
of cross validation error in a pure supervised setting with labeled training data only. In
this work, we show how disagreement relates to certain measures of variance. First, the
disagreement on a particular instance provides an unbiased estimate of the variance of error on that instance. Second, disagreement provides an upper bound on the variance of
prediction error (the type of variance useful for algorithm comparison).
The paper is organized as follows. In § 2 we formally define disagreement and prove how
it lower-bounds prediction error and upper-bounds variance of prediction error. In § 3 we
empirically explore how error estimates and model selection strategies that we devise based
on disagreement compare against cross validation in standard (iid) learning settings and in
active learning settings. In § 4 we discuss related work. We conclude in § 5.
2 Error, Variance, and Disagreement
Denote a set of input instances by $X$. Each instance $x \in X$ is a vector of feature attributes.
Each instance has a unique true classification or label $y_x \in \{0, 1\}$, in general unknown to
the learner. Let $Z' = \{(x, y_x)\}_m$ be a set of $m$ labeled training instances provided to the
learner. The learner is an algorithm $A : Z' \to F$ that inputs labeled instances and outputs
a function $f \in F$, where $F$ is the set of all functions (classifiers) that $A$ may output (the
hypothesis space). Each $f \in F$ is a function that maps instances $x$ to labels $\{0, 1\}$. The
goal of the algorithm is to choose $f \in F$ to minimize 0/1 error (defined below) on future
unlabeled test instances.

We assume the training set size is fixed at some $m > 0$, and we take expectations over
one or both of two distributions: (1) the distribution $\mathcal{X}$ over instances in $X$, and (2) the
distribution $\mathcal{F}$ induced over the functions $F$ when learner $A$ is trained on training sets of
size $m$ obtained by sampling from $\mathcal{X}$.
The 0/1 error $e_{x,f}$ of a given function $f$ on a given instance $x$ equals 1 if and only if
the function incorrectly classifies the instance, and equals 0 otherwise; that is, $e_{x,f} = 1\{f(x) \ne y_x\}$. We define the expected prediction error $e$ of algorithm $A$ as $e = E_{f,x}\, e_{f,x}$,
where the expectation is taken over instances drawn from $\mathcal{X}$ ($x \sim \mathcal{X}$) and functions drawn
from $\mathcal{F}$ ($f \sim \mathcal{F}$). The variance of prediction error $\sigma^2$ is useful for comparing different
learners (e.g., [BG03]). Let $e_f$ denote the 0/1 error of function $f$ (i.e., $e_f = E_x\, e_{x,f}$). Then
$\sigma^2 = E_f((e_f - e)^2) = E_f(e_f^2) - e^2$.

Define the disagreement between two classifiers $f_1$ and $f_2$ on instance $x$ as $1\{f_1(x) \ne f_2(x)\}$. The disagreement rate of learner $A$ is then:

$$d = E_{x, f_1, f_2}\, 1\{f_1(x) \ne f_2(x)\}, \qquad (1)$$
where recall that the expectation is taken over $x \sim \mathcal{X}$, $f_1 \sim \mathcal{F}$, $f_2 \sim \mathcal{F}$ (with respect to
training sets of some fixed size $m$).
Let $d_x$ be the (expected) disagreement at $x$ when we sample functions from $\mathcal{F}$: $d_x = E_{f_1,f_2}\, 1\{f_1(x) \ne f_2(x)\}$. Similarly, let $e_x$ and $\sigma_x^2$ denote respectively the error and variance at $x$: $e_x = P(f(x) \ne y_x) = E_f\, 1\{f(x) \ne y_x\} = E_f\, e_{f,x}$ and $\sigma_x^2 = \mathrm{VAR}(e_{f,x}) = E_f[(1\{f(x) \ne y_x\} - e_x)^2] = e_x(1 - e_x)$. (The last equality follows from the fact that
$e_{f,x}$ is a Bernoulli/binary random variable.) Now, we can establish the connection between
disagreement and the variance of error (of the learner) at instance $x$:

$$d_x = E_{f_1,f_2}\, 1\{(f_1(x) = y_x \text{ and } f_2(x) \ne y_x) \text{ or } (f_1(x) \ne y_x \text{ and } f_2(x) = y_x)\}$$
$$= P((f_1(x) = y_x \text{ and } f_2(x) \ne y_x) \text{ or } (f_1(x) \ne y_x \text{ and } f_2(x) = y_x))$$
$$= 2P(f_1(x) = y_x \text{ and } f_2(x) \ne y_x) = 2e_x(1 - e_x) \;\Longrightarrow\; \sigma_x^2 = d_x/2. \qquad (2)$$

The derivations follow from the fact that the expectation of a Bernoulli random variable is the same as its probability of being 1, the two events above (the event
$(f_1(x) = y_x \text{ and } f_2(x) \ne y_x)$ and the event $(f_1(x) \ne y_x \text{ and } f_2(x) = y_x)$) are mutually
exclusive and have equal probability, and the two events $f_1(x) = y_x$ and $f_2(x) \ne y_x$ are
conditionally independent (note that the two events are conditioned on $x$, and the two functions are picked independently of one another). Furthermore, $d = E_x E_{f_1,f_2}[1\{f_1(x) \ne f_2(x)\}] = E_x\, d_x = 2E_x(\sigma_x^2) = 2E_x[e_x(1 - e_x)] = 2(e - E_x\, e_x^2)$, and therefore:

$$\frac{d}{2} = e - E_x\, e_x^2. \qquad (3)$$
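The identity $d_x = 2e_x(1 - e_x)$ in (2) is easy to confirm by simulation. The toy Monte Carlo sketch below (numbers and names are ours) draws pairs of independently trained "functions" at a single instance, each erring with probability $e_x$; for binary labels the two disagree exactly when one of them errs:

import numpy as np

rng = np.random.default_rng(3)
e_x = 0.3                          # assumed error rate at instance x
trials = 200_000
# Each trial: two independently trained functions evaluated at x, each
# erring independently with probability e_x; they disagree iff exactly
# one of them errs (labels are binary).
err1 = rng.random(trials) < e_x
err2 = rng.random(trials) < e_x
d_x = np.mean(err1 != err2)        # empirical disagreement at x
print(d_x, 2 * e_x * (1 - e_x))    # both approximately 0.42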
2.1 Bounds on Variance via Disagreement
The variance of prediction error $\sigma^2$ can be used to test the significance of the difference
in two learners' error rates. Bengio and Grandvalet [BG03] show that there is no unbiased
estimator of the variance of k-fold cross-validation in the supervised setting. We can see
from Equation 2 that having access to disagreement at a given instance $x$ (labeled or not)
does yield the variance of error at that instance. Thus disagreement obtained via 2-fold
training gives us an unbiased estimator of $\sigma_x^2$, the variance of prediction error at instance
$x$, for functions trained on $m/2$ instances. (Note that for unbiasedness, none of the functions
should have been trained on the given instance.) Of course, to compare different algorithms
on a given instance, one also needs the average error at that instance.

In terms of the overall variance of prediction error $\sigma^2$ (where error is averaged across instances
and variance is taken across functions), there exist scenarios when $\sigma^2$ is 0 but $d$ is not (when
errors of the different functions learned are the same but negatively correlated), and scenarios when $\sigma^2 = d/2 \ne 0$. In fact, disagreement yields an upper bound:

Theorem 1. $d \ge 2\sigma^2$.
Proof (sketch). We show that the result holds for any finite sampling of functions and instances. Consider the binary (0/1) matrix $M$ where the rows correspond to instances and
the columns correspond to functions, and the entries are the binary-valued errors (entry
$M_{i,j} = 1\{f_j(x_i) \ne y_{x_i}\}$). Thus the average error is the fraction of 1 entries when samplings of instances and functions are drawn from $\mathcal{X}$ and $\mathcal{F}$ respectively, and variances and
disagreement can also be readily defined for the matrix. We show the inequality holds
for any such $n \times n$ matrix for any $n$. This establishes the theorem (by using limiting arguments). Treat the 1 entries (matrix cells) as vertices in a graph, where an edge exists
between two 1 entries if they share a column or a row. For a fixed number of 1 entries
$N$ ($N \le n^2$), we show the difference between disagreement and variance is minimized
when the number of edges is maximized. We establish that the configuration maximizing the
number of edges occurs when all the 1 entries form a compact formation, that is, all the
matrix entries in row $i$ are filled before filling row $i+1$ with 1s. Finally, we show that for
such a difference-minimizing configuration, the difference remains nonnegative. □
In typical small training sample size cases, when the errors are nonzero and not entirely
correlated (the pattern of 1s in the matrix is basically scattered), $d/2$ can be significantly
larger than $\sigma^2$. With increasing training set size, the functions learned tend to make the same
errors, and $d$ and $\sigma^2$ both approach 0.
2.2 Bounds on Error via Disagreement
From Jensen's inequality, we have that $E_x\, e_x^2 \ge (E_x\, e_x)^2 = e^2$; therefore, using eq. 3, we
conclude that $d/2 \le e - e^2$. This implies that

$$\frac{1 - \sqrt{1 - 2d}}{2} \;\le\; e \;\le\; \frac{1 + \sqrt{1 - 2d}}{2}. \qquad (4)$$

The upper bound derived is often not informative, as it is greater than 0.5, and often we
know the error is less than 0.5. Let $e_l = \frac{1 - \sqrt{1 - 2d}}{2}$. We next discuss whether/when $e_l$
can be far from the actual error, and the related question of whether we can derive a good
upper bound, or just a good estimator, on error using a measure based on disagreement.
When functions generated by the learner make correlated and frequent mistakes, el can be far from the true error. The extreme case of this is a learner that always outputs a constant function. In order to account for weak but stable learners, the error lower bound should be complemented with some measure that ensures that the learner is actually adapting (i.e., doing its job!). We explore using the training (empirical) error for this purpose. Let ē denote the average training error of the algorithm: ē = Ef ēf = Ef (1/m) Σ_{xi ∈ Z^f} 1{f(xi) ≠ yxi}, where Z^f is the training set that yielded f. Define ê = max(ē, el). We explore ê as a candidate criterion for model selection, which we compare against the cross-validation criterion in § 3.
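In code, the lower bound of Equation 4 and the criterion ê are immediate (a minimal sketch; the function names are ours):

```python
import math

def e_lower(d):
    """e_l = (1 - sqrt(1 - 2d)) / 2; note d <= 1/2 always holds for 0/1 disagreement."""
    return (1.0 - math.sqrt(max(0.0, 1.0 - 2.0 * d))) / 2.0

def e_hat(train_error, d):
    """The candidate model-selection criterion: max of training error and e_l."""
    return max(train_error, e_lower(d))
```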
Note that a learner can exhibit low disagreement and low training error, yet still have high prediction error. For example, the learner could memorize the training data and output a constant on all other instances. (Though when disagreement is exactly zero, the test error equals the training error.) A measure of self-disagreement within the labeled training set, defined by Lange et al. [LBRB02], in conjunction with the empirical training error does yield an upper bound. Still, we find empirically that, when using SVMs, naive Bayes, or logistic regression, disagreement on unlabeled data does not tend to wildly underestimate error, even though it's theoretically possible.
3 Experiments

We conducted experiments on the "20 Newsgroups" and Reuters-21578 text categorization datasets, and the Votes, Chess, Adult, and Optics datasets from the UCI collection [BKM98].¹ We chose two categorization tasks from the newsgroups sets: (1) identifying Baseball documents in a collection containing both Baseball and Hockey documents (2000 total documents), and (2) identifying alt.atheism documents from among the alt.atheism, soc.religion.christian, and talk.religion.misc collections (3000 documents). For the Reuters set, we chose documents belonging to one of the top 10 categories of the corpus (9410 documents), and we attempt to discriminate the "Earn" (3964) and "Acq" (2369) categories respectively from the remaining nine. These categories are large enough that 0/1 error remains a reasonable measure. We used the bow library for stemming and stop-word removal, kept features up to 3-grams, and used l2-normalized frequency counts [McC96]. The Votes, Chess, Adult, and Optics datasets have respectively 435, 3197, 32561, and 1800 instances. These datasets give us some representation of the various types of learning problems. All our data sets are in a nonnegative feature-value representation. We used support vector machines with polynomial kernels available from the libsvm library [CL01] in all our experiments.² For the error estimation experiments, we used linear SVMs with a C value of 10. For the model selection experiments, we used polynomial degree as the model selection parameter.
3.1 Error Estimation

We first examine the use of disagreement for error estimation both in the standard setting where training and test samples are uniformly iid, and in an active learning scenario.
For each of several training set sizes for each data set, we computed average results and standard deviation across thirty trials. In each trial, we first generate a training set, sampled either uniformly iid or actively, then set aside 20% of the remaining instances as the test set. Next, we partition the training set into equal halves, train an SVM on each half, and compute the disagreement rate between the two SVMs across the set of (unlabeled) data that has not been designated for the training or test set (80% of the total − m instances). We repeat this inner loop of partitioning, dual training, and disagreement computation thirty times and take averages.
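A condensed sketch of this inner loop follows; the scikit-learn SVC interface is our stand-in for the libsvm calls actually used, and the helper name is ours.

```python
import numpy as np
from sklearn.svm import SVC

def dis_e(X_lab, y_lab, X_pool, n_rounds=30, rng=None):
    """disE: mean disagreement on the unlabeled pool between SVMs
    trained on random disjoint halves of the labeled set."""
    rng = rng or np.random.default_rng(0)
    rates = []
    for _ in range(n_rounds):
        perm = rng.permutation(len(X_lab))
        half = len(perm) // 2
        f1 = SVC(kernel="linear", C=10).fit(X_lab[perm[:half]], y_lab[perm[:half]])
        f2 = SVC(kernel="linear", C=10).fit(X_lab[perm[half:]], y_lab[perm[half:]])
        rates.append(np.mean(f1.predict(X_pool) != f2.predict(X_pool)))
    return float(np.mean(rates))
```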
We examined the utility of our disagreement bound (4) as an estimate of the true test error of the algorithm trained on the full data set ("trueE"). We also examined using the maximum of the training error ("trainE") and the lower bound on error from our disagreement measure ("disE") as an estimate of trueE ("maxDtE = max(trainE, disE)"). Note that disE and trainE are respectively unbiased empirical estimates of the expected disagreement d and expected training error ē of § 2 for the standard setting. Since our disagreement measure is actually a bound on half error (i.e., error averaged over training sets of size m/2), we also compare against two-fold cross-validation error ("2cvE"), and the true test error of the two functions obtained from training on the two halves ("1/2trueE").
¹ Available from http://www.ics.uci.edu/ and http://www.daviddlewis.com/resources/testcollections/
² We observed similar results in error estimation using linear logistic regression and Naive Bayes learners in preliminary experiments.
Figure 1: 0/1 error versus training set size for a linear SVM on the Baseball-vs-Hockey dataset, comparing trueE, 1/2trueE, 2cvE, disE, trainE, and maxDtE. (a) Random training set. (b) Actively picked.
Figure 2: Plots of ratios when active learning, as a function of training set size, across the Baseball, Religion, Earn, Acq, Adult, Chess, Votes, and Digit 1 (Optics) tasks: (a) (2cvE − trueE)/(disE − trueE), (b) disE/trueE, (c) disE/(1/2trueE).
In the standard scenario, when the training set is chosen uniformly at random from the corpus, leave-one-out cross-validated error ("looE") is generally a very good estimate of trueE, while 2cvE is a good estimate for 1/2trueE. For all the data sets, as expected, our error estimate maxDtE underestimates 1/2trueE. A representative example is shown in Figure 1(a).

In the active learning scenario, the training set is chosen in an attempt to maximize information, and the choice of each new instance depends on the set of previously chosen instances. Often this means that especially difficult instances are chosen (or at least instances whose labels are difficult to infer from the current training set). Thus cross validation naturally overestimates the difficulty of the learning task and so may greatly overestimate error. On the other hand, an approximate model of active learning is that the instances are iid sampled from a hard distribution. This ignores the sequential nature of active learning. Measuring disagreement on the easier test distribution via subsampling the training set may remain a good estimator of the actual test error.
We used linear SVMs as the basis for our active learning procedure. In each trial, we begin with a random training set of size 10, and then grow the labeled set using the uncertainty sampling technique. We computed the various error measures at regular intervals.³ A representative plot of errors during active learning is given in Fig. 1(b). In all the datasets we experimented with, we have observed the same pattern: the error estimate using disagreement provides a much better estimate of 1/2trueE and trueE than does 2cvE (Fig. 2a), and can be used as an indication of the error and of the progress of active learning. Note that while we have not computed looE in the error-estimation experiments, Fig. 1(b) indicates that 2cvE is not a good estimator of trueE at size m/2 either, and this has been the case in all our experiments. We have observed that disE estimates 1/2trueE best (Fig. 2c). The estimation performance may degrade towards the end of active learning, when the learner converges (disagreement approaches 0). However, we have observed that both 1/2trueE (obtained via subsampling) and disE tend to overestimate the actual error of the active learner even at half the training size (e.g., Fig. 1(b)). This observation underlines the importance of taking the sequential nature of active learning into account.
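For reference, a minimal sketch of the uncertainty sampling loop: each query labels the pool instance closest to the current decision boundary. The oracle interface and names are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def uncertainty_sampling(X_pool, y_oracle, seed_idx, n_queries=190):
    labeled = list(seed_idx)                      # e.g. 10 random starting labels
    for _ in range(n_queries):
        clf = LinearSVC(C=10).fit(X_pool[labeled], y_oracle[labeled])
        margin = np.abs(clf.decision_function(X_pool))
        margin[labeled] = np.inf                  # never re-query a labeled point
        labeled.append(int(np.argmin(margin)))    # most uncertain instance next
    return labeled
```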
³ We could use a criterion based on disagreement for selective sampling, but we have not thoroughly explored this option.
Figure 3: (a) An example where maxDtE performs particularly well as a model selection criterion, tracking the true error curve more closely than looE or 2cvE. (b) A summary of all experiments plotting looE versus maxDtE on a log-log scale: points above the diagonal indicate maxDtE outperforming looE.
3.2 Model Selection
We explore various criteria for selecting the expected best among twenty SVMs, each trained using a different polynomial degree kernel. For each data set, we manually identify an interval of polynomial degrees that seems to include the error minimum,⁴ then choose twenty degrees equally spaced within that interval. We compare our disagreement-based estimate maxDtE with the cross-validation estimates looE and 2cvE as model selection criteria. In each trial, we identify the polynomial degree that is expected to be best according to each criterion, then train an SVM at that degree on the full training set. We compare trueE at the degree selected by each criterion against trueE at the actual optimal degree.
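The selection step itself is simple; a sketch follows, where `criterion` is any of the estimators above (the callable interface is our assumption, and the sketch is restricted to integer degrees, since scikit-learn's polynomial kernel expects them).

```python
from sklearn.svm import SVC

def select_degree(X, y, X_pool, degrees, criterion):
    """Pick the degree minimizing the given criterion, then refit on all the data."""
    scores = {deg: criterion(X, y, X_pool, degree=deg) for deg in degrees}
    best = min(scores, key=scores.get)
    return best, SVC(kernel="poly", degree=best, C=10).fit(X, y)
```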
In the standard uniform iid scenario, though cross validation often does fail as a model selection criterion for regression problems, it seems that cross validation in general is hard to beat for classification problems [SS02]. We find that both looE and 2cvE modestly outperform maxDtE as model selection criteria, though maxDtE is often competitive. We are exploring using the maximum of cross validation and maxDtE as an alternative, with preliminary evidence of a slight advantage over cross validation alone.

In an active learning setting, even though cross validation overestimates error, it is theoretically possible that cross validation would still function well to identify the best or near-best model. However, our experiments suggest that the performance of cross validation as a model selection criterion indeed degrades under active learning. In this situation, maxDtE serves as a consistently better model selection criterion. Figure 3(a) shows an example where maxDtE performs particularly well.
The active learning model selection experiments proceed as follows. For each data set, we use one run of active learning to identify 200 ordered and actively-picked instances. For each training size m ∈ {25, 50, 100, 200}, we run thirty experiments using a random shuffling of the size-m prefix of the 200 actively-picked instances. In each trial, and for each of the twenty polynomial degrees, we measure trueE and looE, then run an inner loop of thirty random partitionings and dual trainings to measure average d, expE, 2cvE, and 1/2trueE. Disagreements and errors are measured across the full test set (total − m instances), so this is a transductive learning setting. Figure 3(b) summarizes the results. We observe that model selection based on disagreement often outperforms model selection based on cross-validation, and at times significantly so. Across 26 experiments, the win-loss-tie record of maxDtE versus 2cvE was 16-5-5, the record of maxDtE versus looE was 18-6-2, and the record of 2cvE versus looE was 15-9-2.
⁴ Although for fractional degrees less than 1 the kernel matrix is not guaranteed to be positive semi-definite, we included such ranges whenever the range included the error minimum. Non-integral degrees greater than 1 do not pose a problem, as the feature values in all our problem representations are nonnegative.
4 Related Work
Previous work has already shown that using various measures of stability on unlabeled data is useful for ensemble learning, model selection, and regularization, both in supervised and unsupervised learning [KV95, Sch97, SS02, BC03, LBRB02, LRBB04]. Metric-based methods for model selection are complementary to our approach in that they are designed to prefer models/algorithms that behave similarly on the labeled and unlabeled data [Sch97, SS02, BC03], while disagreement is a measure of self-consistency on the same dataset (in this paper, unlabeled data only). Consequently, our method is also applicable to scenarios in which the test and training distributions are different. Lange et al. [LBRB02, LRBB04] also explore disagreement on unlabeled data, establishing robust model selection techniques based on disagreement for clustering. Theoretical work on algorithmic stability focuses on deriving generalization bounds given that the algorithm has certain inherent stability properties [KN02].
5 Conclusions and Future Work
Two advantages of co-validation over traditional techniques are: (1) disagreement can be measured to almost arbitrary precision, assuming unlabeled data is plentiful, and (2) disagreement is measured on unlabeled data drawn from the same distribution as the test instances, the extreme case of which is transductive learning, where the unlabeled and test instances coincide. In this paper we derived bounds on certain measures of error and variance based on disagreement, then examined empirically when co-validation might be useful. We found co-validation particularly useful in active learning settings. Future goals include extending the theory to active learning, precision/recall, algorithm comparison (using variance), ensemble learning, and regression. We plan to compare semi-supervised and transductive learning, and to consider procedures to generate fictitious unlabeled data.
References

[BC03] Y. Bengio and N. Chapados. Extensions to metric-based model selection. Journal of Machine Learning Research, 2003.
[BG03] Y. Bengio and Y. Grandvalet. No unbiased estimator of the variance of k-fold cross-validation. In NIPS, 2003.
[BKM98] C. L. Blake, E. Keogh, and C. J. Merz. UCI repository of machine learning databases, 1998.
[CL01] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines, 2001. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[KN02] S. Kutin and P. Niyogi. Almost-everywhere algorithmic stability and generalization error. In UAI, 2002.
[Kut02] S. Kutin. Algorithmic stability and ensemble-based learning. PhD thesis, University of Chicago, 2002.
[KV95] A. Krogh and J. Vedelsby. Neural network ensembles, cross validation, and active learning. In NIPS, 1995.
[LBRB02] T. Lange, M. Braun, V. Roth, and J. Buhmann. Stability-based model selection. In NIPS, 2002.
[LRBB04] T. Lange, V. Roth, M. Braun, and J. Buhmann. Stability based validation of clustering algorithms. Neural Computation, 16, 2004.
[McC96] A. K. McCallum. Bow: A toolkit for statistical language modeling, text retrieval, classification and clustering. http://www.cs.cmu.edu/~mccallum/bow, 1996.
[Sch97] D. Schuurmans. A new metric-based approach to model selection. In AAAI, 1997.
[SS02] D. Schuurmans and F. Southey. Metric-based methods for adaptive model selection and regularization. Machine Learning, pages 51–84, 2002.
1,766 | 2,604 | Assignment of Multiplicative Mixtures in
Natural Images
Odelia Schwartz
HHMI and Salk Institute
La Jolla, CA 92014
[email protected]
Terrence J. Sejnowski
HHMI and Salk Institute
La Jolla, CA 92014
[email protected]
Peter Dayan
GCNU, UCL
17 Queen Square, London
[email protected]
Abstract
In the analysis of natural images, Gaussian scale mixtures (GSM) have
been used to account for the statistics of filter responses, and to inspire hierarchical cortical representational learning schemes. GSMs pose a critical assignment problem, working out which filter responses were generated by a common multiplicative factor. We present a new approach
to solving this assignment problem through a probabilistic extension to
the basic GSM, and show how to perform inference in the model using
Gibbs sampling. We demonstrate the efficacy of the approach on both
synthetic and image data.
Understanding the statistical structure of natural images is an important goal for visual neuroscience. Neural representations in early cortical areas decompose images (and likely other sensory inputs) in a way that is sensitive to sophisticated aspects of their probabilistic structure. This structure also plays a key role in methods for image processing and coding. A striking aspect of natural images that has reflections in both top-down and bottom-up modeling is coordination across nearby locations, scales, and orientations. From a top-down perspective, this structure has been modeled using what is known as a Gaussian Scale Mixture model (GSM).1–3 GSMs involve a multi-dimensional Gaussian (each dimension of which captures local structure as in a linear filter), multiplied by a spatialized collection of common hidden scale variables or mixer variables* (which capture the coordination). GSMs have wide implications in theories of cortical receptive field development, e.g., the comprehensive bubbles framework of Hyvärinen.4 The mixer variables provide the top-down account of two bottom-up characteristics of natural image statistics, namely the "bowtie" statistical dependency,5,6 and the fact that the marginal distributions of receptive-field-like filters have high kurtosis.7,8 In hindsight, these ideas also bear a close relationship with Ruderman and Bialek's multiplicative bottom-up image analysis framework9 and statistical models for divisive gain control.6 Coordinated structure has also been addressed in other image work,10–14 and in other domains such as speech15 and finance.16

Many approaches to the unsupervised specification of representations in early cortical areas rely on the coordinated structure.17–21 The idea is to learn linear filters (e.g., modeling simple cells as in22,23), and then, based on the coordination, to find combinations of these (perhaps non-linearly transformed) as a way of finding higher-order filters (e.g., complex cells). One critical facet whose specification from data is not obvious is the neighborhood arrangement, i.e., which linear filters share which mixer variables.

* Mixer variables are also called multipliers, but are unrelated to the scales of a wavelet.
Here, we suggest a method for finding the neighborhood based on Bayesian inference of
the GSM random variables. In section 1, we consider estimating these components based
on information from different-sized neighborhoods and show the modes of failure when
inference is too local or too global. Based on these observations, in section 2 we propose
an extension to the GSM generative model, in which the mixer variables can overlap probabilistically. We solve the neighborhood assignment problem using Gibbs sampling, and
demonstrate the technique on synthetic data. In section 3, we apply the technique to image
data.
1 GSM inference of Gaussian and mixer variables
In a simple, n-dimensional version of a GSM, filter responses l are synthesized† by multiplying an n-dimensional Gaussian with values g = {g1 . . . gn} by a common mixer variable v:

    l = vg.    (1)

We assume the g are uncorrelated (σ² along the diagonal of the covariance matrix). For the analytical calculations, we assume that v has a Rayleigh distribution:

    p[v] ∝ [v exp(−v²/2)]^a,    (2)

where 0 < a ≤ 1 parameterizes the strength of the prior.
For ease, we develop the theory for a = 1. As is well known,2 and repeated in figure 1(B),
the marginal distribution of the resulting GSM is sparse and highly kurtotic. The joint
conditional distribution of two elements l1 and l2 , follows a bowtie shape, with the width
of the distribution of one dimension increasing for larger values (both positive and negative)
of the other dimension.
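As a quick sanity check, the sketch below samples from the a = 1 model of Equations 1–2 (numpy's Rayleigh with unit scale has density v exp(−v²/2)) and verifies both signatures: positive excess kurtosis of the marginals and the widening bowtie.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
N, sigma = 100_000, 1.0
g = rng.normal(0.0, sigma, size=(N, 2))    # independent Gaussian components
v = rng.rayleigh(scale=1.0, size=(N, 1))   # shared mixer, p[v] proportional to v exp(-v^2/2)
l = v * g                                  # GSM filter responses (Equation 1)

print(kurtosis(l[:, 0]))                   # ~3 excess kurtosis: sparser than a Gaussian
for lo, hi in [(0, 1), (1, 2), (2, 4)]:    # bowtie: spread of l2 grows with |l1|
    band = (np.abs(l[:, 0]) >= lo) & (np.abs(l[:, 0]) < hi)
    print((lo, hi), round(float(l[band, 1].std()), 3))
```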
The inverse problem is to estimate the n+1 variables g1 . . . gn, v from the n filter responses l1 . . . ln. It is formally ill-posed, though regularized through the prior distributions. Four posterior distributions are particularly relevant, and can be derived analytically from the model:

    rv           distribution                                                   posterior mean
    p[v|l1]      ∝ exp(−v²/2 − l1²/(2σ²v²))                                     √(|l1|/σ) · B(1, |l1|/σ) / B(1/2, |l1|/σ)
    p[v|l]       ∝ v^{−(n−1)} exp(−v²/2 − l²/(2σ²v²))                           √(l/σ) · B(3/2 − n/2, l/σ) / B(1 − n/2, l/σ)
    p[|g1| | l1] ∝ (1/g1²) exp(−g1²/(2σ²) − l1²/(2g1²))                         √(|l1|σ) · B(0, |l1|/σ) / B(−1/2, |l1|/σ)
    p[|g1| | l]  ∝ |g1|^{n−3} exp(−l1²/(2g1²) − g1²l²/(2l1²σ²))                 |l1|√(σ/l) · B(n/2 − 1/2, l/σ) / B(n/2 − 1, l/σ)

where B(n, x) is the modified Bessel function of the second kind (see also 24), l = √(Σi li²), and gi is forced to have the same sign as li, since the mixer variables are always positive.
Note that p[v|l1] and p[g1|l1] (rows 1, 3) are local estimates, while p[v|l] and p[g|l] (rows 2, 4) are estimates according to all filter outputs {l1 . . . ln}. The posterior p[v|l] has also been estimated numerically in noise removal for other mixer priors, by Portilla et al.25
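Reading B(n, x) as the standard modified Bessel function K_n(x) (scipy.special.kv), rows 2 and 4 of the table evaluate directly. This correspondence is our reading of the table, so treat the sketch as illustrative (and for large l/σ the ratios should be computed in log space).

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind

def mean_v_given_l(l, sigma=1.0):
    """E[v | l_1 ... l_n] for the shared mixer (row 2 of the table)."""
    l = np.asarray(l, dtype=float)
    n, L = l.size, float(np.sqrt(np.sum(l ** 2)))
    x = L / sigma
    return np.sqrt(x) * kv(1.5 - n / 2, x) / kv(1.0 - n / 2, x)

def mean_g1_given_l(l, sigma=1.0):
    """E[|g_1| | l_1 ... l_n] (row 4); the sign of g_1 follows the sign of l_1."""
    l = np.asarray(l, dtype=float)
    n, L = l.size, float(np.sqrt(np.sum(l ** 2)))
    x = L / sigma
    return abs(l[0]) * np.sqrt(sigma / L) * kv(n / 2 - 0.5, x) / kv(n / 2 - 1.0, x)
```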
The full GSM specifies a hierarchy of mixer variables. Wainwright2 considered a prespecified tree-based hierarchical arrangement. In practice, for natural sensory data, given a heterogeneous collection of li, it is advantageous to learn the hierarchical arrangement from examples. In an approach related to that of the GSM, Karklin and Lewicki19 suggested
generating log mixer values for all the filters and learning the linear combinations of a smaller collection of underlying values. Here, we consider the problem in terms of multiple mixer variables, with the linear filters being clustered into groups that share a single mixer. This poses a critical assignment problem of working out which filter responses share which mixer variables. We first study this issue using synthetic data in which two groups of filter responses l1 . . . l20 and l21 . . . l40 are generated by two mixer variables vα and vβ (figure 1). We attempt to infer the components of the GSM model from the synthetic data.

† We describe the l as being filter responses even in the synthetic case, to facilitate comparison with images.

Figure 1: A Generative model: each filter response is generated by multiplying its Gaussian variable by either mixer variable vα or mixer variable vβ. B Marginal and joint conditional statistics (bowties) of sample synthetic filter responses. For the joint conditional statistics, intensity is proportional to the bin counts, except that each column is independently re-scaled to fill the range of intensities. C–E Left: actual distributions of mixer and Gaussian variables; other columns: estimates based on different numbers of filter responses (1 filter, too local; 20 filters; 40 filters, too global). C Distribution of the estimate of the mixer variable vα. Note that mixer variable values are by definition positive. D Distribution of the estimate of one of the Gaussian variables, g1. E Joint conditional statistics of the estimates of Gaussian variables g1 and g2.
Figure 1C,D shows the empirical distributions of estimates of the conditional means of a mixer variable E(vα|{l}) and one of the Gaussian variables E(g1|{l}) based on different assumed assignments. For estimation based on too few filter responses, the estimates do not match the actual distributions well. For example, for a local estimate based on a single filter response, the Gaussian estimate peaks away from zero. For assignments including more filter responses, the estimates become good. However, inference is also compromised if the estimates for vα are too global, including filter responses actually generated from vβ (C and D, last column). In (E), we consider the joint conditional statistics of two components, each estimating their respective g1 and g2.

Figure 2: A Generative model in which each filter response is generated by multiplication of its Gaussian variable by a mixer variable. The mixer variable, vα, vβ, or vγ, is chosen probabilistically upon each filter response sample, from a Rayleigh distribution with a = .1. B Top: actual probability of filter associations with vα, vβ, and vγ; Bottom: Gibbs estimates of the probability of filter associations corresponding to vα, vβ, and vγ. C Statistics of generated filter responses, and of Gaussian and mixer estimates from Gibbs sampling.

Again, as the number of filter responses increases,
the estimates improve, provided that they are taken from the right group of filter responses
with the same mixer variable. Specifically, the mean estimates of g1 and g2 become more
independent (E, third column). Note that for estimations based on a single filter response,
the joint conditional distribution of the Gaussian appears correlated rather than independent
(E, second column); for estimation based on too many filter responses (40 in this example),
the joint conditional distribution of the Gaussian estimates shows a dependent (rather than
independent) bowtie shape (E, last column). Mixer variable joint statistics also deviate
from the actual when the estimations are too local or global (not shown).
We have observed qualitatively similar statistics for estimation based on coefficients in
natural images. Neighborhood size has also been discussed in the context of the quality of
noise removal, assuming a GSM model.26
2 Neighborhood inference: solving the assignment problem
The plots in figure 1 suggest that it should be possible to infer the assignments, ie work
out which filter responses share common mixers, by learning from the statistics of the
resulting joint dependencies. Hard assignment problems (in which each filter response
pays allegiance to just one mixer) are notoriously computationally brittle. Soft assignment
problems (in which there is a probabilistic relationship between filter responses and mixers)
are computationally better behaved. Further, real world stimuli are likely better captured
by the possibility that filter responses are coordinated in somewhat different collections in
different images.
We consider a richer, mixture GSM as a generative model (Figure 2). To model the generation of filter responses li for a single image patch, we multiply each Gaussian variable gi by a single mixer variable from the set v1 . . . vm. We assume that gi has association probability pij (satisfying Σj pij = 1, ∀i) of being assigned to mixer variable vj. The assignments are assumed to be made independently for each patch. We use si ∈ {1, 2, . . . , m} for the assignments:

    li = gi v_{si}.    (3)
Inference and learning in this model proceeds in two stages, according to the expectation
maximization algorithm. First, given a filter response li , we use Gibbs sampling for the
E phase to find possible appropriate (posterior) assignments. Williams et al.27 suggested
using Gibbs sampling to solve a similar assignment problem in the context of dynamic tree
models. Second, for the M phase, given the collection of assignments across multiple filter
responses, we update the association probabilities pij . Given sample mixer assignments,
we can estimate the Gaussian and mixer components of the GSM using the table of section 1, but restricting the filter response samples just to those associated with each mixer
variable.
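A compact sketch of one possible sampler for a single patch, under the a = 1 prior and σ = 1: mixer values are resampled on a grid from their analytic conditional, then assignments from p_ij times the Gaussian likelihood. The grid-based resampling and all interface choices are our assumptions; the M phase would then re-estimate p_ij as the frequency of s_i = j across patches and samples.

```python
import numpy as np

def gibbs_patch(l, P, n_sweeps=50, v_grid=None, rng=None):
    """l: (n,) responses of one patch; P: (n, m) association probabilities p_ij."""
    rng = rng or np.random.default_rng(0)
    v_grid = np.linspace(1e-2, 10.0, 500) if v_grid is None else v_grid
    n, m = P.shape
    logP = np.log(np.maximum(P, 1e-300))
    s = np.array([rng.choice(m, p=P[i]) for i in range(n)])  # initial assignments
    v = rng.rayleigh(size=m)                                 # initial mixer values
    for _ in range(n_sweeps):
        for j in range(m):   # v_j | {l_i : s_i = j}: Rayleigh prior x Gaussian likelihoods
            nj, lj2 = int(np.sum(s == j)), float(np.sum(l[s == j] ** 2))
            lp = (1 - nj) * np.log(v_grid) - v_grid ** 2 / 2 - lj2 / (2 * v_grid ** 2)
            w = np.exp(lp - lp.max()); w /= w.sum()
            v[j] = rng.choice(v_grid, p=w)
        for i in range(n):   # s_i | l_i, v: proportional to p_ij N(l_i; 0, v_j^2)
            lp = logP[i] - np.log(v) - l[i] ** 2 / (2 * v ** 2)
            w = np.exp(lp - lp.max()); w /= w.sum()
            s[i] = int(rng.choice(m, p=w))
    return s, v
```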
We tested the ability of this inference method to find the associations in the probabilistic mixer variable synthetic example shown in figure 2(A,B). The true generative model specifies probabilistic overlap of 3 mixer variables. We generated 5000 samples for each filter according to the generative model. We ran the Gibbs sampling procedure, setting the number of possible neighborhoods to 5 (i.e., > 3); after 500 iterations the weights converged near to the proper probabilities. In (B, top), we plot the actual probability distributions for the filter associations with each of the mixer variables. In (B, bottom), we show the estimated associations: the three non-zero estimates closely match the actual distributions; the other two estimates are zero (not shown). The procedure consistently finds correct associations even in larger examples of data generated with up to 10 mixer variables. In (C) we show an example of the actual and estimated distributions of the mixer and Gaussian components of the GSM. Note that the joint conditional statistics of both mixer and Gaussian are independent, since the variables were generated as such in the synthetic example. The Gibbs procedure can be adjusted for data generated with different parameters a of equation 2, and for related mixers,2 allowing for a range of image coefficient behaviors.
3 Image data
Having validated the inference model using synthetic data, we turned to natural images.
We derived linear filters from a multi-scale oriented steerable pyramid,28 with 100 filters,
at 2 preferred orientations, 25 non-overlapping spatial positions (with spatial subsampling
of 8 pixels), and two phases (quadrature pairs), and a single spatial frequency peaked at 1/6
cycles/pixel. The image ensemble is 4 images from a standard image compression database
(boats, goldhill, plant leaves, and mountain) and 4000 samples.
We ran our method with the same parameters as for synthetic data, with 7 possible neighborhoods and Rayleigh parameter a = .1 (as in figure 2). Figure 3 depicts the association
weights pij of the coefficients for each of the obtained mixer variables. In (A), we show
a schematic (template) of the association representation that will follow in (B, C) for the
actual data. Each mixer variable neighborhood is shown for coefficients of two phases
and two orientations along a spatial grid (one grid for each phase). The neighborhood is
illustrated via the probability of each coefficient to be generated from a given mixer variable. For the first two neighborhoods (B), we also show the image patches that yielded
the maximum log likelihood of P(v|patch). The first neighborhood (in B) prefers vertical patterns across most of its "receptive field", while the second has a more localized
region of horizontal preference. This can also be seen by averaging the 200 image patches
with the maximum log likelihood. Strikingly, all the mixer variables group together two
phases of quadrature pair (B, C). Quadrature pairs have also been extracted from cortical
data, and are the components of ideal complex cell models. Another tendency is to group orientations across space.

Figure 3: A Schematic of the mixer variable neighborhood representation. The probability that each coefficient is associated with the mixer variable ranges from 0 (black) to 1 (white). Left: vertical and horizontal filters, at two orientations and two phases. Each phase is plotted separately, on a 38 by 38 pixel spatial grid. Right: summary of the representation, with filter shapes replaced by oriented lines. Filters are approximately 6 pixels in diameter, with the spacing between filters 8 pixels. B First two image ensemble neighborhoods obtained from Gibbs sampling. Also shown are four 38×38 pixel patches that had the maximum log likelihood of P(v|patch), and the average of the first 200 maximal patches. C Other image ensemble neighborhoods. D Statistics of representative coefficients of two spatially displaced vertical filters, and of inferred Gaussian and mixer variables.

The phase and iso-orientation grouping bear some interesting
similarity to other recent suggestions;17, 18 as do the maximal patches.19 Wavelet filters
have the advantage that they can span a wider spatial extent than is possible with current
ICA techniques, and the analysis of parameters such as phase grouping is more controlled.
We are comparing the analysis with an ICA first-stage representation, which has other obvious advantages. We are also extending the analysis to correlated wavelet filters,25 and to simulations with a larger number of neighborhoods.
From the obtained associations, we estimated the mixer and Gaussian variables according
to our model. In (D) we show representative statistics of the coefficients and of the inferred
variables. The learned distributions of Gaussian and mixer variables are quite close to our
assumptions. The Gaussian estimates exhibit joint conditional statistics that are roughly
independent, and the mixer variables are weakly dependent.
We have thus far demonstrated neighborhood inference for an image ensemble, but it is also
interesting and perhaps more intuitive to consider inference for particular images or image
classes. In figure 4 (A-B) we demonstrate example mixer variable neighborhoods derived
from learning patches of a zebra image (Corel CD-ROM). As before, the neighborhoods
are composed of quadrature pairs; however, the spatial configurations are richer and have
Figure 4: Example of Gibbs sampling on a zebra image. The image is 151×151 pixels, and each spatial neighborhood spans 38×38 pixels. A, B Example mixer variable neighborhoods. Left: example mixer variable neighborhood, and the average of the 200 patches that yielded the maximum likelihood of P(v|patch). Right: the image, with example patches that yielded the maximum likelihood of P(v|patch) marked on it.
not been previously reported with unsupervised hierarchical methods: for example, in (A),
the mixture neighborhood captures a horizontal-bottom/vertical-top spatial configuration.
This appears particularly relevant in segmenting regions of the front zebra, as shown by
marking in the image the patches that yielded the maximum log likelihood of P(v|patch).
In (B), the mixture neighborhood captures a horizontal configuration, more focused on
the horizontal stripes of the front zebra. This example demonstrates the logic behind a
probabilistic mixture: coefficients corresponding to the bottom horizontal stripes might be
linked with top vertical stripes (A) or to more horizontal stripes (B).
4 Discussion
Work on the study of natural image statistics has recently evolved from issues about scale-space hierarchies, wavelets, and their ready induction through unsupervised learning models (loosely based on cortical development) towards the coordinated statistical structure of the wavelet components. This includes bottom-up (e.g., bowties, hierarchical representations such as complex cells) and top-down (e.g., GSM) viewpoints. The resulting new insights
inform a wealth of models and ideas and form the essential backdrop for the work in this
paper. They also link to impressive engineering results in image coding and processing.
A most critical aspect of an hierarchical representational model is the way that the structure
of the hierarchy is induced. We addressed the hierarchy question using a novel extension
to the GSM generative model in which mixer variables (at one level of the hierarchy) enjoy probabilistic assignments to filter responses (at a lower level). We showed how these
assignments can be learned (using Gibbs sampling), and illustrated some of their attractive
properties using both synthetic and a variety of image data. We grounded our method firmly
in Bayesian inference of the posterior distributions over the two classes of random variables
in a GSM (mixer and Gaussian), placing particular emphasis on the interplay between the
generative model and the statistical properties of its components.
An obvious question raised by our work is the neural correlate of the two different posterior
variables. The Gaussian variable has characteristics resembling those of the output of divisively normalized simple cells;6 the mixer variable is more obviously related to the output
of quadrature pair neurons (such as orientation energy or motion energy cells, which may
also be divisively normalized). How these different information sources may subsequently
be used is of great interest.
Acknowledgements This work was funded by the HHMI (OS, TJS) and the Gatsby Charitable Foundation (PD). We are very grateful to Patrik Hoyer, Mike Lewicki, Zhaoping Li,
Simon Osindero, Javier Portilla and Eero Simoncelli for discussion.
References

[1] D Andrews and C Mallows. Scale mixtures of normal distributions. J. Royal Stat. Soc., 36:99–102, 1974.
[2] M J Wainwright and E P Simoncelli. Scale mixtures of Gaussians and the statistics of natural images. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, Adv. Neural Information Processing Systems, volume 12, pages 855–861, Cambridge, MA, May 2000. MIT Press.
[3] M J Wainwright, E P Simoncelli, and A S Willsky. Random cascades on wavelet trees and their use in modeling and analyzing natural imagery. Applied and Computational Harmonic Analysis, 11(1):89–123, July 2001. Special issue on wavelet applications.
[4] A Hyvärinen, J Hurri, and J Väyrynen. Bubbles: a unifying framework for low-level statistical properties of natural image sequences. Journal of the Optical Society of America A, 20:1237–1252, May 2003.
[5] R W Buccigrossi and E P Simoncelli. Image compression via joint statistical characterization in the wavelet domain. IEEE Trans Image Proc, 8(12):1688–1701, December 1999.
[6] O Schwartz and E P Simoncelli. Natural signal statistics and sensory gain control. Nature Neuroscience, 4(8):819–825, August 2001.
[7] D J Field. Relations between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Am. A, 4(12):2379–2394, 1987.
[8] H Attias and C E Schreiner. Temporal low-order statistics of natural sounds. In M Jordan, M Kearns, and S Solla, editors, Adv in Neural Info Processing Systems, volume 9, pages 27–33. MIT Press, 1997.
[9] D L Ruderman and W Bialek. Statistics of natural images: Scaling in the woods. Phys. Rev. Letters, 73(6):814–817, 1994.
[10] C Zetzsche, B Wegmann, and E Barth. Nonlinear aspects of primary vision: Entropy reduction beyond decorrelation. In Int'l Symposium, Society for Information Display, volume XXIV, pages 933–936, 1993.
[11] J Huang and D Mumford. Statistics of natural images and models. In CVPR, page 547, 1999.
[12] J. Romberg, H. Choi, and R. Baraniuk. Bayesian wavelet domain image modeling using hidden Markov trees. In Proc. IEEE Int'l Conf on Image Proc, Kobe, Japan, October 1999.
[13] A Turiel, G Mato, N Parga, and J P Nadal. The self-similarity properties of natural images resemble those of turbulent flows. Phys. Rev. Lett., 80:1098–1101, 1998.
[14] J Portilla and E P Simoncelli. A parametric texture model based on joint statistics of complex wavelet coefficients. Int'l Journal of Computer Vision, 40(1):49–71, 2000.
[15] Helmut Brehm and Walter Stammler. Description and generation of spherically invariant speech-model signals. Signal Processing, 12:119–141, 1987.
[16] T Bollerslev, R Engle, and D Nelson. ARCH models. In R Engle and D McFadden, editors, Handbook of Econometrics V. 1994.
[17] A Hyvärinen and P Hoyer. Emergence of topography and complex cell properties from natural images using extensions of ICA. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, Adv. Neural Information Processing Systems, volume 12, pages 827–833, Cambridge, MA, May 2000. MIT Press.
[18] P Hoyer and A Hyvärinen. A multi-layer sparse coding network learns contour coding from natural images. Vision Research, 42(12):1593–1605, 2002.
[19] Y Karklin and M S Lewicki. Learning higher-order structures in natural images. Network: Computation in Neural Systems, 14:483–499, 2003.
[20] L Wiskott and T Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural Computation, 14(4):715–770, 2002.
[21] C Kayser, W Einhäuser, O Dümmer, P König, and K P Körding. Extracting slow subspaces from natural videos leads to complex cells. In G Dorffner, H Bischof, and K Hornik, editors, Proc. Int'l Conf. on Artificial Neural Networks (ICANN-01), pages 1075–1080, Vienna, Aug 2001. Springer-Verlag, Heidelberg.
[22] B A Olshausen and D J Field. Emergence of simple-cell receptive field properties by learning a sparse factorial code. Nature, 381:607–609, 1996.
[23] A J Bell and T J Sejnowski. The "independent components" of natural scenes are edge filters. Vision Research, 37(23):3327–3338, 1997.
[24] U Grenander and A Srivastava. Probability models for clutter in natural images. IEEE Trans. on Patt. Anal. and Mach. Intel., 23:423–429, 2002.
[25] J Portilla, V Strela, M Wainwright, and E Simoncelli. Adaptive Wiener denoising using a Gaussian scale mixture model in the wavelet domain. In Proc 8th IEEE Int'l Conf on Image Proc, pages 37–40, Thessaloniki, Greece, Oct 7-10 2001. IEEE Computer Society.
[26] J Portilla, V Strela, M Wainwright, and E P Simoncelli. Image denoising using a scale mixture of Gaussians in the wavelet domain. IEEE Trans Image Processing, 12(11):1338–1351, November 2003.
[27] C K I Williams and N J Adams. Dynamic trees. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Adv. Neural Information Processing Systems, volume 11, pages 634–640, Cambridge, MA, 1999. MIT Press.
[28] E P Simoncelli, W T Freeman, E H Adelson, and D J Heeger. Shiftable multi-scale transforms. IEEE Trans Information Theory, 38(2):587–607, March 1992. Special Issue on Wavelets.
1,767 | 2,605 | Semi-supervised Learning via Gaussian
Processes
Neil D. Lawrence
Department of Computer Science
University of Sheffield
Sheffield, S1 4DP, U.K.
[email protected]
Michael I. Jordan
Computer Science and Statistics
University of California
Berkeley, CA 94720, U.S.A.
[email protected]
Abstract
We present a probabilistic approach to learning a Gaussian Process
classifier in the presence of unlabeled data. Our approach involves
a "null category noise model" (NCNM) inspired by ordered categorical noise models. The noise model reflects an assumption that
the data density is lower between the class-conditional densities.
We illustrate our approach on a toy problem and present comparative results for the semi-supervised classification of handwritten
digits.
1 Introduction
The traditional machine learning classification problem involves a set of input vectors X = [x1 . . . xN]ᵀ and associated labels y = [y1 . . . yN]ᵀ, yn ∈ {−1, 1}. The goal is to find a mapping between the inputs and the labels that yields high predictive accuracy. It is natural to consider whether such predictive performance can be improved via "semi-supervised learning," in which a combination of labeled data and unlabeled data are available.
Probabilistic approaches to classification either estimate the class-conditional densities or attempt to model p (yn |xn ) directly. In the latter case, if we fail to make
any assumptions about the underlying distribution of input data, the unlabeled
data will not affect our predictions. Thus, most attempts to make use of unlabeled
data within a probabilistic framework focus on incorporating a model of $p(x_n)$: for
example, by treating it as a mixture, $\sum_{y_n} p(x_n|y_n)\,p(y_n)$, and inferring $p(y_n|x_n)$
(e.g., [5]), or by building kernels based on p (xn ) (e.g., [8]). These approaches can be
unwieldy, however, in that the complexities of the input distribution are typically
of little interest when performing classification, so that much of the effort spent
modelling p (xn ) may be wasted.
An alternative is to make weaker assumptions regarding p (xn ) that are of particular
relevance to classification. In particular, the cluster assumption asserts that the
data density should be reduced in the vicinity of a decision boundary (e.g., [2]).
Such a qualitative assumption is readily implemented within the context of non-probabilistic kernel-based classifiers. In the current paper we take up the challenge
of showing how it can be achieved within a (nonparametric) probabilistic framework.

Figure 1: The ordered categorical noise model. The plot shows p(y_n|f_n) for different
values of y_n. Here we have assumed three categories.
Our approach involves a notion of a "null category region," a region which acts
to exclude unlabeled data points. Such a region is analogous to the traditional
notion of a "margin" and indeed our approach is similar in spirit to the transductive
SVM [10], which seeks to maximize the margin by allocating labels to the unlabeled
data. A major difference, however, is that our approach maintains and updates the
process variance (not merely the process mean) and, as we will see, this variance
turns out to interact in a significant way with the null category concept.
The structure of the paper is as follows. We introduce the basic probabilistic framework in Section 2 and discuss the effect of the null category in Section 3. Section 4
discusses posterior process updates and prediction. We present comparative experimental results in Section 5 and present our conclusions in Section 6.
2 Probabilistic Model
In addition to the input vector $x_n$ and the label $y_n$, our model includes a latent
process variable $f_n$, such that the probability of class membership decomposes as
$p(y_n|x_n) = \int p(y_n|f_n)\, p(f_n|x_n)\, df_n$. We first focus on the noise model, $p(y_n|f_n)$,
deferring the discussion of an appropriate process model, p (fn |xn ), to later.
2.1 Ordered categorical models
We introduce a novel noise model which we have termed a null category noise model,
as it derives from the general class of ordered categorical models [1]. In the specific
context of binary classification, our focus in this paper, we consider an ordered
categorical model containing three categories:¹

$$p(y_n \mid f_n) = \begin{cases} \phi\!\left(-(f_n + \tfrac{w}{2})\right) & \text{for } y_n = -1 \\ \phi\!\left(f_n + \tfrac{w}{2}\right) - \phi\!\left(f_n - \tfrac{w}{2}\right) & \text{for } y_n = 0 \\ \phi\!\left(f_n - \tfrac{w}{2}\right) & \text{for } y_n = 1 \end{cases}$$

where $\phi(x) = \int_{-\infty}^{x} \mathcal{N}(z \mid 0, 1)\, dz$ is the cumulative Gaussian distribution function
and w is a parameter giving the width of category y_n = 0 (see Figure 1).

¹ See also [9] who makes use of a similar noise model in a discussion of Bayesian
interpretations of the SVM.

Figure 2: Graphical representation of the null category model. The fully-shaded nodes
are always observed, whereas the lightly-shaded node is observed when z_n = 0.

We can also express this model in an equivalent and simpler form by replacing the
cumulative Gaussian distribution by a Heaviside step function H(·) and adding
independent Gaussian noise to the process model:

$$p(y_n \mid f_n) = \begin{cases} H\!\left(-(f_n + \tfrac{1}{2})\right) & \text{for } y_n = -1 \\ H\!\left(f_n + \tfrac{1}{2}\right) - H\!\left(f_n - \tfrac{1}{2}\right) & \text{for } y_n = 0 \\ H\!\left(f_n - \tfrac{1}{2}\right) & \text{for } y_n = 1 \end{cases}$$
where we have standardized the width parameter to 1, by assuming that the overall
scale is also handled by the process model.
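To make the noise model concrete, here is a minimal Python sketch (ours, not code from the paper; the function name and the SciPy dependency are our choices) that evaluates p(y_n | f_n) in its cumulative-Gaussian form and checks that the three category probabilities sum to one:

```python
import numpy as np
from scipy.stats import norm

def ncnm_likelihood(y, f, w=1.0):
    """Ordered categorical (null category) noise model p(y | f).
    y is in {-1, 0, +1}; w is the width of the null category."""
    if y == -1:
        return norm.cdf(-(f + w / 2.0))
    elif y == 0:
        return norm.cdf(f + w / 2.0) - norm.cdf(f - w / 2.0)
    else:  # y == +1
        return norm.cdf(f - w / 2.0)

f = 0.3  # an arbitrary latent value
print(sum(ncnm_likelihood(y, f) for y in (-1, 0, 1)))  # -> 1.0
```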
To use this model in an unlabeled setting we introduce a further variable, z n , which
is one if a data point is unlabeled and zero otherwise. We first impose
$$p(z_n = 1 \mid y_n = 0) = 0; \qquad (1)$$
in other words, a data point can not be from the category yn = 0 and be unlabeled.
We assign probabilities of missing labels to the other classes: $p(z_n = 1 \mid y_n = 1) = \gamma_+$
and $p(z_n = 1 \mid y_n = -1) = \gamma_-$. We see from the graphical representation in Figure 2
that zn is d-separated from xn . Thus when yn is observed, the posterior process is
updated by using p (yn |fn ). On the other hand, when the data point is unlabeled
the posterior process must be updated by p (zn |fn ) which is easily computed as:
$$p(z_n = 1 \mid f_n) = \sum_{y_n} p(y_n \mid f_n)\, p(z_n = 1 \mid y_n).$$
The "effective likelihood function" for a single data point, $L(f_n)$, therefore takes
one of three forms:

$$L(f_n) = \begin{cases} H\!\left(-(f_n + \tfrac{1}{2})\right) & \text{for } y_n = -1,\; z_n = 0 \\ \gamma_-\, H\!\left(-(f_n + \tfrac{1}{2})\right) + \gamma_+\, H\!\left(f_n - \tfrac{1}{2}\right) & \text{for } z_n = 1 \\ H\!\left(f_n - \tfrac{1}{2}\right) & \text{for } y_n = 1,\; z_n = 0 \end{cases}$$
The constraint imposed by (1) implies that an unlabeled data point never comes
from the class yn = 0. Since yn = 0 lies between the labeled classes this is equivalent
to a hard assumption that no data comes from the region around the decision
boundary. We can also soften this hard assumption if so desired by injection of
noise into the process model. If we also assume that our labeled data only comes
from the classes yn = 1 and yn = ?1 we will never obtain any evidence for data
with yn = 0; for this reason we refer to this category as the null category and the
overall model as a null category noise model (NCNM).
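A hedged sketch of the effective likelihood under the Heaviside formulation follows (our illustration; the default γ values are arbitrary placeholders, and the text later notes they are naturally set to the unlabeled proportion of the training set):

```python
import numpy as np

def heaviside(x):
    # H(x) = 1 for x > 0 and 0 otherwise (the value at x = 0 is immaterial here)
    return np.where(np.asarray(x) > 0, 1.0, 0.0)

def effective_likelihood(f, y, z, gamma_plus=0.5, gamma_minus=0.5):
    """Effective likelihood L(f) of a single point under the NCNM.
    y in {-1, +1} when the point is labeled (z = 0); z = 1 marks it unlabeled."""
    if z == 1:  # unlabeled: mixture over the two labeled classes
        return (gamma_minus * heaviside(-(f + 0.5))
                + gamma_plus * heaviside(f - 0.5))
    return heaviside(-(f + 0.5)) if y == -1 else heaviside(f - 0.5)
```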
3 Process Model and Effect of the Null Category
We work within the Gaussian process framework and assume
$$p(f_n \mid x_n) = \mathcal{N}\!\left(f_n \mid \mu(x_n), \varsigma(x_n)\right),$$
where the mean μ(x_n) and the variance ς(x_n) are functions of the input space. A
natural consideration in this setting is the effect of our likelihood function on the
Figure 3: Two situations of interest. Diagrams show the prior distribution over fn (long
dashes) the effective likelihood function from the noise model when zn = 1 (short dashes)
and a schematic of the resulting posterior over fn (solid line). Left: The posterior is
bimodal and has a larger variance than the prior. Right: The posterior has one dominant
mode and a lower variance than the prior. In both cases the process is pushed away from
the null category.
distribution over fn from incorporating a new data point. First we note that if
yn ? {?1, 1} the effect of the likelihood will be similar to that incurred in binary
classification, in that the posterior will be a convolution of the step function and a
Gaussian distribution. This is comforting as when a data point is labeled the model
will act in a similar manner to a standard binary classification model. Consider now
the case when the data point is unlabeled. The effect will depend on the mean and
variance of p (fn |xn ). If this Gaussian has little mass in the null category region,
the posterior will be similar to the prior. However, if the Gaussian has significant
mass in the null category region, the outcome may be loosely described in two ways:
1. If p(f_n|x_n) "spans the likelihood," Figure 3 (Left), then the mass of the
posterior can be apportioned to either side of the null category region,
leading to a bimodal posterior. The variance of the posterior will be greater
than the variance of the prior, a consequence of the fact that the effective
likelihood function is not log-concave (as can be easily verified).
2. If p(f_n|x_n) is "rectified by the likelihood," Figure 3 (Right), then the mass
of the posterior will be pushed in to one side of the null category and the
variance of the posterior will be smaller than the variance of the prior.
Note that for all situations when a portion of the mass of the prior distribution
falls within the null category region it is pushed out to one side or both sides. The
intuition behind the two situations is that in case 1, it is not clear what label the
data point has, however it is clear that it shouldn?t be where it currently is (in the
null category). The result is that the process variance increases. In case 2 the data
point is being assigned a label and the decision boundary is pushed to one side of
the point so that it is classified according to the assigned label.
4 Posterior Inference and Prediction
Broadly speaking the effects discussed above are independent of the process model:
the effective likelihood will always force the latent function away from the null
category. To implement our model, however, we must choose a process model and
an inference method. The nature of the noise model means that it is unlikely that we
will find a non-trivial process model for which inference (in terms of marginalizing
fn ) will be tractable. We therefore turn to approximations which are inspired by
"assumed density filtering" (ADF) methods; see, e.g., [3]. The idea in ADF is to
approximate the (generally non-Gaussian) posterior with a Gaussian by matching
the moments between the approximation and the true posterior. ADF has also been
extended to allow each approximation to be revisited and improved as the posterior
distribution evolves [7].
Recall from Section 3 that the noise model is not log-concave. When the variance
of the process increases the best Gaussian approximation to our noise model can
have negative variance. This situation is discussed in [7], where various suggestions
are given to cope with the issue. In our implementation we followed the simplest
suggestion: we set a negative variance to zero.
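The following is our own illustrative moment-matching update in the ADF spirit, not the authors' implementation: it matches a Gaussian to the tilted distribution numerically and, as the text suggests, clips a negative implied site precision to zero.

```python
import numpy as np

def adf_site_update(mu, var, log_lik, half_width=8.0, n=2001):
    """One ADF-style update by numerical moment matching (illustrative only).

    mu, var: prior Gaussian moments of the latent f at one point.
    log_lik: vectorized function f -> log L(f) for the (possibly
    non-log-concave) effective likelihood."""
    s = np.sqrt(var)
    f = np.linspace(mu - half_width * s, mu + half_width * s, n)
    log_post = log_lik(f) - 0.5 * (f - mu) ** 2 / var
    w = np.exp(log_post - log_post.max())
    w /= w.sum()
    post_mu = float(np.sum(w * f))
    post_var = float(np.sum(w * (f - post_mu) ** 2))
    # Site (local) precision implied by the match; it can come out negative
    # when the posterior variance grows, so we floor it at zero as in the text.
    tau_site = max(1.0 / post_var - 1.0 / var, 0.0)
    return post_mu, post_var, tau_site
```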
One important advantage of the Gaussian process framework is that hyperparameters in the covariance function (i.e., the kernel function), can be optimized by
type-II maximum likelihood. In practice, however, if the process variance is maximized in an unconstrained manner the effective width of the null category can be
driven to zero, yielding a model that is equivalent to a standard binary classification
noise model.² To prevent this from happening we regularize with an L1 penalty on
the process variances (this is equivalent to placing an exponential prior on those
parameters).
4.1 Prediction with the NCNM
Once the parameters of the process model have been learned, we wish to make
predictions about a new test-point x_* via the marginal distribution p(y_*|x_*). For
the NCNM an issue arises here: this distribution will have a non-zero probability
of y_* = 0, a label that does not exist in either our labeled or unlabeled data. This
is where the role of z becomes essential. The new point also has z_* = 1, so in reality
the probability that a data point is from the positive class is given by

$$p(y_* \mid x_*, z_*) \propto p(z_* \mid y_*)\, p(y_* \mid x_*). \qquad (2)$$

The constraint that p(z_* | y_* = 0) = 0 causes the predictions to be correctly
normalized. So for the distribution to be correctly normalized for a test data point we
must assume that we have observed z_* = 1.
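A minimal sketch of this renormalization (our code; the interface and default γ values are assumptions):

```python
import numpy as np

def predict_class(p_star, gamma_plus=0.5, gamma_minus=0.5):
    """Renormalize the NCNM predictive distribution given z_* = 1.

    p_star: array [p(y=-1|x), p(y=0|x), p(y=+1|x)] from the GP + noise model.
    Returns p(y | x, z=1) over {-1, 0, +1}; the null category gets zero mass."""
    p_z_given_y = np.array([gamma_minus, 0.0, gamma_plus])  # p(z=1 | y)
    joint = p_z_given_y * np.asarray(p_star)                # eq. (2), unnormalized
    return joint / joint.sum()
```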
An interesting consequence is that observing x_* will have an effect on the process
model. This is contrary to the standard Gaussian process setup (see, e.g., [11])
in which the predictive distribution depends only on the labeled training data and
the location of the test point x_*. In the NCNM the entire process model p(f_*|x_*)
should be updated after the observation of x_*. This is not a particular disadvantage
of our approach; rather, it is an inevitable consequence of any method that allows
unlabeled data to affect the location of the decision boundary?a consequence that
our framework makes explicit. In our experiments, however, we disregard such considerations and make (possibly suboptimal) predictions of the class labels according
to (2).
5 Experiments
Sparse representations of the data set are essential for speeding up the process of
learning.

² Recall, as discussed in Section 1, that we fix the width of the null category to unity:
changes in the scale of the process model are equivalent to changing this width.
³ The informative vector machine is an approximation to a full Gaussian Process which
is competitive with the support vector machine in terms of speed and accuracy.

Figure 4: Results from the toy problem. There are 400 points, which are labeled with
probability 0.1. Labelled data-points are shown as circles and crosses. Data-points in the
active set are shown as large dots. All other data-points are shown as small dots. Left:
Learning on the labeled data only with the IVM algorithm. All labeled points are used in
the active set. Right: Learning on the labeled and unlabeled data with the NCNM. There
are 100 points in the active set. In both plots decision boundaries are shown as a solid
line; dotted lines represent contours within 0.5 of the decision boundary (for the NCNM
this is the edge of the null category).

We made use of the informative vector machine³ (IVM) approach [6] to
greedily select an active set according to information-theoretic criteria. The IVM
also enables efficient learning of kernel hyperparameters, and we made use of this
feature in all of our experiments. In all our experiments we used a kernel of the
form

$$k_{nm} = \theta_2 \exp\!\big(-\theta_1 (x_n - x_m)^T (x_n - x_m)\big) + \theta_3 \delta_{nm},$$

where δ_nm is the Kronecker delta function. The IVM algorithm selects an active
set, and the parameters of the kernel were learned by performing type-II maximum
likelihood over the active set. Since active set selection causes the marginalized
likelihood to fluctuate it cannot be used to monitor convergence, we therefore simply
iterated fifteen times between active set selection and kernel parameter optimisation.
The parameters of the noise model, {γ₊, γ₋}, can also be optimized, but note that
if we constrain γ₊ = γ₋ = γ then the likelihood is maximized by setting γ to the
proportion of the training set that is unlabeled.
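For reference, a direct NumPy implementation of this kernel might look as follows (the hyperparameter values here are placeholders, not those learned in the experiments):

```python
import numpy as np

def kernel_matrix(X, theta1=1.0, theta2=1.0, theta3=1e-2):
    """RBF kernel with a white-noise (Kronecker delta) term, as in the text:
    k_nm = theta2 * exp(-theta1 * ||x_n - x_m||^2) + theta3 * delta_nm.
    X has shape (n, d)."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return theta2 * np.exp(-theta1 * sq_dists) + theta3 * np.eye(len(X))
```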
We first considered an illustrative toy problem to demonstrate the capabilities of our
model. We generated two-dimensional data in which two class-conditional densities
interlock. There were 400 points in the original data set. Each point was labeled
with probability 0.1, leading to 37 labeled points. First a standard IVM classifier
was trained on the labeled data only (Figure 4, Left). We then used the null
category approach to train a classifier that incorporates the unlabeled data. As
shown in Figure 4 (Right), the resulting decision boundary finds a region of low
data density and more accurately reflects the underlying data distribution.
5.1 High-dimensional example
To explore the capabilities of the model when the data set is of a much higher
dimensionality we considered the USPS data set⁴ of handwritten digits. The task
chosen was to separate the digit 3 from 5. To investigate performance across a range
of different operating conditions, we varied the proportion of unlabeled data between
0.2 and 1.25 × 10⁻².

⁴ The data set contains 658 examples of 5s and 556 examples of 3s.

Figure 5: Area under the ROC curve plotted against probability of a point being labeled.
Mean and standard errors are shown for the IVM (solid line), the NCNM (dotted line),
the SVM (dash-dot line) and the transductive SVM (dashed line).

We compared four classifiers: a standard IVM trained on the
labeled data only, a support vector machine (SVM) trained on the labeled data only,
the NCNM trained on the combined labeled-unlabeled data, and an implementation
of the transductive SVM trained on the combined labeled-unlabeled data. The SVM
and transductive SVM used the SVMlight software [4]. For the SVM, the kernel
inverse width hyperparameter θ₁ was set to the value learned by the IVM. For the
transductive SVM it was set to the higher of the two values learned by the IVM
and the NCNM.⁵ For the SVM-based models we set θ₂ = 1 and θ₃ = 0; the margin
error cost, C, was left at the SVMlight default setting.
The quality of the resulting classifiers was evaluated by computing the area under
the ROC curve for a previously unseen test data set. Each run was completed ten
times with different random seeds. The results are summarized in Figure 5.
The results show that below a label probability of 2.5 × 10⁻² both the SVM and
transductive SVM outperform the NCNM. In this region the estimate θ₁ provided
by the NCNM was sometimes very low, leading to occasional very poor results
(note the large error bar). Above 2.5 × 10⁻² a clear improvement is obtained for
the NCNM over the other models. It is of interest to contrast this result with an
analogous experiment on discriminating twos vs. threes in [8], where p(x_n) was used
to derive a kernel. No improvement was found in this case, which [8] attributed to
the difficulties of modelling p(x_n) in high dimensions. These difficulties appear to
be diminished for the NCNM, presumably because it never explicitly models p(x_n).
We would not want to read too much into the comparison between the transductive
SVM and the NCNM since an exhaustive exploration of the regularisation parameter C was not undertaken. Similar comments also apply to the regularisation of
the process variances for the NCNM. However, these preliminary results appear
encouraging for the NCNM. Code for recreating all our experiments is available at
http://www.dcs.shef.ac.uk/~neil/ncnm.
⁵ Initially we set the value to that learned by the NCNM, but performance was improved
by selecting it to be the higher of the two.
6 Discussion
We have presented an approach to learning a classifier in the presence of unlabeled
data which incorporates the natural assumption that the data density between
classes should be low. Our approach implements this qualitative assumption within
a probabilistic framework without explicit, expensive and possibly counterproductive modeling of the class-conditional densities.
Our approach is similar in spirit to the transductive SVM, but with a major difference that in the SVM the process variance is discarded. In the NCNM, the process
variance is a key part of data point selection; in particular, Figure 3 illustrated how
inclusion of some data points actually increases the posterior process variance. Discarding process variance has advantages and disadvantages?an advantage is that
it leads to an optimisation problem that is naturally sparse, while a disadvantage is
that it prevents optimisation of kernel parameters via type-II maximum likelihood.
In Section 4.1 we discussed how test data points affect the location of our decision
boundary. An important desideratum would be that the location of the decision
boundary should converge as the amount of test data goes to infinity. One direction
for further research would be to investigate whether or not this is the case.
Acknowledgments
This work was supported under EPSRC Grant No. GR/R84801/01 and a grant
from the National Science Foundation.
References
[1] A. Agresti. Categorical Data Analysis. John Wiley and Sons, 2002.
[2] O. Chapelle, J. Weston, and B. Schölkopf. Cluster kernels for semi-supervised learning. In Advances in Neural Information Processing Systems, Cambridge, MA, 2002.
MIT Press.
[3] L. Csató. Gaussian Processes – Iterative Sparse Approximations. PhD thesis, Aston
University, 2002.
[4] T. Joachims. Making large-scale SVM learning practical. In Advances in Kernel
Methods: Support Vector Learning, Cambridge, MA, 1998. MIT Press.
[5] N. D. Lawrence and B. Schölkopf. Estimating a kernel Fisher discriminant in the
presence of label noise. In Proceedings of the International Conference in Machine
Learning, San Francisco, CA, 2001. Morgan Kaufmann.
Learning, San Francisco, CA, 2001. Morgan Kaufmann.
[6] N. D. Lawrence, M. Seeger, and R. Herbrich. Fast sparse Gaussian process methods: The informative vector machine. In Advances in Neural Information Processing
Systems, Cambridge, MA, 2003. MIT Press.
[7] T. P. Minka. A family of algorithms for approximate Bayesian inference. PhD thesis,
Massachusetts Institute of Technology, 2001.
[8] M. Seeger. Covariance kernels from Bayesian generative models. In Advances in
Neural Information Processing Systems, Cambridge, MA, 2002. MIT Press.
[9] P. Sollich. Probabilistic interpretation and Bayesian methods for support vector machines. In Proceedings 1999 International Conference on Artificial Neural Networks,
ICANN'99, pages 91–96, 1999.
[10] V. N. Vapnik. Statistical Learning Theory. John Wiley and Sons, New York, 1998.
[11] C. K. I. Williams. Prediction with Gaussian processes: From linear regression to
linear prediction and beyond. In Learning in Graphical Models, Cambridge, MA,
1999. MIT Press.
1,768 | 2,606 | Generalization Error and Algorithmic Convergence of Median Boosting

Balázs Kégl
Department of Computer Science and Operations Research, University of Montreal
CP 6128 succ. Centre-Ville, Montréal, Canada H3C 3J7
kegl@iro.umontreal.ca
Abstract
We have recently proposed an extension of AdaBoost to regression
that uses the median of the base regressors as the final regressor. In this
paper we extend theoretical results obtained for AdaBoost to median
boosting and to its localized variant. First, we extend recent results on efficient margin maximizing to show that the algorithm can converge to the
maximum achievable margin within a preset precision in a finite number
of steps. Then we provide confidence-interval-type bounds on the generalization error.
1 Introduction
In a recent paper [1] we introduced MedBoost, a boosting algorithm that trains base
regressors and returns their weighted median as the final regressor. In another line of research, [2, 3] extended AdaBoost to boost localized or confidence-rated experts with
input-dependent weighting of the base classifiers. In [4] we propose a synthesis of the two
methods, which we call LocMedBoost. In this paper we analyze the algorithmic convergence of MedBoost and LocMedBoost, and provide bounds on the generalization
error.

We start by describing the algorithm in its most general form, and extend the result of [1] on
the convergence of the robust (marginal) training error (Section 2). The robustness of the
regressor is measured in terms of the dispersion of the expert population, and with respect to
the underlying average confidence estimate. In Section 3, we analyze the algorithmic convergence. In particular, we extend recent results [5] on efficient margin maximizing to show
that the algorithm can converge to the maximum achievable margin within a preset precision in a finite number of steps. In Section 4, we provide confidence-interval-type bounds
on the generalization error by generalizing results obtained for AdaBoost [6, 2, 3]. As
in the case of AdaBoost, the bounds justify the algorithmic objective of minimizing the
robust training error. Note that the omitted proofs can be found in [4].
2 The LocMedBoost algorithm and the convergence result
For the formal description, let the training data be $D_n = \{(x_1, y_1), \ldots, (x_n, y_n)\}$, where
data points $(x_i, y_i)$ are from the set $\mathbb{R}^d \times \mathbb{R}$. The algorithm maintains a weight
distribution $\mathbf{w}^{(t)} = (w_1^{(t)}, \ldots, w_n^{(t)})$ over the data points. The weights are initialized uniformly
LocMedBoost(D_n, C_ε(y′, y), Base(D_n, w), ρ, T)
 1   w ← (1/n, . . . , 1/n)
 2   for t ← 1 to T
 3       (h^(t), κ^(t)) ← Base(D_n, w)                            ▷ see (1)
 4       for i ← 1 to n
 5           θ_i ← 1 − 2C_ε(h^(t)(x_i), y_i)                      ▷ base rewards
 6           κ_i ← κ^(t)(x_i)                                     ▷ base confidences
 7       α^(t) ← arg min_α e^{ρα} Σ_{i=1}^{n} w_i^(t) e^{−α θ_i κ_i}
 8       if α^(t) = ∞                                             ▷ θ_i κ_i ≥ ρ for all i = 1, . . . , n
 9           return f^(t)(·) = med_{α,κ(·)} h(·)
10       if α^(t) < 0                                             ▷ equivalent to Σ_{i=1}^{n} w_i^(t) θ_i κ_i < ρ
11           return f^(t−1)(·) = med_{α,κ(·)} h(·)
12       for i ← 1 to n
13           w_i^(t+1) ← w_i^(t) e^{−α^(t) θ_i κ_i} / Σ_{j=1}^{n} w_j^(t) e^{−α^(t) θ_j κ_j}   ▷ = w_i^(t) e^{−α^(t) θ_i κ_i} / Z^(t)
14   return f^(T)(·) = med_{α,κ(·)} h(·)

Figure 1: The pseudocode of the LocMedBoost algorithm. D_n is the training data,
C_ε(y′, y) ≥ I{|y − y′| > ε} is the cost function, Base(D_n, w) is the base regression
algorithm, ρ is the robustness parameter, and T is the number of iterations.
in line 1, and are updated in each iteration in line 13 (Figure 1). We suppose that we are
given a base learner algorithm Base(D_n, w) that, in each iteration t, returns a base
hypothesis that consists of a real-valued base regressor h^(t) ∈ H and a non-negative base
confidence function κ^(t) ∈ K. In general, the base learner should attempt to minimize the
base objective

$$e_1^{(t)}(D_n) = 2 \sum_{i=1}^{n} w_i^{(t)} \kappa^{(t)}(x_i)\, C_\epsilon\!\big(h^{(t)}(x_i), y_i\big) - \bar{\kappa}^{(t)}, \qquad (1)$$

where $C_\epsilon(y, y')$ is an ε-dependent loss function satisfying

$$C_\epsilon(y, y') \ge C_\epsilon^{(0\text{-}1)}(y, y') = I\{|y - y'| > \epsilon\},^1 \qquad (2)$$

¹ The indicator function I{A} is 1 if its argument A is true and 0 otherwise.

and

$$\bar{\kappa}^{(t)} = \sum_{i=1}^{n} w_i^{(t)} \kappa^{(t)}(x_i) \qquad (3)$$

is the average confidence of κ^(t) on the training set. Intuitively, e_1^{(t)}(D_n) is a mixture
of the two objectives of error minimization and confidence maximization. The first term
is a weighted regression loss where the weight of a point x_i is the product of its "constant"
weight w_i^(t) and the confidence κ^(t)(x_i) of the base hypothesis. Minimizing this
term means to place the high-confidence region of the base regressor into areas where the
regression error is small. On the other hand, the minimization of the second term drives the
high-confidence region of the base regressor into dense areas. After Theorem 1, we will
explain the derivation of the base objective (1).
To simplify the notation in Figure 1 and in Theorem 1 below, we define the base rewards
θ_i^(t) and the base confidences κ_i^(t) for each training point (x_i, y_i), i = 1, ..., n, base
regressor h^(t), and base confidence function κ^(t), t = 1, ..., T, as

$$\theta_i^{(t)} = 1 - 2C_\epsilon\!\big(h^{(t)}(x_i), y_i\big) \quad \text{and} \quad \kappa_i^{(t)} = \kappa^{(t)}(x_i), \qquad (4)$$

respectively.²
After computing the base rewards and the base confidences in lines 5 and 6, the algorithm
sets the weight α^(t) of the base regressor h^(t) to the value that minimizes the exponential
loss

$$E_\rho^{(t)}(\alpha) = e^{\rho\alpha} \sum_{i=1}^{n} w_i^{(t)} e^{-\alpha \theta_i \kappa_i}, \qquad (5)$$

where ρ is a robustness parameter that has a role in keeping the algorithm in its operating
range, in avoiding over- and underfitting, and in maximizing the margin (Section 3). If
θ_i κ_i ≥ ρ for all training points, then α^(t) = ∞ and E_ρ^(t)(α^(t)) = 0, so the algorithm
returns the actual regressor (line 9). Intuitively, this means that the capacity of the set of
base hypotheses is too large, so we are overfitting. If α^(t) < 0, the algorithm returns the
regressor up to the last iteration (line 11). Intuitively, this means that the capacity of the
set of base hypotheses is too small, so we cannot find a new base regressor that would
decrease the training loss. In general, α^(t) can be found easily by line-search because of
the convexity of E_ρ^(t)(α). In some special cases, α^(t) can be computed analytically.
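Because E_ρ^(t)(α) is convex, the line search can also be done by bisection on its derivative; a sketch (ours, with heuristic bounds and iteration count) follows.

```python
import numpy as np

def solve_alpha(w, theta, kappa, rho, lo=0.0, hi=50.0, iters=60):
    """Minimize E_rho(a) = exp(rho*a) * sum_i w_i exp(-a*theta_i*kappa_i)
    by bisection on its derivative; E_rho is convex, so dE is nondecreasing."""
    u = theta * kappa
    def dE(a):  # derivative of E_rho at a: sum_i w_i (rho - u_i) e^{a(rho - u_i)}
        return np.sum((rho - u) * w * np.exp(a * (rho - u)))
    if dE(lo) >= 0:        # minimizer is at alpha <= 0:
        return lo          # the caller should stop (line 11 of Figure 1)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if dE(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```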
In lines 9, 11, or 14, the algorithm returns the weighted median of the base regressors.
For the analysis of the algorithm, we formally define the final regressor in a more general
manner. First, let $\bar{\alpha}^{(t)} = \alpha^{(t)} / \sum_{j=1}^{T} \alpha^{(j)}$ be the normalized coefficient of the base hypothesis
(h^(t), κ^(t)), and let

$$c^{(T)}(x) = \sum_{t=1}^{T} \bar{\alpha}^{(t)} \kappa^{(t)}(x) = \frac{\sum_{t=1}^{T} \alpha^{(t)} \kappa^{(t)}(x)}{\sum_{t=1}^{T} \alpha^{(t)}} \qquad (6)$$

be the average confidence function³ after the T-th iteration. Let $f_{\gamma+}^{(T)}(x)$ and $f_{\gamma-}^{(T)}(x)$ be the
weighted $(1 + \gamma/c^{(T)}(x))/2$- and $(1 - \gamma/c^{(T)}(x))/2$-quantiles, respectively, of the base regressors
h^(1)(x), ..., h^(T)(x) with respective weights α^(1)κ^(1)(x), ..., α^(T)κ^(T)(x) (Figure 2(a)).
Formally, for any γ ∈ ℝ, if −c^(T)(x) < γ < c^(T)(x), let

$$f_{\gamma+}^{(T)}(x) = \min\left\{ h^{(j)}(x) : \frac{\sum_{t=1}^{T} \alpha^{(t)} \kappa^{(t)}(x)\, I\{h^{(t)}(x) > h^{(j)}(x)\}}{\sum_{t=1}^{T} \alpha^{(t)} \kappa^{(t)}(x)} < \frac{1 - \gamma/c^{(T)}(x)}{2} \right\}, \qquad (7)$$

$$f_{\gamma-}^{(T)}(x) = \max\left\{ h^{(j)}(x) : \frac{\sum_{t=1}^{T} \alpha^{(t)} \kappa^{(t)}(x)\, I\{h^{(t)}(x) < h^{(j)}(x)\}}{\sum_{t=1}^{T} \alpha^{(t)} \kappa^{(t)}(x)} < \frac{1 - \gamma/c^{(T)}(x)}{2} \right\}, \qquad (8)$$

otherwise (including the case when c^(T)(x) = 0) let $f_{\gamma+}^{(T)}(x) = \gamma \cdot (+\infty)$ and $f_{\gamma-}^{(T)}(x) = \gamma \cdot (-\infty)$.⁴
Then the weighted median is defined as $f^{(T)}(\cdot) = \mathrm{med}_{\alpha,\kappa(\cdot)}\, h(\cdot) = f_{0+}^{(T)}(\cdot)$.
² Note that we will omit the iteration index (t) where it does not cause confusion.
³ Not to be confused with κ̄^(t) in (3), which is the average base confidence over the training data.
⁴ In the degenerative case we define 0 · ∞ = 0/0 = ∞.
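Definitions (7) and (8) translate into a short weighted-quantile routine; the sketch below (ours, assuming distinct base predictions at x, with γ already divided by c^(T)(x)) recovers the weighted median at γ = 0.

```python
import numpy as np

def weighted_quantiles(preds, weights, gamma_over_c=0.0):
    """Weighted quantiles f_{gamma+} and f_{gamma-} of base regressor outputs
    at one input x, following (7)-(8); gamma_over_c = 0 gives the median.

    preds: array of h^(t)(x); weights: alpha^(t) * kappa^(t)(x), nonnegative,
    with -1 < gamma_over_c < 1 assumed."""
    order = np.argsort(preds)
    p, w = np.asarray(preds)[order], np.asarray(weights)[order]
    total = w.sum()
    thresh = (1.0 - gamma_over_c) / 2.0
    below = np.cumsum(w) - w           # weight strictly below each candidate
    above = total - np.cumsum(w)       # weight strictly above each candidate
    f_plus = p[np.nonzero(above / total < thresh)[0][0]]    # min satisfying (7)
    f_minus = p[np.nonzero(below / total < thresh)[0][-1]]  # max satisfying (8)
    return f_plus, f_minus

# gamma_over_c = 0 recovers the plain weighted median:
print(weighted_quantiles(np.array([1.0, 3.0, 2.0]),
                         np.array([0.2, 0.3, 0.5])))  # -> (2.0, 2.0)
```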
Figure 2: (a) Weighted (1 + γ/c^(T)(x))/2- and (1 − γ/c^(T)(x))/2-quantiles, and the weighted
median of linear base regressors with equal weights α = 1/9, constant base confidence
functions κ(x) ≡ 1, and γ/c^(T)(x) ≡ 0.25. (b) γ-robust ε-precise regressor.
To assess the final regressor f^(T)(·), we say that f^(T)(·) is γ-robust ε-precise on (x_i, y_i)
if and only if $f_{\gamma+}^{(T)}(x_i) \le y_i + \epsilon$ and $f_{\gamma-}^{(T)}(x_i) \ge y_i - \epsilon$. For γ ≥ 0, this condition is
equivalent to both quantiles being in the "ε-tube" around y_i (Figure 2(b)).

In the rest of this section we show that the algorithm minimizes the relative frequency
of training points on which f^(T)(·) is not ρ-robust ε-precise. Formally, let the γ-robust
ε-precise training error of f^(T) be defined as

$$L^{(\gamma)}(f^{(T)}) = \frac{1}{n} \sum_{i=1}^{n} I\left\{ f_{\gamma+}^{(T)}(x_i) > y_i + \epsilon \;\vee\; f_{\gamma-}^{(T)}(x_i) < y_i - \epsilon \right\}.^5 \qquad (9)$$
If γ = 0, L^(0)(f^(T)) gives the relative frequency of training points on which the regressor
f^(T) has a larger L1 error than ε. If we have equality in (2), this is exactly the average loss
of the regressor f^(T) on the training data. A small value for L^(0)(f^(T)) indicates that the
regressor predicts most of the training points with ε-precision, whereas a small value for
L^(γ)(f^(T)) with a positive γ suggests that the prediction is not only precise but also robust
in the sense that a small perturbation of the base regressors and their weights will not
increase L^(0)(f^(T)). For classification with bi-valued base classifiers h : ℝ^d → {−1, 1},
the definition (9) (with ε = 1) recovers the traditional notion of robust training error, that
is, L^(γ)(f^(T)) is the relative frequency of data points with margin smaller than γ.
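Definition (9) is straightforward to evaluate given the two quantiles; a short sketch (ours):

```python
import numpy as np

def robust_training_error(f_plus, f_minus, y, eps):
    """Empirical gamma-robust eps-precise training error, definition (9).
    f_plus, f_minus: arrays of f_{gamma+}(x_i) and f_{gamma-}(x_i) over the
    training set (e.g. from weighted_quantiles above); y: target values."""
    violated = (f_plus > y + eps) | (f_minus < y - eps)
    return float(np.mean(violated))
```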
The following theorem upper bounds the γ-robust ε-precise training error L^(γ) of the
regressor f^(T) output by LocMedBoost.

Theorem 1 Let L^(γ)(f^(T)) be defined as in (9) and suppose that condition (2) holds for the
loss function C_ε(·, ·). Define the base rewards θ_i^(t) and the base confidences κ_i^(t) as in (4).
Let w_i^(t) be the weight of training point x_i after the t-th iteration (updated in line 13 in
Figure 1), and let α^(t) be the weight of the base regressor h^(t)(·) (computed in line 7 in
Figure 1). Then for all γ ∈ ℝ,

$$L^{(\gamma)}(f^{(T)}) \le \prod_{t=1}^{T} E_\gamma^{(t)}(\alpha^{(t)}), \qquad (10)$$

where E_γ^(t)(α^(t)) is defined in (5).

⁵ For the sake of simplicity, in the notation we suppress the fact that L^(γ) depends on the whole
sequence of base regressors, base confidences, and weights, not only on the final regressor f^(T).
The proof is based on the observation that if the median of the base regressors goes further
than ε from the real response y_i at training point x_i, then most of the base regressors must
also be far from y_i, giving small base rewards to this point.
The goal of LocMedBoost is to minimize L^(γ)(f^(T)) at γ = ρ so, in view of Theorem 1,
our goal in each iteration t is to minimize E_ρ^(t) (5). To derive the base objective (1), we
follow the two-step functional gradient descent procedure [7]; that is, first we maximize
the negative gradient $-E_\rho'(\alpha)$ at α = 0, then we do a line search to determine α^(t). Using
this approach, the base objective becomes $e_1^{(t)}(D_n) = -\sum_{i=1}^{n} w_i^{(t)} \theta_i \kappa_i$, which is identical
to (1). Note that since E_ρ^(t)(α) is convex and E_ρ^(t)(0) = 1, a positive α^(t) means that
min_α E_ρ^(t)(α) = E_ρ^(t)(α^(t)) < 1, so the condition in line 10 in Figure 1 guarantees that the
upper bound of (10) decreases in each step.
3 Setting ρ and maximizing the minimum margin
In practice, AdaBoost works well with ρ = 0, so setting ρ to a positive value is only
an alternative regularization option to early stopping. In the case of LocMedBoost,
however, one must carefully choose ρ to keep the algorithm in its operating range and to
avoid over- and underfitting. A too small ρ means that the algorithm can overfit and stop in
line 9. In binary classification this is an unrealistic situation: it means that there is a base
classifier that correctly classifies all data points. On the other hand, it can happen easily
in the abstaining classifier/regressor model, when κ^(t)(x) = 0 on a possibly large input
region. In this case, a base classifier can correctly classify (or a base regressor can give
positive base rewards θ_i to) all data points on which it does not abstain, so if ρ = 0, the
algorithm stops in line 9. At the other end of the spectrum, a large ρ can make the algorithm
underfit and stop in line 11, so one needs to set ρ carefully in order to avoid early stopping
in lines 9 or 11.

From the point of view of generalization, ρ also has an important role as a regularization
parameter. A larger ρ decreases the stepsize α^(t) in the functional gradient view. From
another aspect, a larger ρ decreases the effective capacity of the class of base hypotheses
by restricting the set of admissible base hypotheses to those having small errors. In general,
ρ has a potential role in balancing between over- and underfitting so, in practice, we suggest
that it be validated together with the number of iterations T and other possible complexity
parameters of the base hypotheses.

In the context of AdaBoost, there have been several proposals to set ρ in an adaptive
way to effectively maximize the minimum margin. In the rest of this section, we extend the
analysis of marginal boosting [5] to this general case. Although the aggressive maximization
of the minimum margin can lead to overfitting, the analysis can provide valuable insight
into the understanding of LocMedBoost and so it can guide the setting of ρ in practice.
6
For the sake of simplicity, let us assume that
hypotheses (h, ?) come
base
from a finite set
(t)
(1)
(1)
(t)
(t)
HN with cardinality N , and let H = (h , ? ), . . . , (h , ? ) be the set of base
hypotheses after the tth iteration. Let us define the edge of the base hypothesis (h, ?) ? H N
as7
n
n
X
X
?(h,?) (w) =
w i ?i ? i =
wi ?(xi ) 1 ? 2C h(xi ), yi ,
i=1
i=1
and the maximum edge in the tth iteration as ? ? (t) = max(h,?)?HN ?(h,?) (w(t) ). Note
that ?(h,?) (w) = ?e1 (Dn ), so with this terminology, the objective of the base learner is
6
7
The analysis can be extended to infinite base sets along the lines of [5].
For the sake of simplicity, in the notation we suppress the dependence of ? (h,?) on Dn .
to maximize the edge $\gamma^{(t)} = \gamma_{(h^{(t)},\kappa^{(t)})}(\mathbf{w}^{(t)})$ (if the maximum is achieved, then γ^(t) =
γ^{*(t)}), and the algorithm stops in line 11 if the edge γ^(t) is less than ρ. On the other hand,
let us define the margin on a point (x, y) as the average reward⁸

$$\rho_{(x,y)}(\boldsymbol{\alpha}) = \sum_{j=1}^{N} \bar{\alpha}^{(j)} \theta^{(j)} \kappa^{(j)} = \sum_{j=1}^{N} \bar{\alpha}^{(j)} \kappa^{(j)}(x)\big(1 - 2C_\epsilon(h^{(j)}(x), y)\big).$$

Let us denote the minimum margin over the data points in the t-th iteration by

$$\rho^{*(t)} = \min_{(x,y)\in D_n} \rho_{(x,y)}(\boldsymbol{\alpha}^{(t-1)}), \qquad (11)$$

where α^(t−1) = (α^(1), ..., α^(t−1)) is the vector of base hypothesis coefficients up to the
(t − 1)-th iteration.

It is easy to see that in each iteration, the maximum edge over the base hypotheses is at
least the minimum margin over the training points:

$$\gamma^{*(t)} = \max_{(h,\kappa)\in H_N} \gamma_{(h,\kappa)}(\mathbf{w}^{(t)}) \;\ge\; \min_{(x,y)\in D_n} \rho_{(x,y)}(\boldsymbol{\alpha}^{(t-1)}) = \rho^{*(t)}.$$
Moreover, as several authors (e.g., [5]) noted in the context of AdaBoost, by the Min-Max Theorem of von Neumann [8] we have

$$\gamma^{*} = \min_{\mathbf{w}} \max_{(h,\kappa)\in H_N} \gamma_{(h,\kappa)}(\mathbf{w}) = \max_{\boldsymbol{\alpha}} \min_{(x,y)\in D_n} \rho_{(x,y)}(\boldsymbol{\alpha}) = \rho^{*},$$

so the minimum achievable maximal edge by any weighting over the training points is equal
to the maximum achievable minimal margin by any weighting over the base hypotheses.
To converge to ρ* within a factor ν in finite time, [5] sets

$$\rho_{RW}^{(t)} = \min_{j=1,\ldots,t} \gamma^{(j)} - \nu,$$

and shows that ρ^{*(t)} exceeds ρ* − ν after $\lceil 2 \log n / \nu^2 \rceil + 1$ steps.
In the following, we extend these results to the general case of LocMedBoost. First we
define the minimum and maximum achievable base rewards by

$$\theta_{\min} = \min_{(h,\kappa)\in H_N} \min_{(x,y)\in D_n} \kappa(x)\big(1 - 2C_\epsilon(h(x), y)\big), \qquad (12)$$

$$\theta_{\max} = \max_{(h,\kappa)\in H_N} \max_{(x,y)\in D_n} \kappa(x)\big(1 - 2C_\epsilon(h(x), y)\big), \qquad (13)$$

respectively. Let $A = \theta_{\max} - \theta_{\min}$, $\tilde{\gamma}^{(t)} = \gamma^{(t)} - \theta_{\min}$, and $\tilde{\rho}^{(t)} = \rho^{(t)} - \theta_{\min}$.⁹
Lemma 1 (Generalization of Lemma 3 in [5]) Assume that $\theta_{\min} \le \rho^{(t)} \le \gamma^{(t)}$. Then

$$E_{\rho^{(t)}}^{(t)}(\alpha^{(t)}) \le \exp\!\left( -\frac{\tilde{\rho}^{(t)}}{A} \log\frac{\tilde{\rho}^{(t)}}{\tilde{\gamma}^{(t)}} - \frac{A - \tilde{\rho}^{(t)}}{A} \log\frac{A - \tilde{\rho}^{(t)}}{A - \tilde{\gamma}^{(t)}} \right). \qquad (14)$$
Finite convergence of LocMedBoost both with ρ^(t) = ρ = const. and with an adaptive
ρ^(t) = ρ_RW^(t) is based on the following general result.

Theorem 2 Assume that $\rho^{(t)} \le \gamma^{(t)} - \nu$. Let $\bar{\rho} = \sum_{t=1}^{T} \bar{\alpha}^{(t)} \rho^{(t)}$. Then $L^{(\bar{\rho})}(f^{(T)}) = 0$
(so $\rho^{*(t)} > \bar{\rho}$) after at most $T = \lceil A^2 \log n / (2\nu^2) \rceil + 1$ iterations.

⁸ For the sake of simplicity, in the notation we suppress the dependence of ρ_{(x,y)} on H_N.
⁹ In binary classification, θ_min = −1, θ_max = 1, A = 2, $\tilde{\gamma}^{(t)} = 1 + \gamma^{(t)}$, and $\tilde{\rho}^{(t)} = 1 + \rho^{(t)}$.
The first consequence is the convergence of LocMedBoost with a constant ρ.

Corollary 1 (Generalization of Corollary 4 in [5]) Assume that the weak learner always
achieves an edge $\gamma^{(t)} \ge \gamma^{*}$. If $\theta_{\min} \le \rho < \gamma^{*}$, then $\rho^{*(t)} > \rho$ after at most
$T = \lceil A^2 \log n / (2(\gamma^{*} - \rho)^2) \rceil + 1$ steps.

The second corollary shows that if ρ^(t) is set adaptively to ρ_RW^(t) then the minimum margin
ρ^{*(t)} will converge to ρ* within a precision ν in a finite number of steps.

Corollary 2 (Generalization of Theorem 6 in [5]) Assume that the weak learner always
achieves an edge $\gamma^{(t)} \ge \gamma^{*(t)}$. If $\theta_{\min} \le \rho^{(t)} = \rho_{RW}^{(t)}$, ν > 0, then $\rho^{*(t)} > \rho^{*} - \nu$ after at
most $T = \lceil A^2 \log n / (2\nu^2) \rceil + 1$ iterations.
4 The generalization error
In this section we extend probabilistic bounds on the generalization error obtained for
AdaBoost [6], confidence-rated AdaBoost [2], and localized boosting [3]. Here we
suppose that the data set D_n is generated independently according to a distribution D over
ℝ^d × ℝ. The results provide bounds on the confidence-interval-type error

$$L(f^{(T)}) = P_D\!\left[ \left|f^{(T)}(X) - Y\right| > \epsilon \right],$$

where (X, Y) is a random point generated according to D independently from points in
D_n. The bounds state that with a large probability,

$$L(f^{(T)}) < L^{(\gamma)}(f^{(T)}) + C(n, \gamma, H, K),$$

where the complexity term C depends on the size or the pseudo-dimension of the base
regressor set H, and the smoothness of the base confidence functions in K. As in the case
of AdaBoost, these bounds qualitatively justify the minimization of the robust training
error L^(γ)(f^(T)).
Let C be the set of combined regressors obtained as a weighted median of base regressors
from H, that is,

$$\mathcal{C} = \left\{ f(\cdot) = \mathrm{med}_{\alpha,\kappa(\cdot)}\, h(\cdot) \;:\; \mathbf{h} \in H^N,\; \boldsymbol{\alpha} \in \mathbb{R}_+^N,\; \boldsymbol{\kappa} \in K^N,\; N \in \mathbb{Z}_+ \right\}.$$

In the simplest case, we assume that H is finite and base coefficients are constant.

Theorem 3 (Generalization of Theorem 1 in [6]) Let D be a distribution over ℝ^d × ℝ,
and let D_n be a sample of n points generated independently at random according to D.
Assume that the base regressor set H is finite, and K contains only κ(x) ≡ 1. Then with
probability 1 − δ over the random choice of the training set D_n, any f ∈ C satisfies the
following bound for all γ > 0:

$$L(f) < L^{(\gamma)}(f) + O\!\left( \frac{1}{\sqrt{n}} \left( \frac{\log n \, \log|H|}{\gamma^2} + \log\frac{1}{\delta} \right)^{1/2} \right).$$
Similarly to the proof of Theorem 1 in [6], we construct a set C_N that contains
unweighted medians of N base functions from H, then approximate f by
$g(\cdot) = \mathrm{med}_1\big(h_1(\cdot), \ldots, h_N(\cdot)\big) \in \mathcal{C}_N$, where the base functions h_i are selected randomly
according to the coefficient distribution $\tilde{\boldsymbol{\alpha}}$. We then separate the one-sided error into two
terms by

$$P_D\big(f(X) > Y + \epsilon\big) \le P_D\big(g_{\frac{\gamma}{2}+}(X) > Y + \epsilon\big) + P_D\big(g_{\frac{\gamma}{2}+}(X) \le Y + \epsilon \,\wedge\, f(X) > Y + \epsilon\big),$$

and then upper bound the two terms as in [6].
The second theorem extends the first to the case of infinite base regressor sets.
Theorem 4 (Generalization of Theorem 2 of [6]) Let D be a distribution over ℝ^d × ℝ,
and let D_n be a sample of n points generated independently at random according to D.
Assume that the base regressor set H has pseudodimension p, and K contains only κ(x) ≡
1. Then with probability 1 − δ over the random choice of the training set D_n, any f ∈ C
satisfies the following bound for all γ > 0:

$$L(f) < L^{(\gamma)}(f) + O\!\left( \frac{1}{\sqrt{n}} \left( \frac{p \log^2(n/p)}{\gamma^2} + \log\frac{1}{\delta} \right)^{1/2} \right).$$
The proof goes as in Theorem 3 and in Theorem 2 in [6] until we upper bound the shatter
coefficient of the set

$$\mathcal{A} = \left\{ \{(x, y) : g_{\frac{\gamma}{2}+}(x) > y + \epsilon\} \;:\; g \in \mathcal{C}_N,\; \gamma = 0, \tfrac{4}{N}, \ldots, 2 \right\}$$

by $(N/2 + 1)(en/p)^{pN}$, where p is the pseudodimension of H (or the VC dimension of
$H_+ = \{ \{(x, y) : h(x) > y\} : h \in H \}$).
In the most general case K can contain smooth functions.
Theorem 5 (Generalization of Theorem 1 of [3]) Let D be a distribution over ℝ^d × ℝ,
and let D_n be a sample of n points generated independently at random according to D.
Assume that the base regressor set H has pseudodimension p, and K contains functions
κ(x) which are lower bounded by a constant a, and which satisfy for all x, x′ ∈ ℝ^d the
Lipschitz condition |κ(x) − κ(x′)| ≤ L‖x − x′‖_∞. Then with probability 1 − δ over the
random choice of the training set D_n, any f ∈ C satisfies the following bound for all γ > 0:

$$L(f) < L^{(\gamma)}(f) + O\!\left( \frac{1}{\sqrt{n}} \left( \frac{(L/(a\gamma))^d\, p \log^2(n/p)}{\gamma^2} + \log\frac{1}{\delta} \right)^{1/2} \right).$$
5 Conclusion
In this paper we have analyzed the algorithmic convergence of LocMedBoost by generalizing recent results on efficient margin maximization, and provided bounds on the generalization error by extending similar bounds obtained for AdaBoost.
References
[1] B. Kégl, "Robust regression by boosting the median," in Proceedings of the 16th Conference on
Computational Learning Theory, Washington, D.C., 2003, pp. 258–272.
[2] R. E. Schapire and Y. Singer, "Improved boosting algorithms using confidence-rated predictions," Machine Learning, vol. 37, no. 3, pp. 297–336, 1999.
[3] R. Meir, R. El-Yaniv, and S. Ben-David, "Localized boosting," in Proceedings of the 13th
Annual Conference on Computational Learning Theory, 2000, pp. 190–199.
[4] B. Kégl, "Confidence-rated regression by boosting the median," Tech. Rep. 1241, Department
of Computer Science, University of Montreal, 2004.
[5] G. Rätsch and M. K. Warmuth, "Efficient margin maximizing with boosting," Journal of Machine
Learning Research (submitted), 2003.
[6] R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee, "Boosting the margin: a new explanation
for the effectiveness of voting methods," Annals of Statistics, vol. 26, no. 5, pp. 1651–1686,
1998.
[7] L. Mason, P. Bartlett, J. Baxter, and M. Frean, "Boosting algorithms as gradient descent," in
Advances in Neural Information Processing Systems. 2000, vol. 12, pp. 512–518, The MIT Press.
[8] J. von Neumann, "Zur Theorie der Gesellschaftsspiele," Math. Ann., vol. 100, pp. 295–320,
1928.
1,769 | 2,607 | Large-Scale Prediction of Disulphide Bond Connectivity

Pierre Baldi and Jianlin Cheng
School of Information and Computer Science
University of California, Irvine
Irvine, CA 92697-3425
{pfbaldi,jianlinc}@ics.uci.edu
Alessandro Vullo
Computer Science Department
University College Dublin
Dublin, Ireland
[email protected]
Abstract
The formation of disulphide bridges among cysteines is an important feature of protein structures. Here we develop new methods for the prediction of disulphide bond connectivity. We first build a large curated data
set of proteins containing disulphide bridges and then use 2-Dimensional
Recursive Neural Networks to predict bonding probabilities between cysteine pairs. These probabilities in turn lead to a weighted graph matching
problem that can be addressed efficiently. We show how the method consistently achieves better results than previous approaches on the same
validation data. In addition, the method can easily cope with chains with
arbitrary numbers of bonded cysteines. Therefore, it overcomes one of
the major limitations of previous approaches restricting predictions to
chains containing no more than 10 oxidized cysteines. The method can
be applied both to situations where the bonded state of each cysteine is
known or unknown, in which case bonded state can be predicted with
85% precision and 90% recall. The method also yields an estimate for
the total number of disulphide bridges in each chain.
1 Introduction
The formation of covalent links among cysteine (Cys) residues with disulphide bridges is
an important and unique feature of protein folding and structure. Simulations [1], experiments in protein engineering [15, 8, 14], theoretical studies [7, 18], and even evolutionary
models [9] stress the importance of disulphide bonds in stabilizing the native state of proteins. Disulphide bridges may link distant portions of a protein sequence, providing strong
structural constraints in the form of long-range interactions. Thus prediction/knowledge of
the disulphide connectivity of a protein is important and provides essential insights into its
structure and possibly also into its function and evolution.
Only recently has the problem of predicting disulphide bridges received increased attention.
In the current literature, this problems is typically split into three subproblems: (1) prediction of whether a protein chain contains intra-chain disulphide bridges or not; (2) prediction of the intra-chain bonded/non-bonded state of individual cysteines; and (3) prediction
of intra-chain disulphide bridges, i.e. of the actual pairings between bonded cysteines (see
Fig.1). In this paper, we address the problem of intra-chain connectivity prediction, and
AVITGACERDLQCGKGTCCAVSLWIKSVRVCTPVGTSGEDCHPASHKIPFSGQRKMHHTCPCAPNLACVQTSPKKFKCLSK
Figure 1: Structure (top) and connectivity pattern (bottom) of intestinal toxin 1, PDB code 1IMT.
Disulphide bonds in the structure are shown as thick lines.
specifically the solution of problem (3) alone, and of problems (2) and (3) simultaneously.
Existing approaches to connectivity prediction use stochastic global optimization [10],
combinatorial optimization [13] and machine learning techniques [11, 17]. The method in
[10] represents the set of potential disulphide bridges in a sequence as a complete weighted
undirected graph. Vertices are oxidized cysteines and edges are labeled by the strength of
interaction (contact potential) in the associated pair of cysteines. A simulated annealing
approach is first used to find an optimal set of weights. After a complete labeled graph
is obtained, candidate bridges are then located by finding the maximum weight perfect
matching.¹
The method in [17] attempts to solve the problem using a different machine learning approach. Candidate connectivity patterns are modelled as undirected graphs. A recursive
neural network architecture is trained to score candidate patterns according to a similarity
metric with respect to correct graphs. Vertices of the graphs are labeled by fixed-size vectors corresponding to multiple alignment profiles in a local window around each cysteine.
During prediction, the score computed by the network is used to exhaustively search the
space of candidate graphs. This method, tested on the same data as in [11], achieved the
best results. Unfortunately, for computational reasons, both this method and the previous
one can only deal with sequences containing a limited number of bonds (K ≤ 5).
A different approach to predicting disulphide bridges is reported in [13], where finding
disulphide bridges is part of a more general protocol aimed at predicting the topology
of ?-sheets in proteins. Residue-to-residue contacts (including Cys-Cys bridges) are predicted by solving a series of integer linear programming problems in which customized
hydrophobic contact energies must be maximized. This method cannot be compared with
the other approaches because the authors report validation results only for two relatively
short polypeptides with few bonds (2 and 3).
In this paper we use a 2-Dimensional Recursive Neural Network (2D-RNN, [4]) to predict
disulphide connectivity in proteins starting from their primary sequence and its homologues. The outputs of the 2D-RNN are the pairwise probabilities of the existence of a bridge
between any pair of cysteines. Candidate disulphide connectivities are predicted by finding
the maximum weight perfect matching. The proposed framework represents a significant
improvement in disulphide connectivity prediction for several reasons. First, we show how
the method consistently achieves better results than all previous approaches on the same
validation data. Second, our architecture can easily cope with chains with arbitrary number
^1 A perfect matching of a graph (V, E) is a subset E' ⊆ E such that each vertex v ∈ V is met by only one edge in E'.
Figure 2: (a) General layout of a 2D-RNN for processing two-dimensional objects such as disulphide
contacts, with nodes regularly arranged in one input plane, one output plane, and four hidden planes.
In each plane, nodes are arranged on a square lattice. The hidden planes contain directed edges
associated with the square lattices. All the edges of the square lattice in each hidden plane are
oriented in the direction of one of the four possible cardinal corners: NE, NW, SW, SE. Additional
directed edges run vertically in column from the input plane to each hidden plane, and from each
hidden plane to the output plane. (b) Connections within a vertical column (i, j) of the directed
graph. I_{i,j} represents the input, O_{i,j} the output, and NE_{i,j} represents the hidden variable in the
North-East hidden plane.
of bonded cysteines. Therefore, it overcomes the limitation of previous approaches which
restrict predictions to chains with no more than 10 oxidized cysteines. Third, our method can be applied both to situations where the bonded state of each cysteine is known
or unknown. And finally, once trained, our system is very rapid and can be used on a
high-throughput scale.
2 Methods

Algorithms
To predict disulphide connectivity patterns, we use the 2D-RNN approach described in [4],
whereby a suitable Bayesian network is recast, for computational effectiveness, in terms
of recursive neural networks, where local conditional probability tables in the underlying
directed graph are replaced by deterministic relationships between a variable and its parent
node variables. These functions are parameterized by neural networks using appropriate
weight sharing as described below. Here the underlying directed graph for disulphide connectivity has six 2D-layers: input, output, and four hidden layers (Figure 2(a)). Vertical
connections, within an (i, j) column, run from input to hidden and output layers, and from
hidden layers to output (Figure 2(b)). In each one of the four hidden planes, square lattice
connections are oriented towards one of the four cardinal corners. Detailed motivation for
these architectures can be found in [4] and a mathematical analysis of their relationships
to Bayesian networks in [5]. The essential point is that they combine the flexibility of
graphical models with the deterministic propagation and learning speed of artificial neural
networks. Unlike traditional neural networks with fixed-size input, these architectures can
process inputs of variable structure and length, and allow lateral propagation of contextual
information over considerable length scales.
In a disulphide contact map prediction, the (i, j) output represents the probability of
whether the i-th and j-th cysteines in the sequence are linked by a disulphide bridge
or not. This prediction depends directly on the (i, j) input and the four-hidden units in
the same column, associated with omni-directional contextual propagation in the hidden
planes. Hence, using weight sharing across different columns, the model can be summarized by 5 distinct neural networks in the form
$$
\left\{
\begin{aligned}
O_{i,j}      &= \mathcal{N}_O\!\left(I_{i,j},\, H^{NW}_{i,j},\, H^{NE}_{i,j},\, H^{SW}_{i,j},\, H^{SE}_{i,j}\right) \\
H^{NE}_{i,j} &= \mathcal{N}_{NE}\!\left(I_{i,j},\, H^{NE}_{i-1,j},\, H^{NE}_{i,j-1}\right) \\
H^{NW}_{i,j} &= \mathcal{N}_{NW}\!\left(I_{i,j},\, H^{NW}_{i+1,j},\, H^{NW}_{i,j-1}\right) \\
H^{SW}_{i,j} &= \mathcal{N}_{SW}\!\left(I_{i,j},\, H^{SW}_{i+1,j},\, H^{SW}_{i,j+1}\right) \\
H^{SE}_{i,j} &= \mathcal{N}_{SE}\!\left(I_{i,j},\, H^{SE}_{i-1,j},\, H^{SE}_{i,j+1}\right)
\end{aligned}
\right.
\qquad (1)
$$
where each N denotes a neural network parameterization. Learning can proceed by gradient descent (backpropagation) because the underlying graph is acyclic.
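For concreteness, the propagation of Eq. (1) can be sketched as follows (our illustration in Python/NumPy, not the authors' code); `nets` is a dictionary of callables standing in for the five trained networks, and the visit orders guarantee that both lattice predecessors of a cell are computed before the cell itself:

```python
import numpy as np

def forward_2drnn(I, nets, d_hidden=16):
    """One forward pass of a 2D-RNN following Eq. (1).

    I: array of shape (M, M, d_in), one input vector per cysteine pair (i, j).
    nets: dict of callables {'NE', 'NW', 'SW', 'SE', 'O'} standing in for the
    five trained networks; each maps a concatenated input to a vector/scalar."""
    M = I.shape[0]
    # pad each hidden plane with a zero border so boundary cells see zeros
    H = {k: np.zeros((M + 2, M + 2, d_hidden)) for k in ("NE", "NW", "SW", "SE")}

    # visit orders chosen so both lattice predecessors exist when a cell is hit
    orders = {
        "NE": [(i, j) for i in range(M) for j in range(M)],
        "NW": [(i, j) for i in reversed(range(M)) for j in range(M)],
        "SW": [(i, j) for i in reversed(range(M)) for j in reversed(range(M))],
        "SE": [(i, j) for i in range(M) for j in reversed(range(M))],
    }
    # lattice predecessors of cell (i, j) in each plane, as (di, dj) offsets
    preds = {"NE": ((-1, 0), (0, -1)), "NW": ((+1, 0), (0, -1)),
             "SW": ((+1, 0), (0, +1)), "SE": ((-1, 0), (0, +1))}

    for k in ("NE", "NW", "SW", "SE"):
        (di1, dj1), (di2, dj2) = preds[k]
        for i, j in orders[k]:
            p1 = H[k][i + 1 + di1, j + 1 + dj1]   # e.g. H^NE_{i-1,j}
            p2 = H[k][i + 1 + di2, j + 1 + dj2]   # e.g. H^NE_{i,j-1}
            H[k][i + 1, j + 1] = nets[k](np.concatenate([I[i, j], p1, p2]))

    O = np.zeros((M, M))
    for i in range(M):
        for j in range(M):
            ctx = [H[k][i + 1, j + 1] for k in ("NW", "NE", "SW", "SE")]
            O[i, j] = nets["O"](np.concatenate([I[i, j], *ctx]))
    return O
```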
The input information is based on the sequence itself or rather the corresponding profile
derived by multiple alignment methods to leverage evolutionary information, possibly augmented with secondary structure and solvent accessibility information derived from the
PDB files and/or our SCRATCH suite of predictors [16, 3, 4]. For a sequence of length N
and containing M cysteines, the output layer contains M × M units. The input and hidden
layer can scale like N × N if the full sequence is used, or like M × M if only fixed-size
windows around each cysteine are used, as in the experiments reported here. The results
reported here are obtained using local windows of size 5 around each cysteine, as in [17].
The input of each position within a window is the normalized frequency of all 20 amino
acids at that position in the multiple alignment generated by aligning the sequence with the
sequences in the NR database using the PSI-BLAST program as described, for instance, in
[16]. Gaps are treated as one additional amino acid. For each (i, j) location an extra input
is added to represent the absolute linear distance between the two corresponding cysteines.
Finally, it is essential to remark that the same 2D-RNN approach can be trained and applied
here in two different modes. In the first mode, we can assume that the bonded state of the
individual cysteines is known, for instance through the use of a specialized predictor for
bonded/non-bonded states. Then if the sequence contains M cysteines, 2K (2K ≤ M)
of which are intra-chain disulphide bonded, the prediction of the connectivity can focus
on the 2K bonded cysteines exclusively and ignore the remaining M − 2K cysteines that
are not bonded. In the second mode, we can try to solve both prediction problems (bond
state and connectivity) at the same time by focusing on all cysteines in a given sequence.
In both cases, the output is an array of pairwise probabilities from which the connectivity
pattern graph must be inferred. In the first case, the total number of bonds or edges in the
connectivity graph is known (K). In the second case, the total number of edges must be
inferred. In section 3, we show that the sum of all probabilities across the output array can be
used to estimate the number of disulphide contacts.
Data Preparation
In order to assess our method, two data sets of known disulphide connectivities were compiled from the Swiss-Prot archive [2]. First, we considered the same selection of sequences
as adopted in [11, 17] and taken from the Swiss-Prot database release no. 39 (October
2000). Additionally, we collected and filtered a more recent selection of chains extracted
from the latest available Swiss-Prot archive, version 41.19 (August 2003). In the following,
we refer to these two data sets as SP39 and SP41, respectively.
SP41 was compiled with the same filtering procedure used for SP39. Specifically, only
chains whose structure is deposited in the Protein Data Bank PDB [6] were retained. We
filtered out proteins with disulphide bonds assigned tentatively or disulphide bonds inferred
by similarity. We finally ended up with 966 chains, each with a number of disulphide bonds
in the range of 1 to 24. As previously pointed out, our methodology is not limited by the
number of disulphide bonds, hence we were able to retain and test the algorithm on the
whole filtered set of non-trivial chains. This set consists of 712 sequences, each containing
at least two bridges (K ≥ 2), the case K = 1 being trivial when the bonded state is known.
By comparison, SP39 includes 446 chains with no more than 5 bridges; SP41 additionally
includes 266 sequences and 112 of these have more than 10 oxidized cysteines.
In order to avoid biases during the assessment procedure and to perform k-fold cross validation, SP41 was partitioned into ten different subsets, with the constraint that sequence
similarity between two different subsets be less than or equal to 30%. This is similar to the
criteria adopted in [17, 10], where SP39 was split into four subsets.
Graph Matching to Derive Connectivity from Output Probabilities
In the case where the bonded state of the cysteines is known, one has a graph with 2K
nodes, one for each bonded cysteine. The weight associated with each edge is the probability that the corresponding bridge exists, as computed by the predictor. The problem is
then to find a connectivity pattern with K edges and maximum weight, where each cysteine is paired uniquely with another cysteine. The maximum weight matching algorithm
of Gabow [12] is used to choose the paired cysteines (edges); its time complexity is cubic,
O(V^3) = O(K^3), where V is the number of vertices, and its space complexity is linear, O(V) = O(K), beyond the storage of the graph. Note that because the number of bonded cysteines
in general is not very large, it is also possible in many cases to use an exhaustive search of
all possible combinations. Indeed, the number of combinations is 1 × 3 × 5 × · · · × (2K − 1),
which yields 945 connectivity patterns in the case of 10 bonded cysteines.
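As an illustration of this step, the sketch below (ours, not the authors' implementation) computes the maximum-weight perfect matching with NetworkX, whose `max_weight_matching` runs in cubic time like Gabow's algorithm, and also shows the exhaustive enumeration of all 1 × 3 × 5 × · · · × (2K − 1) pairings for small K:

```python
import networkx as nx

def match_networkx(P):
    """P: symmetric (2K x 2K) array of bridge probabilities for the bonded
    cysteines. Returns a set of index pairs forming the max-weight matching."""
    n = len(P)
    G = nx.Graph()
    G.add_weighted_edges_from((i, j, P[i][j])
                              for i in range(n) for j in range(i + 1, n))
    # maxcardinality=True forces a perfect matching (K edges) when one exists
    return nx.max_weight_matching(G, maxcardinality=True)

def match_exhaustive(P):
    """Enumerate all 1*3*5*...*(2K-1) perfect matchings; feasible for small K."""
    def pairings(items):
        if not items:
            yield []
            return
        first, rest = items[0], items[1:]
        for k, partner in enumerate(rest):
            for tail in pairings(rest[:k] + rest[k + 1:]):
                yield [(first, partner)] + tail
    return max(pairings(list(range(len(P)))),
               key=lambda m: sum(P[i][j] for i, j in m))
```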
The case where the bonded state of the cysteines is not known is slightly more involved
and the Gabow algorithm cannot be applied directly since the graph has M nodes but, if
some of the cysteines are not bonded, only a subset of 2K < M nodes participate in the
final maximum weighted matching. Alternatively, we use a greedy algorithm to derive
the connectivity pattern using the estimate of the total number of bonds. First, we order
the edges in decreasing order of probabilities. Then we pick the edge with the highest
probability. Then we pick the next edge with highest probability that is not incident to the
first edge and so forth, until K edges have been selected. Because this greedy procedure is
not guaranteed to find the global optimum, we find it useful to make it a little more robust
by repeating L times. In each run i = 1, . . . , L, the first edge selected is the i-th most
probable edge. In other words, the different runs differ by the choice of the first edge, noting
that in practice the optimal solution always contains one of the top L edges. This procedure
works well in practice because the edges with largest probabilities tend to occur in the final
pattern. For L reasonably large, the optimal connectivity pattern can usually be found. We
have compared this method with Gabow's algorithm in the case where the bonding state is
known and observed that when L = 6, this greedy heuristic yields results that are as good
as those obtained with Gabow's algorithm which, in this case, is guaranteed to find a global
optimum. The results reported here are obtained using the greedy procedure with L = 6.
The advantage of the greedy algorithm is its low O(LM^2) time complexity. It is important
to note that this method ends up producing a prediction of both the connectivity pattern
and of the bonding state of each cysteine.
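A compact rendering of this greedy procedure with L restarts might look as follows (our sketch; `edges` is assumed to be a list of ((i, j), probability) pairs over all cysteine pairs and K the estimated number of bonds):

```python
def greedy_matching(edges, K, L=6):
    """Run L greedy passes; the r-th pass is seeded with the r-th most
    probable edge, then fills in non-incident edges in probability order.
    Returns the highest-scoring pattern of at most K edges found."""
    ranked = sorted(edges, key=lambda e: e[1], reverse=True)
    best, best_score = None, float("-inf")
    for r in range(min(L, len(ranked))):
        used, pattern, score = set(), [], 0.0
        # seed with the r-th most probable edge, then continue greedily
        for (i, j), p in [ranked[r]] + ranked[:r] + ranked[r + 1:]:
            if len(pattern) == K:
                break
            if i not in used and j not in used:
                used.update((i, j))
                pattern.append((i, j))
                score += p
        if score > best_score:
            best, best_score = pattern, score
    return best
```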
3
Results
Disulphide Connectivity Prediction for Bonded Cysteines
Here we assume that the bonding state is known. We train 2D-RNN architectures using
the SP39 data set to compare with other published results. We evaluate the performance
using the precision P (P =TP/(TP+FP) with TP = true positives and FP = false positives)
and recall R (R=TP/(TP+FN) with FN = false negatives).
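For reference, a minimal helper (ours) computing these two measures from predicted and true bridge sets:

```python
def precision_recall(predicted, true):
    """predicted, true: sets of index pairs (i, j) with i < j."""
    tp = len(predicted & true)      # correctly predicted bridges
    fp = len(predicted - true)      # predicted but not real
    fn = len(true - predicted)      # real but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```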
As shown in Table 1, in all but one case the results are superior to what has been previously
K      Pair Precision   Pattern Precision
2      0.74* (0.73)     0.74* (0.73)
3      0.61* (0.51)     0.51* (0.41)
4      0.44* (0.37)     0.27* (0.24)
5      0.41* (0.30)     0.11  (0.13)
2...5  0.56* (0.49)     0.49* (0.44)
Table 1: Disulphide connectivity prediction with 2D-RNN assuming the bonding state is known.
The last row reports performance on all test chains. * denotes levels of precision that exceed the previously
reported best results in the literature [17] (shown in parentheses).
Figure 3: Correlation between the number of bonded cysteines (2K) and $\sqrt{\sum_{i \neq j} O_{i,j}}\,\log M$.
reported in the literature [17, 11]. In some cases, results are substantially better. For
instance, in the case of 3 bonded cysteines, the precision reaches 0.61 and 0.51 at the pair
and pattern levels, whereas the best similar results reported in the literature are 0.51 (pair)
and 0.41 (pattern).
Estimation of the Number K of Bonds
Analysis of the prediction results shows that there is a relationship between the sum of
all the probabilities, $\sum_{i \neq j} O_{i,j}$, in the graph (or the output layer of the 2D-RNN) and
the total number of bonded cysteines (2K). For instance, on one of the cross-validation
training sets, the correlation coefficient between 2K and $\sum_{i \neq j} O_{i,j}$ is 0.7, the correlation
coefficient between 2K and M is 0.68, and the correlation coefficient between 2K and
$\sqrt{\sum_{i \neq j} O_{i,j}}\,\log M$ is 0.72. As shown in Figure 3, there is a reasonably linear relationship
between the total number 2K of bonded cysteines and the product $\sqrt{\sum_{i \neq j} O_{i,j}}\,\log M$,
where M is the total number of cysteines in the sequence being considered. The slope and
y-intercept for the line are respectively 0.66 and 3.01 on one training data set. Using this,
we estimate the total number of bonded cysteines using linear regression and rounding off,
making sure that the total number is even and does not exceed the total number of cysteines
in the sequence. In the following experiments, the regression equation for predicting K is
solved separately based on each cross-validation training set.
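A sketch of this estimator (ours; it assumes the fit has the form 2K = slope · feature + intercept, with the slope and intercept refit on each training fold, 0.66 and 3.01 being the values quoted above for one fold):

```python
import numpy as np

def estimate_num_bonded(O, M, slope=0.66, intercept=3.01):
    """Estimate 2K from the M x M output array O of bridge probabilities.

    Feature: sqrt(sum_{i != j} O[i, j]) * log(M), regressed against 2K."""
    feature = np.sqrt(O.sum() - np.trace(O)) * np.log(M)
    raw = slope * feature + intercept
    two_k = int(round(raw / 2.0)) * 2          # round to the nearest even count
    max_even = M if M % 2 == 0 else M - 1      # cannot exceed #cysteines
    return min(max(two_k, 0), max_even)
```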
K   Pair Recall   Pair Precision   Pattern Precision
2   0.59          0.49             0.40
3   0.50          0.45             0.32
4   0.36          0.37             0.15
5   0.28          0.31             0.03
Table 2: Prediction of disulphide connectivity pattern with 2D-RNN on all the cysteines, without
assuming knowledge of the bonding state.
Disulphide Connectivity Prediction from Scratch
In the last set of experiments, we do not assume any knowledge of the bonding state and
apply the 2D-RNN approach to all the cysteines (both bonded and not bonded) in each
sequence. We predict the number of bonds, the bonding state, and connectivity pattern
using one predictor. Experiments are run both on SP39 (4-fold cross validation) and SP41
(10-fold cross validation).
For lack of space, we cannot report all the results but, for example, precision and recall
for SP39 are given in Table 2 for 2 ≤ K ≤ 5. Table 3 shows the kind of results that are
obtained when the method is applied to sequences with more than K = 5 bonds in SP41.
The pair precision remains quite good, although the results can be noisy for certain values
because there are not many such examples in the data. Finally, the precision of bonded state
prediction is 0.85, and the recall of bonded state prediction is 0.90. The precision and recall
of bond number prediction are both 0.68. The average absolute difference between true bond and
predicted bond number is 0.42. The average absolute difference between true bond number
and wrongly predicted bond number is 1.3.
K          6     7     8     9     10    11    12    15    16    17    18    19    24
Precision  0.41  0.40  0.34  0.37  0.50  0.40  0.17  0.37  0.57  0.40  0.56  0.42  0.24
Table 3: Prediction of disulphide connectivity pattern with 2D-RNN on all the cysteines, without
assuming knowledge of the bonding state and when the number of bridges K exceeds 5.
4 Conclusion
We have presented a complete system for disulphide connectivity prediction in cysteine-rich proteins. Assuming knowledge of cysteine bonding state, the method outperforms
existing approaches on the same validation data. The results also show that the 2D-RNN
method achieves good recall and accuracy on the prediction of connectivity pattern even
when the bonding state of individual cysteines is not known. Unlike previous
approaches, our method can be applied to chains with K > 5 bonds and yields good, cooperative, predictions of the total number of bonds, as well as of the bonding states and bond
locations. Training can take days but, once trained, predictions can be carried out on a proteomic
or protein engineering scale. Several improvements are currently in progress including (a)
developing a classifier to discriminate protein chains that do not contain any disulphide
bridges, using kernel methods; (b) assessing the effect on prediction of additional input
information, such as secondary structure and solvent accessibility; (c) leveraging the predicted cysteine contacts in 3D protein structure prediction; and (d) curating a new larger
training set. The current version of our disulphide prediction server DIpro (which includes
step (a)) is available through: http://www.igb.uci.edu/servers/psss.html.
Acknowledgments
Work supported by an NIH grant, an NSF MRI grant, a grant from the University of California Systemwide Biotechnology Research and Education Program, and by the Institute
for Genomics and Bioinformatics at UCI.
References
[1] V.I. Abkevich and E.I. Shakhnovich. What can disulfide bonds tell us about protein energetics, function and folding: simulations and bioinformatics analysis. J. Mol. Biol., 300:975–985, 2000.
[2] A. Bairoch and R. Apweiler. The SWISS-PROT protein sequence database and its supplement TrEMBL. Nucleic Acids Res., 28:45–48, 2000.
[3] P. Baldi and G. Pollastri. Machine learning structural and functional proteomics. IEEE Intelligent Systems, Special Issue on Intelligent Systems in Biology, 17(2), 2002.
[4] P. Baldi and G. Pollastri. The principled design of large-scale recursive neural network architectures: DAG-RNNs and the protein structure prediction problem. Journal of Machine Learning Research, 4:575–602, 2003.
[5] P. Baldi and M. Rosen-Zvi. On the relationship between deterministic and probabilistic directed graphical models. 2004. Submitted.
[6] H. M. Berman, J. Westbrook, Z. Feng, G. Gilliland, T. N. Bhat, H. Weissig, I. N. Shindyalov, and P. E. Bourne. The Protein Data Bank. Nucl. Acids Res., 28:235–242, 2000.
[7] S. Betz. Disulfide bonds and the stability of globular proteins. Proteins: Struct., Funct., Genet., 21:167–195, 1993.
[8] J. Clarke and A.R. Fersht. Engineered disulfide bonds as probes of the folding pathway of barnase: increasing stability of proteins against the rate of denaturation. Biochemistry, 32:4322–4329, 1993.
[9] L. Demetrius. Thermodynamics and kinetics of protein folding: an evolutionary perspective. J. Theor. Biol., 217:397–411, 2000.
[10] P. Fariselli and R. Casadio. Prediction of disulfide connectivity in proteins. Bioinformatics, 17:957–964, 2001.
[11] P. Fariselli, P. L. Martelli, and R. Casadio. A neural network-based method for predicting the disulfide connectivity in proteins. In E. Damiani et al., editors, Knowledge Based Intelligent Information Engineering Systems and Allied Technologies (KES 2002), volume 1, pages 464–468. IOS Press, 2002.
[12] H.N. Gabow. An efficient implementation of Edmonds' algorithm for maximum weight matching on graphs. Journal of the ACM, 23(2):221–234, 1976.
[13] J.L. Klepeis and C.A. Floudas. Prediction of β-sheet topology and disulfide bridges in polypeptides. J. Comput. Chem., 24:191–208, 2003.
[14] T.A. Klink, K.J. Woycechosky, K.M. Taylor, and R.T. Raines. Contribution of disulfide bonds to the conformational stability and catalytic activity of ribonuclease A. Eur. J. Biochem., 267:566–572, 2000.
[15] M. Matsumura et al. Substantial increase of protein stability by multiple disulfide bonds. Nature, 342:291–293, 1989.
[16] G. Pollastri, D. Przybylski, B. Rost, and P. Baldi. Improving the prediction of protein secondary structure in three and eight classes using recurrent neural networks and profiles. Proteins, 47:228–235, 2002.
[17] A. Vullo and P. Frasconi. Disulfide connectivity prediction using recursive neural networks and evolutionary information. Bioinformatics, 20:653–659, 2004.
[18] W.J. Wedemeyer, E. Welker, M. Narayan, and H.A. Scheraga. Disulfide bonds and protein folding. Biochemistry, 39:4207–4216, 2000.
Parallel Support Vector Machines:
The Cascade SVM
Hans Peter Graf, Eric Cosatto,
Leon Bottou, Igor Durdanovic, Vladimir Vapnik
NEC Laboratories
4 Independence Way, Princeton, NJ 08540
{hpg, cosatto, leonb, igord, vlad}@nec-labs.com
Abstract
We describe an algorithm for support vector machines (SVM) that
can be parallelized efficiently and scales to very large problems with
hundreds of thousands of training vectors. Instead of analyzing the
whole training set in one optimization step, the data are split into
subsets and optimized separately with multiple SVMs. The partial
results are combined and filtered again in a "Cascade" of SVMs, until
the global optimum is reached. The Cascade SVM can be spread over
multiple processors with minimal communication overhead and
requires far less memory, since the kernel matrices are much smaller
than for a regular SVM. Convergence to the global optimum is
guaranteed with multiple passes through the Cascade, but already a
single pass provides good generalization. A single pass is 5x to 10x
faster than a regular SVM for problems of 100,000 vectors when
implemented on a single processor. Parallel implementations on a
cluster of 16 processors were tested with over 1 million vectors
(2-class problems), converging in a day or two, while a regular SVM
never converged in over a week.
1 Introduction
Support Vector Machines [1] are powerful classification and regression tools, but
their compute and storage requirements increase rapidly with the number of training
vectors, putting many problems of practical interest out of their reach. The core of an
SVM is a quadratic programming problem (QP), separating support vectors from the
rest of the training data. General-purpose QP solvers tend to scale with the cube of the
number of training vectors (O(k^3)). Specialized algorithms, typically based on
gradient descent methods, achieve impressive gains in efficiency, but still become
impractically slow for problem sizes on the order of 100,000 training vectors (2-class
problems).
One approach for accelerating the QP is based on "chunking" [2][3][4], where subsets
of the training data are optimized iteratively, until the global optimum is reached.
"Sequential Minimal Optimization" (SMO) [5], which reduces the chunk size to 2
vectors, is the most popular of these algorithms. Eliminating non-support vectors
early during the optimization process is another strategy that provides substantial
savings in computation. Efficient SVM implementations incorporate steps known as
"shrinking" for identifying non-support vectors early [4][6][7]. In combination with
caching of the kernel data, such techniques reduce the computation requirements by
orders of magnitude. Another approach, named "digesting", optimizes subsets closer to
completion before adding new data [8], saving considerable amounts of storage.
Improving compute-speed through parallelization is difficult due to dependencies
between the computation steps. Parallelizations have been proposed by splitting the
problem into smaller subsets and training a network to assign samples to different
subsets [9]. Variations of the standard SVM algorithm, such as the Proximal SVM
have been developed that are better suited for parallelization [10], but how widely
they are applicable, in particular to high-dimensional problems, remains to be seen. A
parallelization scheme was proposed where the kernel matrix is approximated by a
block-diagonal [11]. A technique called variable projection method [12] looks
promising for improving the parallelization of the optimization loop.
In order to break through the limits of today's SVM implementations we developed a
distributed architecture, where smaller optimizations are solved independently and
can be spread over multiple processors, yet the ensemble is guaranteed to converge to
the globally optimal solution.
2 The Cascade SVM
As mentioned above, eliminating non-support vectors early from the optimization
proved to be an effective strategy for accelerating SVMs. Using this concept we
developed a filtering process that can be parallelized efficiently. After evaluating
multiple techniques, such as projections onto subspaces (in feature space) or
clustering techniques, we opted to use SVMs as filters. This makes it straightforward
to drive partial solutions towards the global optimum, while alternative techniques
may optimize criteria that are not directly relevant for finding the global solution.
Figure 1: Schematic of a binary Cascade architecture. The data are split into
subsets and each one is evaluated individually for support vectors in the first
layer. The results are combined two-by-two and entered as training sets for the
next layer. The resulting support vectors are tested for global convergence by
feeding the result of the last layer into the first layer, together with the
non-support vectors. TD: Training data, SVi: Support vectors produced by
optimization i.
We initialize the problem with a number of independent, smaller optimizations and
combine the partial results in later stages in a hierarchical fashion, as shown in Figure
1. Splitting the data and combining the results can be done in many different ways.
Figure 1 merely represents one possible architecture, a binary Cascade that proved to
be efficient in many tests. It is guaranteed to advance the optimization function in
every layer, requires only modest communication from one layer to the next, and
converges to a good solution quickly.
In the architecture of Figure 1 sets of support vectors from two SVMs are combined
and the optimization proceeds by finding the support vectors in each of the combined
subsets. This continues until only one set of vectors is left. Often a single pass through
this Cascade produces satisfactory accuracy, but if the global optimum has to be
reached, the result of the last layer is fed back into the first layer. Each of the SVMs in
the first layer receives all the support vectors of the last layer as inputs and tests its
fraction of the input vectors, if any of them have to be incorporated into the
optimization. If this is not the case for all SVMs of the input layer, the Cascade has
converged to the global optimum, otherwise it proceeds with another pass through the
network.
In this architecture a single SVM never has to deal with the whole training set. If the
filters in the first few layers are efficient in extracting the support vectors then the
largest optimization, the one of the last layer, has to handle only a few more vectors
than the number of actual support vectors. Therefore, in problems where the support
vectors are a small subset of the training vectors - which is usually the case - each of
the sub-problems is much smaller than the whole problem (compare section 4).
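To make the data flow concrete, here is a schematic driver of one binary Cascade pass plus the feedback loop (our sketch, not the authors' implementation; `train_svm` stands in for any solver that returns the support vectors of its input, and `violates_kkt` for the convergence test described above):

```python
def cascade_pass(subsets, train_svm):
    """One pass through a binary Cascade (Fig. 1): filter each subset, then
    merge surviving support vectors pairwise until one set remains.
    Assumes the number of subsets is a power of two."""
    level = [train_svm(s) for s in subsets]          # layer 1, run in parallel
    while len(level) > 1:
        merged = [level[i] + level[i + 1] for i in range(0, len(level), 2)]
        level = [train_svm(s) for s in merged]       # next layer
    return level[0]                                  # SVs of the last layer

def cascade(data, n_subsets, train_svm, violates_kkt):
    """Repeat passes, feeding the final support vectors back into the first
    layer, until no training point violates the KKT conditions; a real
    implementation would deduplicate points when re-injecting the SVs."""
    chunk = (len(data) + n_subsets - 1) // n_subsets
    subsets = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    sv = []
    while True:
        sv = cascade_pass([s + sv for s in subsets], train_svm)
        if not any(violates_kkt(x, sv) for x in data):
            return sv
```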
2.1 Notation (2-class, maximum margin)
We discuss here the 2-class classification problem, solved in dual formulation. The
Cascade does not depend on details of the optimization algorithm and alternative
formulations or regression algorithms map equally well onto this architecture. The
2-class problem is the most difficult one to parallelize because there is no natural split
into sub-problems. Multi-class problems can always be separated into 2-class
problems.
Let us consider a set of l training examples (x_i, y_i), where x_i ∈ R^d represents a
d-dimensional pattern and y_i = ±1 the class label. K(x_i, x_j) is the matrix of kernel values
between patterns and α_i the Lagrange coefficients to be determined by the
optimization. The SVM solution for this problem consists in maximizing the
following quadratic optimization function (dual formulation):
$$
\max_{\alpha} W(\alpha) = -\frac{1}{2} \sum_{i=1}^{l} \sum_{j=1}^{l} \alpha_i \alpha_j y_i y_j K(x_i, x_j) + \sum_{i=1}^{l} \alpha_i \qquad (1)
$$

subject to: $0 \le \alpha_i \le C$ for all $i$, and $\sum_{i=1}^{l} \alpha_i y_i = 0$.

The gradient $G = \nabla W(\alpha)$ of $W$ with respect to $\alpha$ is then:

$$
G_i = \frac{\partial W}{\partial \alpha_i} = -y_i \sum_{j=1}^{l} y_j \alpha_j K(x_i, x_j) + 1 \qquad (2)
$$
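As a quick reference, a NumPy sketch (ours) of the dual objective (1) and its gradient (2):

```python
import numpy as np

def dual_objective_and_gradient(alpha, y, K):
    """alpha: (l,) Lagrange multipliers; y: (l,) labels in {-1, +1};
    K: (l, l) kernel matrix. Returns W(alpha) from Eq. (1) and its
    gradient from Eq. (2)."""
    Q = (y[:, None] * y[None, :]) * K     # Q_ij = y_i y_j K(x_i, x_j)
    W = -0.5 * alpha @ Q @ alpha + alpha.sum()
    G = -Q @ alpha + 1.0                  # G_i = -y_i sum_j y_j a_j K_ij + 1
    return W, G
```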
2.2 Formal proof of convergence
The main issue is whether a Cascade architecture will actually converge to the global
optimum. The following theorems show that this is the case for a wide range of
conditions. Let S denote a subset of the training set Ω, W(S) is the optimal objective
function over S (equation 1), and let Sv(S) ⊆ S be the subset of S for which the
optimal α are non-zero (support vectors of S). It is obvious that:

$$\forall S \subseteq \Omega, \quad W(S) = W(Sv(S)) \le W(\Omega) \qquad (3)$$
Let us consider a family F of sets of training examples for which we can independently
compute the SVM solution. The set S* ∈ F that achieves the greatest W(S) will be
called the best set in family F. We will write W(F) as a shorthand for W(S*), that is:

$$W(F) = \max_{S \in F} W(S) \le W(\Omega) \qquad (4)$$
We are interested in defining a sequence of families F_t such that W(F_t) converges to
the optimum. Two results are relevant for proving convergence.
Theorem 1: Let us consider two families F and G of subsets of Ω. If a set T ∈ G
contains the support vectors of the best set S*_F ∈ F, then

$$W(G) \ge W(F).$$

Proof: Since Sv(S*_F) ⊆ T, we have W(S*_F) = W(Sv(S*_F)) ≤ W(T). Therefore,

$$W(F) = W(S^*_F) \le W(T) \le W(G).$$
Theorem 2: Let us consider two families F and G of subsets of Ω. Assume that every
set T ∈ G contains the support vectors of the best set S*_F ∈ F.
If W(G) = W(F), then W(S*_F) = W(∪_{T∈G} T).

Proof: Theorem 1 implies that W(G) ≥ W(F). Consider a vector α* solution of the
SVM problem restricted to the support vectors Sv(S*_F). For all T ∈ G, we have
W(T) ≥ W(Sv(S*_F)) because Sv(S*_F) is a subset of T. We also have
W(T) ≤ W(G) = W(F) = W(S*_F) = W(Sv(S*_F)). Therefore W(T) = W(Sv(S*_F)). This
implies that α* is also a solution of the SVM on set T. Therefore α* satisfies all the
KKT conditions corresponding to all sets T ∈ G. This implies that α* also satisfies the
KKT conditions for the union of all sets in G.
Definition 1. A Cascade is a sequence (F_t) of families of subsets of Ω satisfying:
i) For all t > 1, a set T ∈ F_t contains the support vectors of the best set in F_{t-1}.
ii) For all t, there is a k > t such that:
• All sets T ∈ F_k contain the support vectors of the best set in F_{k-1}.
• The union of all sets in F_k is equal to Ω.
Theorem 3: A Cascade (F_t) converges to the SVM solution of Ω in finite time, namely:

$$\exists t^*: \ \forall t > t^*, \quad W(F_t) = W(\Omega)$$

Proof: Assumption i) of Definition 1 plus Theorem 1 imply that the sequence W(F_t) is
monotonically increasing. Since this sequence is bounded by W(Ω), it converges to
some value W* ≤ W(Ω). The sequence W(F_t) takes its values in the finite set of the
W(S) for all S ⊆ Ω. Therefore there is an l > 0 such that ∀t > l, W(F_t) = W*. This
observation, assertion ii) of Definition 1, plus Theorem 2 imply that there is a k > l such
that W(F_k) = W(Ω). Since W(F_t) is monotonically increasing, W(F_t) = W(Ω) for all t > k.
As stated in theorem 3, a layered Cascade architecture is guaranteed to converge to the
global optimum if we keep the best set of support vectors produced in one layer, and
use it in at least one of the subsets in the next layer. This is the case in the binary
Cascade shown in Figure 1. However, not all layers meet assertion ii) of Definition 1.
The union of sets in a layer is not equal to the whole training set, except in the first
layer. By introducing the feedback loop that enters the result of the last layer into the
first one, combined with all non-support vectors, we fulfill all assertions of Definition
1. We can test for global convergence in layer 1 and do a fast filtering in the
subsequent layers.
2.3 Interpretation of the SVM filtering process
An intuitive picture of the filtering process is provided in Figure 2. If a subset S ⊂ Ω
is chosen randomly from the training set, it will most likely not contain all support
vectors of ? and its support vectors may not be support vectors of the whole problem.
However, if there is not a serious bias in a subset, support vectors of S are likely to
contain some support vectors of the whole problem. Stated differently, it is plausible
that ?interior? points in a subset are going to be ?interior? points in the whole set.
Therefore, a non-support vector of a subset has a good chance of being a non-support
vector of the whole set and we can eliminate it from further analysis.
Figure 2: A toy problem illustrating the filtering process. Two disjoint subsets
are selected from the training data and each of them is optimized individually (left,
center; the data selected for the optimizations are the solid elements). The support
vectors in each of the subsets are marked with frames. They are combined for the
final optimization (right), resulting in a classification boundary (solid curve) close
to the one for the whole problem (dashed curve).
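The toy experiment of Figure 2 is easy to reproduce; the sketch below (ours, with made-up data and scikit-learn as a stand-in solver) trains SVMs on two random halves and re-optimizes over the union of their support vectors:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)      # a toy nonlinear 2-class problem

halves = np.array_split(rng.permutation(len(X)), 2)
sv_idx = []
for idx in halves:                            # layer 1: filter each half
    clf = SVC(kernel="rbf", C=10.0).fit(X[idx], y[idx])
    sv_idx.extend(idx[clf.support_])          # keep only its support vectors

final = SVC(kernel="rbf", C=10.0).fit(X[sv_idx], y[sv_idx])   # merged stage
print(len(sv_idx), "candidates ->", len(final.support_), "support vectors")
```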
3 Distributed Optimization
$$
W_i = -\tfrac{1}{2}\, \alpha_i^T Q_i \alpha_i + e_i^T \alpha_i\,; \qquad
G_i = -\alpha_i^T Q_i + e_i^T \qquad (5)
$$
Figure 3: A Cascade with two input sets D1, D2. W_i, G_i and Q_i are the objective
function, gradient, and kernel matrix, respectively, of SVM_i (in vector notation); e_i
is a vector of all 1s. Gradients of SVM1 and SVM2 are merged (Extend) as
indicated in (6) and are entered into SVM3. Support vectors of SVM3 are used to
test D1, D2 for violations of the KKT conditions. Violators are combined with the
support vectors for the next iteration.
Section 2 shows that a distributed architecture like the Cascade indeed converges to the
global solution, but no indication is provided how efficient this approach is. For a good
performance we try to advance the optimization as much as possible in each stage. This
depends on how the data are split initially, how partial results are merged and how well an
optimization can start from the partial results provided by the previous stage. We focus on
gradient-ascent algorithms here, and discuss how to handle merging efficiently.
3.1 Merging subsets
For this discussion we look at a Cascade with two layers (Figure 3). When merging the
two results of SVM1 and SVM2, we can initialize the optimization of SVM3 to
different starting points. In the general case the merged set starts with the following
optimization function and gradient:
$$
W_3 = -\frac{1}{2}
\begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix}^{\!T}
\begin{pmatrix} Q_1 & Q_{12} \\ Q_{21} & Q_2 \end{pmatrix}
\begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix}
+
\begin{pmatrix} e_1 \\ e_2 \end{pmatrix}^{\!T}
\begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix},
\qquad
G_3 = -
\begin{pmatrix} \alpha_1 \\ \alpha_2 \end{pmatrix}^{\!T}
\begin{pmatrix} Q_1 & Q_{12} \\ Q_{21} & Q_2 \end{pmatrix}
+
\begin{pmatrix} e_1 \\ e_2 \end{pmatrix}^{\!T}
\qquad (6)
$$

We consider two possible initializations:

Case 1: α_1 = α_1 of SVM1; α_2 = 0;
Case 2: α_1 = α_1 of SVM1; α_2 = α_2 of SVM2. (7)
Since each of the subsets fulfills the KKT conditions, each of these cases represents a
feasible starting point with Σ_i α_i y_i = 0.
Intuitively one would probably assume that case 2 is the preferred one since we start
from a point that is optimal in the two spaces defined by the vectors D1 and D2. If Q12
is 0 (Q21 is then also 0 since the kernel matrix is symmetric), the two spaces are
orthogonal (in feature space) and the sum of the two solutions is the solution of the
whole problem. Therefore, case 2 is indeed the best choice for initialization, because
it represents the final solution. If, on the other hand, the two subsets are identical, then
an initialization with case 1 is optimal, since this represents now the solution of the
whole problem. In general, we are probably somewhere between these two cases and
therefore it is not obvious, which case is best.
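A sketch (ours) of the two warm starts of Eq. (7) when assembling the merged problem:

```python
import numpy as np

def merge_initialization(alpha1, alpha2, case=2):
    """Build the starting multipliers for the merged problem (Eq. 7).

    Case 1 keeps only the first partial solution; Case 2 stacks both.
    Either way sum_i alpha_i y_i = 0 holds, since each partial solution
    already satisfies its own equality constraint."""
    if case == 1:
        return np.concatenate([alpha1, np.zeros_like(alpha2)])
    return np.concatenate([alpha1, alpha2])
```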
While the theorems of section 2 guarantee the convergence to the global optimum,
they do not provide any indication how fast this going to happen. Empirically we find
that the Cascade converges quickly to the global solution, as is indicated in the
examples below. All the problems we tested converge in 2 to 5 passes.
4 Experimental results
We implemented the Cascade architecture for a single processor as well as for a
cluster of processors and tested it extensively with several problems; the largest are:
MNIST^1, FOREST^2, NORB^3 (all are converted to 2-class problems). One of the main
advantages of the Cascade architecture is that it requires far less memory than a single
SVM, because the size of the kernel matrix scales with the square of the active set.
This effect is illustrated in Figure 4. It has to be emphasized that both cases, single
SVM and Cascade, use shrinking, but shrinking alone does not solve the problem of
exorbitant sizes of the kernel matrix.
A good indication of the Cascade?s inherent efficiency is obtained by counting the
number of kernel evaluations required for one pass. As shown in Table 1, a 9-layer
Cascade requires only about 30% as many kernel evaluations as a single SVM for
1
MNIST: handwritten digits, d=784 (28x28 pixels); training: 60,000; testing: 10,000;
classes: odd digits - even digits; http://yann.lecun.com/exdb/mnist.
2
FOREST: d=54; class 2 versus rest; training: 560,000; testing: 58,100
ftp://ftp.ics.uci.edu/pub/machine-learning-databases/covtype/covtype.info.
3
NORB: images, d=9,216; training: 48,600; testing: 48,600; monocular; merged class 0
and 1 versus the rest. http://www.cs.nyu.edu/~ylclab/data/norb-v1.0
100,000 training vectors. How many kernel evaluations actually have to be computed
depends on the caching strategy and the memory size.
Figure 4: The size of the active set as a function of the number of iterations for a
problem with 30,000 training vectors. The upper curve represents a single SVM,
while the lower one shows the active set size for a 4-layer Cascade.
As indicated in Table 1, this parameter, and with it the compute times, are reduced
even more. Therefore, even a simulation on a single processor can produce a speed-up
of 5x to 10x or more, depending on the available memory size. For practical purposes
often a single pass through the Cascade produces sufficient accuracy (compare Figure
5). This offers a particularly simple way for solving problems of a size that would
otherwise be out of reach for SVMs.
Number of Layers       1    2    3    4    5    6    7    8    9
K-eval requests x10^9  106  89   77   68   61   55   48   42   38
K-eval x10^9           33   12   4.5  3.9  2.7  2.4  1.9  1.6  1.4
Table 1: Number of Kernel evaluations (requests and actual, with a cache size of
800MB) for different numbers of layers in the Cascade (single pass). The number
of Kernel evaluations is reduced as the number of Cascade layers increases. Then,
larger amounts of the problems fit in the cache, reducing the actual Kernel
computations even more. Problem: FOREST, 100K vectors.
Iteration  Training time  Max # training vect. per machine  # Support Vectors  W       Acc.
0          21.6h          72,658                            54,647             167427  99.08%
1          22.2h          67,876                            61,084             174560  99.14%
2          0.8h           61,217                            61,102             174564  99.13%
Table 2: Training times for a large data set with 1,016,736 vectors (MNIST was
expanded by warping the handwritten digits). A Cascade with 5 layers is executed
on a Linux cluster with 16 machines (AMD 1800, dual processors, 2GB RAM per
machine). The solution converges in 3 iterations. Shown are also the maximum
number of training vectors on one machine and the number of support vectors in
the last stage. W: optimization function; Acc: accuracy on test set. Kernel: RBF,
gamma=1; C=50.
Table 2 shows how a problem with over one million vectors is solved in about a day
(single pass) with a generalization performance equivalent to the fully converged
solution. While the full training set contains over 1M vectors, one processor never
handles more than 73k vectors in the optimization and 130k for the convergence test.
The Cascade provides several advantages over a single SVM because it can reduce
compute- as well as storage-requirements. The main limitation is that the last layer
consists of one single optimization and its size has a lower limit given by the number
of support vectors. This is why the acceleration saturates at a relatively small number
of layers. Yet this is not a hard limit since a single optimization can be distributed over
multiple processors as well, and we are working on efficient implementations of such
algorithms.
Figure 5: Speed-up for a parallel implementation of the Cascades with 1 to 5
layers (1 to 16 SVMs, each running on a separate processor), relative to a single
SVM: single pass (left), fully converged (middle) (MNIST, NORB: 3 iterations,
FOREST: 5 iterations). On the right is the generalization performance of a 5-layer
Cascade, measured after each iteration. For MNIST and NORB, the accuracy after
one pass is the same as after full convergence (3 iterations). For FOREST, the
accuracy improves from 90.6% after a single pass to 91.6% after convergence (5
iterations). Training set sizes: MNIST: 60k, NORB: 48k, FOREST: 186k.
References
[1] V. Vapnik, "Statistical Learning Theory", Wiley, New York, 1998.
[2] B. Boser, I. Guyon, V. Vapnik, "A training algorithm for optimal margin classifiers", in Proc. 5th Annual Workshop on Computational Learning Theory, Pittsburgh, ACM, 1992.
[3] E. Osuna, R. Freund, F. Girosi, "Training Support Vector Machines, an Application to Face Detection", in Computer Vision and Pattern Recognition, pp. 130–136, 1997.
[4] T. Joachims, "Making large-scale support vector machine learning practical", in Advances in Kernel Methods, B. Schölkopf, C. Burges, A. Smola (eds.), Cambridge, MIT Press, 1998.
[5] J.C. Platt, "Fast training of support vector machines using sequential minimal optimization", in Advances in Kernel Methods, B. Schölkopf, C. Burges, A. Smola (eds.), 1998.
[6] C. Chang, C. Lin, "LIBSVM", http://www.csie.ntu.edu.tw/~cjlin/libsvm/.
[7] R. Collobert, S. Bengio, and J. Mariéthoz. Torch: A modular machine learning software library. Technical Report IDIAP-RR 02-46, IDIAP, 2002.
[8] D. DeCoste and B. Schölkopf, "Training Invariant Support Vector Machines", Machine Learning, 46, 161–190, 2002.
[9] R. Collobert, Y. Bengio, S. Bengio, "A Parallel Mixture of SVMs for Very Large Scale Problems", in Neural Information Processing Systems, Vol. 17, MIT Press, 2004.
[10] A. Tveit, H. Engum. Parallelization of the Incremental Proximal Support Vector Machine Classifier using a Heap-based Tree Topology. Tech. Report, IDI, NTNU, Trondheim, 2003.
[11] J. X. Dong, A. Krzyzak, C. Y. Suen, "A Fast Parallel Optimization for Training Support Vector Machine." Proceedings of 3rd International Conference on Machine Learning and Data Mining, P. Perner and A. Rosenfeld (eds.), Springer Lecture Notes in Artificial Intelligence (LNAI 2734), pp. 96–105, Leipzig, Germany, July 5–7, 2003.
[12] G. Zanghirati, L. Zanni, "A parallel solver for large quadratic programs in training support vector machines", Parallel Computing, Vol. 29, pp. 535–551, 2003.
Density Level Detection is Classification
Ingo Steinwart, Don Hush and Clint Scovel
Modeling, Algorithms and Informatics Group, CCS-3
Los Alamos National Laboratory
{ingo,dhush,jcs}@lanl.gov
Abstract
We show that anomaly detection can be interpreted as a binary classification problem. Using this interpretation we propose a support vector
machine (SVM) for anomaly detection. We then present some theoretical results which include consistency and learning rates. Finally, we
experimentally compare our SVM with the standard one-class SVM.
1 Introduction
One of the most common ways to define anomalies is by saying that anomalies are not
concentrated (see e.g. [1, 2]). To make this precise let Q be our unknown data-generating
distribution on the input space X. Furthermore, to describe the concentration of Q we need
a known reference distribution μ on X. Let us assume that Q has a density h with respect
to μ. Then, the sets {h > ρ}, ρ > 0, describe the concentration of Q. Consequently, to
define anomalies in terms of the concentration we only have to fix a threshold level ρ > 0,
so that an x ∈ X is considered to be anomalous whenever x ∈ {h ≤ ρ}. Therefore our
goal is to find the density level set {h ≤ ρ}, or equivalently, the ρ-level set {h > ρ}. Note
that there is also a modification of this problem where μ is not known but can be sampled
from. We will see that our proposed method can handle both problems.
Finding density level sets is an old problem in statistics which also has some interesting applications (see e.g. [3, 4, 5, 6]) other than anomaly detection. Furthermore, a mathematical
framework similar to classical PAC-learning has been proposed in [7]. Despite this effort,
no efficient algorithm is known, which is a) consistent, i.e. it always finds the level set of
interest asymptotically, and b) learns with fast rates under realistic assumptions on h and
μ. In this work we propose such an algorithm, which is based on an SVM approach.
Let us now introduce some mathematical notions. We begin with emphasizing that?as in
many other papers (see e.g. [5] and [6])?we always assume ?({h = ?}) = 0. Now, let
T = (x1 , . . . , xn ) ? X n be a training set which is i.i.d. according to Q. Then, a density
level detection algorithm constructs a function fT : X ? R such that the set {fT > 0}
is an estimate of the ?-level set {h > ?} of interest. Since in general {fT > 0} does not
exactly coincide with {h > ?} we need a performance measure which describes how well
{fT > 0} approximates the set {h > ?}. Probably the best known performance measure
(see e.g. [6, 7] and the references therein) for measurable functions f : X ? R is
S?,h,? (f ) := ? {f > 0} M {h > ?} ,
where M denotes the symmetric difference. Obviously, the smaller S?,h,? (f ) is, the more
{f > 0} coincides with the ?-level set of h, and a function f minimizes S?,h,? if and
only if {f > 0} is ?-almost surely identical to {h > ?}. Furthermore, for a sequence of
functions fn : X ? R with S?,h,? (fn ) ? 0 we easily see that sign fn ? 1{h>?} both
?-almost and Q-almost surely if 1A denotes the indicator function of a set A. Finally, it
is important to note, that the performance measure S?,h,? is somehow natural in that it is
insensitive to ?-zero sets.
2 Detecting density levels is a classification problem
In this section we show how the density level detection (DLD) problem can be formulated
as a binary classification problem. To this end we write Y := {−1, 1} and define:

Definition 2.1 Let μ and Q be probability measures on X and s ∈ (0, 1). Then the probability measure Q ⊕_s μ on X × Y is defined by

(Q ⊕_s μ)(A) := s E_{x∼Q} 1_A(x, 1) + (1 − s) E_{x∼μ} 1_A(x, −1)

for all measurable A ⊂ X × Y. Here we used the shorthand 1_A(x, y) := 1_A((x, y)).
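To make the construction concrete: a draw from Q ⊕_s μ is a positive sample from Q with probability s and a negative sample from μ otherwise. A minimal sketch in Python (the samplers and all names here are illustrative assumptions, not from the paper):

    import numpy as np

    def draw_from_mixture(sample_Q, sample_mu, s, n, d, seed=None):
        # Draw n labeled points from Q (+)_s mu: label +1 with probability s
        # (point from Q), label -1 with probability 1 - s (point from mu).
        # sample_Q(m) and sample_mu(m) must return (m, d) arrays.
        rng = np.random.default_rng(seed)
        y = np.where(rng.random(n) < s, 1, -1)
        x = np.empty((n, d))
        n_pos = int((y == 1).sum())
        x[y == 1] = sample_Q(n_pos)
        x[y == -1] = sample_mu(n - n_pos)
        return x, y

For the DLD interpretation developed below, one would set s = 1/(1+ρ).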
Obviously, the measure P := Q ⊕_s μ can be associated with a binary classification problem in which positive samples are drawn from Q and negative samples are drawn from μ. Inspired by this interpretation, let us recall that the binary classification risk for a measurable function f : X → ℝ and a distribution P on X × Y is defined by

R_P(f) = P({(x, y) : sign f(x) ≠ y}),

where we define sign t := 1 if t > 0 and sign t := −1 otherwise. Furthermore, we denote the Bayes risk of P by R_P := inf{R_P(f) : f : X → ℝ measurable}. We will show that learning with respect to S_{μ,h,ρ} is equivalent to learning with respect to R_P(·). To this end we begin with the following easy to prove but fundamental proposition:

Proposition 2.2 Let μ and Q be probability measures on X such that Q has a density h with respect to μ, and let s ∈ (0, 1). Then the marginal distribution of P := Q ⊕_s μ on X is P_X = sQ + (1 − s)μ. Furthermore, we P_X-a.s. have

P(y = 1|x) = sh(x) / (sh(x) + 1 − s).

Note that the above formula for P_X implies that the μ-zero sets of X are exactly the P_X-zero sets of X. Furthermore, Proposition 2.2 shows that every distribution P := Q ⊕_s μ with dQ := h dμ and s ∈ (0, 1) determines a triple (μ, h, ρ) with ρ := (1 − s)/s, and vice-versa. In the following we therefore use the shorthand S_P(f) := S_{μ,h,ρ}(f). Let us now compare R_P(·) with S_P(·). To this end we first observe that h(x) > ρ = (1−s)/s is equivalent to sh(x)/(sh(x) + 1 − s) > 1/2. By Proposition 2.2 the latter is μ-almost surely equivalent to η(x) := P(y = 1|x) > 1/2, and hence μ({η > 1/2} △ {h > ρ}) = 0. Now recall that binary classification aims to discriminate {η > 1/2} from {η < 1/2}. Thus it is no surprise that R_P(·) can serve as a performance measure, as the following theorem shows:

Theorem 2.3 Let μ and Q be distributions on X such that Q has a density h with respect to μ. Let ρ > 0 satisfy μ({h = ρ}) = 0. We write s := 1/(1+ρ) and define P := Q ⊕_s μ. Then for all sequences (f_n) of measurable functions f_n : X → ℝ the following are equivalent:

i) S_P(f_n) → 0.
ii) R_P(f_n) → R_P.

In particular, for measurable f : X → ℝ we have S_P(f) = 0 if and only if R_P(f) = R_P.

Proof: For n ∈ ℕ we define E_n := {f_n > 0} △ {h > ρ}. Since μ({h > ρ} △ {η > 1/2}) = 0, it is easy to see that the classification risk of f_n can be computed by

R_P(f_n) = R_P + ∫_{E_n} |2η − 1| dP_X.    (1)

Now, {|2η − 1| = 0} is a μ-zero set and hence a P_X-zero set. This implies that the measures |2η − 1| dP_X and P_X are absolutely continuous with respect to each other. Furthermore, we have already observed after Proposition 2.2 that P_X and μ are absolutely continuous with respect to each other. Now, the assertion follows from S_P(f_n) = μ(E_n).

Theorem 2.3 shows that instead of using S_P(·) as a performance measure for the DLD problem one can alternatively use the classification risk R_P(·). Therefore, we will establish some basic properties of this performance measure in the following. To this end we write I(y, t) := 1_{(−∞,0]}(yt), y ∈ Y and t ∈ ℝ, for the standard classification loss function. With this notation we can easily compute R_P(f):
Proposition 2.4 Let μ and Q be probability measures on X. For ρ > 0 we write s := 1/(1+ρ) and define P := Q ⊕_s μ. Then for all measurable f : X → ℝ we have

R_P(f) = (1/(1+ρ)) E_Q I(1, sign f) + (ρ/(1+ρ)) E_μ I(−1, sign f).
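This formula is straightforward to estimate from finite samples of Q and μ; a minimal sketch (all helper names are our own illustrative choices, not from the paper):

    import numpy as np

    def empirical_risk(f, x_q, x_mu, rho):
        # Empirical version of Proposition 2.4: x_q holds positives drawn
        # from Q, x_mu negatives drawn from mu. Following the text,
        # sign t = 1 if t > 0 and sign t = -1 otherwise; I is the 0-1 loss.
        s = 1.0 / (1.0 + rho)
        err_q = np.mean(np.where(f(x_q) > 0, 1, -1) != 1)    # E_Q I(1, sign f)
        err_mu = np.mean(np.where(f(x_mu) > 0, 1, -1) != -1) # E_mu I(-1, sign f)
        return s * err_q + (1.0 - s) * err_mu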
It is interesting that the classification risk R_P(·) is strongly connected with another approach to the DLD problem which is based on the so-called excess mass (see e.g. [4], [5], [6], and the references therein). To be more precise, let us first recall that the excess mass of a measurable function f : X → ℝ is defined by

E_P(f) := Q({f > 0}) − ρ μ({f > 0}),

where Q, μ and ρ have the usual meaning. The following proposition, which shows that R_P(·) and E_P(·) are essentially the same, can be easily checked:

Proposition 2.5 Let μ and Q be probability measures on X. For ρ > 0 we write s := 1/(1+ρ) and define P := Q ⊕_s μ. Then for all measurable f : X → ℝ we have

E_P(f) = 1 − (1 + ρ) R_P(f).

If Q is an empirical measure based on a training set T in the definition of E_P(·), we obtain the empirical excess mass, which we denote by E_T(·). Then, given a function class F, the (empirical) excess mass approach chooses a function f_T ∈ F which maximizes E_T(·) within F. Since the above proposition shows

E_T(f) = 1 − (1/n) Σ_{i=1}^n I(1, sign f(x_i)) − ρ E_μ I(−1, sign f),

we see that this approach is actually a type of empirical risk minimization (ERM).

In the above-mentioned papers the analysis of the excess mass approach needs an additional assumption on the behaviour of h around the level ρ. Since this condition can be used to establish a quantified version of Theorem 2.3, we recall it now.

Definition 2.6 Let μ be a distribution on X and h : X → [0, ∞) be a measurable function with ∫ h dμ = 1, i.e. h is a density with respect to μ. For ρ > 0 and 0 ≤ q ≤ ∞ we say that h is of ρ-exponent q if there exists a constant C > 0 such that for all sufficiently small t > 0 we have

μ({|h − ρ| ≤ t}) ≤ C t^q.    (2)

Condition (2) was first considered in [5, Thm. 3.6]. This paper also provides an example of a class of densities on ℝ^d, d ≥ 2, which has exponent q = 1. Later, Tsybakov [6, p. 956] used (2) for the analysis of a DLD method which is based on a localized version of the empirical excess mass approach. Surprisingly, (2) is satisfied if and only if P := Q ⊕_s μ has Tsybakov exponent q in the sense of [8], i.e.

P_X({|2η − 1| ≤ t}) ≤ C̃ t^q    (3)

for some constant C̃ > 0 and all sufficiently small t > 0 (see the proof of Theorem 2.7 for (2) ⇒ (3) and [9] for the other direction). Recall that recently (3) has played a crucial role in establishing learning rates faster than n^{−1/2} for ERM algorithms and SVMs (see e.g. [10] and [8]). Remarkably, it was already observed in [11] that the classification problem can be analyzed by methods originally developed for the DLD problem. However, to the best of our knowledge the exact relation between the DLD problem and binary classification has not been presented yet. In particular, it has not been observed yet that this relation opens a non-heuristic way to use classification methods for the DLD problem, as we will demonstrate by example in the next section.

Let us now use the ρ-exponent to establish inequalities between S_P(·) and R_P(·):

Theorem 2.7 Let ρ > 0 and μ and Q be probability measures on X such that Q has a density h with respect to μ. For s := 1/(1+ρ) we write P := Q ⊕_s μ. Then we have:

i) If h is bounded, there is a c > 0 such that for all measurable f : X → ℝ we have

R_P(f) − R_P ≤ c S_P(f).

ii) If h has ρ-exponent q, there is a c > 0 such that for all measurable f : X → ℝ we have

S_P(f) ≤ c (R_P(f) − R_P)^{q/(1+q)}.

Sketch of the proof: The first assertion directly follows from (1) and Proposition 2.2. For the second assertion we first show (2) ⇒ (3). To this end we observe that for 0 < t < 1/2 we have Q({|h − ρ| ≤ t}) ≤ (1 + ρ) μ({|h − ρ| ≤ t}). Thus there exists a C̃ > 0 such that P_X({|h − ρ| ≤ t}) ≤ C̃ t^q for all 0 < t < 1/2. Furthermore, |2η − 1| = |h − ρ|/(h + ρ) implies

{|2η − 1| ≤ t} = {((1−t)/(1+t)) ρ ≤ h ≤ ((1+t)/(1−t)) ρ}

whenever 0 < t < 1/2. Let us now define t_l := 2t/(1+t) and t_r := 2t/(1−t). This gives 1 − t_l = (1−t)/(1+t) and 1 + t_r = (1+t)/(1−t). Furthermore, we obviously also have t_l ≤ t_r. Therefore we find

{((1−t)/(1+t)) ρ ≤ h ≤ ((1+t)/(1−t)) ρ} ⊂ {|h − ρ| ≤ t_r ρ},

which shows (3). Now the assertion follows from [10, Prop. 1].
3 A support vector machine for density level detection
One of the benefits of interpreting the DLD problem as a classification problem is that we
can construct an SVM for the DLD problem. To this end let k : X × X → ℝ be a positive definite kernel with reproducing kernel Hilbert space (RKHS) H. Furthermore, let μ be a known probability measure on X and l : Y × ℝ → [0, ∞) be the hinge loss function, i.e. l(y, t) := max{0, 1 − yt}, y ∈ Y, t ∈ ℝ. Then for a training set T = (x_1, …, x_n) ∈ X^n, a regularization parameter λ > 0, and ρ > 0, our SVM for the DLD problem chooses a pair (f_{T,λ,ρ}, b_{T,λ,ρ}) ∈ H × ℝ which minimizes

λ ‖f‖²_H + (1/((1+ρ)n)) Σ_{i=1}^n l(1, f(x_i) + b) + (ρ/(1+ρ)) E_{x∼μ} l(−1, f(x) + b)    (4)

in H × ℝ. The corresponding decision function of this SVM is f_{T,λ,ρ} + b_{T,λ,ρ} : X → ℝ.

Although the measure μ is known, the expectation E_{x∼μ} l(−1, f(x)) can almost always only be numerically approximated using finitely many function evaluations of f. Unfortunately, since the hinge loss is not differentiable, we do not know a deterministic method to choose these function evaluations efficiently. Therefore, in the following we will use points T′ := (z_1, …, z_m) which are randomly sampled from μ in order to approximate E_{x∼μ} l(−1, f(x)). We denote the corresponding approximate solution of (4) by (f_{T,T′,λ}, b_{T,T′,λ}). Since this modification of (4) is identical to the standard SVM formulation apart from the weighting factors in front of the empirical l-risk terms, we do not discuss algorithmic issues. Note, however, that this approach simultaneously addresses both the original "μ is known" and the modified "μ can be sampled from" problems described in the introduction. Furthermore, it is also closely related to some heuristic methods for anomaly detection that are based on artificial samples (see [9] for more information).
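A minimal sketch of the Monte Carlo approximation of the μ-term in (4); the helper sample_mu and all names are illustrative assumptions, not part of the paper:

    import numpy as np

    def mu_hinge_term(f, b, sample_mu, m, rho):
        # Monte Carlo estimate of (rho/(1+rho)) * E_{x~mu} l(-1, f(x) + b)
        # using m points z_1, ..., z_m drawn i.i.d. from mu, as in the text.
        # For y = -1 the hinge loss is l(-1, t) = max(0, 1 + t).
        z = sample_mu(m)                          # (m, d) array sampled from mu
        hinge = np.maximum(0.0, 1.0 + (f(z) + b))
        return (rho / (1.0 + rho)) * float(np.mean(hinge))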
The fact that the SVM for DLD essentially coincides with the standard L1-SVM also allows
us to modify many known results for these algorithms. For simplicity we will only state
results for the Gaussian RBF kernel with width 1/σ, i.e. k(x, x′) = exp(−σ² ‖x − x′‖²₂), x, x′ ∈ ℝ^d, and the case m = n. More general results can be found in [12, 9]. We begin with a consistency result with respect to the performance measure R_P(·). Recall that by Theorem 2.3 this is equivalent to consistency with respect to S_P(·):

Theorem 3.1 Let X ⊂ ℝ^d be compact and k be the Gaussian RBF kernel with width 1/σ on X. Furthermore, let ρ > 0, and μ and Q be distributions on X such that Q has a density h with respect to μ. For s := 1/(1+ρ) we write P := Q ⊕_s μ. Then for all positive sequences (λ_n) with λ_n → 0 and n λ_n^{1+δ} → ∞ for some δ > 0, and for all ε > 0, we have

lim_{n→∞} (Q ⊗ μ)^n ({(T, T′) ∈ (X × X)^n : R_P(f_{T,T′,λ} + b_{T,T′,λ}) > R_P + ε}) = 0.

Sketch of the proof: Let us introduce the shorthand ν = Q ⊗ μ for the product measure of Q and μ. Moreover, for a measurable function f : X → ℝ we define the function g ◦ f : X × X → ℝ by

(g ◦ f)(x, x′) := (1/(1+ρ)) l(1, f(x)) + (ρ/(1+ρ)) l(−1, f(x′)),    x, x′ ∈ X.

Furthermore, we write (l ◦ f)(x, y) := l(y, f(x)), x ∈ X, y ∈ Y. Then it is easy to check that we always have E_ν g ◦ f = E_P l ◦ f. Analogously, we see E_{T⊗T′} g ◦ f = E_{T ⊕_s T′} l ◦ f if T ⊗ T′ denotes the product measure of the empirical measures based on T and T′. Now, using Hoeffding's inequality for ν, it is easy to establish a concentration inequality in the sense of [13, Lem. III.5]. The rest of the proof is analogous to the steps in [13], since these steps are independent of the specific structure of the data-generating measure.

In general, we cannot obtain convergence rates in the above theorem without assuming specific conditions on h, μ, and ρ. We will now present such a condition which can be used to establish rates. To this end we write

Δ_x := d(x, {h > ρ}) if x ∈ {h < ρ},  and  Δ_x := d(x, {h < ρ}) if x ∈ {h ≥ ρ},

where d(x, A) denotes the Euclidean distance between x and a set A. Now we define:

Definition 3.2 Let μ be a distribution on X ⊂ ℝ^d and h : X → [0, ∞) be a measurable function with ∫ h dμ = 1, i.e. h is a density with respect to μ. For ρ > 0 and 0 < α ≤ ∞ we say that h has geometric ρ-exponent α if

∫_X Δ_x^{−αd} |h − ρ| dμ < ∞.

Since {h > ρ} and {h ≤ ρ} are the classes which have to be discriminated when interpreting the DLD problem as a classification problem, it is easy to check by Proposition 2.2 that h has geometric ρ-exponent α if and only if for P := Q ⊕_s μ we have (x ↦ Δ_x^{−1}) ∈ L_{αd}(|2η − 1| dP_X). The latter is a sufficient condition for P to have geometric noise exponent α in the sense of [8]. We can now state our result on learning rates, which is proved in [12].

Theorem 3.3 Let X be the closed unit ball of the Euclidean space ℝ^d, and μ and Q be distributions on X such that dQ = h dμ. For fixed ρ > 0 assume that the density h has both ρ-exponent 0 < q ≤ ∞ and geometric ρ-exponent 0 < α < ∞. We define

λ_n := n^{−(α+1)/(2α+1)} if α ≥ (q+2)/(2q), and λ_n := n^{−2(α+1)(q+1)/(2α(q+2)+3q+4)} otherwise,

and σ_n := λ_n^{−1/((α+1)d)} in both cases. For s := 1/(1+ρ) we write P := Q ⊕_s μ. Then for all ε > 0 there exists a constant C > 0 such that for all x ≥ 1 and all n ≥ 1 the SVM using λ_n and a Gaussian RBF kernel with width 1/σ_n satisfies

(Q ⊗ μ)^n ({(T, T′) ∈ (X × X)^n : R_P(f_{T,T′,λ} + b_{T,T′,λ}) > R_P + C x² n^{−α/(2α+1)+ε}}) ≤ e^{−x}

if α ≥ (q+2)/(2q), and

(Q ⊗ μ)^n ({(T, T′) ∈ X^{2n} : R_P(f_{T,T′,λ} + b_{T,T′,λ}) > R_P + C x² n^{−2α(q+1)/(2α(q+2)+3q+4)+ε}}) ≤ e^{−x}

otherwise. If α = ∞, the latter holds if σ_n = σ is a constant with σ > 2√d.

Remark 3.4 With the help of Theorem 2.7 we immediately obtain rates with respect to the performance measure S_P(·). It turns out that these rates are very similar to those in [5] and [6] for the empirical excess mass approach.
4 Experiments
We present experimental results for anomaly detection problems where the set X is a subset of ℝ^d. Two SVM-type learning algorithms are used to produce functions f which declare the set {x : f(x) < 0} anomalous. These algorithms are compared based on their risk R_P(f). The data in each problem is partitioned into three pairs of sets: the training sets (T, T′), the validation sets (V, V′) and the test sets (W, W′). The sets T, V and W contain samples drawn from Q, and the sets T′, V′ and W′ contain samples drawn from μ. The training and validation sets are used to design f, and the test sets are used to estimate its performance by computing an empirical version of R_P(f) that we denote R_{(W,W′)}(f).

The first learning algorithm is the density level detection support vector machine (DLD-SVM) with Gaussian RBF kernel described in the previous section. With ρ and σ² fixed and the expected value E_{x∼μ} l(−1, f(x) + b) in (4) replaced with an empirical estimate based on T′, this formulation can be solved using, for example, the C-SVC option in the LIBSVM software [14] by setting C = 1 and setting the class weights to w₁ = 1/(|T|(1+ρ)) and w₋₁ = ρ/(|T′|(1+ρ)). The regularization parameters λ and σ² are chosen to (approximately) minimize the empirical risk R_{(V,V′)}(f) on the validation sets. This is accomplished by employing a grid search over λ and a combined grid/iterative search over σ². In particular, for each fixed grid value of λ we seek a minimizer over σ² by evaluating the validation risk at a coarse grid of σ² values and then performing a golden-section search over the interval defined by the two σ² values on either side of the coarse-grid minimum. As the overall search proceeds, the (λ, σ²) pair with the smallest validation risk is retained.
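A compact sketch of this training step using scikit-learn's SVC (a LIBSVM wrapper) in place of calling LIBSVM directly; the class-weight translation follows the formulas above, while function and variable names are our own illustrative choices. With the kernel convention used here, sklearn's gamma plays the role of σ²:

    import numpy as np
    from sklearn.svm import SVC

    def train_dld_svm(T, T_prime, rho, sigma2):
        # Reweighted binary classification: positives T ~ Q, negatives T' ~ mu.
        # Per the text, C = 1 with class weights w_{+1} = 1/(|T|(1+rho)) and
        # w_{-1} = rho/(|T'|(1+rho)); both LIBSVM and sklearn multiply C by
        # the per-class weight.
        X = np.vstack([T, T_prime])
        y = np.concatenate([np.ones(len(T)), -np.ones(len(T_prime))])
        weights = {1: 1.0 / (len(T) * (1.0 + rho)),
                   -1: rho / (len(T_prime) * (1.0 + rho))}
        clf = SVC(C=1.0, kernel="rbf", gamma=sigma2, class_weight=weights)
        return clf.fit(X, y)

    # The validation grid search over (lambda, sigma^2) described above would
    # wrap this call; one simple option (our assumption, not stated in the
    # paper) is to fold lambda in by scaling both class weights by 1/lambda.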
The second learning algorithm is the one-class support vector machine (1CLASS-SVM) introduced by Schölkopf et al. [15]. Due to its one-class nature this method does not use the set T′ in the production of f. Again we employ the Gaussian RBF kernel with parameter σ². The one-class algorithm in Schölkopf et al. contains a parameter ν which controls the size of the set {x ∈ T : f(x) ≤ 0} (and therefore controls the measure Q(f ≤ 0) through generalization). To make a valid comparison with the DLD-SVM we determine ν automatically as a function of ρ. In particular, both ν and σ² are chosen to (approximately) minimize the validation risk using the search procedure described above for the DLD-SVM, where the grid search for λ is replaced by a golden-section search (over [0, 1]) for ν.

Data for the first experiment are generated as follows. Samples of the random variable x ∼ Q are generated by transforming samples of the random variable u that is uniformly distributed over [0, 1]^27. The transform is x = Au, where A is a 10-by-27 matrix whose rows contain between m = 2 and m = 5 non-zero entries with value 1/m. Thus the support of Q is the hypercube [0, 1]^10 and Q is concentrated about its center. Partial overlap in the non-zero entries across the rows of A guarantees that the components of x are partially correlated. We chose μ to be the uniform distribution over [0, 1]^10. Data for the second experiment are identical to the first except that the vector (0, 0, 0, 0, 0, 0, 0, 0, 0, 1) is added to the samples of x with probability 0.5. This gives a bi-modal distribution Q, and since the support of the last component is extended to [0, 2], the corresponding component of μ is also extended to this range. The training and validation set sizes are |T| = 1000, |T′| = 2000, |V| = 500, and |V′| = 2000. The test set sizes |W| = 100,000 and |W′| = 100,000 are large enough to provide very accurate estimates of risk. The λ grid for the DLD-SVM method consists of 15 values ranging from 10^-7 to 1, and the coarse σ² grid for the DLD-SVM and 1CLASS-SVM methods consists of 9 values that range from 10^-3 to 10^2. The learning algorithms are applied for values of ρ ranging from 10^-2 to 10^2. Figure 1(a) plots the risk R_{(W,W′)} versus ρ for the two learning algorithms. In both experiments the performance of DLD-SVM is superior to 1CLASS-SVM at smaller values of ρ. The difference in the bi-modal case is substantial. Comparisons for larger sizes of |T| and |V| yield similar results, but at smaller sample sizes the superiority of DLD-SVM is even more pronounced. This is significant because ρ ≫ 1 appears to have little utility in the general anomaly detection problem, since it defines anomalies in regions where the concentration of Q is much larger than the concentration of μ, which is contrary to our premise that anomalies are not concentrated.

The third experiment considers a real-world application in cybersecurity. The goal is to monitor the network traffic of a computer and determine when it exhibits anomalous behavior. The data for this experiment was collected from an active computer in a normal working environment over the course of 16 months. Twelve features were computed over each 1-hour time frame to give a total of 11664 12-dimensional feature vectors. These features are normalized to the range [0, 1] and treated as samples from Q. We chose μ to be the uniform distribution over [0, 1]^12. The training, validation and test set sizes are |T| = 4000, |T′| = 10000, |V| = 2000, |V′| = 100,000, |W| = 5664 and |W′| = 100,000.
The λ grid for the DLD-SVM method consists of 7 values ranging from 10^-7 to 10^-1, and the coarse σ² grid for the DLD-SVM and 1CLASS-SVM methods consists of 6 values that range from 10^-3 to 10^2. The learning algorithms are applied for values of ρ ranging from 0.05 to 50.0. Figure 1(b) plots the risk R_{(W,W′)} versus ρ for the two learning algorithms. The performance of DLD-SVM is superior to 1CLASS-SVM at all values of ρ.

[Figure 1: two panels plotting R_{(W,W′)} versus ρ; (a) Experiments 1 & 2, with curves DLD-SVM-1, 1CLASS-SVM-1, DLD-SVM-2, 1CLASS-SVM-2; (b) Cybersecurity experiment, with curves DLD-SVM and 1CLASS-SVM.]

Figure 1: Comparison of DLD-SVM and 1CLASS-SVM. The curves with extension -1 and -2 in Figure 1(a) correspond to experiments 1 and 2 respectively.
References
[1] B.D. Ripley. Pattern Recognition and Neural Networks. Cambridge Univ. Press, 1996.
[2] B. Schölkopf and A.J. Smola. Learning with Kernels. MIT Press, 2002.
[3] J.A. Hartigan. Clustering Algorithms. Wiley, New York, 1975.
[4] J.A. Hartigan. Estimation of a convex density contour in 2 dimensions. J. Amer. Statist. Assoc., 82:267-270, 1987.
[5] W. Polonik. Measuring mass concentrations and estimating density contour clusters - an excess mass approach. Ann. Stat., 23:855-881, 1995.
[6] A.B. Tsybakov. On nonparametric estimation of density level sets. Ann. Statist., 25:948-969, 1997.
[7] S. Ben-David and M. Lindenbaum. Learning distributions by their density levels: a paradigm for learning without a teacher. J. Comput. System Sci., 55:171-182, 1997.
[8] C. Scovel and I. Steinwart. Fast rates for support vector machines. Ann. Statist., submitted, 2003. http://www.c3.lanl.gov/~ingo/publications/ann-03.ps.
[9] I. Steinwart, D. Hush, and C. Scovel. A classification framework for anomaly detection. Technical report, Los Alamos National Laboratory, 2004.
[10] A.B. Tsybakov. Optimal aggregation of classifiers in statistical learning. Ann. Statist., 32:135-166, 2004.
[11] E. Mammen and A. Tsybakov. Smooth discrimination analysis. Ann. Statist., 27:1808-1829, 1999.
[12] C. Scovel, D. Hush, and I. Steinwart. Learning rates for support vector machines for density level detection. Technical report, Los Alamos National Laboratory, 2004.
[13] I. Steinwart. Consistency of support vector machines and other regularized kernel machines. IEEE Trans. Inform. Theory, to appear, 2005.
[14] Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines, 2004.
[15] B. Schölkopf, J.C. Platt, J. Shawe-Taylor, and A.J. Smola. Estimating the support of a high-dimensional distribution. Neural Computation, 13:1443-1471, 2001.
Training Connectionist Networks with
Queries and Selective Sampling
Les Atlas
Dept. of E.E.
David Cohn
Dept. of C.S. & E.
Richard Ladner
Dept. of C.S. & E.
M.A. El-Sharkawi, R.J. Marks II, M.E. Aggoune, and D.C. Park
Dept. of E.E.
University of Washington, Seattle, WA 98195
ABSTRACT
"Selective sampling" is a form of directed search that can greatly
increase the ability of a connectionist network to generalize accurately. Based on information from previous batches of samples, a
network may be trained on data selectively sampled from regions
in the domain that are unknown. This is realizable in cases when
the distribution is known, or when the cost of drawing points from
the target distribution is negligible compared to the cost of labeling them with the proper classification. The approach is justified
by its applicability to the problem of training a network for power
system security analysis. The benefits of selective sampling are
studied analytically, and the results are confirmed experimentally.
1 Introduction: Random Sampling vs. Directed Search
A great deal of attention has been applied to the problem of generalization based
on random samples drawn from a distribution, frequently referred to as "learning
from examples." Many natural learning systems, however, do not simply
rely on this passive learning technique, but instead make use of at least some form
of directed search to actively examine the problem domain. In many problems,
directed search is provably more powerful than passively learning from randomly
given examples.
Typically, directed search consists of membership queries, where the learner asks for
the classification of specific points in the domain. Directed search via membership
queries may proceed simply by examining the information already given and determining a region of uncertainty, the area in the domain where the learner believes
mis-classification is still possible. The learner then asks for examples exclusively
from that region.
This paper discusses one form of directed search: selective sampling. In Section 2,
we describe theoretical foundations of directed search and give a formal definition
of selective sampling. In Section 3 we describe a neural network implementation
of this technique, and we discuss the resulting improvements in generalization on a
number of tasks in Section 4.
2 Learning and Selective Sampling
For some arbitrary domain, learning theory defines a concept as being some subset of points in the domain. For example, if our domain is ℝ², we might define a concept as being all points inside a region bounded by some particular rectangle.
A concept class is simply the set of concepts in some description language.
A concept class of particular interest for this paper is that defined by neural network
architectures with a single output node. Architecture refers to the number and types
of units in a network and their connectivity. The configuration of a network specifies
the weights on the connections and the thresholds of the units 1 .
A single-output architecture plus configuration can be seen as a specification of
a concept classifier in that it classifies the set of all points producing a network
output above some threshold value. Similarly, an architecture may be seen as a
specification of a concept class. It consists of all concepts classified by configurations
of the network that the learning rule can produce (figure 1).
Figure 1: A network architecture as a concept class specification
2.1 Generalization and formal learning theory
An instance, or training example, is a pair (x, f(x)) consisting of a point x in
the domain, usually drawn from some distribution P, along with its classification
¹ For the purposes of this discussion, a neural network will be considered to be a feedforward network of neuron-like components that compute a weighted sum of their inputs and modify that sum with a sigmoidal transfer function. The methods described, however, should be equally applicable to other, more general classifiers as well.
according to some target concept f. A concept c is consistent with an instance (x, f(x)) if c(x) = f(x), that is, if the concept produces the same classification of point x as the target. The error(c, f, P) of a concept c, with respect to a target concept f and a distribution P, is the probability that c and f will disagree on a random sample drawn from P.

The generalization problem is posed by formal learning theory as: for a given concept class C, an unknown target f, and an arbitrary error rate ε, how many samples do we have to draw from an arbitrary distribution P in order to find a concept c ∈ C such that error(c, f, P) < ε with high confidence? This problem has been studied for neural networks in (Baum and Haussler, 1989) and (Haussler, 1989).
2.2 R(s^m), the region of uncertainty
If we consider a concept class C and a set s^m of m instances, the classification of some regions of the domain may be implicitly determined; all concepts in C that are consistent with all of the instances may agree in these parts. What we are interested in here is what we define to be the region of uncertainty:

R(s^m) = {x : ∃ c₁, c₂ ∈ C such that c₁, c₂ are consistent with all s ∈ s^m, and c₁(x) ≠ c₂(x)}.

For an arbitrary distribution P, we can define a measure on the size of this region as a = Pr[x ∈ R(s^m)]. In an incremental learning procedure, as we classify and train on more points, a will be monotonically non-increasing. A point that falls outside R(s^m) will leave it unchanged; a point inside will further restrict the region. Thus, a is the probability that a new, random point from P will reduce our uncertainty.

A key point is that since R(s^m) serves as an envelope for consistent concepts, it also bounds the potential error of any consistent hypothesis we choose. If the error of our current hypothesis is ε, then ε < a. Since we have no basis for changing our current hypothesis without a contradicting point, ε is also the probability of an additional point reducing our error.
2.3 Selective sampling is a directed search
Consider the case when the cost of drawing a point from our distribution is small compared to the cost of finding the point's proper classification. Then, after training on n instances, if we have some inexpensive method of testing for membership in R(s^n), we can "filter" points drawn from our distribution, selecting, classifying and training on only those that show promise of improving our representation.

Mathematically, we can approximate this filtering by defining a new distribution P′ that is zero outside R(s^n) but maintains the relative distribution of P. Since the next sample from P′ would be guaranteed to land inside the region, it would have, with high confidence, the effect of at least 1/a samples drawn from P.

The filtering process can be applied iteratively. Start out with the distribution P_{0,n} = P. Inductively, train on n samples chosen from P_{i,n} to obtain a new region of uncertainty, R(s^{in}), and define from it P_{i+1,n} = P′_{i,n}. The total number of training points used to calculate P_{i,n} is m = in.

Selective sampling can be contrasted with random sampling in terms of efficiency. In random sampling, we can view training as a single, non-selective pass where n = m. As the region of uncertainty shrinks, so does the probability that any given additional sample will help. The efficiency of the samples decreases with the error.

By filtering out useless samples before committing resources to them, as we can do in selective sampling, the efficiency of the samples we do classify remains high. In the limit where n = 1, this regimen has the effect of querying: each sample is taken from a region based on the cumulative information from all previous samples, and each one will reduce the size of R(s^m).
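A schematic of this iterative filtering regimen, written as plain Python; in_region stands in for the learner's inexpensive test of membership in the current region of uncertainty, and every name here is illustrative rather than from the paper:

    import numpy as np

    def selective_sampling(draw_P, oracle, in_region, fit, passes, n):
        # draw_P(m): m candidate points from P; oracle(x): expensive true
        # labels; in_region(model, x): boolean mask approximating membership
        # in R(s^m); fit(X, y): retrains on all labeled data so far.
        X = draw_P(n)
        y = oracle(X)
        model = fit(X, y)
        for _ in range(passes - 1):
            batch = np.empty((0, X.shape[1]))
            while len(batch) < n:          # filter candidates until n useful ones
                cand = draw_P(n)
                keep = cand[in_region(model, cand)]
                if len(keep):
                    batch = np.vstack([batch, keep])
            batch = batch[:n]
            X = np.vstack([X, batch])
            y = np.concatenate([y, oracle(batch)])
            model = fit(X, y)
        return model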
3 Training Networks with Selective Sampling
A leading concern in connectionist research is how to achieve good generalization
with a limited number of samples. This suggests that selective sampling, properly
implemented, should be a useful tool for training neural networks.
3.1 A naïve neural network querying algorithm
Since neural networks with real-valued outputs are generally trained to within some tolerance (say, less than 0.1 for a zero and greater than 0.9 for a one), one is tempted to use the part of the domain between these limits as R(s^m) (figure 2).

Figure 2: The region of uncertainty captured by a naïve neural network

The problem with applying this naïve approach to neural networks is that when training, a network tends to become "overly confident" in regions that are still unknown. The R(s^m) chosen by this method will in general be a very small subset of the true region of uncertainty.
3.2 Version-space search and neural networks
Mitchell (1978) describes a learning procedure based on the partial ordering in generality of the concepts being learned. One maintains two sets of plausible hypotheses: S and G. S contains all "most specific" concepts consistent with present information, and G contains all consistent "most general" concepts. The "version space," which is the set of all plausible concepts in the class being considered, lies
between these two bounding sets. Directed search proceeds by examining instances that fall in the difference of S and G. Specifically, the search region for a version-space search is equal to ∪{s △ g : s ∈ S, g ∈ G}. If an instance in this region proves positive, then some s in S will have to generalize to accommodate the new information; if it proves negative, some g in G will have to be modified to exclude it. In either case, the version space, the space of plausible hypotheses, is reduced with every query.

This search region is exactly the R(s^m) that we are attempting to capture. Since S and G consist of the most specific/general concepts in the class we are considering, their analogues are the most specific and most general networks consistent with the known data.

This search may be roughly implemented by training two networks in parallel. One network, which we will label N_S, is trained on the known examples and is also given a large number of random "background" patterns, which it is trained to classify as negative. The global minimum error for N_S is achieved when it classifies all positive training examples as positive and as much else as possible as negative. The result is a "most specific" configuration consistent with the training examples. Similarly, N_G is trained on the known examples and a large number of random background examples which it is to classify as positive. Its global minimum error is achieved when it classifies all negative training examples as negative and as much else as possible as positive.

Assuming our networks N_S and N_G converge to near-global minima, we can now define a region R_{s△g}, the symmetric difference of the outputs of N_S and N_G. Because N_S and N_G lie near opposite extremes of R(s^m), we have captured a well-defined region of uncertainty to search (figure 3).
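The region R_{s△g} can be estimated directly from the two trained networks by comparing their thresholded outputs on candidate points; a brief sketch (net_S and net_G are assumed to expose a 0/1 predict method, an illustrative interface rather than one from the paper):

    import numpy as np

    def sg_disagreement(net_S, net_G, candidates):
        # Candidate points where the "most specific" network N_S and the
        # "most general" network N_G disagree: an estimate of the symmetric
        # difference region from which the next batch of queries is drawn.
        pred_S = np.asarray(net_S.predict(candidates))
        pred_G = np.asarray(net_G.predict(candidates))
        return candidates[pred_S != pred_G]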
3.3 Limitations of the technique
The neural network version-space technique is not without problems in general
application to directed search. One limitation of this implementation of version
Figure 3: R_{s△g} contains the difference between decision regions of N_S and N_G as well as their own regions of uncertainty.
space search is that a version space is bounded by a set of most general and most specific concepts, while an S-G network maintains only one most general and one most specific network. As a result, R_{s△g} will contain only a subset of the true R(s^m).

This limitation is softened by the global minimizing tendency of the networks. As new examples are added and the current N_S (or N_G) is forced to a more general (or specific) configuration, the network will relax to another, now more specific (or general) configuration. The effect is that of a traversal of concepts in S and G. If the number of samples in each pass is kept sufficiently small, all "most general" and "most specific" concepts in R(s^m) may be examined without excessive sampling on one particular configuration.

There is a remaining difficulty inherent in version-space search itself: Haussler (1987) points out that even in some very simple cases, the size of S and G may grow exponentially in the number of examples.

A limitation inherent to neural networks is the necessary assumption that the networks N_S and N_G will in fact converge to global minima, and that they will do so in a reasonable amount of time. This is not always a valid assumption; it has been shown in (Blum and Rivest, 1989) and (Judd, 1988) that the network loading problem is NP-complete, and that finding a global minimum may therefore take an exponential amount of time.
This concern is ameliorated by the fact that if the number of samples in each pass is
kept small, the failure of one network to converge will only result in a small number
of samples being drawn from a less useful area, but will not cause a large-scale
failure of the technique.
4 Experimental Results
Experiments were run on three types of problems: learning a simple square-shaped region in ℝ², learning a 25-bit majority function, and recognizing the secure region of a small power system.
4.1 The square learner
A two-input network with one hidden layer of 8 units was trained on a distribution
of samples that were positive inside a square-shaped region at the center of the
domain and negative elsewhere. This task was chosen because of its intuitive visual
appeal (figure 4).
The results of training an S-G network provide support for the method. As can be seen in the accompanying plots, N_S plots a tight contour around the positive instances, while N_G stretches widely around the negative ones.
4.2 Majority function
Simulations training on a 25-bit majority function were run using selective sampling in 2, 3, 4 and 20 passes, as well as baseline simulations using random sampling for error comparison.
Figure 4: Learning a square by selective sampling
In all cases, there was a significant improvement of the selective sampling passes
over the random sampling ones (figure 5). The randomly sampled passes exhibited a
roughly logarithmic generalization curve, as expected following Blumer et al. (1988).
The selectively sampled passes, however, exhibited a steeper, more exponential drop
in the generalization error, as would be expected from a directed search method.
Furthermore, the error seemed to decrease as the sampling process was broken up
into smaller, more frequent passes, pointing at an increased efficiency of sampling
as new information was incorporated earlier into the sampling process.
[Figure: two panels plotting generalization error versus number of training samples (0 to 200), comparing random sampling with selective sampling (20 passes); one panel uses a linear error scale and the other a logarithmic scale.]
Figure 5: Error rates for random vs. selective sampling
4.3 Power system security analysis
If various load parameters of a power system are within a certain range, the system
is secure. Otherwise it risks thermal overload and brown-out. Previous research
(Aggoune et al., 1989) determined that this problem was amenable to neural network
learning, but that random sampling of the problem domain was inefficient in terms
of samples needed. The fact that arbitrary points in the domain may be analyzed for
stability makes the problem well-suited to learning by means of selective sampling.
A baseline case was tested using 3000 data points representing power system configurations and compared with a two-pass, selectively-sampled data set. The latter
was trained on an initial 1500 points, then on a second 1500 derived from a S-G
network as described in the previous section. The error for the baseline case was
0.86% while that of the selectively sampled case was 0.56%.
5 Discussion
In this paper we have presented a theory of selective sampling, described a connectionist implementation of the theory, and examined the performance of the resulting
system in several domains.
The implementation presented, the S-G network, is notable in that, even though
it is an imperfect implementation of the theory, it marks a sharp departure from
the standard method of training neural networks. Here, the network itself decides
what samples are worth considering and training on. The results appear to give
near-exponential improvements over standard techniques.
The task of active learning is an important one; in the natural world much learning
is directed at least somewhat by the learner. We feel that this theory and these
experiments are just initial forays into the promising area of self-training networks.
Acknowledgements
This work was supported by the National Science Foundation, the Washington
Technology Center, and the IBM Corporation. Part of this work was done while D.
Cohn was at IBM T.J. Watson Research Center, Yorktown Heights, NY 10598.
References
M. Aggoune, L. Atlas, D. Cohn, M. Damborg, M. El-Sharkawi, and R. Marks II. Artificial neural networks for power system static security assessment. In Proceedings, International Symposium on Circuits and Systems, 1989.
Eric Baum and David Haussler. What size net gives valid generalization? In Neural Information Processing Systems, Morgan Kaufmann, 1989.
Anselm Blumer, Andrej Ehrenfeucht, David Haussler, and Manfred Warmuth. Learnability and the Vapnik-Chervonenkis dimension. UCSC Tech Report UCSC-CRL-87-20, October 1988.
Avrim Blum and Ronald Rivest. Training a 3-node neural network is NP-complete. In Neural Information Processing Systems, Morgan Kaufmann, 1989.
David Haussler. Learning conjunctive concepts in structural domains. In Proceedings, AAAI '87, pages 466-470, 1987.
David Haussler. Generalizing the PAC model for neural nets and other learning applications. UCSC Tech Report UCSC-CRL-89-30, September 1989.
Stephen Judd. On the complexity of loading shallow neural networks. Journal of Complexity, 4:177-192, 1988.
Tom Mitchell. Version spaces: an approach to concept learning. Tech Report CS-78-711, Dept. of Computer Science, Stanford Univ., 1978.
Leslie Valiant. A theory of the learnable. Communications of the ACM, 27:1134-1142, 1984.
Semi-supervised Learning with Penalized
Probabilistic Clustering
Zhengdong Lu and Todd K. Leen
Department of Computer Science and Engineering
OGI School of Science and Engineering, OHSU
Beaverton, OR 97006
{zhengdon,tleen}@cse.ogi.edu
Abstract
While clustering is usually an unsupervised operation, there are circumstances in which we believe (with varying degrees of certainty) that items
A and B should be assigned to the same cluster, while items A and C
should not. We would like such pairwise relations to influence cluster
assignments of out-of-sample data in a manner consistent with the prior
knowledge expressed in the training set. Our starting point is probabilistic clustering based on Gaussian mixture models (GMM) of the data
distribution. We express clustering preferences in the prior distribution
over assignments of data points to clusters. This prior penalizes cluster
assignments according to the degree with which they violate the preferences. We fit the model parameters with EM. Experiments on a variety
of data sets show that PPC can consistently improve clustering results.
1 Introduction
While clustering is usually executed completely unsupervised, there are circumstances in
which we have prior belief that pairs of samples should (or should not) be assigned to
the same cluster. Such pairwise relations may arise from a perceived similarity (or dissimilarity) between samples, or from a desire that the algorithmically generated clusters
match the geometric cluster structure perceived by the experimenter in the original data.
Continuity, which suggests that neighboring pairs of samples in a time series or in an image
are likely to belong to the same class of object, is also a source of clustering preferences.
We would like these preferences to be incorporated into the cluster structure so that the
assignment of out-of-sample data to clusters captures the concept(s) that give rise to the
preferences expressed in the training data.
Some work [1, 2, 3] has been done on adapting traditional clustering methods, such as K-means, to incorporate pairwise relations. These models are based on hard clustering and
the clustering preferences are expressed as hard pairwise constraints that must be satisfied. While this work was in progress, we became aware of the algorithm of Shental et al.
[4] who propose a Gaussian mixture model (GMM) for clustering that incorporates hard
pairwise constraints.
In this paper, we propose a soft clustering algorithm based on GMM that expresses cluster-
ing preferences (in the form of pairwise relations) in the prior probability on assignments of
data points to clusters. This framework naturally accommodates both hard constraints and
soft preferences in a framework in which the preferences are expressed as a Bayesian probability that pairs of points should (or should not) be assigned to the same cluster. We call
the algorithm Penalized Probabilistic Clustering (PPC). Experiments on several datasets
demonstrate that PPC can consistently improve the clustering result by incorporating reliable prior knowledge.
2 Prior Knowledge for Cluster Assignments
PPC begins with a standard GMM,

P(x|Θ) = Σ_{α=1}^M π_α P(x|α, θ_α),

where Θ = (π₁, …, π_M, θ₁, …, θ_M). We augment the dataset X = {x_i}, i = 1, …, N with (latent) cluster assignments Z = {z(x_i)}, i = 1, …, N to form the familiar complete data (X, Z). The complete data likelihood is

P(X, Z|Θ) = P(X|Z, Θ) P(Z|Θ).    (1)
2.1 Prior distribution in latent space
We incorporate our clustering preferences by manipulating the prior probability P(Z|Θ). In the standard Gaussian mixture model, the prior distribution is trivial: P(Z|Θ) = Π_i π_{z_i}. We incorporate prior knowledge (our clustering preferences) through a weighting function g(Z) that has large values when the assignment of data points to clusters Z conforms to our preferences, and low values when Z conflicts with our preferences. Hence we write

P(Z|Θ, G) = (Π_i π_{z_i} g(Z)) / (Σ_{Z′} Π_j π_{z′_j} g(Z′)) ≡ (1/K) Π_i π_{z_i} g(Z),    (2)
where the sum is over all possible assignments of the data to clusters. The likelihood
of the data, given a specific cluster assignment, is independent of the cluster assignment
preferences, and so the complete data likelihood is
P(X, Z|Θ, G) = P(X|Z, Θ) (1/K) Π_i π_{z_i} g(Z) = (1/K) P(X, Z|Θ) g(Z),    (3)

where P(X, Z|Θ) is the complete data likelihood for a standard GMM. The data likelihood is the sum of the complete data likelihood over all possible Z, that is, L(X|Θ) = P(X|Θ, G) = Σ_Z P(X, Z|Θ, G), which can be maximized with the EM algorithm. Once the model parameters are fit, we do soft clustering of new data according to the posterior probabilities p(α|x, Θ). (Note that cluster assignment preferences are not expressed for the new data, only for the training data.)
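Out-of-sample soft clustering therefore reduces to the ordinary GMM posterior; a minimal sketch using SciPy's Gaussian density (names illustrative, Gaussian components assumed):

    import numpy as np
    from scipy.stats import multivariate_normal

    def responsibilities(x, pis, mus, covs):
        # p(alpha | x, Theta) for a fitted mixture: pis are mixing weights,
        # mus/covs the component means and covariances.
        dens = np.array([pi * multivariate_normal.pdf(x, mean=mu, cov=cov)
                         for pi, mu, cov in zip(pis, mus, covs)])
        return dens / dens.sum()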
2.2 Pairwise relations
Pairwise relations provide a special case of the framework discussed above. We specify
two types of pairwise relations:
• link: two samples should be assigned to one cluster;
• do-not-link: two samples should be assigned to different clusters.
The weighting factor given to the cluster assignment configuration Z is simple:
Y
g(Z) =
exp(Wijp ?(zi , zj )),
i,j
where ? is the Kronecker ?-function and Wijp is the weight associated with sample pair
(xi , xj ). It satisfies
p
Wijp ? [??, ?], Wijp = Wji
.
The weight W^p_{ij} reflects our preference and confidence in assigning xᵢ and xⱼ to the same cluster. We use a positive W^p_{ij} when we prefer to assign xᵢ and xⱼ to one cluster (link), and a negative W^p_{ij} when we prefer to assign them to different clusters (do-not-link). The value |W^p_{ij}| reflects how certain we are of the preference. If W^p_{ij} = 0, we have no prior knowledge on the assignment relevancy of xᵢ and xⱼ. In the extreme cases where |W^p_{ij}| → ∞, the assignments Z violating the pairwise relations about xᵢ and xⱼ have zero prior probability, since for those assignments
$$P(Z|\Theta, G) = \frac{\prod_n \pi_{z_n} \prod_{i,j} \exp\!\big(W^p_{ij}\,\delta(z_i, z_j)\big)}{\sum_{Z'} \prod_n \pi_{z'_n} \prod_{i,j} \exp\!\big(W^p_{ij}\,\delta(z'_i, z'_j)\big)} \;\to\; 0.$$
Then the relations become hard constraints, while the relations with |W^p_{ij}| < ∞ are called soft preferences. In the remainder of this paper, we will use W^p to denote the prior knowledge on pairwise relations, that is,
$$P(X, Z|\Theta, W^p) = \frac{1}{K}\, P(X, Z|\Theta) \prod_{i,j} \exp\!\big(W^p_{ij}\,\delta(z_i, z_j)\big). \qquad (4)$$
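To make the weighting concrete, the sketch below (our illustration, not code from the paper; the function names are ours) evaluates log g(Z) and the unnormalized penalized prior of Eq. (2) for a candidate assignment:

```python
import numpy as np

def log_g(Z, W):
    """log g(Z) = sum_{i,j} W_ij * delta(z_i, z_j), the log of the weighting
    factor. W is the (N, N) symmetric matrix of pairwise weights W^p_ij with
    zero diagonal; Z is an (N,) vector of integer cluster assignments."""
    same = (Z[:, None] == Z[None, :])   # delta(z_i, z_j) for every ordered pair
    return float(np.sum(W * same))

def log_unnormalized_prior(Z, pi, W):
    """log of prod_i pi_{z_i} * g(Z), the numerator of Eq. (2)."""
    return float(np.sum(np.log(pi[Z]))) + log_g(Z, W)
```

A link corresponds to W[i, j] > 0 and a do-not-link to W[i, j] < 0. Because W is symmetric and the product runs over ordered pairs, each unordered pair contributes twice, which is where the factor 2W^p_{ij} in Eq. (5) below comes from. Note that the normalizer K would require summing over all Mᴺ assignments, which is why the paper works with cliques and Gibbs sampling.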
2.3 Model fitting
We use the EM algorithm [5] to fit the model parameters Θ:
$$\Theta^{*} = \arg\max_{\Theta} L(X|\Theta, G).$$
The expectation step (E-step) and maximization step (M-step) are
$$\text{E-step:}\quad Q(\Theta, \Theta^{(t-1)}) = E_{Z|X}\big(\log P(X, Z|\Theta, G)\,\big|\,X, \Theta^{(t-1)}, G\big)$$
$$\text{M-step:}\quad \Theta^{(t)} = \arg\max_{\Theta} Q(\Theta, \Theta^{(t-1)})$$
In the M-step, the optimal mean and covariance matrix of each component are
$$\mu_k = \frac{\sum_{j=1}^{N} x_j\, P(k|x_j, \Theta^{(t-1)}, G)}{\sum_{j=1}^{N} P(k|x_j, \Theta^{(t-1)}, G)}, \qquad \Sigma_k = \frac{\sum_{j=1}^{N} P(k|x_j, \Theta^{(t-1)}, G)\,(x_j - \mu_k)(x_j - \mu_k)^{T}}{\sum_{j=1}^{N} P(k|x_j, \Theta^{(t-1)}, G)}.$$
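As a sketch of these updates (ours, not the authors' code; it assumes the responsibilities P(k|xⱼ, Θ^(t−1), G) have already been computed by the constrained posterior inference of section 2.4):

```python
import numpy as np

def m_step_gaussians(X, R):
    """Responsibility-weighted mean and covariance updates.

    X : (N, d) data matrix; R : (N, M) posteriors P(k | x_j, Theta^(t-1), G).
    Returns mu (M, d) and Sigma (M, d, d)."""
    Nk = R.sum(axis=0)                       # effective count per component
    mu = (R.T @ X) / Nk[:, None]
    M, d = mu.shape
    Sigma = np.empty((M, d, d))
    for k in range(M):
        Xc = X - mu[k]                       # center around component mean
        Sigma[k] = (R[:, k, None] * Xc).T @ Xc / Nk[k]
    return mu, Sigma
```

The update code itself is identical to that of a standard GMM; all of the effect of the pairwise prior enters through the responsibilities R.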
However, the update of the prior probability of each component is more difficult than for the standard GMM; we need to find
$$\pi \equiv \{\pi_1, \ldots, \pi_M\} = \arg\max_{\pi}\; \sum_{l=1}^{M} \sum_{i=1}^{N} \log \pi_l\, P(l|x_i, \Theta^{(t-1)}, G) \;-\; \log K(\pi).$$
In this paper, we use a numerical method to find the solution.
2.4 Posterior Inference and Gibbs sampling
The M-step requires the cluster membership posterior. Computing this posterior is simple for the standard GMM, since each data point xᵢ can be assigned to a cluster independently of the other data points and we have the familiar cluster origin posterior p(zᵢ = k|xᵢ, Θ). For the PPC model, calculating the posteriors is no longer trivial. If two sample points xᵢ and xⱼ participate in a pairwise relation, equation (4) tells us
$$P(z_i, z_j|X, \Theta, W^p) \neq P(z_i|X, \Theta, W^p)\,P(z_j|X, \Theta, W^p),$$
and the posterior probability of xᵢ and xⱼ cannot be computed separately.
For pairwise relations, the joint posterior distribution must be calculated over the entire
transitive closure of the ?link? or ?do-not-link? relations. See Fig. 1 for an illustration.
Figure 1: (a) Links (solid lines) and do-not-links (dotted lines) among six samples; (b) relevancy (solid lines) translated from the links in (a).
In the remainder of this paper, we will refer to the smallest sets of samples whose posterior assignment probabilities can be calculated independently as cliques. The posterior probability of a given sample xᵢ in a clique T is calculated by marginalizing the posterior over the entire clique,
$$P(z_i = k|X, \Theta, W^p) = \sum_{Z_T:\, z_i = k} P(Z_T|X_T, \Theta, W^p),$$
with the posterior on the clique given by
$$P(Z_T|X_T, \Theta, W^p) = \frac{P(Z_T, X_T|\Theta, W^p)}{P(X_T|\Theta, W^p)} = \frac{P(Z_T, X_T|\Theta, W^p)}{\sum_{Z'_T} P(Z'_T, X_T|\Theta, W^p)}.$$
Computing the posterior probability of a sample in clique T requires time complexity O(M^{|T|}), where |T| is the size of clique T and M is the number of components in the mixture model. This is very expensive if |T| is big and the model size M ≥ 2. Hence small cliques are required to make the marginalization computationally reasonable.
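A minimal brute-force version of this marginalization (our illustration) makes the O(M^{|T|}) cost explicit:

```python
import itertools
import numpy as np

def clique_posterior(log_joint, W):
    """Marginal posteriors P(z_t = k | X_T, Theta, W^p) for a clique T.

    log_joint : (T, M) array of log[pi_k * p(x_t | theta_k)] for clique members.
    W         : (T, T) symmetric pairwise weights restricted to the clique."""
    T, M = log_joint.shape
    assignments = list(itertools.product(range(M), repeat=T))  # M^T of them
    scores = np.empty(len(assignments))
    for s, Z in enumerate(assignments):
        Z = np.asarray(Z)
        same = (Z[:, None] == Z[None, :])
        scores[s] = log_joint[np.arange(T), Z].sum() + np.sum(W * same)
    w = np.exp(scores - scores.max())
    w /= w.sum()                             # posterior over whole assignments
    post = np.zeros((T, M))
    for weight, Z in zip(w, assignments):
        for t, k in enumerate(Z):
            post[t, k] += weight             # marginalize onto each member
    return post
```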
In some circumstances it is natural to limit ourselves to the special case of pairwise relations with |T| ≤ 2, called non-overlapping relations; see Fig. 2 for an illustration. More generally, we can avoid the expensive computation in posterior inference by breaking large cliques into many small ones. To do this, we need to ignore some links or do-not-links. In section 3.2, we will give an application of this idea.
For some choices of g(Z), the posterior probability can be given in a simple form even
when the clique is big. One example is when there are only hard links. This case is useful
when we are sure that a group of samples are from one source. For more general cases,
where exact inference is computationally prohibitive, we propose to use Gibbs sampling
[6] to estimate the posterior probability.
Figure 2: (a) Overlapping pairwise relations; (b) non-overlapping pairwise relations.
In Gibbs sampling, we estimate P(zᵢ|X, Θ, G) as a sample mean:
$$P(z_i = k|X, \Theta, G) = E\big(\delta(z_i, k)\,\big|\,X, \Theta, G\big) \;\approx\; \frac{1}{S} \sum_{t=1}^{S} \delta\big(z_i^{(t)}, k\big),$$
where the sum is over a sequence of S samples from P(Z|X, Θ, G) generated by the Gibbs MCMC. The tth sample in the sequence is generated by the usual Gibbs sampling technique:
• Pick z₁⁽ᵗ⁾ from the distribution P(z₁ | z₂⁽ᵗ⁻¹⁾, z₃⁽ᵗ⁻¹⁾, …, z_N⁽ᵗ⁻¹⁾, X, G, Θ)
• Pick z₂⁽ᵗ⁾ from the distribution P(z₂ | z₁⁽ᵗ⁾, z₃⁽ᵗ⁻¹⁾, …, z_N⁽ᵗ⁻¹⁾, X, G, Θ)
• …
• Pick z_N⁽ᵗ⁾ from the distribution P(z_N | z₁⁽ᵗ⁾, z₂⁽ᵗ⁾, …, z_{N−1}⁽ᵗ⁾, X, G, Θ)
For pairwise relations it is helpful to introduce some notation. Let Z₋ᵢ denote an assignment of data points to clusters that leaves out the assignment of xᵢ. Let U(i) be the indices of the set of samples that participate in a pairwise relation with sample xᵢ, U(i) = {j : W^p_{ij} ≠ 0}. Then we have
$$P(z_i|Z_{-i}, X, \Theta, W^p) \;\propto\; P(x_i, z_i|\Theta) \prod_{j \in U(i)} \exp\!\big(2 W^p_{ij}\,\delta(z_i, z_j)\big). \qquad (5)$$
When W^p is sparse, the size of U(i) is small; thus calculating P(zᵢ|Z₋ᵢ, X, Θ, W^p) is very cheap and Gibbs sampling can effectively estimate the posterior probability.
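A sketch of such a Gibbs sweep (ours, for illustration), assuming the per-point log joints log[π_k p(xᵢ|θ_k)] are precomputed:

```python
import numpy as np

def gibbs_posterior(log_joint, W, n_sweeps=200, burn_in=50, seed=0):
    """Estimate P(z_i = k | X, Theta, W^p) by Gibbs sampling from Eq. (5).

    log_joint : (N, M) array of log[pi_k * p(x_i | theta_k)].
    W         : (N, N) symmetric matrix of pairwise weights (mostly zero)."""
    rng = np.random.default_rng(seed)
    N, M = log_joint.shape
    Z = log_joint.argmax(axis=1)             # start from the pointwise MAP
    neighbors = [np.flatnonzero(W[i]) for i in range(N)]   # the sets U(i)
    counts = np.zeros((N, M))
    for t in range(n_sweeps):
        for i in range(N):
            logp = log_joint[i].copy()
            for j in neighbors[i]:           # exp(2 W_ij delta(z_i, z_j)) terms
                logp[Z[j]] += 2.0 * W[i, j]
            p = np.exp(logp - logp.max())
            Z[i] = rng.choice(M, p=p / p.sum())
        if t >= burn_in:
            counts[np.arange(N), Z] += 1.0   # accumulate delta(z_i^(t), k)
    return counts / counts.sum(axis=1, keepdims=True)
```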
3 Experiments
3.1 Clustering with different numbers of hard pairwise constraints
In this experiment, we demonstrate how the number of pairwise relations affects the performance of clustering. We apply the PPC model to three UCI data sets: Iris, Waveform, and Pendigits. The Iris data set has 150 samples and three classes, with 50 samples in each class; the Waveform data set has 5000 samples and three classes, with 33% of the samples in each class; the Pendigits data set includes four classes (digits 0, 6, 8, 9), each with 750 samples. All data sets have labels for all samples, which are used to generate the relations and to evaluate performance.
We try PPC (with the number of components equal to the number of classes) with various numbers of pairwise relations. For each number of relations, we conduct 100 runs and calculate the averaged classification accuracy. In each run, the data set is randomly split into a training set (90%) and a test set (10%). The pairwise relations are generated as follows: we randomly pick two samples from the training set without replacement and check their labels. If the two have the same label, we add a link constraint between them; otherwise, we add a do-not-link constraint. Note that the generated pairwise relations are non-overlapping, as described in section 2.4. The model fitted on the training set is applied to the test set. Experiment results on the three data sets are shown in Fig. 3 (a), (b), and (c), respectively. As Fig. 3 indicates, PPC consistently improves its clustering accuracy on the training set when more pairwise constraints are added; also, the effect brought by the constraints generalizes to the test set.
Figure 3: The performance of PPC with various numbers of relations: averaged classification accuracy on the training and test sets versus the number of relations, for (a) the Iris data, (b) the Waveform data, and (c) the Pendigits data.
3.2 Hard pairwise constraints for encoding partial labels
The experiment in this subsection shows the application of pairwise constraints to partially labeled data. For example, consider a problem with six classes A, B, …, F. The classes are grouped into several class-sets C₁ = {A, B, C}, C₂ = {D, E}, C₃ = {F}. The samples are partially labeled in the sense that we are told which class-set a sample is from, but not which specific class it is from. We can logically derive a do-not-link constraint between any pair of samples known to belong to different class-sets, while no link constraint can be derived if each class-set has more than one class in it.
Fig. 4 (a) is a 120×400 region of the Greenland ice sheet from the NASA Langley DAAC. This region is partially labeled into a snow area and a non-snow area, as indicated in Fig. 4 (b). The snow area can be ice, melting snow, or dry snow, while the non-snow area can be bare land, water, or cloud. Each pixel has attributes from seven spectral bands. To segment the image, we first divide the image into 5×5×7 blocks (175-dimensional vectors). We use the first 50 principal components as feature vectors.
For PPC, we use half of the data samples for the training set and the rest for testing. Hard do-not-link constraints (only on the training set) are generated as follows: for each block in the non-snow area, we randomly choose (without replacement) six blocks from the snow area to build do-not-link constraints. By doing this, we achieve cliques of size seven (1 non-snow block + 6 snow blocks). As in section 3.1, we apply the model fitted with PPC to the test set and combine the clustering results on both data sets into a complete picture. Typical clustering results of the 3-component standard GMM and the 3-component PPC are shown in Fig. 4 (c) and (d), respectively. As Fig. 4 shows, the standard GMM gives a clustering that is clearly in disagreement with the human labeling in Fig. 4 (b). The PPC segmentation makes far fewer mis-assignments of snow areas (tagged white and gray) to non-snow (black) than does the GMM, and it properly labels almost all of the non-snow regions as non-snow. Furthermore, the segmentation of the snow areas into the two classes (not labeled) tagged white and gray in Fig. 4 (d) reflects subtle differences in the snow regions captured by the gray-scale image from spectral channel 2, shown in Fig. 4 (a).
Figure 4: (a) Gray-scale image from spectral channel 2. (b) Partial label given by an expert: black pixels denote the non-snow area and white pixels denote the snow area. Clustering results of the standard GMM (c) and PPC (d); (c) and (d) are colored according to the image blocks' cluster assignments.
3.3 Soft pairwise preferences for texture image segmentation
In this subsection, we propose an unsupervised texture image segmentation algorithm as an application of the PPC model. As in section 3.2, the image is divided into blocks that are rearranged into feature vectors. We use a GMM to model those feature vectors, hoping that each Gaussian component represents one texture. However, the standard GMM often fails to give a good segmentation because it cannot make use of the spatial continuity of the image, which is essential in many image segmentation models, such as random fields [7]. In our algorithm, the spatial continuity is incorporated as soft link preferences with uniform weight between each block and its neighbors. The complete data likelihood is
$$P(X, Z|\Theta, W^p) = \frac{1}{K}\, P(X, Z|\Theta) \prod_{i} \prod_{j \in U(i)} \exp\!\big(w\,\delta(z_i, z_j)\big), \qquad (6)$$
where U(i) denotes the neighbors of the ith block. The EM algorithm can be roughly interpreted as iterating two steps: 1) estimating the texture description (the parameters of the mixture model) based on the segmentation, and 2) segmenting the image based on the texture description given by step 1. Gibbs sampling is used to estimate the posterior probability in each EM iteration. Equation (5) reduces to
$$P(z_i|Z_{-i}, X, \Theta, W^p) \;\propto\; P(x_i, z_i|\Theta) \prod_{j \in U(i)} \exp\!\big(2w\,\delta(z_i, z_j)\big).$$
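The uniform-weight link structure could be built as follows (an illustrative sketch of ours; the block indexing convention is our assumption):

```python
import numpy as np

def grid_soft_links(rows, cols, w):
    """Soft-link weight matrix for image blocks on a rows x cols grid:
    W[i, j] = w for every 4-connected pair of blocks (row-major indexing)."""
    N = rows * cols
    W = np.zeros((N, N))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            if c + 1 < cols:                 # right neighbor
                W[i, i + 1] = W[i + 1, i] = w
            if r + 1 < rows:                 # bottom neighbor
                W[i, i + cols] = W[i + cols, i] = w
    return W
```

Feeding this W into the Gibbs sampler of section 2.4 inside each EM iteration yields the segmentation procedure used below.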
The image shown in Fig. 5 (a) is combined from four Brodatz textures.¹ This image is divided into 7×7 blocks which are then rearranged into 49-dimensional vectors. We use those vectors' first five principal components as the associated feature vectors. For the PPC model, soft links with weight w are added between each block and its four neighbors, as shown in Fig. 5 (b). Typical clustering results of the 4-component standard GMM and the 4-component PPC with w = 2 are shown in Fig. 5 (c) and Fig. 5 (d), respectively. PPC clearly achieves a better segmentation after incorporating spatial continuity.
¹ Downloaded from http://sipi.usc.edu/services/database/Database.html, April 2004.
Figure 5: (a) Texture combination. (b) One block and its four neighbors. Clustering results of the standard GMM (c) and PPC (d); (c) and (d) are shaded according to the blocks' cluster assignments.
4 Conclusion and Discussion
We have proposed a probabilistic clustering model that incorporates prior knowledge in the form of pairwise relations between samples. Unlike previous work in semi-supervised clustering, this work formulates clustering preferences as a Bayesian prior over the assignment of data points to clusters, and so naturally accommodates both hard constraints and soft preferences. To address the computational difficulty brought by large cliques, we proposed a Markov chain estimation method to reduce the computational cost. Experiments on different data sets show that pairwise relations can consistently improve the performance of the clustering process.
Acknowledgments
The authors thank Ashok Srivastava for helpful conversations. This work was funded by NASA Collaborative Agreement NCC 2-1264.
References
[1] K. Wagstaff, C. Cardie, S. Rogers, and S. Schroedl. Constrained K-means clustering with background knowledge. In Proceedings of the Eighteenth International Conference on Machine Learning, pages 577-584, 2001.
[2] S. Basu, A. Banerjee, and R. Mooney. Semi-supervised clustering by seeding. In Proceedings of the Nineteenth International Conference on Machine Learning, pages 19-26, 2002.
[3] D. Klein, S. Kamvar, and C. Manning. From instance-level to space-level constraints: making the most of prior knowledge in data clustering. In Proceedings of the Nineteenth International Conference on Machine Learning, pages 307-313, 2002.
[4] N. Shental, A. Bar-Hillel, T. Hertz, and D. Weinshall. Computing Gaussian mixture models with EM using equivalence constraints. In Advances in Neural Information Processing Systems, volume 15, 2003.
[5] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39:1-38, 1977.
[6] R. Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, Department of Computer Science, University of Toronto, 1993.
[7] C. Bouman and M. Shapiro. A multiscale random field model for Bayesian image segmentation. IEEE Trans. Image Processing, 3:162-177, March 1994.
Implicit Wiener Series for Higher-Order Image Analysis
Matthias O. Franz
Bernhard Schölkopf
Max-Planck-Institut für biologische Kybernetik
Spemannstr. 38, D-72076 Tübingen, Germany
mof;bs@tuebingen.mpg.de
Abstract
The computation of classical higher-order statistics such as higher-order
moments or spectra is difficult for images due to the huge number of
terms to be estimated and interpreted. We propose an alternative approach in which multiplicative pixel interactions are described by a series of Wiener functionals. Since the functionals are estimated implicitly
via polynomial kernels, the combinatorial explosion associated with the
classical higher-order statistics is avoided. First results show that image
structures such as lines or corners can be predicted correctly, and that
pixel interactions up to the order of five play an important role in natural
images.
Most of the interesting structure in a natural image is characterized by its higher-order
statistics. Arbitrarily oriented lines and edges, for instance, cannot be described by the
usual pairwise statistics such as the power spectrum or the autocorrelation function: From
knowing the intensity of one point on a line alone, we cannot predict its neighbouring
intensities. This would require knowledge of a second point on the line, i.e., we have
to consider some third-order statistics which describe the interactions between triplets of
points. Analogously, the prediction of a corner neighbourhood needs at least fourth-order
statistics, and so on.
In terms of Fourier analysis, higher-order image structures such as edges or corners are
described by phase alignments, i.e. phase correlations between several Fourier components
of the image. Classically, harmonic phase interactions are measured by higher-order spectra
[4]. Unfortunately, the estimation of these spectra for high-dimensional signals such as
images involves the estimation and interpretation of a huge number of terms. For instance, a sixth-order spectrum of a 16×16 image contains roughly 10¹² coefficients, about 10¹⁰ of which would have to be estimated independently if all symmetries in the spectrum are
considered. First attempts at estimating the higher-order structure of natural images were
therefore restricted to global measures such as skewness or kurtosis [8], or to submanifolds
of fourth-order spectra [9].
Here, we propose an alternative approach that models the interactions of image points
in a series of Wiener functionals. A Wiener functional of order n captures those image
components that can be predicted from the multiplicative interaction of n image points. In
contrast to higher-order spectra or moments, the estimation of a Wiener model does not
require the estimation of an excessive number of terms since it can be computed implicitly
via polynomial kernels. This allows us to decompose an image into components that are
characterized by interactions of a given order.
In the next section, we introduce the Wiener expansion and discuss its capability of modeling higher-order pixel interactions. The implicit estimation method is described in Sect. 2,
followed by some examples of use in Sect. 3. We conclude in Sect. 4 by briefly discussing
the results and possible improvements.
1 Modeling pixel interactions with Wiener functionals
For our analysis, we adopt a prediction framework: given a d × d neighbourhood of an image pixel, we want to predict its gray value from the gray values of the neighbours. We are particularly interested in the extent to which interactions of different orders contribute to the overall prediction. Our basic assumption is that the dependency of the central pixel value y on its neighbours xᵢ, i = 1, …, m = d² − 1, can be modeled as a series
$$y = H_0[x] + H_1[x] + H_2[x] + \cdots + H_n[x] + \cdots \qquad (1)$$
of discrete Volterra functionals
$$H_0[x] = h_0 = \text{const.} \quad\text{and}\quad H_n[x] = \sum_{i_1=1}^{m} \cdots \sum_{i_n=1}^{m} h^{(n)}_{i_1 \ldots i_n}\, x_{i_1} \cdots x_{i_n}. \qquad (2)$$
Here, we have stacked the gray values of the neighbourhood into the vector x = (x₁, …, x_m)ᵀ ∈ ℝᵐ. The discrete nth-order Volterra functional is, accordingly, a linear combination of all ordered nth-order monomials of the elements of x, with mⁿ coefficients h^{(n)}_{i₁…iₙ}. Volterra functionals provide a controlled way of introducing multiplicative interactions of image points, since a functional of order n contains all products of the input of order n. In terms of higher-order statistics, this means that we can control the order of the statistics used, since an nth-order Volterra series leads to dependencies between maximally n + 1 pixels.
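To see the combinatorial cost that the implicit approach of Section 2 avoids, consider this naive sketch (ours) of the explicit monomial map and functional evaluation; for a 5 × 5 neighbourhood (m = 24) and n = 5, φₙ already has 24⁵ ≈ 8 × 10⁶ entries:

```python
import itertools
import numpy as np

def phi_n(x, n):
    """All m^n ordered degree-n monomials of x: the explicit feature map
    that the kernel trick in Section 2 avoids materializing."""
    return np.array([np.prod([x[i] for i in idx])
                     for idx in itertools.product(range(len(x)), repeat=n)])

def volterra_term(x, h, n):
    """H_n[x] as a scalar product of the coefficient vector h with phi_n(x)."""
    return float(h @ phi_n(x, n))
```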
Unfortunately, Volterra functionals are not orthogonal to each other, i.e., depending on the
input distribution, a functional of order n generally leads to additional lower-order interactions. As a result, the output of the functional will contain components that are proportional
to that of some lower-order monomials. For instance, the output of a second-order Volterra
functional for Gaussian input generally has a mean different from zero [5]. If one wants
to estimate the zeroeth-order component of an image (i.e., the constant component created
without pixel interactions), the constant component created by the second-order interactions needs to be subtracted. For general Volterra series, this correction can be achieved by decomposing the series into a new series y = G₀[x] + G₁[x] + ⋯ + Gₙ[x] + ⋯ of functionals
Gn [x] that are uncorrelated, i.e., orthogonal with respect to the input. The resulting Wiener
functionals¹ Gₙ[x] are linear combinations of Volterra functionals up to order n. They
are computed from the original Volterra series by a procedure akin to Gram-Schmidt orthogonalization [5]. It can be shown that any Wiener expansion of finite degree minimizes
the mean squared error between the true system output and its Volterra series model [5].
The orthogonality condition ensures that a Wiener functional of order n captures only the
component of the image created by the multiplicative interaction of n pixels. In contrast to
general Volterra functionals, a Wiener functional is orthogonal to all monomials of lower
order [5].
So far, we have not gained anything compared to classical estimation of higher-order moments or spectra: an nth-order Volterra functional contains the same number of terms as
¹ Strictly speaking, the term Wiener functional is reserved for orthogonal Volterra functionals with respect to Gaussian input. Here, the term will be used for orthogonalized Volterra functionals with arbitrary input distributions.
the corresponding (n + 1)-order spectrum, and a Wiener functional of the same order has an
even higher number of coefficients as it consists also of lower-order Volterra functionals.
In the next section, we will introduce an implicit representation of the Wiener series using
polynomial kernels which allows for an efficient computation of the Wiener functionals.
2 Estimating Wiener series by regression in RKHS
Volterra series as linear functionals in RKHS. The nth-order Volterra functional is a weighted sum of all nth-order monomials of the input vector x. We can interpret the evaluation of this functional for a given input x as a map φₙ defined for n = 0, 1, 2, … as
$$\phi_0(x) = 1 \quad\text{and}\quad \phi_n(x) = \big(x_1^n,\; x_1^{n-1}x_2,\; \ldots,\; x_1 x_2^{n-1},\; x_2^n,\; \ldots,\; x_m^n\big) \qquad (3)$$
such that φₙ maps the input x ∈ ℝᵐ into a vector φₙ(x) ∈ Fₙ = ℝ^{mⁿ} containing all mⁿ ordered monomials of degree n. Using φₙ, we can write the nth-order Volterra functional in Eq. (2) as a scalar product in Fₙ,
$$H_n[x] = \eta_n^\top \phi_n(x), \qquad (4)$$
with the coefficients stacked into the vector $\eta_n = (h^{(n)}_{1,1,\ldots,1},\, h^{(n)}_{1,2,\ldots,1},\, h^{(n)}_{1,3,\ldots,1},\, \ldots)^\top \in F_n$.
The same idea can be applied to the entire pth-order Volterra series. By stacking the maps φₙ into a single map φ⁽ᵖ⁾(x) = (φ₀(x), φ₁(x), …, φ_p(x))ᵀ, one obtains a mapping from ℝᵐ into F⁽ᵖ⁾ = ℝ × ℝᵐ × ℝ^{m²} × ⋯ × ℝ^{mᵖ} = ℝᴹ with dimensionality M = (1 − m^{p+1})/(1 − m). The entire pth-order Volterra series can be written as a scalar product in F⁽ᵖ⁾,
$$\sum_{n=0}^{p} H_n[x] = (\eta^{(p)})^\top \phi^{(p)}(x) \qquad (5)$$
with η⁽ᵖ⁾ ∈ F⁽ᵖ⁾. Below, we will show how we can express η⁽ᵖ⁾ as an expansion in terms of the training points. This will dramatically reduce the number of parameters we have to estimate.
This procedure works because the space Fₙ of nth-order monomials has a very special property: it has the structure of a reproducing kernel Hilbert space (RKHS). As a consequence, the dot product in Fₙ can be computed by evaluating a positive definite kernel function kₙ(x₁, x₂). For monomials, one can easily show that (e.g., [6])
$$\phi_n(x_1)^\top \phi_n(x_2) = (x_1^\top x_2)^n =: k_n(x_1, x_2). \qquad (6)$$
Since F⁽ᵖ⁾ is generated as a direct sum of the single spaces Fₙ, the associated scalar product is simply the sum of the scalar products in the Fₙ:
$$\phi^{(p)}(x_1)^\top \phi^{(p)}(x_2) = \sum_{n=0}^{p} (x_1^\top x_2)^n = k^{(p)}(x_1, x_2). \qquad (7)$$
Thus, we have shown that the discretized Volterra series can be expressed as a linear functional in an RKHS.²
Linear regression in RKHS. For our prediction problem (1), the RKHS property of the Volterra series leads to an efficient solution which is in part due to the so-called representer theorem (e.g., [6]). It states the following: suppose we are given N observations
² A similar approach has been taken by [1] using the inhomogeneous polynomial kernel $k^{(p)}_{\mathrm{inh}}(x_1, x_2) = (1 + x_1^\top x_2)^p$. This kernel implies a map φ_inh into the same space of monomials, but it weights the degrees of the monomials differently, as can be seen by applying the binomial theorem.
(x₁, y₁), …, (x_N, y_N) of the function (1) and an arbitrary cost function c, where Ω is a nondecreasing function on ℝ₊ and ‖·‖_F is the norm of the RKHS associated with the kernel k. If we minimize an objective function
$$c\big((x_1, y_1, f(x_1)), \ldots, (x_N, y_N, f(x_N))\big) + \Omega(\|f\|_F), \qquad (8)$$
over all functions in the RKHS, then an optimal solution³ can be expressed as
$$f(x) = \sum_{j=1}^{N} a_j\, k(x, x_j), \qquad a_j \in \mathbb{R}. \qquad (9)$$
In other words, although we optimized over the entire RKHS, including functions which are defined for arbitrary input points, it turns out that we can always express the solution in terms of the observations xⱼ only. Hence the optimization problem over the extremely large number of coefficients η⁽ᵖ⁾ in Eq. (5) is transformed into one over N variables aⱼ.
Let us consider the special case where the cost function is the mean squared error, $c((x_1, y_1, f(x_1)), \ldots, (x_N, y_N, f(x_N))) = \frac{1}{N}\sum_{j=1}^{N} (f(x_j) - y_j)^2$, and the regularizer Ω is zero⁴. The solution for a = (a₁, …, a_N)ᵀ is readily computed by setting the derivative of (8) with respect to the vector a equal to zero; it takes the form a = K⁻¹y with the Gram matrix defined as K_{ij} = k(xᵢ, xⱼ), hence⁵
$$y = f(x) = a^\top z(x) = y^\top K^{-1} z(x), \qquad (10)$$
where z(x) = (k(x, x₁), k(x, x₂), …, k(x, x_N))ᵀ ∈ ℝᴺ.
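Eqs. (7) and (10) translate directly into a few lines of code. The following sketch (ours, using the pseudo-inverse as in footnote 5) fits and evaluates the implicit pth-order model:

```python
import numpy as np

def k_poly_sum(X1, X2, p):
    """k^(p)(x, x') = sum_{n=0}^p (x^T x')^n for all row pairs (Eq. 7)."""
    G = X1 @ X2.T
    return sum(G ** n for n in range(p + 1))   # n = 0 term is a matrix of ones

def fit_implicit(X, y, p):
    """Coefficients a = K^{-1} y of the least-squares pth-order Volterra model."""
    return np.linalg.pinv(k_poly_sum(X, X, p)) @ y

def predict(X_train, a, X_new, p):
    """f(x) = a^T z(x) with z(x) = (k(x, x_1), ..., k(x, x_N))^T (Eq. 10)."""
    return k_poly_sum(X_new, X_train, p) @ a
```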
Implicit Wiener series estimation. As we stated above, the pth-degree Wiener expansion is the pth-order Volterra series that minimizes the squared error. This can be put into the regression framework: since any finite Volterra series can be represented as a linear functional in the corresponding RKHS, we can find the pth-order Volterra series that minimizes the squared error by linear regression. This, by definition, must be the pth-degree Wiener series, since no other Volterra series has this property⁶. From Eqn. (10), we obtain the following expressions for the implicit Wiener series:
$$G_0[x] = \frac{1}{N}\, y^\top \mathbf{1}, \qquad \sum_{n=0}^{p} G_n[x] = \sum_{n=0}^{p} H_n[x] = y^\top K_p^{-1} z^{(p)}(x), \qquad (11)$$
where the Gram matrix K_p and the coefficient vector z⁽ᵖ⁾(x) are computed using the kernel from Eq. (7) and 1 = (1, 1, …)ᵀ ∈ ℝᴺ. Note that the Wiener series is represented only implicitly, since we are using the RKHS representation as a sum of scalar products with the training points. Thus, we can avoid the 'curse of dimensionality', i.e., there is no need to compute the possibly large number of coefficients explicitly.
The explicit Volterra and Wiener expansions can be recovered from Eq. (11) by collecting all terms containing monomials of the desired order and summing them up. The individual nth-order Volterra functionals in a Wiener series of degree p > 0 are given implicitly by
$$H_n[x] = y^\top K_p^{-1} z_n(x) \qquad (12)$$
with $z_n(x) = \big((x_1^\top x)^n,\, (x_2^\top x)^n,\, \ldots,\, (x_N^\top x)^n\big)^\top$. For p = 0, the only term is the constant zero-order Volterra functional H₀[x] = G₀[x]. The coefficient vector $\eta_n = (h^{(n)}_{1,1,\ldots,1},\, h^{(n)}_{1,2,\ldots,1},\, h^{(n)}_{1,3,\ldots,1},\, \ldots)^\top$ of the explicit Volterra functional is obtained as
$$\eta_n = \Phi_n^\top K_p^{-1} y \qquad (13)$$
³ For conditions on the uniqueness of the solution, see [6].
⁴ Note that this is different from the regularized approach used by [1]. If Ω is not zero, the resulting Volterra series are different from the Wiener series since they are not orthogonal with respect to the input.
⁵ If K is not invertible, K⁻¹ denotes the pseudo-inverse of K.
⁶ Assuming symmetrized Volterra kernels, which can be obtained from any Volterra expansion.
using the design matrix $\Phi_n = (\phi_n(x_1),\, \phi_n(x_2),\, \ldots,\, \phi_n(x_N))^\top$. The individual Wiener functionals can only be recovered by applying the regression procedure twice. If we are interested in the nth-degree Wiener functional, we have to compute the solution for both kernels k⁽ⁿ⁾(x₁, x₂) and k⁽ⁿ⁻¹⁾(x₁, x₂). The Wiener functional for n > 0 is then obtained from the difference of the two results as
$$G_n[x] = \sum_{i=0}^{n} G_i[x] - \sum_{i=0}^{n-1} G_i[x] = y^\top \left[ K_n^{-1} z^{(n)}(x) - K_{n-1}^{-1} z^{(n-1)}(x) \right]. \qquad (14)$$
The corresponding ith-order Volterra functionals of the nth-degree Wiener functional are computed analogously to Eqns. (12) and (13) [3].
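The two-regression recipe of Eq. (14) can be sketched as follows (our illustration; the zero-order case is handled separately via Eq. (11)):

```python
import numpy as np

def wiener_component(X, y, X_eval, n):
    """n-th degree Wiener functional G_n evaluated at the rows of X_eval."""
    if n == 0:
        return np.full(len(X_eval), y.mean())        # G_0[x] = (1/N) y^T 1
    def fit_predict(p):
        K = sum((X @ X.T) ** k for k in range(p + 1))        # Gram matrix K_p
        Z = sum((X_eval @ X.T) ** k for k in range(p + 1))   # rows are z^(p)(x)
        return Z @ (np.linalg.pinv(K) @ y)
    return fit_predict(n) - fit_predict(n - 1)       # Eq. (14)
```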
Orthogonality. The resulting Wiener functionals must fulfill the orthogonality condition, which in its strictest form states that a pth-degree Wiener functional must be orthogonal to all monomials in the input of lower order. Formally, we will prove the following
Theorem 1 The functionals obtained from Eq. (14) fulfill the orthogonality condition
$$E[m(x)\, G_p[x]] = 0 \qquad (15)$$
where E denotes the expectation over the input distribution and m(x) is an arbitrary ith-order monomial with i < p.
We will show that this is a consequence of the least-squares fit of any linear expansion in a set of basis functions of the form $y = \sum_{j=1}^{M} \gamma_j\, \phi_j(x)$. In the case of the Wiener and Volterra expansions, the basis functions φⱼ(x) are monomials of the components of x.
We denote the error of the expansion as $e(x) = y - \sum_{j=1}^{M} \gamma_j\, \phi_j(x)$. The minimum of the expected quadratic loss L with respect to the expansion coefficient γ_k is given by
$$\frac{\partial L}{\partial \gamma_k} = \frac{\partial}{\partial \gamma_k}\, E\|e(x)\|^2 = -2\, E[\phi_k(x)\, e(x)] = 0. \qquad (16)$$
This means that, for an expansion in a set of basis functions minimizing the squared error, the error is orthogonal to all basis functions used in the expansion.
Now let us assume we know the Wiener series expansion (which minimizes the mean squared error) of a system up to degree p − 1. The approximation error is given by the sum of the higher-order Wiener functionals, $e(x) = \sum_{n=p}^{\infty} G_n[x]$, so G_p[x] is part of the error. As a consequence of the linearity of the expectation, Eq. (16) implies
$$\sum_{n=p}^{\infty} E[\phi_k(x)\, G_n[x]] = 0 \quad\text{and}\quad \sum_{n=p+1}^{\infty} E[\phi_k(x)\, G_n[x]] = 0 \qquad (17)$$
for any φ_k of order less than p. The difference of both equations yields E[φ_k(x) G_p[x]] = 0, so that G_p[x] must be orthogonal to any of the lower-order basis functions, namely to all monomials with order smaller than p. ∎
3 Experiments
Toy examples. In our first experiment, we check whether our intuitions about higher-order
statistics described in the introduction are captured by the proposed method. In particular,
we expect that arbitrarily oriented lines can only be predicted using third-order statistics.
As a consequence, we should need at least a second-order Wiener functional to predict lines
correctly.
Our first test image (size 80 × 110, upper row in Fig. 1) contains only lines of varying orientations. Choosing a 5 × 5 neighbourhood, we predicted the central pixel using (11).
[Figure 1 panels: original images and their 0th- to 3rd-order components and reconstructions; per-row reconstruction mse values: 583.7 / 0.006 / 0, 1317 / 37.4 / 0.001, and 1845 / 334.9 / 19.0.]
Figure 1: Higher-order components of toy images. The image components of different orders are
created by the corresponding Wiener functionals. They are added up to obtain the different orders
of reconstruction. Note that the constant 0-order component and reconstruction are identical. The
reconstruction error (mse) is given as the mean squared error between the true grey values of the
image and the reconstruction. Although the linear first-order model seems to reconstruct the lines, this
is actually not true since the linear model just smoothes over the image (note its large reconstruction
error). A correct prediction is only obtained by adding a second-order component to the model. The
third-order component is only significant at crossings, corners and line endings.
Models of orders 0 … 3 were learned from the image by extracting the maximal training set of 76 × 106 patches of size 5 × 5.⁷ The corresponding image components of orders 0 to 3 were computed according to (14). Note that the different components generated by the Wiener functionals can also be negative; in Fig. 1, they are scaled to the gray values [0..255]. The behaviour of the models conforms to our intuition: the linear model cannot capture the line structure of the image, thus leading to a large reconstruction error, which drops to nearly zero when a second-order model is used. The additional small correction achieved by the third-order model is mainly due to discretization effects.
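The training-set construction is a plain sliding window; a short sketch (ours) for completeness, which for an 80 × 110 image yields exactly the 76 × 106 patches mentioned above:

```python
import numpy as np

def training_patches(img, d=5):
    """All d x d patches of a gray-value image: the centre pixel is the target
    y, the m = d*d - 1 neighbours form the input vector x."""
    H, W = img.shape
    c = (d * d) // 2                     # index of the centre pixel, flattened
    X, y = [], []
    for r in range(H - d + 1):
        for col in range(W - d + 1):
            patch = img[r:r + d, col:col + d].ravel()
            y.append(patch[c])
            X.append(np.delete(patch, c))
    return np.array(X), np.array(y)
```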
Similar to lines, we expect that we need at least a third-order model to predict crossings
or corners correctly. This is confirmed by the second and third test image shown in the
corresponding row in Fig. 1. Note that the third-order component is only significant at
crossings, corners and line endings. The fourth- and fifth-order terms (not shown) have
only negligible contributions. The fact that the reconstruction error does not drop to zero
for the third image is caused by the line endings which cannot be predicted to a higher
accuracy than one pixel.
Application to natural images. Are there further predictable structures in natural images that are not due to lines, crossings or corners? This can be investigated by applying our method to a set of natural images (an example of size 80 × 110 is depicted in Fig. 2). Our
⁷ In contrast to the usual setting in machine learning, training and test set are identical in our case, since we are not interested in generalization to other images, but in analyzing the higher-order components of the image at hand.
[Figure 2 panels: original image; 0th- to 8th-order components and reconstructions, with reconstruction mse falling from 1070 (1st order) through 957.4, 414.6, 98.5, 18.5, 4.98, and 1.32 to 0.41 (8th order).]
Figure 2: Higher-order components and reconstructions of a photograph. Interactions up to the fifth
order play an important role. Note that significant components become sparser with increasing model
order.
results on a set of 10 natural images of size 50 × 70 show an approximately exponential decay of the reconstruction error as more and more higher-order terms are added to the reconstruction (Fig. 3). Interestingly, terms up to order 5 still play a significant role, although the image regions with a significant component become sparser with increasing model order (see Fig. 2). Note that the nonlinear terms reduce the reconstruction error to almost 0. This suggests a high degree of higher-order redundancy in natural images that cannot be exploited by the usual linear prediction models.
4 Conclusion
The implicit estimation of Wiener functionals via polynomial kernels opens up new possibilities for the estimation of higher-order image statistics. Compared to the classical
methods such as higher-order spectra, moments or cumulants, our approach avoids the
combinatorial explosion caused by the exponential increase of the number of terms to be
estimated and interpreted. When put into a predictive framework, multiplicative pixel interactions of different orders are easily visualized and conform to the intuitive notions about
image structures such as edges, lines, crossings or corners.
There is no one-to-one mapping between the classical higher-order statistics and multiplicative pixel interactions. Any nonlinear Wiener functional, for instance, creates infinitely
many correlations or cumulants of higher order, and often also of lower order. On the other
Figure 3: Mean square reconstruction error of models of different order for a set of 10 natural images.
hand, a Wiener functional of order n produces only harmonic phase interactions up to order
n + 1, but sometimes also of lower orders. Thus, when one analyzes a classical statistic of a
given order, one often cannot determine by which order of pixel interaction it was created.
In contrast, our method is able to isolate image components that are created by a single
order of interaction.
Although of preliminary nature, our results on natural images suggest an important role of
statistics up to the fifth order. Most of the currently used low-level feature detectors such
as edge or corner detectors maximally use third-order interactions. The investigation of
fourth- or higher-order features is a field that might lead to new insights into the nature and
role of higher-order image structures.
As often observed in the literature (e.g. [2][7]), our results seem to confirm that a large
proportion of the redundancy in natural images is contained in the higher-order pixel interactions. Before any further conclusions can be drawn, however, our study needs to be
extended in several directions: 1. A representative image database has to be analyzed. The
images must be carefully calibrated since nonlinear statistics can be highly calibration-sensitive. In addition, the contribution of image noise has to be investigated. 2. Currently,
only images up to 9000 pixels can be analyzed due to the matrix inversion required by
Eq. 11. To accommodate larger images, our method has to be reformulated as an iterative algorithm. 3. So far, we only considered 5 × 5 patches. To systematically investigate patch
size effects, the analysis has to be conducted in a multi-scale framework.
References
[1] T. J. Dodd and R. F. Harrison. A new solution to Volterra series estimation. In CD-Rom Proc. 2002 IFAC World Congress, 2002.
[2] D. J. Field. What is the goal of sensory coding? Neural Computation, 6:559-601, 1994.
[3] M. O. Franz and B. Schölkopf. Implicit Wiener series. Technical Report 114, Max-Planck-Institut für biologische Kybernetik, Tübingen, June 2003.
[4] C. L. Nikias and A. P. Petropulu. Higher-order spectra analysis. Prentice Hall, Englewood Cliffs, NJ, 1993.
[5] M. Schetzen. The Volterra and Wiener theories of nonlinear systems. Krieger, Malabar, 1989.
[6] B. Schölkopf and A. J. Smola. Learning with kernels. MIT Press, Cambridge, MA, 2002.
[7] O. Schwartz and E. P. Simoncelli. Natural signal statistics and sensory gain control. Nature Neuroscience, 4(8):819-825, 2001.
[8] M. G. A. Thomson. Higher-order structure in natural scenes. J. Opt. Soc. Am. A, 16(7):1549-1553, 1999.
[9] M. G. A. Thomson. Beats, kurtosis and visual coding. Network: Comput. Neural Syst., 12:271-287, 2001.
Discrete profile alignment via constrained
information bottleneck
Sean O'Rourke∗
Gal Chechik†
Robin Friedman∗
Eleazar Eskin∗
Abstract
Amino acid profiles, which capture position-specific mutation probabilities, are a richer encoding of biological sequences than the individual sequences themselves. However, profile comparisons are
much more computationally expensive than discrete symbol comparisons, making profiles impractical for many large datasets. Furthermore, because they are such a rich representation, profiles can
be difficult to visualize. To overcome these problems, we propose a
discretization for profiles using an expanded alphabet representing
not just individual amino acids, but common profiles. By using an
extension of information bottleneck (IB) incorporating constraints
and priors on the class distributions, we find an informationally
optimal alphabet. This discretization yields a concise, informative
textual representation for profile sequences. Also, alignments between these sequences, while nearly as accurate as the full profile-profile alignments, can be computed almost as quickly as those
between individual or consensus sequences. A full pairwise alignment of SwissProt would take years using profiles, but less than
3 days using a discrete IB encoding, illustrating how discrete encoding can expand the range of sequence problems to which profile
information can be applied.
1 Introduction
One of the most powerful techniques in protein analysis is the comparison of a
target amino acid sequence with phylogenetically related or homologous proteins.
Such comparisons give insight into which portions of the protein are important by
revealing the parts that were conserved through natural selection. While mutations
in non-functional regions may be harmless, mutations in functional regions are often
lethal. For this reason, functional regions of a protein tend to be conserved between
organisms while non-functional regions diverge.
∗ Department of Computer Science and Engineering, University of California, San Diego
† Department of Computer Science, Stanford University
Many of the state-of-the-art protein analysis techniques incorporate homologous
sequences by representing a set of homologous sequences as a probabilistic profile,
a sequence of the marginal distributions of amino acids at each position in the
sequence. For example, Yona et al. [10] use profiles to align distant homologues from the SCOP database [3]; the resulting alignments are similar to results from structural alignments, and tend to reflect both secondary and tertiary protein structure. The PHD algorithm [5] uses profiles purely for structure prediction. PSI-BLAST [6] uses them to refine database searches.
Although profiles provide a lot of information about the sequence, the use of profiles comes at a steep price. While extremely efficient string algorithms exist for
aligning protein sequences (Smith-Waterman[8]) and performing database queries
(BLAST[6]), these algorithms operate on strings and are not immediately applicable to profile alignment or profile database queries. While profile-based methods
can be substantially more accurate than sequence-based ones, they can require at
least an order of magnitude more computation time, since substitution penalties
must be calculated by computing distances between probability distributions. This
makes profiles impractical for use with large bioinformatics databases like SwissProt,
which recently passed 150,000 sequences. Another drawback of profiles compared to string representations is that it is much more difficult to visually interpret a sequence of 20-dimensional vectors than a sequence of letters.
Discretizing the profiles addresses both of these problems. First, once a profile is represented using a discrete alphabet, alignment and database search can be performed
using the efficient string algorithms developed for sequences. For example, when
aligning sequences of 1000 elements, runtime decreases from 20 seconds for profiles
to 2 for discrete sequences. Second, by representing each class as a letter, discretized
profiles can be presented in plain text like the original or consensus sequences, while
conveying more information about the underlying profiles. This makes them more
accurate than consensus sequences, and more dense than sequence logos (see figure
1). To make this representation intuitive, we want the discretization not only to
minimize information loss, but also to reflect biologically meaningful categories by
forming a superset of the standard 20-character amino acid alphabet. For example,
we use 'A' and 'a' for strongly- and weakly-conserved Alanine. This formulation
demands two types of constraints: similarities of the centroids to predefined values,
and specific structural similarities between strongly- and weakly-conserved variants.
We show below how these constraints can be added to the original IB formalism.
In this paper, we present a new discrete representation of proteins that takes into
account information from homologues. The main idea behind our approach is to
compress the space of probabilistic profiles in a data-dependent manner by clustering
the actual profiles and representing them by a small alphabet of distributions. Since
this discretization removes some of the information carried by the full profiles,
we cluster the distributions in a way that is directly targeted at minimizing the
information loss. This is achieved using a variant of Information Bottleneck (IB)[9],
a distributional clustering approach for informationally optimal discretization.
We apply our algorithm to a subset of MEROPS[4], a database of peptidases organized structurally by family and clan, and analyze the results in terms of both
information loss and alignment quality. We show that multivariate IB in particular
preserves much of the information in the original profiles using a small number of
classes. Furthermore, optimal alignments for profile sequences encoded with these
classes are much closer to the original profile-profile alignments than are alignments
between the seed proteins. IB discretization is therefore an attractive way to gain
some of the additional sensitivity of profiles with less computational cost.
[Figure 1 panels: (a) the full 20-row amino-acid profile over ten alignment positions; (b) the corresponding sequence logo; (c) the textual representations:
P00790 Seq.:    ---EAPT---
Consensus Seq.: NNDEAASGDF
IB Seq.:        NNDeaptGDF]
Figure 1: (a) Profile, (b) sequence logo[2], and (c) textual representations for part
of an alignment of Pepsin A precursor P00790, showing IB's concision compared to
profiles and logos, and its precision compared to single sequences.
2 Information Bottleneck
Information Bottleneck [9] is an information-theoretic approach for distributional clustering. Given a joint distribution p(X, Y) of two random variables X and Y, the goal is to obtain a compressed representation C of X, while preserving the information about Y. The two goals of compression and information preservation are quantified by the same measure of mutual information, $I(X;Y) = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)}$, and the problem is therefore defined as the constrained optimization problem
$$\min_{p(c|x):\, I(C;Y) > K} I(C;X),$$
where K is a constraint on the level of information preserved about Y; the problem should also obey the constraints $p(y|c) = \sum_x p(y|x)\,p(x|c)$ and $p(y) = \sum_x p(y|x)\,p(x)$. This constrained optimization can be reformulated using Lagrange multipliers, and turned into a tradeoff optimization function with Lagrange multiplier β:
$$\min_{p(c|x)} L \;\stackrel{\text{def}}{=}\; I(C;X) - \beta\, I(C;Y) \qquad (1)$$
As an unsupervised learning technique, IB aims to characterize the set of solutions
for the complete spectrum of constraint values K. This set of solutions is identical to
the set of solutions of the tradeoff optimization problem obtained for the spectrum
of β values.
When X is discrete, its natural compression is fuzzy clustering. In this case, the
problem is not convex and cannot be guaranteed to contain a single global minimum.
Fortunately, its solutions can be characterized analytically by a set of self-consistent equations. These self-consistent equations can then be used in an iterative algorithm
that is guaranteed to converge to a local minimum. While the optimal solutions of
the IB functional are in general soft clusters, in practice, hard cluster solutions are
sometimes more easily interpreted. A series of algorithms was developed for hard
IB, including an algorithm that can be viewed as a one-step look-ahead sequential
version of K-Means [7].
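As a concrete illustration, the following is a minimal sketch of hard iterative IB (plain NumPy; the function name, the hardened MAP-style assignment rule, and all variable names are our own and are not taken from [7]). Here pyx holds the rows p(y|x) and px the prior p(x); both are assumed normalized.

import numpy as np

def hard_iib(pyx, px, n_clusters, beta, n_iter=100, seed=0):
    """Hard iterative IB sketch: reassign every x, then recompute all
    centroids p(y|c) only after the full pass (cf. iIB vs. sIB below)."""
    rng = np.random.default_rng(seed)
    assign = rng.integers(0, n_clusters, size=pyx.shape[0])
    eps = 1e-12
    for _ in range(n_iter):
        # centroids p(y|c) and cluster weights p(c) from current assignment
        pc = np.array([px[assign == c].sum() for c in range(n_clusters)])
        pyc = np.zeros((n_clusters, pyx.shape[1]))
        for c in range(n_clusters):
            if pc[c] > 0:
                pyc[c] = (px[assign == c][:, None] * pyx[assign == c]).sum(0) / pc[c]
        # hardened soft-IB rule: assign x to argmin of -log p(c) + beta * KL(p(y|x) || p(y|c))
        kl = (pyx[:, None, :] * (np.log(pyx[:, None, :] + eps)
                                 - np.log(pyc[None, :, :] + eps))).sum(-1)
        new_assign = (-np.log(pc + eps)[None, :] + beta * kl).argmin(1)
        if np.array_equal(new_assign, assign):
            break
        assign = new_assign
    return assign, pyc

sIB would instead move one element at a time between clusters using a one-step lookahead score, while this version recomputes all centroids only after a full pass of reassignments.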
To apply IB to the problem of profile discretization discussed here, X is a given
set of probabilistic profiles obtained from a set of aligned sequences and Y is the
set of 20 amino acids.
2.1 Constraints on centroids' semantics
The application studied in this paper differs from standard IB applications in that
we are interested in obtaining a representation that is both efficient and biologically meaningful. This requires that we add two kinds of constraints on clusters'
distributions, discussed below.
First, some clusters' meanings are naturally determined by limiting them to correspond to the common 20-letter alphabet used to describe amino acids. From the
point of view of distributions over amino acids, each of these symbols is used today
as the delta function distribution which is fully concentrated on a single amino acid.
For the goal of finding an efficient representation, we require the centroids to be
close to these delta distributions. More generally, we require the centroids to be
close to some predefined values \hat{c}_i, thus adding constraints of the form D_{KL}[p(y|\hat{c}_i) || p(y|c_i)] < K_i to the IB target function for each constrained centroid. While solving the constrained optimization problem is difficult, the corresponding tradeoff optimization problem can be made very similar to standard IB. With the additional constraints, the IB trade-off optimization problem becomes

    \min_{p(c|x)} L' = I(C; X) - \beta I(C; Y) + \beta \sum_{c_i \in C} \gamma(c_i) D_{KL}[p(y|\hat{c}_i) || p(y|c_i)].                    (2)
We now use the following identity:

    \sum_{x,c} p(x, c) D_{KL}[p(y|x) || p(y|c)]
        = \sum_x p(x) \sum_y p(y|x) \log p(y|x) - \sum_c p(c) \sum_y \log p(y|c) \sum_x p(y|x) p(x|c)
        = -H(Y|X) + H(Y|C) = I(X; Y) - I(Y; C)
to rewrite the IB functional of Eq. (1) as

    L = I(C; X) + \beta \sum_{c \in C} \sum_{x \in X} p(x, c) D_{KL}[p(y|x) || p(y|c)] - \beta I(X; Y).
When \sum_i \gamma(c_i) \le 1 we can similarly rewrite Eq. (2) as

    L' = I(C; X) + \beta \sum_{x \in X} p(x) \sum_{c_i \in C} p(c_i|x) D_{KL}[p(y|x) || p(y|c_i)]
             + \beta \sum_{c_i \in C} \gamma(c_i) D_{KL}[p(y|\hat{c}_i) || p(y|c_i)] - \beta I(X; Y)                    (3)
       = I(C; X) + \beta \sum_{x' \in X'} p(x') \sum_{c_i \in C} p(c_i|x') D_{KL}[p(y|x') || p(y|c_i)] - \beta I(X; Y)
The optimization problem therefore becomes equivalent to the original IB problem, but with a modified set of samples x' \in X', containing X plus additional 'pseudo-counts' or biases. This is similar to the inclusion of priors in Bayesian estimation. Formulated this way, the biases can be easily incorporated in standard IB algorithms by adding additional pseudo-counts x' with prior probability p(x') = \gamma(c_i).
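A minimal sketch of this reduction (our names; we rescale the original sample prior so that the augmented prior over X' sums to one, which is one way to realize the bookkeeping and assumes \sum_i \gamma(c_i) \le 1 as above):

import numpy as np

def augment_with_priors(pyx, px, prior_pyc, prior_gamma):
    """Fold the centroid constraints into standard IB by appending one
    pseudo-sample per constrained centroid: p(y|x'_i) equals the target
    centroid distribution and p(x'_i) = gamma(c_i)."""
    pyx_aug = np.vstack([pyx, prior_pyc])
    px_aug = np.concatenate([px * (1.0 - prior_gamma.sum()), prior_gamma])
    return pyx_aug, px_aug

Running an unmodified (hard) IB algorithm on the augmented pair then optimizes a functional of the form of Eq. (3).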
2.2 Constraints on relations between centroids
We want our discretization to capture correlations between strongly- and weaklyconserved variants of the same symbol. This can be done with standard IB using
separate classes for the alternatives. However, since the distributions of other amino
acids in these two variants are likely to be related, it is preferable to define a single
shared prior for both variants, and to learn a model capturing their correlation.
Friedman et al.[1] describe multivariate information bottleneck (mIB), an extension
of information bottleneck to joint distributions over several correlated input and
cluster variables. For profile discretization, we define two compression variables
connected as in Friedman's 'parallel IB': an amino acid class C \in {A, C, ...} with an associated prior, and a strength S \in {0, 1}. Since this model correlates strong
and weak variants of each category, it requires fewer priors than simple IB. It also
has fewer parameters: a multivariate model with ns strengths and nc classes has as
many categories as a univariate one with n_{c'} = n_s n_c classes, but has only n_s + n_c - 2 free parameters for each x, instead of n_s n_c - 1.
3 Results
To test our method, we apply it to data from MEROPS[4]. Proteins within the same
family typically contain high-confidence alignments, those from different families
in the same clan less so. For each protein, we generate a profile from alignments
obtained from PSI-BLAST with standard parameters, and compute IB classes from
a large subset of these profiles using the priors described below. Finally, we encode
and align pairs of profiles using the learned classes, comparing the results to those
obtained both with the full profiles and with just the original sequences.
For univariate IB, we have used four types of priors reflecting biases on stability,
physical properties, and observed substitution frequencies: (1) Strongly conserved
classes, in which a single symbol is seen with S% probability. These are the only
priors used for multivariate IB. (2) Weakly conserved classes, in which a single
symbol occurs with W% probability; (S - W)% of the remaining probability mass is
distributed among symbols with non-negative log-odds of substitution. (3) Physical
trait classes, in which all symbols with the same hydrophobicity, charge, polarity,
or aromaticity occur uniformly S% of the time. (4) A uniform class, in which all
symbols occur with their background probabilities.
The choice of S and W depends upon both the data and one's prior notions of 'strong' and 'weak' conservation. Unbiased IB on a large subset of MEROPS with several different numbers of unbiased categories yielded a mean frequency approaching 0.7 for the most common symbol in the 20 most sharply-distributed classes (0.59 ± 0.13 for |C| = 52; 0.66 ± 0.12 for |C| = 80; 0.70 ± 0.09 for |C| = 100).
Similarly, the next 20 classes have a mean most-likely-symbol frequency around
0.4. These numbers can be seen as lower bounds on S and W . We therefore chose
S = 0.8 and W = 0.5, reflecting a bias toward stronger definitions of conservation
than those inferred from the data.
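For concreteness, a sketch of how such prior centroids could be built (our names; the text does not specify how the (S - W) mass is spread over the non-negative log-odds symbols, so uniform spreading there and a background-weighted remainder are our assumptions):

import numpy as np

def strong_prior(a, background, S=0.8):
    """Strongly conserved class: symbol a at probability S; the rest of
    the mass follows the background frequencies of the other symbols."""
    p = background.copy()
    p[a] = 0.0
    p *= (1.0 - S) / p.sum()
    p[a] = S
    return p

def weak_prior(a, background, subs_ok, S=0.8, W=0.5):
    """Weakly conserved class: symbol a at W; (S - W) spread uniformly
    over symbols with non-negative substitution log-odds (boolean mask
    subs_ok); the remaining (1 - S) spread by background frequency."""
    p = np.zeros_like(background)
    p[a] = W
    mask = subs_ok.copy()
    mask[a] = False
    p[mask] += (S - W) / mask.sum()
    rest = ~mask
    rest[a] = False
    bg = background * rest
    p += (1.0 - S) * bg / bg.sum()
    return p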
3.1 Iterative vs. Sequential IB
Slonim [7] compares several IB algorithms, concluding that the best hard clustering results are obtained with a sequential method (sIB), in which elements are first assigned to a fixed number of clusters and then individually moved from cluster to
cluster while calculating a 1-step lookahead score, until the score converges. While
sIB is more efficient than exhaustive bottom-up clustering, it neglects information
about the best potential candidates to be assigned to a cluster, yielding slow convergence. Furthermore, updates are expensive, since each requires recomputing the
class centroids. Therefore instead of sIB, we use iterative IB (iIB) with hard clustering, which only recomputes the centroids after performing all updates. This reduces
Figure 2: Stretched sequence logos for categories found by iIB (top) and sIB (bottom), ordered by primary symbol and decreasing information.
the convergence time from several hours to around ten minutes.
Since Slonim argues that sIB outperforms soft iIB in part because sIB's discrete steps allow it to escape local optima, we expect hard iIB to have similar behavior.
To test this, we applied three complete sIB iterations initialized with categories
from multivariate iIB. sIB decreased the loss L by only about 3 percent (from 0.380
to 0.368), with most of this gain occurring in the first iteration. Also, the resulting
categories were mostly conserved up to exchanging labels, suggesting that hard iIB
finds categories similar to the sIB ones (see Figure 2).
3.2 Information Loss and Alignments
One measure of the quality of the resulting clusters is the amount of information
about Y lost through discretization, I(Y; X) - I(Y; C). Figure 3(b) shows the effect on information loss of varying the prior weight w with three sets of priors: 20
strongly conserved symbols and one background; these plus 20 weakly conserved
symbols; and these plus 10 categories for physical characteristics. As expected,
both decreasing the number of categories and increasing the number or weight of
priors increases information loss. However, with a fixed number of free categories,
information loss is nearly independent of prior strength, suggesting that our priors correspond to actual regularities in the data. Finally, note that despite having
fewer free parameters than the univariate models, mIB achieves comparable performance, suggesting that our decomposition into conserved class and degree of
conservation is reasonable.
Since we are ultimately using these classes in alignments, the true cost of discretization is best measured by the amount of change between profile and IB alignments,
and the significance of this change. The latter is important because the best path
can be very sensitive to small changes in the sequences or scoring matrix; if two radically different alignments have similar scores, neither is clearly 'correct'. We can represent an alignment as a pair of index-insertion sequences, one for each profile sequence to be aligned (e.g. '1,2, , ,3,...' versus '1, ,2, ,3,...'). The edit distance
between these sequences for two alignments then measures how much they differ.
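A minimal sketch of this computation (standard dynamic-programming Levenshtein distance; the sequences are the gapped index lists described above, with any fixed symbol marking an insertion):

def edit_distance(a, b):
    """Levenshtein distance between two index-insertion sequences,
    e.g. a = ['1', '2', ' ', ' ', '3'] vs b = ['1', ' ', '2', ' ', '3']."""
    m, n = len(a), len(b)
    d = list(range(n + 1))                           # row 0: distances to empty prefix
    for i in range(1, m + 1):
        prev, d[0] = d[0], i                         # prev holds d[i-1][j-1]
        for j in range(1, n + 1):
            cur = min(d[j] + 1,                      # deletion
                      d[j - 1] + 1,                  # insertion
                      prev + (a[i - 1] != b[j - 1])) # substitution
            prev, d[j] = d[j], cur
    return d[n]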
However, even when this distance is large, the difference between two alignments
may not be significant if both choices' scores are nearly the same. That is, if the
optimal profile alignment's score is only slightly lower than the optimal IB class alignment's score as computed with the original profiles, either might be correct.
Figure 4 shows at left both the edit distance and score change per length between
profile alignments and those using IB classes, mIB classes, and the original sequences with the BLOSUM62 scoring matrix. To compare the profile and sequence
alignments, profiles corresponding to gaps in the original sequences are replaced
[Figure 3 appears here. Panel (a): log-log plot of running time (s) versus sequence length for profile-profile and IB-profile alignment, with fitted curves 2e-5 * L^2 + 0.1 and 3e-3 * L - 0.1. Panel (b): I(Y;X) - I(Y;C) versus prior weight w for multivariate IB and for 21/52, 41/52, and 51/52 priors.]
Figure 3: (a) Running times for profile-profile versus IB-profile alignment, showing
speedups of 3.5-12.5x for pairwise global alignment. (b) I(Y; X) - I(Y; C) as a function of w for different groups of priors. The information loss for 52 categories without priors is 0.359; for 10 categories, 0.474.
                      Edit distance      Score change
  Same Superfamily
    mIB               0.154 ± 0.182      0.086 ± 0.166
    IB                0.170 ± 0.189      0.107 ± 0.198
    BLOSUM            0.390 ± 0.065
  Same Clan
    mIB               0.124 ± 0.209      0.019 ± 0.029
    IB                0.147 ± 0.232      0.022 ± 0.037
    BLOSUM            0.360 ± 0.062
Figure 4: Left: alignment differences for IB models and sequence alignment, within
and between superfamilies. Right: ROC curve for same/different superfamily classification by alignment score.
by gaps, and resulting pairs of aligned gaps in the profile-profile alignment are removed. We consider both sequences from the same family and those from other
families in the same clan, the former being more similar than the latter, and therefore having better alignments. Assuming the profile-profile alignment is closest to
the 'true' alignment, iIB alignment significantly outperforms sequence alignment
in both cases, with mIB showing a slight additional improvement. At right is the
ROC curve for detecting superfamily relationships between profiles from different
families based on alignment scores, showing that while IB fares worse than profiles,
simple sequences perform essentially at chance.
Finally, figure 3a compares the performance of profile and IB alignment for different
sequence lengths. To use a profile alphabet for novel alignments, we must map
each input profile to the closest IB class. To be consistent with Yona[10], we use
the Jensen-Shannon (JS) distance with mixing coefficient 0.5 rather than the KL
distance optimized in creating the categories. Aligning two sequences of lengths n
and m requires computing the |C|(n+m) JS-distances between each profile and each
category, a significant improvement over the mn distance computations required for
profile-profile alignment when |C| min(m,n)
. Our results show that JS distance
2
computations dominate running time, since IB alignment time scales linearly with
the input size, while profile alignment scales quadratically, yielding an order of
magnitude improvement for typical 500- to 1000-base-pair sequences.
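A sketch of the encoding step (our names; each profile column and each class is a distribution over the 20 amino acids):

import numpy as np

def js_distance(p, q, eps=1e-12):
    """Jensen-Shannon divergence with mixing coefficient 0.5."""
    m = 0.5 * (p + q)
    kl = lambda a, b: (a * (np.log(a + eps) - np.log(b + eps))).sum()
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def encode_profile(profile, classes):
    """Map each profile column to the index of the closest IB class,
    using |C| JS-distance computations per column."""
    return [min(range(len(classes)),
                key=lambda c: js_distance(col, classes[c]))
            for col in profile]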
4 Discussion
We have described a discrete approximation to amino acid profiles, based on minimizing information loss, that allows profile information to be used for alignment
and search without additional computational cost compared to simple sequence
alignment. Alignments of sequences encoded with a modest number of classes correspond to the original profile alignments significantly better than alignments of the
original sequences. In addition to minimizing information loss, the classes can be
constrained to correspond to the standard amino acid representation, yielding an
intuitive, compact textual form for profile information.
Our model is useful in three ways: (1) it makes it possible to apply existing fast
discrete algorithms to arbitrary continuous sequences; (2) it models rich conditional
distribution structures; and (3) its models can incorporate a variety of class constraints. We can extend our approach in each of these directions. For example,
adjacent positions are highly correlated: the average entropy of a single profile is
0.99, versus 1.23 for an adjacent pair. Therefore pairs can be represented more compactly than the cross-product of a single-position alphabet. More generally, we can
encode arbitrary conserved regions and still treat them symbolically for alignment
and search. Other extensions include incorporating structural information in the
input representation; assigning structural significance to the resulting categories;
and learning the structure of multivariate IB's underlying model.
References
[1] Nir Friedman, Ori Mosenzon, Noam Slonim, and Naftali Tishby. Multivariate information bottleneck. In Uncertainty in Artificial Intelligence: Proceedings of the Seventeenth Conference (UAI-2001), pages 152-161, San Francisco, CA, 2001. Morgan Kaufmann Publishers.
[2] G. E. Crooks, G. Hon, J. M. Chandonia, and S. E. Brenner. WebLogo: a sequence logo generator. Genome Research, in press, 2004.
[3] A. G. Murzin, S. E. Brenner, T. Hubbard, and C. Chothia. SCOP: a structural classification of proteins database for the investigation of sequences and structures. J. Mol. Biol., 247:536-540, 1995.
[4] N. D. Rawlings, D. P. Tolle, and A. J. Barrett. MEROPS: the peptidase database. Nucleic Acids Res., 32 Database issue:D160-4, 2004.
[5] B. Rost and C. Sander. Prediction of protein secondary structure at better than 70% accuracy. J. Mol. Biol., 232:584-599, 1993.
[6] S. F. Altschul, W. Gish, W. Miller, E. W. Myers, and D. J. Lipman. Basic local alignment search tool. J. Mol. Biol., 215(3):403-410, October 1990.
[7] Noam Slonim. The Information Bottleneck: Theory and Applications. PhD thesis, Hebrew University, Jerusalem, Israel, 2002.
[8] T. F. Smith and M. S. Waterman. Identification of common molecular subsequences. Journal of Molecular Biology, 147:195-197, 1981.
[9] Naftali Tishby, Fernando C. Pereira, and William Bialek. The information bottleneck method. In Proc. of the 37th Annual Allerton Conference on Communication, Control and Computing, pages 368-377, 1999.
[10] Golan Yona and Michael Levitt. Within the twilight zone: A sensitive profile-profile comparison tool based on information theory. Journal of Molecular Biology, 315:1257-1275, 2002.
Boosting on manifolds: adaptive regularization
of base classifiers
Balázs Kégl and Ligen Wang
Department of Computer Science and Operations Research, University of Montreal
CP 6128 succ. Centre-Ville, Montréal, Canada H3C 3J7
{kegl|wanglige}@iro.umontreal.ca
Abstract
In this paper we propose to combine two powerful ideas, boosting and
manifold learning. On the one hand, we improve AdaBoost by incorporating knowledge on the structure of the data into base classifier design
and selection. On the other hand, we use AdaBoost's efficient learning mechanism to significantly improve supervised and semi-supervised
algorithms proposed in the context of manifold learning. Beside the specific manifold-based penalization, the resulting algorithm also accommodates the boosting of a large family of regularized learning algorithms.
1 Introduction
AdaBoost [1] is one of the machine learning algorithms that have revolutionized pattern recognition technology in the last decade. The algorithm constructs a weighted linear combination of simple base classifiers in an iterative fashion. One of the remarkable properties of AdaBoost is that it is relatively immune to overfitting even after the training error has been driven to zero. However, it is now common knowledge that AdaBoost can overfit if it is run long enough. The phenomenon is particularly pronounced on noisy data, so most of the effort to regularize AdaBoost has been devoted to making it tolerant to outliers by either 'softening' the exponential cost function (e.g., [2]) or by explicitly detecting outliers
and limiting their influence on the final classifier [3].
In this paper we propose a different approach based on complexity regularization. Rather
than focusing on possibly noisy data points, we attempt to achieve regularization by favoring base classifiers that are smooth in a certain sense. The situation that motivated
the algorithm is not when the data is noisy, but rather when it has a certain structure that is ignored by ordinary AdaBoost. Consider, for example, the case when the data set is embedded in a high-dimensional space but concentrated around a low-dimensional manifold
(Figure 1(a)). AdaBoost will compare base classifiers based solely on their weighted errors, so, implicitly, it will consider every base classifier as having the same (usually low) complexity. On the other hand, intuitively, we may hope to achieve better generalization if we prefer base classifiers that 'cut through' sparse regions to base classifiers that cut into 'natural' clusters or cut the manifold several times. To formalize this intuition, we use the graph Laplacian regularizer proposed in connection with manifold learning [4] and spectral clustering [5] (Section 3). For binary base classifiers, this penalty is proportional to the number of edges of the neighborhood graph that the classifier cuts (Figure 1(b)).
[Figure 1 appears here, panels (a) and (b).]
Figure 1: (a) Given the data, the vertical stump has a lower 'effective' complexity than the horizontal stump. (b) The graph Laplacian penalty is proportional to the number of separated neighbors.
To incorporate this adaptive penalization of base classifiers into AdaBoost, we will turn to the marginal AdaBoost algorithm [6], also known as arc-gv [7]. This algorithm can be interpreted as AdaBoost with an L1 weight decay on the base classifier coefficients with a weight decay coefficient \theta. The algorithm has been used to maximize the hard margin on the data [7, 6] and also for regularization [3]. The coefficient \theta is adaptive in all these applications: in [7] and [6] it depends on the hard margin and the weighted error, respectively, whereas in [3] it is different for every training point and it quantifies the 'noisiness' of the points. The idea of this paper is to make \theta dependent on the individual base classifiers, in particular, to set \theta to the regularization penalty of the base classifier.
First, with this choice, the objective of base learning becomes standard regularized error
minimization so the proposed algorithm accommodates the boosting of a large family of
regularized learning algorithms. Second, the coefficients of the base classifiers are lowered
proportionally with their complexity, which can be interpreted as an adaptive weight decay.
The formulation can also be justified by theoretical arguments which are sketched after the
formal description of the algorithm in Section 2.
Experimental results (Section 4) show that the regularized algorithm can improve generalization. Even when the improvement is not significant, the difference between the training
error and the test error decreases significantly and the final classifier is much sparser than AdaBoost's solution, both of which indicate reduced overfitting. Since the Laplacian penalty can be computed without knowing the labels, the algorithm can also be used for semi-supervised learning. Experiments in this context show that the algorithm significantly outperforms the semi-supervised algorithm proposed in [4].
2 The RegBoost algorithm
For the formal description, let the training data be D_n = ((x_1, y_1), ..., (x_n, y_n)), where the data points (x_i, y_i) are from the set R^d \times \{-1, 1\}. The algorithm maintains a weight distribution w^{(t)} = (w_1^{(t)}, ..., w_n^{(t)}) over the data points. The weights are initialized uniformly in line 1 (Figure 2), and are updated in each iteration in line 10. We suppose that we are given a base learner algorithm BASE(D_n, w, P(\cdot)) that, in each iteration t, returns a base classifier h^{(t)} coming from a subset of H = \{h : R^d \to \{-1, 1\}\}. In AdaBoost, the goal of the base classifier is to minimize the weighted error

    \epsilon = \epsilon^{(t)}(h) = \sum_{i=1}^n w_i^{(t)} I\{h(x_i) \ne y_i\},^{1,2}

^1 The indicator function I\{A\} is 1 if its argument A is true and 0 otherwise.
^2 We will omit the iteration index (t) and the argument (h) where it does not cause confusion.
RegBoost(D_n, BASE(\cdot, \cdot, \cdot), P(\cdot), \lambda, T)
 1   w^{(1)} \gets (1/n, ..., 1/n)
 2   for t \gets 1 to T
 3       h^{(t)} \gets BASE(D_n, w^{(t)}, P(\cdot))
 4       \gamma^{(t)} \gets \sum_{i=1}^n w_i^{(t)} h^{(t)}(x_i) y_i                              \rhd edge
 5       \theta^{(t)} \gets 2 \lambda P(h^{(t)})                                                 \rhd edge offset
 6       \alpha^{(t)} \gets \frac{1}{2} \ln \frac{1+\gamma^{(t)}}{1-\gamma^{(t)}} - \frac{1}{2} \ln \frac{1+\theta^{(t)}}{1-\theta^{(t)}}    \rhd base coefficient
 7       if \alpha^{(t)} \le 0                                  \rhd \Leftrightarrow base error \epsilon^{(t)} \ge (1 - \theta^{(t)})/2
 8           return f^{(t-1)}(\cdot) = \sum_{j=1}^{t-1} \alpha^{(j)} h^{(j)}(\cdot)
 9       for i \gets 1 to n
10           w_i^{(t+1)} \gets w_i^{(t)} \exp(-\alpha^{(t)} h^{(t)}(x_i) y_i) / \sum_{j=1}^n w_j^{(t)} \exp(-\alpha^{(t)} h^{(t)}(x_j) y_j)
11   return f^{(T)}(\cdot) = \sum_{t=1}^T \alpha^{(t)} h^{(t)}(\cdot)

Figure 2: The pseudocode of the RegBoost algorithm with binary base classifiers. D_n is the training data, BASE is the base learner, P is the penalty functional, \lambda is the penalty coefficient, and T is the number of iterations.
which is equivalent to maximizing the edge \gamma = 1 - 2\epsilon = \sum_{i=1}^n w_i h(x_i) y_i. The goal of RegBoost's base learner is to minimize the penalized cost

    R_1(h) = \epsilon(h) + \lambda P(h) = \frac{1}{2} - \frac{1}{2}(\gamma - \theta),                    (1)

where P : H \to R is an arbitrary penalty functional or regularization operator, provided to RegBoost and to the base learner, \lambda is the penalty coefficient, and \theta = 2 \lambda P(h) is the edge offset. Intuitively, the edge \gamma quantifies by how much h is better than a random guess, while the edge offset \theta indicates by how much h^{(t)} must be better than a random guess. This means that for complex base classifiers (with large penalties), we require a better base classification than for simple classifiers. The main advantage of R_1 is that it has the form of conventional regularized error minimization, so it accommodates the boosting of all learning algorithms that minimize an error functional of this form (e.g., neural networks with weight decay). However, the minimization of R_1 is suboptimal from boosting's point of view.^3 If computationally possible, the base learner should minimize
    R_2(h) = \left( \frac{1 + \gamma}{1 + \theta} \right)^{\frac{1+\theta}{2}} \left( \frac{1 - \gamma}{1 - \theta} \right)^{\frac{1-\theta}{2}}.                    (2)
^3 This statement, along with the formulae for R_1, R_2, and \alpha^{(t)}, is explained formally after Theorem 1.
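To make these quantities concrete, here is a small numeric sketch (our names, plain NumPy) of the scores a candidate base classifier receives; it assumes -1 < \theta < 1 and mirrors Eqs. (1)-(3) and line 6 of Figure 2:

import numpy as np

def base_scores(h_vals, y, w, penalty, lam):
    """Edge gamma, edge offset theta, the two base objectives R1/R2,
    and the coefficient alpha for one candidate base classifier."""
    gamma = float(np.sum(w * h_vals * y))   # edge: 1 - 2 * weighted error
    theta = 2.0 * lam * penalty             # edge offset
    r1 = 0.5 - 0.5 * (gamma - theta)
    r2 = (((1 + gamma) / (1 + theta)) ** (1 + theta)
          * ((1 - gamma) / (1 - theta)) ** (1 - theta)) ** 0.5
    alpha = 0.5 * np.log((1 + gamma) / (1 - gamma)) \
          - 0.5 * np.log((1 + theta) / (1 - theta))
    return gamma, theta, r1, r2, alpha

Note that alpha is positive exactly when gamma exceeds theta, matching the termination test in line 7 of Figure 2.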
After computing the edge and the edge offset in lines 4 and 5, the algorithm sets the coefficient \alpha^{(t)} of the base classifier h^{(t)} to

    \alpha^{(t)} = \frac{1}{2} \ln \frac{1 + \gamma^{(t)}}{1 - \gamma^{(t)}} - \frac{1}{2} \ln \frac{1 + \theta^{(t)}}{1 - \theta^{(t)}}.                    (3)

In line 11, the algorithm returns the weighted average of the base classifiers f^{(T)}(\cdot) = \sum_{t=1}^T \alpha^{(t)} h^{(t)}(\cdot) as the combined classifier, and uses the sign of f^{(T)}(x) to classify x.
The algorithm must terminate if \alpha^{(t)} \le 0, which is equivalent to \gamma^{(t)} \le \theta^{(t)} and to \epsilon^{(t)} \ge (1 - \theta^{(t)})/2.^4 In this case, the algorithm returns the actual combined classifier in line 8. This means that either the capacity of the set of base classifiers is too small (\gamma^{(t)} is small), or the penalty is too high (\theta^{(t)} is high), so we cannot find a new base classifier that would improve the combined classifier. Note that the algorithm is formally equivalent to AdaBoost if \theta^{(t)} \equiv 0 and to marginal AdaBoost if \theta^{(t)} \equiv \theta is constant.
For the analysis of the algorithm, we first define the unnormalized margin achieved by f^{(T)} on (x_i, y_i) as \rho_i = f^{(T)}(x_i) y_i, and the (normalized) margin as

    \tilde{\rho}_i = \frac{\rho_i}{\|\alpha\|_1} = \frac{\sum_{t=1}^T \alpha^{(t)} h^{(t)}(x_i) y_i}{\sum_{t=1}^T \alpha^{(t)}},                    (4)

where \|\alpha\|_1 = \sum_{t=1}^T \alpha^{(t)} is the L1 norm of the coefficient vector. Let the average penalty or margin offset be defined as the average edge offset

    \bar{\theta} = \frac{\sum_{t=1}^T \alpha^{(t)} \theta^{(t)}}{\sum_{t=1}^T \alpha^{(t)}}.                    (5)
The following theorem upper bounds the marginal training error

    L^{(\bar{\theta})}(f^{(T)}) = \frac{1}{n} \sum_{i=1}^n I\{\tilde{\rho}_i < \bar{\theta}\}                    (6)

achieved by the combined classifier f^{(T)} that RegBoost outputs.
Theorem 1 Let \theta^{(t)} = 2 \lambda P(h^{(t)}), and let \bar{\theta} and L^{(\bar{\theta})}(f^{(T)}) be as defined in (5) and (6), respectively. Let w_i^{(t)} be the weight of training point (x_i, y_i) after the t-th iteration (updated in line 10 in Figure 2), and let \alpha^{(t)} be the weight of the base classifier h^{(t)}(\cdot) (computed in line 6 in Figure 2). Then

    L^{(\bar{\theta})}(f^{(T)}) \le \prod_{t=1}^T \left( e^{\alpha^{(t)} \theta^{(t)}} \sum_{i=1}^n w_i^{(t)} e^{-\alpha^{(t)} h^{(t)}(x_i) y_i} \right) = \prod_{t=1}^T E^{(t)}(\alpha^{(t)}, h^{(t)}).                    (7)
Proof. The proof is an extension of the proof of Theorem 5 in [8].

    L^{(\bar{\theta})}(f^{(T)}) = \frac{1}{n} \sum_{i=1}^n I\left\{ \bar{\theta} \sum_{t=1}^T \alpha^{(t)} - \sum_{t=1}^T \alpha^{(t)} h^{(t)}(x_i) y_i \ge 0 \right\}                    (8)

        \le \frac{1}{n} \sum_{i=1}^n e^{\bar{\theta} \sum_{t=1}^T \alpha^{(t)} - \sum_{t=1}^T \alpha^{(t)} h^{(t)}(x_i) y_i}                    (9)

        = e^{\bar{\theta} \sum_{t=1}^T \alpha^{(t)}} \prod_{t=1}^T \left( \sum_{j=1}^n w_j^{(t)} e^{-\alpha^{(t)} h^{(t)}(x_j) y_j} \right) \sum_{i=1}^n w_i^{(T+1)}.                    (10)

In (8) we used the definitions (6) and (4), the inequality (9) holds since e^x \ge I\{x \ge 0\}, and we obtained (10) by recursively applying line 10 in Figure 2. The theorem follows by the definition (5) and since \sum_{i=1}^n w_i^{(T+1)} = 1.

^4 Strictly speaking, \alpha^{(t)} = 0 could be allowed, but in this case \alpha^{(t)} would remain 0 forever, so it makes no sense to continue.
First note that Theorem 1 explains the base objectives (1) and (2) and the base coefficient (3). The goal of RegBoost is the greedy minimization of the exponential bound in (7), that is, in each iteration we attempt to minimize E^{(t)}(\alpha, h). Given h^{(t)}, E^{(t)}(\alpha, h^{(t)}) is minimized by (3), and with this choice for \alpha^{(t)}, R_2(h) = E^{(t)}(\alpha^{(t)}, h), so the base learner should attempt to minimize R_2(h). If this is computationally impossible, we follow Mason et al.'s functional gradient descent approach [2], that is, we find h^{(t)} by maximizing the negative gradient -\partial E^{(t)}(\alpha, h) / \partial \alpha at \alpha = 0. Since -\partial E^{(t)}(\alpha, h) / \partial \alpha |_{\alpha=0} = \gamma - \theta, this criterion is equivalent to the minimization of R_1(h).^5
Theorem 1 also suggests various interpretations of RegBoost which indicate why it would indeed achieve regularization. First, by (9) it can be seen that RegBoost directly minimizes

    \frac{1}{n} \sum_{i=1}^n \exp(-\rho_i + \bar{\theta} \|\alpha\|_1),

which can be interpreted as an exponential cost on the unnormalized margin with an L1 weight decay. The weight decay coefficient \bar{\theta} is proportional to the average complexity of the base classifiers. Second, Theorem 1 also indicates that RegBoost indirectly minimizes the marginal error L^{(\bar{\theta})}(f^{(T)}) (6), where the margin parameter \bar{\theta}, again, is moving adaptively with the average complexity of the base classifiers. This explanation is supported by theoretical results that bound the generalization error in terms of the marginal error (e.g., Theorem 2 in [8]). The third explanation is based on results showing that the difference between the marginal error and the generalization error can be upper bounded in terms of the complexity of the base classifier class H (e.g., Theorem 4 in [9]). By imposing a non-zero penalty on the base classifiers, we can reduce the pool of admissible functions to those of which the edge \gamma is larger than the edge offset \theta. Although the theoretical results do not apply directly, they support the empirical evidence (Section 4) indicating that the reduction of the pool of admissible base classifiers and the sparsity of the combined classifier play an important role in decreasing the generalization error.
Finally, note that the algorithm can be easily extended to real-valued base classifiers along the lines of [10] and to regression by using the algorithm proposed in [11]. If base classifiers come from the set \{h : R^d \to R\}, we can only use the base objective R_1(h) (1), and the analytical solution (3) for the base coefficients \alpha^{(t)} must be replaced by a simple numerical minimization (line search) of E^{(t)}(\alpha, h^{(t)}).^6 In the case of regression, the binary cost function I\{h(x) \ne y\} should be replaced by an appropriate regression cost (e.g., quadratic), and the final regressor should be the weighted median of the base regressors instead of their weighted average.
3 The graph Laplacian regularizer
The algorithm can be used with any regularized base learner that optimizes a penalized cost of the form (1). In this paper we apply a smoothness functional based on the graph Laplacian operator, proposed in a similar context by [4]. The advantage of this penalty is that it is relatively simple to compute for enumerable base classifiers (e.g., decision stumps or decision trees) and that it suits applications where the data exhibits a low-dimensional manifold structure.

^5 Note that if \theta is constant (AdaBoost or marginal AdaBoost), the minimization of R_1(h) and R_2(h) leads to the same solution, namely, to the base classifier that minimizes the weighted error \epsilon. This is no longer the case if \theta depends on h.
^6 As a side remark, note that applying a non-zero (even constant) penalty \theta would provide an alternative solution to the singularity problem (\alpha^{(t)} = \infty) in the abstaining base classifier model of [10].
Formally, let G = (V, E) be the neighborhood graph of the training set where the vertex
set V = {x1 , . . . , xn } is identical to the set of observations, and the edge set E contains
pairs of 'neighboring' vertices (x_i, x_j) such that either ||x_i - x_j|| < r or x_i (x_j) is among the k nearest neighbors of x_j (x_i), where r or k is fixed. This graph plays a crucial role
in several recently developed dimensionality reduction methods since it approximates the
natural topology of the data if it is confined to a low-dimensional smooth manifold in the
embedding space. To penalize base classifiers that cut through dense regions, we use the
smoothness functional

    P_L(h) = \frac{1}{2|W|} \sum_{i=1}^n \sum_{j=i+1}^n \left( h(x_i) - h(x_j) \right)^2 W_{ij},

where W is the adjacency matrix of G, that is, W_{ij} = I\{(x_i, x_j) \in E\}, and 2|W| = 2 \sum_{i=1}^n \sum_{j=1}^n W_{ij} is a normalizing factor so that 0 \le P_L(h) \le 1.^7 For binary base classifiers, P_L(h) is proportional to the number of separated neighbors, that is, the number of connected pairs that are classified differently by h. Let the diagonal matrix D be defined by D_{ii} = \sum_{j=1}^n W_{ij}, and let L = D - W be the graph Laplacian of G. Then it is easy to see that

    2|W| P_L(h) = h L h^T = \langle h, Lh \rangle = \sum_{i=1}^n \lambda_i \langle h, e_i \rangle^2,
where h = (h(x_1), ..., h(x_n)), and e_i and \lambda_i are the (normalized) eigenvectors and eigenvalues of L, that is, L e_i = \lambda_i e_i, ||e_i|| = 1. Since L is positive semidefinite, all the eigenvalues are non-negative. The eigenvectors with the smallest eigenvalues can be considered as the 'smoothest' functions on the neighborhood graph. Based on this observation, [4] proposed
to learn a linear combination of a small number of the eigenvectors with the smallest eigenvalues. One problem of this approach is that the out-of-sample extension of the obtained
classifier is non-trivial since the base functions are only known at the data points that participated in forming the neighborhood graph, so it can only be used in a semi-supervised setting (when unlabeled test points are known before the learning). Our approach is based
on the same intuition, but instead of looking for a linear combination of the eigenvectors,
we form a linear combination of known base functions and penalize them according to their
smoothness on the underlying manifold. So, beside semi-supervised learning (explored in
Section 4), our algorithm can also be used to classify out-of-sample test observations.
The penalty functional can also be justified from the point of view of spectral clustering
[5]. The eigenvectors of L with the smallest eigenvalues^8 represent 'natural' clusters in the data set, so P_L(h) is small if h is aligned with these eigenvectors, and P_L(h) is large if
h splits the corresponding clusters.
^7 Another variant (that we did not explore in this paper) is to weight edges decreasingly with their lengths.
^8 Starting from the second smallest; the smallest is 0 and it corresponds to the constant function. Also note that spectral clustering usually uses the eigenvectors of the normalized Laplacian \tilde{L} = D^{-1/2} L D^{-1/2}. Nevertheless, if the neighborhood graph is constructed by connecting a fixed number of nearest neighbors, D_{ii} is approximately constant, so the eigenvectors of L and \tilde{L} are approximately equal.
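A sketch of the penalty computation on a k-nearest-neighbor graph (our names; plain NumPy with O(n^2) distance computations, adequate for data sets of this size):

import numpy as np

def laplacian_penalty(h_vals, W):
    """P_L(h) for binary h in {-1, +1}: the fraction of edges of the
    neighborhood graph (symmetric adjacency W) that h cuts; this
    equals h L h^T / (2|W|) with the normalization above."""
    H = h_vals[:, None] != h_vals[None, :]      # separated pairs
    return (W * H).sum() / W.sum()

def knn_adjacency(X, k=8):
    """Symmetric 0/1 k-nearest-neighbor adjacency matrix."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)
    W = np.zeros_like(d)
    idx = np.argsort(d, axis=1)[:, :k]
    rows = np.repeat(np.arange(len(X)), k)
    W[rows, idx.ravel()] = 1.0
    return np.maximum(W, W.T)                   # edge if either is a k-NN of the other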
4 Experiments
In this section we present experimental results on four UCI benchmark datasets. The results are preliminary in the sense that we only validated the penalty coefficient \lambda, and did not optimize the number of neighbors (set to k = 8) or the weighting scheme of the edges of the neighborhood graph (W_{ij} = 0 or 1). We used decision stumps as base classifiers, 10-fold cross validation for estimating errors, and 5-fold cross validation for determining \lambda. The results (Figure 3(a)-(d) and Table 1) show that RegBoost consistently improves
generalization. Although the improvement is within the standard deviation, the difference
between the test and the training error decreases significantly in two of the four experiments, which indicates reduced overfitting. The final classifier is also significantly sparser
after 1000 iterations (last two columns of Table 1). To measure how the penalty affects the
base classifier pool, in each iteration we calculated the number of admissible base classifiers relative to the total number of stumps considered by AdaBoost. Figure 3(e) shows that, as expected, RegBoost traverses only a (sometimes quite small) subset of the base
classifier space.
[Figure 3 appears here: six panels of learning curves, each plotting training and test error versus boosting iteration t for AdaBoost and RegBoost, plus the rate of admissible stumps.]
Figure 3: Learning curves. Test and training errors for the (a) ionosphere, (b) breast
cancer, (c) sonar, and (d) Pima Indians diabetes data sets. (e) Rate of admissible stumps.
(f) Test and training errors for the ionosphere data set with 100 labeled and 251 unlabeled
data points.
data set        training error      test error                        # of stumps
                AdaB    RegB        AdaB            RegB              AdaB   RegB
ionosphere      0%      0%          9.14% (7.1)     7.7% (6.0)        182    114
breast cancer   0%      2.44%       5.29% (3.5)     3.82% (3.7)       58     30
sonar           0%      0%          32.5% (19.8)    29.8% (18.8)      234    199
Pima Indians    10.9%   16.0%       25.3% (5.3)     23.3% (6.8)       175    91
Table 1: Error rates and number of base classifiers after 1000 iterations.
Since the Laplacian penalty can be computed without knowing the labels, the algorithm
can also be used for semi-supervised learning. Figure 3(f) shows the results when only a
subset of the training points are labeled. In this case, RegBoost can use the combined data set to calculate the penalty, whereas both algorithms can use only the labeled points to determine the base errors. Figure 3(f) indicates that RegBoost has a clear advantage here. RegBoost is also far better than the semi-supervised algorithm proposed in [12]
(their best test error using the same settings is 18%).
5 Conclusion
In this paper we proposed to combine two powerful ideas, boosting and manifold learning. The algorithm can be used to boost any regularized base learner. Experimental results indicate that RegBoost slightly improves AdaBoost by incorporating knowledge on the structure of the data into base classifier selection. RegBoost also significantly improves a recently proposed semi-supervised algorithm based on the same regularizer. In the immediate future our goal is to conduct a larger-scale experimental study in which we optimize all the parameters of the algorithm, and compare it not only to AdaBoost, but also to marginal AdaBoost, that is, RegBoost with a constant penalty \theta. Marginal AdaBoost might exhibit similar behavior on the supervised task (sparsity, reduced number of admissible base classifiers); however, it cannot be used for semi-supervised learning. We also plan to experiment with other penalties which are computationally less costly than the Laplacian penalty.
References
[1] Y. Freund and R. E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," Journal of Computer and System Sciences, vol. 55, pp. 119-139, 1997.
[2] L. Mason, P. Bartlett, J. Baxter, and M. Frean, "Boosting algorithms as gradient descent," in Advances in Neural Information Processing Systems. 2000, vol. 12, pp. 512-518, The MIT Press.
[3] G. Rätsch, T. Onoda, and K.-R. Müller, "Soft margins for AdaBoost," Machine Learning, vol. 42, no. 3, pp. 287-320, 2001.
[4] M. Belkin and P. Niyogi, "Semi-supervised learning on Riemannian manifolds," Machine Learning, to appear, 2004.
[5] J. Shi and J. Malik, "Normalized cuts and image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888-905, 2000.
[6] G. Rätsch and M. K. Warmuth, "Maximizing the margin with boosting," in Proceedings of the 15th Conference on Computational Learning Theory, 2002.
[7] L. Breiman, "Prediction games and arcing classifiers," Neural Computation, vol. 11, pp. 1493-1518, 1999.
[8] R. E. Schapire, Y. Freund, P. Bartlett, and W. S. Lee, "Boosting the margin: a new explanation for the effectiveness of voting methods," Annals of Statistics, vol. 26, no. 5, pp. 1651-1686, 1998.
[9] A. Antos, B. Kégl, T. Linder, and G. Lugosi, "Data-dependent margin-based generalization bounds for classification," Journal of Machine Learning Research, pp. 73-98, 2002.
[10] R. E. Schapire and Y. Singer, "Improved boosting algorithms using confidence-rated predictions," Machine Learning, vol. 37, no. 3, pp. 297-336, 1999.
[11] B. Kégl, "Robust regression by boosting the median," in Proceedings of the 16th Conference on Computational Learning Theory, Washington, D.C., 2003, pp. 258-272.
[12] M. Belkin, I. Matveeva, and P. Niyogi, "Regression and regularization on large graphs," in Proceedings of the 17th Conference on Computational Learning Theory, 2004.
Mass meta-analysis in Talairach space
Finn Årup Nielsen
Neurobiology Research Unit, Rigshospitalet
Copenhagen, Denmark
and
Informatics and Mathematical Modelling, Technical University of Denmark,
Lyngby, Denmark
[email protected]
Abstract
We provide a method for mass meta-analysis in a neuroinformatics
database containing stereotaxic Talairach coordinates from neuroimaging experiments. Database labels are used to group the individual experiments, e.g., according to cognitive function, and the
consistent pattern of the experiments within the groups is determined. The method voxelizes each group of experiments via
a kernel density estimation, forming probability density volumes.
The values in the probability density volumes are compared to
null-hypothesis distributions generated by resamplings from the
entire unlabeled set of experiments, and the distances to the null-hypotheses are used to sort the voxels across groups of experiments. This allows for mass meta-analysis, with the construction
of a list with the most prominent associations between brain areas and group labels. Furthermore, the method can be used for
functional labeling of voxels.
1
Introduction
Neuroimaging experimenters usually report their results in the form of 3-dimensional coordinates in the standardized stereotaxic Talairach system [1]. Automated meta-analytic and information retrieval methods are enabled when such data
are represented in databases such as the BrainMap DBJ ([2], www.brainmapdbj.org)
or the Brede database [3]. Example methods include outlier detection [4] and identification of similar volumes [5].
Apart from the stereotaxic coordinates, the databases record details of the experimental situation, e.g., the behavioral domain and the scanning modality. In the
Brede database the main annotation is the so-called 'external components'^1 which are heuristically organized in a simple ontology: a directed graph (more specifically,
are heuristically organized in a simple ontology: A directed graph (more specifically,
a causal network) with the most general components as the roots of the graph, e.g.,
¹External components might be thought of as "cognitive components" or simply "brain
functions", but they are more general, e.g., they also incorporate neuroreceptors. The
components are called "external" since they are external variables to the brain image.
Figure 1: The external components around "thermal pain": "pain" (WOEXT: 40) is the parent of "thermal pain" (WOEXT: 261), which in turn has "cold pain" (WOEXT: 41) and "hot pain" (WOEXT: 69) as children.
"hot pain" is a child of "thermal pain" that in turn is a child of "pain" (see Figure 1).
The simple ontology is setup from information typically found in the introduction
section of scientific articles, and it is compared with the Medical Subject Headings
ontology of the National Library of Medicine. The ontology is stored in a simple
XML file.
The Brede database is organized, like the BrainMap DBJ, on different levels with
scientific papers on the highest level. Each scientific paper contains one or more
"experiments", each of which in turn contains one or more locations. The individual
experiments are typically labeled with an external component. The experiments
that are labeled with the same external component form a group, and the distribution
of locations within the group becomes relevant: if a specific external component
is localized to a specific brain region, then the locations associated with the external
component should cluster in Talairach space.
We will describe a meta-analytic method that identifies important associations between external components and clustered Talairach coordinates. We have previously
modeled the relation between Talairach coordinates and neuroanatomical terms
[4, 6] and the method that we propose here can be seen as an extension describing
the relationship between Talairach coordinates and, e.g., cognitive components.
2 Method

The data from the Brede database [3] was used, which at the time contained data
from 126 scientific articles containing 391 experiments and 2734 locations. There
were 380 external components. The locations referenced with respect to the MNI
atlas were realigned to the Talairach atlas [7].
To form a vectorial representation, each location was voxelized by convolving the
location $l$ at position $\mathbf{v}_l = [x, y, z]'$ with a Gaussian kernel [4, 8, 9]. This constructed
a probability density in Talairach space $\mathbf{v}$

$$p(\mathbf{v}|l) = (2\pi\sigma^2)^{-3/2} \exp\left( -\frac{(\mathbf{v}-\mathbf{v}_l)'(\mathbf{v}-\mathbf{v}_l)}{2\sigma^2} \right), \qquad (1)$$
with the width $\sigma$ fixed to 1 centimeter. To form a resulting probability density
volume $p(\mathbf{v}|t)$ for an external component $t$, the individual components from each
location were multiplied by the appropriate priors and summed

$$p(\mathbf{v}|t) = \sum_{l,e} p(\mathbf{v}|l)\, P(l|e)\, P(e|t), \qquad (2)$$
with $P(l|e) = 0$ if location $l$ did not appear in experiment $e$, and $P(e|t) = 0$
if experiment $e$ is not associated with external component $t$. The precise
normalization of these priors is an unresolved problem. A paper with many locations
and experiments should not be allowed to dominate the results. This can be the case
if all locations are given equal weight. On the other hand a paper which reports
just a single coordinate should probably not be weighted as much as one with
many experiments and locations: Few reported locations might be due to limited
(statistical) power of the experiment. As a compromise between the two extremes
we used the square root of the number of the locations within an experiment and
the square root of the number of experiments within a paper for the prior P (l|e).
The square root normalization is also an appropriate normalization in certain voting
systems [10]. The second prior was uniform, $P(e|t) \propto 1$, for those experiments that
were labeled with external component $t$.
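To make the voxelization concrete, the following sketch renders equations (1)-(3) in Python. It is our own minimal illustration, not the Brede toolbox code: the names (`density_volume`, `grid`, `sigma`) are assumed, the paper-level square-root weighting is folded into a single experiment-level factor for brevity, and σ is the 1 cm kernel width expressed in millimeters.

```python
import numpy as np

def density_volume(experiments, grid, sigma=10.0):
    """Minimal sketch of equations (1)-(3): accumulate Gaussian kernels
    (sigma = 10 mm, i.e. the 1 cm width of the paper) over the locations
    of one group of experiments, weighted by square-root priors.

    experiments: list of arrays, each (n_locations, 3), holding the
                 Talairach (x, y, z) coordinates of one experiment.
    grid:        array (n_voxels, 3) of voxel-center coordinates.
    Returns w_t, proportional to p(v|t).
    """
    norm = (2.0 * np.pi * sigma**2) ** -1.5
    w = np.zeros(len(grid))
    p_e = 1.0 / np.sqrt(len(experiments))   # experiment-level prior
    for locs in experiments:
        p_l = 1.0 / np.sqrt(len(locs))      # location-level prior P(l|e)
        for v_l in locs:
            d2 = np.sum((grid - v_l) ** 2, axis=1)
            w += p_e * p_l * norm * np.exp(-d2 / (2.0 * sigma**2))
    return w
```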
The continuous volume was sampled at regular grid points to establish a vector $\mathbf{w}_t$
for each external component

$$\mathbf{w}_t \propto p(\mathbf{v}|t). \qquad (3)$$
Null-hypothesis distributions for the maximum statistics u across the voxels in the
volume were built up by resampling: A number of experiments E was selected
and E experiments were resampled, with replacement, from the entire set of 391
experiments, ignoring the grouping imposed by the external component labeling.
The experiments were resampled without regard to the paper they originated from.
The maximum across voxels was found as:
$$u_r(E) = \max_j\, [w_r(j)], \qquad (4)$$
where j is an index over voxels and r is the resample index. With R resamplings we
obtain a vector $\mathbf{u}(E) = [u_1(E) \ldots u_r(E) \ldots u_R(E)]$ that is a function of the number
of experiments and which forms an empirical distribution $u(E)$. When the value
$w_{t,j}$ of the $j$-th voxel of external component $t$ is compared with this distribution,
a distance to the null-hypothesis can be generated

$$d_{t,j} = \mathrm{Prob}\left[ w_{t,j} > u(E_t) \right], \qquad (5)$$
where $1 - d$ is a statistical $P$-value and where $E_t$ is the number of experiments
associated with external component $t$. Thus the resampling allows us to convert
the probability density value to a probability that is comparable across external
components of different sizes. The maximum statistics deal automatically with the
multiple comparison problem across voxels [11].
$d_{t,j}$ can be computed by counting the fraction of the resampled values $u_r$ that are
below the value of $w_{t,j}$. The resampling distribution can also be approximated and
smoothed by modeling it with a non-linear function. In our case we used a standard
two-layer feed-forward neural network with hyperbolic tangent hidden units [12, 13],
modeling the function $f(E, u) = \mathrm{atanh}(2d - 1)$ with a quadratic cost function.
The non-linear function allows for a more compact representation of the empirical
distribution of the resampled maximum statistics.
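The resampling scheme of equations (4)-(5) can be rendered as follows, reusing the `density_volume` sketch above; the loop structure, the `R = 1000` default, and the random-number handling are our own assumptions rather than the implementation actually used for Figure 2.

```python
import numpy as np

def null_max_statistics(all_experiments, grid, E, R=1000, seed=0):
    """Equation (4): resample E experiments with replacement from the
    whole database, voxelize each resample with density_volume (sketched
    earlier), and record the maximum statistic across voxels."""
    rng = np.random.default_rng(seed)
    u = np.empty(R)
    for r in range(R):
        idx = rng.integers(len(all_experiments), size=E)
        w_r = density_volume([all_experiments[i] for i in idx], grid)
        u[r] = w_r.max()
    return u

def distances(w_t, u):
    """Equation (5): d_{t,j} = Prob[w_{t,j} > u(E_t)], estimated as the
    fraction of resampled maxima that lie below each voxel value."""
    return np.array([np.mean(u < w_j) for w_j in w_t])
```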
As a final step, the probability volumes for the external components $\mathbf{w}_t$ were thresholded at selected levels and isosurfaces were generated in the distance volume for visualization. Connected voxels within the thresholded volume were found by region
identification and the local maxima in the regions were determined.
Functional labeling of specified voxels is also possible: the distances $d_{t,j}$ were collected in an (external component × voxel) matrix $\mathbf{D}$ and the elements in the $j$-th column
sorted. Lastly, the voxel was labeled with the top external component.
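Functional labeling of a voxel is then a single column sort of $\mathbf{D}$; a minimal sketch with our own naming:

```python
def label_voxel(D, component_names, j, top=15):
    """Sort column j of the (component x voxel) distance matrix D
    (a NumPy array) and return the top-scoring external components,
    as in Table 2 below."""
    order = D[:, j].argsort()[::-1]
    return [(component_names[t], D[t, j]) for t in order[:top]]
```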
Only the bottom nodes of the causal networks of external components are likely
to be directly associated with experiments. To label the ancestors, the labels from
[Figure 2: log-log plot "Randomization test statistics" of the test statistic (max pdf) against the number of experiments, with isolines at d = 0.5, 0.75, 0.9, 0.95 and 0.99.]
Figure 2: The test statistics at various distances to the null-hypothesis ($d = 1 - P$)
after 1000 resamplings. The distance is shown as a function of the number of
experiments $E$ in the resampling.
their descendants were back-propagated, e.g., a study explicitly labeled as "hot
pain" was also labeled as "thermal pain" and "pain". Apart from this simple
back-propagation, the hierarchical structure of the external components was not
incorporated into the prior.
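The back-propagation of labels to ancestors can be written as a simple graph traversal. The sketch below assumes the ontology is available as a child-to-parents dictionary, which is our own representation rather than the XML format used by the database.

```python
def propagate_labels(labels, parents):
    """Extend an experiment's label set with all ancestors, e.g.
    {'hot pain'} -> {'hot pain', 'thermal pain', 'pain'}.

    labels:  set of component names attached to an experiment.
    parents: dict mapping a component to a list of parent components.
    """
    expanded = set(labels)
    stack = list(labels)
    while stack:
        node = stack.pop()
        for p in parents.get(node, []):
            if p not in expanded:
                expanded.add(p)
                stack.append(p)
    return expanded
```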
3 Results

Figure 2 shows isolines in the cumulative distribution of the resampled maximum
statistics $u(E)$ as a function of the resampling set size (number of experiments)
from $E = 1$ to $E = 100$. Since the vectorized volume is not normalized to form a
probability density, the curves increase with our selected normalization.
Table 1 shows the result of sorting the maximum distances across voxels within the
external components. Topping the list are external components associated with
movement. The voxel with the largest distance is localized at $\mathbf{v} = (0, -8, 56)$, which
is most likely due to movement studies activating the supplementary motor area. In
the Brede database the mean is $(6, -7, 55)$ for the locations in the right hemisphere
labeled as supplementary motor area. Other voxels with a high distance for the
movement external components are located in the primary motor area.
A number of other entries on the list are associated with pain, with the main voxel
at (0, 8, 32) in the right anterior cingulate. Other important areas are shown in
Figure 3 with isosurfaces in the distance volume for the external component "pain"
(WOEXT: 40). These are localized in the anterior cingulate, right and left insula
and thalamus.
Other external components high on the list are "audition" together with "voice"
 #   d      x    y     z    Name (WOEXT)
 1   1.00   0   -8    56    Localized movement (266)
 2   1.00   0   -8    56    Motion, movement, locomotion (4)
 3   1.00   0    8    32    Pain (40)
 4   1.00   0    8    32    Thermal pain (261)
 5   1.00  56  -16     0    Audition (14)
 6   1.00   0    8    32    Temperature sensation (204)
 7   1.00   0    8    32    Somesthesis (17)
 8   0.99   0  -56    16    Memory retrieval (24)
 9   0.99   0    8    32    Warm temperature sensation (207)
10   0.99  24   -8    -8    Unpleasantness (153)
11   0.99  56  -16     0    Voice (167)
12   0.99   0  -56    16    Memory (9)
13   0.99  24   -8    -8    Emotion (3)
14   0.99   0  -56    16    Long-term memory (112)
15   0.99   0  -56    16    Declarative memory (319)

Table 1: The top 15 elements of the list, showing the external components that
score the highest, the distance to the null-hypothesis $d$, and the associated Talairach
$x$, $y$ and $z$ coordinates. The numbers in parentheses are the Brede database
identifiers for the external components (WOEXT). This list was generated with
coarse $8 \times 8 \times 8\,\mathrm{mm}^3$ voxels and using the non-linear model approximation for the
cumulative distribution functions.
appearing in left and right superior temporal gyrus, and memory emerging in the
posterior cingulate area. Unpleasantness and emotion are high on the list due to,
e.g., "fear" and "disgust" experiments that report activation in the right amygdala
and nearby areas.
An example of the functional labeling of a voxel appears in Table 2. The chosen
voxel is $(0, -56, 16)$, which lies in the posterior cingulate. Memory retrieval is the
first on the list, in accordance with Table 1. Many of the other external components
on the list are also related to memory.
4 Discussion

The Brede database contains many thermal pain experiments, and it causes high
scores for voxels from external components such as "pain" and "thermal pain". The
four focal "brain activations" that appear in Figure 3 are localized in areas (anterior
cingulate, insula and thalamus) that an expert reviewer has previously identified
as important in pain [14]. Thus there is consistency between our automated meta-analytic technique and a "manual" expert review.
Many experiments that report activation in the posterior cingulate area have been
included in the Brede database, and this is probably why memory is especially associated with this area. A major review of 275 functional neuroimaging studies
found that episodic memory retrieval is the cognitive function with highest association with the posterior cingulate [15], so our finding is again in alignment with an
Figure 3: Plot of the important areas associated with the external component
"pain". The red opaque isosurface is at the level $d = 0.95$ in the distance volume, while the gray transparent surface appears at $d = 0.05$. Yellow glyphs appear
at the local maxima in the thresholded volume. The viewpoint is situated nearest
to the left superior posterior corner of the brain.
expert review.
A number of the substantial associations between brain areas and external components are not surprising, e.g., audition associating with superior temporal gyrus.
Our method has no inherent knowledge of what is already known, and is thus not able
to distinguish novel associations from trivial ones.
A downside of the present method is that it requires the labeling of experiments
during database entry and the construction of the hierarchy of the labels (Figure 1).
Both are prone to "interpretation", and this is particularly a problem for complex
cognitive functions. Our methodology, however, does not necessarily impose a single
organization of the external components, and it is possible to rearrange these by
defining another adjacency matrix for the graph.
In Table 1 the brain areas are represented in terms of Talairach coordinates. It
should be possible to convert these coordinates further to neuroanatomical terms
 #   d      Name (WOEXT)
 1   0.99   Memory retrieval (24)
 2   0.99   Memory (9)
 3   0.99   Long-term memory (112)
 4   0.99   Declarative memory (319)
 5   0.99   Episodic memory (49)
 6   0.96   Autobiographical memory (259)
 7   0.94   Cognition (2)
 8   0.94   Episodic memory retrieval (109)
 9   0.58   Disease (79)
10   0.16   Recognition (190)
11   0.14   Psychiatric disorders (82)
12   0.14   Neurotic, stress and somatoform disorders (227)
13   0.11   Severe stress reactions and adjustment disorders (228)
14   0.09   Emotion (3)
15   0.02   Semantic memory (318)

Table 2: Example of a functional label list for the voxel $\mathbf{v} = (0, -56, 16)$ in the posterior
cingulate area.
by using the models between coordinates and lobar anatomy that we previously
have established [4, 6].
Functional labeling should allow us to build a complete functional atlas for the entire
brain. The utility of this approach is, however, limited by the small size of the Brede
database and its bias towards specific brain regions and external components. But
such a functional atlas will serve as a neuroinformatic organizer for the increasing
number of neuroimaging studies.
Acknowledgment
I am grateful to Matthew G. Liptrot for reading and commenting on the manuscript.
Lars Kai Hansen is thanked for discussion, Andrew C. N. Chen for identifying some
of the thermal pain studies and the Villum Kann Rasmussen Foundation for their
generous support of the author.
References
[1] Jean Talairach and Pierre Tournoux. Co-planar Stereotaxic Atlas of the Human
Brain. Thieme Medical Publisher Inc, New York, January 1988.
[2] Peter T. Fox and Jack L. Lancaster. Mapping context and content: the BrainMap model. Nature Reviews Neuroscience, 3(4):319-321, April 2002.
[3] Finn Årup Nielsen. The Brede database: a small database for functional neuroimaging. NeuroImage, 19(2), June 2003. Presented at the 9th International Conference on Functional Mapping of the Human Brain, June 19-22, 2003, New York, NY. Available on CD-Rom.
[4] Finn Årup Nielsen and Lars Kai Hansen. Modeling of activation data in the BrainMap(TM) database: Detection of outliers. Human Brain Mapping, 15(3):146-156, March 2002.
[5] Finn Årup Nielsen and Lars Kai Hansen. Finding related functional neuroimaging volumes. Artificial Intelligence in Medicine, 30(2):141-151, February 2004.
[6] Finn Årup Nielsen and Lars Kai Hansen. Automatic anatomical labeling of Talairach coordinates and generation of volumes of interest via the BrainMap database. NeuroImage, 16(2), June 2002. Presented at the 8th International Conference on Functional Mapping of the Human Brain, June 2-6, 2002, Sendai, Japan. Available on CD-Rom.
[7] Matthew Brett. The MNI brain and the Talairach atlas. http://www.mrc-cbu.cam.ac.uk/Imaging/mnispace.html, August 1999. Accessed 2003 March 17.
[8] Peter E. Turkeltaub, Guinevere F. Eden, Karen M. Jones, and Thomas A. Zeffiro. Meta-analysis of the functional neuroanatomy of single-word reading: method and validation. NeuroImage, 16(3 part 1):765-780, July 2002.
[9] J. M. Chein, K. Fissell, S. Jacobs, and Julie A. Fiez. Functional heterogeneity within Broca's area during verbal working memory. Physiology & Behavior, 77(4-5):635-639, December 2002.
[10] Lionel S. Penrose. The elementary statistics of majority voting. Journal of the Royal Statistical Society, 109:53-57, 1946.
[11] Andrew P. Holmes, R. C. Blair, J. D. G. Watson, and I. Ford. Non-parametric analysis of statistic images from functional mapping experiments. Journal of Cerebral Blood Flow and Metabolism, 16(1):7-22, January 1996.
[12] Claus Svarer, Lars Kai Hansen, and Jan Larsen. On the design and evaluation of tapped-delay lines neural networks. In Proceedings of the IEEE International Conference on Neural Networks, San Francisco, California, USA, volume 1, pages 46-51, 1993.
[13] Lars Kai Hansen, Finn Årup Nielsen, Peter Toft, Matthew George Liptrot, Cyril Goutte, Stephen C. Strother, Nicholas Lange, Anders Gade, David A. Rottenberg, and Olaf B. Paulson. "lyngby" - a modeler's Matlab toolbox for spatio-temporal analysis of functional neuroimages. NeuroImage, 9(6):S241, June 1999.
[14] Martin Ingvar. Pain and functional imaging. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 354(1387):1347-1358, July 1999.
[15] Roberto Cabeza and Lars Nyberg. Imaging cognition II: An empirical review of 275 PET and fMRI studies. Journal of Cognitive Neuroscience, 12(1):1-47, January 2000.
1,778 | 2,615 | Kernels for Multi-task Learning
Charles A. Micchelli
Department of Mathematics and Statistics
State University of New York,
The University at Albany
1400 Washington Avenue, Albany, NY, 12222, USA
Massimiliano Pontil
Department of Computer Sciences
University College London
Gower Street, London WC1E 6BT, England, UK
Abstract
This paper provides a foundation for multi-task learning using reproducing kernel Hilbert spaces of vector-valued functions. In this setting, the kernel is a
matrix-valued function. Some explicit examples will be described which go beyond our earlier results in [7]. In particular, we characterize classes of matrix-valued kernels which are linear and are of the dot product or the translation invariant type. We discuss how these kernels can be used to model relations between
the tasks and present linear multi-task learning algorithms. Finally, we present a
novel proof of the representer theorem for a minimizer of a regularization functional which is based on the notion of minimal norm interpolation.
1 Introduction

This paper addresses the problem of learning a vector-valued function f : X → Y, where
X is a set and Y a Hilbert space. We focus on linear spaces of such functions that admit a
reproducing kernel, see [7]. This study is valuable from a variety of perspectives. Our main
motivation is the practical problem of multi-task learning where we wish to learn many
related regression or classification functions simultaneously, see e.g. [3, 5, 6]. For instance,
image understanding requires the estimation of multiple binary classifiers simultaneously,
where each classifier is used to detect a specific object. Specific examples include locating
a car from a pool of possibly similar objects, which may include cars, buses, motorbikes,
faces, people, etc. Some of these objects or tasks may share common features, so it would
be useful to relate their classifier parameters. Other examples include multi-modal human-computer interfaces, which require the modeling of both, say, speech and vision, or tumor
prediction in bioinformatics from multiple micro-array datasets.

Moreover, the spaces of vector-valued functions described in this paper may be useful for
learning continuous transformations. In this case, X is a space of parameters and Y a
Hilbert space of functions. For example, in face animation X represents pose and expression of a face and Y a space of functions IR² → IR, although in practice one considers
discrete images, in which case f(x) is a finite-dimensional vector whose components are
associated with the image pixels. Other problems, such as image morphing, can be formulated
as vector-valued learning.
When Y is an n-dimensional Euclidean space, one straightforward approach to learning a
vector-valued function f = (f_1, ..., f_n) consists in separately representing each component of f by a linear space of smooth functions and then learning these components independently, for example by minimizing some regularized error functional. This approach does
not capture relations between components of f (which are associated with tasks or pixels in
the examples above) and should not be the method of choice when these relations occur. In
this paper we investigate how kernels can be used for representing vector-valued functions.
We propose to do this by using a matrix-valued kernel K : X × X → IR^{n×n} that reflects
the interaction amongst the components of f. This paper provides a foundation for this
approach. For example, in the case of support vector machines (SVMs) [10], appropriate
choices of the matrix-valued kernel implement a trade-off between large margin of each
per-task SVM and large margin of combinations of these SVMs, e.g. their average.

The paper is organized as follows. In section 2 we formalize the above observations and
show that reproducing kernel Hilbert spaces (RKHS) of vector-valued functions admit a kernel
whose values are bounded linear operators on the output space Y, and we characterize the
form of some of these operators in section 3. Finally, in section 4 we provide a novel proof
for the representer theorem which is based on the notion of minimal norm interpolation and
present linear multi-task learning algorithms.
2 RKHS of vector-valued functions

Let Y be a real Hilbert space with inner product (·, ·), X a set, and H a linear space of functions on X with values in Y. We assume that H is also a Hilbert space with inner product
⟨·, ·⟩. We present two methods to enhance standard RKHS to vector-valued functions.

2.1 Matrix-valued kernels based on Aronszajn
The first approach extends the scalar case, Y = IR, in [2].
Definition 1 We say that H is a reproducing kernel Hilbert space (RKHS) of functions
f : X → Y when, for any y ∈ Y and x ∈ X, the linear functional which maps f ∈ H to
(y, f(x)) is continuous on H.

We conclude from the Riesz Lemma (see, e.g., [1]) that, for every x ∈ X and y ∈ Y, there
is a linear operator K_x : Y → H such that

$$(y, f(x)) = \langle K_x y, f \rangle. \qquad (2.1)$$

For every x, t ∈ X we also introduce the linear operator K(x, t) : Y → Y defined, for
every y ∈ Y, by

$$K(x, t)y := (K_t y)(x). \qquad (2.2)$$
In the proposition below we state the main properties of the function K. To this end,
we let L(Y) be the set of all bounded linear operators from Y into itself and, for every
A ∈ L(Y), we denote by A* its adjoint. We also use L₊(Y) to denote the cone of positive
semidefinite bounded linear operators, i.e. A ∈ L₊(Y) provided that, for every y ∈ Y,
(y, Ay) ≥ 0. When this inequality is strict for all y ≠ 0 we say A is positive definite.
We also denote by IN_m the set of positive integers up to and including m. Finally, we say
that H is normal provided there does not exist (x, y) ∈ X × (Y\{0}) such that the linear
functional (y, f(x)) = 0 for all f ∈ H.
Proposition 1 If K(x, t) is defined, for every x, t ∈ X, by equation (2.2) and K_x is given
by equation (2.1), then the kernel K satisfies, for every x, t ∈ X, the following properties:
(a) For every y, z ∈ Y, we have that (y, K(x, t)z) = ⟨K_t z, K_x y⟩.
(b) K(x, t) ∈ L(Y), K(x, t) = K(t, x)*, and K(x, x) ∈ L₊(Y).
Moreover, K(x, x) is positive definite for all x ∈ X if and only if H is normal.
(c) For any m ∈ IN, {x_j : j ∈ IN_m} ⊆ X, {y_j : j ∈ IN_m} ⊆ Y we have that

$$\sum_{j,\ell \in IN_m} (y_j, K(x_j, x_\ell) y_\ell) \geq 0. \qquad (2.3)$$
PROOF. We prove (a) by merely choosing f = K_t z in equation (2.1) to obtain that

$$\langle K_x y, K_t z \rangle = (y, (K_t z)(x)) = (y, K(x, t)z). \qquad (2.4)$$

Consequently, from this equation, we conclude that K(x, t) admits an algebraic adjoint
K(t, x) defined everywhere on Y and, so, the uniform boundedness principle, see, e.g., [1,
p. 48], implies that K(x, t) ∈ L(Y) and K(x, t) = K(t, x)*. Moreover, choosing t = x
in (a) proves that K(x, x) ∈ L₊(Y). As for the positive definiteness of K(x, x), merely
use equation (2.1) and property (a). These remarks prove (b). As for (c), we again use
property (a) to obtain that

$$\sum_{j,\ell \in IN_m} (y_j, K(x_j, x_\ell) y_\ell) = \sum_{j,\ell \in IN_m} \langle K_{x_j} y_j, K_{x_\ell} y_\ell \rangle = \Big\| \sum_{j \in IN_m} K_{x_j} y_j \Big\|^2 \geq 0.$$

This completes the proof.
For simplicity, we say that K : X × X → L(Y) is a matrix-valued kernel (or simply
a kernel if no confusion will arise) if it satisfies properties (b) and (c). So far we have
seen that if H is a RKHS of vector-valued functions, there exists a kernel. In the spirit of
the Moore-Aronszajn theorem for RKHS of scalar functions [2], it can be shown that if
K : X × X → L(Y) is a kernel then there exists a unique (up to an isometry) RKHS of
functions from X to Y which admits K as the reproducing kernel. The proof parallels the
scalar case.
Given a vector-valued function f : X → Y we associate to it a scalar-valued function
F : X × Y → IR defined by

$$F(x, y) := (y, f(x)), \qquad x \in X, \; y \in Y. \qquad (2.5)$$

We let H¹ be the linear space of all such functions. Thus, H¹ consists of functions which
are linear in their second variable. We make H¹ into a Hilbert space by choosing ‖F‖ =
‖f‖. It then follows that H¹ is a RKHS with reproducing scalar-valued kernel defined, for
all (x, y), (t, z) ∈ X × Y, by the formula

$$K^1((x, y), (t, z)) := (y, K(x, t)z). \qquad (2.6)$$

2.2 Feature map
The second approach uses the notion of feature map, see e.g. [9]. A feature map is a
function Φ : X × Y → W where W is a Hilbert space. A feature map representation of a
kernel K has the property that, for every x, t ∈ X and y, z ∈ Y, there holds the equation

$$(\Phi(x, y), \Phi(t, z)) = (y, K(x, t)z).$$

From equation (2.4) we conclude that every kernel admits a feature map representation
(a Mercer type theorem) with W = H. With additional hypotheses on H and Y this
representation can take a familiar form
$$K_{\ell q}(x, t) = \sum_{r \in IN} \phi_{\ell r}(x)\, \phi_{q r}(t), \qquad \ell, q \in IN. \qquad (2.7)$$
Much more importantly, we begin with a feature map Φ(x, λ) = ((φ_ℓ(x), λ) : ℓ ∈ IN)
where λ ∈ W, this being the space of square-summable sequences on IN. We wish to learn
a function f : X → Y where Y = W and f = (f_ℓ : ℓ ∈ IN) with f_ℓ = (w, φ_ℓ) :=
Σ_{r∈IN} w_r φ_r^ℓ for each ℓ ∈ IN, where w ∈ W. We choose ‖f‖ = ‖w‖ and conclude that
the space of all such functions is a Hilbert space of functions from X to Y with kernel (2.7).
These remarks connect feature maps to kernels and vice versa. Note a kernel may have
many maps which represent it, and a feature map representation for a kernel may not be the
appropriate way to write it for numerical computations.
3 Kernel construction

In this section we characterize a wide variety of kernels which are potentially useful for
applications.
3.1 Linear kernels

A first natural question concerning RKHS of vector-valued functions is: if X is IR^d, what
is the form of linear kernels? In the scalar case a linear kernel is a quadratic form, namely
K(x, t) = (x, Qt), where Q is a d × d positive semidefinite matrix. We claim that for
Y = IR^n any linear matrix-valued kernel K = (K_{ℓq} : ℓ, q ∈ IN_n) has the form

$$K_{\ell q}(x, t) = (B_\ell x, B_q t), \qquad x, t \in IR^d, \qquad (3.8)$$

where the B_ℓ are p × d matrices for some p ∈ IN. To see that such a K is a kernel, simply note
that K is in the Mercer form (2.7) for φ_ℓ(x) = B_ℓ x. On the other hand, since any linear
kernel has a Mercer representation with linear features, we conclude that all linear kernels
have the form (3.8). A special case is provided by choosing p = d and the B_ℓ to be diagonal
matrices.
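For concreteness, a linear matrix-valued kernel of the form (3.8) can be evaluated as below; this is our own minimal sketch, not code from the paper.

```python
import numpy as np

def linear_matrix_kernel(B, x, t):
    """Equation (3.8): K_{lq}(x, t) = (B_l x, B_q t).

    B:    list of n matrices, each of shape (p, d).
    x, t: input vectors of length d.
    Returns the n x n matrix K(x, t)."""
    Bx = np.stack([B_l @ x for B_l in B])   # row l is B_l x, shape (n, p)
    Bt = np.stack([B_l @ t for B_l in B])
    return Bx @ Bt.T                        # K[l, q] = <B_l x, B_q t>
```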
We note that the theory presented in section 2 can be naturally extended to the case where
each component of the vector-valued function has a different input domain. This situation
is important in multi-task learning, see e.g. [5]. To this end, we specify sets X_ℓ, ℓ ∈ IN_n,
functions g_ℓ : X_ℓ → IR, and note that multi-task learning can be placed in the above
framework by defining the input space

$$X := X_1 \times X_2 \times \cdots \times X_n.$$

We are interested in vector-valued functions f : X → IR^n whose coordinates are given by
f_ℓ(x) = g_ℓ(P_ℓ x), where x = (x_ℓ : x_ℓ ∈ X_ℓ, ℓ ∈ IN_n) and P_ℓ : X → X_ℓ is a projection
operator defined, for every x ∈ X, by P_ℓ(x) = x_ℓ, ℓ ∈ IN_n. For ℓ, q ∈ IN_n, we suppose
kernel functions C_{ℓq} : X_ℓ × X_q → IR are given such that the matrix-valued kernel whose
elements are defined as

$$K_{\ell q}(x, t) := C_{\ell q}(P_\ell x, P_q t), \qquad \ell, q \in IN_n$$

satisfies properties (b) and (c) of Proposition 1. An example of this construction is provided again by linear functions. Specifically, we choose X_ℓ = IR^{d_ℓ}, where d_ℓ ∈ IN and
C_{ℓq}(x_ℓ, t_q) = (Q_ℓ x_ℓ, Q_q t_q), x_ℓ ∈ X_ℓ, t_q ∈ X_q, where the Q_ℓ are p × d_ℓ matrices. In this case,
the matrix-valued kernel K = (K_{ℓq} : ℓ, q ∈ IN_n) is given by

$$K_{\ell q}(x, t) = (Q_\ell P_\ell x, Q_q P_q t) \qquad (3.9)$$

which is of the form in equation (3.8) with B_ℓ = Q_ℓ P_ℓ, ℓ ∈ IN_n.
3.2 Combinations of kernels

The results in this section are based on a lemma by Schur which states that the elementwise
product of two positive semidefinite matrices is also positive semidefinite, see [2, p. 358].
This result implies that, when Y is finite dimensional, the elementwise product of two
matrix-valued kernels is also a matrix-valued kernel. Indeed, in view of the discussion at
the end of section 2.2, we immediately conclude that the following two lemmas hold.

Lemma 1 If Y = IR^n and K₁ and K₂ are matrix-valued kernels then their elementwise
product is a matrix-valued kernel.

This result allows us, for example, to enhance the linear kernel (3.8) to a polynomial kernel.
In particular, if r is a positive integer, we define, for every ℓ, q ∈ IN_n,

$$K_{\ell q}(x, t) := (B_\ell x_\ell, B_q t_q)^r$$

and conclude that K = (K_{ℓq} : ℓ, q ∈ IN_n) is a kernel.
Lemma 2 If G : IR^d × IR^d → IR is a kernel and z_ℓ : X → IR^d is a vector-valued function
for ℓ ∈ IN_n, then the matrix-valued function K : X × X → IR^{n×n} whose elements are
defined, for every x, t ∈ X, by

$$K_{\ell q}(x, t) = G(z_\ell(x), z_q(t))$$

is a matrix-valued kernel.

This lemma confirms, as a special case, that if z_ℓ(x) = B_ℓ x with B_ℓ a p × d matrix,
ℓ ∈ IN_n, and G : IR^d × IR^d → IR is a scalar-valued kernel, then the function (3.8) is
a matrix-valued kernel. When G is chosen to be a Gaussian kernel, we conclude that
K_{ℓq}(x, t) = exp(−β‖B_ℓ x − B_q t‖²) is a matrix-valued kernel.
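A minimal sketch of the Gaussian construction from Lemma 2, with β a free width parameter (the names are ours):

```python
import numpy as np

def gaussian_matrix_kernel(B, x, t, beta=1.0):
    """K_{lq}(x, t) = exp(-beta * ||B_l x - B_q t||^2): Lemma 2 with
    G a scalar Gaussian kernel and z_l(x) = B_l x."""
    Bx = np.stack([B_l @ x for B_l in B])
    Bt = np.stack([B_l @ t for B_l in B])
    d2 = ((Bx[:, None, :] - Bt[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-beta * d2)
```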
In the scalar case it is well known that a nonnegative combination of kernels is a kernel.
The next proposition extends this result to matrix-valued kernels.

Proposition 2 If K_j, j ∈ IN_s, s ∈ IN, are scalar-valued kernels and A_j ∈ L₊(Y) then the
function

$$K = \sum_{j \in IN_s} A_j K_j \qquad (3.10)$$

is a matrix-valued kernel.
PROOF. For any x, t ∈ X and c, d ∈ Y we have that

$$(c, K(x, t)d) = \sum_{j \in IN_s} (c, A_j d)\, K_j(x, t)$$

and so the proposition follows from the Schur lemma.
Other results of this type can be found in [7]. The formula (3.10) can be used to generate
a wide variety of matrix-valued kernels which have the flexibility needed for learning. For
example, we obtain polynomial matrix-valued kernels by setting X = IR^d and K_j(x, t) =
(x, t)^j, where x, t ∈ IR^d. We remark that, generally, the kernel in equation (3.10) cannot be
reduced to a diagonal kernel. An interesting case of Proposition 2 is provided by low rank
kernels, which may be useful in situations where the components of f are linearly related,
that is, for every f ∈ H and x ∈ X, f(x) lies in a linear subspace M ⊆ Y. In this case,
it is desirable to use a kernel which has the same property, that f(x) ∈ M, x ∈ X, for all
f ∈ H. We can ensure this by an appropriate choice of the matrices A_j. For example, if
M = span({b_j : j ∈ IN_s}) we may choose A_j = b_j b_j^*.
Matrix-valued Gaussian mixtures are obtained by choosing X = IR^d, Y = IR^n, {β_j : j ∈
IN_s} ⊆ IR₊, and K_j(x, t) = exp(−β_j‖x − t‖²). Specifically,

$$K(x, t) = \sum_{j \in IN_s} A_j\, e^{-\beta_j \|x - t\|^2}$$

is a kernel on X × X for any {A_j : j ∈ IN_s} ⊆ L₊(IR^n).
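Proposition 2 and the low-rank remark translate directly into code; in the sketch below (our own naming) the rank-one choice A_j = b_j b_j^T constrains every value f(x) to lie in span{b_j}.

```python
import numpy as np

def combined_kernel(scalar_kernels, A, x, t):
    """Equation (3.10): K(x, t) = sum_j A_j K_j(x, t), with each A_j an
    n x n positive semidefinite matrix and each K_j a scalar kernel."""
    return sum(A_j * K_j(x, t) for A_j, K_j in zip(A, scalar_kernels))

def low_rank_gaussian(bs, betas, x, t):
    """Gaussian mixture kernel with rank-one A_j = b_j b_j^T, so that
    every f(x) lies in span{b_j}."""
    ks = [lambda x, t, b=b: np.exp(-b * np.dot(x - t, x - t)) for b in betas]
    As = [np.outer(b_j, b_j) for b_j in bs]
    return combined_kernel(ks, As, x, t)
```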
4 Regularization and minimal norm interpolation

Let V : Y^m × IR₊ → IR be a prescribed function and consider the problem of minimizing
the functional

$$E(f) := V\big( (f(x_j) : j \in IN_m),\, \|f\|^2 \big) \qquad (4.11)$$

over all functions f ∈ H. A special case is covered by the functional of the form

$$E(f) := \sum_{j \in IN_m} Q(y_j, f(x_j)) + \mu \|f\|^2 \qquad (4.12)$$

where μ is a positive parameter and Q : Y × Y → IR₊ is some prescribed loss function,
e.g. the square loss. Within this general setting we provide a "representer theorem" for any
function which minimizes the functional in equation (4.11). This result is well known in
the scalar case. Our proof technique uses the idea of minimal norm interpolation, a central
notion in function estimation and interpolation.

Lemma 3 If y ∈ {(f(x_j) : j ∈ IN_m) : f ∈ H} ⊆ IR^m, the minimum of the problem

$$\min\left\{ \|f\|^2 : f(x_j) = y_j, \; j \in IN_m \right\} \qquad (4.13)$$

is unique and admits the form $\hat{f} = \sum_{j \in IN_m} K_{x_j} c_j$.
We refer to [7] for a proof. This approach achieves both simplicity and generality. For
example, it can be extended to normed linear spaces, see [8]. Our next result establishes
that any local minimizer¹ indeed has the same form as in Lemma 3. This result
improves upon [9], where it is proven only for a global minimizer.

Theorem 1 If for every y ∈ Y^m the function h : IR₊ → IR₊ defined for t ∈ IR₊ by
h(t) := V(y, t) is strictly increasing, and f₀ ∈ H is a local minimum of E, then
$f_0 = \sum_{j \in IN_m} K_{x_j} c_j$ for some {c_j : j ∈ IN_m} ⊆ Y.
Proof: If g is any function in H such that g(x_j) = 0, j ∈ IN_m, and t is a real number such
that |t|‖g‖ ≤ ε, for ε > 0, then

$$V\big(y_0, \|f_0\|^2\big) \leq V\big(y_0, \|f_0 + tg\|^2\big).$$

Consequently, we have that ‖f₀‖² ≤ ‖f₀ + tg‖², from which it follows that ⟨f₀, g⟩ = 0.
Thus, f₀ satisfies

$$\|f_0\| = \min\{\|f\| : f(x_j) = f_0(x_j),\; j \in IN_m,\; f \in H\}$$

and the result follows from Lemma 3.
4.1 Linear regularization

We comment on regularization for linear multi-task learning and therefore consider minimizing the functional

$$R_0(w) := \sum_{j \in IN_m} \sum_{\ell \in IN_n} Q(y_{j\ell}, (w, B_\ell x_j)) + \mu \|w\|^2 \qquad (4.14)$$

for w ∈ IR^p. We set u_ℓ = B_ℓ^* w, u = (u_ℓ : ℓ ∈ IN_n), and observe that the above functional
is related to the functional

$$R_1(u) := \sum_{j \in IN_m} \sum_{\ell \in IN_n} Q(y_{j\ell}, (u_\ell, x_j)) + \mu J(u) \qquad (4.15)$$

where we have defined the minimum norm functional

$$J(u) := \min\{\|w\|^2 : w \in IR^p, \; B_\ell^* w = u_\ell, \; \ell \in IN_n\}. \qquad (4.16)$$

¹A function f₀ ∈ H is a local minimum for E provided that there is a positive number ε such that,
whenever f ∈ H satisfies ‖f₀ − f‖ ≤ ε, E(f₀) ≤ E(f).
Specifically, we have

$$\min\{R_0(w) : w \in IR^p\} = \min\{R_1((B_\ell^* w : \ell \in IN_n)) : w \in IR^p\}.$$

The optimal solution ŵ of problem (4.16) is given by $\hat{w} = \sum_{\ell \in IN_n} B_\ell c_\ell$, where the vectors
{c_ℓ : ℓ ∈ IN_n} ⊆ IR^d satisfy the linear equations

$$\sum_{k \in IN_n} B_\ell^* B_k c_k = u_\ell, \qquad \ell \in IN_n,$$

and

$$J(u) = \sum_{\ell, q \in IN_n} (u_\ell, \tilde{B}^{-1}_{\ell q} u_q)$$

provided the d × d block matrix B̃ = (B_ℓ^* B_q : ℓ, q ∈ IN_n) is nonsingular. We note that this
analysis can be extended to the case of different inputs across the tasks by replacing x_j in
equations (4.14) and (4.15) by x_{j,ℓ} ∈ IR^{d_ℓ} and the matrix B_ℓ by Q_ℓ P_ℓ; see section 3.1 for the
definition of these quantities.
As a special example we choose B_ℓ to be the (n + 1)d × d matrix whose d × d blocks
are all zero except for the 1st and (ℓ + 1)-th blocks, which are equal to c⁻¹ I_d and I_d
respectively, where c > 0 and I_d is the d-dimensional identity matrix. From equation
(3.8) the matrix-valued kernel K reduces to

$$K_{\ell q}(x, t) = \left( \frac{1}{c^2} + \delta_{\ell q} \right)(x, t), \qquad \ell, q \in IN_n, \; x, t \in IR^d. \qquad (4.17)$$

Moreover, in this case the minimization in (4.16) is given by

$$J(u) = \frac{c^2}{n + c^2} \sum_{\ell \in IN_n} \|u_\ell\|^2 + \frac{n}{n + c^2} \sum_{\ell \in IN_n} \Big\| u_\ell - \frac{1}{n} \sum_{q \in IN_n} u_q \Big\|^2. \qquad (4.18)$$
The model of minimizing (4.14) was proposed in [6] in the context of support vector machines (SVMs) for this special choice of matrices. The derivation presented here improves upon it. The regularizer (4.18) forces a trade-off between a desirably small size
of the per-task parameters and closeness of each of these parameters to their average. This
trade-off is controlled by the coupling parameter c. If c is small the task parameters are
related (close to their average), whereas a large value of c means the tasks are learned independently. For SVMs, Q is the hinge loss function defined by Q(a, b) := max(0, 1 − ab),
a, b ∈ IR. In this case the above regularizer trades off large margin of each per-task SVM
with closeness of each SVM to the average SVM. Numerical experiments showing the
good performance of the multi-task SVM compared to both independent per-task SVMs
(i.e., c = ∞ in equation (4.17)) and previous multi-task learning methods are also discussed
in [6].
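In practice the kernel (4.17) drops into any standard kernel machine: one stacks the examples of all tasks, tags each with its task index, and builds the Gram matrix below. This is a minimal sketch with our own naming, not the experimental code of [6]; a small c couples the tasks strongly, while c → ∞ recovers independent per-task SVMs.

```python
import numpy as np

def multitask_gram(X, tasks, c):
    """Gram matrix for kernel (4.17):
    K((x, l), (t, q)) = (1/c^2 + [l == q]) (x, t).

    X:     array (m, d) of inputs from all tasks, stacked.
    tasks: array (m,) of task indices in {0, ..., n-1}.
    """
    lin = X @ X.T                                   # (x, t) for all pairs
    same = (tasks[:, None] == tasks[None, :]).astype(float)
    return (1.0 / c**2 + same) * lin
```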
The analysis above can be used to derive other linear kernels. This can be done by either
introducing the matrices B_ℓ as in the previous example, or by modifying the functional on
the right hand side of equation (4.15). For example, we choose an n × n symmetric matrix
A, all of whose entries are in the unit interval, and consider the regularizer

$$J(u) := \frac{1}{2} \sum_{\ell, q \in IN_n} \|u_\ell - u_q\|^2 A_{\ell q} = \sum_{\ell, q \in IN_n} (u_\ell, u_q) L_{\ell q} \qquad (4.19)$$

where L = D − A with $D_{\ell q} = \delta_{\ell q} \sum_{h \in IN_n} A_{\ell h}$. The matrix A could be the weight matrix
of a graph with n vertices and L the graph Laplacian, see e.g. [4]. The equation A_{ℓq} = 0
means that tasks ℓ and q are not related, whereas A_{ℓq} = 1 means strong relation. In order
to derive the matrix-valued kernel we note that (4.19) can be written as (u, L̃u), where
L̃ is the n × n block matrix whose (ℓ, q) block is the d × d matrix L_{ℓq} I_d. Thus, we define
w = L̃^{1/2} u so that we have u_ℓ = P_ℓ L̃^{−1/2} w (here L̃^{−1} is the pseudoinverse), where P_ℓ is
a projection matrix from IR^{dn} to IR^d. Consequently, the feature map in equation (2.7) is
given by φ_ℓ = B_ℓ = L̃^{−1/2} P_ℓ^*, and we conclude that

$$K_{\ell q}(x, t) = (x, P_\ell \tilde{L}^{-1} P_q^* t).$$
Finally, as discussed in section 3.2 one can form polynomials or non-linear functions of the
above linear kernels. From Theorem 1 the minimizer of (4.12) is still a linear combination
of the kernel at the given data examples.
5 Conclusions and future directions

We have described reproducing kernel Hilbert spaces of vector-valued functions and discussed their use in multi-task learning. We have provided a wide class of matrix-valued
kernels which should prove useful in applications. In the future it would be valuable to
study learning methods, using convex optimization or Monte Carlo integration, for choosing
the matrix-valued kernel. This problem seems more challenging than its scalar counterpart
due to the possibly large dimension of the output space. Another important problem is to
study error bounds for learning in these spaces. Such an analysis can clarify the role played by
the spectra of the matrix-valued kernel. Finally, it would be interesting to link the choice
of matrix-valued kernels to the notion of relatedness between tasks discussed in [5].
Acknowledgments
This work was partially supported by EPSRC Grant GR/T18707/01 and NSF Grant No.
ITR-0312113. We are grateful to Zhongying Chen, Head of the Department of Scientific
Computation at Zhongshan University for providing both of us with the opportunity to
complete this work in a scientifically stimulating and friendly environment. We also wish
to thank Andrea Caponnetto, Sayan Mukherjee and Tomaso Poggio for useful discussions.
References
[1] N.I. Akhiezer and I.M. Glazman. Theory of linear operators in Hilbert spaces, volume I. Dover
reprint, 1993.
[2] N. Aronszajn. Theory of reproducing kernels. Trans. AMS, 68:337-404, 1950.
[3] J. Baxter. A Model for Inductive Bias Learning. Journal of Artificial Intelligence Research, 12,
p. 149-198, 2000.
[4] M. Belkin and P. Niyogi. Laplacian Eigenmaps for Dimensionality Reduction and Data Representation. Neural Computation, 15(6):1373-1396, 2003.
[5] S. Ben-David and R. Schuller. Exploiting Task Relatedness for Multiple Task Learning. Proc.
of the 16-th Annual Conference on Learning Theory (COLT'03), 2003.
[6] T. Evgeniou and M. Pontil. Regularized Multi-task Learning. Proc. of the 17-th SIGKDD Conf. on
Knowledge Discovery and Data Mining, 2004.
[7] C.A. Micchelli and M. Pontil. On Learning Vector-Valued Functions. Neural Computation,
2004 (to appear).
[8] C.A. Micchelli and M. Pontil. A function representation for learning in Banach spaces. Proc.
of the 17-th Annual Conf. on Learning Theory (COLT'04), 2004.
[9] B. Schölkopf, R. Herbrich, and A.J. Smola. A Generalized Representer Theorem. Proc. of the
14-th Annual Conf. on Computational Learning Theory (COLT'01), 2001.
[10] V. N. Vapnik. Statistical Learning Theory. Wiley, New York, 1998.
1,779 | 2,616 | The Variational Ising Classifier (VIC) algorithm for coherently contaminated data
Oliver Williams
Dept. of Engineering
University of Cambridge
Andrew Blake
Microsoft Research Ltd.
Cambridge, UK
Roberto Cipolla
Dept. of Engineering
University of Cambridge
[email protected]
Abstract
There has been substantial progress in the past decade in the development
of object classifiers for images, for example of faces, humans and vehicles. Here we address the problem of contaminations (e.g. occlusion,
shadows) in test images which have not explicitly been encountered in
training data. The Variational Ising Classifier (VIC) algorithm models
contamination as a mask (a field of binary variables) with a strong spatial coherence prior. Variational inference is used to marginalize over
contamination and obtain robust classification. In this way the VIC approach can turn a kernel classifier for clean data into one that can tolerate
contamination, without any specific training on contaminated positives.
1
Introduction
Recent progress in discriminative object detection, especially for faces, has yielded good
performance and efficiency [1, 2, 3, 4]. Such systems are capable of classifying those
positives that can be generalized from positive training data. This is restrictive in practice
in that test data may contain distortions that take it outside the strict ambit of the training
positives. One example would be lighting changes (to a face) but this can be addressed
reasonably effectively by a normalizing transformation applied to training and test images;
doing so is common practice in face classification. Other sorts of disruption are not so
easily factored out. A prime example is partial occlusion.
The aim of this paper is to extend a classifier trained on clean positives to accept also
partially occluded positives, without further training. The approach is to capture some of
the regularity inherent in a typical pattern of contamination, namely its spatial coherence.
This can be thought of as extending the generalizing capability of a classifier to tolerate the
sorts of image distortion that occur as a result of contamination.
As done previously in one dimension, for image contours [5], the Variational Ising Classifier (VIC) models contamination explicitly as switches with a strong coherence prior in the
form of an Ising model, but here over the full two-dimensional image array. In addition,
the Ising model is loaded with a bias towards non-contamination. The aim is to incorporate
these hidden contamination variables into a kernel classifier such as [1, 3]. In fact the Relevance Vector Machine (RVM) is particularly suitable [6] as it is explicitly probabilistic,
so that contamination variables can be incorporated as a hidden layer of random variables.
Figure 1: The 2D Ising model is applied over a graph with edges e ∈ E between neighbouring pixels (connected 4-wise); each pixel i is linked to its neighbours.
Classification is done by marginalization over all possible configurations of the hidden variable array, and this is made tractable by variational (mean field) inference. The inference
scheme makes use of "hallucination" to fill in parts of the object that are unobserved due
to occlusion.
Results of VIC are given for face detection. First we show that the classifier performance
is not significantly damaged by the inclusion of contamination variables. Then a contaminated test set is generated using real test images and computer-generated contaminations.
Over this test data the VIC algorithm does indeed perform significantly better than a conventional classifier (similar to [4]). The hidden variable layer is shown to operate effectively, successfully inferring areas of contamination. Finally, inference of contamination is
shown working on real images with real contaminations.
2 Bayesian modelling of contamination

Classification requires P(F|I), the posterior for the proposition F that an object is present
given the image data intensity array I. This can be computed in terms of likelihoods

$$P(F|I) = P(I|F)P(F) \,\big/\, \big( P(I|F)P(F) + P(I|\bar{F})P(\bar{F}) \big) \qquad (1)$$

so then the test P(F|I) > 1/2 becomes

$$\log P(I|F) - \log P(I|\bar{F}) > t \qquad (2)$$
where t is a prior-dependent threshold that controls the tradeoff between positive and
negative classification errors. Suppose we are given a likelihood P(I|α, F) for the presence of
a face given contamination α, an array of binary "observation" variables corresponding to
each pixel I_j of I, such that α_j = 0 indicates contamination at that pixel, whereas α_j = 1
indicates a successfully observed pixel. Then, in principle,

$$P(I|F) = \sum_{\alpha} P(I|\alpha, F)\, P(\alpha), \qquad (3)$$
(making the reasonable assumption P(α|F) = P(α), that the pattern of contamination is
object independent) and similarly for log P(I|F̄). The marginalization itself is intractable,
requiring a summation over all 2^N possible configurations of α, for images with N pixels.
Approximating that marginalization is dealt with in the next section. In the meantime, there
are two other problems to deal with: specifying the prior P(α); and specifying the likelihood under contamination P(I|α, F) given only training data for the unoccluded object.
Prior over contaminations
The prior contains two terms: the first expresses the belief that contamination will occur
in coherent regions of a subimage. This takes the form of an Ising model [7] with energy
UI (?) that penalizes adjacent pixels which differ in their labelling (see Figure 1); the second
term UC biases generally against contamination a priori and its balance with the first term
is mediated by the constant ?. The total prior energy is then
X
X
U (?) = UI (?) + ?UC (?) =
[1 ? ?(?e1 ? ?e2 )] + ?
?(?j ),
(4)
j
e??
where ?(x) = 1 if x = 0 and 0 otherwise, and e1 , e2 are the indices of the pixels at either
end of edge e ? ? (figure 1). The prior energy determines a probability via a temperature
constant 1/T0 [7]:
P (?) ? e?U (?)/T0 = e?UI (?)/T0 e??UC (?)/T0
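The prior energy (4) is inexpensive to evaluate on a pixel grid. The sketch below is a minimal rendering with our own names (`lam` for λ, `T0` for T₀), using the 4-connected neighbourhood of Figure 1; it is illustrative, not the authors' implementation.

```python
import numpy as np

def prior_energy(alpha, lam):
    """Equation (4): Ising smoothness over 4-connected edges plus a bias
    against contamination. alpha is a 2D binary array (0 = contaminated)."""
    U_I = np.sum(alpha[1:, :] != alpha[:-1, :]) \
        + np.sum(alpha[:, 1:] != alpha[:, :-1])
    U_C = np.sum(alpha == 0)
    return U_I + lam * U_C

def log_prior(alpha, lam, T0):
    """Equation (5), up to the (intractable) normalizing constant."""
    return -prior_energy(alpha, lam) / T0
```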
2.2 Relevance vector machine
An unoccluded classifier P(F|I, α = 1) can be learned from training data using a Relevance Vector Machine (RVM) [6], trained on a database of frontal face and non-face images [8] (see Section 4 for details). The probabilistic properties of the RVM make it a good
choice when (later) it comes to marginalizing over α. For now we consider how to construct
the likelihood itself. First the conventional, unoccluded case is considered, for which the
posterior P(F|I) is learned from positive and negative examples. Kernel functions [9] are
computed between a candidate image I and a subset of relevance vectors {x_k}, retained
from the training set. Gaussian kernels are used here to compute

$$y(I) = \sum_k w_k \exp\Big( -\beta \sum_j (I_j - x_{kj})^2 \Big), \qquad (6)$$
where w_k are learned weights, and x_{kj} is the j-th pixel of the k-th relevance vector. Then the
posterior is computed via the logistic sigmoid function as

$$P(F|I, \alpha = 1) = \sigma(y(I)) = \frac{1}{1 + e^{-y(I)}}, \qquad (7)$$

and finally the unoccluded data-likelihood would be

$$P(I|F, \alpha = 1) \propto \sigma(y(I)) / P(F). \qquad (8)$$
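Evaluating (6)-(8) for a candidate image takes only a few lines once the RVM has been trained; the sketch below assumes the learned weights `w`, relevance vectors `xs` and kernel width `beta` are given, and is our own illustration rather than the authors' code.

```python
import numpy as np

def rvm_score(I, xs, w, beta):
    """Equation (6): y(I) = sum_k w_k exp(-beta * ||I - x_k||^2).
    I: flattened test image (N,); xs: relevance vectors (K, N)."""
    d2 = np.sum((xs - I) ** 2, axis=1)
    return w @ np.exp(-beta * d2)

def posterior_face(I, xs, w, beta):
    """Equation (7): P(F | I, alpha = 1) via the logistic sigmoid."""
    return 1.0 / (1.0 + np.exp(-rvm_score(I, xs, w, beta)))
```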
2.3 Hallucinating appearance
The aim now is to derive the occluded likelihood from the unoccluded case, where the contamination mask is known, without any further training. To do this, (8) must be extended to give P(I|F, α) for arbitrary masks α, despite the fact that the pixels I_j from the object are not observed wherever α_j = 0. In principle one should take into account all possible (or at least probable) values for the occluded pixels. Here, for simplicity, a single fixed hallucination is substituted for the occluded pixels, and then we proceed as if those values had actually been observed. This gives
\[ P(I|F, \alpha) \propto \sigma(\tilde y(I, \alpha)) / P(F), \tag{9} \]
where
\[ \tilde y(\alpha, I) = y\big(\tilde I(I, \alpha, F)\big) \quad \text{and} \quad \tilde I(I, \alpha, F)_j = \begin{cases} I_j & \text{if } \alpha_j = 1 \\ (E[I|F])_j & \text{otherwise,} \end{cases} \tag{10} \]
in which E[I|F ] is a fixed hallucination, conditioned on the model F , and computed as a
sample mean over training instances.
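The substitution in eq. (10) is a one-liner; a sketch, reusing rvm_score from the sketch above (names illustrative):

```python
import numpy as np

def hallucinate(I, alpha, mean_face):
    """I~(I, alpha, F) of eq. (10): keep observed pixels (alpha_j = 1) and
    substitute the fixed hallucination E[I|F] (the training-set mean) elsewhere."""
    return np.where(alpha == 1, I, mean_face)

def y_tilde(I, alpha, mean_face, weights, relevance_vectors, nu):
    """y~(alpha, I) = y(I~(I, alpha, F)), which feeds the likelihood of eq. (9)."""
    return rvm_score(hallucinate(I, alpha, mean_face), weights, relevance_vectors, nu)
```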
3 Approximate marginalization of α by mean field
At this point we return to the task of marginalising over α (3) to obtain P(I|F) and P(I|F̄) for use in classification (2). Due to the connectedness of neighbouring pixels in the Ising prior (figure 1), P(I, α|F) is a Markov Random Field (MRF) [7]. The marginalized likelihood P(I|F) could be estimated by Gibbs sampling [10], but that takes tens of minutes to converge in our experiments. The following section describes a mean field approximation which converges in a few seconds. The mean field algorithm is given here for P(I|F) but must be repeated also for P(I|F̄), simply substituting F̄ for F throughout.
3.1 Variational approximation
Mean field approximation is a form of variational approximation [11] and transforms an inference problem into the optimization of a functional J:
\[ J(Q) = \log P(I|F) - \mathrm{KL}\big[ Q(\alpha) \,\|\, P(\alpha|F, I) \big], \tag{11} \]
where KL is the Kullback-Leibler divergence
\[ \mathrm{KL}\big[ Q(\alpha) \,\|\, P(\alpha|F, I) \big] = \sum_\alpha Q(\alpha) \log \frac{Q(\alpha)}{P(\alpha|F, I)}. \]
The objective functional J(Q) is a lower bound on the log-marginal probability log P(I|F) [11]; when it is maximized at Q*, it gives both the marginal likelihood J(Q*) = log P(I|F), and the posterior distribution Q*(α) = P(α|F, I) over hidden variables. Following [11], J(Q) is simplified using Bayes' rule:
\[ J(Q) = H(Q) + E_Q[\log P(I, \alpha|F)], \]
where H(·) is the entropy of a distribution [12] and E_Q[g(α)] = Σ_α Q(α) g(α) denotes the expectation of a function g with respect to Q(α). A form of Q(α) must be chosen that makes the maximization of J(Q) tractable. For mean-field approximation, Q(α) is modelled as a pixel-wise product of factors: Q(α) = Π_i Q_i(α_i). It is now possible to maximize J iteratively with respect to each marginal Q_i(α_i) in turn, giving the mean field update [11]:
\[ Q_i \leftarrow \frac{1}{Z_i} \exp E_{Q|\alpha_i}[\log P(I, \alpha|F)], \qquad Z_i = \sum_{\alpha_i} \exp E_{Q|\alpha_i}[\log P(I, \alpha|F)], \tag{12} \]
where Z_i is the partition function and E_{Q|α_i}[·] is the expectation with respect to Q given α_i:
\[ E_{Q|\alpha_i}[g(\alpha)] = \sum_{\{\alpha\}_{j \setminus i}} \Big( \prod_{j \setminus i} Q_j(\alpha_j) \Big)\, g(\alpha). \]
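Structurally, the update (12) for a binary α_i reduces to a two-way softmax over the conditional log-joint; a minimal sketch, assuming a callable that supplies the conditional expectation (the callable name is hypothetical):

```python
import numpy as np

def update_Qi(cond_log_joint, i):
    """Eq. (12) for binary alpha_i: Q_i(a) = exp(E_{Q|alpha_i=a}[log P]) / Z_i.
    cond_log_joint(i, a) must return E_{Q|alpha_i=a}[log P(I, alpha | F)]."""
    logits = np.array([cond_log_joint(i, 0), cond_log_joint(i, 1)])
    logits -= logits.max()          # subtract the max for numerical stability
    q = np.exp(logits)
    return q / q.sum()              # (Q_i(alpha_i=0), Q_i(alpha_i=1)); q.sum() plays the role of Z_i
```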
3.2 Taking expectations over P(I, α|F)
To perform the expectation required in (12), the log-joint distribution is written as:
\[ \log P(I, \alpha|F) = -\log\big( 1 + e^{-\tilde y(\alpha, I)} \big) - \tfrac{1}{T_0} U_I(\alpha) - \tfrac{\lambda}{T_0} U_C(\alpha) + \text{const}. \]
The conditional expectation E_{Q|α_i} in (12) is found efficiently from the complete expectations by replacing only terms in α_i. Likewise, when one factor of Q changes (12), the complete expectations may be updated without recomputing them ab initio. For brevity, we give the expressions for the complete expectations only. For the prior this is simply:
\[ E_Q[U(\alpha)] = \sum_{e \in \Gamma} \sum_{\alpha_e} Q_e(\alpha_e) \big[ 1 - \delta(\alpha_{e_1} - \alpha_{e_2}) \big] + \lambda \sum_j Q_j(\alpha_j = 0). \tag{13} \]
For the likelihood it is more difficult. Saul et al. [13] show how to approximate the expectation over the sigmoid function by introducing a dummy variable ?:
h
i
n h
i
h
io
EQ log(1 + e??y(?,I) ) ? ??EQ [?
y (?, I)] + log EQ e?y?(?,I) + EQ e(??1)?y(?,I) .
The Gaussian RBF in (6) means that it is not feasible to compute the expectation E_Q[e^{ξ ỹ(α,I)}] (see footnote 1), so a simpler approximation is used:
\[ E_Q[\log \sigma(\tilde y(\alpha, I))] \approx \log \sigma\big( E_Q[\tilde y(\alpha, I)] \big), \]
where
\[ E_Q[\tilde y(\alpha, I)] = \sum_k w_k \prod_j \sum_{\alpha_j} Q_j(\alpha_j) \exp\Big( -\nu \big( \tilde I(I, \alpha, F)_j - x_{kj} \big)^2 \Big). \tag{14} \]
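Because Q factorizes across pixels, eq. (14) can be evaluated pixel by pixel; a NumPy sketch, where Q1[j] denotes Q_j(α_j = 1) and the names are illustrative:

```python
import numpy as np

def expected_y_tilde(I, Q1, mean_face, weights, relevance_vectors, nu):
    """E_Q[y~(alpha, I)], eq. (14). I, Q1 and mean_face are flattened length-N
    arrays; Q1[j] = Q_j(alpha_j = 1); relevance_vectors has shape (K, N)."""
    d_obs = (I - relevance_vectors) ** 2          # (K, N): pixel kept (alpha_j = 1)
    d_hal = (mean_face - relevance_vectors) ** 2  # (K, N): pixel hallucinated (alpha_j = 0)
    # inner sum over alpha_j in {0, 1}, then product over pixels j
    per_pixel = Q1 * np.exp(-nu * d_obs) + (1.0 - Q1) * np.exp(-nu * d_hal)
    return float(np.dot(weights, np.prod(per_pixel, axis=1)))
```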
4 Results and discussion
The mean field algorithm described above is capable only of local optimization of J(Q). A symptom of this is that it exhibits spontaneous symmetry breaking [11], setting the contamination field to either all contaminated or all uncontaminated. This is alleviated through careful initialization. By performing iterations initially at a high temperature, T_h, the prior is weakened. The temperature is then progressively decreased, on a linear annealing schedule [10], until the modelled prior temperature T_0 is reached. Figure 2 shows pseudo-code for the VIC algorithm. Note also that an advantage of hallucinating appearance from the mean face is that the hallucination process requires no computation within the optimization loop. For 19 × 19 subimages, the average time taken for the VIC algorithm to converge is 4 seconds. However, this is an unoptimized Matlab implementation; in C++ it is anticipated to be at least 10 times faster.
The training set used for the RVM [8] contains subimages of registered faces and non-faces which were histogram equalized [14] to reduce the effect of different lighting, with their pixel values scaled to the range [0, 1]. The same is done to each test subimage I. The RVM was trained using 1500 face examples and 1500 non-face examples (footnote 2). Parameters were set as follows: the RBF width parameter in (6) is ν = 0.05; the contamination cost λ = 0.2; and the temperature constants are T_h = 2.5, T_0 = 1.5 and ΔT = 0.2.
As a by-product of the VIC algorithm, the posterior pattern P(α|F, I) of contamination is approximately inferred as the value of Q which maximizes J. Figure 3 shows some results of this. As might be expected, for a non-face, the algorithm hallucinates an intact face with total contamination (for example, row 4 of the figure); but of course the marginalized posterior probability P(F|I) is very small in such a case.
4.1 Classifier
To assess the classification performance of the VIC, contaminated positives were automatically generated (figure 4). These were combined with pure faces and pure non-faces (none of which were used in the training set) and tested to produce the Receiver Operating Characteristic (ROC) curves given in Figure 4 for the unaltered RVM acting on the
Footnote 1: The term exp[ξ ỹ(α, I)] = exp[ξ Σ_k w_k Π_j e^{−ν d_j(I, x_k | α_j)}] does not factorize across pixels.
Footnote 2: These sizes are limited in practice by the complexity of the training algorithm [6].
Require: Candidate image region I
Require: Parameters Th, T0, ΔT, ξ
Require: RVM weights and examples wk, xk
Require: Mean face appearance Ī
Initialize Qi(αi = 1) ← 0.5 ∀i
Compute EQ[U(α)]  (13)
Compute EQ[ỹ(α, I)]  (14)
T ← Th
while T > T0 do
  while Q not converged do
    for all image locations i do
      Compute conditional expectations EQ|αi[U(α)] and EQ|αi[ỹ(α, I)]
      Compute EQ|αi[log P(I, α|F)] = log σ(EQ|αi[ỹ(α, I)]) − EQ|αi[U(α)]
      Compute partition Zi = Σαi exp EQ|αi[log P(I, α|F)]
      Update Qi(αi) ← (1/Zi) exp EQ|αi[log P(I, α|F)]
      Update complete expectations EQ[U(α)] and EQ[ỹ(α, I)]
    end for
  end while
  T ← T − ΔT
end while
Figure 2: Pseudo-code for the VIC algorithm
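Putting the pieces together, the following Python sketch mirrors the annealed loop of Figure 2. It reuses expected_y_tilde from the sketch after eq. (14), recomputes the complete expectations instead of updating them incrementally (slower, but simpler), and assumes the annealing temperature T enters by scaling the prior term; it illustrates the control flow only, not the authors' implementation.

```python
import numpy as np

def expected_prior_energy(Q1, lam):
    """E_Q[U(alpha)], eq. (13), for a factorized Q on a 4-connected grid."""
    # probability that the two endpoint labels of an edge differ
    ph = Q1[:, 1:] * (1 - Q1[:, :-1]) + (1 - Q1[:, 1:]) * Q1[:, :-1]
    pv = Q1[1:, :] * (1 - Q1[:-1, :]) + (1 - Q1[1:, :]) * Q1[:-1, :]
    return ph.sum() + pv.sum() + lam * (1 - Q1).sum()

def vic(I, mean_face, weights, rvs, nu, lam, Th=2.5, T0=1.5, dT=0.2,
        max_sweeps=50, tol=1e-4):
    """Annealed mean-field contamination inference (sketch of Figure 2)."""
    H, W = I.shape
    Q1 = np.full((H, W), 0.5)                  # initialize Q_i(alpha_i = 1) = 0.5
    T = Th
    while T > T0:
        for _ in range(max_sweeps):            # "while Q not converged"
            Q_old = Q1.copy()
            for idx in np.ndindex(H, W):       # all image locations i
                logits = []
                for a in (0, 1):               # condition on alpha_i = a
                    Qc = Q1.copy(); Qc[idx] = float(a)
                    ey = expected_y_tilde(I.ravel(), Qc.ravel(),
                                          mean_face.ravel(), weights, rvs, nu)
                    eu = expected_prior_energy(Qc, lam)
                    # log sigma(E[y~]) - E[U]/T, cf. Figure 2
                    logits.append(-np.log1p(np.exp(-ey)) - eu / T)
                logits = np.array(logits); logits -= logits.max()
                q = np.exp(logits)
                Q1[idx] = q[1] / q.sum()       # eq. (12) with Z_i = q.sum()
            if np.abs(Q1 - Q_old).max() < tol:
                break
        T -= dT
    return Q1
```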
Figure 3: Partially occluded images with inferred areas of probable contamination (dark). (Panels, left to right: input I; hallucinated image; contamination field Q(α = 1).)
contaminated set and for the new contamination-tolerant VIC outlined in this paper. For
comparison, points are shown for a boosted cascade of classifiers [15] which is a publicly
available detector based on the system of Viola and Jones [4]. The curve shown for the
RVM against an uncontaminated test set confirms that contamination does make the classification task considerably harder. Figure 5 shows some natural face images that the boosted
cascade [15] fails to detect, either because of occlusion or due to a degree of deviation from
Figure 4: ROC curves (true positive rate versus false positive rate) for the RVM with no contamination, the RVM, the VIC, the boosted cascade, and the cascade with no contamination. Also shown are some of the contaminated positives used to generate the curves. These were made by sampling contamination patterns from the prior and using them to mix a face and a non-face artificially.
Figure 5: Images that the boosted cascade [15] failed to detect as faces: the VIC algorithm produces higher posterior face probability by labelling certain regions with unusual appearance (e.g. due to 3D rotation) as contaminated. (Panels, left to right: input I; hallucinated image; contamination field Q(α = 1).)
the frontal pose. The VIC algorithm, however, detects them successfully.
4.2 Discussion
Figure 4 shows that by modelling the contamination field explicitly, the VIC detector improves on the performance, over a contaminated test set, of both a plain RVM and a boosted cascade detector. The algorithm is relatively expensive to execute compared, say, with the contamination-free RVM. However, this could be mitigated by cascading [4], in which a simple and efficient classifier, tuned to return a high rate of false positives for all objects, contaminated and non-contaminated, would make a preliminary sweep of a test image. The contamination-tolerant VIC algorithm would then be applied to the candidate subimages that remain, thereby concentrating computational power on just a few locations.
Figure 5 illustrates the operation of the contamination mechanism on real images, all of which are detected as faces by the VIC algorithm but missed by the boosted cascade. There is no occlusion in these examples, but rotations have distorted the appearance of certain features. The VIC algorithm deals with this by labelling the distortions as contaminated areas, and hallucinating face-like texture in their place.
In conclusion, we have developed the VIC algorithm for object detection in the presence of coherently contaminated data. Contamination is modelled as coherent via an Ising prior, and is marginalized out by variational inference. Experiments show that VIC classifies contaminated images more robustly than classifiers designed for clean data. It is worth pointing out that the approach of the VIC algorithm is not limited to RVMs. Any probabilistic detector for which it is possible to estimate the expectation (14) could be modified in a similar way to deal with spatially coherent contamination. Future work will address: improved efficiency by incorporating the VIC into a cascade of simple classifiers; and alternatives to data hallucination using marginalization over missing data, if a tractable means of doing this can be found.
References
[1] E. Osuna, R. Freund, and F. Girosi. Training support vector machines: An application to face detection. Proc. Conf. Computer Vision and Pattern Recognition, pages 130-136, 1997.
[2] H.A. Rowley, S. Baluja, and T. Kanade. Neural network-based face detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(1):23-38, 1998.
[3] S. Romdhani, P. Torr, B. Schölkopf, and A. Blake. Computationally efficient face detection. In Proc. Int. Conf. on Computer Vision, volume 2, pages 524-531, 2001.
[4] P. Viola and M. Jones. Rapid object detection using a boosted cascade of simple features. In Proc. Conf. Computer Vision and Pattern Recognition, 2001.
[5] J. MacCormick and A. Blake. Spatial dependence in the observation of visual contours. In Proc. European Conf. on Computer Vision, pages 765-781, 1998.
[6] M.E. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1:211-244, 2001.
[7] R. Kindermann and J.L. Snell. Markov Random Fields and Their Applications. American Mathematical Society, 1980.
[8] CBCL face database #1. MIT Center for Biological and Computational Learning: http://www.ai.mit.edu/projects/cbcl.
[9] B. Schölkopf and A. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond (Adaptive Computation and Machine Learning). MIT Press, 2001.
[10] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. on Pattern Analysis and Machine Intelligence, 6(6):721-741, 1984.
[11] T. Jaakkola. Tutorial on variational approximation methods. In Advanced Mean Field Methods: Theory and Practice. MIT Press, 2000.
[12] T. Cover and J. Thomas. Elements of Information Theory. John Wiley & Sons, 1991.
[13] L. Saul, T. Jaakkola, and M. Jordan. Mean field theory for sigmoid belief networks. Journal of Artificial Intelligence Research, 4:61-76, 1996.
[14] A.K. Jain. Fundamentals of Digital Image Processing. System Sciences. Prentice-Hall, New Jersey, 1989.
[15] R. Lienhart and J. Maydt. An extended set of Haar-like features for rapid object detection. In Proc. IEEE ICIP, volume 1, pages 900-903, 2002.
The Convergence of Contrastive Divergences
Alan Yuille
Department of Statistics
University of California at Los Angeles
Los Angeles, CA 90095
[email protected]
Abstract
This paper analyses the Contrastive Divergence algorithm for learning
statistical parameters. We relate the algorithm to the stochastic approximation literature. This enables us to specify conditions under which the
algorithm is guaranteed to converge to the optimal solution (with probability 1). This includes necessary and sufficient conditions for the solution to be unbiased.
1 Introduction
Many learning problems can be reduced to statistical inference of parameters. But inference
algorithms for this task tend to be very slow. Recently Hinton proposed a new algorithm
called contrastive divergences (CD) [1]. Computer simulations show that this algorithm
tends to converge, and to converge rapidly, although not always to the correct solution [2].
Theoretical analysis shows that CD can fail but does not give conditions which guarantee
convergence [3,4].
This paper relates CD to the stochastic approximation literature [5,6] and hence derives
elementary conditions which ensure convergence (with probability 1). We conjecture that
far stronger results can be obtained by applying more advanced techniques such as those
described by Younes [7]. We also give necessary and sufficient conditions for the solution
of CD to be unbiased.
Section (2) describes CD and shows that it is closely related to a class of stochastic approximation algorithms for which convergence results exist. In section (3) we state and
give a proof of a simple convergence theorem for stochastic approximation algorithms.
Section (4) applies the theorem to give sufficient conditions for convergence of CD.
2 Contrastive Divergence and its Relations
The task of statistical inference is to estimate the model parameters θ* which minimize the Kullback-Leibler divergence D(P0(x)||P(x|θ)) between the empirical distribution function of the observed data P0(x) and the model P(x|θ). It is assumed that the model distribution is of the form P(x|θ) = e^{−E(x;θ)}/Z(θ).
Estimating the model parameters is difficult. For example, it is natural to try performing steepest descent on D(P0(x)||P(x|θ)). The steepest descent algorithm can be expressed as:
\[ \theta_{t+1} - \theta_t = \gamma_t \Big\{ -\sum_x P_0(x)\, \frac{\partial E(x;\theta)}{\partial \theta} + \sum_x P(x|\theta)\, \frac{\partial E(x;\theta)}{\partial \theta} \Big\}, \tag{1} \]
where the {γ_t} are constants.
Unfortunately, steepest descent is usually computationally intractable because of the need to compute the second term on the right hand side of equation (1). This is extremely difficult because of the need to evaluate the normalization term Z(θ) of P(x|θ).
Moreover, steepest descent also risks getting stuck in a local minimum. There is, however, an important exception if we can express E(x;θ) in the special form E(x;θ) = θ · φ(x), for some function φ(x). In this case D(P0(x)||P(x|θ)) is convex and so steepest descent is guaranteed to converge to the global minimum. But the difficulty of evaluating Z(θ) remains.
The CD algorithm is formally similar to steepest descent. But it avoids the need to evaluate Z(θ). Instead it approximates the second term on the right hand side of the steepest descent equation (1) by a stochastic term. This approximation is done by defining, for each θ, a Markov Chain Monte Carlo (MCMC) transition kernel K_θ(x, y) whose invariant distribution is P(x|θ) (i.e. Σ_x P(x|θ) K_θ(x, y) = P(y|θ)).
Then the CD algorithm can be expressed as:
\[ \theta_{t+1} - \theta_t = \gamma_t \Big\{ -\sum_x P_0(x)\, \frac{\partial E(x;\theta)}{\partial \theta} + \sum_x Q_\theta(x)\, \frac{\partial E(x;\theta)}{\partial \theta} \Big\}, \tag{2} \]
where Q_θ(x) is the empirical distribution function on the samples obtained by initializing the chain at the data samples P0(x) and running the Markov chain forward for m steps (the value of m is a design choice).
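As a concrete illustration of eq. (2), here is a minimal CD-m step in Python. The callables grad_E (the energy gradient ∂E(x;θ)/∂θ) and mcmc_step (one application of the kernel K_θ) are assumed to be supplied by the user; the looping is deliberately naive.

```python
import numpy as np

def cd_update(theta, data, grad_E, mcmc_step, m, gamma):
    """One CD-m parameter update, eq. (2)."""
    samples = list(data)                       # chains are initialized at the data
    for _ in range(m):                         # run the kernel K_theta for m steps
        samples = [mcmc_step(x, theta) for x in samples]
    pos = np.mean([grad_E(x, theta) for x in data], axis=0)     # average under P0
    neg = np.mean([grad_E(x, theta) for x in samples], axis=0)  # average under Q_theta
    return theta + gamma * (neg - pos)         # theta_{t+1} = theta_t + gamma*(-pos + neg)
```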
We now observe that CD is similar to a class of stochastic approximation algorithms which also use MCMC methods to stochastically approximate the second term on the right hand side of the steepest descent equation (1). These algorithms are reviewed in [7] and have been used, for example, to learn probability distributions for modelling image texture [8]. A typical algorithm of this type introduces a state vector S^t(x) which is initialized by setting S^{t=0}(x) = P0(x). Then S^t(x) and θ_t are updated sequentially as follows. S^t(x) is obtained by sampling with the transition kernel K_{θ_t}(x, y) using S^{t−1}(x) as the initial state for the chain. Then θ_{t+1} is computed by replacing the second term in equation (1) by the expectation with respect to S^t(x). From this perspective, we can obtain CD by having a state vector S^t(x) (= Q_θ(x)) which gets re-initialized to P0(x) at each time step.
This stochastic approximation algorithm, and its many variants, have been extensively studied and convergence results have been obtained (see [7]). The convergence results are
based on stochastic approximation theorems [6] whose history starts with the analysis of
the Robbins-Monro algorithm [5]. Precise conditions can be specified which guarantee
convergence in probability. In particular, Kushner [9] has proven convergence to global
optima. Within the NIPS community, Orr and Leen [10] have studied the ability of these
algorithms to escape from local minima by basin hopping.
3 Stochastic Approximation Algorithms and Convergence
The general stochastic approximation algorithm is of the form:
\[ \theta_{t+1} = \theta_t - \gamma_t\, S(\theta_t, N_t), \tag{3} \]
where N_t is a random variable sampled from a distribution P_n(N), γ_t is the damping coefficient, and S(·, ·) is an arbitrary function.
We now state a theorem which gives sufficient conditions to ensure that the stochastic approximation algorithm (3) converges to a (solution) state θ*. The theorem is chosen because of the simplicity of its proof, and we point out that a large variety of alternative results are available, see [6,7,9] and the references they cite.
The theorem involves three basic concepts. The first is a function L(θ) = (1/2)|θ − θ*|², which is a measure of the distance of the current state θ from the solution state θ* (in the next section we will require θ* = arg min_θ D(P0(x)||P(x|θ))). The second is the expected value Σ_N P_n(N) S(θ, N) of the update term in the stochastic approximation algorithm (3). The third is the expected squared magnitude ⟨|S(θ, N)|²⟩ of the update term.
The theorem states that the algorithm will converge provided three conditions are satisfied. These conditions are fairly intuitive. The first condition requires that the expected update Σ_N P_n(N) S(θ, N) has a large component towards the solution θ* (i.e. in the direction of the negative gradient of L(θ)). The second condition requires that the expected squared magnitude ⟨|S(θ, N)|²⟩ is bounded, so that the "noise" in the update is not too large. The third condition requires that the damping coefficients γ_t decrease with time t, so that the algorithm eventually settles down into a fixed state. This condition is satisfied by setting γ_t = 1/t, ∀t (which is the fastest fall-off rate consistent with the SAC theorem).
We now state the theorem and briefly sketch the proof, which is based on martingale theory (for an introduction, see [11]).
Stochastic Approximation Convergence (SAC) Theorem. Consider the stochastic approximation algorithm, equation (3), and let L(θ) = (1/2)|θ − θ*|². Then the algorithm will converge to θ* with probability 1 provided: (1) ∇_θ L(θ) · Σ_N P_n(N) S(θ, N) ≥ K_1 L(θ) for some constant K_1; (2) ⟨|S(θ, N)|²⟩_t ≤ K_2 (1 + L(θ)), where K_2 is some constant and the expectation ⟨·⟩_t is taken with respect to all the data prior to time t; and (3) Σ_{t=1}^∞ γ_t = ∞ and Σ_{t=1}^∞ γ_t² < ∞.
Proof. The proof [12] is a consequence of the supermartingale convergence theorem [11]. This theorem states that if X_t, Y_t, Z_t are positive random variables obeying Σ_{t=0}^∞ Y_t < ∞ with probability one and ⟨X_{t+1}⟩ ≤ X_t + Y_t − Z_t, ∀t, then X_t converges with probability 1 and Σ_{t=0}^∞ Z_t < ∞. To apply the theorem, set X_t = (1/2)|θ_t − θ*|², set Y_t = (1/2) K_2 γ_t², and Z_t = −X_t (K_2 γ_t² − K_1 γ_t) (Z_t is positive for sufficiently large t). Conditions 1 and 2 imply that X_t can only converge to 0. The result follows after some algebra.
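A toy illustration of these conditions, entirely of our own construction and not from the paper: for S(θ, N) = (θ − θ*) + N with unit Gaussian noise, condition (1) holds with K_1 = 2, condition (2) is immediate, and γ_t = 1/t satisfies condition (3), so the iterate settles at θ*.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, theta_star = 5.0, 1.0
for t in range(1, 200001):
    gamma = 1.0 / t                          # sum gamma_t = inf, sum gamma_t^2 < inf
    S = (theta - theta_star) + rng.normal()  # expected update points toward theta_star
    theta -= gamma * S                       # eq. (3)
print(theta)                                 # close to theta_star = 1.0
```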
4 CD and SAC
The CD algorithm can be expressed as a stochastic approximation algorithm by setting:
\[ S(\theta_t, N_t) = -\sum_x P_0(x)\, \frac{\partial E(x;\theta)}{\partial \theta} + \sum_x Q_\theta(x)\, \frac{\partial E(x;\theta)}{\partial \theta}, \tag{4} \]
where the random variable N_t corresponds to the MCMC sampling used to obtain Q_θ(x).
We can now apply the SAC to give three conditions which guarantee convergence of the CD algorithm. The third condition can be satisfied by setting γ_t = 1/t, ∀t. We can satisfy the second condition by requiring that the gradient of E(x;θ) with respect to θ is bounded, see equation (4). We conjecture that weaker conditions, such as requiring only that the gradient of E(x;θ) be bounded by a function linear in θ, can be obtained using the more sophisticated martingale analysis described in [7].
It remains to understand the first condition and to determine whether the solution is unbiased. These require studying the expected CD update:
\[ \sum_{N_t} P_n(N_t)\, S(\theta_t, N_t) = -\sum_x P_0(x)\, \frac{\partial E(x;\theta)}{\partial \theta} + \sum_{y,x} P_0(y)\, K_\theta^m(y, x)\, \frac{\partial E(x;\theta)}{\partial \theta}, \tag{5} \]
which is derived using the fact that the expected value of Q_θ(x) is Σ_y P0(y) K_θ^m(y, x) (where the superscript m indicates running the transition kernel m times).
We now re-express this expected CD update in two different ways, Results 1 and 2, which
give alternative ways of understanding it. We then proceed to Results 3 and 4 which give
conditions for convergence and unbiasedness of CD.
But we must first introduce some background material from Markov Chain theory [13]. We choose the transition kernel K_θ(x, y) to satisfy detailed balance, so that P(x|θ) K_θ(x, y) = P(y|θ) K_θ(y, x). Detailed balance is obeyed by many MCMC algorithms and, in particular, is always satisfied by Metropolis-Hastings algorithms. It implies that P(x|θ) is the invariant distribution of K_θ(x, y), so that Σ_x P(x|θ) K_θ(x, y) = P(y|θ) (all transition kernels satisfy Σ_y K_θ(x, y) = 1, ∀x).
Detailed balance implies that the matrix Q_θ(x, y) = P(x|θ)^{1/2} K_θ(x, y) P(y|θ)^{−1/2} is symmetric and hence has orthogonal eigenvectors and eigenvalues {e_θ^μ(x), λ_θ^μ}. The eigenvalues are ordered by magnitude (largest to smallest). The first eigenvalue is λ_1 = 1 (so |λ_μ| < 1, μ ≥ 2). By standard linear algebra, we can write Q_θ(x, y) in terms of its eigenvectors and eigenvalues, Q_θ(x, y) = Σ_μ λ_θ^μ e_θ^μ(x) e_θ^μ(y), which implies that we can express the transition kernel applied m times by:
\[ K_\theta^m(x, y) = \sum_\mu \{\lambda_\theta^\mu\}^m \{P(x|\theta)\}^{-1/2} e_\theta^\mu(x) \{P(y|\theta)\}^{1/2} e_\theta^\mu(y) = \sum_\mu \{\lambda_\theta^\mu\}^m u_\theta^\mu(x)\, v_\theta^\mu(y), \tag{6} \]
where the {v_θ^μ(x)} and {u_θ^μ(x)} are the left and right eigenvectors of the transition kernel K_θ(x, y). They are defined by:
\[ v_\theta^\mu(x) = e_\theta^\mu(x) \{P(x|\theta)\}^{1/2}, \qquad u_\theta^\mu(x) = e_\theta^\mu(x) \{P(x|\theta)\}^{-1/2}, \quad \forall \mu, \tag{7} \]
and it can be verified that Σ_x v_θ^μ(x) K_θ(x, y) = λ_θ^μ v_θ^μ(y), ∀μ, and Σ_y K_θ(x, y) u_θ^μ(y) = λ_θ^μ u_θ^μ(x), ∀μ. In addition, the left and right eigenvectors are mutually orthonormal, so that Σ_x v_θ^μ(x) u_θ^ν(x) = δ_{μν}, where δ_{μν} is the Kronecker delta function. This implies that we can express any function f(x) in equivalent expansions,
can express any function f (x) in equivalent expansions,
X X
X X
f (x) =
{
f (y)u?? (y)}v?? (x), f (x) =
{
f (y)v?? (y)}u?? (x).
(8)
?
y
?
y
Moreover, the first left and right eigenvectors can be calculated explicitly to give:
\[ v_\theta^1(x) = P(x|\theta), \qquad u_\theta^1(x) \equiv 1, \qquad \lambda_\theta^1 = 1, \tag{9} \]
which follows because P(x|θ) is the (unique) invariant distribution of the transition kernel K_θ(x, y) and hence is the first left eigenvector.
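The eigenstructure in eqs. (6)-(9) is easy to verify numerically on a toy chain; the sketch below builds a Metropolis kernel for an arbitrary 3-state target (so detailed balance holds), symmetrizes it, and checks the eigenvector identities. All numbers are illustrative.

```python
import numpy as np

p = np.array([0.5, 0.3, 0.2])                          # target P(x|theta)
n = len(p)
prop = np.full((n, n), 1.0 / n)                        # symmetric proposal
K = prop * np.minimum(1.0, p[None, :] / p[:, None])    # Metropolis acceptance
np.fill_diagonal(K, 0.0)
np.fill_diagonal(K, 1.0 - K.sum(axis=1))               # rejected moves stay put
assert np.allclose(p[:, None] * K, (p[:, None] * K).T) # detailed balance

Q = np.diag(p**0.5) @ K @ np.diag(p**-0.5)             # symmetric matrix Q_theta
lam, e = np.linalg.eigh(Q)                             # eigenvalues / eigenvectors e^mu
order = np.argsort(-np.abs(lam))
lam, e = lam[order], e[:, order]                       # lambda_1 = 1 first
v = e * np.sqrt(p)[:, None]                            # left eigenvectors, eq. (7)
u = e / np.sqrt(p)[:, None]                            # right eigenvectors, eq. (7)

print(np.allclose(v[:, 0] @ K, lam[0] * v[:, 0]))      # sum_x v(x) K(x,y) = lam v(y)
print(np.allclose(K @ u[:, 0], lam[0] * u[:, 0]))      # sum_y K(x,y) u(y) = lam u(x)
print(lam[0], np.allclose(p @ K, p))                   # lambda_1 = 1; p invariant, eq. (9)
```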
We now have sufficient background to state and prove our first result.
Result 1. The expected CD update corresponds to replacing the update term Σ_x P(x|θ) ∂E(x;θ)/∂θ in the steepest descent equation (1) by:
\[ \sum_x \frac{\partial E(x;\theta)}{\partial \theta}\, P(x|\theta) + \sum_{\mu=2} \{\lambda_\theta^\mu\}^m \Big\{ \sum_y P_0(y)\, u_\theta^\mu(y) \Big\} \Big\{ \sum_x v_\theta^\mu(x)\, \frac{\partial E(x;\theta)}{\partial \theta} \Big\}, \tag{10} \]
where {v_θ^μ(x), u_θ^μ(x)} are the left and right eigenvectors of K_θ(x, y) with eigenvalues {λ_θ^μ}.
Proof. The expected CD update replaces Σ_x P(x|θ) ∂E(x;θ)/∂θ by Σ_{y,x} P0(y) K_θ^m(y, x) ∂E(x;θ)/∂θ, see equation (5). We use the eigenvector expansion of the transition kernel, equation (6), to express this as Σ_{y,x,μ} P0(y) {λ_θ^μ}^m u_θ^μ(y) v_θ^μ(x) ∂E(x;θ)/∂θ. The result follows using the specific forms of the first eigenvectors, see equation (9).
Result 1 demonstrates that the expected update of CD is similar to the steepest descent rule, see equations (1,10), but with an additional term Σ_{μ=2} {λ_θ^μ}^m {Σ_y P0(y) u_θ^μ(y)} {Σ_x v_θ^μ(x) ∂E(x;θ)/∂θ}, which will be small provided the magnitudes of the eigenvalues {λ_θ^μ} are small for μ ≥ 2 (or if the transition kernel can be chosen so that Σ_y P0(y) u_θ^μ(y) is small for μ ≥ 2).
We now give a second form for the expected update rule. To do this, we define a new variable g(x;θ). This is chosen so that Σ_x P(x|θ) g(x;θ) = 0, ∀θ, and such that the extrema of the Kullback-Leibler divergence occur when Σ_x P0(x) g(x;θ) = 0.
Result 2. Let g(x;θ) = ∂E(x;θ)/∂θ − Σ_x P(x|θ) ∂E(x;θ)/∂θ. Then Σ_x P(x|θ) g(x;θ) = 0, the extrema of the Kullback-Leibler divergence occur when Σ_x P0(x) g(x;θ) = 0, and the expected update rule can be written as:
\[ \theta_{t+1} = \theta_t - \gamma_t \Big\{ \sum_x P_0(x)\, g(x;\theta) - \sum_{y,x} P_0(y)\, K_\theta^m(y, x)\, g(x;\theta) \Big\}. \tag{11} \]
Proof. The first result follows directly. The second follows because Σ_x P0(x) g(x;θ) = Σ_x P0(x) ∂E(x;θ)/∂θ − Σ_x P(x|θ) ∂E(x;θ)/∂θ. To get the third, we substitute the definition of g(x;θ) into the expected update equation (5). The result follows using the standard property of transition kernels that Σ_y K_θ^m(x, y) = 1, ∀x.
We now use Results 1 and 2 to understand the fixed points of the CD algorithm and determine whether it is biased.
Result 3. The fixed points θ* of the CD algorithm are true (unbiased) extrema of the KL divergence (i.e. Σ_x P0(x) g(x;θ*) = 0) if, and only if, we also have Σ_{y,x} P0(y) K_{θ*}^m(y, x) g(x;θ*) = 0. A sufficient condition is that P0(y) and g(x;θ*) lie in orthogonal eigenspaces of K_{θ*}(y, x). This includes the (known) special case when there exists θ* such that P(x|θ*) = P0(x) (see [2]).
Proof. The first part follows directly from equation (11) in Result 2. The second part can be obtained by the eigenspace analysis in Result 1. Suppose P0(x) = P(x|θ*). Recall that v_{θ*}^1(x) = P(x|θ*), and so Σ_y P0(y) u_{θ*}^μ(y) = 0, μ ≠ 1. Moreover, Σ_x v_{θ*}^1(x) g(x;θ*) = 0. Hence P0(x) and g(x;θ*) lie in orthogonal eigenspaces of K_{θ*}(y, x).
Result 3 shows that whether CD converges to an unbiased estimate usually depends on the specific form of the MCMC transition matrix K_θ(y, x). But there is an intuitive argument why the bias term Σ_{y,x} P0(y) K_{θ*}^m(y, x) g(x;θ*) may tend to be small at places where Σ_x P0(x) g(x;θ*) = 0. This is because for small m, Σ_y P0(y) K_{θ*}^m(y, x) ≈ P0(x), which satisfies Σ_x P0(x) g(x;θ*) = 0. Moreover, for large m, Σ_y P0(y) K_{θ*}^m(y, x) ≈ P(x|θ*), and we also have Σ_x P(x|θ*) g(x;θ*) = 0.
Alternatively, using Result 1, the bias term Σ_{y,x} P0(y) K_{θ*}^m(y, x) g(x;θ*) can be expressed as Σ_{μ=2} {λ_{θ*}^μ}^m {Σ_y P0(y) u_{θ*}^μ(y)} {Σ_x v_{θ*}^μ(x) ∂E(x;θ*)/∂θ}. This will tend to be small provided the eigenvalue moduli |λ_θ^μ| are small for μ ≥ 2 (i.e. the standard conditions for a well defined Markov Chain). In general the bias term should decrease exponentially as |λ_{θ*}^2|^m. Clearly it is also desirable to define the transition kernels K_θ(x, y) so that the right eigenvectors {u_θ^μ(y) : μ ≥ 2} are as orthogonal as possible to the observed data P0(y).
The practicality of CD depends on whether we can find an MCMC sampler such that the bias term Σ_{y,x} P0(y) K_{θ*}^m(y, x) g(x;θ*) is small for most θ. If not, then the alternative stochastic algorithms may be preferable.
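Continuing the toy chain from the sketch above (reusing p, K and lam), the bias term can be evaluated directly and compared against the predicted |λ_2|^m decay; P0 and g below are arbitrary choices satisfying Σ_x P(x|θ*) g(x;θ*) = 0.

```python
import numpy as np
from numpy.linalg import matrix_power

P0 = np.array([0.8, 0.1, 0.1])        # empirical distribution, different from p
g0 = np.array([0.3, -0.1, -0.65])
g = g0 - p @ g0                       # center so that sum_x p(x) g(x) = 0
for m in (1, 2, 4, 8, 16):
    bias = P0 @ matrix_power(K, m) @ g
    print(m, bias, abs(lam[1]) ** m)  # bias shrinks roughly like |lambda_2|^m
```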
Finally we give convergence conditions for the CD algorithm.
Result 4. CD will converge with probability 1 to state θ* provided γ_t = 1/t, ∂E/∂θ is bounded, and
\[ (\theta - \theta^*) \cdot \Big\{ \sum_x P_0(x)\, g(x;\theta) - \sum_{y,x} P_0(y)\, K_\theta^m(y, x)\, g(x;\theta) \Big\} \geq K_1 |\theta - \theta^*|^2, \tag{12} \]
for some K_1.
Proof. This follows from the SAC theorem and Result 2. The boundedness of ∂E/∂θ is required to ensure that the "update noise" is bounded in order to satisfy the second condition of the SAC theorem.
Results 3 and 4 can be combined to ensure that CD converges (with probability 1) to the correct (unbiased) solution. This requires specifying that θ* in Result 4 also satisfies the conditions Σ_x P0(x) g(x;θ*) = 0 and Σ_{y,x} P0(y) K_{θ*}^m(y, x) g(x;θ*) = 0.
5 Conclusion
The goal of this paper was to relate the Contrastive Divergence (CD) algorithm to the stochastic approximation literature. This enables us to give convergence conditions which ensure that CD will converge to the parameters θ* that minimize the Kullback-Leibler divergence D(P0(x)||P(x|θ)). The analysis also gives necessary and sufficient conditions to determine whether the solution is unbiased. For more recent results, see Carreira-Perpiñán
The results in this paper are elementary and preliminary. We conjecture that far more
powerful results can be obtained by adapting the convergence theorems in the literature
[6,7,9]. In particular, Younes [7] gives convergence results when the gradient of the energy
?E(x; ?)/?? is bounded by a term that is linear in ? (and hence unbounded). He is also
able to analyze the asymptotic behaviour of these algorithms. But adapting his mathematical techniques to Contrastive Divergence is beyond the scope of this paper.
Finally, the analysis in this paper does not seem to capture many of the intuitions behind
Contrastive Divergence [1]. But we hope that the techniques described in this paper may
also stimulate research in this direction.
Acknowledgements
I thank Geoff Hinton, Max Welling and Yingnian Wu for stimulating conversations and
feedback. Yingnian provided guidance to the stochastic approximation literature and Max
gave useful comments on an early draft. This work was partially supported by an NSF SLC
catalyst grant ?Perceptual Learning and Brain Plasticity? NSF SBE-0350356.
References
[1] G. Hinton. "Training Products of Experts by Minimizing Contrastive Divergence". Neural Computation, 14, pp. 1771-1800, 2002.
[2] Y.W. Teh, M. Welling, S. Osindero and G.E. Hinton. "Energy-Based Models for Sparse Overcomplete Representations". Journal of Machine Learning Research, to appear, 2003.
[3] D. MacKay. "Failures of the one-step learning algorithm". Available electronically at http://www.inference.phy.cam.ac.uk/mackay/abstracts/gbm.html, 2001.
[4] C.K.I. Williams and F.V. Agakov. "An Analysis of Contrastive Divergence Learning in Gaussian Boltzmann Machines". Technical Report EDI-INF-RR-0120, Institute for Adaptive and Neural Computation, University of Edinburgh, 2002.
[5] H. Robbins and S. Monro. "A Stochastic Approximation Method". Annals of Mathematical Statistics, Vol. 22, pp. 400-407, 1951.
[6] H.J. Kushner and D.S. Clark. Stochastic Approximation for Constrained and Unconstrained Systems. Springer-Verlag, New York, 1978.
[7] L. Younes. "On the Convergence of Markovian Stochastic Algorithms with Rapidly Decreasing Ergodicity Rates". Stochastics and Stochastic Reports, 65, 177-228, 1999.
[8] S.C. Zhu and X. Liu. "Learning in Gibbsian Fields: How Accurate and How Fast Can It Be?". IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 24, No. 7, July 2002.
[9] H.J. Kushner. "Asymptotic Global Behaviour for Stochastic Approximation and Diffusions with Slowly Decreasing Noise Effects: Global Minimization via Monte Carlo". SIAM J. Appl. Math., 47:169-185, 1987.
[10] G.B. Orr and T.K. Leen. "Weight Space Probability Densities in Stochastic Learning: II. Transients and Basin Hopping Times". Advances in Neural Information Processing Systems, 5. Eds. Giles, Hanson, and Cowan. Morgan Kaufmann, San Mateo, CA, 1993.
[11] G.R. Grimmett and D. Stirzaker. Probability and Random Processes. Oxford University Press, 2001.
[12] B. Van Roy. Course notes, Stanford (www.stanford.edu/class/msande339/notes/lecture6.ps).
[13] P. Bremaud. Markov Chains: Gibbs Fields, Monte Carlo Simulation, and Queues. Springer, New York, 1999.
Seeing through water
Alexei A. Efros*
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213, U.S.A.
Volkan Isler, Jianbo Shi and Mirkó Visontai
Dept. of Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
{isleri,jshi,mirko}@cis.upenn.edu
Abstract
We consider the problem of recovering an underwater image distorted by
surface waves. A large amount of video data of the distorted image is
acquired. The problem is posed in terms of finding an undistorted image patch at each spatial location. This challenging reconstruction task
can be formulated as a manifold learning problem, such that the center
of the manifold is the image of the undistorted patch. To compute the
center, we present a new technique to estimate global distances on the
manifold. Our technique achieves robustness through convex flow computations and solves the ?leakage? problem inherent in recent manifold
embedding techniques.
1 Introduction
Consider the following problem. A pool of water is observed by a stationary video camera
mounted above the pool and looking straight down. There are waves on the surface of the
water and all the camera sees is a series of distorted images of the bottom of the pool,
e.g. Figure 1. The aim is to use these images to recover the undistorted image of the pool
floor ? as if the water was perfectly still. Besides obvious applications in ocean optics and
underwater imaging [1], variants of this problem also arise in several other fields, including
astronomy (overcoming atmospheric distortions) and structure-from-motion (learning the
appearance of a deforming object). Most approaches to solve this problem try to model the
distortions explicitly. In order to do this, it is critical not only to have a good parametric
model of the distortion process, but also to be able to reliably extract features from the data
to fit the parameters. As such, this approach is only feasible in well understood, highly
controlled domains. On the opposite side of the spectrum is a very simple method used in
underwater imaging: simply, average the data temporally. Although this method performs
surprisingly well in many situations, it fails when the structure of the target image is too
fine with respect to the amplitude of the wave (Figure 2).
In this paper we propose to look at this difficult problem from a more statistical angle. We
will exploit a very simple observation: if we watch a particular spot on the image plane,
most of the time the picture projected there will be distorted. But once in a while, when
the water just happens to be locally flat at that point, we will be looking straight down
and seeing exactly the right spot on the ground. If we can recognize when this happens
* Authors in alphabetical order.
Figure 1: Fifteen consecutive frames from the video. The experimental setup involved: a transparent bucket of water, the cover of a vision textbook "Computer Vision/A Modern Approach".
Figure 2: Ground truth image and reconstruction results using mean and median
and snap the right picture at each spatial location, then recovering the desired ground truth
image would be simply a matter of stitching these correct observations together. In other
words, the question that we will be exploring in this paper is not where to look, but when!
2 Problem setup
Let us first examine the physical setup of our problem. There is a "ground truth" image G
on the bottom of the pool. Overhead, a stationary camera pointing downwards is recording
a video stream V . In the absence of any distortion V (x, y, t) = G(x, y) at any time t.
However, the water surface refracts in accordance with Snell?s Law. Let us consider what
the camera is seeing at a particular point x on the CCD array, as shown in Figure 3(c)
(assume 1D for simplicity). If the normal to the water surface directly underneath x is
pointing straight up, there is no refraction and V (x) = G(x). However, if the normal is
1
tilted by angle ?1 , light will bend by the amount ?2 = ?1 ? sin?1 ( 1.33
sin ?1 ), so the
camera point V (x) will see the light projected from G(x + dx) on the ground plane. It
is easy to see that the relationship between the tilt of the normal to the surface ?1 and the
displacement dx is approximately linear (dx ? 0.25?1 h using small angle approximation,
where h is the height of the water). This means that, in 2D, what the camera will be seeing
over time at point V (x, y, t) are points on the ground plane sampled from a disk centered at
G(x, y) and with radius related to the height of the water and the overall roughness of the
water surface. A similar relationship holds in the inverse direction as well: a point G(x, y)
will be imaged on a disk centered around V (x, y).
What about the distribution of these sample points? According to Cox-Munk Law [2], the
surface normals of rough water are distributed approximately as a Gaussian centered around
the vertical, assuming a large surface area and stationary waves. Our own experiments,
conducted by hand-tracking (Figure 3b), confirm that the distribution, though not exactly
Gaussian, is definitely unimodal and smooth.
Up to now, we only concerned ourselves with infinitesimally small points on the image
or the ground plane. However, in practice, we must have something that we can compute
with. Therefore, we will make an assumption that the surface of the water can be locally
approximated by a planar patch. This means that everything that was true for points is now
true for local image patches (up to a small affine distortion).
3
Tracking via embedding
From the description outlined above, one possible solution emerges. If the distribution of a
particular ground point on the image plane is unimodal, then one could track feature points
in the video sequence over time. Computing their mean positions over the entire video will
give an estimate of their true positions on the ground plane. Unfortunately, tracking over
long periods of time is difficult even under favorable conditions, whereas our data is so fast
(undersampled) and noisy that reliable tracking is out of the question (Figure 3(c)).
However, since we have a lot of data, we can substitute smoothness in time with smoothness
in similarity: for a given patch, we are more likely to find a patch similar to it somewhere
in time, and will have a better chance to track the transition between them. An alternative
to tracking the patches directly (which amounts to holding the ground patch G(x, y) fixed
and centering the image patches V (x + dxt , y + dyt ) on top of it in each frame), is to fix the
image patch V (x, y) in space and observe the patches from G(x + dxt , y + dyt ) appearing
in this window. We know that this set of patches comes from a disk on the ground plane
centered around patch G(x, y), which is our goal. If the disk was small enough compared to the
size of the patch, we could just cluster the patches together, e.g. by using translational
EM [3]. Unfortunately, the disk can be rather large, containing patches with no overlap
at all, thus making only the local similarity comparisons possible. However, notice that
our set of patches lies on a low-dimensional manifold; in fact we know precisely which
manifold: it's the disk on the ground plane centered at G(x, y)! So, if we could use the
local patch similarities to find an embedding of the patches in V (x, y, t) on this manifold,
the center of the embedding will hold our desired patch G(x, y).
The problem of embedding the patches based on local similarity is related to the recent
work in manifold learning [4, 5]. Basic ingredients of the embedding algorithms are: defining a distance measure between points, and finding an energy function that optimally places
them in the embedding space. The distance can be defined as an all-pairs distance matrix, or
as distance from a particular reference node. In both cases, we want the distance function
to satisfy some constraints to model the underlying physical problem.
The local similarity measure for our problem turned out to be particularly unreliable, so
none of the previous manifold learning techniques were adequate for our purposes. In the
following section we will describe our own, robust method for computing a global distance
function and finding the right embedding and eventually the center of it.
Figure 3: (a) Snell's Law (surface normal N, water height h, angles θ1 and θ2, ground points G(x) and G(x + dx)). (b)-(c) Tracking points on the bottom of the pool: (b) the tracked position forms a distribution close to a Gaussian; (c) a vertical line of the image shown at different time instances (horizontal axis). The discontinuity caused by rapid changes makes the tracking infeasible.
4 What is the right distance function?
Let I = {I1, ..., In} be the set of patches, where It = V(x, y, t) and x = [xmin, xmax], y = [ymin, ymax] are the patch pixel coordinates. Our goal is to find a center patch to represent the set I. To achieve this goal, we need a distance function d : I × I → ℝ such that d(Ii, Ij) < d(Ii, Ik) implies that Ij is more similar to Ii than Ik. Once we have such a measure, the center can be found by computing:
\[ I^* = \arg\min_{I_i \in I} \sum_{I_j \in I} d(I_i, I_j). \tag{1} \]
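For concreteness, a sketch of eq. (1) with an NCC-based local distance; this is the naive O(n²) version using only local comparisons, before the transitive and flow-based refinements discussed below. Names are illustrative.

```python
import numpy as np

def ncc_distance(a, b):
    """A common local distance: 1 - normalized cross correlation of two patches."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return 1.0 - float(np.mean(a * b))

def center_patch(patches, dist=ncc_distance):
    """Eq. (1): the patch minimizing its summed distance to all the others."""
    D = np.array([[dist(p, q) for q in patches] for p in patches])
    return patches[int(np.argmin(D.sum(axis=1)))]
```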
Unfortunately, the measurable distance functions, such as Normalized Cross Correlation (NCC), are only local. A common approach is to design a global distance function using the measurable local distances and transitivity [6, 4]. This is equivalent to designing a global distance function of the form:
\[ d(I_i, I_j) = \begin{cases} d_{\mathrm{local}}(I_i, I_j), & \text{if } d_{\mathrm{local}}(I_i, I_j) \le \varepsilon \\ d_{\mathrm{transitive}}(I_i, I_j), & \text{otherwise,} \end{cases} \tag{2} \]
where d_local is a local distance function, ε is a user-specified threshold, and d_transitive is a global, transitive distance function which utilizes d_local. The underlying assumption here is that the members of I lie on a constraint space (or manifold) S. Hence, a local similarity function such as NCC can be used to measure local distances on the manifold. An important research question in machine learning is to extend the local measurements into global ones, i.e. to design d_transitive above.
One method for designing such a transitive distance function is to build a graph G = (V, E)
whose vertices correspond to the members of I. The local distance measure is used to place
edges which connect only very similar members of I. Afterwards, the length of pairwise
shortest paths are used to estimate the true distances on the manifold S. For example, this
method forms the basis of the well-known Isomap method [4].
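A sketch of the shortest-path construction of d_transitive in eq. (2), assuming SciPy's csgraph routines (in SciPy's dense-graph convention a zero entry marks a missing edge, so genuinely zero local distances would need special handling):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def transitive_distances(D_local, eps):
    """Keep only reliable edges (d_local <= eps) and return eq. (2):
    the local distance where available, the shortest-path length otherwise."""
    W = np.where(D_local <= eps, D_local, 0.0)                # drop long-range edges
    D_global = shortest_path(W, method='D', directed=False)   # Dijkstra
    return np.where(D_local <= eps, D_local, D_global)
```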
Unfortunately, estimating the distance d_transitive(·, ·) using shortest path computations is not robust to errors in the local distances, which are very common. Consider a patch that
contains the letter A and another one that contains the letter B. Since they are different
letters, we expect that these patches would be quite distant on the manifold S. However,
among the A patches there will inevitably be a very blurry A that would look quite similar
to a very blurry B producing an erroneous local distance measurement. When the transitive
global distances are computed using shortest paths, a single erroneous edge will singlehandedly cause all the A patches to be much closer to all the B patches, short-circuiting
the graph and completely distorting all the distances.
Such errors lead to the leakage problem in estimating the global distances of patches. This
problem is illustrated in Figure 4. In this example, our underlying manifold S is a triangle.
Suppose our local distance function erroneously estimates an edge between the corners of
the triangle as shown in the figure. After the erroneous edge is inserted, the shortest paths
from the top of the triangle leak through this edge. Therefore, the shortest path distances
will fail to reflect the true distance on the manifold.
5 Solving the leakage problem
Recall that our goal is to find the center of our data set as defined in Equation 1. Note that, in order to compute the center we do not need all pairwise distances. All we need is the quantity d_I(Ii) = Σ_{Ij∈I} d(Ii, Ij) for all Ii.
The leakage problem occurs when we compute the values d_I(Ii) using the shortest path metric. In this case, even a single erroneous edge may reduce the shortest paths from many different patches to Ii, changing the value of d_I(Ii) drastically. Intuitively, in order to prevent the leakage problem we must prevent edges from getting involved in many shortest path computations to the same node (i.e. leaking edges). We can formalize this notion by casting the computation as a network flow problem.
Let G = (V, E) be our graph representation such that for each patch Ii ∈ I, there is a vertex vi ∈ V. The edge set E is built as follows: there is an edge (vi, vj) if d_local(Ii, Ij) is less than a threshold. The weight of the edge (vi, vj) is equal to d_local(Ii, Ij).
To compute the value d_I(Ii), we build a flow network whose vertex set is also V. All vertices in V \ {vi} are sources, pushing unit flow into the network. The vertex vi is a sink with infinite capacity. The arcs of the flow network are chosen using the edge set E. For each edge (vj, vk) ∈ E we add the arcs vj → vk and vk → vj. Both arcs have infinite capacity and the cost of pushing one unit of flow on either arc is equal to the weight of (vj, vk), as shown in Figure 4 left (top and bottom). It can easily be seen that the minimum cost flow in this network is equal to d_I(Ii). Let us call this network, which is used to compute d_I(Ii), NW(Ii).
The crucial factor in designing such a flow network is choosing the right cost and capacity.
Computing the minimum cost flow on $NW(I_i)$ not only gives us $d_I(I_i)$ but also allows us to compute how many times an edge is involved in the distance computation: the amount of flow through an edge is exactly the number of times that edge is used for the shortest path computations. This is illustrated in Figure 4 (box A), where $d_1$ units of cost are charged for each unit of flow through the edge $(u, w)$. Therefore, if we prevent too much flow from going through an edge, we can prevent the leakage problem.
Figure 4: The leakage problem. Left: Equivalence of shortest path leakage and uncapacitated flow leakage. Bottom-middle: After the erroneous edge is inserted, the shortest paths from the top of the triangle to vertex $v$ go through this edge. Boxes A-C: Alternatives for charging a unit of flow between nodes $u$ and $w$. The horizontal axis of the plots is the amount of flow and the vertical axis is the cost. Box A: Linear flow. The cost of a unit of flow is $d_1$. Box B: Convex flow. Multiple edges are introduced between two nodes, with fixed capacity and convexly increasing costs. The cost of a unit of flow increases from $d_1$ to $d_2$ and then to $d_3$ as the amount of flow from $u$ to $w$ increases. Box C: Linear flow with capacity. The cost is $d_1$ until a capacity of $c_1$ is reached and becomes infinite afterwards.
One might think that the leakage problem can simply be avoided by imposing capacity constraints on the arcs of the flow network (Figure 4, box C). Unfortunately, this is not very easy. Observe that in the minimum cost flow solution of the network $NW(I_i)$, the amount of flow on the arcs will increase as the arcs get closer to $I_i$. Therefore, when we are setting up the network $NW(I_i)$, we must adaptively increase the capacities of arcs "closer" to the sink $v_i$; otherwise, there will be no feasible solution. As the structure of the graph $G$ gets complicated, specifying this notion of closeness becomes a subtle issue. Further, the structure of the underlying space $S$ could be such that some arcs in $G$ must indeed carry a lot of flow. Therefore, imposing capacities on the arcs requires understanding the underlying structure of the graph $G$ as well as the space $S$, which is in fact the problem we are trying to solve!
Our proposed solution to the leakage problem uses the notion of a convex flow. We do not
impose a capacity on the arcs. Instead, we impose a convex cost function on the arcs such
that the cost of pushing unit flow on arc a increases as the total amount of flow through a
increases. See Figure 4, box B.
This can be achieved by transforming the network $NW(I_i)$ into a new network $NW'(I_i)$. The transformation is achieved by applying the following operation on each arc in $NW(I_i)$: let $a$ be an arc from $u$ to $w$ in $NW(I_i)$. In $NW'(I_i)$, we replace $a$ by $k$ arcs $a_1, \ldots, a_k$. The costs of these arcs are chosen to be uniformly increasing so that $cost(a_1) < cost(a_2) < \ldots < cost(a_k)$. The capacity of arc $a_k$ is infinite. The weights and capacities of the other arcs are chosen to reflect the steepness of the desired convexity (Figure 4, box B). The network shown in the figure yields the following function for the cost of pushing $x$ units of flow through the arc:
$$cost(x) = \begin{cases} d_1 x, & \text{if } 0 \le x \le c_1 \\ d_1 c_1 + d_2 (x - c_1), & \text{if } c_1 \le x \le c_2 \\ d_1 c_1 + d_2 (c_2 - c_1) + d_3 (x - c_1 - c_2), & \text{if } c_2 \le x \end{cases} \qquad (3)$$
The advantage of this convex flow computation is twofold. It does not require putting thresholds on the arcs a priori: it is always feasible to have as much flow on a single arc as required. At the same time, the minimum cost flow will avoid the leakage problem because it will be costly to use an erroneous edge to carry the flow from many different patches.
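A direct transcription of Eq. (3) in Python (a sketch of ours, not the authors' code; the default breakpoint values are the ones reported with the experiments in Section 6):

```python
def convex_arc_cost(x, d1=1.0, c1=1.0, d2=5.0, c2=9.0, d3=50.0):
    """Cost of pushing x units of flow through one arc, per Eq. (3).

    Piecewise-linear and convex: slope d1 on [0, c1], d2 on [c1, c2],
    and d3 beyond c2 (the last piece has infinite capacity). In a
    min-cost-flow solver this corresponds to three parallel arcs with
    capacities c1, c2 - c1, and infinity.
    """
    if x <= c1:
        return d1 * x
    if x <= c2:
        return d1 * c1 + d2 * (x - c1)
    return d1 * c1 + d2 * (c2 - c1) + d3 * (x - c1 - c2)
```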
5.1 Fixing the leakage in Isomap
As noted earlier, the Isomap method [4] uses the shortest path measurements to estimate
a distance matrix M . Afterwards, M is used to find an embedding of the manifold S via
MDS.
As expected, this method also suffers from the leakage problem as demonstrated in Figure 5. The top-left image in Figure 5 shows our ground truth. In the middle row, we
present an embedding of these graphs computed using Isomap which uses the shortest path
length as the global distance measure. As illustrated in these figures, even though Isomap does a good job in embedding the ground truth when there are no errors, the embedding (or manifold) collapses after we insert the erroneous edges. In contrast, when we use the convex-flow based technique to estimate the distances, we recover the true embedding, even in the presence of erroneous edges (Figure 5, bottom row).
6 Results
In our experiments we used 800 image frames to reconstruct the ground truth image. We fixed 30 × 30 size patches in each frame at the same location (see top of Figure 7 for two sets of examples), and for every location we found the center. The middle row of Figure 7 shows embeddings of the patches computed using the distance derived from the convex flow. The transition path and the morphing from selected patches (A, B, C) to the center patch (F) are shown at the bottom.

The embedding plot on the left is considered an easier case, with a Gaussian-like embedding (the graph is denser close to the center) and smooth transitions between the patches in a transition path. The plot on the right shows a more difficult example, where the embedding no longer has a Gaussian shape, but rather a triangular one. Also note that the transitions can have jumps connecting non-similar patches which are distant in the embedding space. The two extremes of the triangle represent the blurry patches, which are so numerous and
very similar to each other that they are no longer treated as noise or outliers. This results in "folding in" the embedding and thus moves the estimated center towards the blurry patches. To solve this problem, we introduced two additional centers, which ideally would represent the blurry patches, allowing the third center to move to the ground truth.

Figure 5: Top row: Ground truth. After sampling points from a triangular disk, a kNN graph is constructed to provide a local measure for the embedding (left). Additional erroneous edges AC and CB are added to perturb the local measure (middle, right). Middle row: Isomap embedding. Isomap recovers the manifold for the error-free cases (left). However, all-pairs shortest paths can "leak" through AC and CB, resulting in a significant change in the embedding. Bottom row: Convex flow embedding. Convex flow penalizes too many paths going through the same edge, correcting the leakage problem. The resulting embedding is more resistant to perturbations in the kNN graph.
Once we have found the centers for all patches we stitched them together to form the complete reconstructed image. In the case of three centers, we use overlapping patches and dynamic programming to determine the best stitching. Figure 6 shows the reconstruction result of our algorithm compared to simple methods of taking the mean/median of the patches and finding the closest patch to them. The bottom row shows our result for a single and for three center patches. The better performance of the latter suggests that the two new centers relieve the correct center from the blurry patches.

Figure 6: Comparison of reconstruction results of different methods using the first 800 frames. Top: patches stitched together which are closest to the mean (left) and median (right). Bottom: our results using a single (left) and three (right) centers.
For a graph with $n$ vertices and $m$ edges, the minimum cost flow computation takes $O(m \log n\,(m + n \log n))$ time; therefore finding the center $I^*$ of one set of patches can be done in $O(mn \log n\,(m + n \log n))$ time. Our flow computation is based on the min-cost max-flow implementation by Goldberg [7]. The convex function used in our experiments was as described in Equation 3 with parameters $d_1 = 1$, $c_1 = 1$, $d_2 = 5$, $c_2 = 9$, $d_3 = 50$.
Figure 7: Top row: sample patches (two different locations) from 800 frames. Middle row: convex flow embedding, showing the transition paths. Bottom row: corresponding patches (A, B, C, A1, A2, B1, B2, C1, C2) and the morphing of them to the centers F, FA, FB, FC respectively.
7 Conclusion
In this paper, we studied the problem of recovering an underwater image from a video
sequence. Because of the surface waves, the sequence consists of distorted versions of
the image to be recovered. The novelty of our work is in the formulation of the reconstruction problem as a manifold embedding problem. Our contribution also includes a new
technique, based on convex flows, to recover global distances on the manifold in a robust
fashion. This technique solves the leakage problem inherent in recent embedding methods.
References
[1] Lev S. Dolin, Alexander G. Luchinin, and Dmitry G. Turlaev. Correction of an underwater object image distorted by surface waves. In International Conference on Current Problems in Optics of Natural Waters, pages 24-34, St. Petersburg, Russia, 2003.
[2] Charles Cox and Walter H. Munk. Slopes of the sea surface deduced from photographs of sun glitter. Scripps Inst. of Oceanogr. Bull., 6(9):401-479, 1956.
[3] Brendan Frey and Nebojsa Jojic. Learning mixture models of images and inferring spatial transformations using the EM algorithm. In IEEE Conference on Computer Vision and Pattern Recognition, pages 416-422, Fort Collins, June 1999.
[4] Joshua B. Tenenbaum, Vin de Silva, and John C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, pages 2319-2323, Dec 22, 2000.
[5] Sam Roweis and Lawrence Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323-2326, Dec 22, 2000.
[6] Bernd Fischer, Volker Roth, and Joachim M. Buhmann. Clustering with the connectivity kernel. In Advances in Neural Information Processing Systems 16. MIT Press, 2004.
[7] Andrew V. Goldberg. An efficient implementation of a scaling minimum-cost flow algorithm. Journal of Algorithms, 22:1-29, 1997.
Self-Tuning Spectral Clustering

Lihi Zelnik-Manor
Department of Electrical Engineering
California Institute of Technology
Pasadena, CA 91125, USA
[email protected]

Pietro Perona
Department of Electrical Engineering
California Institute of Technology
Pasadena, CA 91125, USA
[email protected]

http://www.vision.caltech.edu/lihi/Demos/SelfTuningClustering.html
Abstract
We study a number of open issues in spectral clustering: (i) Selecting the appropriate scale of analysis, (ii) Handling multi-scale data, (iii) Clustering with irregular background clutter, and (iv) Finding automatically the number of groups. We first propose that a "local" scale should be used to compute the affinity between each pair of points. This local scaling leads to better clustering especially when the data includes multiple scales and when the clusters are placed within a cluttered background. We further suggest exploiting the structure of the eigenvectors to infer automatically the number of groups. This leads to a new algorithm in which the final randomly initialized k-means stage is eliminated.
1 Introduction
Clustering is one of the building blocks of modern data analysis. Two commonly used
methods are K-means and learning a mixture-model using EM. These methods, which are
based on estimating explicit models of the data, provide high quality results when the data
is organized according to the assumed models. However, when it is arranged in more complex and unknown shapes, these methods tend to fail. An alternative clustering approach,
which was shown to handle such structured data is spectral clustering. It does not require
estimating an explicit model of data distribution, rather a spectral analysis of the matrix
of point-to-point similarities. A first set of papers suggested the method based on a set of
heuristics (e.g., [8, 9]). A second generation provided a level of theoretical analysis, and
suggested improved algorithms (e.g., [6, 10, 5, 4, 3]).
There are still open issues: (i) Selection of the appropriate scale in which the data is to
be analyzed, (ii) Clustering data that is distributed according to different scales, (iii) Clustering with irregular background clutter, and, (iv) Estimating automatically the number of
groups. We show here that it is possible to address these issues and propose ideas to tune
the parameters automatically according to the data.
1.1 Notation and the Ng-Jordan-Weiss (NJW) Algorithm

The analysis and approaches suggested in this paper build on observations presented in [5]. For completeness of the text we first briefly review their algorithm.

Given a set of $n$ points $S = \{s_1, \ldots, s_n\}$ in $R^l$, cluster them into $C$ clusters as follows:

1. Form the affinity matrix $A \in R^{n \times n}$ defined by $A_{ij} = \exp\left(-d^2(s_i, s_j)/\sigma^2\right)$ for $i \neq j$ and $A_{ii} = 0$, where $d(s_i, s_j)$ is some distance function, often just the Euclidean
distance between the vectors $s_i$ and $s_j$. $\sigma$ is a scale parameter which is further discussed in Section 2.

Figure 1: Spectral clustering without local scaling (using the NJW algorithm). Top row: When the data incorporates multiple scales, standard spectral clustering fails. Note that the optimal $\sigma$ for each example (displayed on each figure) turned out to be different. Bottom row: Clustering results for the top-left point-set with different values of $\sigma$. This highlights the high impact $\sigma$ has on the clustering quality. In all the examples, the number of groups was set manually. The data points were normalized to occupy the $[-1, 1]^2$ space.
2. Define $D$ to be a diagonal matrix with $D_{ii} = \sum_{j=1}^{n} A_{ij}$ and construct the normalized affinity matrix $L = D^{-1/2} A D^{-1/2}$.
3. Manually select a desired number of groups $C$.
4. Find $x_1, \ldots, x_C$, the $C$ largest eigenvectors of $L$, and form the matrix $X = [x_1, \ldots, x_C] \in R^{n \times C}$.
5. Re-normalize the rows of $X$ to have unit length, yielding $Y \in R^{n \times C}$ such that $Y_{ij} = X_{ij} / (\sum_j X_{ij}^2)^{1/2}$.
6. Treat each row of Y as a point in RC and cluster via k-means.
7. Assign the original point si to cluster c if and only if the corresponding row i of
the matrix Y was assigned to cluster c.
In Section 2 we analyze the effect of $\sigma$ on the clustering and suggest a method for setting it automatically. We show that this allows handling multi-scale data and background clutter. In Section 3 we suggest a scheme for finding automatically the number of groups $C$. Our new spectral clustering algorithm is summarized in Section 4. We conclude with a discussion in Section 5.
2 Local Scaling

As was suggested by [6], the scaling parameter is some measure of when two points are considered similar. This provides an intuitive way for selecting possible values for $\sigma$. The selection of $\sigma$ is commonly done manually. Ng et al. [5] suggested selecting $\sigma$ automatically by running their clustering algorithm repeatedly for a number of values of $\sigma$ and selecting the one which provides least distorted clusters of the rows of $Y$. This increases significantly the computation time. Additionally, the range of values to be tested still has to be set manually. Moreover, when the input data includes clusters with different local statistics there may not be a single value of $\sigma$ that works well for all the data. Figure 1 illustrates the high impact $\sigma$ has on clustering. When the data contains multiple scales, even using the optimal $\sigma$ fails to provide good clustering (see examples at the right of the top row).
Figure 2: The effect of local scaling. (a) Input data points. A tight cluster resides within a background cluster. (b) The affinity between each point and its surrounding neighbors is indicated by the thickness of the line connecting them. The affinities across clusters are larger than the affinities within the background cluster. (c) The corresponding visualization of affinities after local scaling. The affinities across clusters are now significantly lower than the affinities within any single cluster.
Introducing Local Scaling: Instead of selecting a single scaling parameter $\sigma$ we propose to calculate a local scaling parameter $\sigma_i$ for each data point $s_i$. The distance from $s_i$ to $s_j$ as "seen" by $s_i$ is $d(s_i, s_j)/\sigma_i$, while the converse is $d(s_j, s_i)/\sigma_j$. Therefore the square distance $d^2$ of the earlier papers may be generalized as $d(s_i, s_j)\,d(s_j, s_i)/\sigma_i \sigma_j = d^2(s_i, s_j)/\sigma_i \sigma_j$. The affinity between a pair of points can thus be written as:

$$\hat{A}_{ij} = \exp\left(\frac{-d^2(s_i, s_j)}{\sigma_i \sigma_j}\right) \qquad (1)$$

Using a specific scaling parameter for each point allows self-tuning of the point-to-point distances according to the local statistics of the neighborhoods surrounding points $i$ and $j$.
The selection of the local scale $\sigma_i$ can be done by studying the local statistics of the neighborhood of point $s_i$. A simple choice, which is used for the experiments in this paper, is:

$$\sigma_i = d(s_i, s_K) \qquad (2)$$

where $s_K$ is the $K$'th neighbor of point $s_i$. The selection of $K$ is independent of scale and is a function of the data dimension of the embedding space. Nevertheless, in all our experiments (both on synthetic data and on images) we used a single value of $K = 7$, which gave good results even for high-dimensional data (the experiments with high-dimensional data were left out due to lack of space).
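A minimal NumPy sketch of Eqs. (1) and (2) (our own code, not the authors'; $K = 7$ as in the experiments):

```python
import numpy as np

def local_scaling_affinity(S, K=7):
    """Locally scaled affinity matrix (Eqs. 1-2).

    S: (n, l) array of points. Returns the n x n matrix A_hat with
    A_hat[i, j] = exp(-d^2(s_i, s_j) / (sigma_i * sigma_j)) and zero diagonal.
    """
    D = np.linalg.norm(S[:, None, :] - S[None, :, :], axis=2)  # pairwise distances
    # sigma_i = distance to the K-th neighbor; index 0 of each sorted row
    # is the point itself, so index K is the K-th neighbor.
    sigma = np.sort(D, axis=1)[:, K]
    A = np.exp(-(D ** 2) / (sigma[:, None] * sigma[None, :]))
    np.fill_diagonal(A, 0.0)
    return A
```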
Figure 2 provides a visualization of the effect of the suggested local scaling. Since the data
resides in multiple scales (one cluster is tight and the other is sparse) the standard approach
to estimating affinities fails to capture the data structure (see Figure 2.b). Local scaling
automatically finds the two scales and results in high affinities within clusters and low
affinities across clusters (see Figure 2.c). This is the information required for separation.
We tested the power of local scaling by clustering the data set of Figure 1, plus four additional examples. We modified the Ng-Jordan-Weiss algorithm reviewed in Section 1.1
substituting the locally scaled affinity matrix $\hat{A}$ (of Eq. (1)) for $A$. Results are shown
in Figure 3. In spite of the multiple scales and the various types of structure, the groups
now match the intuitive solution.
3 Estimating the Number of Clusters
Having defined a scheme to set the scale parameter automatically we are left with one
more free parameter: the number of clusters. This parameter is usually set manually and
not much research has been done as to how one might set it automatically. In this section we suggest an approach to discovering the number of clusters. The suggested scheme turns out to lead to a new spatial clustering algorithm.

Figure 3: Our clustering results, using the algorithm summarized in Section 4. The number of groups was found automatically.

Figure 4: Eigenvalues. The first 10 eigenvalues of L corresponding to the top row data sets of Figure 3.
3.1 The Intuitive Solution: Analyzing the Eigenvalues
One possible approach to try and discover the number of groups is to analyze the eigenvalues of the affinity matrix. The analysis given in [5] shows that the first (highest magnitude)
eigenvalue of L (see Section 1.1) will be a repeated eigenvalue of magnitude 1 with multiplicity equal to the number of groups C. This implies one could estimate C by counting
the number of eigenvalues equaling 1.
Examining the eigenvalues of our locally scaled matrix, corresponding to clean data-sets,
indeed shows that the multiplicity of eigenvalue 1 equals the number of groups. However,
if the groups are not clearly separated, once noise is introduced, the values start to deviate
from 1, thus the criterion of choice becomes tricky. An alternative approach would be to
search for a drop in the magnitude of the eigenvalues (this was pursued to some extent by
Polito and Perona in [7]). This approach, however, lacks a theoretical justification. The
eigenvalues of L are the union of the eigenvalues of the sub-matrices corresponding to
each cluster. This implies the eigenvalues depend on the structure of the individual clusters
and thus no assumptions can be placed on their values. In particular, the gap between the
$C$'th eigenvalue and the next one can be either small or large. Figure 4 shows the first 10
eigenvalues corresponding to the top row examples of Figure 3. It highlights the different
patterns of distribution of eigenvalues for different data sets.
3.2 A Better Approach: Analyzing the Eigenvectors
We thus suggest an alternative approach which relies on the structure of the eigenvectors. After sorting $L$ according to clusters, in the "ideal" case (i.e., when $L$ is strictly block diagonal with blocks $L^{(c)}$, $c = 1, \ldots, C$), its eigenvalues and eigenvectors are the union of the eigenvalues and eigenvectors of its blocks padded appropriately with zeros (see [6, 5]). As long as the eigenvalues of the blocks are different, each eigenvector will have non-zero values only in entries corresponding to a single block/cluster:

$$\hat{X} = \begin{bmatrix} x^{(1)} & 0 & \cdots & 0 \\ 0 & \ddots & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & x^{(C)} \end{bmatrix}_{n \times C}$$

where $x^{(c)}$ is an eigenvector of the sub-matrix $L^{(c)}$ corresponding to cluster $c$. However, as was shown above, the eigenvalue 1 is bound to be a
repeated eigenvalue with multiplicity equal to the number of groups $C$. Thus, the eigensolver could just as easily have picked any other set of orthogonal vectors spanning the same subspace as $\hat{X}$'s columns. That is, $\hat{X}$ could have been replaced by $X = \hat{X}R$ for any orthogonal matrix $R \in R^{C \times C}$.
This, however, implies that even if the eigensolver provided us the rotated set of vectors, we are still guaranteed that there exists a rotation $\hat{R}$ such that each row in the matrix $X\hat{R}$ has a single non-zero entry. Since the eigenvectors of $L$ are the union of the eigenvectors
of its individual blocks (padded with zeros), taking more than the first C eigenvectors will
result in more than one non-zero entry in some of the rows. Taking fewer eigenvectors we
do not have a full basis spanning the subspace, thus depending on the initial X there might
or might not exist such a rotation. Note, that these observations are independent of the
difference in magnitude between the eigenvalues.
We use these observations to predict the number of groups. For each possible group number $C$ we recover the rotation which best aligns $X$'s columns with the canonical coordinate system. Let $Z \in R^{n \times C}$ be the matrix obtained after rotating the eigenvector matrix $X$, i.e., $Z = XR$, and denote $M_i = \max_j Z_{ij}$. We wish to recover the rotation $R$ for which in every row in $Z$ there will be at most one non-zero entry. We thus define a cost function:

$$J = \sum_{i=1}^{n} \sum_{j=1}^{C} \frac{Z_{ij}^2}{M_i^2} \qquad (3)$$
Minimizing this cost function over all possible rotations will provide the best alignment
with the canonical coordinate system. This is done using the gradient descent scheme
described in Appendix A. The number of groups is taken as the one providing the minimal
cost (if several group numbers yield practically the same minimal cost, the largest of those
is selected).
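The cost of Eq. (3) itself is a one-liner; a sketch (ours, taking the row maximum in absolute value to be safe with sign-indefinite eigenvectors, whereas the paper writes $\max_j Z_{ij}$):

```python
import numpy as np

def alignment_cost(Z):
    """J = sum_ij Z_ij^2 / M_i^2, with M_i the max-magnitude entry per row (Eq. 3).

    Equals the number of rows n exactly when every row has a single
    non-zero entry, and grows as rows spread mass over several columns.
    """
    M = np.max(np.abs(Z), axis=1, keepdims=True)
    return float(np.sum((Z / M) ** 2))
```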
The search over the group number can be performed incrementally saving computation
time. We start by aligning the top two eigenvectors (as well as possible). Then, at each
step of the search (up to the maximal group number), we add a single eigenvector to the
already rotated ones. This can be viewed as taking the alignment result of the previous
group number as an initialization to the current one. The alignment of this new set of
eigenvectors is extremely fast (typically a few iterations) since the initialization is good.
The overall run time of this incremental procedure is just slightly longer than aligning all
the eigenvectors in a non-incremental way.
Using this scheme to estimate the number of groups on the data set of Figure 3 provided
a correct result for all but one (for the right-most dataset at the bottom row we predicted
2 clusters instead of 3). Corresponding plots of the alignment quality for different group
numbers are shown in Figure 5.
Yu and Shi [11] suggested rotating normalized eigenvectors to obtain an optimal segmentation. Their method iterates between non-maximum suppression (i.e., setting Mi = 1 and
Zij = 0 otherwise) and using SVD to recover the rotation which best aligns the columns of
X with those of Z. In our experiments we noticed that this iterative method can easily get
stuck in local minima and thus does not reliably find the optimal alignment and the group
number. Another related approach is that suggested by Kannan et al. [3] who assigned
points to clusters according to the maximal entry in the corresponding row of the eigenvector matrix. This works well when there are no repeated eigenvalues as then the eigenvectors
Figure 5: Selecting Group Number. The alignment cost (of Eq. (3)) for varying group numbers corresponding to the top row data sets of Figure 3. The selected group number, marked by a red circle, corresponds to the largest group number providing minimal cost (costs up to 0.01% apart were considered the same value).
corresponding to different clusters are not intermixed. Kannan et al. used a non-normalized affinity matrix and thus were not certain to obtain a repeated eigenvalue; however, this could easily happen, and then the clustering would fail.
4 A New Algorithm
Our proposed method for estimating the number of groups automatically has two desirable by-products: (i) After aligning with the canonical coordinate system, one can use
non-maximum suppression on the rows of Z, thus eliminating the final iterative k-means
process, which often requires around 100 iterations and depends highly on its initialization.
(ii) Since the final clustering can be conducted by non-maximum suppression, we obtain
clustering results for all the inspected group numbers at a tiny additional cost. When the
data is highly noisy, one can still employ k-means, or better, EM, to cluster the rows of Z.
However, since the data is now aligned with the canonical coordinate scheme we can obtain
by non-maximum suppression an excellent initialization so very few iterations suffice. We
summarize our suggested algorithm:
Algorithm: Given a set of points $S = \{s_1, \ldots, s_n\}$ in $R^l$ that we want to cluster:

1. Compute the local scale $\sigma_i$ for each point $s_i \in S$ using Eq. (2).
2. Form the locally scaled affinity matrix $\hat{A} \in R^{n \times n}$, where $\hat{A}_{ij}$ is defined according to Eq. (1) for $i \neq j$ and $\hat{A}_{ii} = 0$.
3. Define $D$ to be a diagonal matrix with $D_{ii} = \sum_{j=1}^{n} \hat{A}_{ij}$ and construct the normalized affinity matrix $L = D^{-1/2} \hat{A} D^{-1/2}$.
4. Find $x_1, \ldots, x_C$, the $C$ largest eigenvectors of $L$, and form the matrix $X = [x_1, \ldots, x_C] \in R^{n \times C}$, where $C$ is the largest possible group number.
5. Recover the rotation $R$ which best aligns $X$'s columns with the canonical coordinate system using the incremental gradient descent scheme (see also Appendix A).
6. Grade the cost of the alignment for each group number, up to $C$, according to Eq. (3).
7. Set the final group number $C_{best}$ to be the largest group number with minimal alignment cost.
8. Take the alignment result $Z$ of the top $C_{best}$ eigenvectors and assign the original point $s_i$ to cluster $c$ if and only if $\max_j (Z_{ij}^2) = Z_{ic}^2$.
9. If the data is highly noisy, use the previous step's result to initialize k-means, or EM, clustering on the rows of $Z$.
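A skeleton of steps 1-4 and 8 in NumPy (ours; it reuses the local_scaling_affinity sketch from Section 2 and stubs out the rotation recovery of steps 5-7 with the identity, so it reduces to non-maximum suppression on the raw eigenvectors until a real aligner is plugged in):

```python
import numpy as np

def self_tuning_skeleton(S, C_max, K=7):
    """Skeleton of the algorithm of Section 4 (steps 1-4 and 8).

    The gradient-descent rotation recovery (steps 5-7, Appendix A) is
    stubbed out with the identity; replace `rotate` with a real aligner.
    """
    A = local_scaling_affinity(S, K)                    # steps 1-2
    d = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = Dinv @ A @ Dinv                                 # step 3
    w, V = np.linalg.eigh(L)                            # step 4
    X = V[:, np.argsort(w)[::-1][:C_max]]               # top eigenvectors
    rotate = lambda X: X                                # steps 5-7 go here
    Z = rotate(X)
    return np.argmax(Z ** 2, axis=1)                    # step 8
```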
We tested the quality of this algorithm on real data. Figure 6 shows intensity based image segmentation results. The number of groups and the corresponding segmentation were obtained automatically. In this case the same quality of results was obtained using non-scaled affinities; however, this required manual setting of both $\sigma$ (different values for different images) and the number of groups, whereas our result required no parameter settings.
Figure 6: Automatic image segmentation. Fully automatic intensity based image segmentation results using our algorithm.
More experiments and results on real data sets can be found on our web-page
http://www.vision.caltech.edu/lihi/Demos/SelfTuningClustering.html
5 Discussion & Conclusions
Spectral clustering practitioners know that selecting good parameters to tune the clustering process is an art requiring skill and patience. Automating spectral clustering was the
main motivation for this study. The key ideas we introduced are three: (a) using a local
scale, rather than a global one, (b) estimating the scale from the data, and (c) rotating the
eigenvectors to create the maximally sparse representation. We proposed an automated
spectral clustering algorithm based on these ideas: it computes automatically the scale and
the number of groups and it can handle multi-scale data which are problematic for previous
approaches.
Some of the choices we made in our implementation were motivated by simplicity and are
perfectible. For instance, the local scale $\sigma$ might be better estimated by a method which relies on more informative local statistics. Another example: the cost function in Eq. (3) is reasonable, but by no means the only possibility (e.g., the sum of the entropy of the rows $Z_i$ might be used instead).
Acknowledgments:
Finally, we wish to thank Yair Weiss for providing us his code for spectral clustering.
This research was supported by the MURI award number SA3318 and by the Center of
Neuromorphic Systems Engineering award number EEC-9402726.
References
[1] G. H. Golub and C. F. Van Loan. Matrix Computation. John Hopkins University Press, 1991, second edition.
[2] V. K. Goyal and M. Vetterli. Block transform by stochastic gradient descent. IEEE Digital Signal Processing Workshop, Bryce Canyon, UT, Aug. 1998.
[3] R. Kannan, S. Vempala and V. Vetta. On Spectral Clustering: Good, Bad and Spectral. In Proceedings of the 41st Annual Symposium on Foundations of Computer Science, 2000.
[4] M. Meila and J. Shi. Learning Segmentation by Random Walks. In Advances in Neural Information Processing Systems 13, 2001.
[5] A. Ng, M. Jordan and Y. Weiss. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems 14, 2001.
[6] P. Perona and W. T. Freeman. A Factorization Approach to Grouping. Proceedings of the 5th European Conference on Computer Vision, Volume I, pp. 655-670, 1998.
[7] M. Polito and P. Perona. Grouping and dimensionality reduction by locally linear embedding. Advances in Neural Information Processing Systems 14, 2002.
[8] G. L. Scott and H. C. Longuet-Higgins. Feature grouping by "relocalisation" of eigenvectors of the proximity matrix. In Proc. British Machine Vision Conference, Oxford, UK, pages 103-108, 1990.
[9] J. Shi and J. Malik. Normalized Cuts and Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888-905, August 2000.
[10] Y. Weiss. Segmentation Using Eigenvectors: A Unifying View. International Conference on Computer Vision, pp. 975-982, September 1999.
[11] S. X. Yu and J. Shi. Multiclass Spectral Clustering. International Conference on Computer Vision, Nice, France, pp. 11-17, October 2003.
A Recovering the Aligning Rotation
To find the best alignment for a set of eigenvectors we adopt a gradient descent scheme similar to that suggested in [2]. There, Givens rotations were used to recover a rotation which diagonalizes a symmetric matrix, by minimizing a cost function which measures the diagonality of the matrix. Similarly, here, we define a cost function which measures the alignment quality of a set of vectors and prove that the gradient descent, using Givens rotations, converges.
The cost function we wish to minimize is that of Eq. (3). Let $m_i$ denote the index $j$ for which $Z_{i m_i} = M_i$. Note that the indices $m_i$ of the maximal entries of the rows of $X$ might be different than those of the optimal $Z$. A simple non-maximum suppression on the rows of $X$ can provide a wrong result. Using the gradient descent scheme allows the cost corresponding to part of the rows to increase as long as the overall cost is reduced, thus enabling changing the indices $m_i$.
Similar to [2], we wish to represent the rotation matrix $R$ in terms of the smallest possible number of parameters. Let $G_{i,j,\theta}$ denote a Givens rotation [1] of $\theta$ radians (counterclockwise) in the $(i, j)$ coordinate plane. It is sufficient to consider Givens rotations with $i < j$, thus we can use a convenient index re-mapping $G_{k,\theta} = G_{i,j,\theta}$, where $(i, j)$ is the $k$th entry of a lexicographical list of $(i, j) \in \{1, 2, \ldots, C\}^2$ pairs with $i < j$. Hence, finding the aligning rotation amounts to minimizing the cost function $J$ over $\Theta \in [-\pi/2, \pi/2)^K$. The update rule for $\Theta$ is: $\Theta^{k+1} = \Theta^k - \alpha\, \nabla J|_{\Theta = \Theta^k}$, where $\alpha \in R^+$ is the step size.
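For reference, a Givens rotation in the $(i, j)$ plane is just an identity with a 2 x 2 rotation embedded; a sketch (ours, using one common sign convention for "counterclockwise"):

```python
import numpy as np

def givens(C, i, j, theta):
    """G_{i,j,theta}: C x C identity with a rotation by theta radians
    embedded in the (i, j) coordinate plane, i < j."""
    G = np.eye(C)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = c; G[j, j] = c
    G[i, j] = -s; G[j, i] = s
    return G

def rotation_from_angles(C, thetas, pairs):
    """R = G_1 ... G_K over the lexicographic list of (i, j) pairs, i < j."""
    R = np.eye(C)
    for theta, (i, j) in zip(thetas, pairs):
        R = R @ givens(C, i, j, theta)
    return R
```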
We next compute the gradient of $J$ and bounds on $\alpha$ for stability. For convenience we will further adopt the notation convention of [2]. Let $U_{(a,b)} = G_{a,\theta_a} G_{a+1,\theta_{a+1}} \cdots G_{b,\theta_b}$, where $U_{(a,b)} = I$ if $b < a$, $U_k = U_{(k,k)}$, and $V_k = \frac{\partial}{\partial \theta_k} U_k$. Define $A^{(k)}$, $1 \le k \le K$, element-wise by $A^{(k)}_{ij} = \frac{\partial Z_{ij}}{\partial \theta_k}$. Since $Z = XR$ we obtain $A^{(k)} = X U_{(1,k-1)} V_k U_{(k+1,K)}$.
We can now compute $\nabla J$ element-wise:

$$\frac{\partial J}{\partial \theta_k} = \sum_{i=1}^{n} \sum_{j=1}^{C} \frac{\partial}{\partial \theta_k} \frac{Z_{ij}^2}{M_i^2} = 2 \sum_{i=1}^{n} \sum_{j=1}^{C} \left[ \frac{Z_{ij}}{M_i^2} A^{(k)}_{ij} - \frac{Z_{ij}^2}{M_i^3} \frac{\partial M_i}{\partial \theta_k} \right]$$
Due to lack of space we cannot describe the complete convergence proof in full detail. We thus refer the reader to [2], where it is shown that convergence is obtained when the $1 - \alpha F_{kl}$ lie in the unit circle, where $F_{kl} = \frac{\partial^2 J}{\partial \theta_l \partial \theta_k}$. Note that at $\Theta = 0$ we have $Z_{ij} = 0$ for $j \neq m_i$, $Z_{i m_i} = M_i$, and

$$\left.\frac{\partial M_i}{\partial \theta_k}\right|_{\Theta=0} = \frac{\partial Z_{i m_i}}{\partial \theta_k} = A^{(k)}_{i m_i}$$

(i.e., near $\Theta = 0$ the maximal entry for each row does not change its index). Deriving thus gives

$$\left.\frac{\partial^2 J}{\partial \theta_l \partial \theta_k}\right|_{\Theta=0} = 2 \sum_{i=1}^{n} \sum_{j \neq m_i} \frac{1}{M_i^2} A^{(k)}_{ij} A^{(l)}_{ij}.$$

Further substituting in the values for $A^{(k)}|_{\Theta=0}$ yields:

$$F_{kl} = \begin{cases} 2\,\#\{i \text{ s.t. } m_i = i_k \text{ or } m_i = j_k\} & \text{if } k = l \\ 0 & \text{otherwise} \end{cases}$$

where $(i_k, j_k)$ is the pair $(i, j)$ corresponding to the index $k$ in the index re-mapping discussed above. Hence, by setting $\alpha$ small enough we get that the $1 - \alpha F_{kl}$ lie in the unit circle and convergence is guaranteed.
Adjoint Operator Algorithms for Faster
Learning in Dynamical Neural Networks
Nikzad Toomarian
Jacob Barhen
Sandeep Gulati
Center for Space Microelectronics Technology
Jet Propulsion Laboratory
California Institute of Technology
Pasadena, CA 91109
ABSTRACT
A methodology for faster supervised learning in dynamical nonlinear neural networks is presented. It exploits the concept of adjoint operators to enable computation of changes in the network's response due to perturbations in all system parameters, using the solution of a single set of appropriately constructed linear equations. The lower bound on speedup per learning iteration over conventional methods for calculating the neuromorphic energy gradient is $O(N^2)$, where $N$ is the number of neurons in the network.
1 INTRODUCTION

The biggest promise of artificial neural networks as computational tools lies in the hope that they will enable fast processing and synthesis of complex information patterns. In particular, considerable efforts have recently been devoted to the formulation of efficient methodologies for learning (e.g., Rumelhart et al., 1986; Pineda, 1988; Pearlmutter, 1989; Williams and Zipser, 1989; Barhen, Gulati and Zak, 1989). The development of learning algorithms is generally based upon the minimization of a neuromorphic energy function. The fundamental requirement of such an approach is the computation of the gradient of this objective function with respect to the various parameters of the neural architecture, e.g., synaptic weights, neural
gains, etc. The paramount contribution to the often excessive cost of learning using dynamical neural networks arises from the necessity to solve, at each learning
iteration, one set of equations for each parameter of the neural system, since those
parameters affect both directly and indirectly the network's energy.
In this paper we show that the concept of adjoint operators, when applied to dynamical neural networks, not only yields a considerable algorithmic speedup, but also
puts on a firm mathematical basis prior results for "recurrent" networks, the derivations of which sometimes involved much heuristic reasoning. We have already used
adjoint operators in some of our earlier work in the fields of energy-economy modeling (Alsmiller and Barhen, 1984) and nuclear reactor thermal hydraulics (Barhen
et al., 1982; Toomarian et al., 1987) at the Oak Ridge National Laboratory, where
the concept flourished during the past decade (Oblow, 1977; Cacuci et al., 1980).
In the sequel we first motivate and construct, in the most elementary fashion, a
computational framework based on adjoint operators. We then apply our results
to the Cohen-Grossberg-Hopfield (CGH) additive model, enhanced with terminal
attractor (Barhen, Gulati and Zak, 1989) capabilities. We conclude by presenting
the results of a few typical simulations.
2 ADJOINT OPERATORS
Consider, for the sake of simplicity, that a problem of interest is represented by the following system of $N$ coupled nonlinear equations

$$\varphi(u, p) = 0 \qquad (2.1)$$

where $\varphi$ denotes a nonlinear operator¹. Let $u$ and $p$ represent the $N$-vector of dependent state variables and the $M$-vector of system parameters, respectively. We will assume that generally $M \gg N$ and that elements of $p$ are, in principle, independent. Furthermore, we will also assume that, for a specific choice of parameters, a unique solution of Eq. (2.1) exists. Hence, $u$ is an implicit function of $p$. A system "response", $R$, represents any result of the calculations that is of interest. Specifically

$$R = R(u, p) \qquad (2.2)$$

i.e., $R$ is a known nonlinear function of $p$ and $u$ and may be calculated from Eq. (2.2) when the solution $u$ in Eq. (2.1) has been obtained for a given $p$. The problem of interest is to compute the "sensitivities" of $R$, i.e., the derivatives of $R$ with respect to the parameters $p_\mu$, $\mu = 1, \ldots, M$. By definition

$$\frac{dR}{dp_\mu} = \frac{\partial R}{\partial p_\mu} + \frac{\partial R}{\partial u} \cdot \frac{\partial u}{\partial p_\mu} \qquad (2.3)$$
¹ If differential operators appear in Eq. (2.1), then a corresponding set of boundary and/or initial conditions to specify the domain of $\varphi$ must also be provided. In general an inhomogeneous "source" term can also be present. The learning model discussed in this paper focuses on the adiabatic approximation only. Nonadiabatic learning algorithms, wherein the response is defined as a functional, will be discussed in a forthcoming article.
Since the response $R$ is known analytically, the computation of $\partial R/\partial p_\mu$ and $\partial R/\partial u$ is straightforward. The quantity that needs to be determined is the vector $\partial u/\partial p_\mu$. Differentiating the state equations (2.1), we obtain a set of equations to be referred to as "forward" sensitivity equations

$$\frac{\partial \varphi}{\partial u} \cdot \frac{\partial u}{\partial p_\mu} = -\frac{\partial \varphi}{\partial p_\mu} \qquad (2.4)$$

To simplify the notations, we omit the "transposed" sign and denote the $N \times N$ forward sensitivity matrix $\partial \varphi/\partial u$ by $A$, the $N$-vector $\partial u/\partial p_\mu$ by $\tilde{u}_\mu$ and the "source" $N$-vector $-\partial \varphi/\partial p_\mu$ by $s_\mu$. Thus

$$A\, \tilde{u}_\mu = s_\mu \qquad (2.5)$$

Since the source term in Eq. (2.5) explicitly depends on $\mu$, computing $dR/dp_\mu$ requires solving the above system of $N$ algebraic equations for each parameter $p_\mu$. This difficulty is circumvented by introducing adjoint operators. Let $A^*$ denote the formal adjoint² of the operator $A$. The adjoint sensitivity equations can then be expressed as

$$A^*\, \tilde{u}^*_\mu = s^*_\mu \qquad (2.6)$$
By definition, for algebraic operators,

$$\langle \tilde{u}^*_\mu,\, A\, \tilde{u}_\mu \rangle = \langle A^*\, \tilde{u}^*_\mu,\, \tilde{u}_\mu \rangle \qquad (2.7)$$

Since Eq. (2.3) can be rewritten as

$$\frac{dR}{dp_\mu} = \frac{\partial R}{\partial p_\mu} + \frac{\partial R}{\partial u} \cdot \tilde{u}_\mu \qquad (2.8)$$

if we identify

$$s^*_\mu = \frac{\partial R}{\partial u} \qquad (2.9)$$

we observe that the source term for the adjoint equations is independent of the specific parameter $p_\mu$. Hence, the solution of a single set of adjoint equations will provide all the information required to compute the gradient of $R$ with respect to all parameters. To underscore that fact we shall denote $\tilde{u}^*_\mu$ as $\tilde{v}$. Thus

$$\frac{dR}{dp_\mu} = \frac{\partial R}{\partial p_\mu} + \tilde{v} \cdot s_\mu \qquad (2.10)$$
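For a linear algebraic system the whole construction fits in a few lines; a NumPy sketch (ours) of Eqs. (2.5)-(2.10), in which a single adjoint solve replaces M forward solves (the $\partial R/\partial p_\mu$ term is omitted for this demo):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 5, 12
A = rng.normal(size=(N, N)) + 4 * np.eye(N)   # forward sensitivity matrix (nonsingular)
S = rng.normal(size=(N, M))                   # column mu holds the source s_mu
dRdu = rng.normal(size=N)                     # partial R / partial u

# Forward route: one N x N solve per parameter (M solves in total).
dR_forward = np.array([dRdu @ np.linalg.solve(A, S[:, mu]) for mu in range(M)])

# Adjoint route: one solve of A^T v = dR/du, then M dot products (Eq. 2.10).
v = np.linalg.solve(A.T, dRdu)
dR_adjoint = S.T @ v

assert np.allclose(dR_forward, dR_adjoint)
```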
We will now apply this computational framework to a CGH network enhanced with terminal attractor dynamics. The model developed in the sequel differs from our

² Adjoint operators can only be considered for densely defined linear operators on Banach spaces (see e.g., Cacuci, 1980). For the neural application under consideration we will limit ourselves to real Hilbert spaces. Such spaces are self-dual. Furthermore, the domain of an adjoint operator is determined by selecting appropriate adjoint boundary conditions¹. The associated bilinear form evaluated on the domain boundary must thus also be generally included.
earlier formulations (Barhen, Gulati and Zak, 1989; Barhen, Zak and Gulati, 1989)
in avoiding the use of constraints in the neuromorphic energy function, thereby
eliminating the need for differential equations to evolve the concomitant Lagrange
multipliers. Also, the usual activation dynamics is transformed into a set of equivalent equations which exhibit more "congenial" numerical properties, such as "contraction".
3 APPLICATIONS TO NEURAL LEARNING
We formalize a neural network as an adaptive dynamical system whose temporal evolution is governed by the following set of coupled nonlinear differential equations

$$\dot{z}_n + \kappa_n z_n = \sum_m w_{nm} T_{nm}\, g_\gamma(z_m) + {}^k I_n \qquad (3.1)$$

where $z_n$ represents the mean soma potential of the $n$th neuron and $T_{nm}$ denotes the synaptic coupling from the $m$-th to the $n$-th neuron. The weighting factor $w_{nm}$ enforces topological considerations. The constant $\kappa_n$ characterizes the decay of neuron activity. The sigmoidal function $g_\gamma(\cdot)$ modulates the neural response, with gain given by $\gamma_n$; typically, $g_\gamma(z) = \tanh(\gamma z)$. The "source" term $^k I_n$, which includes dimensional considerations, encodes contributions in terms of attractor coordinates of the $k$-th training sample via the following expression

$$^k I_n = \begin{cases} \cdots & \text{if } n \in S_X \\ \cdots & \text{if } n \in S_H \cup S_Y \end{cases} \qquad (3.2)$$
The topographic input, output and hidden network partitions $S_X$, $S_Y$ and $S_H$ are architectural requirements related to the encoding of mapping-type problems, for which a number of possibilities exist (Barhen, Gulati and Zak, 1989; Barhen, Zak and Gulati, 1989). In previous articles (ibid; Zak, 1989) we have demonstrated that in general, for $\beta = (2i + 1)^{-1}$ and $i$ a strictly positive integer, such attractors have infinite local stability and provide opportunity for learning in real-time. Typically, $\beta$ can be set to 1/3. Assuming an adiabatic framework, the fixed point equations at equilibrium, i.e., as $\dot{z}_n \to 0$, yield

$$\bar{z}_n = g_\gamma^{-1}({}^k\bar{u}_n) = \frac{1}{\kappa_n}\left[\sum_m w_{nm} T_{nm}\, {}^k\bar{u}_m + {}^k\bar{I}_n\right] \qquad (3.3)$$
where $^k\bar{u}_n = g_\gamma(\bar{z}_n)$ represents the neural response. The overbar denotes quantities evaluated at steady state. Operational network dynamics is then given by

$$\dot{u}_n + u_n = g_\gamma\left[\frac{1}{\kappa_n} \sum_m w_{nm} T_{nm}\, u_m + \frac{1}{\kappa_n}\, {}^k I_n\right] \qquad (3.4)$$
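A forward-Euler sketch (ours) of relaxing Eq. (3.4) to its fixed point:

```python
import numpy as np

def relax(T, w, kappa, I, gamma=1.0, dt=0.05, steps=2000):
    """Integrate u'_n + u_n = g[(1/kappa_n)(sum_m w_nm T_nm u_m + I_n)]
    by forward Euler until approximate equilibrium; g = tanh(gamma z)."""
    n = len(kappa)
    u = np.zeros(n)
    for _ in range(steps):
        z = ((w * T) @ u + I) / kappa
        u += dt * (np.tanh(gamma * z) - u)
    return u
```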
To proceed formally with the development of a supervised learning algorithm, we consider an approach based upon the minimization of a constrained "neuromorphic" energy function $E$ given by the following expression

$$E(u, p) = \frac{1}{2} \sum_k \sum_n \left[{}^k\bar{u}_n - {}^k a_n\right]^2, \qquad \forall n \in S_X \cup S_Y \qquad (3.5)$$
We relate adjoint theory to neural learning by identifying the neuromorphic energy function, $E$ in Eq. (3.5), with the system response $R$. Also, let $p$ denote the following system parameters: the synaptic strengths $T_{nm}$, the decay constants $\kappa_n$ and the gains $\gamma_n$.
The proposed objective function enforces convergence of every neuron in Sx and
Sy to attractor coordinates corresponding to the components in the input-output
training patterns, thereby prompting the network to learn the embedded invariances. Lyapunov stability requires an energy-like function to be monotonically decreasing in time. Since in our model the internal dynamical parameters of interest
are the synaptic strengths Tnm of the interconnection topology, the characteristic
decay constants Kn and the gain parameters In this implies that
$$\dot{E} = \sum_n \sum_m \frac{dE}{dT_{nm}}\, \dot{T}_{nm} + \sum_n \frac{dE}{d\kappa_n}\, \dot{\kappa}_n + \sum_n \frac{dE}{d\gamma_n}\, \dot{\gamma}_n < 0 \qquad (3.6)$$
For each adaptive system parameter $p_\mu$, Lyapunov stability will be satisfied by the following choice of equations of motion

$$\dot{p}_\mu = -\tau_p \frac{dE}{dp_\mu} \qquad (3.7)$$
Examples include

$$\dot{T}_{nm} = -\tau_T \frac{dE}{dT_{nm}}, \qquad \dot{\gamma}_n = -\tau_\gamma \frac{dE}{d\gamma_n}, \qquad \dot{\kappa}_n = -\tau_\kappa \frac{dE}{d\kappa_n}$$

where the time-scale parameters $\tau_T$, $\tau_\kappa$ and $\tau_\gamma > 0$. Since $E$ depends on $p_\mu$ both directly and indirectly, previous methods required the solution of a system of $N$ equations for each parameter $p_\mu$ to obtain $dE/dp_\mu$ from $du/dp_\mu$. Our methodology (based on adjoint operators) yields all derivatives $dE/dp_\mu$, $\forall \mu$, by solving a single set of $N$ linear equations.
The nonlinear neural operator for each training pattern $k$, $k = 1, \ldots, K$, at equilibrium is given by

$$^k\varphi_n({}^k\bar{u}, \bar{p}) = g\left[\frac{1}{\kappa_n} \sum_{m'} w_{nm'} T_{nm'}\, {}^k\bar{u}_{m'} + \frac{1}{\kappa_n}\, {}^k\bar{I}_n\right] - {}^k\bar{u}_n = 0 \qquad (3.8)$$

where, without loss of generality, we have set $\gamma_n$ to unity. So, in principle, $^k\bar{u}_n = {}^k\bar{u}_n[T, \kappa, \gamma, {}^k a, \ldots]$. Using Eqs. (3.8), the forward sensitivity matrix can be computed and compactly expressed as
computed and compactly expressed as
{) "l(Jn
{)
,,-
Um
"A
1
gn Kn
-1
Kn
"Agn
[
Wnm Tnm
Wnm T.nm
-
"- 1
{) In
+ {)"_U m
,,~
fJn unm?
(3.9)
Adjoint Operator Algorithms
where
if n E Sx
ifn E SHUSy
Above, $^k\hat{g}_n$ represents the derivative of $g$ with respect to $\bar{z}_n$; if $g = \tanh$, then

$$^k\hat{g}_n = 1 - \left[{}^k g_n\right]^2 \qquad (3.10)$$

where

$$^k g_n = g\left[\frac{1}{\kappa_n}\left(\sum_m w_{nm} T_{nm}\, {}^k\bar{u}_m + {}^k\bar{I}_n\right)\right] \qquad (3.11)$$

Recall that the formal adjoint equation is given as $A^*\, {}^k\tilde{v} = {}^k\bar{s}^*$; here

$$^kA^*_{nm} = {}^kA_{mn} = \frac{1}{\kappa_m}\, {}^k\hat{g}_m \left[ w_{mn} T_{mn} + {}^k\hat{\beta}_m\, \delta_{mn} \right] - \delta_{mn} \qquad (3.12)$$
Using Eqs. (2.9) and (3.5), we can compute the formal adjoint source

ks̄_n = ∂E/∂ kū_n = kū_n − ka_n  if n ∈ S_X ∪ S_Y;    ks̄_n = 0  if n ∈ S_H    (3.13)
The system of adjoint fixed-point equations can then be constructed using Eqs. (3.12) and (3.13), to yield

Σ_m (1/κ_m) kḡ'_m ω_mn T_mn kv̄_m − Σ_m β_m δ_mn kv̄_m = ks̄_n    (3.14)

Notice that the above coupled system, (3.14), is linear in kv̄. Furthermore, it has the same mathematical characteristics as the operational dynamics (3.4). Its components can be obtained as the equilibrium points (i.e., v̇ → 0) of the adjoint neural dynamics

v̇_n + β_n v_n = Σ_m (1/κ_m) kḡ'_m ω_mn T_mn v_m + ks̄_n    (3.15)
As an implementation example, let us conclude by deriving the learning equations for the synaptic strengths T_ij. Recall that

dE/dT_ij = ∂E/∂T_ij + Σ_k Σ_n kv̄_n (∂ kφ_n / ∂T_ij),    μ = (i, j)    (3.16)
We differentiate the steady-state equations (3.8) with respect to T_ij to obtain the forward source term,

∂ kφ_n / ∂T_ij = (kḡ'_n / κ_n) Σ_l ω_nl δ_in δ_jl kū_l = (kḡ'_n / κ_n) δ_in ω_nj kū_j    (3.17)
Since, by definition, ∂E/∂T_nm = 0, the explicit energy gradient contribution is obtained as

Ṫ_nm = −τ_T (ω_nm / κ_n) Σ_k kv̄_n kḡ'_n kū_m    (3.18)
It is straightforward to obtain learning equations for γ_n and κ_n in a similar fashion.
4 ADAPTIVE TIME-SCALES
So far the adaptive learning rates, i.e., τ_p in Eq. (3.7), have not been specified. Now we will show that, by an appropriate selection of these parameters, the convergence of the corresponding dynamical systems can be considerably improved. Without loss of generality, we shall assume τ_T = τ_κ = τ_γ = τ, and we shall seek τ in the form (Barhen et al., 1989; Zak, 1989)

τ = χ |∇E|^{−β}    (4.1)
where ∇E denotes the vector with components ∇_T E, ∇_γ E and ∇_κ E. It is straightforward to show that

d|∇E|/dt = −χ |∇E|^{1−β}    (4.2)

as ∇E tends to zero, where χ is an arbitrary positive constant. If we evaluate the relaxation time of the energy gradient, we find that

t_E = ∫ d|∇E| / (χ |∇E|^{1−β}) = { ∞,  if β ≤ 0;    |∇E|_0^β / (χβ),  if β > 0 }    (4.3)
Thus, for β ≤ 0 the relaxation time is infinite, while for β > 0 it is finite. The dynamical system (3.19) suffers a qualitative change for β > 0: it loses uniqueness of solution. The equilibrium point |∇E| = 0 becomes a singular solution, being intersected by all the transients, and the Lipschitz condition is violated, as one can see from

d( d|∇E|/dt ) / d|∇E| = −χ (1 − β) |∇E|^{−β} → −∞    (4.4)

as |∇E| tends to zero, while β is strictly positive. Such infinitely stable points are "terminal attractors". By analogy with our previous results we choose β = 2/3, which yields
τ = ( Σ_{n,m} [∇_T E]²_{nm} + Σ_n [∇_γ E]²_n + Σ_n [∇_κ E]²_n )^{−1/3}    (4.5)
The introduction of these adaptive time-scales dramatically improves the convergence of the corresponding learning dynamical systems.
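As a brief illustration, the adaptive time-scale of Eq. (4.5) reduces to a one-line computation. The sketch below is our own; the small constant eps is an assumption we add for numerical safety near the terminal attractor, where the exact expression diverges.

import numpy as np

def adaptive_tau(grad_T, grad_gamma, grad_kappa, eps=1e-12):
    # Eq. (4.5): tau = (sum of squared energy-gradient components)^(-1/3)
    sq = np.sum(grad_T ** 2) + np.sum(grad_gamma ** 2) + np.sum(grad_kappa ** 2)
    return (sq + eps) ** (-1.0 / 3.0)

# Each learning step then applies p <- p - dt * tau * dE/dp to every parameter group.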
5 SIMULATIONS
The computational framework developed in the preceding section has been applied to a number of problems that involve learning nonlinear mappings, including Exclusive-OR, the hyperbolic tangent and trigonometric functions, e.g., sin. Some of these mappings (e.g., XOR) have been extensively benchmarked in the literature, and provide an adequate basis for illustrating the computational efficacy of our proposed formulation. Figures 1(a)-1(d) demonstrate the temporal profile of various network elements during learning of the XOR function. A six-neuron feedforward network was used, which included self-feedback on the output unit and a bias. Fig. 1(a) shows the LMS error during the training phase. The worst-case convergence of the output state neuron to the presented attractor is displayed in Fig. 1(b). Notice the rapid convergence of the input state due to the terminal attractor effect. The behavior of the adaptive time-scale parameter τ is depicted in Fig. 1(c). Finally, Fig. 1(d) shows the evolution of the energy gradient components.
The test setup for signal processing applications, i.e., learning the sin function and the tanh sigmoidal nonlinearity, included an 8-neuron fully connected network with no bias. In each case the network was trained using as few as 4 randomly sampled training points. Efficacy of recall was determined by presenting 100 random samples. Figs. 2 and 3(b) illustrate that we were able to approximate the sin and the hyperbolic tangent functions using 4 and 16 pairs, respectively. Fig. 3(a) demonstrates the network performance when 4 pairs were used to learn the hyperbolic tangent.
We would like to mention that, since our learning methodology involves terminal attractors, extreme caution must be exercised when simulating the algorithms in a digital computing environment. Our discussion on the sensitivity of results to the integration schemes (Barhen, Zak and Gulati, 1989) emphasizes that explicit methods such as Euler or Runge-Kutta shall not be used, since the presence of terminal attractors induces extreme stiffness. Practically, this would require an integration time-step of infinitesimal size, resulting in numerical round-off errors of unacceptable magnitude. Implicit integration techniques such as the Kaps-Rentrop scheme should therefore be used.
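The stiffness issue can be reproduced on the scalar test equation ẋ = −sign(x)|x|^{1/3}, a prototypical terminal attractor. The sketch below is our own, not from the paper: explicit Euler rings around the equilibrium, while a simple implicit (backward Euler) step decays monotonically.

import numpy as np

def explicit_euler(x, dt, steps):
    # Overshoots near x = 0 because |x|**(1/3) has unbounded slope there.
    for _ in range(steps):
        x = x - dt * np.sign(x) * abs(x) ** (1.0 / 3.0)
    return x

def implicit_euler(x, dt, steps):
    # Backward Euler: solve y + dt*sign(y)*|y|**(1/3) = x by bisection;
    # the left-hand side is increasing in y, so the root is bracketed by 0 and x.
    for _ in range(steps):
        lo, hi = (0.0, x) if x >= 0 else (x, 0.0)
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if mid + dt * np.sign(mid) * abs(mid) ** (1.0 / 3.0) > x:
                hi = mid
            else:
                lo = mid
        x = 0.5 * (lo + hi)
    return x

print(abs(explicit_euler(1e-6, 0.01, 200)))   # stuck oscillating near 3.5e-4
print(abs(implicit_euler(1e-6, 0.01, 200)))   # decays monotonically toward 0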
6
CONCLUSIONS
In this paper we have presented a theoretical framework for faster learning in dynamical neural networks. Central to our approach is the concept of adjoint operators, which enables computation of network neuromorphic energy gradients with respect to all system parameters using the solution of a single set of linear equations. If C_F and C_A denote the computational costs associated with solving the forward and adjoint sensitivity equations (Eqs. 2.5 and 2.6), and if M denotes the number of parameters of interest in the network, the speedup achieved is
M C_F / (C_F + C_A)

If we assume that C_F ≈ C_A and that M = N² + 2N + …, we see that the lower bound on the speedup per learning iteration is O(N²). Finally, particular care must be exercised when integrating the dynamical systems of interest, due to the extreme stiffness introduced by the terminal attractor constructs.
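For concreteness (our own arithmetic, using the reconstructed speedup expression above): with C_F ≈ C_A the speedup is approximately M/2, and a fully connected network with N = 100 neurons has M ≈ N² + 2N = 10,200 adaptive parameters (the T_nm, κ_n and γ_n), so each learning iteration costs roughly 5,000 times less than with forward sensitivities.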
Acknowledgements
The research described in this paper was performed by the Center for Space Microelectronics Technology, Jet Propulsion Laboratory, California Institute of Technology, and was sponsored by agencies of the U.S. Department of Defense, and
by the Office of Basic Energy Sciences of the U.S. Department of Energy, through
interagency agreements with NASA.
References
R.G. Alsmiller, J. Barhen and J. Horwedel. (1984) "The Application of Adjoint Sensitivity Theory to a Liquid Fuels Supply Model", Energy, 9(3), 239-253.
J. Barhen, D.G. Cacuci and J.J. Wagschal. (1982) "Uncertainty Analysis of Time-Dependent Nonlinear Systems", Nucl. Sci. Eng., 81, 23-44.
J. Barhen, S. Gulati and M. Zak. (1989) "Neural Learning of Constrained Nonlinear Transformations", IEEE Computer, 22(6), 67-76.
J. Barhen, M. Zak and S. Gulati. (1989) "Fast Neural Learning Algorithms Using Networks with Non-Lipschitzian Dynamics", in Proc. Neuro-Nimes '89, 55-68, EC2, Nanterre, France.
D.G. Cacuci, C.F. Weber, E.M. Oblow and J.H. Marable. (1980) "Sensitivity Theory for General Systems of Nonlinear Equations", Nucl. Sci. Eng., 75, 88-110.
E.M. Oblow. (1977) "Sensitivity Theory for General Non-Linear Algebraic Equations with Constraints", ORNL/TM-5815, Oak Ridge National Laboratory.
B.A. Pearlmutter. (1989) "Learning State Space Trajectories in Recurrent Neural Networks", Neural Computation, 1(3), 263-269.
F.J. Pineda. (1988) "Dynamics and Architecture in Neural Computation", Journal of Complexity, 4, 216-245.
D.E. Rumelhart and J.L. McClelland. (1986) Parallel and Distributed Processing, MIT Press, Cambridge, MA.
N. Toomarian, E. Wacholder and S. Kaizerman. (1987) "Sensitivity Analysis of Two-Phase Flow Problems", Nucl. Sci. Eng., 99(1), 53-81.
R.J. Williams and D. Zipser. (1989) "A Learning Algorithm for Continually Running Fully Recurrent Neural Networks", Neural Computation, 1(3), 270-280.
M. Zak. (1989) "Terminal Attractors", Neural Networks, 2(4), 259-274.
(Figure 1(a)-(d) here; each panel is plotted against training iterations.)
Figure 1(a)-(d). Learning the Exclusive-OR function using a 6-neuron (including bias) feedforward dynamical network with self-feedback on the output unit.
(Figure 2 here; both axes span -1.000 to 1.000.)
Figure 2. Learning the Sin function using a fully connected, 8-neuron network with no bias. The training set comprised 4 randomly selected points.
(Figure 3(a)-(b) here; both axes span -1.000 to 1.000.)
Figure 3. Learning the Hyperbolic Tangent function using a fully connected, 8-neuron network with no bias. (a) using 4 randomly selected training samples; (b) using 16 randomly selected training samples.
Learning efficient auditory codes using spikes
predicts cochlear filters
Evan Smith1
Michael S. Lewicki2
[email protected] [email protected]
Departments of Psychology1 & Computer Science2
Center for the Neural Basis of Cognition
Carnegie Mellon University
Abstract
The representation of acoustic signals at the cochlear nerve must serve a
wide range of auditory tasks that require exquisite sensitivity in both time
and frequency. Lewicki (2002) demonstrated that many of the filtering
properties of the cochlea could be explained in terms of efficient coding
of natural sounds. This model, however, did not account for properties
such as phase-locking or how sound could be encoded in terms of action
potentials. Here, we extend this theoretical approach with an algorithm for
learning efficient auditory codes using a spiking population code, building
on a theoretical model for coding sound in terms of spikes. In this model,
each spike encodes the precise time position and magnitude of a localized, time varying kernel function. By adapting the kernel functions to
the statistics of natural sounds, we show that, compared to conventional
signal representations, the spike code achieves far greater coding efficiency. Furthermore, the inferred kernels show striking similarities
to measured cochlear filters, including a similar bandwidth versus frequency
dependence.
1 Introduction
Biological auditory systems perform tasks that require exceptional sensitivity to both spectral and temporal acoustic structure. This precision is all the more remarkable considering
these computations begin with an auditory code that consists of action potentials whose duration is in milliseconds and whose firing in response to hair cell motion is probabilistic. In
computational audition, representing the acoustic signal is the first step in any algorithm,
and there are numerous approaches to this problem which differ in both their computational complexity and in what aspects of signal structure are extracted. The auditory nerve
representation subserves a wide variety of different auditory tasks and is presumably welladapted for these purposes. Here, we investigate the theoretical question of what computational principles might underlie cochlear processing and the representation of the auditory
nerve.
For sensory representations, a theoretical principle that has attracted considerable interest is efficient coding. This posits that (assuming low noise) one goal of sensory coding
is to represent signals in the natural sensory environment efficiently, i.e. with minimal
redundancy [1?3]. Recently, it was shown that efficient coding of natural sounds could explain auditory nerve filtering properties and their organization as a population [4] and also
account for some non-linear properties of auditory nerve responses [5]. Although those
results provided an explanation for auditory nerve encoding of spectral information, they
fail to explain the encoding of temporal information. Here, we extend the standard efficient
coding model, which has an implicit stationarity assumption, to form efficient representations of non-stationary and time-relative acoustic structures.
2 An abstract model for auditory coding
In standard models of efficient coding, sensory signals are represented by vectors of fixed
length, and the representation is a linear transformation of the input pattern. A simple
method to encode temporal signals is to divide the signal into discrete blocks; however, this
approach has several drawbacks. First, the underlying acoustic structures have no relation
to the block boundaries, so elemental acoustic features may be split across blocks. Second,
this representation implicitly assumes that the signal structures are stationary, and provides
no way to represent time-relative structures such as transient sounds. Finally, this approach
has limited plausibility as a model of cochlear encoding. To address all of these problems,
we use a theoretical model in which sounds are represented as spikes [6, 7]. In this model,
the signal, x(t), is encoded with a set of kernel functions, φ_1, …, φ_M, that can be positioned
arbitrarily and independently in time. The mathematical form of the representation with
additive noise is
x(t) = Σ_{m=1}^{M} Σ_{i=1}^{n_m} s_i^m φ_m(t − τ_i^m) + ε(t),    (1)
where τ_i^m and s_i^m are the temporal position and coefficient of the ith instance of kernel φ_m, respectively. The notation n_m indicates the number of instances of φ_m, which need not be the same across kernels. In addition, the kernels are not restricted in form or length.
The key theoretical abstraction of the model is that the signal is decomposed in terms of
discrete acoustic events, each of which has a precise amplitude and temporal position. We
interpret the analog amplitude values as representing a local population of auditory nerve
spikes. Thus, this theory posits that the purpose of the (binary) spikes at the auditory nerve
is to encode as accurately as possible the temporal position and amplitude of the acoustic
events defined by φ_m(t). The main questions we address are 1) encoding, i.e., what are the optimal values of τ_i^m and s_i^m, and 2) learning, i.e., what are the optimal kernel functions φ_m(t).
2.1 Encoding
Finding the optimal representation of arbitrary signals in terms of spikes is a hard problem,
and currently there are no known biologically plausible algorithms that solve this problem
well [7]. There are reasons to believe that this problem can be solved (approximately) with
biological mechanisms, but for our purposes here, we compute the values of τ_i^m and s_i^m for a given signal using the matching pursuit algorithm [8]. It iteratively approximates the input signal with successive orthogonal projections onto a basis. The signal can be decomposed into

x(t) = ⟨x(t), φ_m⟩ φ_m + R_x(t),    (2)

where ⟨x(t), φ_m⟩ is the inner product between the signal and the kernel and is equivalent to s_i^m in equation 1. The final term in equation 2, R_x(t), is the residual signal after approximating x(t) in the direction of φ_m. The projection with the largest magnitude inner product will minimize the power of R_x(t), thereby capturing the most structure possible with a single kernel.
(Figure 1 here: the spike code is plotted as kernel center frequency, 100-5000 Hz, against time, 0-25 ms, above the input, reconstruction, and residual waveforms.)
Figure 1: A brief segment of the word canteen (input) is represented as a spike code (top).
A reconstruction of the speech based only on the few spikes shown (ovals in spike code) is
very accurate with relatively little residual error (reconstruction and residual). The colored
arrows and matching curves illustrate the correspondence between a few of the ovals and
the underlying acoustic structure represented by the kernel functions.
Equation 2 can be rewritten more generally as

R_x^n(t) = ⟨R_x^n(t), φ_m⟩ φ_m + R_x^{n+1}(t),    (3)

with R_x^0(t) = x(t) at the start of the algorithm. On each iteration, the current residual is projected onto the basis. The projection with the largest inner product is subtracted out, and its coefficient and time are recorded. This projection and subtraction leaves ⟨R_x^n(t), φ_m⟩ φ_m orthogonal to the residual signal R_x^{n+1}(t) and to all previous and future projections [8]. As a result, matching pursuit codes are composed of mutually orthogonal signal structures. For the results reported here, the encoding was halted when s_i^m fell below a preset threshold (the spiking threshold).
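To make the encoding loop concrete, here is a minimal matching-pursuit sketch (our own; the function name is hypothetical, and it assumes each kernel is shorter than the signal and has unit norm):

import numpy as np

def matching_pursuit(x, kernels, threshold):
    residual = x.astype(float).copy()
    spikes = []                                   # list of (m, tau, s) triples
    while True:
        best_s, best_m, best_tau = 0.0, None, None
        for m, phi in enumerate(kernels):
            # inner products of the residual with every time shift of phi
            corr = np.correlate(residual, phi, mode='valid')
            tau = int(np.argmax(np.abs(corr)))
            if abs(corr[tau]) > abs(best_s):
                best_s, best_m, best_tau = corr[tau], m, tau
        if best_m is None or abs(best_s) < threshold:
            break                                 # spiking threshold reached
        residual[best_tau:best_tau + len(kernels[best_m])] -= best_s * kernels[best_m]
        spikes.append((best_m, best_tau, best_s))
    return spikes, residual

Each appended triple corresponds to one "spike": a kernel identity, a time, and an analog coefficient, exactly the quantities that enter equation 1.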
Figure 1 illustrates the spike code model and its efficiency in representing speech. The spoken word "canteen" was encoded as a set of spikes using a fixed set of kernel functions. The kernels can have arbitrary shape, and for illustration we have chosen gammatones (mathematical approximations of cochlear filters) as the kernel functions. A brief segment from the input signal (Figure 1, Input) consists of three glottal pulses in the /a/ vowel. The resulting spike code is shown above it. Each oval represents the temporal position and center frequency of an underlying kernel function, with oval size and gray value indicating kernel amplitude. For four spikes, colored arrows and curves indicate the relationship between the ovals and the acoustic events they represent. As evidenced from the figure, the very small set of spike events is sufficient to produce a very accurate reconstruction of the sound (reconstruction and residual).
2.2 Learning
We adapt the method used in [9] to train our kernel functions. Equation 1 can be rewritten in probabilistic form as

p(x|φ) = ∫ p(x|φ, ŝ) p(ŝ) dŝ,    (4)

where ŝ, an approximation of the posterior maximum, comes from the set of coefficients generated by matching pursuit. We assume the noise in the likelihood, p(x|φ, ŝ), is Gaussian and the prior, p(s), is sparse. The basis is updated by taking the gradient of
the log probability,

∂/∂φ_m log p(x|φ) = ∂/∂φ_m [ log p(x|φ, ŝ) + log p(ŝ) ]    (5)

= −(1/2σ_ε) ∂/∂φ_m [ x − Σ_{m=1}^{M} Σ_{i=1}^{n_m} ŝ_i^m φ_m(t − τ_i^m) ]²    (6)

= (1/σ_ε) Σ_i ŝ_i^m [ x − x̂ ]    (7)
As noted by Olshausen (2002), equation 7 indicates that the kernels are updated in Hebbian
fashion, simply as a product of activity and residual [9] (i.e., the unit shifts its preferred
stimuli in the direction of the stimuli that just made it spike minus those elements already
encoded by other units). But in the case of the spike code, rather than updating for every
time-point, we need only update at times when the kernel spiked.
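A minimal version of this update (our sketch, assuming for simplicity that all kernels share one length and that spikes come from the matching-pursuit routine above) is:

import numpy as np

def kernel_gradient_step(kernels, residual, spikes, lr=0.01):
    # Eq. (7): accumulate coefficient-weighted residual segments at spike times.
    for m, tau, s in spikes:
        L = len(kernels[m])
        kernels[m] += lr * s * residual[tau:tau + L]
        kernels[m] /= np.linalg.norm(kernels[m])   # renormalize, as in the text
    return kernels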
As noted earlier, the model can use kernels of any form or length. This capability also
extends to the learning algorithm such that it can learn functions of differing temporal extents, growing or shrinking them as needed. Low frequency functions and others requiring
longer temporal extent can be grown from shorter initial seeds, while brief functions can be
trimmed to speed processing and minimize the effects of over-fitting. Periodically during
training, a simple heuristic is used to trim or extend the kernels φ_m. The functions are
initially zero-padded. If learning causes the power of the padding to surpass a threshold,
the padding is extended. If the power of the padding plus an adjacent segment falls below
the threshold, the padding is trimmed from the end. Following the gradient step and length
adjustment, the kernels are again normalized and the next training signal is encoded.
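A sketch of this heuristic (ours; the padding length and power threshold are illustrative values, not the ones used in the experiments):

import numpy as np

def adjust_length(phi, pad=10, power_threshold=1e-4):
    if np.mean(phi[-pad:] ** 2) > power_threshold:      # structure reached the padded edge
        return np.concatenate([phi, np.zeros(pad)])     # extend the padding
    if np.mean(phi[-2 * pad:] ** 2) < power_threshold:  # padding plus adjacent segment is quiet
        return phi[:-pad]                               # trim from the end
    return phi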
3 Adapting kernels to natural sounds
The spike coding algorithm was used to learn kernel functions for two different classes
of sounds: human speech and music. For speech, the algorithm trained on a subset of the TIMIT Speech Corpus. Each training sample consisted of a single speaker saying a single
sentence. The signals were bandpass filtered to remove DC components of the signal and
to prevent aliasing from affecting learning. The signals were all normalized to a maximum
amplitude of 1.
Each of the 30 kernel functions was initialized to a random Gaussian vector of 100 samples in duration. The threshold below which spikes (values of s^m) were ignored during the encoding stage was set at 0.1, which allowed for an initial encoding of ≈ 12 dB signal-to-noise ratio (SNR). As indicated by equation 7, the gradient depends on the residual. If the residual drops near zero or is predominantly noise, then learning is impeded. By slowly
increasing the spiking threshold as the average residual drops, we retain some signal structure in the residual for further training. At the same time, the power distribution of natural
sounds means that high frequency signal components might fall entirely below threshold,
preventing their being learned. One possible solution that was not implemented here is
using separate thresholds for each kernel.
Figure 2: When adapted to speech, kernel functions become asymmetric sinusoids (smooth
curves in red, zero padding has been removed for plotting), with sharp attacks and gradual
decays. They also adapt in temporal extent, with longer and shorter functions emerging
from the same initial length. These learned kernels are strikingly similar to revcor functions
obtained from cat auditory nerve fibers (noisy curves in blue). The revcor functions were
normalized and aligned in phase with the learned kernels but are otherwise unaltered (no
smoothing or fitting).
Figure 2 shows the kernel functions trained on speech (red curves). All are temporally localized, bandpass filters. They are similar in form to previous results but with several notable differences. Most notably, the learned kernel functions are temporally asymmetric, with a sharp attack and gradual decay, which matches the physiological filtering properties of auditory nerve fibers. Each kernel function in figure 2 is overlaid on a so-called reverse-correlation (revcor) function, which is an estimate of the physiological impulse response function for an individual auditory nerve fiber [10]. The revcor functions have been normalized, and those most closely matching in terms of center frequency and envelope were phase aligned with the learned kernels by hand. No additional fitting was done, yet there is a striking similarity between the inferred kernel functions and physiologically estimated
reverse-correlation functions. For 25 out of 30 kernel functions, we found a close match
to the physiological revcor functions (correlation > 0.8). Of the remaining filters, all
possessed the same basic asymmetric filter structure shown in figure 2 and showed a more
modest match to the data (correlation > 0.5).
In the standard efficient coding model, the signal and the basis functions are all the same
length. In order for the basis to span the signal space in the time domain and still be temporally localized, some of the learned functions are essentially replications of one another.
In the spike coding model, this redundancy does not occur because coding is time-relative.
Kernel functions can be placed arbitrarily in time such that one kernel function can code for
similar acoustic events at different points in the signal. So, temporally extended functions
can be learned without causing an explosion in the number of high-frequency functions
(Figure 3 here: log-log scatter of bandwidth (kHz) against center frequency (kHz), with legend entries "Speech Prediction" and "Auditory Nerve Filters".)
Figure 3: The center frequency vs. bandwidth distribution of learned kernel functions (red
squares) plotted against physiological data (blue pluses).
needed to span the signal space. Because cochlear coding also shares this quality, it might
also allow more precise predictions about the population characteristics of cochlear filters.
Individually, the learned kernel functions closely match the linear component of cochlear
filters. We can also compare the learned kernels against physiological data in terms of
population distributions. In frequency space, our learned population follows the approximately logarithmic distribution found in the cochlea, a more natural distribution of filters
compared to previous findings, where the need to tile high-frequency space biased the distribution [4]. Figure 3 presents a log-log scatter-plot of the center frequency of each kernel
versus its bandwidth (red squares). Plotted on the same axis are two sets of empirical data.
One set (blue pluses) comes from a large corpus of reverse-correlation functions derived
from physiological recordings of auditory nerve fibers [10]. Both the slope and distribution of the learned kernel functions match those of the empirical data. The distribution of
learned kernels even appears to follow shifts in the slope of the empirical data at the high
and low frequencies.
4 Coding Efficiency
We can quantify the coding efficiency of the learned kernel functions in bits so as to objectively evaluate the model and compare it quantitatively to other signal representations.
Rate-fidelity provides a useful objective measure for comparison. Here we use a method
developed in [7] which we now briefly describe. Computing the rate-fidelity curves begins
m
with associated pairs of coefficients and time values, {sm
i , ?i }, which are initially stored
as double precision variables. Storing the original time values referenced to the start of
the signal is costly because their range can be arbitrarily large and the distribution of time
points is essentially uniform. Storing only the time since the last spike, ??im , greatly restricts the range and produces a variable that approximately follows a gamma distribution.
m
Rate-fidelity curves are generated by varying the precision of the code, {sm
i , ??i }, and
computing the resulting fidelity through reconstruction. A uniform quantizer is used to
vary the precision of the code between 1 and 16 bits. At all levels of precision, the bin
widths for quantization are selected so that equal numbers of values fall in each bin. All
widths for quantization are selected so that equal numbers of values fall in each bin. All s_i^m or Δτ_i^m that fall within a bin are recoded to have the same value. We use the mean of the non-quantized values that fell within the bin. s_i^m and Δτ_i^m are quantized independently. Treating the quantized values as samples from a random variable, we estimate a code's
estimated entropy of the quantized variables and the number of coefficients per second for
a given signal. At each level of precision the signal is reconstructed based on the quantized
values, and an SNR for the code is computed. This process was repeated across a set of
signals and the results were averaged to produce rate-fidelity curves.
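The quantization and entropy-estimation step can be sketched as follows (our code; it implements the equal-population binning described above for one variable at a time):

import numpy as np

def quantize_and_entropy(values, bits):
    n_bins = 2 ** bits
    edges = np.quantile(values, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, values, side='right') - 1, 0, n_bins - 1)
    means = np.array([values[idx == b].mean() if np.any(idx == b) else 0.0
                      for b in range(n_bins)])
    recoded = means[idx]                            # each value becomes its bin mean
    counts = np.bincount(idx, minlength=n_bins).astype(float)
    p = counts[counts > 0] / counts.sum()
    entropy_bits = -np.sum(p * np.log2(p))          # bits per coefficient
    return recoded, entropy_bits

The rate is then the estimated entropy multiplied by the number of spikes per second, and the fidelity is the SNR of the signal reconstructed from the recoded values.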
Coding efficiency can be measured in nearly identical fashion for other signal representations. For comparison we generate rate-fidelity curves for Fourier and wavelet representations as well as for a spike code using either learned kernel functions or gammatone
functions. Fourier coefficients were obtained for each signal via Fast Fourier Transform.
The real and imaginary parts were quantized independently, and the rate was based on
the estimated entropy of the quantized coefficients. Reconstruction was simply the inverse Fourier transform of the quantized coefficients. Similarly, coding efficiency using
Daubechies wavelets was estimated using Matlab?s discrete wavelet transform and inverse
wavelet transform functions. Curves for the gammatone spike code were generated as described above.
Figure 4 shows the rate-fidelity curves calculated for speech from the TIMIT speech corpus
[11]. At low bit rates (below 40 Kbps), both of the spike codes produce more efficient
representations of speech than the other traditional representations. For example, between
10 and 20 Kbps the fidelity of the spike representation of speech using learned kernels is
approximately twice that of either Fourier or wavelets. The learned kernels are also slightly
but significantly more efficient than spike codes using gammatones, particularly in the case
of music. The kernel functions trained on music are more extended in time and appear
better able to describe harmonic structure than the gammatones. As the number of spikes
increases the spike codes become less efficient, with the curve for learned kernels dropping
more rapidly than for gammatones. Encoding sounds to very high precision requires setting
the spike threshold well below the threshold used in training. It may be that the learned
kernel functions are not well adapted to the statistics of very low amplitude sounds. At
higher bit rates (above 60 Kbps) the Fourier and wavelet representations produce much
higher rate-fidelity curves than either spike code.
5 Conclusion
We have presented a theoretical model of auditory coding in which temporal kernels are
the elemental features of natural sounds. The essential property of these features is that
they can describe acoustic structure at arbitrary time points, and can thus represent nonstationary, transient sounds in a compact and shift-invariant manner. We have shown that
by using this time-relative spike coding model and adapting the kernel shapes to efficiently
code natural sounds, it is possible to account for both the detailed filter shapes of auditory nerve fibers and their distribution as a population. Moreover, we have demonstrated
quantitatively that, at a broad range of low to medium bit rates, this type of code is substantially more efficient than conventional signal representations such as Fourier or wavelet
transforms.
(Figure 4 here: SNR (dB) against rate (Kbps), with curves labeled Spike Code: adapted, Spike Code: gammatone, Block Code: wavelet, and Block Code: Fourier.)
Figure 4: Rate-fidelity curves for speech, comparing spike coding using learned kernels (red) and gammatones (light blue) with block coding using the discrete Daubechies wavelet transform (black) and the Fourier transform (dark blue).

References
[1] H. B. Barlow. Possible principles underlying the transformation of sensory messages. In W. A. Rosenbluth, editor, Sensory Communication, pages 217-234. MIT Press, Cambridge, 1961.
[2] J. J. Atick. Could information-theory provide an ecological theory of sensory processing. Network, 3(2):213-251, 1992.
[3] E. Simoncelli and B. Olshausen. Natural image statistics and neural representation. Annual Review of Neuroscience, 24:1193-1216, 2001.
[4] M. S. Lewicki. Efficient coding of natural sounds. Nature Neuroscience, 5(4):356-363, 2002.
[5] O. Schwartz and E. P. Simoncelli. Natural signal statistics and sensory gain control. Nature Neuroscience, 4:819-825, 2001.
[6] M. S. Lewicki. Efficient coding of time-varying patterns using a spiking population code. In R. P. N. Rao, B. A. Olshausen, and M. S. Lewicki, editors, Probabilistic Models of the Brain: Perception and Neural Function, pages 241-255. MIT Press, Cambridge, MA, 2002.
[7] E. C. Smith and M. S. Lewicki. Efficient coding of time-relative structure using spikes. Neural Computation, 2004.
[8] S. G. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397-3415, 1993.
[9] B. A. Olshausen. Sparse codes and spikes. In R. P. N. Rao, B. A. Olshausen, and M. S. Lewicki, editors, Probabilistic Models of the Brain: Perception and Neural Function, pages 257-272. MIT Press, Cambridge, MA, 2002.
[10] L. H. Carney, M. J. McDuffy, and I. Shekhter. Frequency glides in the impulse responses of auditory-nerve fibers. Journal of the Acoustical Society of America, 105:2384-2391, 1999.
[11] J. S. Garofolo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, D. S. Pallett, N. L. Dahlgren, and V. Zue. TIMIT acoustic-phonetic continuous speech corpus, 1990.
A Cost-Shaping LP for
Bellman Error Minimization with
Performance Guarantees
Daniela Pucci de Farias
Mechanical Engineering
Massachusetts Institute of Technology
Benjamin Van Roy
Management Science and Engineering
and Electrical Engineering
Stanford University
Abstract
We introduce a new algorithm based on linear programming that
approximates the differential value function of an average-cost
Markov decision process via a linear combination of pre-selected
basis functions. The algorithm carries out a form of cost shaping
and minimizes a version of Bellman error. We establish an error
bound that scales gracefully with the number of states without
imposing the (strong) Lyapunov condition required by its counterpart in [6]. We propose a path-following method that automates
selection of important algorithm parameters which represent counterparts to the ?state-relevance weights? studied in [6].
1 Introduction
Over the past few years, there has been a growing interest in linear programming
(LP) approaches to approximate dynamic programming (DP). These approaches
offer algorithms for computing weights to fit a linear combination of pre-selected
basis functions to a dynamic programming value function. A control policy that is
?greedy? with respect to the resulting approximation is then used to make real-time
decisions.
Empirically, LP approaches appear to generate effective control policies for high-dimensional dynamic programs [1, 6, 11, 15, 16]. At the same time, the strength
and clarity of theoretical results about such algorithms have overtaken counterparts
available for alternatives such as approximate value iteration, approximate policy
iteration, and temporal-difference methods. As an example, a result in [6] implies
that, for a discrete-time finite-state Markov decision process (MDP), if the span of
the basis functions contains the constant function and comes within a distance of ε of the dynamic programming value function, then the approximation generated by a certain LP will come within a distance of O(ε). Here, the coefficient of the O(ε) term depends on the discount factor and the metric used for measuring distance, but not on the choice of basis functions. On the other hand, the strongest results available for approximate value iteration and approximate policy iteration only promise O(ε) error under additional requirements on iterates generated in the course of executing the algorithms [3, 13]. In fact, it has been shown that, even when ε = 0, approximate
value iteration can generate a diverging sequence of approximations [2, 5, 10, 14].
In this paper, we propose a new LP for approximating optimal policies. We work
with a formulation involving average cost optimization of a possibly infinite-state
MDP. The fact that we work with this more sophisticated formulation is itself a
contribution to the literature on LP approaches to approximate DP, which have
been studied for the most part in finite-state discounted-cost settings. But we view
as our primary contributions the proposed algorithms and theoretical results, which
strengthen in important ways previous results on LP approaches and unify certain
ideas in the approximate DP literature. In particular, highlights of our contributions
include:
1. Relaxed Lyapunov Function dependence. Results in [6] suggest that, in order for the LP approach presented there to scale gracefully to large problems, a certain linear combination of the basis functions must be a "Lyapunov function" satisfying a certain strong Lyapunov condition.
The method and results in our current paper eliminate this requirement.
Further, the error bound is strengthened because it alleviates an undesirable
dependence on the Lyapunov function that appears in [6] even when the
Lyapunov condition is satisfied.
2. Restart Distribution Selection. Applying the LP studied in [6] requires
manual selection of a set of parameters called state-relevance weights. That
paper illustrated the importance of a good choice and provided intuition
on how one might go about making the choice. The LP in the current
paper does not explicitly make use of state-relevance weights, but rather,
an analog which we call a restart distribution, and we propose an automated
method for finding a desirable restart distribution.
3. Relation to Bellman-Error Minimization. An alternative approach
for approximate DP aims at minimizing ?Bellman error? (this idea was
first suggested in [16]). Methods proposed for this (e.g., [4, 12]) involve
stochastic steepest descent of a complex nonlinear function. There are no
results indicating whether a global minimum will be reached or guaranteeing
that a local minimum attained will exhibit desirable behavior. In this
paper, we explain how the LP we propose can be thought of as a method
for minimizing a version of Bellman error. The important differences here
are that our method involves solving a linear ? rather than a nonlinear (and
nonconvex) ? program and that there are performance guarantees that can
be made for the outcome.
The next section introduces the problem formulation we will be working with. Section 3 presents the LP approximation algorithm and an error bound. In Section 4,
we propose a method for computing a desirable reset distribution. The LP approximation algorithm works with a perturbed version of the MDP. Errors introduced
by this perturbation are studied in Section 5. A closing section discusses relations
to our prior work on LP approaches to approximate DP [6, 8].
2 Problem Formulation and Perturbation Via Restart
Consider an MDP with a countable state space S and a finite set of actions A
available at each state. Under a control policy u : S → A, the system dynamics are defined by a transition probability matrix P_u ∈ ℜ^{|S|×|S|}, where for policies u and ū and states x and y, (P_u)_{xy} = (P_ū)_{xy} if u(x) = ū(x). We will assume that, under each policy u, the system has a unique invariant distribution, given by π_u(x) = lim_{t→∞} (P_u^t)_{yx}, for all x, y ∈ S.
A cost g(x, a) is associated with each state-action pair (x, a). For shorthand, given any policy u, we let g_u(x) = g(x, u(x)). We consider the problem of computing a policy that minimizes the average cost λ_u = π_uᵀ g_u. Let λ* = min_u λ_u and define the differential value function h*(x) = min_u lim_{T→∞} E_x^u[ Σ_{t=0}^{T} (g_u(x_t) − λ*) ]. Here, the superscript u of the expectation operator denotes the control policy and the subscript x denotes conditioning on x_0 = x. It is easy to show that there exists a policy u that simultaneously minimizes the expectation for every x. Further, a policy u* is optimal if and only if u*(x) ∈ arg min_a ( g(x, a) + Σ_y (P_a)_{xy} h*(y) ) for all x ∈ S.
While in principle h* can be computed exactly by dynamic programming algorithms, this is often infeasible due to the curse of dimensionality. We consider approximating h* using a linear combination Σ_{k=1}^{K} r_k φ_k of fixed basis functions φ_1, …, φ_K : S → ℜ. In this paper, we propose and analyze an algorithm for computing weights r ∈ ℜ^K to approximate h*(x) ≈ Σ_{k=1}^{K} φ_k(x) r_k. It is useful to define a matrix Φ ∈ ℜ^{|S|×K} so that our approximation to h* can be written as Φr.
The algorithm we will propose operates on a perturbed version of the MDP. The
nature of the perturbation is influenced by two parameters: a restart probability
(1 − β) ∈ [0, 1] and a restart distribution c over the state space. We refer to the new system as a (β, c)-perturbed MDP. It evolves similarly to the original MDP, except that at each time, the state process restarts with probability 1 − β; in this event, the next state is sampled randomly according to c. Hence, the perturbed MDP has the same state space, action space, and cost function as the original one, but the transition matrix under each policy u is given by P_{β,u} = β P_u + (1 − β) e cᵀ.
We define some notation that will streamline our discussion and analysis of perturbed MDPs. Let π_{β,u}(x) = lim_{t→∞} (P_{β,u}^t)_{yx}, λ_{β,u} = π_{β,u}ᵀ g_u, and λ_β* = min_u λ_{β,u}, let h_β* be the differential value function for the (β, c)-perturbed MDP, and let u_β* be a policy satisfying u_β*(x) ∈ arg min_a ( g(x, a) + Σ_y (P_{β,a})_{xy} h_β*(y) ) for all x ∈ S. Finally, we will make use of the dynamic programming operators T_{β,u} h = g_u + P_{β,u} h and T_β h = min_u T_{β,u} h.
3 The New LP
We now propose a new LP that approximates the differential value function of a
(β, c)-perturbed MDP. This LP takes as input several pieces of problem data:
1. MDP parameters: g(x, a) and (P_u)_{xy} for all x, y ∈ S, a ∈ A, u : S → A.
2. Perturbation parameters: β ∈ [0, 1] and c : S → [0, 1] with Σ_x c(x) = 1.
3. Basis functions: Φ = [φ_1 ⋯ φ_K] ∈ ℜ^{|S|×K}.
4. Slack function and penalty: ψ : S → [1, ∞) and θ > 0.
We have defined all these terms except for the slack function and penalty, which we will explain after defining the LP. The LP optimizes decision variables r ∈ ℜ^K and s_1, s_2 ∈ ℜ according to

minimize    s_1 + θ s_2
subject to  T_β Φr − Φr + s_1 1 + s_2 ψ ≥ 0,    s_2 ≥ 0.    (1)
It is easy to see that this LP is feasible. Further, if θ is sufficiently large, the objective is bounded. We assume that this is the case and denote an optimal solution by (r̄, s̄_1, s̄_2). Though the first |S| constraints are nonlinear, each involves a minimization over actions and therefore can be decomposed into |A| constraints. This results in a total of |S| · |A| + 1 constraints, which is unmanageable if the
state space is large. We expect, however, that the solution to this LP can be
approximated closely and efficiently through use of constraint sampling techniques
along the lines discussed in [7].
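For a small finite MDP, the LP can be assembled directly; the following sketch (ours, not the authors' implementation) enumerates all |S| · |A| decomposed constraints and solves with an off-the-shelf solver:

import numpy as np
from scipy.optimize import linprog

def cost_shaping_lp(P, g, Phi, psi, beta, c, theta):
    # P: list of |S| x |S| matrices, one per action; g: |S| x |A| costs.
    S, K = Phi.shape
    A_ub, b_ub = [], []
    for a, Pa in enumerate(P):
        P_beta = beta * Pa + (1 - beta) * np.outer(np.ones(S), c)
        D = P_beta @ Phi - Phi
        for x in range(S):
            # g(x,a) + (D r)(x) + s1 + s2 psi(x) >= 0
            A_ub.append(np.concatenate([-D[x], [-1.0, -psi[x]]]))
            b_ub.append(g[x, a])
    obj = np.concatenate([np.zeros(K), [1.0, theta]])       # minimize s1 + theta s2
    bounds = [(None, None)] * (K + 1) + [(0.0, None)]       # only s2 is sign-constrained
    res = linprog(obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.x[:K], res.x[K], res.x[K + 1]                # (r, s1, s2)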
We now offer an interpretation of the LP. The constraint T_β Φr − Φr − λ_β* 1 ≥ 0 is satisfied if and only if Φr = h_β* + δ1 for some δ ∈ ℜ. The terms (s_1 + λ_β*)1 and s_2 ψ can be viewed as cost shaping. In particular, they effectively transform the costs g(x, a) to g(x, a) + s_1 + λ_β* + s_2 ψ(x), so that the constraint T_β Φr − Φr − λ_β* 1 ≥ 0 can be met.
The LP can alternatively be viewed as an efficient method for minimizing a form of Bellman error, as we now explain. Suppose that s_2 = 0. Then, minimization of s_1 corresponds to minimization of ‖min(T_β Φr − Φr − λ_β* 1, 0)‖_∞, which can be viewed as a measure of (one-sided) Bellman error. Measuring error with respect to the maximum norm is problematic, however, when the state space is large. In the extreme case, when there is an infinite number of states and an unbounded cost function, such errors are typically infinite and therefore do not provide a meaningful objective for optimization. This shortcoming is addressed by the slack term s_2 ψ. To understand its role, consider constraining s_1 to be −λ_β* and minimizing s_2. This corresponds to minimization of ‖min(T_β Φr − Φr − λ_β* 1, 0)‖_{∞,1/ψ}, where the norm is defined by ‖h‖_{∞,1/ψ} = max_x |h(x)|/ψ(x). This term can be viewed as a measure of Bellman error with respect to a weighted maximum norm, with weights 1/ψ(x).
One important factor that distinguishes our LP from other approaches to Bellman
error minimization [4, 12, 16] is a theoretical performance guarantee, which we now
develop.
For any r, let u_{β,r}(x) ∈ arg min_a ( g_a(x) + (P_{β,a} Φr)(x) ). Let π_{β,r} = π_{β,u_{β,r}} and let λ_{β,r} = π_{β,r}ᵀ g_{u_{β,r}}. The following theorem establishes that the difference between the average cost λ_{β,r̄} associated with an optimal solution (r̄, s̄_1, s̄_2) to the LP and the optimal average cost λ_β* is proportional to the minimal error that can be attained given the choice of basis functions. A proof of this theorem is provided in the appendix of a version of this paper available at http://www.stanford.edu/~bvr/psfiles/LPnips04.pdf.
Theorem 3.1. If θ ≥ (2 − β) π_{β,u_β*}ᵀ ψ then

λ_{β,r̄} − λ_β* ≤ ( (1 + κ) θ max(ζ, 1) / (1 − β) ) min_{r∈ℜ^K} ‖h_β* − Φr‖_{∞,1/ψ},

where

κ = max_u ‖P_{β,u}‖_{∞,1/ψ} = max_u max_h ( ‖P_{β,u} h‖_{∞,1/ψ} / ‖h‖_{∞,1/ψ} ),

ζ = π_{β,r̄}ᵀ (T_β Φr̄ − Φr̄ + s̄_1 1 + s̄_2 ψ) / cᵀ (T_β Φr̄ − Φr̄ + s̄_1 1 + s̄_2 ψ).
The bound suggests that the slack function ψ should be chosen so that the basis functions can offer a reasonably sized approximation error ‖h_β* − Φr‖_{∞,1/ψ}. At the same time, this choice affects the sizes of κ and θ. The theorem requires that the penalty θ be at least (2 − β) π_{β,u_β*}ᵀ ψ. The term π_{β,u_β*}ᵀ ψ is the steady-state expectation of the slack function under an optimal policy. Note that

κ ≤ max_u ‖P_{β,u} ψ‖_{∞,1/ψ} = max_{u,x} (P_{β,u} ψ)(x) / ψ(x),

which is the maximal factor by which the expectation of ψ can increase over a single time period. When dealing with specific classes of problems it is often possible to select ψ so that the norm ‖h_β* − Φr‖_{∞,1/ψ}, as well as the terms max_u ‖P_{β,u}‖_{∞,1/ψ} and π_{β,u_β*}ᵀ ψ, scale gracefully with the number of states and/or state variables. This issue will be addressed further in a forthcoming full-length version of this paper.
It may sometimes be difficult to verify that any particular value of θ dominates (2 − β) π_{β,u_β*}ᵀ ψ. One approach to selecting θ is to perform a line search over possible values of θ, solving an LP in each case, and choosing the value of θ that results in the best-performing control policy. A simple line search algorithm solves the LP successively for θ = 1, 2, 4, 8, …, until the optimal solution is such that s̄_2 = 0. It is easy to show that the LP is unbounded for all θ < 1, and that there is a finite θ̄ = inf{θ | s̄_2 = 0} such that for each θ ≥ θ̄, the solution is identical and s̄_2 = 0.
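The search is easy to automate; a sketch (ours, with solve_lp standing in for the LP sketch above with everything but θ fixed):

def theta_line_search(solve_lp, tol=1e-8, theta_max=2.0 ** 20):
    theta = 1.0
    while theta <= theta_max:
        r, s1, s2 = solve_lp(theta)
        if s2 <= tol:
            return theta, r       # first power of two past the threshold theta-bar
        theta *= 2.0
    raise RuntimeError('s2 never reached zero; raise theta_max')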
This search process delivers a policy that is at least as good as a policy generated by the LP for some θ ∈ [(2 − β) π_{β,u_β*}ᵀ ψ, 2(2 − β) π_{β,u_β*}ᵀ ψ], and the upper bound of Theorem 3.1 would hold with θ replaced by 2(2 − β) π_{β,u_β*}ᵀ ψ.
We have discussed all but two terms involved in the bound: ζ and 1/(1 − β). Note that if c = π_{β,r̄}, then ζ = 1. In the next section, we discuss an approach that aims at choosing c to be close enough to π_{β,r̄} so that ζ is approximately 1. In Section 5, we discuss how the reset probability 1 − β should be chosen in order to ensure that policies for the perturbed MDP offer similar performance when applied to the original MDP. This choice determines the magnitude of 1/(1 − β).
4 Fixed Points and Path Following
The coefficient $\zeta$ would be equal to 1 if $c$ were equal to $\pi_{\gamma,\bar r}$. We cannot simply
choose $c$ to be equal to $\pi_{\gamma,\bar r}$, since $\pi_{\gamma,\bar r}$ depends on $\bar r$, an outcome of the LP, which
in turn depends on $c$. Rather, arriving at a distribution $c$ such that $c = \pi_{\gamma,\bar r}$ is a fixed-point
problem. In this section, we explore a path-following algorithm for approximating
such a fixed point [9], with the aim of arriving at a value of $\zeta$ that is close to one.
Consider solving a sequence, indexed by $i = 1, \ldots, M$, of $(\gamma_i, c_i)$-perturbed MDPs.
Let $\bar r_i$ denote the weight vector associated with an optimal solution to the LP (1)
with perturbation parameters $(\gamma_i, c_i)$. Let $\gamma_1 = 0$ and $\gamma_{i+1} = \gamma_i + \delta$ for $i \ge 1$,
where $\delta$ is a small positive step size. For any initial choice of $c_1$, we have $c_1 = \pi_{\gamma_1,\bar r_1}$,
since the system resets in every time period. For $i \ge 1$, let $c_{i+1} = \pi_{\gamma_i,\bar r_i}$. One might
hope that the change in $c_i$ is gradual, and therefore, $c_i \approx \pi_{\gamma_i,\bar r_i}$ for each $i$.
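The loop just described is simple enough to sketch directly. Here solve_perturbed_lp and stationary_distribution are hypothetical placeholders: the first solves the LP (1) at fixed $(\gamma_i, c_i)$ and returns $\bar r_i$, and the second computes $\pi_{\gamma,\bar r}$, the steady-state distribution of the greedy policy in the perturbed MDP.

def path_following(solve_perturbed_lp, stationary_distribution,
                   c1, delta=0.005, gamma_max=0.99):
    gamma, c = 0.0, c1            # gamma_1 = 0, so c_1 = pi_{gamma_1, r_1}
    trace = []
    while gamma <= gamma_max:
        r_bar = solve_perturbed_lp(gamma, c)
        trace.append((gamma, c, r_bar))
        c = stationary_distribution(gamma, r_bar)  # c_{i+1} = pi_{gamma_i, r_i}
        gamma += delta            # gamma_{i+1} = gamma_i + delta
    return trace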
We cannot yet offer rigorous theoretical support for the proposed path-following
algorithm. However, we will present promising results from a simple computational
experiment. This experiment involves a problem with continuous state and action
spaces. Though our main result (Theorem 3.1) applies to problems with countable
state spaces and finite action spaces, there is no reason why the LP cannot be applied
to broader classes of problems such as the one we now describe. Consider a scalar
state process $x_{t+1} = x_t + a_t + w_t$, driven by scalar actions $a_t$ and a sequence $w_t$ of
i.i.d. zero-mean unit-variance normal random variables. Consider a cost function
$g(x, a) = (x-2)^2 + a^2$. We aim at approximating the differential value function
using a single basis function $\phi(x) = x^2$. Hence, $(\Phi r)(x) = r x^2$, with $r \in \Re$. We will
use a slack function $\psi(x) = 1 + x^2$ and penalty $\theta = 5$. The special structure of this
problem allows for exact solution of the LP (1) as well as the exact computation
of the parameter $\zeta$, though we will not explain here how this is done. Figure 1
plots $\zeta$ versus $\gamma$, as $\gamma$ is increased from 0 to 0.99, with $c$ initially set to a zero-mean
normal distribution with variance 4. The three curves represent results from using
three different step sizes $\delta \in \{0.01, 0.005, 0.0025\}$. Note that in all cases, $\zeta$ is very
close to 1. Smaller values of $\delta$ resulted in curves being closer to 1: the lowest curve
corresponds to $\delta = 0.01$ and the highest curve corresponds to $\delta = 0.0025$.

Figure 1: Evolution of $\zeta$ with $\delta \in \{0.01, 0.005, 0.0025\}$.
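For readers who want to experiment with this example, the following is a hedged simulation sketch. Under the approximation $(\Phi r)(x) = r x^2$, the greedy action solves $\min_a a^2 + r\,E[(x+a+w)^2]$, which gives $a(x) = -r x/(1+r)$; using this unperturbed greedy policy, and the particular value r = 1.0 below, are illustrative assumptions rather than the exact LP solution computed in the paper.

import numpy as np

def average_cost(r, T=200000, seed=0):
    # Simulate x_{t+1} = x_t + a_t + w_t under the greedy policy for r*x^2
    # and estimate the average cost of g(x, a) = (x - 2)^2 + a^2.
    rng = np.random.default_rng(seed)
    x, total = 0.0, 0.0
    for _ in range(T):
        a = -r * x / (1.0 + r)
        total += (x - 2.0) ** 2 + a ** 2
        x = x + a + rng.normal()   # i.i.d. zero-mean unit-variance noise
    return total / T

print(average_cost(1.0))   # r = 1.0 is a placeholder weight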
5 The Impact of Perturbation
Some simple algebra will show that for any policy $u$,
$$\eta_{\gamma,u} - \eta_u = (1-\gamma) \sum_{t=0}^{\infty} \gamma^t \big(c^T P_u^t g_u - \pi_u^T g_u\big).$$
When the state space is finite, $|c^T P_u^t g_u - \pi_u^T g_u|$ decays at a geometric rate. This
is also true in many practical contexts involving infinite state spaces. One might
think of $m_u = \sum_{t=0}^{\infty} \big(c^T P_u^t g_u - \pi_u^T g_u\big)$ as the mixing time of the policy $u$ if the
initial state is drawn according to the restart distribution $c$. This mixing time is
finite if the differences $c^T P_u^t g_u - \pi_u^T g_u$ converge geometrically. Further, we have
$|\eta_{\gamma,u} - \eta_u| \le m_u (1-\gamma)$, and coming back to the LP, this implies that
$$\eta_{u_{\gamma,\bar r}} - \eta_{u^*} \;\le\; \eta_{\gamma,\bar r} - \eta_{\gamma,u_\gamma^*} + (1-\gamma)\big(m_{u_{\gamma,\bar r}} + \max(m_{u^*}, m_{u_\gamma^*})\big).$$
Combined with the bound of Theorem 3.1, this offers a performance bound for the
policy $u_{\gamma,\bar r}$ applied to the original MDP. Note that when $c = \pi_{\gamma,\bar r}$, in the spirit
discussed in Section 4, we have $m_{u_{\gamma,\bar r}} = 0$. For simplicity, we will assume in the
rest of this section that $m_{u_{\gamma,\bar r}} = 0$ and $m_{u^*} \ge m_{u_\gamma^*}$, so that
$$\eta_{u_{\gamma,\bar r}} - \eta_{u^*} \;\le\; \eta_{\gamma,\bar r} - \eta_{\gamma,u_\gamma^*} + (1-\gamma)\, m_{u^*}.$$
Let us turn to discuss how $\gamma$ should be chosen. This choice must strike a balance
between two factors: the coefficient of $1/(1-\gamma)$ in the bound of Theorem 3.1 and
the loss of $(1-\gamma)\, m_{u^*}$ associated with the perturbation. One approach is to fix some
$\epsilon > 0$ that we are willing to accept as an absolute performance loss, and then choose
$\gamma$ so that $(1-\gamma)\, m_{u^*} \le \epsilon$. Then, we would have $1/(1-\gamma) \le m_{u^*}/\epsilon$. Note that the
term $1/(1-\gamma)$ multiplying the right-hand side of the bound can then be thought
of as a constant multiple of the mixing time of $u^*$. An important open question is
whether it is possible to design an approximate DP algorithm and establish for that
algorithm an error bound that does not depend on the mixing time in this way.
6 Relation to Prior Work
In closing, it is worth discussing how our new algorithm and results relate to our
prior work on LP approaches to approximate DP [6, 8]. If we remove the slack
function by setting $s_2$ to zero and let $s_1 = -(1-\gamma)\, c^T \Phi r$, our LP (1) becomes
$$\text{maximize} \quad c^T \Phi r \qquad \text{subject to} \quad \min_u \big(g_u + \gamma P_u \Phi r\big) - \Phi r \ge 0, \tag{2}$$
which is precisely the LP considered in [6] for approximating the optimal cost-to-go
function in a discounted MDP with discount factor $\gamma$. Let $\bar r$ be an optimal solution
to (2). For any function $V : S \mapsto \Re_+$, let $\beta_V = \gamma \|\max_u P_u V\|_{\infty,1/V}$. We call $V$
a Lyapunov function if $\beta_V < 1$. The following result can be established using an
analysis entirely analogous to that carried out in [6]:

Theorem 6.1. If $\beta_{\Phi v} < 1$ and $\Phi v' = \mathbf{1}$ for some $v, v' \in \Re^K$, then
$$\eta_{\gamma,\bar r} - \eta_\gamma^* \;\le\; \frac{2\gamma\, c^T \Phi v}{1 - \beta_{\Phi v}} \;\min_{r \in \Re^K} \|h_\gamma^* - \Phi r\|_{\infty,1/\Phi v}.$$
A comparison of Theorems 3.1 and 6.1 reveals benefits afforded by the slack function. We consider the situation where $\psi = \Phi v$, which makes the bounds directly
comparable. An immediate observation is that, even though $\psi$ and $\Phi v$ play analogous roles in the bounds, $\psi$ is not required to be a Lyapunov function. In this sense,
Theorem 3.1 is stronger than Theorem 6.1. Moreover, if $\theta = \pi_{\gamma,u_\gamma^*}^T \psi$, we have
$$\frac{c^T \Phi v}{1 - \beta_{\Phi v}} \;\ge\; \max_u\, c^T (I - \gamma P_u)^{-1} \Phi v \;\ge\; c^T (I - \gamma P_{u_\gamma^*})^{-1} \psi \;=\; \frac{\theta}{1-\gamma}.$$
Hence, the first term, which appears in the bound of Theorem 6.1, grows with the
largest mixing time among all policies, whereas the last term, which appears in
the bound of Theorem 3.1, only depends on the mixing time of an optimal policy.
As discussed in [6], appropriate choice of $c$ (there referred to as the state-relevance
weights) can be important for the error bound of Theorem 6.1 to scale well with the
number of states. In [8], it is argued that some form of weighting of states in terms
of a metric of relevance should continue to be important when considering average-cost
problems. An LP-based algorithm is also presented in [8], but the results are
far weaker than the ones we have presented in this paper, and we suspect that the
LP-based algorithm of [8] will not scale well to high-dimensional problems.
Some guidance is offered in [6] regarding how $c$ might be chosen. However, this
is ultimately left as a manual task. An important contribution of this paper is
the path-following algorithm proposed in Section 4, which aims at automating an
effective choice of $c$.
Acknowledgments
This research was supported in part by the NSF under CAREER Grant ECS-9985229 and by the ONR under grant MURI N00014-00-1-0637.
References
[1] D. Adelman, "A Price-Directed Approach to Stochastic Inventory/Routing," preprint, 2002, to appear in Operations Research.
[2] L. C. Baird, "Residual Algorithms: Reinforcement Learning with Function Approximation," ICML, 1995.
[3] D. P. Bertsekas and J. N. Tsitsiklis, Neuro-Dynamic Programming, Athena Scientific, Belmont, MA, 1996.
[4] D. P. Bertsekas, Dynamic Programming and Optimal Control, second edition, Athena Scientific, Belmont, MA, 2001.
[5] J. A. Boyan and A. W. Moore, "Generalization in Reinforcement Learning: Safely Approximating the Value Function," NIPS, 1995.
[6] D. P. de Farias and B. Van Roy, "The Linear Programming Approach to Approximate Dynamic Programming," Operations Research, Vol. 51, No. 6, November-December 2003, pp. 850-865. Preliminary version appeared in NIPS, 2001.
[7] D. P. de Farias and B. Van Roy, "On Constraint Sampling in the Linear Programming Approach to Approximate Dynamic Programming," Mathematics of Operations Research, Vol. 29, No. 3, 2004, pp. 462-478.
[8] D. P. de Farias and B. Van Roy, "Approximate Linear Programming for Average-Cost Dynamic Programming," NIPS, 2003.
[9] C. B. Garcia and W. I. Zangwill, Pathways to Solutions, Fixed Points, and Equilibria, Prentice-Hall, Englewood Cliffs, NJ, 1981.
[10] G. J. Gordon, "Stable Function Approximation in Dynamic Programming," ICML, 1995.
[11] C. Guestrin, D. Koller, R. Parr, and S. Venkataraman, "Efficient Solution Algorithms for Factored MDPs," Journal of Artificial Intelligence Research, Volume 19, 2003, pp. 399-468. Preliminary version appeared in NIPS, 2001.
[12] M. E. Harmon, L. C. Baird, and A. H. Klopf, "Advantage Updating Applied to a Differential Game," NIPS, 1995.
[13] R. Munos, "Error Bounds for Approximate Policy Iteration," ICML, 2003.
[14] J. N. Tsitsiklis and B. Van Roy, "Feature-Based Methods for Large Scale Dynamic Programming," Machine Learning, Vol. 22, 1996, pp. 59-94.
[15] D. Schuurmans and R. Patrascu, "Direct Value Approximation for Factored MDPs," NIPS, 2001.
[16] P. J. Schweitzer and A. Seidman, "Generalized Polynomial Approximation in Markovian Decision Processes," Journal of Mathematical Analysis and Applications, Vol. 110, 1985, pp. 568-582.
automated:1 affect:1 fit:1 forthcoming:1 idea:2 regarding:1 whether:2 penalty:4 action:7 useful:1 involve:1 discount:2 generate:2 http:1 problematic:1 nsf:1 discrete:1 promise:1 vol:4 drawn:1 clarity:1 geometrically:1 year:1 decision:5 appendix:1 comparable:1 entirely:1 bound:17 ct:9 strength:1 constraint:7 precisely:1 ri:2 x2:2 afforded:1 span:1 min:5 performing:1 according:3 combination:4 seidman:1 smaller:1 lp:39 evolves:1 making:1 s1:8 invariant:1 sided:1 daniela:1 discus:4 slack:8 turn:1 available:4 operation:3 appropriate:1 alternative:2 original:4 denotes:2 include:1 ensure:1 yx:2 establish:2 approximating:6 objective:2 question:1 primary:1 dependence:2 exhibit:1 dp:7 distance:3 restart:7 athena:2 gracefully:3 bvr:1 reason:1 length:1 minimizing:4 balance:1 difficult:1 relate:1 design:1 countable:2 policy:26 perform:1 upper:1 observation:1 markov:2 finite:7 descent:1 november:1 immediate:1 defining:1 situation:1 perturbation:7 introduced:1 pair:1 mechanical:1 required:2 established:1 nip:6 suggested:1 appeared:2 program:2 max:7 event:1 boyan:1 residual:1 technology:1 mdps:4 carried:1 prior:3 literature:2 geometric:1 loss:2 expect:1 highlight:1 proportional:1 versus:1 offered:1 principle:1 course:1 supported:1 arriving:2 infeasible:1 tsitsiklis:2 side:1 weaker:1 understand:1 institute:1 unmanageable:1 munos:1 absolute:1 van:5 benefit:1 curve:4 transition:2 pert:1 made:1 reinforcement:2 far:1 approximate:17 dealing:1 global:1 reveals:1 alternatively:1 search:3 continuous:1 why:1 promising:1 nature:1 reasonably:1 career:1 schuurmans:1 inventory:1 complex:1 pk:2 main:1 s2:10 edition:1 turbed:1 referred:1 strengthened:1 weighting:1 rk:6 theorem:15 xt:3 specific:1 decay:1 dominates:1 exists:1 effectively:1 importance:1 ci:5 magnitude:1 exu:1 garcia:1 simply:1 explore:1 patrascu:1 scalar:2 applies:1 pucci:1 corresponds:4 determines:1 ma:2 viewed:4 sized:1 price:1 feasible:1 change:1 infinite:4 except:2 operates:1 wt:2 called:1 total:1 diverging:1 meaningful:1 klopf:1 indicating:1 select:1 highdimensional:1 support:1 relevance:5 |
1,786 | 2,622 | The power of feature clustering: An application
to object detection
Shai Avidan
Mitsubishi Electric Research Labs
201 Broadway
Cambridge, MA 02139
[email protected]
Moshe Butman
Adyoron Intelligent Systems LTD.
34 Habarzel St.
Tel-Aviv, Israel
[email protected]
Abstract
We give a fast rejection scheme that is based on image segments and
demonstrate it on the canonical example of face detection. However, instead of focusing on the detection step we focus on the rejection step and
show that our method is simple and fast to learn, thus making it
an excellent pre-processing step to accelerate standard machine learning
classifiers, such as neural-networks, Bayes classifiers or SVM. We decompose a collection of face images into regions of pixels with similar
behavior over the image set. The relationships between the mean and
variance of image segments are used to form a cascade of rejectors that
can reject over 99.8% of image patches, thus only a small fraction of the
image patches must be passed to a full-scale classifier. Moreover, the
training time for our method is much less than an hour, on a standard PC.
The shape of the features (i.e. image segments) we use is data-driven,
they are very cheap to compute and they form a very low dimensional
feature space in which exhaustive search for the best features is tractable.
1 Introduction
This work is motivated by recent advances in object detection algorithms that use a cascade
of rejectors to quickly detect objects in images. Instead of using a full fledged classifier on
every image patch, a sequence of increasingly more complex rejectors is applied. Nonface image patches will be rejected early on in the cascade, while face image patches will
survive the entire cascade and will be marked as a face.
The work of Viola & Jones [15] demonstrated the advantages of such an approach. Other
researchers suggested similar methods [4, 6, 12]. Common to all these methods is the
realization that simple and fast classifiers are enough to reject large portions of the image, leaving more time to use more sophisticated, and time consuming, classifiers on the
remaining regions of the image.
All these "fast" methods must address three issues. First is the feature space in which to
work, second is a fast method to calculate the features from the raw image data and third is
the feature selection algorithm to use.
Early attempts assumed the feature space to be the space of pixel values. Elad et al. [4]
suggest the maximum rejection criteria that chooses rejectors that maximize the rejection
rate of each classifier. Keren et al. [6] use anti-face detectors by assuming normal distribution on the background. A different approach was suggested by Romdhani et al. [12],
that constructed the full SVM classifier first and then approximated it with a sequence or
support vector rejectors that were calculated using non-linear optimization. All the above
mentioned method need to ?touch? every pixel in an image patch at least once before they
can reject the image patch.
Viola & Jones [15], on the other hand, construct a huge feature space that consists of
combined box regions that can be quickly computed from the raw pixel data using the
?integral image? and use a sequential feature selection algorithm for feature selection. The
rejectors are combined using a variant of AdaBoost [2]. Li et al [7] replaced the sequential
forward searching algorithm with a float search algorithm (which can backtrack as well).
An important advantage of the huge feature space advocated by Viola & Jones is that now
image patches can be rejected with an extremely small number of operations and there is
no need to ?touch? every pixel in the image patch at least once.
Many of these methods focus on developing fast classifiers that are often constructed in a
greedy manner. This precludes classifiers that might demonstrate excellent classification
results but are slower to compute, such as the methods suggested by Schneiderman et al.
[8], Rowley et al. [13], Sung and Poggio [10] or Heisele et al [5].
Our method offers a way to accelerate "slow" classification methods by using a preprocessing rejection step. Our rejection scheme is fast to train and very effective
in rejecting the vast majority of false patterns. On the canonical face detection example, it
took our method much less than an hour to train and it was able to reject over 99.8% of the
image patches, meaning that we can effectively accelerate standard classifiers by several
orders of magnitude, without changing the classifier at all.
Like other, ?fast?, methods we use a cascade of rejectors, but we use a different type of
filters and a different type of feature selection method. We take our features to be the
approximated mean and variance of image segments, where every image segment consists
of pixels that have similar behavior across the entire image set. As a result, our features
are derived from the data and do not have to be hand crafted for the particular object of
interest. In fact they do not even have to form contiguous regions. We use only a small
number of representative pixels to calculate the approximated mean and variance, which
makes our features very fast to compute during detection (in our experiments we found that
our first rejector rejects almost 50% of all image patches, using just 8 pixels). Finally, the
number of segments we use is quite small which makes it possible to exhaustively calculate
all possible rejectors based on single, pairs and triplets of segments in order to find the best
rejectors in every step of the cascade. This is in contrast to methods that construct a huge
feature bank and use a greedy feature selection algorithm to choose ?good? features from
it. Taken together, our algorithm is fast to train and fast to test. In our experiments we train
on a database that contains several thousands of face images and roughly half-a-million
non-faces in less than an hour on an average PC and our rejection module runs at several
frames per second.
2 Algorithm
At the core of our algorithm is the realization that feature representation is a crucial ingredient in any classification system. For instance, the Viola-Jones box filters are extremely efficient to compute using the "integral image" but they form a large feature space, thus placing
a heavy computational burden on the feature selection algorithm that follows. Moreover,
empirically they show that the first feature selected by their method correspond to meaningful regions in the face. This suggests that it might be better to focus on features that
correspond to coherent regions in the image. This leads to the idea of image segmentation,
that breaks an ensemble of images into regions of pixels that exhibit similar temporal behavior. Given the image segmentation we take our features to be the mean and variance of
each segment, giving us a very small feature space to work on (we chose to segment the
face image into eight segments). Unfortunately, calculating the mean and variance of an
image segment requires going over all the pixels in the segment, a time consuming process. However, since the segments represent similar-behaving pixels we found that we can
approximate the calculation of the mean and variance of the entire segment using quite a
small number of representative pixels. In our experiments, four pixels were enough to adequately represent segments that contain several tens of pixels. Now that we have a very
small feature space to work with, and a fast way to extract features from raw pixels data
we can exhaustively search for all possible combinations of single, pairs or triplets of features to find the best rejector in every stage. The remaining patterns should be passed to a
standard classifier for final validation.
2.1 Image Segments
Image segments were already presented in the past [1] for the problem of classification of
objects such as faces or vehicles. We briefly repeat the presentation for the paper to be
self-contained. An ensemble of scaled, cropped and aligned images of a given object (say
faces) can be approximated by its leading principal components. This is done by stacking
the images (in vector form) in a design matrix A and taking the leading eigenvectors of the
covariance matrix $C = \frac{1}{N} A A^T$, where $N$ is the number of images. The leading principal
components are the leading eigenvectors of the covariance matrix C and they form a basis
that approximates the space of all the columns of the design matrix A [11, 9]. But instead
of looking at the columns of A look at the rows of A. Each row in A gives the intensity
profile of a particular pixel, i.e., each row represents the intensity values that a particular
pixel takes in the different images in the ensemble. If two pixels come from the same
region of the face they are likely to have the same intensity values and hence have a strong
temporal correlation. We wish to find this correlations and segment the image plane into
regions of pixels that have similar temporal behavior. This approach broadly falls under
the category of Factor Analysis [3] that seeks to find a low-dimensional representation that
captures the correlations between features.
Let Ax be the x-th row of the design matrix A. Then Ax is the intensity profile of pixel x
(We address pixels with a single number because the images are represented in a scan-line
vector form). That is, Ax is an N -dimensional vector (where N is the number of images)
that holds the intensity values of pixel x in each image in the ensemble. Pixels x and y
are temporally correlated if the dot product of rows Ax and Ay is approaching 1 and are
temporally uncorrelated if the dot-product is approaching 0.
Thus, to find temporally correlated pixels all we need to do is run a clustering algorithm
on the rows of the design matrix A. In particular, we used the k-means algorithm on the
rows of the matrix A but any method of Factor Analysis can be used. As a result, the
image-plane is segmented into several (possibly non-continuous) segments of temporally
correlated pixels. Experiments in the past [1] showed good classification results on different
objects such as faces and vehicles.
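A minimal sketch of this segmentation step, assuming faces is an (N, H*W) array of scan-line face images, so that the rows of the design matrix A (one intensity profile per pixel) are the columns of faces. Normalizing each profile before running k-means is an assumption made here so that Euclidean clustering approximates the dot-product correlation criterion.

import numpy as np
from sklearn.cluster import KMeans

def segment_pixels(faces, n_segments=8, seed=0):
    profiles = faces.T                                    # one row per pixel
    norms = np.linalg.norm(profiles, axis=1, keepdims=True)
    profiles = profiles / (norms + 1e-8)                  # unit-length intensity profiles
    km = KMeans(n_clusters=n_segments, n_init=10, random_state=seed)
    return km.fit_predict(profiles)                       # segment label for every pixel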
2.2 Finding Representative Pixels
Our algorithm works by comparing the mean and variance properties of one or more image
segments. Unfortunately this requires touching every pixel in the image segment during
test time, thus slowing the classification process considerably. Therefor, during train time
we find a set of representative pixels that will be used during test time. Specifically, we
approximate every segment in a face image with a small number of representative pixels
Figure 1: Face segmentation and representative pixels. (a) Face segmentation and representative pixels. The face segmentation was computed using 1400 faces, each segment is
marked with a different color and the segments need not be contiguous. The crosses overlaid on the segments mark the representative pixels that were automatically selected by
our method. (b) Histogram of the difference between an approximated mean and the exact
mean of a particular segment (the light blue segment on the left). The histogram is peaked
at zero, meaning that the representative pixels give a good approximation.
that approximate the mean and variance of the entire image segment. Define $\mu_i(x_j)$ to be
the true mean of segment $i$ of face $j$, and let $\hat\mu_i(x_j)$ be its approximation, defined as
$$\hat\mu_i(x_j) = \frac{\sum_{j=1}^{k} x_j}{k},$$
where $\{x_j\}_{j=1}^{k}$ are a subset of pixels in segment $i$ of pattern $j$. We use a greedy algorithm
that incrementally searches for the next representative pixel that minimizes
$$\sum_{j=1}^{n} \big(\hat\mu_i(x_j) - \mu_i(x_j)\big)^2$$
and adds it to the collection of representative pixels of segment $i$. In practice we use four
representative pixels per segment. The representative pixels computed this way are used
for computing both the approximated mean and the approximated variance of every test
pattern. Figure 1 shows how well this approximation works in practice.
Given the representative pixels, the approximated variance $\hat\sigma_i(x_j)$ of segment $i$ of pattern $j$
is given by:
$$\hat\sigma_i(x_j) = \sum_{j=1}^{k} \big|x_j - \hat\mu_i(x_j)\big|.$$
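A sketch of the greedy representative-pixel search for one segment, assuming faces is an (n, P) array of training patterns in scan-line form and seg lists the pixel indices of segment i. At test time, the approximated mean and variance of the segment use only the chosen pixels.

import numpy as np

def representative_pixels(faces, seg, k=4):
    true_mean = faces[:, seg].mean(axis=1)        # exact segment mean for every pattern
    chosen = []
    for _ in range(k):
        best, best_err = None, np.inf
        for p in seg:
            if p in chosen:
                continue
            approx = faces[:, chosen + [p]].mean(axis=1)
            err = np.sum((approx - true_mean) ** 2)
            if err < best_err:
                best, best_err = p, err
        chosen.append(best)
    return chosen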
2.3 The rejection cascade
We construct a rejection cascade that can quickly reject image patches, with minimal computational load. Our feature space consists of the approximated mean and variance of the
image segments. In our experiments we have 8 segments, each represented by its mean and
variance, giving rise to a 16D feature space. This feature space is very fast to compute, as
we need only four pixels to calculate the approximate mean and variance of the segment.
Because the feature space is so small we can exhaustively search for all classifiers on single,
pairs and triplets of segments. In addition this feature space gives enough information to
reject texture-less regions without the need to normalize the mean or variance of the entire
image patch. We next describe our rejectors in detail.
2.3.1 Feature rejectors
Now that we have segmented every image into several segments and approximated every
segment with a small number of representative pixels, we can exhaustively search for the
best combination of segments that will reject the largest number of non-face images. We
repeat this process until the improvement in rejection is negligible.
Given a training set of $P$ positive examples (i.e. faces) and $N$ negative examples, we construct the following linear rejectors and adjust the parameter $\lambda$ so that they will correctly
classify $d \cdot P$ (we use $d = 0.95$) of the face images, and save $r$, the number of negative
examples they correctly rejected, as well as the parameter $\lambda$.

1. For each segment $i$, find a bound on its approximated mean. Formally, find $\lambda$ s.t.
$$\hat\mu_i(x) > \lambda \quad \text{or} \quad \hat\mu_i(x) < \lambda$$
2. For each segment $i$, find a bound on its approximated variance. Formally, find $\lambda$ s.t.
$$\hat\sigma_i(x) > \lambda \quad \text{or} \quad \hat\sigma_i(x) < \lambda$$
3. For each pair of segments $i, j$, find a bound on the difference between their approximated means. Formally, find $\lambda$ s.t.
$$\hat\mu_i(x) - \hat\mu_j(x) > \lambda \quad \text{or} \quad \hat\mu_i(x) - \hat\mu_j(x) < \lambda$$
4. For each pair of segments $i, j$, find a bound on the difference between their approximated variances. Formally, find $\lambda$ s.t.
$$\hat\sigma_i(x) - \hat\sigma_j(x) > \lambda \quad \text{or} \quad \hat\sigma_i(x) - \hat\sigma_j(x) < \lambda$$
5. For each triplet of segments $i, j, k$, find a bound on the difference of the absolute
differences of their approximated means. Formally, find $\lambda$ s.t.
$$\big|\hat\mu_i(x) - \hat\mu_j(x)\big| - \big|\hat\mu_i(x) - \hat\mu_k(x)\big| > \lambda$$
This process is done only once to form a pool of rejectors. We do not re-train rejectors after
selecting a particular rejector.
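As an illustration of how one rejector in this pool might be fit, the following hedged sketch sets the threshold of a single-segment mean rejector. Here pos and neg are 1-D arrays holding the feature value (e.g., the approximated mean of segment i) on the P face and N non-face patterns; choosing the threshold by a quantile of the face distribution is an assumption, since the paper does not spell out how the parameter is tuned.

import numpy as np

def fit_one_sided_rejector(pos, neg, d=0.95):
    lo = np.quantile(pos, 1.0 - d)       # rejecting x < lo keeps d*P faces
    hi = np.quantile(pos, d)             # rejecting x > hi keeps d*P faces
    r_lo = int(np.sum(neg < lo))         # negatives rejected by each side
    r_hi = int(np.sum(neg > hi))
    if r_lo >= r_hi:
        return ("reject_below", lo, r_lo)
    return ("reject_above", hi, r_hi)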
2.3.2 Training
We form the cascade of rejectors from a large pattern-vs.-rejector binary table $T$, where
each entry $T(i, j)$ is 1 if rejector $j$ rejects pattern $i$. Because the table is binary we can
store every entry in a single bit, and therefore a table of 513,000 patterns and 664 rejectors
can easily fit in memory. We then use a greedy algorithm to pick the next rejector with
the highest rejection score $r$. We repeat this process until $r$ falls below some predefined
threshold.
1. Sum each column and choose column (rejector) j with the highest sum.
2. For each entry $T(i, j)$ in column $j$ that is equal to one, zero row $i$.
3. Go to step 1
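A compact sketch of this greedy selection, assuming T is an (n_patterns x n_rejectors) boolean array and min_score is the predefined stopping threshold (the value 50 below is an illustrative placeholder).

import numpy as np

def build_cascade(T, min_score=50):
    T = T.copy().astype(bool)
    cascade = []
    while True:
        scores = T.sum(axis=0)            # step 1: column sums
        j = int(np.argmax(scores))
        if scores[j] < min_score:         # stop once the best rejector adds little
            break
        cascade.append(j)
        T[T[:, j], :] = False             # step 2: zero every row rejected by j
    return cascade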
The entire process is extremely fast and takes only several minutes, including I/O. The idea
of creating a rejector pool in advance was independently suggested by [16] to accelerate
the Viola-Jones training time. We obtain 50 rejectors using this method. Figure 2a shows
the rejection rate of this cascade on a training set of 513,000 images, as well as the number
of arithmetic operations it takes. Note that roughly 50% of all patterns are rejected by the
first rejector using only 12 operations. During testing we compute the approximated mean
and variance only when they are needed and not beforehand.
Figure 2: (a) Rejection rate on training set. The x-axis counts the number of arithmetic
operations needed for rejection. The y-axis is the rejection rate on a training set of about
half-a-million non-faces and about 1500 faces. Note that almost 50% of the false patterns
are rejected with just 12 operations. Overall rejection rate of the feature rejectors on the
training set is 88%; it drops to about 80% on the CMU+MIT database. (b) Rejection rate
as a function of image segmentation method. We trained our system using four types of
image segmentation and show the resulting rejection rates. We compare our image segmentation approach
against naive segmentation of the image plane into horizontal blocks, vertical blocks or
random segmentation. In each case we trained a cascade of 21 rejectors and calculated their
accumulative rejection rate on our training set. Clearly working with our image segments
gives the best results.
We wanted to confirm our intuition that indeed only meaningful regions in the image can
produce such results, and we therefore performed the following experiment. We segmented
the pixels in the image using four different methods: (1) using our image segments, (2)
into 8 horizontal blocks, (3) into 8 vertical blocks, and (4) into 8 randomly generated segments.
Figure 2b shows that image segments give the best results, by far.
The remaining false positive patterns are passed on to the next rejectors, as described next.
2.4 Texture-less region rejection
We found that the feature rejectors defined in the previous section are doing poorly in
rejecting texture-less regions. This is because we do not perform any sort of variance
normalization on the image patch, a step that will slow us down. However, by now we
have computed the approximated mean and variance of all the image segments and we
can construct rejectors based on all of them to reject texture-less regions. In particular we
construct the following two rejectors
1. Reject all image patches where the variance of all 8 approximated means falls
below a threshold. Formally, find $\lambda$ s.t.
$$\sigma\big(\hat\mu_i(x)\big) < \lambda, \quad i = 1 \ldots 8$$
2. Reject all image patches where the variance of all 8 approximated variances falls
below a threshold. Formally, find $\lambda$ s.t.
$$\sigma\big(\hat\sigma_i(x)\big) < \lambda, \quad i = 1 \ldots 8$$
2.5 Linear classifier
Finally, we construct a cascade of 10 linear rejectors, using all 16 features (i.e., the approximated means and variances of all 8 segments).
Figure 3: Examples. We show examples from the CMU+MIT dataset. Our method correctly rejected over 99.8% of the image patches in the image, leaving only a handful of
image patches to be tested by a ?slow?, full scale classifier.
2.6 Multi-detection heuristic
As noted by previous authors [15], face classifiers are insensitive to small changes in position and scale, and therefore we adopt the heuristic that only four overlapping detections are
declared a face. This helps reduce the number of detected rectangles around a face, as
well as reject some spurious false detections.
3 Experiments
We have tested our rejection scheme on the standard CMU+MIT database [13]. We created
a pyramid at increasing scales of 1.1 and scanned every scale for rectangles of size 20 x 20
in jumps of two pixels. We calculate the approximated mean and variance only when they
are needed, to save time.
Overall, our rejection scheme rejected over 99.8% of the image patches, while correctly detecting 93% of the faces. On average the feature rejectors rejected roughly 80% of all image
patches, the texture-less region rejectors rejected an additional 10% of the image patches, the
linear rejectors rejected an additional 5%, and the multi-detection heuristic rejected the remaining image patterns. The average rejection rate per image is over 99.8%. This is not enough
for face detection on its own, as there are roughly 615,000 image patches per image in the CMU+MIT
database, and our rejector cascade passes, on average, 870 false positive image patches per
image. These patterns have to be passed to a full-scale classifier to be properly rejected.
Figure 3 gives some examples of our system. Note that the system correctly detects all the
faces, while allowing a small number of false positives.
We have also experimented with rescaling the features, instead of rescaling the image, but
noted that the number of false positives increased by about 5% for every fixed detection
rate we tried (all the results reported here use image pyramids).
4 Summary and Conclusions
We presented a fast rejection scheme that is based on image segments and demonstrated it
on the canonical example of face detection. Image segments are made of regions of pixels
with similar behavior over the image set. The shape of the features (i.e. image segments)
we use is data-driven, and they are very cheap to compute. The relationships between the
mean and variance of image segments are used to form a cascade of rejectors that can reject
over 99.8% of the image patches, thus only a small fraction of the image patches must be
passed to a full-scale classifier. The training time for our method is much less than an hour,
on a standard PC. We believe that our method can be used to accelerate standard machine
learning algorithms that are too slow for object detection, by serving as a gate keeper that
rejects most of the false patterns.
References
[1] Shai Avidan. EigenSegments: A spatio-temporal decomposition of an ensemble of images. In European Conference on Computer Vision (ECCV), May 2002, Copenhagen, Denmark.
[2] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In Computational Learning Theory: Eurocolt 95, pages 23-37. Springer-Verlag, 1995.
[3] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. Wiley-Interscience publication, 1973.
[4] M. Elad, Y. Hel-Or and R. Keshet. Rejection based classifier for face detection. Pattern Recognition Letters 23 (2002) 1459-1471.
[5] B. Heisele, T. Serre, S. Mukherjee, and T. Poggio. Feature reduction and hierarchy of classifiers for fast object detection in video images. In Proc. CVPR, volume 2, pages 18-24, 2001.
[6] D. Keren, M. Osadchy, and C. Gotsman. Antifaces: A novel, fast method for image detection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 23(7):747-761, 2001.
[7] S.Z. Li, L. Zhu, Z.Q. Zhang, A. Blake, H.J. Zhang and H. Shum. Statistical Learning of Multi-View Face Detection. In Proceedings of the 7th European Conference on Computer Vision, Copenhagen, Denmark, May 2002.
[8] Henry Schneiderman and Takeo Kanade. A statistical model for 3d object detection applied to faces and cars. In IEEE Conference on Computer Vision and Pattern Recognition. IEEE, June 2000.
[9] L. Sirovich and M. Kirby. Low-dimensional procedure for the characterization of human faces. In Journal of the Optical Society of America 4, 510-524.
[10] K.-K. Sung and T. Poggio. Example-based Learning for View-Based Human Face Detection. In IEEE Transactions on Pattern Analysis and Machine Intelligence 20(1):39-51, 1998.
[11] M. Turk and A. Pentland. Eigenfaces for recognition. In Journal of Cognitive Neuroscience, vol. 3, no. 1, 1991.
[12] S. Romdhani, P. Torr, B. Schoelkopf, and A. Blake. Computationally efficient face detection. In Proc. Intl. Conf. Computer Vision, pages 695-700, 2001.
[13] H. A. Rowley, S. Baluja, and T. Kanade. Neural network-based face detection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 20(1):23-38, 1998.
[14] V. Vapnik. The Nature of Statistical Learning Theory. Springer, N.Y., 1995.
[15] P. Viola and M. Jones. Rapid Object Detection using a Boosted Cascade of Simple Features. In IEEE Conference on Computer Vision and Pattern Recognition, Hawaii, 2001.
[16] J. Wu, J. M. Rehg, and M. D. Mullin. Learning a Rare Event Detection Cascade by Direct Feature Selection. To appear in Advances in Neural Information Processing Systems 16 (NIPS*2003), MIT Press, 2004.
| 2622 |@word briefly:1 duda:1 seek:1 tried:1 covariance:2 decomposition:1 pick:1 reduction:1 contains:1 score:1 selecting:1 shum:1 past:2 com:2 comparing:2 must:3 takeo:1 shape:2 cheap:2 wanted:1 drop:1 v:1 greedy:4 half:2 selected:2 intelligence:3 slowing:1 plane:3 core:1 detecting:1 boosting:1 characterization:1 zhang:2 constructed:2 direct:1 consists:2 manner:1 indeed:1 rapid:1 roughly:4 behavior:5 multi:3 detects:1 eurocolt:1 automatically:1 increasing:1 moreover:2 israel:1 finding:1 sung:2 temporal:4 every:14 classifier:21 scaled:1 appear:1 before:2 negligible:1 aat:1 positive:5 osadchy:1 might:2 chose:1 suggests:1 testing:1 practice:2 block:4 procedure:1 heisele:2 cascade:16 reject:15 pre:1 suggest:1 selection:7 keeper:1 demonstrated:2 go:1 independently:1 rehg:1 searching:1 hierarchy:1 exact:1 approximated:21 recognition:4 mukherjee:1 database:4 module:1 capture:1 calculate:5 thousand:1 region:16 schoelkopf:1 highest:2 sirovich:1 mentioned:1 intuition:1 rowley:2 exhaustively:4 trained:3 segment:60 basis:1 accelerate:5 easily:1 represented:2 america:1 train:5 fast:17 effective:1 describe:1 detected:1 exhaustive:1 quite:2 heuristic:3 elad:2 cvpr:1 say:1 precludes:1 final:1 sequence:2 advantage:2 took:1 product:2 aligned:1 realization:2 poorly:1 normalize:1 rejector:11 intl:1 produce:1 object:11 help:1 textureless:1 advocated:1 strong:1 come:1 filter:2 human:2 generalization:1 decompose:1 hold:1 around:1 blake:2 normal:1 overlaid:1 early:2 adopt:1 proc:2 largest:1 mit:5 clearly:1 boosted:1 publication:1 derived:1 focus:3 ax:4 june:1 improvement:1 properly:1 contrast:1 detect:1 entire:6 spurious:1 going:1 pixel:44 issue:1 classification:7 overall:2 equal:1 once:3 construct:7 placing:1 represents:1 jones:6 survive:1 look:1 peaked:1 intelligent:1 randomly:1 replaced:1 n1:1 attempt:1 detection:24 huge:3 interest:1 adjust:1 pc:3 light:1 predefined:1 integral:2 poggio:3 re:1 mullin:1 minimal:1 merl:1 instance:1 column:5 classify:1 increased:1 contiguous:2 yoav:1 stacking:1 subset:1 entry:3 rare:1 too:1 reported:1 considerably:1 chooses:1 combined:2 st:1 pool:2 together:1 quickly:3 choose:2 possibly:1 hawaii:1 cognitive:1 creating:1 conf:1 leading:4 rescaling:2 li:2 vehicle:2 break:1 performed:1 lab:1 view:2 doing:1 portion:1 bayes:1 sort:1 shai:2 minimize:1 variance:25 ensemble:5 correspond:2 raw:3 rejecting:2 backtrack:1 researcher:1 detector:1 romdhani:2 against:1 turk:1 dataset:1 color:1 car:1 segmentation:11 sophisticated:1 focusing:1 adaboost:1 done:2 box:2 rejected:12 just:2 stage:1 correlation:3 until:2 hand:3 working:1 horizontal:3 touch:2 overlapping:1 incrementally:1 believe:1 aviv:1 serre:1 contain:1 true:1 accumulative:1 adequately:1 hence:1 during:5 nonface:1 self:1 noted:2 criterion:1 ay:1 theoretic:1 demonstrate:2 image:93 meaning:2 novel:1 common:1 empirically:1 insensitive:1 volume:1 million:2 approximates:1 cambridge:1 therefor:4 henry:1 dot:2 behaving:1 add:1 recent:1 showed:1 touching:1 driven:2 store:1 verlag:1 binary:2 additional:2 maximize:1 arithmetic:2 full:6 segmented:3 calculation:1 offer:1 cross:1 hart:1 variant:1 avidan:3 vision:5 cmu:4 histogram:2 represent:2 normalization:1 pyramid:2 background:1 cropped:1 addition:1 float:1 leaving:2 crucial:1 pass:1 enough:4 xj:11 fit:1 approaching:2 reduce:1 idea:2 motivated:1 passed:5 ltd:1 hel:1 eigenvectors:2 ten:1 category:1 schapire:1 canonical:3 neuroscience:1 per:5 correctly:5 blue:1 broadly:1 serving:1 vol:1 four:6 threshold:3 changing:1 rectangle:2 vast:1 fraction:2 sum:2 run:2 schneiderman:2 letter:1 almost:2 wu:1 
patch:25 decision:1 bit:1 bound:5 scanned:1 handful:1 scene:1 declared:1 extremely:3 optical:1 developing:1 combination:2 across:1 increasingly:1 butman:1 kirby:1 making:1 taken:1 computationally:1 count:1 needed:3 tractable:1 operation:6 eight:1 save:2 slower:1 gate:1 clustering:2 remaining:4 calculating:1 giving:2 society:1 already:1 moshe:1 exhibit:1 keren:2 majority:1 denmark:2 assuming:1 relationship:2 unfortunately:2 robert:1 broadway:1 negative:2 rise:1 design:4 perform:1 allowing:1 vertical:3 anti:1 pentland:1 viola:6 looking:1 frame:1 intensity:5 pair:5 copenhagen:2 coherent:1 learned:1 hour:4 nip:1 trans:2 address:2 able:1 suggested:4 below:3 pattern:21 including:1 memory:1 video:1 power:1 event:1 zhu:1 scheme:5 temporally:4 axis:2 created:1 extract:1 naive:1 kj:1 freund:1 ingredient:1 validation:1 bank:1 uncorrelated:1 heavy:1 row:8 eccv:1 summary:1 repeat:3 fledged:1 fall:4 eigenfaces:1 face:40 taking:1 absolute:1 calculated:2 forward:1 collection:2 author:1 preprocessing:1 jump:1 made:1 far:1 transaction:1 approximate:4 confirm:1 rejectors:29 assumed:1 consuming:2 spatio:1 search:6 continuous:1 triplet:4 table:3 kanade:2 nature:1 tel:1 excellent:2 complex:1 european:2 electric:1 pk:1 profile:2 crafted:1 representative:15 slow:4 position:1 wish:1 third:1 minute:1 down:1 load:1 experimented:1 svm:2 burden:1 consist:1 false:8 sequential:2 effectively:1 vapnik:1 texture:4 magnitude:1 keshet:1 rejection:27 likely:1 contained:1 springer:2 ma:1 marked:2 presentation:1 change:1 specifically:1 torr:1 baluja:1 principal:2 meaningful:2 formally:7 support:1 mark:1 scan:1 tested:2 wileyinterscience:1 correlated:3 |
1,787 | 2,623 | Stable adaptive control with online learning
Andrew Y. Ng
Stanford University
Stanford, CA 94305, USA
H. Jin Kim
Seoul National University
Seoul, Korea
Abstract
Learning algorithms have enjoyed numerous successes in robotic control
tasks. In problems with time-varying dynamics, online learning methods
have also proved to be a powerful tool for automatically tracking and/or
adapting to the changing circumstances. However, for safety-critical applications such as airplane flight, the adoption of these algorithms has
been significantly hampered by their lack of safety, such as "stability,"
guarantees. Rather than trying to show difficult, a priori, stability guarantees for specific learning methods, in this paper we propose a method
for "monitoring" the controllers suggested by the learning algorithm online, and rejecting controllers leading to instability.
an arbitrary online learning method is used with our algorithm to control
a linear dynamical system, the resulting system is stable.
1 Introduction
Online learning algorithms provide a powerful set of tools for automatically fine-tuning a
controller to optimize performance while in operation, or for automatically adapting to the
changing dynamics of a control problem. [2] Although one can easily imagine many complex learning algorithms (SVMs, gaussian processes, ICA, . . . ,) being powerfully applied
to online learning for control, for these methods to be widely adopted for applications such
as airplane flight, it is critical that they come with safety guarantees, specifically stability
guarantees. In our interactions with industry, we also found stability to be a frequently
raised concern for online learning. We believe that the lack of safety guarantees represents
a significant barrier to the wider adoption of many powerful learning algorithms for online
adaptation and control. It is also typically infeasible to replace formal stability guarantees
with only empirical testing: For example, to convincingly demonstrate that we can safely
fly a fleet of 100 aircraft for 10,000 hours would require $10^6$ hours of flight-tests.
The control literature contains many examples of ingenious stability proofs for various online learning schemes. It is impossible to do this literature justice here, but some examples
include [10, 7, 12, 8, 11, 5, 4, 9]. However, most of this work addresses only very specific
online learning methods, and usually quite simple ones (such as ones that switch between
only a finite number of parameter values using a specific, simple, decision rule, e.g., [4]).
In this paper, rather than trying to show difficult a priori stability guarantees for specific
algorithms, we propose a method for ?monitoring? an arbitrary learning algorithm being
used to control a linear dynamical system. By rejecting control values online that appear to
be leading to instability, our algorithm ensures that the resulting controlled system is stable.
2 Preliminaries
Following most work in control [6], we will consider control of a linear dynamical system.
Let $x_t \in \mathbb{R}^{n_x}$ be the $n_x$-dimensional state at time $t$. The system is initialized to $x_0 = \vec 0$. At
each time $t$, we select a control action $u_t \in \mathbb{R}^{n_u}$, as a result of which the state transitions to
$$x_{t+1} = A x_t + B u_t + w_t. \tag{1}$$
Here, $A \in \mathbb{R}^{n_x \times n_x}$ and $B \in \mathbb{R}^{n_x \times n_u}$ govern the dynamics of the system, and $w_t$ is a
disturbance term. We will not make any distributional assumptions about the source of the
disturbances $w_t$ for now (indeed, we will consider a setting where an adversary chooses
them from some bounded set). For many applications, the controls are chosen as a linear
function of the state:
$$u_t = K_t x_t. \tag{2}$$
Here, the $K_t \in \mathbb{R}^{n_u \times n_x}$ are the control gains. If the goal is to minimize the expected value
of a quadratic cost function over the states and actions $J = (1/T) \sum_{t=1}^{T} x_t^T Q x_t + u_t^T R u_t$
and the $w_t$ are gaussian, then we are in the LQR (linear quadratic regulation) control setting.
Here, $Q \in \mathbb{R}^{n_x \times n_x}$ and $R \in \mathbb{R}^{n_u \times n_u}$ are positive semi-definite matrices. In the infinite
horizon setting, under mild conditions there exists an optimal steady-state (or stationary)
gain matrix $K$, so that setting $K_t = K$ for all $t$ minimizes the expected value of $J$. [1]

We consider a setting in which an online learning algorithm (also called an adaptive control algorithm) is used to design a controller. Thus, on each time step $t$, an online algorithm
may (based on the observed states and action sequence so far) propose some new gain matrix $K_t$. If we follow the learning algorithm's recommendation, then we will start choosing
controls according to $u = K_t x$. More formally, an online learning algorithm is a function
$f : \cup_{t=1}^{\infty} (\mathbb{R}^{n_x} \times \mathbb{R}^{n_u})^t \mapsto \mathbb{R}^{n_u \times n_x}$ mapping from finite sequences of states and actions
$(x_0, u_0, \ldots, x_{t-1}, u_{t-1})$ to controller gains $K_t$. We assume that $f$'s outputs are bounded
($\|K_t\|_F \le \kappa$ for some $\kappa > 0$, where $\|\cdot\|_F$ is the Frobenius norm).
2.1 Stability
In classical control theory [6], probably the most important desideratum of a controlled
system is that it must be stable. Given a fixed adaptive control algorithm f and a fixed
sequence of disturbance terms w0 , w1 , . . ., the sequence of states xt visited is exactly determined by the equations
$$K_t = f(x_0, u_0, \ldots, x_{t-1}, u_{t-1}); \quad x_{t+1} = A x_t + B K_t x_t + w_t, \quad t = 0, 1, 2, \ldots \tag{3}$$
Thus, for fixed f , we can think of the (controlled) dynamical system as a mapping from
the sequence of disturbance terms wt to the sequence of states xt . We now give the most
commonly-used definition of stability, called BIBO stability (see, e.g., [6]).
Definition. A system controlled by f is bounded-input bounded-output (BIBO) stable if,
given any constant c1 > 0, there exists some constant c2 > 0 so that for all sequences of
disturbance terms satisfying ||wt ||2 ? c1 (for all t = 1, 2, . . .), the resulting state sequence
satisfies ||xt ||2 ? c2 (for all t = 1, 2, . . .).
Thus, a system is BIBO stable if, under bounded disturbances to it (possibly chosen by an
adversary), the state remains bounded and does not diverge.
We also define the t-th step dynamics matrix Dt to be Dt = A+BKt . Note therefore that
the state transition dynamics of the system (right half of Equation 3) may now be written
xt+1 = Dt xt + wt . Further, the dependence of xt on the wt ?s can be expressed as follows:
$$x_t = w_{t-1} + D_{t-1} x_{t-1} = w_{t-1} + D_{t-1}(w_{t-2} + D_{t-2} x_{t-2}) = \cdots \tag{4}$$
$$= w_{t-1} + D_{t-1} w_{t-2} + D_{t-1} D_{t-2} w_{t-3} + \cdots + D_{t-1} \cdots D_1 w_0. \tag{5}$$
Since the number of terms in the sum above grows linearly with $t$, to ensure BIBO stability
of a system (i.e., that $x_t$ remains bounded for all $t$) it is usually necessary for the terms in
the sum to decay rapidly, so that the sum remains bounded. For example, if it were true that
$\|D_{t-1} \cdots D_{t-k+1} w_{t-k}\|_2 \le (1-\epsilon)^k$ for some $0 < \epsilon < 1$, then the terms in the sequence
above would be norm bounded by a geometric series, and thus the sum is bounded. More
generally, the disturbance $w_t$ contributes a term $D_{t+k-1} \cdots D_{t+1} w_t$ to the state $x_{t+k}$, and
we would like $D_{t+k-1} \cdots D_{t+1} w_t$ to become small rapidly as $k$ becomes large (or, in the
control parlance, for the effects of the disturbance $w_t$ on $x_{t+k}$ to be attenuated quickly).
If $K_t = K$ for all $t$, then we say that we are using a (nonadaptive) stationary controller $K$. In
this setting, it is straightforward to check if our system is stable. Specifically, it is BIBO
stable if and only if the magnitudes of all the eigenvalues of $D = D_t = A + BK$ are strictly
less than 1. [6] To informally see why, note that the effect of $w_t$ on $x_{t+k}$ can be written
$D^{k-1} w_t$ (as in Equation 5). Moreover, $|\lambda_{\max}(D)| < 1$ implies $D^{k-1} w_t \to 0$ as $k \to \infty$.
Thus, the disturbance $w_t$ has a negligible influence on $x_{t+k}$ for large $k$. More precisely, it
is possible to show that, under the assumption that $\|w_t\| \le c_1$, the sequence on the right
hand side of (5) is upper-bounded by a geometrically decreasing sequence, and thus its sum
must also be bounded. [6]
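This check is a single line of linear algebra; a minimal sketch:

import numpy as np

def is_bibo_stable(A, B, K):
    # A stationary controller K is BIBO stable iff every eigenvalue of
    # D = A + B K has magnitude strictly less than 1.
    D = A + B @ K
    return bool(np.max(np.abs(np.linalg.eigvals(D))) < 1.0)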
It was easy to check for stability when $K_t$ was stationary, because the mapping from the
$w_t$'s to the $x_t$'s was linear. In more general settings, if $K_t$ depends in some complex way
on $x_1, \ldots, x_{t-1}$ (which in turn depend on $w_0, \ldots, w_{t-2}$), then $x_{t+1} = A x_t + B K_t x_t + w_t$
will be a nonlinear function of the sequence of disturbances.¹ This makes it significantly
more difficult to check for BIBO stability of the system.
Further, unlike the stationary case, it is well-known that $\lambda_{\max}(D_t) < 1$ (for all $t$) is insufficient to ensure stability. For example, consider a system where $D_t = D_{\mathrm{odd}}$ if $t$ is odd, and
$D_t = D_{\mathrm{even}}$ otherwise, where²
$$D_{\mathrm{odd}} = \begin{bmatrix} 0.9 & 0 \\ 10 & 0.9 \end{bmatrix}; \qquad D_{\mathrm{even}} = \begin{bmatrix} 0.9 & 10 \\ 0 & 0.9 \end{bmatrix}. \tag{6}$$
Note that $\lambda_{\max}(D_t) = 0.9 < 1$ for all $t$. However, if we pick $w_0 = [1\ 0]^T$ and $w_1 = w_2 = \ldots = 0$, then (following Equation 5) we have
$$x_{2t+1} = D_{2t} D_{2t-1} D_{2t-2} \cdots D_2 D_1 w_0 \tag{7}$$
$$= (D_{\mathrm{even}} D_{\mathrm{odd}})^t w_0 \tag{8}$$
$$= \begin{bmatrix} 100.81 & 9 \\ 9 & 0.81 \end{bmatrix}^t w_0. \tag{9}$$
Thus, even though the $w_t$'s are bounded, we have $\|x_{2t+1}\|_2 \ge (100.81)^t$, showing that the
state sequence is not bounded. Hence, this system is not BIBO stable.
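This example is easy to verify numerically; a short sketch:

import numpy as np

D_odd = np.array([[0.9, 0.0], [10.0, 0.9]])
D_even = np.array([[0.9, 10.0], [0.0, 0.9]])
print(np.linalg.eigvals(D_odd), np.linalg.eigvals(D_even))   # both are {0.9, 0.9}

x = np.array([1.0, 0.0])            # w_0; all later disturbances are zero
for t in range(1, 11):              # apply D_1, D_2, ..., D_10
    x = (D_odd if t % 2 == 1 else D_even) @ x
print(np.linalg.norm(x))            # ~1e10: grows at least like 100.81^t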
3 Checking for stability
If $f$ is a complex learning algorithm, it is typically very difficult to guarantee that the
resulting system is BIBO stable. Indeed, even if $f$ switches between only two specific sets
of gains $K$, and if $w_0$ is the only non-zero disturbance term, it can still be undecidable to
determine whether the state sequence remains bounded. [3] Rather than try to give a priori
guarantees on $f$, we instead propose a method for ensuring BIBO stability of a system
by "monitoring" the control gains proposed by $f$, and rejecting gains that appear to be
leading to instability. We start computing controls according to a set of gains $\hat K_t$ only if it
is accepted by the algorithm.
From the discussion in Section 2.1, the criterion for accepting or rejecting a set of gains $\hat K_t$
cannot simply be to check if $\lambda_{\max}(A + B\hat K_t) = \lambda_{\max}(\hat D_t) < 1$. Specifically, $\lambda_{\max}(D_2 D_1)$
is not bounded by $\lambda_{\max}(D_2)\lambda_{\max}(D_1)$, and so even if $\lambda_{\max}(D_t)$ is small for all $t$ (which
would be the case if the gains $K_t$ for any fixed $t$ could be used to obtain a stable stationary
controller), the quantity $\lambda_{\max}(\prod_{\tau=1}^{t} D_\tau)$ can still be large, and thus $(\prod_{\tau=1}^{t} D_\tau) w_0$ can
be large. However, the following holds for the largest singular value $\sigma_{\max}$ of matrices.
Though the result is quite standard, for the sake of completeness we include a proof.³

Proposition 3.1: Let any matrices $P \in \mathbb{R}^{l \times m}$ and $Q \in \mathbb{R}^{m \times n}$ be given. Then
$\sigma_{\max}(PQ) \le \sigma_{\max}(P)\,\sigma_{\max}(Q)$.

Proof. $\sigma_{\max}(PQ) = \max_{u,v : \|u\|_2 = \|v\|_2 = 1} u^T P Q v$. Let $u^*$ and $v^*$ be a pair of vectors attaining the maximum in the previous equation. Then $\sigma_{\max}(PQ) = u^{*T} P Q v^* \le \|u^{*T} P\|_2 \cdot \|Q v^*\|_2 \le \max_{v,u : \|v\|_2 = \|u\|_2 = 1} \|u^T P\|_2 \cdot \|Qv\|_2 = \sigma_{\max}(P)\,\sigma_{\max}(Q)$.
Thus, if we could ensure that $\sigma_{\max}(D_t) \le 1 - \epsilon$ for all $t$, we would find that the influence
of $w_0$ on $x_t$ has norm bounded by $\|D_{t-1} D_{t-2} \cdots D_1 w_0\|_2 = \sigma_{\max}(D_{t-1} \cdots D_1 w_0) \le \sigma_{\max}(D_{t-1}) \cdots \sigma_{\max}(D_1)\, \|w_0\|_2 \le (1-\epsilon)^{t-1} \|w_0\|_2$ (since $\|v\|_2 = \sigma_{\max}(v)$ if $v$ is a
vector). Thus, the influence of $w_t$ on $x_{t+k}$ goes to 0 as $k \to \infty$.

¹ Even if $f$ is linear in its inputs so that $K_t$ is linear in $x_1, \ldots, x_{t-1}$, the state sequence's dependence on $(w_0, w_1, \ldots)$ is still nonlinear because of the multiplicative term $K_t x_t$ in the dynamics (Equation 3).
² Clearly, such a system can be constructed with appropriate choices of $A$, $B$ and $K_t$.
³ The largest singular value of $M$ is $\sigma_{\max}(M) = \sigma_{\max}(M^T) = \max_{u,v : \|u\|_2 = \|v\|_2 = 1} u^T M v = \max_{u : \|u\|_2 = 1} \|Mu\|_2$. If $x$ is a vector, then $\sigma_{\max}(x)$ is just the $L_2$-norm of $x$.
However, it would be an overly strong condition to demand that $\sigma_{\max}(D_t) < 1 - \epsilon$ for every
$t$. Specifically, there are many stable, stationary controllers that do not satisfy this. For
example, either one of the matrices $D_t$ in (6), if used as the stationary dynamics, is stable
(since $\lambda_{\max} = 0.9 < 1$). Thus, it should be acceptable for us to use a controller with either
of these $D_t$ (so long as we do not switch between them on every step). But these $D_t$ have
$\sigma_{\max} \approx 10.1 > 1$, and thus would be rejected if we were to demand that $\sigma_{\max}(D_t) < 1 - \epsilon$
for every $t$. Thus, we will instead ask only for a weaker condition, that for all $t$,
$$\sigma_{\max}(D_t \cdot D_{t-1} \cdot \cdots \cdot D_{t-N+1}) < 1 - \epsilon. \tag{10}$$
this condition (for sufficiently large N ):
Proposition 3.2: Let any 0 < < 1 and any D with ?max (D) < 1 be given. Then there
exists N0 > 0 so that for all N ? N0 , we have that ?max (DN ) ? 1 ? .
The proof follows from the fact that ?max (D) < 1 implies D N ? 0 as N ? ?. Thus,
given any fixed, stable controller, if N is sufficiently large, it will satisfy (10). Further,
if (10) holds, then w0 ?s influence on xkN +1 is bounded by
||DkN ? DkN ?1 ? ? ? D1 w0 ||2 ? ?max (DkN ? DkN ?1 ? ? ? D1 )||w0 ||2
Qk?1
?
i=0 ?max (DiN +N DiN +N ?1 ? ? ? DiN +1 )||w0 ||2
? (1 ? )k ||w0 ||2 ,
(11)
which goes to 0 geometrically quickly as k ? ?. (The first and second inequalities above
follow from Proposition 3.1.) Hence, the disturbances? effects are attenuated quickly.
To ensure that (10) holds, we propose the following algorithm. Below, N > 0 and 0 < ε < 1
are parameters of the algorithm.

1. Initialization: Assume we have some initial stable controller K_0, so that
   λ_max(D_0) < 1, where D_0 = A + BK_0. Also assume that σ_max(D_0^N) ≤ 1 − ε.^4
   Finally, for all values of τ < 0, define K_τ = K_0 and D_τ = D_0.

2. For t = 1, 2, ...
   (a) Run the online learning algorithm f to compute the next set of proposed
       gains K̂_t = f(x_0, u_0, ..., x_{t−1}, u_{t−1}).
   (b) Let D̂_t = A + BK̂_t, and check if

       σ_max(D̂_t D_{t−1} D_{t−2} D_{t−3} ⋯ D_{t−N+1}) ≤ 1 − ε        (12)
       σ_max(D̂_t^2 D_{t−1} D_{t−2} ⋯ D_{t−N+2}) ≤ 1 − ε        (13)
       σ_max(D̂_t^3 D_{t−1} ⋯ D_{t−N+3}) ≤ 1 − ε        (14)
       ...
       σ_max(D̂_t^N) ≤ 1 − ε        (15)

   (c) If all of the σ_max's above are at most 1 − ε, we ACCEPT K̂_t, and set
       K_t = K̂_t. Otherwise, REJECT K̂_t, and set K_t = K_{t−1}.
   (d) Let D_t = A + BK_t, and pick our action at time t to be u_t = K_t x_t.
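A direct transcription of this filter might look like the following minimal sketch (our own rendering, assuming A, B and an initial stable K_0 are given; no attempt is made to reuse partial products across the N checks):

import numpy as np

def sigma_max(M):
    return np.linalg.norm(M, 2)          # largest singular value

class GainFilter:
    """Accept/reject proposed gains so that condition (10) always holds."""

    def __init__(self, A, B, K0, N, eps):
        self.A, self.B, self.N, self.eps = A, B, N, eps
        self.K = K0
        D0 = A + B @ K0
        assert sigma_max(np.linalg.matrix_power(D0, N)) <= 1 - eps
        self.history = [D0] * N          # D_{t-1}, D_{t-2}, ... (newest first)

    def step(self, K_hat):
        """Run checks (12)-(15) on K_hat; return the gains actually used."""
        D_hat = self.A + self.B @ K_hat
        accept = True
        for m in range(1, self.N + 1):   # m-th check uses D_hat ** m
            P = np.linalg.matrix_power(D_hat, m)
            for D_past in self.history[: self.N - m]:
                P = P @ D_past
            if sigma_max(P) > 1 - self.eps:
                accept = False           # REJECT: keep the previous gains
                break
        if accept:
            self.K = K_hat               # ACCEPT
        D_t = self.A + self.B @ self.K
        self.history = [D_t] + self.history[:-1]
        return self.K

In use, one would call step with the learner's proposed K̂_t each period and then apply u_t = K_t x_t with the returned gains.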
We begin by showing that, if we use this algorithm to "filter" the gains output by the online
learning algorithm, Equation (10) holds.

Lemma 3.3: Let f and w_0, w_1, ... be arbitrary, and let K_0, K_1, K_2, ... be the sequence
of gains selected using the algorithm above. Let D_t = A + BK_t be the corresponding
dynamics matrices. Then for every −∞ < t < ∞, we have^5

    σ_max(D_t · D_{t−1} · ⋯ · D_{t−N+1}) ≤ 1 − ε.        (16)

^4 From Proposition 3.2, it must be possible to choose N satisfying this.
^5 As in the algorithm description, D_t = D_0 for t < 0.

Proof. Let any t be fixed, and let τ = max({0} ∪ {t' : 1 ≤ t' ≤ t, K̂_{t'} was accepted}).
Thus, τ is the index of the time step at which we most recently accepted a set of gains from
f (or 0 if no such gains exist). So K_τ = K_{τ+1} = ⋯ = K_t, since the gains stay the same
on every time step on which we do not accept a new one. This also implies

    D_τ = D_{τ+1} = ⋯ = D_t.        (17)

We will treat the cases (i) τ = 0, (ii) 1 ≤ τ ≤ t − N + 1 and (iii) τ > t − N + 1,
τ ≥ 1 separately. In case (i), τ = 0, and we did not accept any gains after time 0. Thus
K_t = ⋯ = K_{t−N+1} = K_0, which implies D_t = ⋯ = D_{t−N+1} = D_0. But from
Step 1 of the algorithm, we had chosen N sufficiently large that σ_max(D_0^N) ≤ 1 − ε. This
shows (16). In case (ii), τ ≤ t − N + 1 (and τ > 0). Together with (17), this implies

    D_t · D_{t−1} · ⋯ · D_{t−N+1} = D_τ^N.        (18)

But σ_max(D_τ^N) ≤ 1 − ε, because at time τ, when we accepted K̂_τ, we would have checked
that Equation (15) holds. In case (iii), τ > t − N + 1 (and τ > 0). From (17) we have

    D_t · D_{t−1} · ⋯ · D_{t−N+1} = D_τ^{t−τ+1} · D_{τ−1} · D_{τ−2} · ⋯ · D_{t−N+1}.        (19)

But when we accepted K̂_τ, we would have checked that (12)–(15) hold, and the (t − τ + 1)-st
equation in (12)–(15) is exactly that the largest singular value of (19) is at most 1 − ε.
Theorem 3.4: Let an arbitrary learning algorithm f be given, and suppose we use f to
control a system, but using our algorithm to accept/reject gains selected by f. Then the
resulting system is BIBO stable.

Proof. Suppose ||w_t||_2 ≤ c_1 for all t. For convenience also define w_{−1} = w_{−2} = ⋯ = 0,
and let ρ' = ||A||_F + κ||B||_F. From (5),

    ||x_t||_2 = || Σ_{k=0}^∞ D_{t−1} D_{t−2} ⋯ D_{t−k} w_{t−k−1} ||_2
             ≤ c_1 Σ_{k=0}^∞ σ_max(D_{t−1} D_{t−2} ⋯ D_{t−k})
             = c_1 Σ_{j=0}^∞ Σ_{k=0}^{N−1} σ_max(D_{t−1} D_{t−2} ⋯ D_{t−jN−k})
             ≤ c_1 Σ_{j=0}^∞ Σ_{k=0}^{N−1} σ_max( (∏_{l=0}^{j−1} D_{t−lN−1} D_{t−lN−2} ⋯ D_{t−lN−N}) · D_{t−jN−1} ⋯ D_{t−jN−k} )
             ≤ c_1 Σ_{j=0}^∞ Σ_{k=0}^{N−1} (1 − ε)^j · σ_max(D_{t−jN−1} ⋯ D_{t−jN−k})
             ≤ c_1 Σ_{j=0}^∞ Σ_{k=0}^{N−1} (1 − ε)^j · (ρ')^k
             ≤ c_1 (1/ε) N (1 + ρ')^N.

The third inequality follows from Lemma 3.3, and the fourth inequality follows from our
assumption that ||K_t||_F ≤ κ, so that σ_max(D_t) ≤ ||D_t||_F ≤ ||A||_F + ||B||_F ||K_t||_F ≤
||A||_F + κ||B||_F = ρ'. Hence, ||x_t||_2 remains uniformly bounded for all t.
Theorem 3.4 guarantees that, using our algorithm, we can safely apply any adaptive control
algorithm f to our system. As discussed previously, it is difficult to exactly characterize
the class of BIBO-stable controllers, and thus the set of controllers that we can safely
accept. However, it is possible to show a partial converse to Theorem 3.4: that certain
large, "reasonable" classes of adaptive control methods will always have their proposed
controllers accepted by our method. For example, it is a folk theorem in control that if we
use only stable sets of gains (K : λ_max(A + BK) < 1), and if we switch "sufficiently
slowly" between them, then the system will be stable. For our specific algorithm, we can show
the following:

Theorem 3.5: Let any 0 < ε < 1 be fixed, and let K ⊂ R^{n_u × n_x} be a finite set of controller
gains, so that for all K ∈ K, we have λ_max(A + BK) < 1. Then there exist constants N_0
and k so that for all N ≥ N_0, if (i) our algorithm is run with parameters N, ε, and (ii)
the adaptive control algorithm f picks only gains in K, and moreover switches gains no
more than once every k steps (i.e., K̂_t ≠ K̂_{t+1} ⇒ K̂_{t+1} = K̂_{t+2} = ⋯ = K̂_{t+k}), then all
controllers proposed by f will be accepted.
Figure 1: (a) Typical state sequence (first component x_{t,1} of the state vector) using the switching
controllers from Equation (6). (Note the log scale on the vertical axis.) (b) Typical state sequence
using our algorithm and the same controller f. (N = 150, ε = 0.1) (c) Index of the controller used
over time, when using our algorithm.
The proof is omitted due to space constraints. A similar result also holds if K is infinite
(provided ∃c > 0 such that ∀K ∈ K, λ_max(A + BK) ≤ 1 − c), and if the proposed gains change on
every step but the differences ||K̂_t − K̂_{t+1}||_F between successive values are small.
4 Experiments
We now present experimental results illustrating the behavior of our algorithm. In the first
experiment, we apply the switching controller given in (6). Figure 1a shows a typical state
sequence resulting from using this controller without using our algorithm to monitor it (with
w_t's drawn IID from a standard Normal distribution). Even though λ_max(D_t) < 1 for all t, the
controlled system is unstable, and the state rapidly diverges. In contrast, Figure 1b shows
the result of rerunning the same experiment, but using our algorithm to accept or reject
controllers. The resulting system is stable, and the states remain small. Figure 1c also
shows which of the two controllers in (6) is being used at each time, when our algorithm
is used. (If we did not use our algorithm, so that the controller switched on every time step,
this figure would alternate between 0 and 1 on every step.) We see that our algorithm is
rejecting most of the proposed switches to the controller; specifically, it is permitting f to
switch between the two controllers only every 140 steps or so. By slowing down the rate at
which the controllers are switched, our algorithm causes the system to become stable (compare Theorem 3.5).
In our second example, we will consider a significantly more complex setting representative
of a real-world application. We consider controlling a Boeing 747 aircraft in a setting
where the states are only partially observable. We have a four-dimensional state vector x_t
consisting of the sideslip angle β, bank angle φ, yaw rate, and roll rate of the aircraft in
cruise flight. The two-dimensional controls u_t are the rudder and aileron deflections. The
state transition dynamics are given as in Equation (1)^6 with IID Gaussian disturbance terms
w_t. But instead of observing the states directly, on each time step t we observe only

    y_t = C x_t + v_t,        (20)

where y_t ∈ R^{n_y}, and the disturbances v_t ∈ R^{n_y} are distributed Normal(0, Σ_v). If the system is stationary (i.e., if A, B, C, Σ_v, Σ_w were fixed), then this is a standard LQG problem,
and optimal estimates x̂_t of the hidden states x_t are obtained using a Kalman filter:

    x̂_{t+1} = L_t (y_{t+1} − C(A x̂_t + B u_t)) + A x̂_t + B u_t,        (21)

where L_t ∈ R^{n_x × n_y} is the Kalman filter gain matrix. Further, it is known that, in LQG,
the optimal steady-state controller is obtained by picking actions according to u_t = K_t x̂_t,
where K_t are appropriate control gains. Standard algorithms exist for solving for the optimal steady-state gain matrices L and K. [1]
In our aircraft control problem,

    C = [ 0 1 0 0
          0 0 0 1 ],

so that only two of the four state variables (the bank angle and the roll rate) are observed directly.
Further, the noise in the observations varies over time.
Specifically, sometimes the variance of the first observation is Σ_{v,11} = Var(v_{t,1}) = 2
^6 The parameters A ∈ R^{4×4} and B ∈ R^{4×2} are obtained from a standard 747 ("yaw damper")
model, which may be found in, e.g., the Matlab control toolbox, and various texts such as [6].
Figure 2: (a) Typical evolution of the true Σ_{v,11} over time (straight lines) and the online approximation to
it. (b) Same as (a), but showing an example in which the learned variance estimate became negative.
while the variance of the second observation is Σ_{v,22} = Var(v_{t,2}) = 0.5; and sometimes
the values of the variances are reversed: Σ_{v,11} = 0.5, Σ_{v,22} = 2. (Σ_v ∈ R^{2×2} is diagonal in
all cases.) This models a setting in which, at various times, either of the two sensors may
be the more reliable/accurate one.

Since the reliability of the sensors changes over time, one might want to apply an online
learning algorithm (such as online stochastic gradient ascent) to dynamically estimate the
values of Σ_{v,11} and Σ_{v,22}. Figure 2 shows a typical evolution of Σ_{v,11} over time, and the
result of using a stochastic gradient ascent learning algorithm to estimate Σ_{v,11}. Empirically, a stochastic gradient algorithm seems to do fairly well at tracking the true Σ_{v,11}.
Thus, one simple adaptive control scheme would be to take the current estimate of Σ_v at
each time step t, apply a standard LQG solver to it (along with A, B, C, Σ_w)
to obtain the optimal steady-state Kalman filter and control gains, and use the values obtained as our proposed gains L_t and K_t for time t. This gives a simple method for adapting
our controller and Kalman filter parameters to the varying noise parameters.
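A minimal sketch of this scheme is below (our own rendering; the LQG cost matrices Q, R and the learning rate are illustrative assumptions, and scipy's discrete-Riccati solver is used for the steady-state gains):

import numpy as np
from scipy.linalg import solve_discrete_are

def propose_gains(A, B, C, Sigma_w, Sigma_v_est, Q, R):
    """Steady-state LQG control and Kalman gains for the current noise estimate."""
    S = solve_discrete_are(A, B, Q, R)                      # control Riccati
    K = -np.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)      # u_t = K x_hat_t
    P = solve_discrete_are(A.T, C.T, Sigma_w, Sigma_v_est)  # filtering Riccati
    L = P @ C.T @ np.linalg.inv(C @ P @ C.T + Sigma_v_est)  # Kalman gain
    return K, L

def variance_step(sigma, innovation, lr=1e-3):
    """One stochastic gradient-ascent step on the Gaussian log-likelihood of the
    innovation.  This is exactly the kind of update that can overshoot zero and
    leave a negative variance estimate (the bug discussed below)."""
    grad = 0.5 * (innovation**2 / sigma**2 - 1.0 / sigma)
    return sigma + lr * grad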
The adaptive control algorithm that we have described is sufficiently complex that it is extremely difficult to prove that it gives a stable controller. Thus, to guarantee BIBO stability
of the system, one might choose to run it with our algorithm. To do so, note that the "state"
of the controlled system at each time step is fully characterized by the true world state x_t
and the internal state estimate x̂_t of the Kalman filter. So, we can define an augmented
state vector x̃_t = [x_t; x̂_t] ∈ R^8. Because x_{t+1} is linear in u_t (which is in turn linear in x̂_t),
and similarly x̂_{t+1} is linear in x_t and u_t (substitute (20) into (21)), for a fixed set of gains
K_t and L_t, we can express x̃_{t+1} as a linear function of x̃_t plus a disturbance:

    x̃_{t+1} = D̃_t x̃_t + w̃_t.        (22)

Here, D̃_t depends implicitly on A, B, C, L_t and K_t. (The details are not complex, but are
omitted due to space.) Thus, if a learning algorithm is proposing new K̂_t and L̂_t matrices
on each time step, we can ensure that the resulting system is BIBO stable by computing
the corresponding D̃_t as a function of K̂_t and L̂_t, and running our algorithm (with the D̃_t's
replacing the D_t's) to decide if the proposed gains should be accepted. In the event that
they are rejected, we set K_t = K_{t−1}, L_t = L_{t−1}.
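The paper omits the details of D̃_t; one consistent way to assemble it from Equations (20)–(21), with u_t = K x̂_t, is sketched below. The block structure is our own derivation and should be checked against the intended conventions:

import numpy as np

def augmented_dynamics(A, B, C, K, L):
    """D-tilde for the augmented state [x_t; x_hat_t].

    Substituting y_{t+1} = C(A x_t + B K x_hat_t + w_t) + v_{t+1} into (21) gives
        x_{t+1}     = A x_t     + B K x_hat_t
        x_hat_{t+1} = L C A x_t + (A + B K - L C A) x_hat_t
    plus disturbance terms, which are collected into w-tilde.
    """
    top = np.hstack([A, B @ K])
    bottom = np.hstack([L @ C @ A, A + B @ K - L @ C @ A])
    return np.vstack([top, bottom])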
It turns out that there is a very subtle bug in the online learning algorithm. Specifically,
we were using standard stochastic gradient ascent to estimate Σ_{v,11} (and Σ_{v,22}), and on
every step there is a small chance that the gradient update overshoots zero, causing Σ_{v,11}
to become negative. While the probability of this occurring on any particular time step is
small, a Boeing 747 flown for sufficiently many hours using this algorithm will eventually
encounter this bug and obtain an invalid, negative variance estimate. When this occurs, the
Matlab LQG solver for the steady-state gains outputs L = 0 on this and all successive time
steps.^7 If this were implemented on a real 747, it would cause the aircraft to ignore all observations
(Equation 21), enter divergent oscillations (see Figure 3a), and crash. However, using our
algorithm, the behavior of the system is shown in Figure 3b. When the learning algorithm

^7 Even if we had anticipated this specific bug and clipped Σ_{v,11} to be non-negative, the LQG
solver (from the Matlab controls toolbox) still outputs invalid gains, since it expects nonsingular Σ_v.
Figure 3: (a) Typical plot of state (x_{t,1}) using the (buggy) online learning algorithm, in a sequence
in which L was set to zero part-way through the sequence. (Note the scale on the vertical axis; this plot is
typical of a linear system entering divergent/unstable oscillations.) (b) Results on the same sequence of
disturbances as in (a), but using our algorithm.
encounters the bug, our algorithm successfully rejects the changes to the gains that lead to
instability, thereby keeping the system stable.
5 Discussion
Space constraints preclude a full discussion, but these ideas can also be applied to verifying
the stability of certain nonlinear dynamical systems. For example, if the A (and/or B)
matrix depends on the current state but is always expressible as a convex combination
of some fixed A_1, ..., A_k, then we can guarantee BIBO stability by ensuring that (10)
holds for all combinations of D_t = A_i + BK_t defined using any A_i (i = 1, ..., k).^8 The
same idea also applies to settings where A may be changing (perhaps adversarially) within
some bounded set, or if the dynamics are unknown so that we need to verify stability
with respect to a set of possible dynamics. In simulation experiments with the Stanford
autonomous helicopter, using a linearization of the nonlinear dynamics, our algorithm
was also empirically successful at stabilizing an adaptive control algorithm that normally
drives the helicopter into unstable oscillations.
References
[1] B. Anderson and J. Moore. Optimal Control: Linear Quadratic Methods. Prentice-Hall, 1989.
[2] Karl Astrom and Bjorn Wittenmark. Adaptive Control (2nd Edition). Addison-Wesley, 1994.
[3] V. D. Blondel and J. N. Tsitsiklis. The boundedness of all products of a pair of matrices is
undecidable. Systems and Control Letters, 41(2):135–140, 2000.
[4] Michael S. Branicky. Analyzing continuous switching systems: Theory and examples. In Proc.
American Control Conference, 1994.
[5] Michael S. Branicky. Stability of switched and hybrid systems. In Proc. 33rd IEEE Conf.
Decision Control, 1994.
[6] G. Franklin, J. Powell, and A. Emami-Naeini. Feedback Control of Dynamic Systems. Addison-Wesley, 1995.
[7] M. Johansson and A. Rantzer. On the computation of piecewise quadratic Lyapunov functions.
In Proceedings of the 36th IEEE Conference on Decision and Control, 1997.
[8] H. Khalil. Nonlinear Systems (3rd ed). Prentice Hall, 2001.
[9] Daniel Liberzon, João Hespanha, and A. S. Morse. Stability of switched linear systems: A
Lie-algebraic condition. Syst. & Contr. Lett., 3(37):117–122, 1999.
[10] J. Nakanishi, J.A. Farrell, and S. Schaal. A locally weighted learning composite adaptive controller with structure adaptation. In International Conference on Intelligent Robots, 2002.
[11] T. J. Perkins and A. G. Barto. Lyapunov design for safe reinforcement learning control. In Safe
Learning Agents: Papers from the 2002 AAAI Symposium, pages 23–30, 2002.
[12] Jean-Jacques Slotine and Weiping Li. Applied Nonlinear Control. Prentice Hall, 1990.
^8 Checking all k^N such combinations takes time exponential in N, but it is often possible to use
very small values of N, sometimes including N = 1, if the states x_t are linearly reparameterized
(x'_t = M x_t) to minimize σ_max(D_0).
1,788 | 2,624 | Modeling Conversational Dynamics as a
Mixed-Memory Markov Process
Tanzeem Choudhury
Intel Research
[email protected]
Sumit Basu
Microsoft Research
[email protected]
Abstract
In this work, we quantitatively investigate the ways in which a
given person influences the joint turn-taking behavior in a
conversation. After collecting an auditory database of social
interactions among a group of twenty-three people via wearable
sensors (66 hours of data each over two weeks), we apply speech
and conversation detection methods to the auditory streams. These
methods automatically locate the conversations, determine their
participants, and mark which participant was speaking when. We
then model the joint turn-taking behavior as a Mixed-Memory
Markov Model [1] that combines the statistics of the individual
subjects' self-transitions and the partners' cross-transitions. The
mixture parameters in this model describe how much each person's
individual behavior contributes to the joint turn-taking behavior of
the pair. By estimating these parameters, we thus estimate how
much influence each participant has in determining the joint turn-taking behavior.
We show how this measure correlates
significantly with betweenness centrality [2], an independent
measure of an individual's importance in a social network. This
result suggests that our estimate of conversational influence is
predictive of social influence.
1 Introduction
People's relationships are largely determined by their social interactions, and the
nature of their conversations plays a large part in defining those interactions. There
is a long history of work in the social sciences aimed at understanding the
interactions between individuals and the influences they have on each others'
behavior. However, existing studies of social network interactions have either been
restricted to online communities, where unambiguous measurements about how
people interact can be obtained, or have been forced to rely on questionnaires or
diaries to get data on face-to-face interactions. Survey-based methods are error
prone and impractical to scale up. Studies show that self-reports correspond poorly
to communication behavior as recorded by independent observers [3].
In contrast, we have used wearable sensors and recent advances in speech
processing techniques to automatically gather information about conversations:
when they occurred, who was involved, and who was speaking when. Our goal was
then to see if we could examine the influence a given speaker had on the turn-taking
behavior of her conversational partners. Specifically, we wanted to see if we could
better explain the turn-taking transitions observed in a given conversation between
subjects i and j by combining the transitions typical to i and those typical to j. We
could then interpret the contribution from i as her influence on the joint turn-taking
behavior.
In this paper, we first describe how we extract speech and conversation information
from the raw sensor data, and how we can use this to estimate the underlying social
network. We then detail how we use a Mixed-Memory Markov Model to combine
the individuals ' statistics. Finally, we show the performance of our method on our
collected data and how it correlates well with other metrics of social influence.
2 Sensing and Modeling Face-to-face Communication Networks
Although people heavily rely on email, telephone, and other virtual means of
communication, high-complexity information is primarily exchanged through face-to-face interaction [4]. Prior work on sensing face-to-face networks has been based on
proximity measures [5],[6], a weak approximation of the actual communication network.
Our focus is to model the network based on conversations that take place within a
community. To do this, we need to gather data from real-world interactions.
We thus used an experiment conducted at MIT [7] in which 23 people agreed to wear the
sociometer, a wearable data acquisition board [7],[8]. The device stored audio
information from a single microphone at 8 KHz. During the experiment the users wore
the device both indoors and outdoors for six hours a day for 11 days. The participants
were a mix of students, facuity, and administrative support staff who were distributed
across different floors of a laboratory building and across different research groups.
3 Speech and Conversation Detection
Given the set of auditory streams of each subject, we now have the problem of
detecting who is speaking when and to whom they are speaking. We break this
problem into two parts: voicing/speech detection and conversation detection.
3.1 Voicing and Speech Detection
To detect the speech, we use the linked-HMM model for voicing and speech
detection presented in [9]. This structure models the speech as two layers (see
Figure 1); the lower level hidden state represents whether the current frame of audio
is voiced or unvoiced (i.e., whether the audio in the frame has a harmonic structure,
as in a vowel), while the second level represents whether we are in a speech or nonspeech segment. The principle behind the model is that while there are many voiced
sounds in our environment (car horns, tones, computer sounds, etc.), the dynamics
of voiced/unvoiced transitions provide a unique signature for human speech; the
higher level is able to capture this dynamics since the lower level 's transitions are
dependent on this variable.
speech layer (S[t] ∈ {0, 1})
voicing layer (V[t] ∈ {0, 1})
observation layer (3 features)

Figure 1: Graphical model for the voicing and speech detector.
To apply this model to data, the 8 kHz audio is split into 256-sample frames (32
milliseconds) with a 128-sample overlap. Three features are then computed: the
non-initial maximum of the noisy autocorrelation, the number of autocorrelation
peaks, and the spectral entropy. The features were modeled as a Gaussian with
diagonal covariance. The model was then trained on 8000 frames of fully labeled
data. We chose this model because of its robustness to noise and distance from the
microphone : even at 20 feet away more than 90% of voiced frames were detected
with negligible false alarms (see [9]).
The results from this model are the binary sequences v[t] and s[t], signifying
whether the frame is voiced and whether it is in a speech segment, for all frames of
the audio.
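For reference, the three per-frame features can be computed along the following lines; the exact definitions below are our own reconstruction of fairly standard choices, not necessarily the ones used in [9]:

import numpy as np

def frame_features(frame):
    """Per-frame observations for the voicing model: non-initial autocorrelation
    maximum, number of autocorrelation peaks, and spectral entropy."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    ac = ac / (ac[0] + 1e-12)                      # normalized autocorrelation
    non_initial_max = ac[1:].max()                 # feature 1
    peaks = int(np.sum((ac[1:-1] > ac[:-2]) & (ac[1:-1] > ac[2:])))  # feature 2
    p = np.abs(np.fft.rfft(frame)) ** 2
    p = p / (p.sum() + 1e-12)
    spectral_entropy = float(-np.sum(p * np.log2(p + 1e-12)))        # feature 3
    return non_initial_max, peaks, spectral_entropy

def frames(signal, size=256, hop=128):
    """256-sample frames with 128-sample overlap, as in the text (8 kHz audio)."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, hop)]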
3.2 Conversation Detection
Once the voicing and speech segments are identified, we are still left with the
problem of determining who was talking with whom and when. To approach this,
we use the method of conversation detection described in [10]. The basic idea is
simple: since the speech detection method described above is robust to distance, the
voicing segments v[t] of all the participants in the conversation will be picked up by
the detector in all of the streams (this is referred to as a "mixed stream" in [10]).
We can then examine the mutual information of the binary voicing estimates
between each person as a matching measure. Since both voicing streams will be
nearly identical, the mutual information should peak when the two participants are
either involved in a conversation or are overhearing a conversation from a nearby
group. However, we have the added complication that the streams are only roughly
aligned in time. Thus, we also need to consider a range of time shifts between the
streams. We can express the alignment measure a[k] for an offset of k between the
two voicing streams as follows:
"
p(v,[t]=i,v, [t-l]=j)
a[k] = l(vJt], v, [t - k]) = L." p(vJt] = i, v, [t - k] = j) log --.:...--'--'-'~----=-=---=---....::...:....i.j
p(vJt]=i)p(v, [t-k]=j)
where i and j take on values {O, l} for unvoiced and voiced states respectively.
The distributions for p(v\, vJ and its marginals are estimated over a window of one
minute (T=3750 frames). To see how well this measure performs, we examine an
example pair of subjects who had one five-minute conversation over the course of
half an hour. The streams are correctly aligned at k=0, and by examining the value
of a[k] over a large range we can investigate its utility for conversation detection
and for aligning the auditory streams (see Figure 2).
The peaks are both strong and unique to the correct alignment (k=0), implying that
this is indeed a good measure for detecting conversations and aligning the audio in
our setup. By choosing the optimal threshold via the ROC curve, we can achieve
100% detection with no false alarms using time windows T of one minute.
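A minimal sketch of this alignment computation is below (our own rendering; with the 128-sample hop at 8 kHz, each voicing frame advances 16 ms, so T = 3750 frames spans one minute):

import numpy as np

def mutual_info_binary(x, y):
    """Mutual information (in nats) between two aligned binary sequences."""
    mi = 0.0
    for i in (0, 1):
        for j in (0, 1):
            p_ij = np.mean((x == i) & (y == j))
            p_i, p_j = np.mean(x == i), np.mean(y == j)
            if p_ij > 0:
                mi += p_ij * np.log(p_ij / (p_i * p_j))
    return mi

def alignment_measure(v1, v2, max_shift, T=3750):
    """a[k] = I(v1[t]; v2[t-k]) for k in [-max_shift, max_shift] over one
    T-frame window.  Assumes both streams are long enough for the slicing."""
    v1, v2 = np.asarray(v1), np.asarray(v2)
    a = {}
    for k in range(-max_shift, max_shift + 1):
        if k >= 0:
            a[k] = mutual_info_binary(v1[k:k + T], v2[:T])
        else:
            a[k] = mutual_info_binary(v1[:T], v2[-k:-k + T])
    return a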
Figure 2: Values of a[k] over ranges of 1.6 seconds, 2.5 minutes, and 11 minutes.
For each minute of data in each speaker's stream, we computed a[k] for k ranging
over +/- 30 seconds with T=3750 for each of the other 22 subjects in the study.
While we can now be confident that this will detect most of the conversations
between the subjects, since the speech segments from all the participants are being
picked up by all of their microphones (and those of others within earshot), there is
still the problem of determining who is speaking when. Fortunately, this is fairly
straightforward. Since the microphones for each subject are pre-calibrated to have
approximately equal energy response, we can classify each voicing segment among
the speakers by integrating the audio energy over the segment and choosing the
argmax over subjects.
It is still possible that the resulting subject does not correspond to the actual speaker
(she could simply be the one nearest to a non-subject who is speaking), so we determine
an overall threshold below which the assignment to the speaker is rejected. Both of these
methods are further detailed in
[10].
For this work, we rejected all conversations with more than two participants or
those that were simply overheard by the subjects. Finally, we tested the overall
performance of our method by comparing with a hand-labeling of conversation
occurrence and length from four subjects over 2 days (48 hours of data) and found
an 87% agreement with the hand labeling. Note that the actual performance may
have been better than this , as the labelers did miss some conversations.
3.3 The Turn-Taking Signal S_t^i

Finally, given the location of the conversations and who is speaking when, we can
create a new signal S_t^i for each subject i, defined over five-second blocks, which is
1 when the subject is holding the turn and 0 otherwise. We define the holder of the
turn as whoever has produced more speech during the five-second block. Thus,
within a given conversation between subjects i and j, the turn-taking signals are
complements of each other, i.e., S_t^i = ¬S_t^j.

4 Estimating the Social Network Structure
Once we have detected the pairwise conversations we can identify the communication
that occurs within the community and map the links between individuals. The link
structure is calculated from the total number of conversations each subj ect has with
others: interactions with another person that account for less than 5% of the subject's
total interactions are removed from the graph. To get an intuitive picture of the
interaction pattern within the group, we visualize the network diagram by performing
multi-dimensional scaling (MDS) on the geodesic distances (number of hops) between
the people (Figure 3). The nodes are colored according to the physical closeness of the
subjects' office locations. From this we see that people whose offices are in the same
general space seem to be close in the communication space as well.
Figure 3: Estimated network of subjects
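The layout step can be sketched as follows (our own minimal rendering, assuming the thresholded 0/1 adjacency matrix from above and a connected graph):

import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.manifold import MDS

def layout_network(adjacency):
    """2-D layout from geodesic (hop-count) distances on the thresholded graph."""
    hops = shortest_path(np.asarray(adjacency, dtype=float), unweighted=True)
    mds = MDS(n_components=2, dissimilarity="precomputed")
    return mds.fit_transform(hops)      # one (x, y) position per person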
5 Modeling the Influence of Turn-taking Behavior in Conversations
When we talk to other people we are influenced by their style of interaction.
Sometimes this influence is strong and sometimes insignificant - we are interested
in finding a way to quantify this effect. We probably all know people who have a
strong effect on our natural interaction style when we talk to them, causing us to
change our style as a result . For example, consider someone who never seems to
stop talking once it is her turn. She may end up imposing her style on us, and we
may consequently end up not having enough of a chance to talk, whereas in most
other circumstances we tend to be an active and equal participant.
In our case, we can model this effect via the signals we have already gathered. Let
us consider the influence subject j has on subject i. We can compute i's average
self-transition table, P(S_t^i | S_{t−1}^i), via simple counts over all conversations for subject
i (excluding those with j). Similarly, we can compute j's average cross-transition
table, P(S_t^k | S_{t−1}^j), over all subjects k (excluding i) with which j had conversations.
The question now is: for a given conversation between i and j, how much does j's
average cross-transition help explain P(S_t^i | S_{t−1}^i, S_{t−1}^j)?
We can formalize this contribution via the Mixed-Memory Markov Model of Saul
and Jordan [1]. The basic idea of this model was to approximate a high-dimensional
conditional probability table of one variable conditioned on many others as a convex
combination of the pairwise conditional tables. For a general set of N interacting
Markov chains in the form of a Coupled Markov Model [11], we can write this
approximation as:

    P(S_t^i | S_{t−1}^1, ..., S_{t−1}^N) = Σ_j α_ij P(S_t^i | S_{t−1}^j)

For our case of a two-chain (two-person) model, the transition probabilities will be
the following:

    P(S_t^1 | S_{t−1}^1, S_{t−1}^2) = α_11 P(S_t^1 | S_{t−1}^1) + α_12 P(S_t^k | S_{t−1}^2)
    P(S_t^2 | S_{t−1}^1, S_{t−1}^2) = α_21 P(S_t^k | S_{t−1}^1) + α_22 P(S_t^2 | S_{t−1}^2)
This is very similar to the original Mixed-Memory Model, though the transition
tables are estimated over all other subjects k, excluding the partner, as described
above. Also, since the α_ij sum to one over j, in this case α_11 = 1 − α_12. We thus have
a single parameter, α_12, which describes the contribution of P(S_t^k | S_{t−1}^2) to
explaining P(S_t^1 | S_{t−1}^1, S_{t−1}^2), i.e., the contribution of subject 2's average turn-taking
behavior on her interactions with subject 1.
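Concretely, the mixed prediction for subject 1 is just a convex combination of two table lookups. A minimal sketch (our own variable names; P_cross is the partner-averaged cross table described above):

import numpy as np

def mixed_transition(alpha_12, P_self, P_cross, s_prev_1, s_prev_2):
    """P(S_t^1 = . | S_{t-1}^1, S_{t-1}^2) under the two-chain mixture.

    P_self[a, b]  approximates P(S_t^1 = b | S_{t-1}^1 = a)  (1's self table)
    P_cross[a, b] approximates P(S_t^k = b | S_{t-1}^2 = a)  (2's cross table)
    """
    return (1.0 - alpha_12) * P_self[s_prev_1] + alpha_12 * P_cross[s_prev_2]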
5.1 Learning the influence parameters
To find the α_ij values, we would like to maximize the likelihood of the data. Since
we have already estimated the relevant conditional probability tables, we can do this
via constrained gradient ascent, where we ensure that α_ij ≥ 0 [12]. For the
Mixed-Memory model the likelihood is a product over time steps of the mixture transition
probabilities above. Converting this expression to a log likelihood, removing terms that are not
relevant to maximization over the α_ij, reparametrizing for the normality constraint with
β_ij = α_ij and β_iN = 1 − Σ_{j<N} β_ij, and dropping the terms not relevant to chain i, the
derivative at each time step is

    ∂/∂β_ij (·) = [ P(S_t^i | S_{t−1}^j) − P(S_t^i | S_{t−1}^N) ] / [ Σ_k β_ik P(S_t^i | S_{t−1}^k) + (1 − Σ_k β_ik) P(S_t^i | S_{t−1}^N) ]

We can show that the log likelihood is concave in the α_ij, so we are guaranteed to
achieve the global maximum by climbing the gradient. More details of this
formulation are given in [12],[7].
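For the two-person case this reduces to a one-parameter problem; a minimal gradient-ascent sketch (our own, with an illustrative step size and a projection back into [0, 1]):

import numpy as np

def fit_alpha12(P_self, P_cross, s1, s2, lr=0.05, iters=500):
    """Gradient ascent on the conversation log likelihood over alpha_12."""
    alpha = 0.5
    for _ in range(iters):
        grad = 0.0
        for t in range(1, len(s1)):
            p_self = P_self[s1[t - 1], s1[t]]
            p_cross = P_cross[s2[t - 1], s1[t]]
            mix = (1.0 - alpha) * p_self + alpha * p_cross
            grad += (p_cross - p_self) / mix   # d/d alpha of log(mix)
        alpha = min(1.0, max(0.0, alpha + lr * grad / (len(s1) - 1)))
    return alpha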
5.2 Aggregate Influence over Multiple Conversations
In order to evaluate whether this model provides additional benefit over using a
given subject's self-transition statistics alone, we estimated the reduction in KL
divergence by using the mixture of interactions vs. using the self-transition model.
We found that by using the mixture model we were able to reduce the KL
divergence between a subject's average self-transition statistics and the observed
transitions by 32% on average. However, in the mixture model we have added extra
degrees of freedom, and hence tested whether the better fit was statistically
significant by using the F-test. The resulting p-value was less than 0.01 , implying
that the mixture model is a significantly better fit to the data.
In order to find a single influence parameter for each person, we took a subset of 80
conversations and aggregated all the pairwise influences each subject had on all her
conversational partners. In order to compute this aggregate value, there is an
additional aspect of the α_ij we need to consider. If the subject's self-transition
matrix and the complement of the partner's cross-transition matrix are very similar,
the influence scores are indeterminate, since for a given interaction S_t^i = ¬S_t^k; i.e.,
we would essentially be trying to find the best way to linearly combine two identical
transition matrices. We thus weight the contribution to the aggregate influence
estimate for each individual A_i by the relevant J-divergence (symmetrized KL
divergence) for each conversational partner:

    A_i = Σ_{k ∈ partners} J( P(S_t^k | ¬S_{t−1}^k) || P(S_t^k | S_{t−1}^k) ) α_ki
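A small sketch of this weighting (our own; rows of the relevant transition tables are compared with a symmetrized KL divergence):

import numpy as np

def j_divergence(p, q, eps=1e-12):
    """Symmetrized KL divergence between two discrete distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def aggregate_influence(alpha, rows_a, rows_b):
    """A_i = sum over partners k of J(rows_a[k] || rows_b[k]) * alpha[k],
    where the two rows are the transition distributions being compared."""
    return sum(j_divergence(rows_a[k], rows_b[k]) * alpha[k] for k in alpha)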
The upper panel of Figure 4 shows the aggregated influence values for the subset of
subjects contained in the set of eighty conversations analyzed.
6 Link between Conversational Dynamics and Social Role
Betweenness centrality is a measure frequently used in social network analysis to
characterize importance in the social network. For a given person i, it is defined as
being proportional to the number of pairs of people (j,k) for which that person lies
along the shortest path in the network between j and k. It is thus used to estimate
how much control an individual has over the interaction of others, since it is a count
of how often she is a "gateway" between others. People with high betweenness are
often perceived as leaders [2].
We computed the betweenness centrality for the subjects from the 80 conversations
using the network structure we estimated in Section 4. We then discovered an
interesting and statistically significant correlation between a person's aggregate
influence score and her betweenness centrality -- it appears that a person's
interaction style is indicative of her role within the community based on the
centrality measure. Figure 4 shows the weighted influence values along with the
centrality scores. Note that ID 8 (the experiment coordinator) is somewhat of an
outlier -- a plausible explanation for this can be that during the data collection ID 8
went and talked to many of the subjects, which is not her usual behavior. This
resulted in her having artificially high centrality (based on link structure) but not
high influence based on her interaction style.
We computed the statistical correlation between the influence values and the
centrality scores, both including and excluding the outlier subject ID 8. The
correlation excluding ID 8 was 0.90 (p-value < 0.0004, rank correlation 0.92) and
including ID 8 it was 0.48 (p-value <0.07, rank correlation 0.65). The two measures,
namely influence and centrality, are highly correlated, and this correlation is
statistically significant when we exclude ID 8, who was the coordinator of the
project and whose centrality is likely to be artificially large.
7 Conclusion
We have developed a model for quantitatively representing the influence of a given
person j's turn-taking behavior on the joint-turn taking behavior with person i. On
real-world data gathered from wearable sensors, we have estimated the relevant
component statistics about turn taking behavior via robust speech processing
techniques, and have shown how we can use the Mixed-Memory Markov formalism
to estimate the behavioral influence. Finally, we have shown a strong correlation
between a person's aggregate influence value and her betweenness centrality score.
This implies that our estimate of conversational influence may be indicative of
importance within the social network.
Figure 4: Aggregate influence values and corresponding centrality scores.
8 References
[1] Saul, L.K. and M. Jordan. "Mixed Memory Markov Models." Machine
Learning, 1999.37: p. 75-85.
[2] Freeman, L.C., "A Set of Measures of Centrality Based on Betweenness."
Sociometry, 1977. 40: pp. 35-41.
[3] Bernard, H.R., et al., "The Problem of Informant Accuracy: the Validity of
Retrospective Data." Annual Review of Anthropology, 1984. 13: pp. 495-517.
[4] Allen, T., Architecture and Communication Among Product Development
Engineers. 1997, Sloan School of Management, MIT: Cambridge. p. pp. 1-35.
[5] Want, R., et al., "The Active Badge Location System." ACM Transactions on
Information Systems, 1992.10: p. 91-102.
[6] Borovoy, R. , Folk Computing: Designing Technology to Support Face-to-Face
Community Building. Doctoral Thesis in Media Arts and Sciences. MIT, 2001.
[7] Choudhury, T. , Sensing and Modeling Human Networks, Doctoral Thesis in
Media Arts and Sciences. MIT. Cambridge, MA, 2003.
[8] Gerasimov, V., T. Selker, and W. Bender, Sensing and Effecting Environment
with Extremity Computing Devices. Motorola Offspring, 2002. 1(1).
[9] Basu, S. "A Two-Layer Model for Voicing and Speech Detection." In Int'l
Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2003.
[10]Basu, S., Conversation Scene Analysis. Doctoral Thesis in Electrical
Engineering and Computer Science. MIT. Cambridge, MA 2002.
[11]Brand, M., "Coupled Hidden Markov Models for Modeling Interacting
Processes." MIT Media Lab Vision & Modeling Tech Report, 1996.
[12]Basu, S., T. Choudhury, and B. Clarkson. "Learning Human Interactions with
the Influence Model." MIT Media Lab Vision and Modeling Tech Report #539.
June, 2001.
1,789 | 2,625 | On the Adaptive Properties of Decision Trees
Clayton Scott
Statistics Department
Rice University
Houston, TX 77005
[email protected]
Robert Nowak
Electrical and Computer Engineering
University of Wisconsin
Madison, WI 53706
[email protected]
Abstract
Decision trees are surprisingly adaptive in three important respects: They
automatically (1) adapt to favorable conditions near the Bayes decision
boundary; (2) focus on data distributed on lower dimensional manifolds;
(3) reject irrelevant features. In this paper we examine a decision tree
based on dyadic splits that adapts to each of these conditions to achieve
minimax optimal rates of convergence. The proposed classifier is the
first known to achieve these optimal rates while being practical and implementable.
1 Introduction
This paper presents three adaptivity properties of decision trees that lead to faster rates of
convergence for a broad range of pattern classification problems. These properties are:
Noise Adaptivity: Decision trees can automatically adapt to the (unknown) regularity of
the excess risk function in the neighborhood of the Bayes decision boundary. The
regularity is quantified by a condition similar to Tsybakov's noise condition [1].
Manifold Focus: When the distribution of features happens to have support on a lower dimensional manifold, decision trees can automatically detect and adapt their structure to the manifold. Thus decision trees learn the "effective" data dimension.
Feature Rejection: If certain features are irrelevant (i.e., independent of the class labels),
then decision trees can automatically ignore these features. Thus decision trees
learn the "relevant" data dimension.
Each of the above properties can be formalized and translated into a class of distributions
with known minimax rates of convergence. Adaptivity is a highly desirable quality of
classifiers since in practice the precise characteristics of the distribution are unknown.
We show that dyadic decision trees achieve the (minimax) optimal rate (to within a log
factor) without needing to know the specific parameters defining the class. Such trees
are constructed by minimizing a complexity penalized empirical risk over an appropriate
family of dyadic partitions. The complexity term is derived from a new generalization error
bound for trees, inspired by [2]. This bound in turn leads to an oracle inequality from which
the optimal rates are derived. Full proofs of all results are given in [11].
The restriction to dyadic splits is necessary to achieve a computationally tractable classifier.
Our classifiers have computational complexity nearly linear in the training sample size.
The same rates may be achieved by more general tree classifiers, but these require searches
over prohibitively large families of partitions. Dyadic decision trees are thus preferred
because they are simultaneously implementable, analyzable, and sufficiently flexible to
achieve optimal rates.
1.1 Notation
Let Z be a random variable taking values in a set Z, and let Z^n = {Z_1, ..., Z_n} be iid
realizations of Z. Let P_Z be the probability measure for Z, and let P̂_n be the empirical
estimate of P_Z based on Z^n: P̂_n(B) = (1/n) Σ_{i=1}^n I_{{Z_i ∈ B}}, B ⊆ Z, where I denotes
the indicator function. In classification we take Z = X × Y, where X is the collection
of feature vectors and Y is a finite set of class labels. Assume X = [0, 1]^d, d ≥ 2, and
Y = {0, 1}. A classifier is a measurable function f : [0, 1]^d → {0, 1}. Each classifier f
induces a set B_f = {(x, y) ∈ Z | f(x) ≠ y}. Define the probability of error and empirical
error (risk) of f by R(f) = P_Z(B_f) and R̂_n(f) = P̂_n(B_f), respectively. The Bayes
classifier f* achieves minimum probability of error and is given by f*(x) = I_{{η(x) > 1/2}},
where η(x) = P_{Y|X}(1 | x) is the posterior probability that the correct label is 1. The
Bayes error is R(f*) and denoted R*. The Bayes decision boundary, denoted ∂G*, is the
topological boundary of the Bayes decision set G* = {x | f*(x) = 1}.
1.2 Rates of Convergence in Classification

In this paper we study the rate at which E_{Z^n}{R(f̂_n)} − R* goes to zero as n → ∞,
where f̂_n is a classification learning rule, i.e., a rule for constructing a classifier from
Z^n. Yang [3] shows that for η(x) in certain smoothness classes, minimax optimal rates
are achieved by appropriate plug-in density estimates. Tsybakov and collaborators replace
global restrictions on η by restrictions on η near ∂G*. Faster rates are then possible, although existing optimal classifiers typically rely on ε-nets or otherwise non-implementable
methods [1, 4, 5]. Other authors have derived rates of convergence for existing practical
classifiers, but these rates are suboptimal in the minimax sense considered here [6-8]. Our
contribution is to demonstrate practical classifiers that adaptively attain minimax optimal
rates for some of Tsybakov's and other classes.
2 Dyadic Decision Trees
A dyadic decision tree (DDT) is a decision tree that divides the input space by means of
axis-orthogonal dyadic splits. More precisely, a dyadic decision tree T is specified by
assigning an integer s(v) ∈ {1, ..., d} to each internal node v of T (corresponding to the
coordinate/attribute that is split at that node), and a binary label 0 or 1 to each leaf node.
The nodes of DDTs correspond to hyperrectangles (cells) in [0, 1]^d (see Figure 1). Given a
hyperrectangle A = ∏_{r=1}^d [a_r, b_r], let A^{s,1} and A^{s,2} denote the hyperrectangles formed by
splitting A at its midpoint along coordinate s. Specifically, define A^{s,1} = {x ∈ A | x_s ≤
(a_s + b_s)/2} and A^{s,2} = A \ A^{s,1}. Each node of a DDT is associated with a cell according
to the following rules: (1) The root node is associated with [0, 1]^d; (2) If v is an internal
node associated to the cell A, then the children of v are associated to A^{s(v),1} and A^{s(v),2}.

Let π(T) = {A_1, ..., A_k} denote the partition induced by T. Let j(A) denote the depth
of A and note that λ(A) = 2^{−j(A)}, where λ is the Lebesgue measure on R^d. Define T to
be the collection of all DDTs and A to be the collection of all cells corresponding to nodes
of trees in T.

Figure 1: A dyadic decision tree (right) with the associated recursive dyadic partition (left)
when d = 2. Each internal node of the tree is labeled with an integer from 1 to d indicating
the coordinate being split at that node. The leaf nodes are decorated with class labels.
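The cell bookkeeping is simple to make concrete (a minimal sketch, our own):

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Cell:
    """A dyadic hyperrectangle: one interval [a_r, b_r] per coordinate."""
    bounds: List[Tuple[float, float]]
    depth: int = 0                       # j(A); Lebesgue measure is 2 ** -depth

    def split(self, s):
        """Children A^{s,1} and A^{s,2}: halve coordinate s at its midpoint."""
        a, b = self.bounds[s]
        mid = (a + b) / 2.0
        lo, hi = list(self.bounds), list(self.bounds)
        lo[s] = (a, mid)
        hi[s] = (mid, b)
        return Cell(lo, self.depth + 1), Cell(hi, self.depth + 1)

root = Cell([(0.0, 1.0), (0.0, 1.0)])    # [0, 1]^2, i.e., d = 2
first, second = root.split(0)            # split along the first coordinate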
Let M be a dyadic integer, that is, M = 2^L for some nonnegative integer L. Define T_M to
be the collection of all DDTs such that no terminal cell has a sidelength smaller than 2^{−L}.
In other words, no coordinate is split more than L times when traversing a path from the
root to a leaf.

We will consider classifiers of the form

    T̂_n = arg min_{T ∈ T_M} R̂_n(T) + Φ_n(T)        (1)

where Φ_n is a "penalty" or regularization term specified below. An algorithm of Blanchard et al. [9] may be used to compute T̂_n in O(ndL^d log(ndL^d)) operations. For all of
our theorems on rates of convergence below we have L = O(log n), in which case the
computational cost is O(nd(log n)^{d+1}).
3 Generalization Error Bounds for Trees
In this section we state a uniform error bound and an oracle inequality for DDTs. These two
results are extensions of our previous work on DDTs [10]. The bounding techniques are
quite general and can be extended to larger (even uncountable) families of trees using VC
theory, but for the sake of simplicity we confine the discussion to DDTs. Complete proofs
may be found in [11]. Before stating these results, some additional notation is necessary.
Let $A \in \mathcal{A}$, and define $[\![A]\!] = (2 + \log_2 d)\, j(A)$. $[\![A]\!]$ represents the number of bits needed to uniquely encode $A$ and will be used to measure the complexity of a DDT having $A$ as a leaf cell. These "codelengths" satisfy a Kraft inequality $\sum_{A \in \mathcal{A}} 2^{-[\![A]\!]} \le 1$.
For a cell $A \subseteq [0,1]^d$, define $p_A = P_X(A)$ and $\hat p_A = (1/n) \sum_{i=1}^{n} I_{\{X_i \in A\}}$. Further define $\hat p'_A = 4 \max(\hat p_A, ([\![A]\!] \log 2 + \log n)/n)$ and $p'_A = 4 \max(p_A, ([\![A]\!] \log 2 + \log n)/(2n))$. It can be shown that with high probability, $p_A \le \hat p'_A$ and $\hat p_A \le p'_A$ uniformly over all $A \in \mathcal{A}$ [11]. The mutual boundedness of $p_A$ and $\hat p_A$ is a key to making our proposed classifier both computable on the one hand and analyzable on the other.
Define the data-dependent penalty
$$\Phi_n(T) = \sum_{A \in \pi(T)} \sqrt{2\, \hat p'_A\, \frac{[\![A]\!] \log 2 + \log(2n)}{n}}. \qquad (2)$$
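To make the penalty concrete, the following sketch (our own helper, not code from the paper) evaluates Eq. (2) for a tree summarized by its leaves, each given as a (depth, data-count) pair:

import numpy as np

def penalty(leaves, n, d):
    """Data-dependent penalty of Eq. (2); leaves is a list of (j, n_A) pairs,
    where j is a leaf's depth and n_A the number of training points in it."""
    phi = 0.0
    for j, n_A in leaves:
        codelen = (2 + np.log2(d)) * j                       # [[A]] bits
        p_hat = n_A / n                                      # empirical mass of A
        p_hat_prime = 4 * max(p_hat, (codelen * np.log(2) + np.log(n)) / n)
        phi += np.sqrt(2 * p_hat_prime
                       * (codelen * np.log(2) + np.log(2 * n)) / n)
    return phi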
Our first main result is the following uniform error bound.
Theorem 1. With probability at least $1 - 2/n$,
$$R(T) \le \hat R_n(T) + \Phi_n(T) \quad \text{for all } T \in \mathcal{T}. \qquad (3)$$
Traditional error bounds for trees involve a penalty proportional to $\sqrt{|T| \log n / n}$, where $|T|$ denotes the number of leaves in $T$ (see [12] or the "naive" bound in [2]). The penalty
in (2) assigns a different weight to each leaf of the tree depending on both the depth of the
leaf and the fraction of data reaching the leaf. Indeed, for very deep leaves, little data will
reach those nodes, and such leaves will contribute very little to the overall penalty. For
example, we may bound $\hat p'_A$ by $p'_A$ with high probability, and if $X$ has a bounded density, then $p'_A$ decays like $\max\{2^{-j}, \log n / n\}$, where $j$ is the depth of $A$. Thus, as $j$ increases, $[\![A]\!]$ grows additively with $j$, but $\hat p'_A$ decays at a multiplicative rate. The upshot is that the penalty $\Phi_n(T)$ favors unbalanced trees. Intuitively, if two trees have the same size and
empirical error, minimizing the penalized empirical risk with this new penalty will select
the tree that is more unbalanced, whereas a traditional penalty based only on tree size would
not distinguish the two. This has advantages for classification because unbalanced trees are
what we expect when approximating a lower dimensional decision boundary.
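Using the penalty() helper sketched after Eq. (2), a quick numerical check of this preference (our own toy numbers, chosen for illustration):

# Two four-leaf trees over n = 1000 points in d = 2; leaves are (depth, count).
balanced   = [(2, 250), (2, 250), (2, 250), (2, 250)]
unbalanced = [(1, 500), (2, 400), (3, 80), (3, 20)]
print(penalty(balanced, n=1000, d=2))    # approx. 0.61
print(penalty(unbalanced, n=1000, d=2))  # approx. 0.53: the smaller penalty wins ties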
The derivation of (2) comes from applying standard concentration inequalities for sums
of Bernoulli trials (most notably the relative Chernoff bound) in a spatially decomposed
manner. Spatial decomposition allows the introduction of local probabilities $p_A$ to offset
the complexity of each leaf node A. Our analysis is inspired by the work of Mansour and
McAllester [2].
The uniform error bound of Theorem 1 can be converted (using standard techniques) into
an oracle inequality that is the key to deriving rates of convergence for DDTs.
Theorem 2. Let $\hat T_n$ be as in (1) with $\Phi_n$ as in (2). Define
$$\tilde\Phi_n(T) = \sum_{A \in \pi(T)} \sqrt{8\, p'_A\, \frac{[\![A]\!] \log 2 + \log(2n)}{n}}.$$
Then
$$\mathbb{E}_{Z^n}\{R(\hat T_n)\} - R^* \le \min_{T \in \mathcal{T}} \left[ R(T) - R^* + 2\tilde\Phi_n(T) \right] + O\!\left(\frac{1}{n}\right). \qquad (4)$$
Note that with high probability, $p'_A$ is an upper bound on $\hat p_A$, and therefore $\tilde\Phi_n$ upper bounds $\Phi_n$. The use of $\tilde\Phi_n$ instead of $\Phi_n$ in the oracle bound facilitates rate of convergence analysis. The oracle inequality essentially says that $\hat T_n$ performs nearly as well as the DDT chosen by an oracle to minimize $R(T) - R^*$. The right-hand side of (4) bears the interpretation of a decomposition into approximation error ($R(T) - R^*$) and estimation error $\tilde\Phi_n(T)$.
4 Rates of Convergence
The classes of distributions we study are motivated by the work of Mammen and Tsybakov [4] and Tsybakov [1], which we now review. The classes are indexed by the smoothness $\alpha$ of the Bayes decision boundary $\partial G^*$ and a parameter $\kappa$ that quantifies how "noisy" the distribution is near $\partial G^*$. We write $a_n \preceq b_n$ when $a_n = O(b_n)$, and $a_n \asymp b_n$ if both $a_n \preceq b_n$ and $b_n \preceq a_n$.
Let $\alpha > 0$, and take $r = \lceil \alpha \rceil - 1$ to be the largest integer strictly less than $\alpha$. Suppose $b : [0,1]^{d-1} \to [0,1]$ is $r$ times differentiable, and let $p_{b,s}$ denote the Taylor polynomial of $b$ of order $r$ at the point $s$. For a constant $c_1 > 0$, define $\Sigma(\alpha, c_1)$, the class of functions with Hölder regularity $\alpha$, to be the collection of all $b$ such that
$$|b(s') - p_{b,s}(s')| \le c_1 |s - s'|^{\alpha} \quad \text{for all } s, s' \in [0,1]^{d-1}.$$
Using Tsybakov's terminology, the Bayes decision set $G^*$ is a boundary fragment of smoothness $\alpha$ if $G^* = \mathrm{epi}(b)$ for some $b \in \Sigma(\alpha, c_1)$. Here $\mathrm{epi}(b) = \{(s,t) \in [0,1]^d : b(s) \le t\}$ is the epigraph of $b$. In other words, for a boundary fragment, the last coordinate of $\partial G^*$ is a Hölder-$\alpha$ function of the first $d-1$ coordinates.
Tsybakov also introduces a condition that characterizes the level of "noise" near $\partial G^*$ in terms of a noise exponent $\kappa$, $1 \le \kappa \le \infty$. Let $\Delta(f_1, f_2) = \{x \in [0,1]^d : f_1(x) \ne f_2(x)\}$. Let $c_2 > 0$. A distribution satisfies Tsybakov's noise condition with noise exponent $\kappa$ and constant $c_2$ if
$$P_X(\Delta(f, f^*)) \le c_2 (R(f) - R^*)^{1/\kappa} \quad \text{for all } f. \qquad (5)$$
The case $\kappa = 1$ is the "low noise" case and corresponds to a jump of $\eta(x)$ at the Bayes decision boundary. The case $\kappa = \infty$ is the high noise case and imposes no constraint on the distribution (provided $c_2 \ge 1$). See [6] for further discussion.
Define the class $\mathcal{F} = \mathcal{F}(\alpha, \kappa) = \mathcal{F}(\alpha, \kappa, c_0, c_1, c_2)$ to be the collection of distributions of $Z = (X, Y)$ such that
0A  For all measurable $A \subseteq [0,1]^d$, $P_X(A) \le c_0\, \lambda(A)$.
1A  $G^*$ is a boundary fragment defined by $b \in \Sigma(\alpha, c_1)$.
2A  The margin condition is satisfied with noise exponent $\kappa$ and constant $c_2$.
Introducing the parameter $\rho = (d-1)/\alpha$, Tsybakov [1] proved the lower bound
$$\inf_{\hat f_n} \sup_{\mathcal{F}} \left[ \mathbb{E}_{Z^n}\{R(\hat f_n)\} - R^* \right] \succeq n^{-\kappa/(2\kappa + \rho - 1)}. \qquad (6)$$
The inf is over all rules for constructing classifiers from training data. Theoretical rules that
achieve this lower bound are studied by [1, 4, 5, 13]. Unfortunately, none of these works
provide computationally efficient algorithms for implementing the proposed discrimination
rules, and it is unlikely that practical algorithms exist for these rules.
It is important to note that the lower bound in (6) is tight only when $\rho \le 1$. To see this, fix $\rho > 1$. From the definition of $\mathcal{F}(\alpha, \kappa)$ we have $\mathcal{F}(\alpha, 1) \subseteq \mathcal{F}(\alpha, \kappa)$ for any $\kappa > 1$. As $\kappa \to \infty$, the right-hand side of (6) decreases. Therefore, the minimax rate for $\mathcal{F}(\alpha, \kappa)$ can be no faster than $n^{-1/(1+\rho)}$, which is the lower bound for $\mathcal{F}(\alpha, 1)$.
In light of the above, Tsybakov's noise condition does not improve the learning situation when $\rho > 1$. To achieve rates faster than $n^{-1/(1+\rho)}$ when $\rho > 1$, clearly an alternate assumption must be made. If the right-hand side of (6) is any indication, then the distributions responsible for slower rates are those with small $\kappa$. Thus, it would seem that we need a noise assumption that excludes those "low noise" distributions with small $\kappa$ that cause slower rates when $\rho > 1$.
Since recursive dyadic partitions can well-approximate $G^*$ with smoothness $\alpha \le 1$, we are in the regime of $\rho = (d-1)/\alpha \ge 1$. As motivated above, faster rates in this situation require an assumption that excludes low noise levels. We propose such an assumption. Like Tsybakov's noise condition, our assumption is also defined in terms of constants $\kappa \ge 1$ and $c_2 > 0$. Because of limited space we are unable to fully present the modified noise condition, and we simply write
2B  Low noise levels are excluded as defined in [11].
Effectively, 2B says that the inequality in (5) is reversed, not for all classifiers $f$, but only for those $f$ that are the best DDT approximations to $f^*$ for each DDT resolution parameter $M$. Using techniques presented in [13], we show in [11] that lower bounds of the form in (6) are valid when 2A is replaced by 2B. According to the results in the next section, these lower bounds are tight to within a log factor for $\rho > 1$.
5 Adaptive Rates for Dyadic Decision Trees
All of our rate of convergence proofs use the oracle inequality in the same basic way. The objective is to find an "oracle tree" $T^* \in \mathcal{T}$ such that both $R(T^*) - R^*$ and $\tilde\Phi_n(T^*)$ decay at the desired rate. This tree is roughly constructed as follows. First form a "regular" dyadic partition (the exact construction will depend on the specific problem) into cells of sidelength $1/m = 2^{-K}$, for a certain $K \le L$. Then "prune back" all cells that do not intersect $\partial G^*$. Both approximation and estimation error may now be bounded using the given assumptions and elementary bounding methods. For example, $R(T^*) - R^* \preceq (P_X(\Delta(T^*, f^*)))^{\kappa}$ (by 2B) $\preceq (\lambda(\Delta(T^*, f^*)))^{\kappa}$ (by 0A) $\preceq m^{-\kappa}$ (by 1A). This example reveals how the noise exponent enters the picture to affect the approximation error. See [11] for complete proofs.
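For intuition, here is a back-of-the-envelope version of this balancing act; it is our own addition, ignores constants and the coarser off-boundary leaves, and is not a substitute for the proofs in [11]. The pruned tree has on the order of $m^{d-1}$ leaves meeting $\partial G^*$; by 0A each such cell has $p'_A \preceq m^{-d}$ and codelength $[\![A]\!] = O(\log m)$, so
$$\tilde\Phi_n(T^*) \preceq m^{d-1} \sqrt{\frac{m^{-d} \log n}{n}} = \sqrt{\frac{m^{d-2} \log n}{n}}.$$
Balancing this estimation term against the approximation error $m^{-\kappa}$ gives $m \asymp (n/\log n)^{1/(2\kappa+d-2)}$ and hence the rate $(\log n / n)^{\kappa/(2\kappa+d-2)}$ appearing in Theorem 3 below.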
5.1 Noise Adaptive Classification
Dyadic decision trees, selected according to the penalized empirical risk criterion discussed earlier, adapt to the unknown noise level to achieve faster rates, as stated in Theorem 3 below. For now we focus on distributions with $\alpha = 1$ ($\rho = d - 1$), i.e., Lipschitz decision boundaries (the case $\alpha \ne 1$ is discussed in Section 5.4), and arbitrary noise parameter $\kappa$. The optimal rate for this class is $n^{-\kappa/(2\kappa+d-2)}$ [11]. We will see that DDTs can adaptively learn at a rate of $(\log n/n)^{\kappa/(2\kappa+d-2)}$.
In an effort to be more general and practical, we replace the boundary fragment condition 1A with a less restrictive assumption. Tsybakov and van de Geer [5] assume the Bayes decision set $G^*$ is a boundary fragment, meaning it is known a priori that (a) one coordinate of $\partial G^*$ is a function of the others, (b) that coordinate is known, and (c) class 1 corresponds to the region above $\partial G^*$. The following condition includes all piecewise Lipschitz decision boundaries, and allows $\partial G^*$ to have arbitrary orientation and $G^*$ to have multiple connected components. Let $P_m$ denote the regular partition of $[0,1]^d$ into hypercubes of sidelength $1/m$, where $m$ is a dyadic integer (i.e., a power of 2). A distribution of $Z$ satisfies the box-counting assumption with constant $c_1 > 0$ if
1B  For all dyadic integers $m$, $\partial G^*$ intersects at most $c_1 m^{d-1}$ of the $m^d$ cells in $P_m$.
Condition 1A ($\alpha = 1$) implies 1B (with a different $c_1$), so the minimax rate under 0A, 1B, and 2B is no faster than $n^{-\kappa/(2\kappa+d-2)}$.
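To see condition 1B in action, the sketch below (ours; the boundary function and sampling density are arbitrary choices) counts the cells of $P_m$ crossed by a Lipschitz boundary in $d = 2$; the count grows roughly linearly in $m$:

import numpy as np

def boundary_cell_count(b, m, samples_per_cell=50):
    """Count cells of the regular m x m partition of [0,1]^2 that the
    boundary {x2 = b(x1)} passes through."""
    crossed = set()
    for x1 in np.linspace(0.0, 1.0, m * samples_per_cell, endpoint=False):
        x2 = min(max(b(x1), 0.0), 1.0)
        crossed.add((int(x1 * m), min(int(x2 * m), m - 1)))
    return len(crossed)

b = lambda t: 0.5 + 0.25 * np.sin(4 * t)   # a Lipschitz boundary fragment
for m in [4, 8, 16, 32]:
    print(m, boundary_cell_count(b, m))    # grows roughly like c1 * m^(d-1) = c1 * m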
Theorem 3. Let $M \asymp n/\log n$. Take $\hat T_n$ as in (1) with $\Phi_n$ as in (2). Then
$$\sup \left[ \mathbb{E}_{Z^n}\{R(\hat T_n)\} - R^* \right] \preceq \left(\frac{\log n}{n}\right)^{\kappa/(2\kappa+d-2)}, \qquad (7)$$
where the sup is over all distributions such that 0A, 1B, and 2B hold.
The complexity-regularized DDT is adaptive in the sense that the noise exponent $\kappa$ and constants $c_0, c_1, c_2$ need not be known. $\hat T_n$ can always be constructed, and in opportune circumstances the rate in (7) is achieved.
5.2 When the Data Lie on a Manifold
For certain problems it may happen that the feature vectors lie on a manifold in the ambient space $\mathcal{X}$. When this happens, dyadic decision trees automatically adapt to achieve faster rates of convergence. To recast assumptions 0A and 1B in terms of a data manifold (see footnote 1), we again use box-counting ideas. Let $c_0, c_1 > 0$ and $1 \le d' \le d$. The boundedness and regularity assumptions for a $d'$-dimensional manifold are given by
0B  For all dyadic integers $m$ and all $A \in P_m$, $P_X(A) \le c_0 m^{-d'}$.
1C  For all dyadic integers $m$, $\partial G^*$ passes through at most $c_1 m^{d'-1}$ of the $m^d$ hypercubes in $P_m$.
The minimax rate under these assumptions is $n^{-1/d'}$. To see this, consider the mapping of features $X' = (X^1, \dots, X^{d'}) \in [0,1]^{d'}$ to $X = (X^1, \dots, X^{d'}, 1/2, \dots, 1/2) \in [0,1]^d$. Then $X$ lives on a $d'$-dimensional manifold, and clearly there can be no classifier achieving a rate faster than $n^{-1/d'}$ uniformly over all such $X$, as this would lead to a classifier outperforming the minimax rate for $X'$. As the following theorem shows, DDTs can achieve this rate to within a log factor.
Theorem 4. Let $M \asymp n/\log n$. Take $\hat T_n$ as in (1) with $\Phi_n$ as in (2). Then
$$\sup \left[ \mathbb{E}_{Z^n}\{R(\hat T_n)\} - R^* \right] \preceq \left(\frac{\log n}{n}\right)^{1/d'}, \qquad (8)$$
where the sup is over all distributions such that 0B and 1C hold.
Again, $\hat T_n$ is adaptive in that it does not require knowledge of $d'$, $c_0$, or $c_1$.
5.3 Irrelevant Features
The "relevant" data dimension is the number of relevant features/attributes, meaning the number $d'' < d$ of features of $X$ that are not independent of $Y$. By an argument like that in the previous section, the minimax rate under this assumption (and 0A and 1B) can be seen to be $n^{-1/d''}$. Once again, DDTs can achieve this rate to within a log factor.
Theorem 5. Let $M \asymp n/\log n$. Take $\hat T_n$ as in (1) with $\Phi_n$ as in (2). Then
$$\sup \left[ \mathbb{E}_{Z^n}\{R(\hat T_n)\} - R^* \right] \preceq \left(\frac{\log n}{n}\right)^{1/d''}, \qquad (9)$$
where the sup is over all distributions with relevant data dimension $d''$ and such that 0A and 1B hold.
As in the previous theorems, our learning rule is adaptive in the sense that it does not need to be told $d''$ or which $d''$ features are relevant.
1 For simplicity, we eliminate the margin assumption here and in subsequent sections, although it could be easily incorporated to yield faster adaptive rates.
5.4 Adapting to Bayes Decision Boundary Smoothness
Our results thus far apply to Tsybakov's class with $\alpha = 1$. In [10] we show that DDTs with polynomial classifiers decorating the leaves can achieve faster rates for $\alpha > 1$. Combined with the analysis here, these rates can approach $n^{-1}$ under appropriate noise assumptions. Unfortunately, the rates we obtain are suboptimal and the classifiers are not practical.
For $\alpha \le 1$, free DDTs adaptively attain the minimax rate (within a log factor) of $n^{-\alpha/(\alpha+d-1)}$. Due to space limitations, this discussion is deferred to [11]. Finding practical classifiers that adapt to the optimal rate for $\alpha > 1$ is a current line of research.
6 Conclusion
Dyadic decision trees adapt to a variation of Tsybakov's noise condition, data manifold dimension, and the number of relevant features to achieve minimax optimal rates of convergence (to within a log factor). DDTs are constructed by a computationally efficient penalized empirical risk minimization procedure based on a novel, spatially adaptive, data-dependent penalty. Although we consider each condition separately so as to simplify the discussion, the conditions can be combined to yield a rate of $(\log n/n)^{\kappa/(2\kappa+d^*-2)}$, where $d^*$ is the dimension of the manifold supporting the relevant features.
References
[1] A. B. Tsybakov, "Optimal aggregation of classifiers in statistical learning," Ann. Stat., vol. 32, no. 1, pp. 135-166, 2004.
[2] Y. Mansour and D. McAllester, "Generalization bounds for decision trees," in Proceedings of the Thirteenth Annual Conference on Computational Learning Theory, N. Cesa-Bianchi and S. Goldman, Eds., Palo Alto, CA, 2000, pp. 69-74.
[3] Y. Yang, "Minimax nonparametric classification--Part I: Rates of convergence," IEEE Trans. Inform. Theory, vol. 45, no. 7, pp. 2271-2284, 1999.
[4] E. Mammen and A. B. Tsybakov, "Smooth discrimination analysis," Ann. Stat., vol. 27, pp. 1808-1829, 1999.
[5] A. B. Tsybakov and S. A. van de Geer, "Square root penalty: adaptation to the margin in classification and in edge estimation," 2004, preprint.
[6] P. Bartlett, M. Jordan, and J. McAuliffe, "Convexity, classification, and risk bounds," Department of Statistics, U.C. Berkeley, Tech. Rep. 638, 2003, to appear in Journal of the American Statistical Association.
[7] G. Blanchard, G. Lugosi, and N. Vayatis, "On the rate of convergence of regularized boosting classifiers," J. Machine Learning Research, vol. 4, pp. 861-894, 2003.
[8] J. C. Scovel and I. Steinwart, "Fast rates for support vector machines," Los Alamos National Laboratory, Tech. Rep. LA-UR 03-9117, 2004.
[9] G. Blanchard, C. Schäfer, and Y. Rozenholc, "Oracle bounds and exact algorithm for dyadic classification trees," in Learning Theory: 17th Annual Conference on Learning Theory, COLT 2004, J. Shawe-Taylor and Y. Singer, Eds. Heidelberg: Springer-Verlag, 2004, pp. 378-392.
[10] C. Scott and R. Nowak, "Near-minimax optimal classification with dyadic classification trees," in Advances in Neural Information Processing Systems 16, S. Thrun, L. Saul, and B. Schölkopf, Eds. Cambridge, MA: MIT Press, 2004.
[11] C. Scott and R. Nowak, "Minimax optimal classification with dyadic decision trees," Rice University, Tech. Rep. TREE0403, 2004. [Online]. Available: http://www.stat.rice.edu/~cscott
[12] A. Nobel, "Analysis of a complexity based pruning scheme for classification trees," IEEE Trans. Inform. Theory, vol. 48, no. 8, pp. 2362-2368, 2002.
[13] J.-Y. Audibert, "PAC-Bayesian statistical learning theory," Ph.D. dissertation, Université Paris 6, June 2004.
1,790 | 2,626 | Efficient Out-of-Sample Extension of
Dominant-Set Clusters
Massimiliano Pavan and Marcello Pelillo
Dipartimento di Informatica, Università Ca' Foscari di Venezia
Via Torino 155, 30172 Venezia Mestre, Italy
{pavan,pelillo}@dsi.unive.it
Abstract
Dominant sets are a new graph-theoretic concept that has proven to
be relevant in pairwise data clustering problems, such as image segmentation. They generalize the notion of a maximal clique to edge-weighted graphs and have intriguing, non-trivial connections to continuous quadratic optimization and spectral-based grouping. We address the
problem of grouping out-of-sample examples after the clustering process
has taken place. This may serve either to drastically reduce the computational burden associated to the processing of very large data sets, or
to efficiently deal with dynamic situations whereby data sets need to be
updated continually. We show that the very notion of a dominant set offers a simple and efficient way of doing this. Numerical experiments on
various grouping problems show the effectiveness of the approach.
1 Introduction
Proximity-based, or pairwise, data clustering techniques are gaining increasing popularity over traditional central grouping techniques, which are centered around the notion of
"feature" (see, e.g., [3, 12, 13, 11]). In many application domains, in fact, the objects to
be clustered are not naturally representable in terms of a vector of features. On the other
hand, quite often it is possible to obtain a measure of the similarity/dissimilarity between
objects. Hence, it is natural to map (possibly implicitly) the data to be clustered to the
nodes of a weighted graph, with edge weights representing similarity or dissimilarity relations. Although such a representation lacks geometric notions such as scatter and centroid,
it is attractive as no feature selection is required and it keeps the algorithm generic and
independent from the actual data representation. Further, it allows one to use non-metric
similarities and it is applicable to problems that do not have a natural embedding to a uniform feature space, such as the grouping of structural or graph-based representations.
We have recently developed a new framework for pairwise data clustering based on a novel
graph-theoretic concept, that of a dominant set, which generalizes the notion of a maximal
clique to edge-weighted graphs [7, 9]. An intriguing connection between dominant sets
and the solutions of a (continuous) quadratic optimization problem makes them related in
a non-trivial way to spectral-based cluster notions, and allows one to use straightforward
dynamics from evolutionary game theory to determine them [14]. A nice feature of this
framework is that it naturally provides a principled measure of a cluster's cohesiveness as well as a measure of a vertex's participation in its assigned group. It also allows one to obtain "soft" partitions of the input data, by allowing a point to belong to more than one cluster.
The approach has proven to be a powerful one when applied to problems such as intensity,
color, and texture segmentation, or visual database organization, and is competitive with
spectral approaches such as normalized cut [7, 8, 9].
However, a typical problem associated to pairwise grouping algorithms in general, and
hence to the dominant set framework in particular, is the scaling behavior with the number
of data. On a dataset containing N examples, the number of potential comparisons scales
with $O(N^2)$, thereby hindering their applicability to problems involving very large data
sets, such as high-resolution imagery and spatio-temporal data. Moreover, in applications
such as document classification or visual database organization, one is confronted with
a dynamic environment which continually supplies the algorithm with newly produced
data that have to be grouped. In such situations, the trivial approach of recomputing the
complete cluster structure upon the arrival of any new item is clearly unfeasible.
Motivated by the previous arguments, in this paper we address the problem of efficiently
assigning out-of-sample, unseen data to one or more previously determined clusters. This
may serve either to substantially reduce the computational burden associated to the processing of very large (though static) data sets, by extrapolating the complete grouping solution
from a small number of samples, or to deal with dynamic situations whereby data sets need
to be updated continually. There is no straightforward way of accomplishing this within
the pairwise grouping paradigm, short of recomputing the complete cluster structure. Recent sophisticated attempts to deal with this problem use optimal embeddings [11] and the
Nyström method [1, 2]. By contrast, we shall see that the very notion of a dominant set,
thanks to its clear combinatorial properties, offers a simple and efficient solution to this
problem. The basic idea consists of computing, for any new example, a quantity which
measures the degree of cluster membership, and we provide simple approximations which
allow us to do this in linear time and space, with respect to the cluster size. Our classification schema inherits the main features of the dominant set formulation, i.e., the ability
of yielding a soft classification of the input data and of providing principled measures for
cluster membership and cohesiveness.
Numerical experiments show that the strategy of first grouping a small number of data
items and then classifying the out-of-sample instances using our prediction rule is clearly
successful as we are able to obtain essentially the same results as the dense problem in
much less time. We also present results on high-resolution image segmentation problems,
a task where the dominant set framework would otherwise be computationally impractical.
2 Dominant Sets and Their Continuous Characterization
We represent the data to be clustered as an undirected edge-weighted (similarity) graph
with no self-loops $G = (V, E, w)$, where $V = \{1, \dots, n\}$ is the vertex set, $E \subseteq V \times V$ is the edge set, and $w : E \to \mathbb{R}^*_+$ is the (positive) weight function. Vertices in $G$ correspond to data points, edges represent neighborhood relationships, and edge-weights reflect similarity between pairs of linked vertices. As customary, we represent the graph $G$ with the corresponding weighted adjacency (or similarity) matrix, which is the $n \times n$ nonnegative, symmetric matrix $A = (a_{ij})$ defined as:
$$a_{ij} = \begin{cases} w(i,j), & \text{if } (i,j) \in E \\ 0, & \text{otherwise.} \end{cases}$$
Let $S \subseteq V$ be a non-empty subset of vertices and $i \in V$. The (average) weighted degree of $i$ w.r.t. $S$ is defined as:
$$\mathrm{awdeg}_S(i) = \frac{1}{|S|} \sum_{j \in S} a_{ij} \qquad (1)$$
where $|S|$ denotes the cardinality of $S$. Moreover, if $j \notin S$ we define $\phi_S(i,j) = a_{ij} - \mathrm{awdeg}_S(i)$, which is a measure of the similarity between nodes $j$ and $i$, with respect to the average similarity between node $i$ and its neighbors in $S$.
Figure 1: An example edge-weighted graph. Note that $w_{\{1,2,3,4\}}(1) < 0$, and this reflects the fact that vertex 1 is loosely coupled to vertices 2, 3 and 4. Conversely, $w_{\{5,6,7,8\}}(5) > 0$, and this reflects the fact that vertex 5 is tightly coupled with vertices 6, 7, and 8.
Let $S \subseteq V$ be a non-empty subset of vertices and $i \in S$. The weight of $i$ w.r.t. $S$ is
$$w_S(i) = \begin{cases} 1, & \text{if } |S| = 1 \\ \displaystyle\sum_{j \in S \setminus \{i\}} \phi_{S \setminus \{i\}}(j, i)\, w_{S \setminus \{i\}}(j), & \text{otherwise} \end{cases} \qquad (2)$$
while the total weight of $S$ is defined as:
$$W(S) = \sum_{i \in S} w_S(i). \qquad (3)$$
Intuitively, $w_S(i)$ gives us a measure of the overall similarity between vertex $i$ and the vertices of $S \setminus \{i\}$ with respect to the overall similarity among the vertices in $S \setminus \{i\}$, with positive values indicating high internal coherency (see Fig. 1).
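To make Eqs. (1)-(3) concrete, here is a minimal sketch (our own code, not from the paper). The direct recursion is exponential in |S| and only serves to illustrate the definitions on small examples, not the efficient computation discussed later:

def awdeg(A, S, i):            # Eq. (1): average weighted degree of i w.r.t. S
    return sum(A[i][j] for j in S) / len(S)

def phi(A, S, i, j):           # phi_S(i, j), defined for j outside S
    return A[i][j] - awdeg(A, S, i)

def w(A, S, i):                # Eq. (2): weight of i w.r.t. S (i must be in S)
    if len(S) == 1:
        return 1.0
    R = S - {i}
    return sum(phi(A, R, j, i) * w(A, R, j) for j in R)

def W(A, S):                   # Eq. (3): total weight of S
    return sum(w(A, S, i) for i in S)

A = [[0, 1.0, 0.9],
     [1.0, 0, 0.8],
     [0.9, 0.8, 0]]            # a tightly coupled triple
S = frozenset({0, 1, 2})
print(w(A, S, 0), W(A, S))     # both positive: S behaves like a cluster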
A non-empty subset of vertices $S \subseteq V$ such that $W(T) > 0$ for any non-empty $T \subseteq S$ is said to be dominant if:
1. $w_S(i) > 0$, for all $i \in S$;
2. $w_{S \cup \{i\}}(i) < 0$, for all $i \notin S$.
The two previous conditions correspond to the two main properties of a cluster: the first regards internal homogeneity, whereas the second regards external inhomogeneity. The above definition represents our formalization of the concept of a cluster in an edge-weighted graph.
Now, consider the following quadratic program, which is a generalization of the so-called Motzkin-Straus program [5] (here and in the sequel a dot denotes the standard scalar product between vectors):
$$\text{maximize } f(x) = x \cdot Ax \quad \text{subject to } x \in \Delta_n \qquad (4)$$
where
$$\Delta_n = \{x \in \mathbb{R}^n : x_i \ge 0 \text{ for all } i \in V \text{ and } e \cdot x = 1\}$$
is the standard simplex of $\mathbb{R}^n$, and $e$ is a vector of appropriate length consisting of unit entries (hence $e \cdot x = \sum_i x_i$). The support of a vector $x \in \Delta_n$ is defined as the set of indices corresponding to its positive components, that is $\sigma(x) = \{i \in V : x_i > 0\}$. The following theorem, proved in [7], establishes an intriguing connection between dominant sets and local solutions of program (4).
Theorem 1 If $S$ is a dominant subset of vertices, then its (weighted) characteristic vector $x^S$, which is the vector of $\Delta_n$ defined as
$$x^S_i = \begin{cases} \dfrac{w_S(i)}{W(S)}, & \text{if } i \in S \\ 0, & \text{otherwise} \end{cases} \qquad (5)$$
is a strict local solution of program (4). Conversely, if $x$ is a strict local solution of program (4) then its support $S = \sigma(x)$ is a dominant set, provided that $w_{S \cup \{i\}}(i) \ne 0$ for all $i \notin S$.
The condition that $w_{S \cup \{i\}}(i) \ne 0$ for all $i \notin S = \sigma(x)$ is a technicality due to the presence of "spurious" solutions in (4), which is, at any rate, a non-generic situation.
By virtue of this result, we can find a dominant set by localizing a local solution of program (4) with an appropriate continuous optimization technique, such as replicator dynamics from evolutionary game theory [14], and then picking up its support. Note that the
components of the weighted characteristic vectors give us a natural measure of the participation of the corresponding vertices in the cluster, whereas the value of the objective
function measures the cohesiveness of the class. In order to get a partition of the input data into coherent groups, a simple approach is to iteratively find a dominant set and then remove it from the graph, until all vertices have been grouped (see [9] for a hierarchical extension of this framework). On the other hand, by finding all dominant sets, i.e., local solutions of (4), of the original graph, one can obtain a "soft" partition of the dataset, whereby
clusters are allowed to overlap. Finally, note that spectral clustering approaches such as,
e.g., [10, 12, 13] lead to similar, though intrinsically different, optimization problems.
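For concreteness, here is a sketch of this peeling-off strategy (our own code; the paper specifies replicator dynamics, but the starting point, tolerance, and support threshold below are our assumptions):

import numpy as np

def dominant_set(A, tol=1e-8, max_iter=10000):
    """Run discrete-time replicator dynamics x_i <- x_i (Ax)_i / (x.Ax) from
    the barycenter of the simplex; the support of the limit is a dominant set."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        y = x * (A @ x)
        s = y.sum()                     # equals x.Ax, so y/s stays on the simplex
        if s == 0:
            break
        y /= s
        done = np.abs(y - x).sum() < tol
        x = y
        if done:
            break
    support = np.flatnonzero(x > 1e-6)  # sigma(x), read off as the cluster
    return support, x

A partition is then obtained by calling dominant_set on A, deleting the returned rows and columns, and repeating until all vertices are grouped.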
3 Predicting Cluster Membership for Out-of-Sample Data
Suppose we are given a set $V$ of $n$ unlabeled items and let $G = (V, E, w)$ denote the corresponding similarity graph. After determining the dominant sets (i.e., the clusters) for these original data, we are next supplied with a set $V'$ of $k$ new data items, together with all $kn$ pairwise affinities between the old and the new data, and are asked to assign each of them to one or possibly more previously determined clusters. We shall denote by $\hat G = (\hat V, \hat E, \hat w)$, with $\hat V = V \cup V'$, the similarity graph built upon all the $n + k$ data. Note that in our approach we do not need the $\binom{k}{2}$ affinities between the new points, which is a nice feature as in most applications $k$ is typically very large. Technically, $\hat G$ is a supergraph of $G$, namely a graph having $\hat V \supseteq V$, $\hat E \supseteq E$ and $\hat w(i,j) = w(i,j)$ for all $(i,j) \in E$.
Let $S \subseteq V$ be a subset of vertices which is dominant in the original graph $G$ and let $i \in \hat V \setminus V$ be a new data point. As pointed out in the previous section, the sign of $w_{S \cup \{i\}}(i)$ provides an indication as to whether $i$ is tightly or loosely coupled with the vertices in $S$ (the condition $w_{S \cup \{i\}}(i) = 0$ corresponds to a non-generic boundary situation that does not arise in practice and will therefore be ignored). Accordingly, it is natural to propose the following rule for predicting cluster membership of unseen data:
$$\text{if } w_{S \cup \{i\}}(i) > 0, \text{ then assign vertex } i \text{ to cluster } S. \qquad (6)$$
Note that, according to this rule, the same point can be assigned to more than one class, thereby yielding a soft partition of the input data. To get a hard partition one can use the cluster membership approximation measures we shall discuss below. Note that it may also happen for some instance $i$ that no cluster $S$ satisfies rule (6), in which case the point gets unclassified (or assigned to an "outlier" group). This should be interpreted as an indication that either the point is too noisy or that the cluster formation process was inaccurate. In our experience, however, this situation arises rarely.
A potential problem with the previous rule is its computational complexity. In fact, a direct application of formula (2) to compute $w_{S \cup \{i\}}(i)$ is clearly infeasible due to its recursive nature. On the other hand, using a characterization given in [7, Lemma 1] would also be expensive since it would involve the computation of a determinant. The next result allows us to compute the sign of $w_{S \cup \{i\}}(i)$ in linear time and space, with respect to the size of $S$.
Proposition 1 Let $G = (V, E, w)$ be an edge-weighted (similarity) graph, $A = (a_{ij})$ its weighted adjacency matrix, and $S \subseteq V$ a dominant set of $G$ with characteristic vector $x^S$. Let $\hat G = (\hat V, \hat E, \hat w)$ be a supergraph of $G$ with weighted adjacency matrix $\hat A = (\hat a_{ij})$. Then, for all $i \in \hat V \setminus V$, we have:
$$w_{S \cup \{i\}}(i) > 0 \iff \sum_{h \in S} \hat a_{hi}\, x^S_h > f(x^S). \qquad (7)$$
1 Observe that $w_S(i)$ depends only on the weights on the edges of the subgraph induced by $S$. Hence, no ambiguity arises as to whether $w_S(i)$ is computed on $G$ or on $\hat G$.
Proof. From Theorem 1, $x^S$ is a strict local solution of program (4) and hence it satisfies the Karush-Kuhn-Tucker (KKT) equality conditions, i.e., the first-order necessary equality conditions for local optimality [4]. Now, let $\hat n = |\hat V|$ be the cardinality of $\hat V$ and let $\hat x^S$ be the ($\hat n$-dimensional) characteristic vector of $S$ in $\hat G$, which is obtained by padding $x^S$ with zeros. It is immediate to see that $\hat x^S$ satisfies the KKT equality conditions for the problem of maximizing $\hat f(\hat x) = \hat x \cdot \hat A \hat x$, subject to $\hat x \in \Delta_{\hat n}$. Hence, from Lemma 2 of [7] we have for all $i \in \hat V \setminus V$:
$$\frac{w_{S \cup \{i\}}(i)}{W(S)} = \sum_{h \in S} (\hat a_{hi} - a_{hj})\, x^S_h \qquad (8)$$
for any $j \in S$. Now, recall that the KKT equality conditions for program (4) imply $\sum_{h \in S} a_{hj} x^S_h = x^S \cdot A x^S = f(x^S)$ for any $j \in S$ [7]. Hence, the proposition follows from the fact that, being $S$ dominant, $W(S)$ is positive.
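In code, the test of Proposition 1 is a single dot product (our own sketch; the array conventions are assumptions). Since $x^S$ is zero outside $S$, a full-length dot product implements the sum over $h \in S$:

import numpy as np

def belongs_to_cluster(x_S, A, a_new):
    """Rule (6) via Eq. (7): x_S is the characteristic vector on the n original
    points (zero outside S), A their affinity matrix, and a_new the n affinities
    of the out-of-sample point to them."""
    f_xS = x_S @ A @ x_S          # cluster cohesiveness f(x^S)
    return a_new @ x_S > f_xS     # sum_h a_hi x^S_h > f(x^S)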
Given an out-of-sample vertex $i$ and a class $S$ such that rule (6) holds, we now provide an approximation of the degree of participation of $i$ in $S \cup \{i\}$ which, as pointed out in the previous section, is given by the ratio between $w_{S \cup \{i\}}(i)$ and $W(S \cup \{i\})$. This can be used, for example, to get a hard partition of the input data when an instance happens to be assigned to more than one class. By equation (8), we have:
$$\frac{w_{S \cup \{i\}}(i)}{W(S \cup \{i\})} = \frac{W(S)}{W(S \cup \{i\})} \sum_{h \in S} (\hat a_{hi} - a_{hj})\, x^S_h$$
for any $j \in S$. Since computing the exact value of the ratio $W(S)/W(S \cup \{i\})$ would be computationally expensive, we now provide simple approximation formulas. Since $S$ is dominant, it is reasonable to assume that all weights within it are close to each other. Hence, we approximate $S$ with a clique having constant weight $a$, and impose that it has the same cohesiveness value $f(x^S) = x^S \cdot A x^S$ as the original dominant set. After some algebra, we get
$$a = \frac{|S|}{|S|-1} f(x^S)$$
which yields $W(S) \approx |S|\, a^{|S|-1}$. Approximating $W(S \cup \{i\})$ with $(|S|+1)\, a^{|S|}$ in a similar way, we get:
$$\frac{W(S)}{W(S \cup \{i\})} \approx \frac{|S|\, a^{|S|-1}}{(|S|+1)\, a^{|S|}} = \frac{1}{f(x^S)} \frac{|S|-1}{|S|+1}$$
which finally yields:
$$\frac{w_{S \cup \{i\}}(i)}{W(S \cup \{i\})} \approx \frac{|S|-1}{|S|+1} \left( \frac{\sum_{h \in S} \hat a_{hi}\, x^S_h}{f(x^S)} - 1 \right). \qquad (9)$$
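The approximation (9) is equally cheap to evaluate (again our own sketch, with the same array conventions as the previous snippet):

import numpy as np

def participation(x_S, A, a_new, size_S):
    """Approximate degree of participation of the new point in S u {i}, Eq. (9)."""
    f_xS = x_S @ A @ x_S
    return (size_S - 1) / (size_S + 1) * (a_new @ x_S / f_xS - 1.0)

# For a hard partition, assign the point to the cluster maximizing this value.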
Using the above formula one can easily get, by normalization, an approximation of the characteristic vector $x^{\tilde S} \in \Delta_{n+k}$ of $\tilde S$, the extension of cluster $S$ obtained applying rule (6):
$$\tilde S = S \cup \{i \in \hat V \setminus V : w_{S \cup \{i\}}(i) > 0\}.$$
With an approximation of $x^{\tilde S}$ at hand, it is also easy to compute an approximation of the cohesiveness of the new cluster $\tilde S$, i.e., $x^{\tilde S} \cdot \hat A x^{\tilde S}$. Indeed, assuming that $\tilde S$ is dominant in $\hat G$, and recalling the KKT equality conditions for program (4) [7], we get $(\hat A x^{\tilde S})_i = x^{\tilde S} \cdot \hat A x^{\tilde S}$ for all $i \in \tilde S$. It is therefore natural to approximate the cohesiveness of $\tilde S$ as a weighted average of the $(\hat A x^{\tilde S})_i$'s.
Figure 2: Evaluating the quality of our approximations on a 150-point cluster. Average distance
between approximated and actual cluster membership (left) and cohesiveness (middle) as a function
of sampling rate. Right: average CPU time as a function of sampling rate.
4 Experimental Results
In an attempt to evaluate how the approximations given at the end of the previous section actually compare to the solutions obtained on the dense problem, we conducted the
following preliminary experiment. We generated 150 points on the plane so as to form
a dominant set (we used a standard Gaussian kernel to obtain similarities), and extracted
random samples with increasing sampling rate, ranging from 1/15 to 1. For each sampling
rate 100 trials were made, for each of which we computed the Euclidean distance between
the approximated and the actual characteristic vector (i.e., cluster membership), as well
as the distance between the approximated and the actual cluster cohesiveness (that is, the
value of the objective function f ). Fig. 2 shows the average results obtained. As can be
seen, our approximations work remarkably well: with a sampling rate less than 10 % the
distance between the characteristic vectors is around 0.02 and this distance decreases linearly towards zero. As for the objective function, the results are even more impressive as
the distance from the exact value (i.e., 0.989) rapidly goes to zero starting from 0.00025,
at less than 10% rate. Also, note how the CPU time increases linearly as the sampling rate
approaches 100%.
Next, we tested our algorithm over the Johns Hopkins University ionosphere database2
which contains 351 labeled instances from two different classes. As in the previous experiment, similarities were computed using a Gaussian kernel. Our goal was to test how the
solutions obtained on the sampled graph compare with those of the original, dense problem and to study how the performance of the algorithm scales w.r.t. the sampling rate. As
before, we used sampling rates from 1/15 to 1, and for each such value 100 random samples were extracted. After the grouping process, the out-of-sample instances were assigned
to one of the two classes found using rule (6). Then, for each example in the dataset a
?success? was recorded whenever the actual class label of the instance coincided with the
majority label of its assigned class. Fig. 3 shows the average results obtained. At around
40% rate the algorithm was already able to obtain a classification accuracy of about 73.4%,
which is even slightly higher that the one obtained on the dense (100% rate) problem, which
is 72.7%. Note that, as in the previous experiment, the algorithm appears to be robust with
respect to the choice of the sample data. For the sake of comparison we also ran normalized
cut on the whole dataset, and it yielded a classification rate of 72.4%.
Finally, we applied our algorithm to the segmentation of brightness images. The image
to be segmented is represented as a graph where vertices correspond to pixels and edge-weights reflect the "similarity" between vertex pairs. As customary, we defined a similarity measure between pixels based on brightness proximity. Specifically, following [7], the similarity between pixels $i$ and $j$ was measured by $w(i,j) = \exp\left(-(I(i) - I(j))^2/\sigma^2\right)$, where $\sigma$ is a positive real number which affects the decreasing rate of $w$, and $I(i)$ is defined as the (normalized) intensity value at node $i$. After drawing a set of pixels at random with sampling rate $p = 0.005$, we iteratively found a dominant set in the sampled graph using replicator dynamics [7, 14], removed it from the graph, and then employed rule (6) to extend it with out-of-sample pixels.
2 http://www.ics.uci.edu/~mlearn/MLSummary.html
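As a concrete companion to this similarity measure (a sketch of our own; the helper name and the dense construction are ours, and for full images one would compute affinities only over the sampled pixels):

import numpy as np

def brightness_affinities(I, sigma):
    """Dense pixel affinities w(i,j) = exp(-(I(i)-I(j))^2 / sigma^2) for an
    intensity image I (2-D array)."""
    v = I.ravel().astype(float)
    D = v[:, None] - v[None, :]
    A = np.exp(-(D ** 2) / sigma ** 2)
    np.fill_diagonal(A, 0.0)        # the similarity graph has no self-loops
    return A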
Figure 3: Results on the ionosphere database. Average classification rate (left) and CPU time (right)
as a function of sampling rate.
Figure 4: Segmentation results on a 115 x 97 weather radar image. From left to right: original image, the two regions found on the sampled image (sampling rate = 0.5%), and the two regions obtained on the whole image (sampling rate = 100%).
Figure 4 shows the results obtained on a 115 x 97 weather radar image, used in [13, 7]
as an instance whereby edge-detection-based segmentation would perform poorly. Here,
and in the following experiment, the major components of the segmentations are drawn
on a blue background. The leftmost cluster is the one obtained after the first iteration of
the algorithm, and successive clusters are shown left to right. Note how the segmentation
obtained over the sparse image, sampled at 0.5% rate, is almost identical to that obtained
over the whole image. In both cases, the algorithms correctly discovered a background
and a foreground region. The approximation algorithm took a couple of seconds to return
the segmentation, i.e., 15 times faster than the one run over the entire image. Note that
our results are better than those obtained with normalized cut, as the latter provides an
over-segmented solution (see [13]).
Fig. 5 shows results on two 481 x 321 images taken from the Berkeley database.3 On
these images the sampling process produced a sample with no more than 1000 pixels,
and our current MATLAB implementation took only a few seconds to return a solution.
Running the grouping algorithm on the whole images (which contain more than 150,000
pixels) would simply be unfeasible. In both cases, our approximation algorithm partitioned
the images into meaningful and clean components. We also ran normalized cut on these
images (using the same sample rate of 0.5%) and the results, obtained after a long tuning
process, confirm its well-known inherent tendency to over-segment the data (see Fig. 5).
5 Conclusions
We have provided a simple and efficient extension to the dominant-set clustering framework
to deal with the grouping of out-of-sample data. This makes the approach applicable to
very large grouping problems, such as high-resolution image segmentation, where it would
otherwise be impractical. Experiments show that the solutions extrapolated from the sparse
data are comparable with those of the dense problem, which in turn compare favorably with
spectral solutions such as normalized cut?s, and are obtained in much less time.
3 http://www.cs.berkeley.edu/projects/vision/grouping/segbench
Figure 5: Segmentation results on two 481 x 321 images. Left columns: original images. For each
image, the first line shows the major regions obtained with our approximation algorithm, while the
second line shows the results obtained with normalized cut.
References
[1] Y. Bengio, J.-F. Paiement, P. Vincent, O. Delalleau, N. Le Roux, and M. Ouimet. Out-of-sample extensions for LLE, Isomap, MDS, eigenmaps, and spectral clustering. In: S. Thrun, L. Saul, and B. Schölkopf (Eds.), Advances in Neural Information Processing Systems 16, MIT Press, Cambridge, MA, 2004.
[2] C. Fowlkes, S. Belongie, F. Chun, and J. Malik. Spectral grouping using the Nyström method. IEEE Trans. Pattern Anal. Machine Intell. 26:214-225, 2004.
[3] T. Hofmann and J. M. Buhmann. Pairwise data clustering by deterministic annealing. IEEE Trans. Pattern Anal. Machine Intell. 19:1-14, 1997.
[4] D. Luenberger. Linear and Nonlinear Programming. Addison-Wesley, Reading, MA, 1984.
[5] T. S. Motzkin and E. G. Straus. Maxima for graphs and a new proof of a theorem of Turán. Canad. J. Math. 17:533-540, 1965.
[6] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In: T. G. Dietterich, S. Becker, and Z. Ghahramani (Eds.), Advances in Neural Information Processing Systems 14, MIT Press, Cambridge, MA, pp. 849-856, 2002.
[7] M. Pavan and M. Pelillo. A new graph-theoretic approach to clustering and segmentation. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 145-152, 2003.
[8] M. Pavan and M. Pelillo. Unsupervised texture segmentation by dominant sets and game dynamics. In Proc. 12th Int. Conf. on Image Analysis and Processing, pp. 302-307, 2003.
[9] M. Pavan and M. Pelillo. Dominant sets and hierarchical clustering. In Proc. 9th Int. Conf. on Computer Vision, pp. 362-369, 2003.
[10] P. Perona and W. Freeman. A factorization approach to grouping. In: H. Burkhardt and B. Neumann (Eds.), Computer Vision - ECCV'98, pp. 655-670. Springer, Berlin, 1998.
[11] V. Roth, J. Laub, M. Kawanabe, and J. M. Buhmann. Optimal cluster preserving embedding of nonmetric proximity data. IEEE Trans. Pattern Anal. Machine Intell. 25:1540-1551, 2003.
[12] S. Sarkar and K. Boyer. Quantitative measures of change based on feature organization: Eigenvalues and eigenvectors. Computer Vision and Image Understanding 71:110-136, 1998.
[13] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Machine Intell. 22:888-905, 2000.
[14] J. W. Weibull. Evolutionary Game Theory. MIT Press, Cambridge, MA, 1995.
[15] Y. Weiss. Segmentation using eigenvectors: A unifying view. In Proc. 7th Int. Conf. on Computer Vision, pp. 975-982, 1999.
1,791 | 2,627 | Joint Probabilistic Curve Clustering and
Alignment
Scott Gaffney and Padhraic Smyth
School of Information and Computer Science
University of California, Irvine, CA 92697-3425
{sgaffney,smyth}@ics.uci.edu
Abstract
Clustering and prediction of sets of curves is an important problem in
many areas of science and engineering. It is often the case that curves
tend to be misaligned from each other in a continuous manner, either in
space (across the measurements) or in time. We develop a probabilistic
framework that allows for joint clustering and continuous alignment of
sets of curves in curve space (as opposed to a fixed-dimensional feature-vector space). The proposed methodology integrates new probabilistic
alignment models with model-based curve clustering algorithms. The
probabilistic approach allows for the derivation of consistent EM learning algorithms for the joint clustering-alignment problem. Experimental
results are shown for alignment of human growth data, and joint clustering and alignment of gene expression time-course data.
1 Introduction
We introduce a novel methodology for the clustering and prediction of sets of smoothly
varying curves while jointly allowing for the learning of sets of continuous curve transformations. Our approach is to formulate models for both the clustering and alignment
sub-problems and integrate them into a unified probabilistic framework that allows for the
derivation of consistent learning algorithms. The alignment sub-problem is handled with
the introduction of a novel curve alignment procedure employing model priors over the set
of possible alignments leading to the derivation of EM learning algorithms that formalize
the so-called Procrustes approach for curve data [1]. These alignment models are then
integrated into a finite mixture model setting in which the clustering is carried out. We
make use of both polynomial and spline regression mixture models to complete the joint
clustering-alignment framework.
The following simple illustrative example demonstrates the importance of jointly handling
the clustering-alignment problem as opposed to treating alignment and clustering separately. Figure 1(a) shows a simulated set of curves which have been subjected to random translations in time. The underlying generative model contains three clusters each
described by a cubic polynomial (not shown). Figure 1(b) shows the output of the proposed joint EM algorithm introduced in this paper, where curves have been simultaneously aligned and clustered. The algorithm recovers the hidden labels and alignments near-perfectly in this case. On the other hand, Figure 1(c) shows the result of first clustering
Figure 1: Comparison of joint EM and sequential clustering-alignment: (a, top-left) unlabelled simulated data with hidden alignments; (b, top-right) solution recovered by joint
EM; (c, bottom-left) partial solution after clustering first, and (d, bottom-right) final solution after aligning clustered data in (c).
the unaligned data in Figure 1(a), while Figure 1(d) shows the final result of aligning each
of the found clusters individually. The sequential approach results in significant misclassification and incorrect alignment demonstrating that a two-stage approach can be quite
suboptimal when compared to a joint clustering-alignment methodology. (Similar results,
not shown, are obtained when the curves are first aligned and then clustered?see [2] for
full details.)
There has been little prior work on the specific problem of joint curve clustering and alignment, but there is related work in other areas. For example, clustering of gene-expression
time profiles with mixtures of splines was addressed in [3]. However, alignment was only
considered as a post-processing step to compare cluster results among related datasets. In
image analysis, the transformed mixture of Gaussians (TMG) model uses a probabilistic
framework and an EM algorithm to jointly learn clustering and alignment of image patches
subject to various forms of linear transformations [4]. However, this model only considers
sets of transformations in discrete pixel space, whereas we are focused on curve modelling
that allows for arbitrary continuous alignment in time and space. Another branch of work
in image analysis focuses on the problem of estimating correspondences of points across
images [5] (or vertices across graphs [6]), using EM or deterministic annealing algorithms.
The results we describe here differ primarily in that (a) we focus specifically on sets of
curves rather than image data (generally making the problem more tractable), (b) we focus on clustering and alignment rather than just alignment, (c) we allow continuous affine
transformations in time and measurement space, and (d) we have a fully generative probabilistic framework allowing for (for example) the incorporation of informative priors on
transformations if such prior information exists.
In earlier related work we developed general techniques for curve clustering (e.g., [7])
and also proposed techniques for transformation-invariant curve clustering with discrete
time alignment and Gaussian mixture models for curves [8, 9]. In this paper we provide
a much more general framework that allows for continuous alignment in both time and
measurement space for a general class of ?cluster shape? models, including polynomials
and splines.
2 Joint clustering and alignment
It is useful to represent curves as variable-length vectors. In this case, y i is a curve that
consists of a sequence of n i observations or measurements. The j-th measurement of y i
is denoted by y ij and is usually taken to be univariate (the generalization to multivariate
observations is straightforward). The associated covariate of y i is written as xi in the same
manner. x i is often thought of as time so that x ij gives the time at which y ij was observed.
Regression mixture models can be effectively used to cluster this type of curve data [10].
In the standard setup, y i is modelled using a normal (Gaussian) regression model in which
$y_i = X_i \beta + \epsilon_i$, where $\beta$ is a $(p+1) \times 1$ coefficient vector, $\epsilon_i$ is a zero-mean Gaussian noise variable, and $X_i$ is the regression matrix. The form of $X_i$ depends on the type of regression
model employed. For polynomial regression, X i is often associated with the standard
Vandermonde matrix; and for spline regression, X i takes the form of a spline-basis matrix
(see, e.g., [7] for more details). The mixture model is completed by repeating this model
over $K$ clusters and indexing the parameters by $k$ so that, for example, $y_i = X_i \beta_k + \epsilon_i$ gives the regression model for $y_i$ under the $k$-th cluster.
B-splines [11] are particularly efficient for computational purposes due to the block-diagonal basis matrices that result. Using B-splines, the curve point $y_{ij}$ can be represented as the linear combination $y_{ij} = B_{ij} c$, in which the vector $B_{ij}$ gives the vector of B-spline basis functions evaluated at $x_{ij}$, and $c$ gives the spline coefficient vector [2]. The full curve $y_i$ can then be written compactly as $y_i = B_i c$, in which the spline basis matrix takes the form $B_i = [B_{i1}' \cdots B_{in_i}']'$. Spline regression models can be easily integrated into the regression mixture model framework by equating the regression matrix $X_i$ with the spline basis matrix $B_i$. In what follows, we use the more general notation $X_i$ in favor of the more specific $B_i$.
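To make the spline-basis construction concrete, here is a minimal Python sketch of building $B_i$ for one curve; the knot placement and degree are illustrative choices of ours, and `BSpline.design_matrix` (available in recent SciPy releases) stands in for whatever basis routine one prefers.

```python
import numpy as np
from scipy.interpolate import BSpline

def spline_basis(x, knots, degree=3):
    # Clamped knot vector: repeat the boundary knots `degree` times so the
    # basis spans the whole interval covered by `knots`.
    t = np.r_[[knots[0]] * degree, knots, [knots[-1]] * degree]
    # Row j holds the basis functions evaluated at x_j, so y_i ~ B_i c.
    return BSpline.design_matrix(x, t, degree).toarray()

x_i = np.linspace(0.0, 15.0, 17)                     # observation times of one curve
B_i = spline_basis(x_i, np.linspace(-0.5, 15.5, 8))  # n_i x (number of basis functions)
```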
2.1 Joint model definition
The joint clustering-alignment model definition is based on a regression mixture model
that has been augmented with up to four individual random transformation parameters or
variables (ai , bi , ci , di ). The ai and bi allow for scaling and translation in time, while the c i
and di allow for scaling and translation in measurement space. The model definition takes
the form
$$y_i = c_i\,[a_i x_i - b_i]\,\beta_k + d_i + \epsilon_i, \qquad (1)$$
in which $[a_i x_i - b_i]$ represents the regression matrix $X_i$ (either spline or polynomial) evaluated at the transformed time $a_i x_i - b_i$. Below we use the matrix $X_i$ to denote $[a_i x_i - b_i]$ when parsimony is required. It is assumed that $\epsilon_i$ is a zero-mean Gaussian vector with covariance $\sigma_k^2 I$.
The conditional density
$$p_k(y_i \mid a_i, b_i, c_i, d_i) = N\bigl(y_i \mid c_i\,[a_i x_i - b_i]\,\beta_k + d_i,\; \sigma_k^2 I\bigr) \qquad (2)$$
gives the probability density of y i when all the transformation parameters (as well as cluster
membership) are known. (Note that the density on the left is implicitly conditioned on an
appropriate set of parameters; this is always assumed in what follows.) In general, the values for the transformation parameters are unknown. Treating this as a standard hidden-data
problem, it is useful to think of each of the transformation parameters as random variables
that are curve-specific but with ?population-level? prior probability distributions. In this
way, the transformation parameters and the model parameters can be learned simultaneously in an efficient manner using EM.
2.2 Transformation priors
Priors are attached to each of the transformation variables in such a way that the identity
transformation is the most likely transformation. A useful prior for this is the Gaussian density $N(\mu, \tau^2)$ with mean $\mu$ and variance $\tau^2$. The time transformation priors are specified as
$$a_i \sim N(1, r_k^2), \qquad b_i \sim N(0, s_k^2), \qquad (3)$$
and the measurement space priors are given as
$$c_i \sim N(1, u_k^2), \qquad d_i \sim N(0, v_k^2). \qquad (4)$$
Note that the identity transformation is indeed the most likely. All of the variance parameters are cluster-specific in general; however, any subset of these parameters can be "tied"
across clusters if desired in a specific application. Note that these priors technically allow
for negative scaling in time and in measurement space. In practice this is typically not a
problem, though one can easily specify other priors (e.g., log-normal) to strictly disallow
this possibility. It should be noted that each of the prior variance parameters are learned
from the data in the ensuing EM algorithm. We do not make use of hyperpriors for these
prior parameters; however, it is straightforward to extend the method to allow hyperpriors
if desired.
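For intuition, the following sketch draws a single curve from the generative model (1) under the priors (3)-(4); the polynomial regression matrix, parameter values and function names are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_curve(x, beta, sigma, r, s, u, v):
    # Curve-specific transformations drawn from the priors (3)-(4);
    # the identity transformation (a=1, b=0, c=1, d=0) is the most likely draw.
    a, b = rng.normal(1.0, r), rng.normal(0.0, s)
    c, d = rng.normal(1.0, u), rng.normal(0.0, v)
    X = np.vander(a * x - b, len(beta), increasing=True)  # regression matrix at a*x - b
    return c * (X @ beta) + d + rng.normal(0.0, sigma, size=x.shape)

x = np.linspace(-5.0, 15.0, 25)
y = sample_curve(x, beta=np.array([10.0, 2.0, -0.5, 0.02]), sigma=1.0,
                 r=0.05, s=1.0, u=0.1, v=5.0)
```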
2.3 Full probability model
The joint density of $y_i$ and the set of transformation variables $\theta_i = \{a_i, b_i, c_i, d_i\}$ can be written succinctly as
$$p_k(y_i, \theta_i) = p_k(y_i \mid \theta_i)\, p_k(\theta_i), \qquad (5)$$
where $p_k(\theta_i) = N(a_i \mid 1, r_k^2)\, N(b_i \mid 0, s_k^2)\, N(c_i \mid 1, u_k^2)\, N(d_i \mid 0, v_k^2)$. The space transformation parameters can be integrated out of (5), resulting in the marginal of $y_i$ conditioned only on the time transformation parameters. This conditional marginal takes the form
$$p_k(y_i \mid a_i, b_i) = \int p_k(y_i, c_i, d_i \mid a_i, b_i)\, dc_i\, dd_i = N\bigl(y_i \mid X_i \beta_k,\; U_{ik} + V_k - \sigma_k^2 I\bigr), \qquad (6)$$
with $U_{ik} = u_k^2 X_i \beta_k \beta_k^T X_i^T + \sigma_k^2 I$ and $V_k = v_k^2 \mathbf 1 \mathbf 1^T + \sigma_k^2 I$. The unconditional (though,
still cluster-dependent) marginal for y i cannot be computed analytically since a i , bi cannot
be analytically integrated-out. Instead, we use numerical Monte Carlo integration for this
task. The resulting unconditional marginal for y i can be approximated by
$$p_k(y_i) = \int p_k(y_i \mid a_i, b_i)\, p_k(a_i)\, p_k(b_i)\, da_i\, db_i \;\approx\; \frac{1}{M} \sum_m p_k\bigl(y_i \mid a_i^{(m)}, b_i^{(m)}\bigr), \qquad (7)$$
where the $M$ Monte Carlo samples are taken according to
$$a_i^{(m)} \sim N(1, r_k^2), \quad \text{and} \quad b_i^{(m)} \sim N(0, s_k^2), \quad \text{for } m = 1, \ldots, M. \qquad (8)$$
A mixture results when cluster membership is unknown:
$$p(y_i) = \sum_k \alpha_k\, p_k(y_i). \qquad (9)$$
The log-likelihood of all $n$ curves $Y = \{y_i\}$ follows directly from this approximation and takes the form
$$\log p(Y) \;\approx\; \sum_i \log \sum_{m,k} \alpha_k\, p_k\bigl(y_i \mid a_i^{(m)}, b_i^{(m)}\bigr) \;-\; n \log M. \qquad (10)$$
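A minimal sketch of this Monte Carlo step might look as follows (assuming a polynomial regression matrix for concreteness; the interface and names are ours, not the paper's):

```python
import numpy as np
from scipy.stats import multivariate_normal

def log_pk(y, x, beta_k, sigma2, u2, v2, r2, s2, M=200, rng=None):
    # Monte Carlo estimate of Eq. (7): average the conditional marginal (6)
    # over M prior draws (8) of the time transformations (a_i, b_i).
    rng = rng or np.random.default_rng(0)
    n, logs = len(y), np.empty(M)
    for m in range(M):
        a = rng.normal(1.0, np.sqrt(r2))
        b = rng.normal(0.0, np.sqrt(s2))
        Xb = np.vander(a * x - b, len(beta_k), increasing=True) @ beta_k
        # Covariance of (6): u^2 (X beta)(X beta)' + v^2 11' + sigma^2 I.
        cov = u2 * np.outer(Xb, Xb) + v2 * np.ones((n, n)) + sigma2 * np.eye(n)
        logs[m] = multivariate_normal.logpdf(y, mean=Xb, cov=cov)
    return np.logaddexp.reduce(logs) - np.log(M)   # log p_k(y_i)
```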
2.4 EM algorithm
We derive an EM algorithm that simultaneously allows the learning of both the model
parameters and the transformation variables $\theta$, with time-complexity that is linear in the total number of data points $N = \sum_i n_i$. First, let $z_i$ give the cluster membership for curve $y_i$. Now, regard the transformation variables $\{\theta_i\}$ as well as the cluster memberships $\{z_i\}$ as being hidden. The complete-data log-likelihood function is defined as the joint log-likelihood of $Y$ and the hidden data $\{\theta_i, z_i\}$. This can be written as the sum over all $n$ curves of the log of the product of $\alpha_{z_i}$ and the cluster-dependent joint density in (5). This function takes the form
$$L_c = \sum_i \log\bigl[\alpha_{z_i}\, p_{z_i}(y_i \mid \theta_i)\, p_{z_i}(\theta_i)\bigr]. \qquad (11)$$
In the E-step, the posterior $p(\theta_i, z_i \mid y_i)$ is calculated and then used to take the posterior expectation of Equation (11). This expectation is then used in the M-step to calculate the re-estimation equations for updating the model parameters $\{\beta_k, \sigma_k^2, r_k^2, s_k^2, u_k^2, v_k^2\}$.
2.5 E-step
The posterior $p(\theta_i, z_i \mid y_i)$ can be factorized as $p_{z_i}(\theta \mid y_i)\, p(z_i \mid y_i)$. The second factor is the membership probability $w_{ik}$ that $y_i$ was generated by cluster $k$. It can be rewritten as $p(z_i = k \mid y_i) \propto \alpha_k\, p_k(y_i)$ and evaluated using Equation (7). The first factor requires a bit more work. Further factoring reveals that $p_{z_i}(\theta \mid y_i) = p_{z_i}(c_i, d_i \mid a_i, b_i, y_i)\, p_{z_i}(a_i, b_i \mid y_i)$. The new first factor $p_{z_i}(c_i, d_i \mid a_i, b_i, y_i)$ can be solved for exactly by noting that it is proportional to a bivariate normal distribution for each $z_i$ [2]. The new second factor $p_{z_i}(a_i, b_i \mid y_i)$ cannot, in general, be solved for analytically, so instead we use an approximation.
The fact that posterior densities tend towards highly peaked Gaussian densities has been widely noted (e.g., [12]) and leads to the normal approximation of posterior densities. To make the approximation here, the vector $(\hat a_{ik}, \hat b_{ik})$ representing the multi-dimensional mode of $p_k(a_i, b_i \mid y_i)$, the covariance matrix $V_{a_i b_i}^{(k)}$ for $(\hat a_{ik}, \hat b_{ik})$, and the separate variances $V_{a_{ik}}, V_{b_{ik}}$ must be found. These can readily be estimated using a Nelder-Mead optimization method. Experiments have shown this approximation works well across a variety of experimental and real-world data sets [2].
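The mode-finding step can be sketched with SciPy's Nelder-Mead routine; `neg_log_joint` below is a hypothetical callable wrapping the model-specific terms, not code from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def posterior_mode_ab(neg_log_joint, a0=1.0, b0=0.0):
    # neg_log_joint(a, b) = -log[ p_k(y_i | a, b) p_k(a) p_k(b) ]; its minimizer
    # is the mode (a_ik, b_ik) used in the normal approximation of the posterior.
    res = minimize(lambda ab: neg_log_joint(ab[0], ab[1]),
                   x0=np.array([a0, b0]), method="Nelder-Mead")
    return res.x
```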
The above calculations of the posterior $p(\theta_i, z_i \mid y_i)$ allow the posterior expectation of the
complete-data log-likelihood in Equation (11) to be solved for. This expectation results
in the so-called Q-function which is maximized in the M-step. Although the derivation
is quite complex, the Q-function can be calculated exactly for polynomial regression [2];
for spline regression, the basis functions do not afford an exact formula for the solution of
the Q-function. However, in the spline case, removal of a few problematic variance terms
gives an efficient approximation (the interested reader is referred to [2] for more details).
2.6 M-step
The M-step is straightforward since most of the hard work is done in the E-step. The Q-function is maximized over the set of parameters $\{\beta_k, \sigma_k^2, r_k^2, s_k^2, u_k^2, v_k^2\}$ for $1 \le k \le K$. The derived solutions are as follows:
$$\hat r_k^2 = \frac{1}{\sum_i w_{ik}} \sum_i w_{ik}\bigl(\hat a_{ik}^2 + V_{a_{ik}}\bigr), \qquad \hat s_k^2 = \frac{1}{\sum_i w_{ik}} \sum_i w_{ik}\bigl(\hat b_{ik}^2 + V_{b_{ik}}\bigr),$$
$$\hat u_k^2 = \frac{1}{\sum_i w_{ik}} \sum_i w_{ik}\bigl(\hat c_{ik}^2 + V_{c_{ik}}\bigr), \qquad \hat v_k^2 = \frac{1}{\sum_i w_{ik}} \sum_i w_{ik}\bigl(\hat d_{ik}^2 + V_{d_{ik}}\bigr),$$
[Figure 2: two panels of height-acceleration curves plotted against Age; see caption below.]
Figure 2: Curves measuring the height acceleration for 39 boys; (left) smoothed versions
of raw observations, (right) automatically aligned curves.
$$\hat\beta_k = \Bigl[\sum_i w_{ik}\bigl(\hat c_{ik}^2\, \tilde X_{ik}^T \tilde X_{ik} + V_{xx_i}\bigr)\Bigr]^{-1} \Bigl[\sum_i w_{ik}\bigl(\hat c_{ik}\, \tilde X_{ik}^T (y_i - \hat d_{ik}) + V_{x_i}^T y_i - V_{xcd}\,\mathbf 1\bigr)\Bigr],$$
and
$$\hat\sigma_k^2 = \frac{1}{\sum_i w_{ik} n_i} \sum_i w_{ik} \Bigl( \bigl\| y_i - \hat c_{ik}\, \tilde X_{ik} \hat\beta_k - \hat d_{ik} \bigr\|^2 - 2\, y_i^T V_{x_i} \hat\beta_k + \hat\beta_k^T V_{xx_i} \hat\beta_k + 2\, \hat\beta_k^T V_{xcd}\, \mathbf 1 + n_i V_{d_{ik}} \Bigr),$$
where $\tilde X_{ik} = [\hat a_{ik} x_i - \hat b_{ik}]$, and $V_{xx_i}$, $V_{x_i}$, $V_{xcd}$ are special "variance" matrices whose components are functions of the posterior expectations of $\theta$ calculated in the E-step (the exact forms of these matrices can be found in [2]).
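All four variance updates share the same weighted-average form, which the following one-line sketch (with hypothetical argument names of our own) makes explicit:

```python
import numpy as np

def mstep_prior_variance(w_k, mode_sq, post_var):
    # Shared pattern of the variance updates above, e.g.
    # r_k^2 <- sum_i w_ik (a_ik^2 + V_a_ik) / sum_i w_ik, where w_k are the
    # membership probabilities, mode_sq the squared posterior modes, and
    # post_var the posterior variances from the E-step.
    return np.sum(w_k * (mode_sq + post_var)) / np.sum(w_k)
```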
3 Experimental results and conclusions
The results of a simple demonstration of EM-based alignment (using splines and the learning algorithm of the previous section, but with no clustering) are shown in Figure 2. In the
left plot are a set of smoothed curves representing the acceleration of height for each of 39
boys whose heights were measured at 29 observation times over the ages of 1 to 18 [1]. Notice that the curves share a similar shape but seem to be misaligned in time due to individual
growth dynamics. The right plot shows the same acceleration curves after processing from
our spline alignment model using quartic splines with 8 uniformly spaced knots allowing
for a maximum time translation of 2 units. The x-axis in this plot can be seen as canonical
(or "average") age. The aligned curves in the right plot of Figure 2 represent the average
behavior in a much clearer way. For example, it appears there is an interval of 2.5 years
from peak (age 12.5) to trough (age 15) that describes the average cycle that all boys go
through. The results demonstrate that it is common for important features of curves to be
randomly translated in time and that it is possible to use the data to recover these underlying
hidden transformations using our alignment models.
Next we briefly present an application of the joint clustering-alignment model to the problem of gene expression clustering. We analyze the alpha arrest data described in [13] that
captures gene expression levels at 7 minute intervals for two consecutive cell cycles (totaling 17 measurements per gene). Clustering is often used in gene expression analysis
to reveal groups of genes with similar profiles that may be physically related to the same
underlying biological process (e.g., [13]). It is well-known that time-delays play an impor-
[Figure 3: six panels of expression profiles plotted against Time or Canonical time; see caption below.]
Figure 3: Three clusters for the time translation alignment model (left) and the nonalignment model (right).
tant role in gene regulation, and thus, curves measured over time which represent the same
process may often be misaligned from each other [14].
Since these gene expression data are already normalized, we did not allow for transformations in measurement space. We only allowed for translations in time since experts do
not expect scaling in time to be a factor in these data. For the curve model, cubic splines
with 6 uniformly spaced knots across the interval from -4 to 21 were chosen, allowing
for a maximum time translation of 4 units. Due to limited space, we present a single case
of comparison between a standard spline regression mixture model (SRM) and an SRM
that jointly allows for time translations. Ten random starts of EM were allowed for each
algorithm with the highest likelihood model selected for comparison for each algorithm. It
is common to assume that there are five distinct clusters of genes in these data; as such we
set K = 5 for each algorithm [13].
Three of the resulting clusters from the two methods are shown in Figure 3. The left
column of the figure shows the output from the joint clustering-alignment model, while
the right column shows the output from the standard cluster model. It is apparent that
the time-aligned clusters represent the mean behavior more accurately. The overall cluster
variance is much lower than in the non-aligned clustering. The results also demonstrate
the appearance of cluster-dependent alignment effects. Out-of-sample experiments (not
shown here) show that the joint model produces better predictive models than the standard
clustering method. Experimental results on a variety of other data sets are provided in [2],
including applications to clustering of cyclone trajectories.
4 Conclusions
We proposed a general probabilistic framework for joint clustering and alignment of sets
of curves. The experimental results indicate that the approach provides a new and useful tool for curve analysis in the face of underlying hidden transformations. The resulting EM-based learning algorithms have time-complexity that is linear in the number
of measurements; in contrast, many existing curve alignment algorithms themselves are $O(n^2)$ (e.g., dynamic time warping) without regard to clustering. The incorporation of
splines gives the method an overall non-parametric freedom which leads to general applicability.
Acknowledgements
This material is based upon work supported by the National Science Foundation under
grants No. SCI-0225642 and IIS-0431085.
References
[1] J. O. Ramsay and B. W. Silverman. Functional Data Analysis. Springer-Verlag, New York, NY, 1997.
[2] Scott J. Gaffney. Probabilistic Curve-Aligned Clustering and Prediction with Regression Mixture Models. Ph.D. Dissertation, University of California, Irvine, 2004.
[3] Z. Bar-Joseph et al. A new approach to analyzing gene expression time series data. Journal of Computational Biology, 10(3):341-356, 2003.
[4] B. J. Frey and N. Jojic. Transformation-invariant clustering using the EM algorithm. IEEE Trans. PAMI, 25(1):1-17, January 2003.
[5] H. Chui, J. Zhang, and A. Rangarajan. Unsupervised learning of an atlas from unlabeled point-sets. IEEE Trans. PAMI, 26(2):160-172, February 2004.
[6] A. D. J. Cross and E. R. Hancock. Graph matching with a dual-step EM algorithm. IEEE Trans. PAMI, 20(11):1236-1253, November 1998.
[7] S. J. Gaffney and P. Smyth. Curve clustering with random effects regression mixtures. In C. M. Bishop and B. J. Frey, editors, Proc. Ninth Inter. Workshop on Artificial Intelligence and Stats, Key West, FL, January 3-6, 2003.
[8] D. Chudova, S. J. Gaffney, and P. J. Smyth. Probabilistic models for joint clustering and time-warping of multi-dimensional curves. In Proc. of the Nineteenth Conference on Uncertainty in Artificial Intelligence (UAI-2003), Acapulco, Mexico, August 7-10, 2003.
[9] D. Chudova, S. J. Gaffney, E. Mjolsness, and P. J. Smyth. Translation-invariant mixture models for curve clustering. In Proc. Ninth ACM SIGKDD Inter. Conf. on Knowledge Discovery and Data Mining, Washington D.C., August 24-27, New York, 2003. ACM Press.
[10] S. Gaffney and P. Smyth. Trajectory clustering with mixtures of regression models. In Surajit Chaudhuri and David Madigan, editors, Proc. Fifth ACM SIGKDD Inter. Conf. on Knowledge Discovery and Data Mining, August 15-18, pages 63-72, N.Y., 1999. ACM Press.
[11] P. H. C. Eilers and B. D. Marx. Flexible smoothing with B-splines and penalties. Statistical Science, 11(2):89-121, 1996.
[12] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis. Chapman & Hall, New York, NY, 1995.
[13] P. T. Spellman et al. Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization. Molec. Bio. Cell, 9(12):3273-3297, December 1998.
[14] J. Aach and G. M. Church. Aligning gene expression time series with time warping algorithms. Bioinformatics, 17(6):495-508, 2001.
A direct formulation for sparse PCA
using semidefinite programming
Alexandre d?Aspremont
EECS Dept.
U.C. Berkeley
Berkeley, CA 94720
[email protected]
Michael I. Jordan
EECS and Statistics Depts.
U.C. Berkeley
Berkeley, CA 94720
[email protected]
Laurent El Ghaoui
SAC Capital
540 Madison Avenue
New York, NY 10029
[email protected]
(on leave from EECS, U.C. Berkeley)
Gert R. G. Lanckriet
EECS Dept.
U.C. Berkeley
Berkeley, CA 94720
[email protected]
Abstract
We examine the problem of approximating, in the Frobenius-norm sense,
a positive, semidefinite symmetric matrix by a rank-one matrix, with an
upper bound on the cardinality of its eigenvector. The problem arises
in the decomposition of a covariance matrix into sparse factors, and has
wide applications ranging from biology to finance. We use a modification of the classical variational representation of the largest eigenvalue
of a symmetric matrix, where cardinality is constrained, and derive a
semidefinite programming based relaxation for our problem.
1
Introduction
Principal component analysis (PCA) is a popular tool for data analysis and dimensionality
reduction. It has applications throughout science and engineering. In essence, PCA finds
linear combinations of the variables (the so-called principal components) that correspond
to directions of maximal variance in the data. It can be performed via a singular value
decomposition (SVD) of the data matrix A, or via an eigenvalue decomposition if A is a
covariance matrix.
The importance of PCA is due to several factors. First, by capturing directions of maximum variance in the data, the principal components offer a way to compress the data with
minimum information loss. Second, the principal components are uncorrelated, which can
aid with interpretation or subsequent statistical analysis. On the other hand, PCA has a
number of well-documented disadvantages as well. A particular disadvantage that is our
focus here is the fact that the principal components are usually linear combinations of all
variables. That is, all weights in the linear combination (known as loadings), are typically
non-zero. In many applications, however, the coordinate axes have a physical interpreta-
tion; in biology for example, each axis might correspond to a specific gene. In these cases,
the interpretation of the principal components would be facilitated if these components involve very few non-zero loadings (coordinates). Moreover, in certain applications, e.g.,
financial asset trading strategies based on principal component techniques, the sparsity of
the loadings has important consequences, since fewer non-zero loadings imply fewer fixed
transaction costs.
It would thus be of interest to be able to discover ?sparse principal components?, i.e., sets of
sparse vectors spanning a low-dimensional space that explain most of the variance present
in the data. To achieve this, it is necessary to sacrifice some of the explained variance and
the orthogonality of the principal components, albeit hopefully not too much.
Rotation techniques are often used to improve interpretation of the standard principal components [1]. [2] considered simple principal components by restricting the loadings to
take values from a small set of allowable integers, such as 0, 1, and ?1. [3] propose an
ad hoc way to deal with the problem, where the loadings with small absolute value are
thresholded to zero. We will call this approach "simple thresholding." Later, a method
called SCoTLASS was introduced by [4] to find modified principal components with possible zero loadings. In [5] a new approach, called sparse PCA (SPCA), was proposed to
find modified components with zero loadings, based on the fact that PCA can be written
as a regression-type optimization problem. This allows the application of LASSO [6], a
penalization technique based on the L1 norm.
In this paper, we propose a direct approach (called DSPCA in what follows) that improves
the sparsity of the principal components by directly incorporating a sparsity criterion in the
PCA problem formulation and then relaxing the resulting optimization problem, yielding a
convex optimization problem. In particular, we obtain a convex semidefinite programming
(SDP) formulation.
SDP problems can be solved in polynomial time via general-purpose interior-point methods [7], and our current implementation of DSPCA makes use of these general-purpose
methods. This suffices for an initial empirical study of the properties of DSPCA and for
comparison to the algorithms discussed above on problems of small to medium dimensionality. For high-dimensional problems, the general-purpose methods are not viable and it is
necessary to attempt to exploit special structure in the problem. It turns out that our problem can be expressed as a special type of saddle-point problem that is well suited to recent
specialized algorithms, such as those described in [8, 9]. These algorithms offer a significant reduction in computational time compared to generic SDP solvers. In the current
paper, however, we restrict ourselves to an investigation of the basic properties of DSPCA
on problems for which the generic methods are adequate.
Our paper is structured as follows. In Section 2, we show how to efficiently derive a
sparse rank-one approximation of a given matrix using a semidefinite relaxation of the
sparse PCA problem. In Section 3, we derive an interesting robustness interpretation of our
technique, and in Section 4 we describe how to use this interpretation in order to decompose
a matrix into sparse factors. Section 5 outlines different algorithms that can be used to solve
the problem, while Section 6 presents numerical experiments comparing our method with
existing techniques.
Notation
Here, $S^n$ is the set of symmetric matrices of size $n$. We denote by $\mathbf 1$ a vector of ones, while $\mathrm{Card}(x)$ is the cardinality (number of non-zero elements) of a vector $x$. For $X \in S^n$, $\|X\|_F$ is the Frobenius norm of $X$, i.e., $\|X\|_F = \sqrt{\mathrm{Tr}(X^2)}$, and $\lambda_{\max}(X)$ is the maximum eigenvalue of $X$, while $|X|$ is the matrix whose elements are the absolute values of the elements of $X$.
2
Sparse eigenvectors
In this section, we derive a semidefinite programming (SDP) relaxation for the problem
of approximating a symmetric matrix by a rank one matrix with an upper bound on the
cardinality of its eigenvector. We first reformulate this as a variational problem, we then
obtain a lower bound on its optimal value via an SDP relaxation (we refer the reader to [10]
for an overview of semidefinite programming).
Let $A \in S^n$ be a given $n \times n$ positive semidefinite, symmetric matrix and let $k$ be an integer with $1 \le k \le n$. We consider the problem:
$$\phi_k(A) := \min_x \; \|A - xx^T\|_F \quad \text{subject to} \quad \mathrm{Card}(x) \le k, \qquad (1)$$
in the variable $x \in \mathbf R^n$. We can solve instead the following equivalent problem:
$$\phi_k^2(A) = \min \; \|A - \lambda xx^T\|_F^2 \quad \text{subject to} \quad \|x\|_2 = 1, \; \lambda \ge 0, \; \mathrm{Card}(x) \le k,$$
in the variables $x \in \mathbf R^n$ and $\lambda \in \mathbf R$. Minimizing over $\lambda$, we obtain:
$$\phi_k^2(A) = \|A\|_F^2 - \nu_k(A),$$
where
$$\nu_k(A) := \max_x \; x^T A x \quad \text{subject to} \quad \|x\|_2 = 1, \; \mathrm{Card}(x) \le k. \qquad (2)$$
To compute a semidefinite relaxation of this program (see [10], for example), we rewrite (2) as:
$$\nu_k(A) := \max_X \; \mathrm{Tr}(AX) \quad \text{subject to} \quad \mathrm{Tr}(X) = 1, \; \mathrm{Card}(X) \le k^2, \; X \succeq 0, \; \mathrm{Rank}(X) = 1, \qquad (3)$$
in the symmetric matrix variable $X \in S^n$. Indeed, if $X$ is a solution to the above problem, then $X \succeq 0$ and $\mathrm{Rank}(X) = 1$ mean that we have $X = xx^T$, and $\mathrm{Tr}(X) = 1$ implies that $\|x\|_2 = 1$. Finally, if $X = xx^T$ then $\mathrm{Card}(X) \le k^2$ is equivalent to $\mathrm{Card}(x) \le k$.
Naturally, problem (3) is still non-convex and very difficult to solve, due to the rank and cardinality constraints. Since for every $u \in \mathbf R^p$, $\mathrm{Card}(u) = q$ implies $\|u\|_1 \le \sqrt q\, \|u\|_2$, we can replace the non-convex constraint $\mathrm{Card}(X) \le k^2$ by a weaker but convex one: $\mathbf 1^T |X| \mathbf 1 \le k$, where we have exploited the property that $\|X\|_F = \sqrt{x^T x} = 1$ when $X = xx^T$ and $\mathrm{Tr}(X) = 1$. If we also drop the rank constraint, we can form a relaxation of (3) and (2) as:
$$\bar\nu_k(A) := \max_X \; \mathrm{Tr}(AX) \quad \text{subject to} \quad \mathrm{Tr}(X) = 1, \; \mathbf 1^T |X| \mathbf 1 \le k, \; X \succeq 0, \qquad (4)$$
which is a semidefinite program (SDP) in the variable $X \in S^n$, where $k$ is an integer parameter controlling the sparsity of the solution. The optimal value of this program will be an upper bound on the optimal value $\nu_k(A)$ of the variational program in (2); hence it gives a lower bound on the optimal value $\phi_k(A)$ of the original problem (1). Finally, the optimal solution $X$ will not always be of rank one, but we can truncate it and keep only its dominant eigenvector $x$ as an approximate solution to the original problem (1). In Section 6 we show that in practice the solution $X$ to (4) tends to have a rank very close to one, and that its dominant eigenvector is indeed sparse.
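For moderate problem sizes, the relaxation (4) can be handed directly to a generic SDP modeling layer. The sketch below uses the CVXPY package, which is our own choice for illustration; the experiments in this paper rely on SEDUMI and the specialized methods discussed in Section 5.

```python
import cvxpy as cp
import numpy as np

def dspca_relaxation(A, k):
    # Relaxation (4): maximize Tr(AX) s.t. Tr(X) = 1, 1'|X|1 <= k, X PSD.
    n = A.shape[0]
    X = cp.Variable((n, n), symmetric=True)
    constraints = [cp.trace(X) == 1, cp.sum(cp.abs(X)) <= k, X >> 0]
    cp.Problem(cp.Maximize(cp.trace(A @ X)), constraints).solve()
    return X.value
```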
3
A robustness interpretation
In this section, we show that problem (4) can be interpreted as a robust formulation of the
maximum eigenvalue problem, with additive, component-wise uncertainty in the matrix A.
We again assume A to be symmetric and positive semidefinite. In the previous section,
we considered in (2) a cardinality-constrained variational formulation of the maximum
eigenvalue problem. Here we look at a small variation where we penalize the cardinality
and solve:
$$\max_x \; x^T A x - \rho\,\mathrm{Card}^2(x) \quad \text{subject to} \quad \|x\|_2 = 1,$$
in the variable $x \in \mathbf R^n$, where the parameter $\rho > 0$ controls the size of the penalty. Let us remark that we can easily move from the constrained formulation in (4) to the penalized form in (5) by duality. This problem is again non-convex and very difficult to solve. As in the last section, we can form the equivalent program:
$$\max_X \; \mathrm{Tr}(AX) - \rho\,\mathrm{Card}(X) \quad \text{subject to} \quad \mathrm{Tr}(X) = 1, \; X \succeq 0, \; \mathrm{Rank}(X) = 1,$$
in the variable $X \in S^n$. Again, we get a relaxation of this program by forming:
$$\max_X \; \mathrm{Tr}(AX) - \rho\,\mathbf 1^T |X| \mathbf 1 \quad \text{subject to} \quad \mathrm{Tr}(X) = 1, \; X \succeq 0, \qquad (5)$$
which is a semidefinite program in the variable $X \in S^n$, where $\rho > 0$ controls the penalty size. We can rewrite this last problem as:
$$\max_{X \succeq 0,\ \mathrm{Tr}(X) = 1} \; \min_{|U_{ij}| \le \rho} \; \mathrm{Tr}\bigl(X(A + U)\bigr), \qquad (6)$$
and we get a dual to (5) as:
$$\min_U \; \lambda_{\max}(A + U) \quad \text{subject to} \quad |U_{ij}| \le \rho, \quad i, j = 1, \ldots, n, \qquad (7)$$
which is a maximum eigenvalue problem with variable $U \in \mathbf R^{n \times n}$. This gives a natural robustness interpretation to the relaxation in (5): it corresponds to a worst-case maximum eigenvalue computation, with component-wise bounded noise of intensity $\rho$ on the matrix coefficients.
4
Sparse decomposition
Here, we use the results obtained in the previous two sections to describe a sparse equivalent
to the PCA decomposition technique. Suppose that we start with a matrix $A_1 \in S^n$; our objective is to decompose it in factors with target sparsity $k$. We solve the relaxed problem in (4):
$$\max_X \; \mathrm{Tr}(A_1 X) \quad \text{subject to} \quad \mathrm{Tr}(X) = 1, \; \mathbf 1^T |X| \mathbf 1 \le k, \; X \succeq 0,$$
to get a solution $X_1$, and truncate it to keep only the dominant (sparse) eigenvector $x_1$. Finally, we deflate $A_1$ to obtain
$$A_2 = A_1 - (x_1^T A_1 x_1)\, x_1 x_1^T,$$
and iterate to obtain further components.
The question is now: When do we stop the decomposition? In the PCA case, the decomposition stops naturally after $\mathrm{Rank}(A)$ factors have been found, since $A_{\mathrm{Rank}(A)+1}$ is then
equal to zero. In the case of the sparse decomposition, we have no guarantee that this will
happen. However, the robustness interpretation gives us a natural stopping criterion: if all
the coefficients in $|A_i|$ are smaller than the noise level $\rho$ (computed in the last section) then
we must stop since the matrix is essentially indistinguishable from zero. So, even though
we have no guarantee that the algorithm will terminate with a zero matrix, the decomposition will in practice terminate as soon as the coefficients in $A$ become indistinguishable
from the noise.
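Putting the pieces together, a hedged sketch of the full decomposition loop (reusing the hypothetical `dspca_relaxation` from the sketch in Section 2) is:

```python
import numpy as np

def sparse_decomposition(A, k, rho, max_factors=10):
    # Iterate the relaxation-truncate-deflate scheme of this section, stopping
    # once every entry of the residual is below the noise level rho.
    factors = []
    for _ in range(max_factors):
        if np.max(np.abs(A)) < rho:
            break                                 # residual indistinguishable from noise
        X = dspca_relaxation(A, k)                # SDP (4); see the sketch in Section 2
        x1 = np.linalg.eigh(X)[1][:, -1]          # dominant (approximately sparse) eigenvector
        factors.append(x1)
        A = A - (x1 @ A @ x1) * np.outer(x1, x1)  # deflation step
    return factors
```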
5
Algorithms
For problems of moderate size, our SDP can be solved efficiently using solvers such as
SEDUMI [7]. For larger-scale problems, we need to resort to other types of algorithms
for convex optimization. Of special interest are the recently-developed algorithms due to
[8, 9]. These are first-order methods specialized to problems having a specific saddlepoint structure. It turns out that our problem, when expressed in the saddle-point form (6),
falls precisely into this class of algorithms. Judged from the results presented in [9], in
the closely related context of computing the Lovascz capacity of a graph, the theoretical
complexity, as well as practical performance, of the method as applied to (6) should exhibit
very significant improvements over the general-purpose interior-point algorithms for SDP.
Of course, nothing comes without a price: for fixed problem size, the first-order methods
mentioned above converge in $O(1/\epsilon)$, where $\epsilon$ is the required accuracy on the optimal value, while interior-point methods converge in $O(\log(1/\epsilon))$. We are currently evaluating
the impact of this tradeoff both theoretically and in practice.
6
Numerical results
In this section, we illustrate the effectiveness of the proposed approach both on an artificial
and a real-life data set. We compare with the other approaches mentioned in the introduction: PCA, PCA with simple thresholding, SCoTLASS and SPCA. The results show that
our approach can achieve more sparsity in the principal components than SPCA does, while
explaining as much variance. We begin by a simple example illustrating the link between
k and the cardinality of the solution.
6.1
Controlling sparsity with k
Here, we illustrate on a simple example how the sparsity of the solution to our relaxation
evolves as $k$ varies from 1 to $n$. We generate a $10 \times 10$ matrix $U$ with uniformly distributed coefficients in $[0, 1]$. We let $v$ be a sparse vector with:
$$v = (1, 0, 1, 0, 1, 0, 1, 0, 1, 0).$$
We then form a test matrix $A = U^T U + \sigma vv^T$, where $\sigma$ is a signal-to-noise ratio equal to 15 in our case. We sample 50 different matrices $A$ using this technique. For each $k$ between 1 and 10 and each $A$, we solve the SDP in (4). We then extract the first
eigenvector of the solution X and record its cardinality. In Figure 1, we show the mean
cardinality (and standard deviation) as a function of k. We observe that k + 1 is actually a
good predictor of the cardinality, especially when k + 1 is close to the actual cardinality (5
in this case).
[Figure 1: plot of mean cardinality versus k; see caption below.]
Figure 1: Cardinality versus k.
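The experiment can be sketched as follows for a single random draw of $A$ (reusing the hypothetical `dspca_relaxation` from Section 2; the threshold for counting an entry as non-zero is our own choice):

```python
import numpy as np

def recovered_cardinality(A, k, tol=1e-3):
    # Solve (4), keep the dominant eigenvector of X, and count its "large" entries.
    X = dspca_relaxation(A, k)
    x1 = np.linalg.eigh(X)[1][:, -1]
    return int(np.sum(np.abs(x1) > tol * np.abs(x1).max()))

rng = np.random.default_rng(0)
U = rng.uniform(0.0, 1.0, (10, 10))
v = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0], dtype=float)
A = U.T @ U + 15.0 * np.outer(v, v)
print([recovered_cardinality(A, k) for k in range(1, 11)])
```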
6.2
Artificial data
We consider the simulation example proposed by [5]. In this example, three hidden factors
are created:
$$V_1 \sim N(0, 290), \quad V_2 \sim N(0, 300), \quad V_3 = -0.3 V_1 + 0.925 V_2 + \epsilon, \quad \epsilon \sim N(0, 300), \qquad (8)$$
with $V_1$, $V_2$ and $\epsilon$ independent. Afterwards, 10 observed variables are generated as follows:
$$X_i = V_j + \epsilon_i^j, \quad \epsilon_i^j \sim N(0, 1),$$
with $j = 1$ for $i = 1, 2, 3, 4$, $j = 2$ for $i = 5, 6, 7, 8$ and $j = 3$ for $i = 9, 10$, and $\{\epsilon_i^j\}$ independent for $j = 1, 2, 3$, $i = 1, \ldots, 10$. Instead of sampling data from this model and
computing an empirical covariance matrix of (X1 , . . . , X10 ), we use the exact covariance
matrix to compute principal components using the different approaches.
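Since the model (8) is linear-Gaussian, the exact covariance of $(X_1, \ldots, X_{10})$ can be assembled in a few lines; the sketch below is our own reconstruction of that computation.

```python
import numpy as np

# Exact covariance of (X1, ..., X10) implied by (8):
# build Cov(V) for the three factors, then Cov(X_i, X_j) = Cov(V_g(i), V_g(j)) + [i == j].
a = np.array([-0.3, 0.925])               # V3 = a1*V1 + a2*V2 + eps
var_v12 = np.array([290.0, 300.0])
cov_v = np.zeros((3, 3))
cov_v[:2, :2] = np.diag(var_v12)
cov_v[2, :2] = cov_v[:2, 2] = a * var_v12
cov_v[2, 2] = a @ (a * var_v12) + 300.0   # Var(eps) = 300
group = [0] * 4 + [1] * 4 + [2] * 2       # factor behind each of X1..X10
C = cov_v[np.ix_(group, group)] + np.eye(10)
```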
Since the three underlying factors have about the same variance, and the first two are associated with 4 variables while the last one is only associated with 2 variables, V1 and V2
are almost equally important, and they are both significantly more important than V3 . This,
together with the fact that the first 2 principal components explain more than 99% of the
total variance, suggests that considering two sparse linear combinations of the original variables should be sufficient to explain most of the variance in data sampled from this model.
This is also discussed by [5]. The ideal solution would thus be to only use the variables
(X1 , X2 , X3 , X4 ) for the first sparse principal component, to recover the factor V1 , and
only (X5 , X6 , X7 , X8 ) for the second sparse principal component to recover V2 .
Using the true covariance matrix and the oracle knowledge that the ideal sparsity is 4, [5]
performed SPCA (with $\lambda = 0$). We carry out our algorithm with $k = 4$. The results are
reported in Table 1, together with results for PCA, simple thresholding and SCoTLASS
(t = 2). Notice that SPCA, DSPCA and SCoTLASS all find the correct sparse principal
components, while simple thresholding yields inferior performance. The latter wrongly
includes the variables X9 and X10 to explain most variance (probably it gets misled by
the high correlation between V2 and V3 ), even more, it assigns higher loadings to X9 and
X10 than to one of the variables (X5 , X6 , X7 , X8 ) that are clearly more important. Simple
thresholding correctly identifies the second sparse principal component, probably because
V1 has a lower correlation with V3 . Simple thresholding also explains a bit less variance
than the other methods.
6.3
Pit props data
The pit props data (consisting of 180 observations and 13 measured variables) was introduced by [11] and has become a standard example of the potential difficulty in interpreting
Table 1: Loadings and explained variance for first two principal components, for the artificial example. "ST" is the simple thresholding method, "other" is all the other methods: SPCA, DSPCA and SCoTLASS.

              X1     X2     X3     X4     X5     X6     X7     X8     X9    X10   explained variance
PCA, PC1     .116   .116   .116   .116  -.395  -.395  -.395  -.395  -.401  -.401        60.0%
PCA, PC2    -.478  -.478  -.478  -.478  -.145  -.145  -.145  -.145   .010   .010        39.6%
ST, PC1        0      0      0      0      0      0   -.497  -.497  -.503  -.503        38.8%
ST, PC2      -.5    -.5    -.5    -.5      0      0      0      0      0      0         38.6%
other, PC1     0      0      0      0     .5     .5     .5     .5      0      0         40.9%
other, PC2    .5     .5     .5     .5      0      0      0      0      0      0         39.5%
principal components. [4] applied SCoTLASS to this problem and [5] used their SPCA
approach, both with the goal of obtaining sparse principal components that can better be
interpreted than those of PCA. SPCA performs better than SCoTLASS: it identifies principal components with respectively 7, 4, 4, 1, 1, and 1 non-zero loadings, as shown in
Table 2. As shown in [5], this is much sparser than the modified principal components by
SCoTLASS, while explaining nearly the same variance (75.8% versus 78.2% for the 6
first principal components). Also, simple thresholding of PCA, with a number of non-zero
loadings that matches the result of SPCA, does worse than SPCA in terms of explained
variance.
Following this previous work, we also consider the first 6 principal components. We try
to identify principal components that are sparser than the best result of this previous work,
i.e., SPCA, but explain the same variance. Therefore, we choose values for k of 5, 2, 2, 1,
1, 1 (two less than those of the SPCA results reported above, but no less than 1). Figure 2
shows the cumulative number of non-zero loadings and the cumulative explained variance
(measuring the variance in the subspace spanned by the first i eigenvectors). The results
for DSPCA are plotted with a red line and those for SPCA with a blue line. The cumulative
explained variance for normal PCA is depicted with a black line. It can be seen that our
approach is able to explain nearly the same variance as the SPCA method, while clearly reducing the number of non-zero loadings for the first 6 principal components. Adjusting the
first k from 5 to 6 (relaxing the sparsity), we obtain the results plotted with a red dash-dot
line: still better in sparsity, but with a cumulative explained variance that is fully competitive with SPCA. Moreover, as in the SPCA approach, the important variables associated
with the 6 principal components do not overlap, which leads to a clearer interpretation. Table 2 shows the first three corresponding principal components for the different approaches
(DSPCAw5 for k1 = 5 and DSPCAw6 for k1 = 6).
Table 2: Loadings for first three principal components, for the real-life example.
              topdiam  length  moist  testsg  ovensg  ringtop  ringbud  bowmax  bowdist  whorls  clear  knots  diaknot
SPCA, PC1       -.477   -.476      0       0    .177        0    -.250   -.344    -.416   -.400      0      0        0
SPCA, PC2           0       0   .785    .620       0        0        0   -.021        0       0      0   .013        0
SPCA, PC3           0       0      0       0       0     .640     .589    .492        0       0      0      0    -.015
DSPCAw5, PC1    -.560   -.583      0       0       0        0    -.263   -.099    -.371   -.362      0      0        0
DSPCAw5, PC2        0       0   .707    .707       0        0        0       0        0       0      0      0        0
DSPCAw5, PC3        0       0      0       0       0    -.793    -.610       0        0       0      0      0     .012
DSPCAw6, PC1    -.491   -.507      0       0       0    -.067    -.357   -.234    -.387   -.409      0      0        0
DSPCAw6, PC2        0       0   .707    .707       0        0        0       0        0       0      0      0        0
DSPCAw6, PC3        0       0      0       0       0    -.873    -.484       0        0       0      0      0     .057
7
Conclusion
The semidefinite relaxation of the sparse principal component analysis problem proposed
here appears to significantly improve the solution?s sparsity, while explaining the same
[Figure 2: cumulative cardinality (left) and cumulative explained variance (right) as a function of the number of principal components; see caption below.]
Figure 2: Cumulative cardinality and cumulative explained variance for SPCA and DSPCA
as a function of the number of principal components: black line for normal PCA, blue for
SPCA and red for DSPCA (full for k1 = 5 and dash-dot for k1 = 6).
variance as previously proposed methods in the examples detailed above. The algorithms
we used here handle moderate size problems efficiently. We are currently working on
large-scale extensions using first-order techniques.
Acknowledgements
Thanks to Andrew Mullhaupt and Francis Bach for useful suggestions. We would like to acknowledge support from ONR MURI N00014-00-1-0637, Eurocontrol-C20052E/BM/03,
NASA-NCC2-1428.
References
[1] I. T. Jolliffe. Rotation of principal components: choice of normalization constraints. Journal of Applied Statistics, 22:29-35, 1995.
[2] S. Vines. Simple principal components. Applied Statistics, 49:441-451, 2000.
[3] J. Cadima and I. T. Jolliffe. Loadings and correlations in the interpretation of principal components. Journal of Applied Statistics, 22:203-214, 1995.
[4] I. T. Jolliffe and M. Uddin. A modified principal component technique based on the lasso. Journal of Computational and Graphical Statistics, 12:531-547, 2003.
[5] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. Technical report, Statistics Department, Stanford University, 2004.
[6] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:267-288, 1996.
[7] J. F. Sturm. Using SeDuMi 1.0x, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, 11:625-653, 1999.
[8] Y. Nesterov. Smooth minimization of non-smooth functions. CORE working paper, 2003.
[9] A. Nemirovski. Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle-point problems. MINERVA working paper, 2004.
[10] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[11] J. Jeffers. Two case studies in the application of principal components. Applied Statistics, 16:225-236, 1967.
A feature selection algorithm based on the global
minimization of a generalization error bound
Dori Peleg
Department of Electrical Engineering
Technion
Haifa, Israel
[email protected]
Ron Meir
Department of Electrical Engineering
Technion
Haifa, Israel
[email protected]
Abstract
A novel linear feature selection algorithm is presented based on the
global minimization of a data-dependent generalization error bound.
Feature selection and scaling algorithms often lead to non-convex optimization problems, which in many previous approaches were addressed
through gradient descent procedures that can only guarantee convergence
to a local minimum. We propose an alternative approach, whereby the
global solution of the non-convex optimization problem is derived via
an equivalent optimization problem. Moreover, the convex optimization
task is reduced to a conic quadratic programming problem for which efficient solvers are available. Highly competitive numerical results on both
artificial and real-world data sets are reported.
1
Introduction
This paper presents a new approach to feature selection for linear classification, where the goal is to learn a decision rule from a training set of pairs $S_n = \{(x^{(i)}, y^{(i)})\}_{i=1}^n$, where $x^{(i)} \in \mathbf R^d$ are input patterns and $y^{(i)} \in \{-1, 1\}$ are the corresponding labels. The goal of a classification algorithm is to find a separating function $f(\cdot)$, based on the training set,
which will generalize well, i.e. classify new patterns with as few errors as possible. Feature selection schemes often utilize, either explicitly or implicitly, scaling variables $\{\lambda_j\}_{j=1}^d$, which multiply each feature. The aim of such schemes is to optimize an objective function over $\lambda \in \mathbf R^d$. Feature selection can be viewed as the case $\lambda_j \in \{0, 1\}$, $j = 1, \ldots, d$, where a feature $j$ is removed if $\lambda_j = 0$. The more general case of feature scaling is considered here, i.e. $\lambda_j \in \mathbf R_+$. Clearly feature selection is a special case of feature scaling.
The overwhelming majority of feature selection algorithms in the literature, separate the
feature selection and classification tasks, while solving either a combinatorial or a nonconvex optimization problem (e.g. [1],[2],[3],[4]). In either case there is no guarantee
of efficiently locating a global optimum. This is particularly problematic in large scale
classification tasks which may initially contain several thousand features. Moreover, the
objective function of many feature selection algorithms is unrelated to the Generalization
Error (GE). Even for global solutions of such algorithms there is no theoretical guarantee
of proximity to the minimum of the GE.
To overcome the above shortcomings we propose a feature selection algorithm based on
the Global Minimization of an Error Bound (GMEB). This approach is based on simultaneously finding the optimal classifier and scaling factors of each feature by minimizing a
GE bound. As in previous feature selection algorithms, a non-convex optimization problem
must be solved. A novelty of this paper is the use of the equivalent optimization problems
concept, whereby a global optimum is guaranteed in polynomial time.
The development of the GMEB algorithm begins with the design of a GE bound for feature selection. This is followed by formulating an optimization problem which minimizes
this bound. Invariably, the resulting problem is non-convex. To avoid the drawbacks of
solving non-convex optimization problems, an equivalent convex optimization problem is
formulated whereby the exact global optimum of the non-convex problem can be computed.
Next the dual problem is derived and formulated as a Conic Quadratic Programming (CQP)
problem. This is advantageous because efficient CQP algorithms are available. Comparative numerical results on both artificial and real-world datasets are reported.
The notation and definitions were adopted from [5]. All vectors are column vectors unless
transposed. Mathematical operators on scalars such as the square root are expanded to vectors by operating componentwise. The notation $\mathbb{R}_+$ denotes the nonnegative real numbers. The
notation $x \preceq y$ denotes componentwise inequality between the vectors $x$ and $y$.
A vector with all components equal to one is denoted as 1. The domain of a function f is
denoted as dom f . The set of points for which the objective and all the constraint functions
are defined is called the domain of the optimization problem, D. For lack of space, only
proof sketches will be presented; the complete proofs are deferred to the full paper.
2
The Generalization Error Bounds
We establish GE bounds which are used to motivate an effective algorithm for feature scaling. Consider a sample $S_n = \{(x^{(1)}, y^{(1)}), \ldots, (x^{(n)}, y^{(n)})\}$, $x^{(i)} \in X \subseteq \mathbb{R}^d$, $y^{(i)} \in Y$,
where the pairs $(x^{(i)}, y^{(i)})$ are generated independently from some distribution $P$. A set of nonnegative variables $\sigma = (\sigma_1, \ldots, \sigma_d)^T$ is introduced to allow the additional freedom of feature
scaling. The scaling variables $\sigma$ transform the linear classifiers from $f(x) = w^T x + b$ to
$f(x) = w^T \Sigma x + b$, where $\Sigma = \operatorname{diag}(\sigma)$. It may seem at first glance that these classifiers
are essentially the same, since $w$ can be redefined as $\Sigma w$. However, the role of $\sigma$ is to offer
an extra degree of freedom to scale the features independently of $w$, in a way which can be
exploited by an optimization algorithm.
For a real-valued classifier $f$, the 0-1 loss is the probability of error given by
$P(yf(x) \le 0) = \mathbb{E}\, I(yf(x) \le 0)$, where $I(\cdot)$ is the indicator function.
Definition 1 The margin cost function $\phi_\gamma : \mathbb{R} \to \mathbb{R}_+$ is defined as $\phi_\gamma(z) = 1 - z/\gamma$ if
$z \le \gamma$, and zero otherwise (note that $I(yf(x) \le 0) \le \phi_\gamma(yf(x))$).
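As a quick illustration, here is a minimal vectorized sketch of the margin cost function in Python; the function name and the sanity check are ours, not part of the paper:

```python
import numpy as np

def phi(z, gamma=1.0):
    """Margin cost phi_gamma(z): 1 - z/gamma for z <= gamma, zero otherwise.

    For every gamma > 0 it upper-bounds the 0-1 loss indicator I(z <= 0).
    """
    return np.where(z <= gamma, 1.0 - z / gamma, 0.0)

# Check the upper-bound property on a few margins z = y*f(x):
z = np.array([-2.0, 0.0, 0.5, 2.0])
assert np.all((z <= 0).astype(float) <= phi(z))
```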
Consider a classifier $f$ for which the input features have been rescaled, namely $f(\Sigma x)$ is
used instead of $f(x)$. Let $F$ be some class of functions and let $\hat{\mathbb{E}}_n$ be the empirical mean.
Using standard GE bounds, one can establish that for any choice of $\sigma$, with probability at
least $1 - \delta$, for any $f \in F$,
$$P(yf(\Sigma x) \le 0) \le \hat{\mathbb{E}}_n\, \phi_\gamma(yf(\Sigma x)) + \Delta(f, \gamma, \delta), \qquad (1)$$
for some appropriate complexity measure $\Delta$ depending on the bounding technique.
Unfortunately, (1) cannot be used directly when attempting to select optimal values of the
variables $\sigma$ because the bound is not uniform in $\sigma$. In particular, we need a result which
holds with probability $1 - \delta$ for every choice of $\sigma$.
Definition 2 The indices of training patterns with labels $\{-1, 1\}$ are denoted by $I_-$, $I_+$
respectively. The cardinalities of the sets $I_-$, $I_+$ are $n_-$, $n_+$ respectively. The empirical
means of the second-order moment of the $j$th feature over the training patterns belonging to
the indices $I_-$, $I_+$ are
$$v_j^- = \frac{1}{n_-} \sum_{i \in I_-} \bigl(x_j^{(i)}\bigr)^2, \qquad v_j^+ = \frac{1}{n_+} \sum_{i \in I_+} \bigl(x_j^{(i)}\bigr)^2,$$
respectively.
Theorem 3 Fix $B, r, \gamma > 0$, and suppose that $\{(x^{(i)}, y^{(i)})\}_{i=1}^n$ are chosen independently
at random according to some probability distribution $P$ on $X \times \{\pm 1\}$, where $\|x\| \le r$ for
$x \in X$. Define the class of functions
$$F = \bigl\{ f : f(x) = w^T \Sigma x + b,\ \|w\| \le B,\ |b| \le r,\ \sigma \succeq 0 \bigr\}.$$
Let $\sigma_0$ be an arbitrary positive number, and set $\grave\sigma_j = 2\max(\sigma_j, \sigma_0)$. Then with probability
at least $1 - \delta$, for every function $f \in F$,
$$P(yf(x) \le 0) \le \hat{\mathbb{E}}_n\, \phi_\gamma(yf(x)) + \frac{2B}{\gamma}\left( \sqrt{\frac{n_+}{n} \sum_{j=1}^d v_j^+ \grave\sigma_j^2} + \sqrt{\frac{n_-}{n} \sum_{j=1}^d v_j^- \grave\sigma_j^2} \right) + \epsilon, \qquad (2)$$
where $K(\sigma) = (B\|\grave\sigma\| + 1)r$, $\epsilon = \Delta(\gamma, \delta, \sigma)/\sqrt{n}$, and
$$\Delta(\gamma, \delta, \sigma) = \frac{2}{\gamma} + K(\sigma)\sqrt{2 \ln \log_2\!\left( \frac{2r}{\gamma\sigma_0}\sqrt{\textstyle\sum_{j=1}^d \grave\sigma_j^2} + 1 \right)} + K(\sigma)\sqrt{2\ln\frac{2}{\delta}}.$$
Proof sketch We begin by assuming a fixed upper bound on the values of $\sigma_j$, say $\sigma_j \le s_j$,
$j = 1, 2, \ldots, d$. This allows us to use the methods developed in [6] in order to establish
upper bounds on the Rademacher complexity of the class $F$ with $\sigma_j \le s_j$ for all $j$.
Finally, a simple variant of the union bound (the so-called multiple testing lemma) is used
in order to obtain a bound which is uniform with respect to $\sigma$ (see the proof technique of
Theorem 10 in [6]).
In principle, we would like to minimize the r.h.s. of (2) with respect to the variables $w, \sigma, b$.
However, in this work the focus is only on the data-dependent terms in (2), which include
the empirical error term and the weighted norms of $\sigma$. Note that all other terms of (2) are
of the same order of magnitude (as a function of $n$), but do not depend explicitly on the
data. It should be commented that the extra terms appearing in the bound arise because of
the assumed unboundedness of $\sigma$. Assuming $\sigma$ to be bounded, e.g. $\sigma \preceq s$, as is the case
in most other bounds in the literature, one may replace $\grave\sigma$ by $s$ in all terms except the first
two, thus removing the explicit dependence on $\sigma$.
The data-dependent terms of the GE bound (2) are the basis of the objective function
$$\frac{1}{n}\sum_{i=1}^n \phi_\gamma\bigl(y^{(i)} f(x^{(i)})\bigr) + \frac{C_+}{\gamma}\sqrt{\frac{n_+}{n}\sum_{j=1}^d v_j^+ \sigma_j^2} + \frac{C_-}{\gamma}\sqrt{\frac{n_-}{n}\sum_{j=1}^d v_j^- \sigma_j^2}, \qquad (3)$$
where $C_+ = C_- = 4$ and the variables are subject to $w^T w \le 1$, $\sigma \succeq 0$. The transition
was performed by setting $B = 1$ and replacing $\grave\sigma$ by $2\sigma$ (assuming that $\sigma > \sigma_0$).
Since only the sign of $f$ determines the estimated labels, $f$ can be multiplied
by any positive factor and produce identical results. The constraint on the norm of $w$
induces a normalization on the classifier $f(x) = w^T x + b$, without which the classifier is
not unique. However, by introducing the scale variables $\sigma$, the classifier was transformed to
$f(x) = w^T \Sigma x + b$. Hence, despite the constraint on $w$, the classifier is again not unique. If
the variable $\gamma$ in (3) is set to an arbitrary positive constant then the solution is unique. This
is true because $\gamma$ appears in (3) only through the expressions $b/\gamma, \sigma_1/\gamma, \ldots, \sigma_d/\gamma$. We chose $\gamma = 1$.
The objective function is comprised of two elements: (1) the mean of the penalty on the
training errors, and (2) two weighted $\ell_2$ norms of the scale variables $\sigma$. The second term acts
as the feature selection element. Note that the values of $C_+$, $C_-$ following from Theorem
3 depend specifically on the bounding technique used in the proof. To allow more generality and flexibility in practical applications, we propose to turn the norm terms of (3) into
inequality constraints which are bounded by hyperparameters $R_+$, $R_-$ respectively. The
interpretation of these hyperparameters is essentially the number of informative features.
We propose that $R_+$, $R_-$ are chosen via a Cross Validation (CV) scheme. These hyperparameters enable fine-tuning a general classifier to a specific classification task, as is done in
many other classification algorithms such as the SVM.
Note that the present bound is sensitive to a shift of the features. Therefore, as a preprocessing step, the features of the training patterns should be set to zero mean and the features
of the test set shifted accordingly.
3
The primal non-convex optimization problem
The problem of minimizing (3) with $\gamma = 1$ can then be expressed as
$$\begin{array}{ll}
\text{minimize} & \mathbf{1}^T \xi \\
\text{subject to} & w^T w \le 1 \\
& y^{(i)}\bigl(\textstyle\sum_{j=1}^d x_j^{(i)} w_j \sigma_j + b\bigr) \ge 1 - \xi_i, \quad i = 1, \ldots, n \\
& R_+ \ge \textstyle\sum_{j=1}^d v_j^+ \sigma_j^2 \\
& R_- \ge \textstyle\sum_{j=1}^d v_j^- \sigma_j^2 \\
& \xi, \sigma \succeq 0,
\end{array} \qquad (4)$$
with variables $w, \sigma \in \mathbb{R}^d$, $\xi \in \mathbb{R}^n$, $b \in \mathbb{R}$. Note that the constant factor $\frac{1}{n}$ was discarded.
Remark 4 Consider a solution of problem (4) in which $\sigma_j^* = 0$ for some feature $j$. Only
the constraint $w^T w \le 1$ affects the value of $w_j^*$. A unique solution is established by setting
$\sigma_j^* = 0 \Rightarrow w_j^* = 0$. If the original solution $w^*$ satisfies the constraint $w^T w \le 1$ then the
amended solution will also satisfy the constraint and won't affect the value of the objective
function.
The functions $w_j \sigma_j$ in the second set of inequality constraints are neither convex nor concave (in
fact they are quasiconcave [5]). To make matters worse, the functions $w_j \sigma_j$ are multiplied
by constants $-y^{(i)} x_j^{(i)}$ which can be either positive or negative. Consequently problem (4)
is not a convex optimization problem. The objective of Section 3.1 is to find the global
minimum of (4) in polynomial time despite its non-convexity.
3.1
Convexification
In this paper the informal definition of equivalent optimization problems is adopted from
[5, pp. 130-135]: two optimization problems are called equivalent if from a solution of
one, a solution of the other is found, and vice versa. Instead of detailing a complicated
formal definition of general equivalence, the specific equivalence relationships utilized in
this paper are either formally introduced or cited from [5].
The functions $w_j \sigma_j$ in problem (4) are not convex and the signs of the multiplying constants
$-y^{(i)} x_j^{(i)}$ are data dependent. The only functions that remain convex irrespective of the sign
of the constants which multiply them are linear functions. Therefore the functions $w_j \sigma_j$
must be transformed into linear functions.
However, such a transformation must also maintain the convexity of the objective function
and the remaining constraints. For this purpose the change of variables equivalence relationship, described in Appendix A, was utilized. The transformation $\phi : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}^d \times \mathbb{R}^d$
was used on the variables $w, \sigma$:
$$\sigma_j = +\sqrt{\tilde\sigma_j}, \qquad w_j = \frac{\tilde w_j}{\sqrt{\tilde\sigma_j}}, \qquad j = 1, \ldots, d, \qquad (5)$$
where $\operatorname{dom} \phi = \{(\tilde\sigma, \tilde w) \,|\, \tilde\sigma \succeq 0\}$. If $\tilde\sigma_j = 0$ then $\sigma_j = w_j = 0$ without regard to the
value of $\tilde w_j$, in accordance with Remark 4. Transformation (5) is clearly one-to-one and
$\phi(\operatorname{dom} \phi) \supseteq D$.
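A small numerical sanity check of transformation (5) can be written directly; the variable names are ours, and we keep the scale variables strictly positive to avoid the degenerate case handled in Remark 4:

```python
import numpy as np

rng = np.random.default_rng(0)
s_tilde = rng.random(5) + 0.1          # sigma-tilde > 0
w_tilde = rng.standard_normal(5)

sigma = np.sqrt(s_tilde)               # sigma_j = +sqrt(sigma-tilde_j)
w = w_tilde / np.sqrt(s_tilde)         # w_j = w-tilde_j / sqrt(sigma-tilde_j)

# The products w_j * sigma_j recover w-tilde, so the trained classifier
# f(x) = w-tilde^T x + b can be used directly (see Lemma 5 below).
assert np.allclose(w * sigma, w_tilde)
```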
Lemma 5 The problem
$$\begin{array}{ll}
\text{minimize} & \mathbf{1}^T \xi \\
\text{subject to} & y^{(i)}(\tilde w^T x^{(i)} + b) \ge 1 - \xi_i, \quad i = 1, \ldots, n \\
& \textstyle\sum_{j=1}^d \tilde w_j^2 / \tilde\sigma_j \le 1 \\
& R_+ \ge (v^+)^T \tilde\sigma \\
& R_- \ge (v^-)^T \tilde\sigma \\
& \xi, \tilde\sigma \succeq 0
\end{array} \qquad (6)$$
is convex and equivalent to the primal non-convex problem (4) under transformation (5).
Note that since $\tilde w_j = w_j \sigma_j$, the new classifier is $f(x) = \tilde w^T x + b$. Therefore there is no
need to invert transformation (5) to obtain the desired classifier. Also, one can use Schur's
complement [5] to transform the non-linear constraint into a sparse linear matrix inequality
constraint
$$\begin{pmatrix} \operatorname{diag}(\tilde\sigma) & \tilde w \\ \tilde w^T & 1 \end{pmatrix} \succeq 0.$$
Thus problem (6) can be cast as a Semi-Definite Programming (SDP) problem. The primal
problem therefore consists of $n + 2d + 1$ variables, $2n + d + 2$ linear inequality constraints
and a linear matrix inequality of dimension $(d+1) \times (d+1)$. Although the primal
problem (6) is convex, its size relies heavily on the number of features $d$, which is typically the
bottleneck for feature selection datasets. To alleviate this dependency, the dual problem is
formulated.
Theorem 6 (Dual problem) The dual optimization problem associated with problem (6)
is
$$\begin{array}{ll}
\text{maximize} & \mathbf{1}^T \alpha - \lambda_1 - R_+ \lambda_+ - R_- \lambda_- \\
\text{subject to} & \bigl( \textstyle\sum_{i=1}^n \alpha_i y^{(i)} x_j^{(i)},\ 2\lambda_1,\ \lambda_+ v_j^+ + \lambda_- v_j^- \bigr) \in K^r, \quad j = 1, \ldots, d \\
& \alpha^T y = 0 \\
& 0 \preceq \alpha \preceq \mathbf{1} \\
& \lambda_+, \lambda_- \ge 0,
\end{array} \qquad (7)$$
where $K^r$ is the Rotated Quadratic Cone (RQC), $K^r = \{(x, y, z) \in \mathbb{R}^n \times \mathbb{R} \times \mathbb{R} \,|\, x^T x \le 2yz,\ y \ge 0,\ z \ge 0\}$, and the variables are $\alpha \in \mathbb{R}^n$ and $\lambda_1, \lambda_+, \lambda_- \in \mathbb{R}$.
Theorem 7 (Strong duality) Strong duality holds between problems (6) and (7).
The dual problem (7) is a CQP problem. The number of variables is n + 3, there are
2n+2 linear inequality constraints, a single linear equality constraint and d RQC inequality
constraints. Due to the reduced computational complexity we used the dual formulation in
all the experiments.
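For concreteness, the convex formulation (6) can also be prototyped with a general-purpose modeling layer instead of a dedicated CQP solver; the sketch below uses cvxpy, and the function name, the zero-mean preprocessing assumption and the default solver choice are ours:

```python
import cvxpy as cp
import numpy as np

def gmeb_fit(X, y, R_plus, R_minus):
    """Sketch of problem (6): X is (n, d) with zero-mean features, y in {-1, +1}."""
    n, d = X.shape
    v_plus = np.mean(X[y == 1] ** 2, axis=0)    # v_j^+ from Definition 2
    v_minus = np.mean(X[y == -1] ** 2, axis=0)  # v_j^-
    w = cp.Variable(d)        # w-tilde
    s = cp.Variable(d)        # sigma-tilde
    b = cp.Variable()
    xi = cp.Variable(n)
    constraints = [
        cp.multiply(y, X @ w + b) >= 1 - xi,
        # quad_over_lin(w_j, s_j) = w_j^2 / s_j, so this is the RQC constraint
        sum(cp.quad_over_lin(w[j], s[j]) for j in range(d)) <= 1,
        v_plus @ s <= R_plus,
        v_minus @ s <= R_minus,
        xi >= 0,
        s >= 0,
    ]
    cp.Problem(cp.Minimize(cp.sum(xi)), constraints).solve()
    return w.value, b.value, s.value
```

This is only an illustration of the formulation; the paper's experiments use the dual (7) with an efficient CQP solver, which scales more weakly with $d$.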
4
Experiments
Several algorithms were comparatively evaluated on a number of artificial and real world
two class problem datasets. The GMEB algorithm was compared to the linear SVM (standard SVM with linear kernel) and the l1 SVM classifier [7].
4.1
Experimental Methodology
The algorithms are compared by two criteria: the number of selected features and the
error rates. The weight assigned by a linear classifier to a feature $j$ determines whether it
shall be "selected" or "rejected". To be selected, this weight must fulfil at least one of the following two
requirements:
1. Absolute measure: $|w_j| \ge \theta$.
2. Relative measure: $|w_j| / \max_j\{|w_j|\} \ge \theta$.
In this paper $\theta = 0.01$ was used. Ideally, $\theta$ should be set adaptively. Note that for the
GMEB algorithm $\tilde w$ should be used in place of $w$; a sketch of this rule follows.
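The selection rule reads directly as code; the function name is ours, and for GMEB the argument would be the learned $\tilde w$:

```python
import numpy as np

def selected_features(w, theta=0.01):
    # A feature is "selected" if it meets the absolute OR the relative measure.
    a = np.abs(np.asarray(w))
    return (a >= theta) | (a >= theta * a.max())
```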
The definition of the error rate is intrinsically entwined with the protocol for determining
the hyperparameters. Given an a-priori partitioning of the dataset into training and test sets,
the following protocol for determining the values of $R_+$, $R_-$ and defining the error rate is
suggested (a code schematic of the selection loop follows the list):
1. Define a set $R$ of values of the hyperparameters $R_+$, $R_-$ for all datasets. The set $R$
consists of a predetermined number of values. For each algorithm the cardinality
$|R| = 49$ was used.
2. Calculate the N-fold CV error for each value of $R_+$, $R_-$ from the set $R$ on the training
set. Five-fold CV was used throughout all the datasets.
3. Use the classifier with the value of $R_+$, $R_-$ which produced the lowest CV error
to classify the test set. This is the reported error rate.
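A schematic of steps 1-3 under stated assumptions: scikit-learn's KFold supplies the splits, while `fit` and `error` stand in for the GMEB training routine and the 0-1 test error, both placeholders of this sketch:

```python
import numpy as np
from sklearn.model_selection import KFold

def choose_hyperparams(X, y, grid, fit, error, n_folds=5):
    """Return the (R_plus, R_minus) pair from `grid` with the lowest CV error."""
    cv_errors = []
    for R_plus, R_minus in grid:
        fold_errors = []
        for train_idx, val_idx in KFold(n_splits=n_folds).split(X):
            model = fit(X[train_idx], y[train_idx], R_plus, R_minus)
            fold_errors.append(error(model, X[val_idx], y[val_idx]))
        cv_errors.append(np.mean(fold_errors))
    return grid[int(np.argmin(cv_errors))]
```

The selected pair is then used to refit on the full training set and report the test error, per step 3.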
If the dataset is not partitioned a-priori into a training and test set, it is randomly divided
into $n_p$ contiguous training and "test" sets. Each training set contains $\frac{n(n_p - 1)}{n_p}$ patterns and
the corresponding test set consists of $\frac{n}{n_p}$ patterns. Once the dataset is thus partitioned, the
above steps 1-3 can be implemented. The error rate and the number of selected features
are then defined as the average over the $n_p$ problems. The value $n_p = 10$ was used for all
datasets where an a-priori partitioning was not available.
The hyperparameter set $R$ used for the GMEB algorithm consisted of $7 \times 7$ linearly spaced
values between 1 and 10. For the SVM algorithms the set $R$ consisted of the values $\frac{1-\lambda}{\lambda}$,
where $\lambda \in \{0.02, 0.04, \ldots, 0.98\}$, i.e. 49 linearly spaced values between 0.02 and 0.98.
4.2
Data sets
Tests were performed on the "Linear problem" synthetic datasets as described in [2], and on
eight real-world problems. The number of features, the number of patterns and the partitioning into train and test sets of the real-world datasets are detailed in Table 2. The datasets
were taken from the UCI repository unless stated otherwise. Dataset (1) is the Wisconsin Diagnostic Breast Cancer ("WDBC") dataset; (2) is the "Multiple Features" dataset, first
introduced in [8]; (3), the "Internet Advertisements" dataset, was separated into a training
and test set randomly; (4) is the "Colon" dataset, taken from [2]; (5) is the "BUPA" dataset; (6)
the "Pima Indians Diabetes" dataset; (7) the "Cleveland heart disease" dataset; and (8) the
"Ionosphere" dataset.
Table 1: Mean and standard deviation of the mean test error rate percentage on synthetic
datasets given n training patterns. The number of selected features is in brackets.

n   | SVM                          | l1 SVM                       | GMEB
10  | 46.2 +/- 1.9 (197.1 +/- 2.1) | 49.6 +/- 1.9 (77.7 +/- 83.8) | 33.8 +/- 14.2 (3.7 +/- 2.1)
20  | 44.9 +/- 2.1 (196.8 +/- 1.9) | 38.5 +/- 12.7 (10.7 +/- 6.1) | 13.9 +/- 7.2 (4.8 +/- 2.7)
30  | 43.6 +/- 1.7 (196.7 +/- 2.8) | 27.4 +/- 12.4 (14.5 +/- 8.7) | 7.1 +/- 5.6 (5.1 +/- 2.3)
40  | 41.8 +/- 1.9 (197.2 +/- 1.8) | 19.2 +/- 6.9 (16.2 +/- 11.1) | 5.0 +/- 3.5 (5.5 +/- 2.1)
50  | 41.9 +/- 1.8 (196.6 +/- 2.6) | 16.0 +/- 5.3 (18.4 +/- 11.3) | 3.1 +/- 2.7 (5.1 +/- 1.8)
Table 2: The real-world datasets and the performance of the algorithms. The set R for the
linear SVM algorithm and for datasets 1, 5, 6 had to be adjusted to allow convergence.

Dataset | Feat. | Patt.    | Linear SVM                      | l1 SVM                         | GMEB
1       | 30    | 569      | 5.3 +/- 0.8 (27.3 +/- 0.3)      | 4.9 +/- 1.1 (16.4 +/- 1.3)     | 4.2 +/- 0.9 (6.0 +/- 0.3)
2       | 649   | 200/1800 | 0.3 (616)                       | 3.5 (15)                       | 0.2 (32)
3       | 1558  | 200/3080 | 5.3 (322)                       | 4.7 (12)                       | 5.5 (98)
4       | 2000  | 62       | 13.6 +/- 5.9 (1941.8 +/- 1.9)   | 10.7 +/- 4.4 (23.3 +/- 1.5)    | 10.7 +/- 4.4 (59.1 +/- 25.0)
5       | 6     | 345      | 33.1 +/- 3.5 (6.0 +/- 0.0)      | 33.6 +/- 3.6 (5.9 +/- 0.1)     | 34.2 +/- 4.4 (5.4 +/- 0.5)
6       | 8     | 768      | 22.8 +/- 1.5 (5.8 +/- 0.2)      | 22.9 +/- 1.4 (5.8 +/- 0.2)     | 22.5 +/- 1.8 (4.8 +/- 0.2)
7       | 13    | 297      | 17.5 +/- 1.9 (11.6 +/- 0.2)     | 16.8 +/- 1.6 (10.7 +/- 0.3)    | 15.5 +/- 2.0 (9.1 +/- 0.3)
8       | 34    | 351      | 11.7 +/- 2.6 (32.8 +/- 0.2)     | 12.0 +/- 2.3 (27.9 +/- 1.6)    | 10.0 +/- 2.3 (12.1 +/- 1.7)

4.3
Experimental results
Table 1 provides a comparison of the GMEB algorithm with the SVM algorithms on the
synthetic datasets. The Bayes error is 0.4%. For further numerical comparison see [3].
Note that the number of features selected by the l1 SVM and the GMEB algorithms increases
with the sample size. A possible explanation for this observation is that with only a few
training patterns a small training error can be achieved by many subsets containing a small
number of features, i.e. a sparse solution. The particular subset selected is essentially
random, leading to a large test error, possibly due to overfitting.
For all the synthetic datasets the GMEB algorithm clearly attained the lowest error rates.
On the real-world datasets it produced the lowest error rates and the smallest number of
features for the majority of datasets investigated.
4.4
Discussion
The GMEB algorithm performs comparatively well against the linear and l1 SVM algorithms, in regard to both the test error and the number of selected features. A possible
explanation is that the l1 SVM algorithm performs both classification and feature selection
with the same variable w. In contrast, the GMEB algorithm performs the feature selection
and classification simultaneously, while using variables ? and w respectively. The use of
two variables also allows the GMEB algorithm to reduce the weight of a feature j with both
wj and ?j , while the l1 SVM uses only wj . Perhaps this property of GMEB could explain
why it produces comparable (and at times better) results than the SVM algorithms both in
classification problems where feature selection is and is not required.
5
Summary and future work
This paper presented a feature selection algorithm motivated by minimizing a GE bound.
The global optimum of the objective function is found by solving a non-convex optimization problem. The equivalent optimization problems technique reduces this task to a convex
problem. The dual problem formulation depends more weakly on the number of features d
and this enabled an extension of the GMEB algorithm to large scale classification problems.
The GMEB classifier is a linear classifier. Linear classifiers are the most important type of
classifiers in a feature selection framework because feature selection is highly susceptible
to overfitting. We believe that the GMEB algorithm is just the first of a series of algorithms
which may globally minimize increasingly tighter bounds on the generalization error.
Acknowledgment R.M. is partially supported by the fund for promotion of research at the Technion
and by the Ollendorff foundation of the Electrical Engineering department at the Technion.
A
Change of variables
Consider the optimization problem
$$\begin{array}{ll} \text{minimize} & f_0(x) \\ \text{subject to} & f_i(x) \le 0, \quad i = 1, \ldots, m. \end{array} \qquad (8)$$
Suppose $\phi : \mathbb{R}^n \to \mathbb{R}^n$ is one-to-one, with image covering the problem domain $D$, i.e.,
$\phi(\operatorname{dom} \phi) \supseteq D$. We define functions $\tilde f_i$ as $\tilde f_i(z) = f_i(\phi(z))$, $i = 0, \ldots, m$. Now consider
the problem
$$\begin{array}{ll} \text{minimize} & \tilde f_0(z) \\ \text{subject to} & \tilde f_i(z) \le 0, \quad i = 1, \ldots, m, \end{array} \qquad (9)$$
with variable $z$. Problems (8) and (9) are said to be related by the change of variable $x = \phi(z)$ and are equivalent: if $x$ solves problem (8), then $z = \phi^{-1}(x)$ solves problem (9);
if $z$ solves problem (9), then $x = \phi(z)$ solves problem (8).
References
[1] Y. Grandvalet and S. Canu. Adaptive scaling for feature selection in SVMs. In S. Thrun, S. Becker, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 553-560. MIT Press, 2003.
[2] Jason Weston, Sayan Mukherjee, Olivier Chapelle, Massimiliano Pontil, Tomaso Poggio, and Vladimir Vapnik. Feature selection for SVMs. In Advances in Neural Information Processing Systems 13, pages 668-674, 2000.
[3] Alain Rakotomamonjy. Variable selection using SVM-based criteria. The Journal of Machine Learning Research, 3:1357-1370, 2003.
[4] Jason Weston, André Elisseeff, Bernhard Schölkopf, and Mike Tipping. Use of the zero norm with linear models and kernel methods. The Journal of Machine Learning Research, 3:1439-1461, March 2003.
[5] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004. http://www.stanford.edu/~boyd/cvxbook.html.
[6] R. Meir and T. Zhang. Generalization bounds for Bayesian mixture algorithms. Journal of Machine Learning Research, 4:839-860, 2003.
[7] Glenn Fung and O. L. Mangasarian. Data selection for support vector machines classifiers. In Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 64-70, 2000.
[8] Simon Perkins, Kevin Lacker, and James Theiler. Grafting: Fast, incremental feature selection by gradient descent in function space. Journal of Machine Learning Research, 3:1333-1356, March 2003.
1,794 | 263 | Designing Application-Specific Neural Networks Using the Genetic Algorithm
Steven A. Harp, Tariq Samad, Aloke Guha
Honeywell SSDC
1000 Boone Avenue North
Golden Valley, MN 55427
ABSTRACT
We present a general and systematic method for neural network
design based on the genetic algorithm. The technique works in
conjunction with network learning rules, addressing aspects of
the network's gross architecture, connectivity, and learning rule
parameters. Networks can be optimized for various application-specific criteria, such as learning speed, generalization, robustness
and connectivity. The approach is model-independent. We
describe a prototype system, NeuroGENESYS, that employs the
backpropagation learning rule. Experiments on several small
problems have been conducted. In each case, NeuroGENESYS
has produced networks that perform significantly better than the
randomly generated networks of its initial population. The computational feasibility of our approach is discussed.
1 INTRODUCTION
With the growing interest in the practical use of neural networks, addressing the
problem of customizing networks for specific applications is becoming increasingly critical. It has repeatedly been observed that different network structures
and learning parameters can substantially affect performance. Such important
aspects of neural network applications as generalization, learning speed, connectivity and tolerance to network damage are strongly related to the choice of
network architecture. Yet there are few analytic results, and few heuristics, that
can help the application developer design an appropriate network.
We have been investigating the use of the genetic algorithm (Goldberg, 1989;
Holland, 1975) for designing application-specific neural networks (Harp, Samad
and Guha, 1989ab). In our approach, the genetic algorithm is used to evolve
appropriate network structures and values of learning parameters. In contrast,
other recent applications of the genetic algorithm to neural networks (e.g., Davis
[1988], Whitley [1988]) have largely restricted the role of the genetic algorithm to
updating weights on a predetermined network structure-another logical
approach.
Several first-generation neural network application development tools already
exist. However, they are only partly effective: the complexity of the problem,
our limited understanding of the interdependencies between various network
design choices, and the extensive human effort involved permit only limited
exploration of the design space. An objective of our research is the development
of a next-generation neural network application development tool that can synthesize optimized custom networks. The genetic algorithm has been distinguished
by its relative immunity to high dimensionality, local minima and noise, and it is
therefore a logical candidate for solving the network optimization problem.
2 GENETIC SYNTHESIS OF NEURAL NETWORKS
Fig. 1 outlines our approach. A network is represented by a blueprint, a bitstring that encodes a number of characteristics of the network, including structural properties and learning parameter values. Each blueprint directs the creation of an actual network
with random initial weights. An instantiated network
is trained using some predetermined training algorithm and training data, and
the trained network can then be tested in various ways, e.g., on non-training
inputs, after disabling some units, and after perturbing learned weight values.
After testing, a network is evaluated: a fitness estimate is computed for it based
on appropriate criteria. This process of instantiation, training, testing and
evaluation is performed for each of a population of blueprints.
After the entire population is evaluated, the next generation of blueprints is produced. A number of genetic operators are employed, the most prominent of these
being crossover, in which two parent blueprints are spliced together to produce a
child blueprint (Goldberg, 1989). The higher the fitness of a blueprint, the
greater the probability of it being selected as a parent for the subsequent generation. Characteristics that are found useful will thereby tend to be emphasized in
the next generation, whereas harmful ones will tend to be suppressed.
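A minimal sketch of this blueprint cycle follows; every function argument (fitness evaluation, crossover, mutation) is a placeholder for the corresponding NeuroGENESYS component, and the fitness-proportional parent selection shown here is one common choice rather than a confirmed detail of the system:

```python
import random

def genetic_search(population, fitness, crossover, mutate, generations=50):
    # Each blueprint is instantiated, trained, tested and scored by `fitness`
    # (scores assumed positive); parents are then drawn with probability
    # proportional to their score, and children are built by crossover + mutation.
    for _ in range(generations):
        scores = [fitness(bp) for bp in population]
        parents = random.choices(population, weights=scores, k=2 * len(population))
        population = [mutate(crossover(parents[2 * i], parents[2 * i + 1]))
                      for i in range(len(population))]
    return max(population, key=fitness)
```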
The definition of network performance depends on the application. If the application requires good generalization capabilities, the results of testing on
(appropriately chosen) non-training data are important. If a network capable of
real-time learning is required, the learning rate must be optimized. For fast
response, the size of the network must be minimized. If hardware (especially
VLSI) implementation is a consideration, low connectivity is essential. In most
applications several such criteria must be considered. This important aspect of
application-specific network design is covered by the fitness function. In our
approach, the fitness of a network can be an arbitrary function of several distinct
performance and cost criteria, some or all of which can thereby be simultaneously optimized.
[Figure 1 shows the cycle: the genetic algorithm samples and synthesizes network blueprints; each instantiated network's performance is evaluated on test stimuli; the resulting fitness estimates feed back to the genetic algorithm.]
Figure 1: A population of network "blueprints" is cyclically updated by the genetic algorithm based on their fitness.
3 NEUROGENESYS
Our approach is model-independent: it can be applied to any existing or future
neural network model (including models without a training component). As a
first prototype implementation we have developed a working system called NeuroGENESYS. The current implementation uses a variant (Samad, 1988) of the
backpropagation learning algorithm (Werbos, 1974; Rumelhart, Hinton, and
Williams, 1985) as the training component and is restricted to feedforward networks.
Within these constraints, NeuroGENESYS is a reasonably general system. Networks can have arbitrary directed acyclic graph structures, where each vertex of
the graph corresponds to an area or layer of units and each edge to a projection
from one area to another. Units in an area have a spatial organization; the
current system arrays units in 2 dimensions. Each projection specifies independent radii of connectivity, one for each dimension. The radii of connectivity
allow localized receptive field structures. Within the receptive fields, connection
densities can be specified. Two learning parameters are associated with both projections and areas. Each projection has a learning rate parameter ("eta" in backpropagation) and a decay rate for eta. Each area has eta and eta-decay parameters
for threshold weights.
These network characteristics are encoded in the genetic blueprint. This bitstring
is composed of several segments, one for each area. An area segment consists of
an area parameter specification (APS) and a variable number of projection
specification fields (PSFs), each of which describes a projection from the area to
some other area. Both the APS and the PSF contain values for several parameters of areas and projections respectively. Fig. 2 shows a simple area segment.
Note that the target of a projection can be specified through either absolute or
relative addressing. More than one projection is possible between two given
areas; this allows the generation of receptive field structures at different scales
and with different connection densities, and it also allows the system to model the
effect of larger initial weights. In our current implementation, all initial weights
are randomly generated small values from a fixed uniform distribution. In the
near future, we intend to incorporate some aspects of the distribution in the
genetic blueprint.
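One way to picture an area segment in code is a structured record; the field names below paraphrase the parameters listed above and are our own, as is the use of dataclasses in place of the actual bitstring encoding:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Projection:            # one projection specification field (PSF)
    connection_density: float
    eta: float               # learning rate parameter
    eta_decay: float
    radius_x: int            # radii of connectivity, one per dimension
    radius_y: int
    target: int              # target area address
    relative_addressing: bool

@dataclass
class Area:                  # area parameter specification (APS) plus its PSFs
    size_x: int
    size_y: int
    threshold_eta: float
    threshold_eta_decay: float
    projections: List[Projection] = field(default_factory=list)
```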
[Figure 2 diagrams an area segment: the area parameters (X-share, Y-share, initial threshold eta, threshold eta decay) followed by a projection specification (start-of-projection marker, connection density, initial eta, eta decay, X-radius, Y-radius, target address, address mode).]
Figure 2: Network blueprint representation.
In NeuroGENESYS, the score of a blueprint is computed as a linear weighted
sum of several performance and cost criteria, including learning speed, the results
of testing on a "test set", the numbers of units and weights in the network, the
results of testing (on the training set) after disabling some of the units, the
results of testing (on the training set) after perturbing the learned weight values,
the average fanout of the network, and the maximum fanout for any unit in the
network. Other criteria can be incorporated as needed. The user of NeuroGENESYS supplies the weighting factors at the start of the experiment, thereby
controlling which aspects of the network are to be optimized.
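The score itself can be read as a plain weighted sum; the metric names below are illustrative, not NeuroGENESYS's actual identifiers:

```python
def blueprint_score(metrics, weights):
    # metrics/weights: dicts keyed by criterion, e.g. "learning_speed",
    # "test_error", "n_units", "n_weights", "avg_fanout", "max_fanout".
    # The user-supplied weights steer which network properties are optimized.
    return sum(weights[name] * metrics[name] for name in weights)
```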
4 EXPERIMENTS
NeuroGENESYS can be used for both classification and function approximation
problems. We have conducted experiments on three classification problems (digit
recognition from 4 x 8 pixel images, exclusive-OR (XOR), and simple convexity
detection) and one function approximation problem (modeling one cycle of a sine
function). Various combinations of the above criteria have been used. In most
experiments NeuroGENESYS has produced appropriate network designs in a
relatively small number of generations (< 50).
Our first experiment was with digit recognition, and NeuroGENESYS produced a
solution that surprised us: the optimized networks had no hidden layers yet
learned perfectly. It had not been obvious to us that this digit recognition problem is linearly separable. Even in the simple case of no-hidden-layer networks,
our earlier remarks on application-specific design can be appreciated. When NeuroGENESYS was asked to optimize for average fanout for the digit recognition
task as well as for perfect learning, the best network produced learned perfectly
(although comparatively slowly) and had an average fanout of three connections
per unit; with learning speed as the sole optimization criterion, the best network
produced learned substantially faster (48 iterations) but had an average fanout
almost an order of magnitude higher.
The XOR problem, of course, is prototypically non-linearly-separable. In this
case, NeuroGENESYS produced many fast-learning networks that had a
"bypass" connection from the input layer directly to the output layer (in addition
to connections to and from hidden layers); it is an as yet unverified hypothesis
that these bypass connections accelerate learning.
In one of our experiments on the sine function problem, NeuroGENESYS was
asked to design networks for moderate accuracy-the error cutoff during training
was relatively high. The networks produced typically had one hidden layer of
two units, which is the minimum possible configuration for a sufficiently crude
approximation. When the experiment was repeated with a low error cutoil', intricate multilayer structures were produced that were capable of modeling the training data very accurately (Fig. 3). Fig. 4 shows the learning curve for one sine
function experiment. The" Average" and "Best" scores are over all individuals in
the generation, while "Online" and "amine" are running averages of Average
and Best, respectively. Performance on this problem is quite sensitive to initial
weight values, hence the non-monotonicity oC the Best curve. Steady progress
overall was still being observed when the experiment was terminated.
We have conducted control studies using random search (with best retention)
instead of the genetic algorithm. The genetic algorithm has consistently proved
superior. Random search is the weakest possible optimization procedure, but on
the other hand there are few sophisticated alternatives for this problem: the
search space is discontinuous, largely unknown, and highly nonlinear.
5 COMPUTATIONAL EFFICIENCY
Our approach requires the evaluation of a large number of networks. Even on
some of our small-scale problems, experiments have taken a week or longer, the
bottleneck being the neural network training algorithm. While computational
feasibility is a real concern, for several reasons we are optimistic that this
approach will be practical for realistic applications:
- The hardware platform for our experiments to date has been a Symbolics
computer without any floating-point support. This choice has been ideal
[Figure 3 reproduces a NeuroGENESYS screen: run statistics (generation number, crossover and mutation settings), a table of blueprint scores, a rendering of the best network's areas and projections (PROJ-1 through PROJ-9 between the input and output areas), and command buttons such as Abort, Chart, Run, Save, Status and Continue.]
Figure 3: The NeuroGENESYS interface, showing a network structure optimized for the sine function problem.
for program development, and NeuroGENESYS' user interface features
would not have been possible without it, but the performance penalty has
been severe (relative to machines with floating-point hardware).
- The genetic algorithm is an inherently parallel optimization procedure, a
feature we soon hope to take advantage of. We have recently implemented
a networked version of NeuroGENESYS that will allow us to retain the
desirable aspects of the Symbolics version and yet achieve substantial
speedup in execution (we expect two to three orders of magnitude): up to
30 Apollo workstations, a VAX, and 10 Symbolics computers can now be
evaluating different networks in parallel (Harp, Samad and Guha, 1990).
- The current version of NeuroGENESYS employs the backpropagation
learning rule, which is notoriously slow for many applications. However,
faster-learning extensions of backpropagation are continually being
developed. We have incorporated one recent extension (Samad, 1988), but
others, especially common ones such as including a "momentum" term in
the weight update rule (Rumelhart, Hinton and Williams, 1985), could also
be considered. More generally, learning in neural networks is a topic of
intensive research and it is likely that more efficient learning algorithms
will become popular in the near future.
[Figure 4 plots score against generation (0 to 30) for a sine function run, with four curves: best, average, offline and online.]
Figure 4: A learning curve for the sine function problem.
- The genetic algorithm is an active field of research itself. Improvements,
many of which are concerned with convergence properties, are frequently
being reported and could reduce the computational requirements for its
application significantly.
- The genetic algorithm is an iterative optimization procedure that, on the
average, produces better solutions with each passing generation. Unlike
some other optimization techniques, useful results can be obtained during a
run. The genetic algorithm can thus take advantage of whatever time and
computational resources are available for an application.
- Just as there is no strict termination requirement for the genetic algorithm,
there is no constraint on its initialization. In our experiments, the zeroth
generation consisted of randomly generated networks. Not surprisingly,
almost all of these are poor performers. However, better ways of
selecting the initial population are possible. In particular, the initial population can consist of manually optimized networks. Manual optimization of
neural networks is currently the norm, but it leaves much of the design
space unexplored. Our approach would allow a human application
developer to design one or more networks that could be the starting point
for further, more systematic optimization by the genetic algorithm. Other
initialization approaches are also possible, such as using optimized networks
from similar applications, or using heuristic guidelines to generate networks.
It should be emphasized that computational efficiency is not the only factor that
must be considered in evaluating this (or any) approach. Others, such as the
potential for improved performance of neural network applications and the costs
and benefits associated with alternative approaches for designing network applications, are also critically important.
6 FUTURE RESEARCH
In addition to running further experiments, we hope in the future to develop versions of NeuroGENESYS for other network models, including hybrid models that
incorporate supervised and unsupervised learning components.
Space restrictions have precluded a detailed description of NeuroGENESYS and
our experiments. The interested reader is referred to (Harp, Samad, and Guha,
1989ab, 1990).
References
Davis, L. (1988). Properties of a hybrid neural network-classifier system. In
Advances in Neural Information Processing Systems 1, D.S. Touretzky (Ed.).
San Mateo: Morgan Kaufmann.
Goldberg, D.E. (1989). Genetic Algorithms in Search, Optimization and Machine
Learning. Addison-Wesley.
Harp, S.A., T. Samad, and A. Guha (1989a). Towards the genetic synthesis of
neural networks. Proceedings of the Third International Conference on Genetic
Algorithms, J.D. Schaffer (Ed.). San Mateo: Morgan Kaufmann.
Harp, S.A., T. Samad, and A. Guha (1989b). Genetic Synthesis of Neural Networks. Technical Report 14852-CC-1989-2. Honeywell SSDC, 1000 Boone Avenue North, Golden Valley, MN 55427.
Harp, S.A., T. Samad, and A. Guha (1990). Genetic synthesis of neural network
architecture. In The Genetic Algorithms Handbook, L.D. Davis (Ed.). New
York: Van Nostrand Reinhold. (To appear.)
Holland, J. (1975). Adaptation in Natural and Artificial Systems. Ann Arbor:
University of Michigan Press.
Rumelhart, D.E., G.E. Hinton, and R.J. Williams (1985). Learning Internal
Representations by Error Propagation. ICS Report 8506, Institute for Cognitive
Science, UCSD, La Jolla, CA.
Samad, T. (1988). Back-propagation is significantly faster if the expected value
of the source unit is used for update. Neural Networks, 1, Sup. 1.
Werbos, P. (1974). Beyond Regression: New Tools for Prediction and Analysis
in the Behavioral Sciences. Ph.D. Thesis, Harvard University Committee on
Applied Mathematics, Cambridge, MA.
Whitley, D. (1988). Applying Genetic Algorithms to Neural Net Learning.
Technical Report CS-88-128, Department of Computer Science, Colorado State
University.
1,795 | 2,630 | Fast Rates to Bayes for Kernel Methods
Ingo Steinwart* and Clint Scovel
Modeling, Algorithms and Informatics Group, CCS-3
Los Alamos National Laboratory
{ingo,jcs}@lanl.gov
Abstract
We establish learning rates to the Bayes risk for support vector machines
(SVMs) with hinge loss. In particular, for SVMs with Gaussian RBF
kernels we propose a geometric condition for distributions which can be
used to determine approximation properties of these kernels. Finally, we
compare our methods with a recent paper of G. Blanchard et al.
1
Introduction
In recent years support vector machines (SVMs) have been the subject of many theoretical
considerations. In particular, it was recently shown ([1], [2], and [3]) that SVMs can learn
for all data-generating distributions. However, these results are purely asymptotic, i.e. no
performance guarantees can be given in terms of the number $n$ of samples. In this paper
we will establish such guarantees. Since by the no-free-lunch theorem of Devroye (see [4])
performance guarantees are impossible without assumptions on the data-generating distribution, we will restrict our considerations to specific classes of distributions. In particular,
we will present a geometric condition which describes how distributions behave close to
the decision boundary. This condition is then used to establish learning rates for SVMs.
To obtain learning rates faster than $n^{-1/2}$ we also employ a noise condition of Tsybakov
(see [5]). Combining both concepts we are in particular able to describe distributions such
that SVMs with Gaussian kernel learn almost linearly, i.e. with rate $n^{-1+\epsilon}$ for all $\epsilon > 0$,
even though the Bayes classifier cannot be represented by the SVM.
Let us now formally introduce the statistical classification problem. To this end assume that
$X$ is a set. We write $Y := \{-1, 1\}$. Given a training set $T = ((x_1, y_1), \ldots, (x_n, y_n)) \in (X \times Y)^n$, the classification task is to predict the label $y$ of a new sample $(x, y)$. In the
standard batch model it is assumed that $T$ is i.i.d. according to an unknown probability
measure $P$ on $X \times Y$. Furthermore, the new sample $(x, y)$ is drawn from $P$ independently
of $T$. Given a classifier $\mathcal{C}$ that assigns to every training set $T$ a measurable function $f_T : X \to \mathbb{R}$, the prediction of $\mathcal{C}$ for $y$ is $\operatorname{sign} f_T(x)$, where we choose a fixed definition of
$\operatorname{sign}(0) \in \{-1, 1\}$. In order to "learn" from the samples of $T$, the decision function $f_T$
should guarantee a small probability for the misclassification of the example $(x, y)$. To
make this precise, the risk of a measurable function $f : X \to \mathbb{R}$ is defined by
$$\mathcal{R}_P(f) := P\bigl(\{(x, y) : \operatorname{sign} f(x) \ne y\}\bigr).$$
The smallest achievable risk $\mathcal{R}_P := \inf\{\mathcal{R}_P(f) \,|\, f : X \to \mathbb{R} \text{ measurable}\}$ is called the
Bayes risk of $P$. A function $f_P : X \to Y$ attaining this risk is called a Bayes decision function. Obviously, a good classifier should produce decision functions whose risks are close
to the Bayes risk. This leads to the definition: a classifier is called universally consistent if
$$\mathbb{E}_{T \sim P^n}\, \mathcal{R}_P(f_T) - \mathcal{R}_P \to 0 \qquad (1)$$
holds for all probability measures $P$ on $X \times Y$. The next naturally arising question is
whether there are classifiers which guarantee a specific rate of convergence in (1) for all
distributions. Unfortunately, this is impossible by the so-called no-free-lunch theorem of
Devroye (see [4, Thm. 7.2]). However, if one restricts considerations to certain smaller
classes of distributions such rates exist for various classifiers, e.g.:
- Assuming that the conditional probability $\eta(x) := P(y = 1 \,|\, x)$ satisfies certain
smoothness assumptions, Yang showed in [6] that some plug-in rules (cf. [4])
achieve rates for (1) which are of the form $n^{-\alpha}$ for some $0 < \alpha < 1/2$ depending on the assumed smoothness. He also showed that these rates are optimal in
the sense that no classifier can obtain faster rates under the proposed smoothness
assumptions.
- It is well known (see [4, Sec. 18.1]) that using structural risk minimization over a
sequence of hypothesis classes with finite VC-dimension, every distribution which
has a Bayes decision function in one of the hypothesis classes can be learned with
rate $n^{-1/2}$.
- Let $P$ be a noise-free distribution, i.e. $\mathcal{R}_P = 0$, and let $F$ be a class with finite VC-dimension. If $F$ contains a Bayes decision function, then up to a logarithmic factor
the convergence rate of the ERM classifier over $F$ is $n^{-1}$ (see [4, Sec. 12.7]).
Restricting the class of distributions for classification always raises the question of whether
it is likely that these restrictions are met in real world problems. Of course, in general
this question cannot be answered. However, experience shows that the assumption that the
distribution is noise-free is almost never satisfied. Furthermore, it is rather unrealistic to
assume that a Bayes decision function can be represented by the algorithm. Finally, assuming that the conditional probability is smooth, say k-times continuously differentiable,
seems to be unjustifiable for many real world classification problems. We conclude that the
above listed rates are established for situations which are rarely met in practice.
Considering the ERM classifier and hypothesis classes F containing a Bayes decision
function, there is a large gap in the rates for noise-free and noisy distributions. In [5] Tsybakov
proposed a condition on the noise which describes intermediate situations. In order to
present this condition we write η(x) := P(y = 1|x), x ∈ X, for the conditional probability
and P_X for the marginal distribution of P on X. Now, the noise in the labels can
be described by the function |2η − 1|. Indeed, in regions where this function is close to
1 there is only a small amount of noise, whereas function values close to 0 only occur in
regions with a high noise. We will use the following modified version of Tsybakov's noise
condition which describes the size of the latter regions:

Definition 1.1 Let 0 ≤ q ≤ ∞ and P be a distribution on X × Y. We say that P has
Tsybakov noise exponent q if there exists a constant C > 0 such that for all sufficiently
small t > 0 we have

$$P_X\bigl(|2\eta - 1| \le t\bigr) \;\le\; C \cdot t^{q}. \qquad (2)$$
All distributions have at least noise exponent 0. In the other extreme case q = ∞ the
conditional probability η is bounded away from 1/2. In particular this means that noise-free
distributions have exponent q = ∞. Finally note that Tsybakov's original noise condition
assumed $P_X(f \neq f_P) \le c\,(R_P(f) - R_P)^{\frac{q}{1+q}}$ for all f : X → Y, which is satisfied if
e.g. (2) holds (see [5, Prop. 1]).
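To see condition (2) in action, here is a small numerical check; the toy distribution (uniform P_X on [0, 1] with η(x) = (1 + x)/2) is our own construction, not one from the paper.

```python
# Sketch: empirical check of condition (2) on a toy distribution (ours, not
# the paper's): X = [0, 1], P_X uniform, eta(x) = (1 + x)/2. Then
# |2*eta(x) - 1| = x and P_X(|2*eta - 1| <= t) = t, i.e. noise exponent q = 1.
import numpy as np

x = np.random.default_rng(0).uniform(0.0, 1.0, size=200_000)
margin = np.abs(2 * (1 + x) / 2 - 1)            # equals x here

for t in [0.4, 0.2, 0.1, 0.05]:
    mass = np.mean(margin <= t)                 # estimates P_X(|2*eta - 1| <= t)
    print(f"t={t:5.2f}  mass={mass:.4f}  t^1={t:.4f}")
```

The printed mass tracks t^q with q = 1, so (2) holds with exponent 1 for this construction.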
In [5] Tsybakov showed that if P has a noise exponent q then ERM-type classifiers can
obtain rates in (1) which are of the form $n^{-\frac{q+1}{q+pq+2}}$, where 0 < p < 1 measures the
complexity of the hypothesis class. In particular, rates faster than n^{−1/2} are possible whenever
q > 0 and p < 1. Unfortunately, the ERM classifier he considered is usually hard to implement
and in general there exists no efficient algorithm. Furthermore, his classifier requires
substantial knowledge on how to approximate the Bayes decision rules of the considered
distributions. Of course, such knowledge is rarely present in practice.
2 Results
In this paper we will use the Tsybakov noise exponent to establish rates for SVMs which
are very similar to the above rates of Tsybakov. We begin by recalling the definition of
SVMs. To this end let H be a reproducing kernel Hilbert space (RKHS) of a kernel
k : X × X → R, i.e. H is a Hilbert space consisting of functions from X to R such
that the evaluation functionals are continuous, and k is symmetric and positive definite (see
e.g. [7]). Throughout this paper we assume that X is a compact metric space and that k
is continuous, i.e. H contains only continuous functions. In order to avoid cumbersome
notations we additionally assume ‖k‖_∞ ≤ 1. Now given a regularization parameter λ > 0
the decision function of an SVM is

$$(f_{T,\lambda},\, b_{T,\lambda}) := \arg\min_{f \in H,\; b \in \mathbb{R}} \;\lambda \|f\|_H^2 \;+\; \frac{1}{n}\sum_{i=1}^{n} l\bigl(y_i (f(x_i) + b)\bigr), \qquad (3)$$

where l(t) := max{0, 1 − t} is the so-called hinge loss. Unfortunately, only a few results
on learning rates for SVMs are known: In [8] it was shown that SVMs can learn with
linear rate if the distribution is noise-free and the two classes can be strictly separated by
the RKHS. For RKHSs which are dense in the space C(X) of continuous functions the
latter condition is satisfied if the two classes have strictly positive distance in the input
space. Of course, these assumptions are far too strong for almost all real-world problems.
Furthermore, Wu and Zhou (see [9]) recently established rates under the assumption that η
is contained in a Sobolev space. In particular, they proved rates of the form (log n)^{−p} for
some p > 0 if the SVM uses a Gaussian kernel. Obviously, these rates are much too slow
to be of practical interest, and the difficulties with smoothness assumptions have already
been discussed above.
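For readers who want to experiment with the estimator (3), it can be fit with an off-the-shelf soft-margin solver: rescaling (3) by 1/(2λ) shows that it coincides with the usual C-parameterized SVM under C = 1/(2λn). The snippet below is a sketch on synthetic data, not the setup of the experiments cited above.

```python
# Sketch: fitting the SVM of eq. (3) with an off-the-shelf solver. Rescaling
# (3) by 1/(2*lam) gives the usual soft-margin objective with C = 1/(2*lam*n).
# Note sklearn's gamma equals sigma^2 for k_sigma(x, x') = exp(-sigma^2 ||x - x'||^2).
# The data are synthetic; this illustrates the objective, not the paper's experiments.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 400
X = rng.normal(size=(n, 2))
y = np.where(X[:, 0] + 0.3 * rng.normal(size=n) > 0, 1, -1)  # noisy labels

lam = n ** (-0.5)                         # an illustrative schedule lambda_n
sigma = 1.0                               # inverse width parameter
clf = SVC(C=1.0 / (2 * lam * n), kernel="rbf", gamma=sigma ** 2)
clf.fit(X, y)
print("training error:", np.mean(clf.predict(X) != y))
```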
For our first result, which is much stronger than the above mentioned results, we need to
introduce two concepts, both of which deal with the involved RKHS. The first concept
describes how well a given RKHS H can approximate a distribution P. In order to introduce
it we define the l-risk of a function f : X → R by $R_{l,P}(f) := \mathbb{E}_{(x,y)\sim P}\, l(y f(x))$. The
smallest possible l-risk is denoted by $R_{l,P} := \inf\{R_{l,P}(f) \mid f : X \to \mathbb{R}\}$. Furthermore,
we define the approximation error function by

$$a(\lambda) := \inf_{f \in H}\; \lambda \|f\|_H^2 + R_{l,P}(f) - R_{l,P}, \qquad \lambda \ge 0. \qquad (4)$$

The function a(·) quantifies how well an infinite sample SVM with RKHS H approximates
the minimal l-risk (note that we omit the offset b in the above definition for simplicity). If
H is dense in the space of continuous functions C(X) then for all P we have a(λ) → 0 as
λ → 0 (see [3]). However, in non-trivial situations no rate of convergence which uniformly
holds for all distributions P is possible. The following definition characterizes distributions
which guarantee certain polynomial rates:
Definition 2.1 Let H be a RKHS over X and P be a distribution on X × Y. Then H
approximates P with exponent β ∈ (0, 1] if there is a C > 0 such that for all λ > 0:

$$a(\lambda) \le C \lambda^{\beta}.$$

It can be shown (see [10]) that the extremal case β = 1 is equivalent to the fact that the
minimal l-risk can be achieved by an element of H. Because of the specific structure of the
approximation error function, values β > 1 are only possible for distributions with η ≡ 1/2.
Finally, we need a complexity measure for RKHSs. To this end let A ⊂ E be a subset of a
Banach space E. Then the covering numbers of A are defined by

$$N(A, \varepsilon, E) := \min\Bigl\{\, n \ge 1 : \exists\, x_1, \dots, x_n \in E \text{ with } A \subset \bigcup_{i=1}^{n} (x_i + \varepsilon B_E) \,\Bigr\}, \qquad \varepsilon > 0,$$

where B_E denotes the closed unit ball of E. Now our complexity measure is:

Definition 2.2 Let H be a RKHS over X and B_H its closed unit ball. Then H has
complexity exponent 0 < p ≤ 2 if there is an a_p > 0 such that for all ε > 0 we have

$$\log N\bigl(B_H, \varepsilon, C(X)\bigr) \;\le\; a_p\, \varepsilon^{-p}.$$
Note that in [10] the complexity exponent was defined in terms of N(B_H, ε, L_2(T_X)),
where L_2(T_X) is the L_2-space with respect to the empirical measure of (x_1, . . . , x_n). Since
we always have N(B_H, ε, L_2(T_X)) ≤ N(B_H, ε, C(X)), Definition 2.2 is stronger than
the one in [10]. Here, we only used Definition 2.2 since it enables us to compare our results
with [11]. However, all results remain true if one uses the original definition of [10].
For many RKHSs bounds on the complexity exponents are known (see e.g. [3] and [10]).
Furthermore, many SVMs use a parameterized family of RKHSs. For such SVMs the
constant a_p may play a crucial role. We will see below that this is in particular true for
SVMs using a family of Gaussian RBF kernels. Let us now formulate our first result on
rates:
Theorem 2.3 Let H be a RKHS of a continuous kernel on X with complexity exponent
0 < p < 2, and let P be a probability measure on X × Y with Tsybakov noise exponent
0 < q ≤ ∞. Furthermore, assume that H approximates P with exponent 0 < β ≤ 1. We
define $\lambda_n := n^{-\frac{4(q+1)}{(2q+pq+4)(1+\beta)}}$. Then for all ε > 0 there is a constant C > 0 such that for
all x ≥ 1 and all n ≥ 1 we have

$$\Pr^*\Bigl(\, T \in (X \times Y)^n : R_P(f_{T,\lambda_n} + b_{T,\lambda_n}) \le R_P + C x^2\, n^{-\frac{4\beta(q+1)}{(2q+pq+4)(1+\beta)} + \varepsilon} \,\Bigr) \;\ge\; 1 - e^{-x}.$$

Here Pr* denotes the outer probability of P^n in order to avoid measurability considerations.

Remark 2.4 With a tail bound of the form of Theorem 2.3 one can easily get rates for (1).
In the case of Theorem 2.3 these rates have the form $n^{-\frac{4\beta(q+1)}{(2q+pq+4)(1+\beta)} + \varepsilon}$ for all ε > 0.
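As a quick sanity check on the exponent in Theorem 2.3 (our own arithmetic, with illustrative parameter values):

```python
# Sketch: evaluate the learning-rate exponent of Theorem 2.3,
# 4*beta*(q+1) / ((2*q + p*q + 4) * (1 + beta)), for a few parameter settings.
def rate_exponent(q, p, beta):
    return 4 * beta * (q + 1) / ((2 * q + p * q + 4) * (1 + beta))

for q, p, beta in [(1, 1, 1), (4, 0.5, 1), (1e9, 1, 1)]:
    print(q, p, beta, "->", rate_exponent(q, p, beta))
# With q -> infinity, p = 1, beta = 1 the exponent approaches 2/3, i.e. the rate
# n^(-2/3), which is faster than the classical n^(-1/2).
```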
Remark 2.5 For brevity's sake our major aim was to show the best possible rates using
our techniques. Therefore, Theorem 2.3 states rates for the SVM under the assumption that
(λ_n) is optimally chosen. However, we emphasize that the techniques of [10] also give
rates if (λ_n) is chosen in a different (and thus sub-optimal) way. This is also true for our
results on SVMs using Gaussian kernels which we will establish below.
Remark 2.6 In [5] it is assumed that a Bayes classifier is contained in the function class
the algorithm minimizes over. This assumption corresponds to a perfect approximation of
P by H, i.e. β = 1. In this case our rate is (essentially) of the form $n^{-\frac{2(q+1)}{2q+pq+4}}$. If we
rescale the complexity exponent p from (0, 2) to (0, 1) and write p' for the new complexity
exponent, this rate becomes $n^{-\frac{q+1}{q+p'q+2}}$. This is exactly the form of Tsybakov's result in [5].
However, as far as we know our complexity measure cannot be compared to Tsybakov's.
Remark 2.7 By the nature of Theorem 2.3 it suffices that P satisfies Tsybakov's noise
assumption for every q' < q. It also suffices to suppose that H approximates P with
exponent β' for all β' < β, and that H has complexity exponent p' for all p' > p. Now,
it is shown in [10] that the RKHS H has an approximation exponent β = 1 if and only if
H contains a minimizer of the l-risk. In particular, if H has approximation exponent β for
all β < 1 but not for β = 1 then H does not contain such a minimizer but Theorem 2.3
gives the same result as for β = 1. If in addition the RKHS consists of C^∞ functions we
can choose p arbitrarily close to 0, and hence we can obtain rates up to n^{−1} even though H
does not contain a minimizer of the l-risk, that means e.g. a Bayes decision function.
In view of Theorem 2.3 and the remarks concerning covering numbers it is often only
necessary to estimate the approximation exponent. In particular this seems to be true for
the most popular kernel, that is the Gaussian RBF kernel k_σ(x, x') = exp(−σ² ‖x − x'‖²₂),
x, x' ∈ X, on (compact) subsets X of R^d with width 1/σ. However, to our best knowledge
no non-trivial condition on η or f_P = sign(2η − 1) which ensures an approximation
exponent β > 0 for fixed width has been established, and [12] shows that Gaussian kernels
poorly approximate smooth functions. Hence plug-in rules based on Gaussian kernels may
perform poorly under smoothness assumptions on η. In particular, many types of SVMs
using other loss functions are plug-in rules and therefore their approximation properties
under smoothness assumptions on η may be poor if a Gaussian kernel is used. However, our
SVMs are not plug-in rules since their decision functions approximate the Bayes decision
function (see [13]). Intuitively, we therefore only need a condition that measures the cost of
approximating the "bump" of the Bayes decision function at the "decision boundary". We
will now establish such a condition for Gaussian RBF kernels with varying widths 1/σ_n.
To this end let X_{−1} := {x ∈ X : η < 1/2} and X_1 := {x ∈ X : η > 1/2}. Recall that
these two sets are the classes which have to be learned. Since we are only interested in
distributions P having a Tsybakov exponent q > 0 we always assume that X = X_{−1} ∪ X_1
holds P_X-almost surely. Now we define

$$\tau_x := \begin{cases} d(x, X_1) & \text{if } x \in X_{-1}, \\ d(x, X_{-1}) & \text{if } x \in X_1, \\ 0 & \text{otherwise}. \end{cases} \qquad (5)$$

Here, d(x, A) denotes the distance of x to a set A with respect to the Euclidean norm. Note
that, roughly speaking, τ_x measures the distance of x to the "decision boundary". With the
help of this function we can define the following geometric condition for distributions:

Definition 2.8 Let X ⊂ R^d be compact and P be a probability measure on X × Y. We
say that P has geometric noise exponent α ∈ (0, ∞] if we have

$$\int_X \tau_x^{-\alpha d}\; |2\eta(x) - 1| \; P_X(dx) \;<\; \infty. \qquad (6)$$

Furthermore, P has geometric noise exponent ∞ if (6) holds for all α > 0.
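A toy computation may help build intuition for (6). In the one-dimensional construction below (ours, not the paper's: d = 1, P_X uniform on [−1, 1], τ_x = |x| and |2η(x) − 1| = |x|^γ), the integral is finite exactly when α < (γ + 1)/d; since this η has Tsybakov exponent q = 1/γ, that threshold agrees with the Hölder-type bound α < γ(q + 1)/d derived in Example 2.9 below.

```python
# Sketch: Monte Carlo look at the integral in (6) for a toy 1-D distribution
# (our construction): X = [-1, 1], P_X uniform, classes X_{-1} = [-1, 0) and
# X_1 = (0, 1], so tau_x = |x|, and |2*eta(x) - 1| = |x|**gamma. The integrand
# |x|**(gamma - alpha*d) is integrable near 0 iff alpha < (gamma + 1)/d.
import numpy as np

def geometric_noise_integral(alpha, gamma, d=1, n=1_000_000, seed=0):
    x = np.random.default_rng(seed).uniform(-1, 1, size=n)
    tau = np.abs(x)
    return np.mean(tau ** (gamma - alpha * d))  # E[tau^(-alpha d) |2 eta - 1|]

gamma = 1.0
for alpha in [0.5, 1.5, 2.5]:
    print(alpha, geometric_noise_integral(alpha, gamma))
# Estimates stay moderate for alpha < 2 = (gamma + 1)/d; for alpha = 2.5 the
# true integral diverges and the estimate becomes large and unstable.
```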
In the above definition we make neither any kind of smoothness assumption nor do we
assume a condition on P_X in terms of absolute continuity with respect to the Lebesgue
measure. Instead, the integral condition (6) describes the concentration of the measure
|2η − 1| dP_X near the decision boundary. The less the measure is concentrated in this region
the larger the geometric noise exponent can be chosen. In particular, we have x ↦ τ_x^{−1} ∈
L_∞(|2η − 1| dP_X) if and only if the two classes X_{−1} and X_1 have strictly positive distance!
If (6) holds for some 0 < α < ∞ then the two classes may "touch", i.e. the decision
boundary ∂X_{−1} ∩ ∂X_1 is nonempty. Using this interpretation we easily can construct
distributions which have geometric noise exponent ∞ and touching classes. In general for
these distributions there is no Bayes classifier in the RKHS H_σ of k_σ for any σ > 0.
Example 2.9 We say that η is Hölder about 1/2 with exponent γ > 0 on X ⊂ R^d if there is
a constant c_γ > 0 such that for all x ∈ X we have

$$|2\eta(x) - 1| \;\le\; c_\gamma\, \tau_x^{\gamma}. \qquad (7)$$

If η is Hölder about 1/2 with exponent γ > 0, the graph of 2η(x) − 1 lies in a multiple
of the envelope defined by τ_x^γ at the top and by −τ_x^γ at the bottom. To be Hölder about
1/2 it is sufficient that η is Hölder continuous, but it is not necessary. A function which is
Hölder about 1/2 can be very irregular away from the decision boundary but it cannot jump
across the decision boundary discontinuously. In addition a Hölder continuous function's
exponent must satisfy 0 < γ ≤ 1 where being Hölder about 1/2 only requires γ > 0.

For distributions with Tsybakov exponent such that η is Hölder about 1/2 we can bound
the geometric noise exponent. Indeed, let P be a distribution which has Tsybakov noise
exponent q ≥ 0 and a conditional probability η which is Hölder about 1/2 with exponent
γ > 0. Then (see [10]) P has geometric noise exponent α for all α < γ(q+1)/d.
For distributions having a non-trivial geometric noise exponent we can now bound the
approximation error function for Gaussian RBF kernels:

Theorem 2.10 Let X be the closed unit ball of the Euclidean space R^d, and H_σ be the
RKHS of the Gaussian RBF kernel k_σ on X with width 1/σ > 0. Furthermore, let P
be a distribution with geometric noise exponent 0 < α < ∞. We write a_σ(·) for the
approximation error function with respect to H_σ. Then there is a C > 0 such that for all
λ > 0, σ > 0 we have

$$a_\sigma(\lambda) \;\le\; C\bigl(\sigma^{d}\,\lambda + \sigma^{-\alpha d}\bigr). \qquad (8)$$
In order to let the right hand side of (8) converge to zero it is necessary to assume both
λ → 0 and σ → ∞. An easy consideration shows that the fastest rate of convergence can
be achieved if $\sigma(\lambda) := \lambda^{-\frac{1}{(\alpha+1)d}}$. In this case we have $a_{\sigma(\lambda)}(\lambda) \le 2C\,\lambda^{\frac{\alpha}{\alpha+1}}$. Roughly
speaking this states that the family of spaces H_{σ(λ)} approximates P with exponent α/(α+1).
Note that we can obtain approximation rates up to linear order in λ for sufficiently benign
distributions. The price for this good approximation property is, however, an increasing
complexity of the hypothesis class H_{σ(λ)} for σ → ∞, i.e. λ → 0. The following theorem
estimates this in terms of the complexity exponent:
Theorem 2.11 Let H_σ be the RKHS of the Gaussian RBF kernel k_σ on X. Then for all
0 < p ≤ 2 and δ > 0, there is a c_{p,d,δ} > 0 such that for all ε > 0 and all σ ≥ 1 we have

$$\sup_{T \in Z^n} \log N\bigl(B_{H_\sigma}, \varepsilon, L_2(T_X)\bigr) \;\le\; c_{p,d,\delta}\; \sigma^{(1-\frac{p}{2})(1+\delta)d}\; \varepsilon^{-p}.$$
Having established both results for the approximation and complexity exponent we can
now formulate our main result for SVMs using Gaussian RBF kernels:

Theorem 2.12 Let X be the closed unit ball of the Euclidean space R^d, and P be a
distribution on X × Y with Tsybakov noise exponent 0 < q ≤ ∞ and geometric noise exponent
0 < α < ∞. We define

$$\lambda_n := \begin{cases} n^{-\frac{\alpha+1}{2\alpha+1}} & \text{if } \alpha \ge \frac{q+2}{2q}, \\[4pt] n^{-\frac{2(\alpha+1)(q+1)}{2\alpha(q+2)+3q+4}} & \text{otherwise}, \end{cases}$$

and $\sigma_n := \lambda_n^{-\frac{1}{(\alpha+1)d}}$ in both cases. Then for all ε > 0 there is a C > 0 such that for all
x ≥ 1 and all n ≥ 1 the SVM using λ_n and Gaussian RBF kernel with width 1/σ_n satisfies

$$\Pr^*\Bigl(\, T \in (X \times Y)^n : R_P(f_{T,\lambda_n} + b_{T,\lambda_n}) \le R_P + C x^2\, n^{-\frac{\alpha}{2\alpha+1} + \varepsilon} \,\Bigr) \;\ge\; 1 - e^{-x}$$

if α ≥ (q+2)/(2q), and

$$\Pr^*\Bigl(\, T \in (X \times Y)^n : R_P(f_{T,\lambda_n} + b_{T,\lambda_n}) \le R_P + C x^2\, n^{-\frac{2\alpha(q+1)}{2\alpha(q+2)+3q+4} + \varepsilon} \,\Bigr) \;\ge\; 1 - e^{-x}$$

otherwise. If α = ∞ the latter holds if σ_n = σ is a constant with σ > 2√d.
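The schedules of Theorem 2.12 are straightforward to evaluate; the following sketch just transcribes the formulas (the parameter values are illustrative, not from the paper):

```python
# Sketch: the schedules of Theorem 2.12, transcribed directly.
def schedules(n, alpha, q, d):
    if alpha >= (q + 2) / (2 * q):
        lam = n ** (-(alpha + 1) / (2 * alpha + 1))
    else:
        lam = n ** (-2 * (alpha + 1) * (q + 1) / (2 * alpha * (q + 2) + 3 * q + 4))
    sigma = lam ** (-1.0 / ((alpha + 1) * d))   # width 1/sigma shrinks as n grows
    return lam, sigma

for n in [10**3, 10**4, 10**5]:
    print(n, schedules(n, alpha=1.0, q=2.0, d=2))
# lambda_n decreases while sigma_n increases with n, trading approximation
# (Theorem 2.10) against complexity (Theorem 2.11).
```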
Most of the remarks made after Theorem 2.3 also apply to the above theorem up to obvious
modifications. In particular this is true for Remark 2.4, Remark 2.5, and Remark 2.7.
3 Discussion of a modified support vector machine

Let us now discuss a recent result (see [11]) on rates for the following modification of the
original SVM:

$$f^*_{T,\lambda} := \arg\min_{f \in H}\; \lambda \|f\|_H \;+\; \frac{1}{n}\sum_{i=1}^{n} l\bigl(y_i f(x_i)\bigr). \qquad (9)$$

Note that unlike in (3) the norm of the regularization term is not squared in (9). To describe
the result of [11] we need the following modification of the approximation error function:

$$a^*(\lambda) := \inf_{f \in H}\; \lambda \|f\|_H + R_{l,P}(f) - R_{l,P}, \qquad \lambda \ge 0. \qquad (10)$$
Obviously, a*(·) plays the same role for (9) as a(·) does for (3). Moreover, it is easy to
see that for all λ > 0 with ‖f_{P,λ}‖ ≤ 1 we have a*(λ) ≤ a(λ). Now, a slightly simplified
version of the result in [11] reads as follows:
Theorem 3.1 Let H be a RKHS of a continuous kernel on X with complexity exponent
0 < p < 2, and let P be a distribution on X × Y with Tsybakov noise exponent ∞. We
define $\lambda_n := n^{-\frac{2}{2+p}}$. Then for all x ≥ 1 there is a C_x > 0 such that for all n ≥ 1 we have

$$\Pr^*\Bigl(\, T \in (X \times Y)^n : R_P(f^*_{T,\lambda_n}) \le R_P + C_x\bigl(a^*(\lambda_n) + n^{-\frac{2}{2+p}}\bigr) \,\Bigr) \;\ge\; 1 - e^{-x}.$$
Besides universal constants the exact value of C_x is given in [11]. Also note that the
original result of [11] used the eigenvalue distribution of the integral operator T_k : L_2(P_X) →
L_2(P_X) as a complexity measure. If H has complexity exponent p it can be shown that
these eigenvalues decay at least as fast as n^{−2/p}. Since we only want to compare Theorem
3.1 with our results we do not state the eigenvalue version of Theorem 3.1.
It was also mentioned in [11] that using the techniques therein it is possible to derive rates
for the original SVM. In this case a*(λ_n) has to be replaced by a(λ_n) and the stochastic
term n^{−2/(2+p)} has to be replaced by "some more involved term" (see [11, p.10]). Since
typically a*(·) decreases faster than a(·) the authors conclude that using a regularization
term ‖·‖ instead of the original ‖·‖² will "necessarily yield an improved convergence rate"
(see [11, p.11]). Let us now show that this conclusion is not justified. To this end let us
suppose that H approximates P with exponent 0 < β ≤ 1, i.e. a(λ) ≤ Cλ^β for some
C > 0 and all λ > 0. It was shown in [10] that this is equivalent to

$$\inf_{\|f\| \le \lambda^{-1/2}}\; R_{l,P}(f) - R_{l,P} \;\le\; c_1\, \lambda^{\frac{\beta}{1-\beta}} \qquad (11)$$

for some constant c_1 > 0 and all λ > 0. Furthermore, using the techniques in [10] it
is straightforward to show that (11) is equivalent to $a^*(\lambda) \le c_2\, \lambda^{\frac{2\beta}{1+\beta}}$. Therefore, if H
approximates P with exponent β then the rate in Theorem 3.1 becomes $n^{-\frac{4\beta}{(2+p)(1+\beta)}}$, which
is the rate we established in Theorem 2.3 for the original SVM. Although the original SVM
(3) and the modification (9) learn with the same rate there is a substantial difference in the
way the regularization parameter has to be chosen in order to achieve this rate. Indeed,
for the original SVM we have to use $\lambda_n = n^{-\frac{4}{(2+p)(1+\beta)}}$ while for (9) we have to choose
$\lambda_n = n^{-\frac{2}{2+p}}$. In other words, since p is known for typical RKHSs but β is not, we know
the asymptotically optimal choice of λ_n for (9) while we do not know the corresponding
optimal choice for the standard SVM. It is natural to ask whether a similar observation
can be made if we have a Tsybakov noise exponent which is smaller than ∞. The answer
to this question is "yes" and "no". More precisely, using our techniques in [10] one can
show that for 0 < q ≤ ∞ the optimal choice of the regularization parameter in (9) is
$\lambda_n = n^{-\frac{2(q+1)}{2q+pq+4}}$, leading to the rate $n^{-\frac{4\beta(q+1)}{(2q+pq+4)(1+\beta)}}$. As for q = ∞ this rate coincides
with the rate we obtained for the standard SVM. Furthermore, the asymptotically optimal
choice of λ_n is again independent of the approximation exponent β. However, it depends on
the (typically unknown) noise exponent q. This leads to the following important questions:
Question 1: Is it easier to find an almost optimal choice of λ for (9) than for the standard
SVM? And if so, what are the computational requirements of solving (9)?
Question 2: Can a similar observation be made for the parametric family of Gaussian RBF
kernels used in Theorem 2.12 if P has a non-trivial geometric noise exponent α?
References
[1] I. Steinwart. Support vector machines are universally consistent. J. Complexity, 18:768-791, 2002.
[2] T. Zhang. Statistical behaviour and consistency of classification methods based on convex risk minimization. Ann. Statist., 32:56-134, 2004.
[3] I. Steinwart. Consistency of support vector machines and other regularized kernel machines. IEEE Trans. Inform. Theory, to appear, 2005.
[4] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer, New York, 1996.
[5] A. B. Tsybakov. Optimal aggregation of classifiers in statistical learning. Ann. Statist., 32:135-166, 2004.
[6] Y. Yang. Minimax nonparametric classification, parts I and II. IEEE Trans. Inform. Theory, 45:2271-2292, 1999.
[7] N. Cristianini and J. Shawe-Taylor. An Introduction to Support Vector Machines. Cambridge University Press, 2000.
[8] I. Steinwart. On the influence of the kernel on the consistency of support vector machines. J. Mach. Learn. Res., 2:67-93, 2001.
[9] Q. Wu and D.-X. Zhou. Analysis of support vector machine classification. Tech. Report, City University of Hong Kong, 2003.
[10] C. Scovel and I. Steinwart. Fast rates for support vector machines. Ann. Statist., submitted, 2003. http://www.c3.lanl.gov/~ingo/publications/ann-03.ps.
[11] G. Blanchard, O. Bousquet, and P. Massart. Statistical performance of support vector machines. Ann. Statist., submitted, 2004.
[12] S. Smale and D.-X. Zhou. Estimating the approximation error in learning theory. Anal. Appl., 1:17-41, 2003.
[13] I. Steinwart. Sparseness of support vector machines. J. Mach. Learn. Res., 4:1071-1105, 2003.
1,796 | 2,631 | Real-Time Pitch Determination of One or More
Voices by Nonnegative Matrix Factorization
Fei Sha and Lawrence K. Saul
Dept. of Computer and Information Science
University of Pennsylvania, Philadelphia, PA 19104
{feisha,lsaul}@cis.upenn.edu
Abstract
An auditory "scene", composed of overlapping acoustic sources, can be
viewed as a complex object whose constituent parts are the individual
sources. Pitch is known to be an important cue for auditory scene analysis. In this paper, with the goal of building agents that operate in human
environments, we describe a real-time system to identify the presence of
one or more voices and compute their pitch. The signal processing in the
front end is based on instantaneous frequency estimation, a method for
tracking the partials of voiced speech, while the pattern-matching in the
back end is based on nonnegative matrix factorization, an unsupervised
algorithm for learning the parts of complex objects. While supporting a
framework to analyze complicated auditory scenes, our system maintains
real-time operability and state-of-the-art performance in clean speech.
1 Introduction
Nonnegative matrix factorization (NMF) is an unsupervised algorithm for learning the parts
of complex objects [11]. The algorithm represents high dimensional inputs ("objects") by
a linear superposition of basis functions ("parts") in which both the linear coefficients and
basis functions are constrained to be nonnegative. Applied to images of faces, NMF learns
basis functions that correspond to eyes, noses, and mouths; applied to handwritten digits,
it learns basis functions that correspond to cursive strokes. The algorithm has also been
implemented in real-time embedded systems as part of a visual front end [10].
Recently, it has been suggested that NMF can play a similarly useful role in speech and
audio processing [16, 17]. An auditory "scene", composed of overlapping acoustic sources,
can be viewed as a complex object whose constituent parts are the individual sources.
Pitch is known to be an extremely important cue for source separation and auditory scene
analysis [4]. It is also an acoustic cue that seems amenable to modeling by NMF. In particular, we can imagine the basis functions in NMF as harmonic stacks of individual periodic
sources (e.g., voices, instruments), which are superposed to give the magnitude spectrum
of a mixed signal. The pattern-matching computations of NMF are reminiscent of longstanding template-based models of pitch perception [6].
Our interest in NMF lies mainly in its use for speech processing. In this paper, we describe
a real-time system to detect the presence of one or more voices and determine their pitch.
Learning plays a crucial role in our system: the basis functions of NMF are trained offline
from data to model the particular timbres of voiced speech, which vary across different
phonetic contexts and speakers. In related work, Smaragdis and Brown used NMF to model
polyphonic piano music [17]. Our work differs in its focus on speech, real-time processing,
and statistical learning of basis functions.
A long-term goal is to develop interactive voice-driven agents that respond to the pitch
contours of human speech [15]. To be truly interactive, these agents must be able to process
input from distant sources and to operate in noisy environments with overlapping speakers.
In this paper, we have taken an important step toward this goal by maintaining real-time
operability and state-of-the-art performance in clean speech while developing a framework
that can analyze more complicated auditory scenes. These are inherently competing goals
in engineering. Our focus on actual system-building also distinguishes our work from many
other studies of overlapping periodic sources [5, 9, 19, 20, 21].
The organization of this paper is as follows. In section 2, we describe the signal processing
in our front end that converts speech signals into a form that can be analyzed by NMF. In
section 3, we describe the use of NMF for pitch tracking?namely, the learning of basis
functions for voiced speech, and the nonnegative deconvolution for real-time analysis. In
section 4, we present experimental results on signals with one or more voices. Finally, in
section 5, we conclude with plans for future work.
2 Signal processing
A periodic signal is characterized by its fundamental frequency, f0. It can be decomposed
by Fourier analysis as the sum of sinusoids, or partials, whose frequencies occur at integer
multiples of f0. For periodic signals with unknown f0, the frequencies of the partials
can be inferred from peaks in the magnitude spectrum, as computed by an FFT.
Voiced speech is perceived as having a pitch at the fundamental frequency of vocal cord vibration. Perfect periodicity is an idealization, however; the waveforms of voiced speech are
non-stationary, quasiperiodic signals. In practice, one cannot reliably extract the partials
of voiced speech by simply computing windowed FFTs and locating peaks in the magnitude spectrum. In this section, we review a more robust method, known as instantaneous
frequency (IF) estimation [1], for extracting the stable sinusoidal components of voiced
speech. This method is the basis for the signal processing in our front-end.
The starting point of IF estimation is to model the voiced speech signal, s(t), by a sum of
amplitude and frequency-modulated sinusoids:
$$s(t) = \sum_i \alpha_i(t)\; \cos\Bigl( \int_0^t d\tau\; \omega_i(\tau) \;+\; \theta_i \Bigr). \qquad (1)$$
The arguments of the cosines in eq. (1) are called the instantaneous phases; their derivatives
with respect to time yield the so-called instantaneous frequencies ω_i(t). If the amplitudes
α_i(t) and frequencies ω_i(t) are stationary, then eq. (1) reduces to a weighted sum
of pure sinusoids. For nonstationary signals, ω_i(t) intuitively represents the instantaneous
frequency of the ith partial at time t.
The short-time Fourier transform (STFT) provides an efficient tool for IF estimation [2].
The STFT of s(t) with windowing function w(t) is given by:
$$F(\omega, t) = \int d\tau\; s(\tau)\, w(\tau - t)\, e^{-j\omega\tau}. \qquad (2)$$
Let z(ω, t) = e^{jωt} F(ω, t) denote the analytic signal of the Fourier component of s(t) with
frequency ω, and let a = Re[z] and b = Im[z] denote its real and imaginary parts.
[Figure 1 appears here: two panels plotting instantaneous frequency (Hz) and pitch (Hz) against time (seconds).]

Figure 1: Top: instantaneous frequencies of estimated partials for the utterance "The north
wind and the sun were disputing." Bottom: f0 contour derived from a laryngograph recording.
We can define a mapping from the time-frequency plane of the STFT to another frequency
axis λ(ω, t) by:

$$\lambda(\omega, t) \;=\; \frac{\partial}{\partial t}\, \arg[z(\omega, t)] \;=\; \frac{a\,\frac{\partial b}{\partial t} \;-\; b\,\frac{\partial a}{\partial t}}{a^2 + b^2}. \qquad (3)$$

The derivatives on the right hand side can be computed efficiently via STFTs [2]. Note
that the right hand side of eq. (3) differentiates the instantaneous phase associated with a
particular Fourier component of s(t). IF estimation identifies the stable fixed points [7, 8]
of this mapping, given by
of this mapping, given by
?(? ? , t) = ? ? and
(??/??)|?=?? < 1,
(4)
as the instantaneous frequencies of the partials that appear in eq. (1). Intuitively, these fixed
points occur where the notions of energy at frequency ? in eqs. (1) and (2) coincide?that
is, where the IF and STFT representations appear most consistent.
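A minimal numerical sketch of eqs. (2)-(4) is given below. The discretization choices (Hann window, FFT size, hop) and the finite-difference phase derivative are our own simplifications of the method in [2], not the authors' implementation.

```python
# Sketch of IF estimation, eqs. (2)-(4): take two STFT frames ~10 ms apart,
# use arg z(w,t) = w*t + arg F(w,t) to approximate lambda(w,t) by a phase
# increment, and keep frequencies where lambda crosses w with slope < 1.
import numpy as np

fs = 4900
t = np.arange(0, 1.0, 1.0 / fs)
s = np.cos(2 * np.pi * 200 * t) + 0.5 * np.cos(2 * np.pi * 400 * t)  # toy voiced frame

nfft, hop = 512, 49                                  # hop of ~10 ms
win = np.hanning(nfft)
F0 = np.fft.rfft(s[:nfft] * win)
F1 = np.fft.rfft(s[hop:hop + nfft] * win)
w = 2 * np.pi * np.arange(len(F0)) * fs / nfft       # bin frequencies (rad/s)
dt = hop / fs

dphi = np.angle(F1 * np.conj(F0))                    # phase increment of F
lam = w + dphi / dt                                  # lambda(w, t), eq. (3)

g = lam - w                                          # fixed points: g = 0, dg/dw < 0
mag = np.abs(F0)
valid = mag[:-1] > 0.01 * mag.max()                  # ignore near-silent bins
idx = np.where((g[:-1] > 0) & (g[1:] <= 0) & valid)[0]
print("estimated partials (Hz):", np.round(lam[idx] / (2 * np.pi), 1))
```

On this two-tone input the printed frequencies cluster near 200 Hz and 400 Hz, the stable fixed points of the mapping.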
The top panel of Fig. 1 shows the IFs of partials extracted by this method for a speech
signal with sliding and overlapping analysis windows. The bottom panel shows the pitch
contour. Note that in regions of voiced speech, indicated by nonzero f0 values, the IFs
exhibit a clear harmonic structure, while in regions of unvoiced speech, they do not.
In summary, the signal processing in our front-end extracts partials with frequencies ω*_i(t)
and nonnegative amplitudes |F(ω*_i(t), t)|, where t indexes the time of the analysis window
and i indexes the number of extracted partials. Further analysis of the signal is performed
by the NMF algorithm described in the next section, which is used to detect the presence
of one or more voices and to estimate their f0 values. Similar front ends have been used in
other studies of pitch tracking and source separation [1, 2, 7, 13].
3 Nonnegative matrix factorization
For mixed signals of overlapping speakers, our front-end outputs the mixture of partials
extracted from several voices. How can we analyze this output by NMF? In this section,
we show: (i) how to learn nonnegative basis functions that model the characteristic timbres
of voiced speech, and (ii) how to decompose mixed signals in terms of these basis functions.
We briefly review NMF [11]. Given observations y_t, the goal of NMF is to compute basis
functions W and linear coefficients x_t such that the reconstructed vectors ŷ_t = W x_t
best match the original observations. The observations, basis functions, and coefficients
are constrained to be nonnegative. Reconstruction errors are measured by the generalized
Kullback-Leibler divergence:

$$G(\mathbf{y}, \hat{\mathbf{y}}) = \sum_{\alpha} \bigl[\, y_\alpha \log(y_\alpha / \hat{y}_\alpha) - y_\alpha + \hat{y}_\alpha \,\bigr], \qquad (5)$$

which is lower bounded by zero and vanishes if and only if y = ŷ. NMF works by
optimizing the total reconstruction error Σ_t G(y_t, ŷ_t) in terms of the basis functions W and
coefficients x_t. We form three matrices by concatenating the column vectors y_t, ŷ_t and x_t
separately and denote them by Y, Ŷ and X respectively. Multiplicative updates for the
optimization problem are given in terms of the elements of these matrices:

$$W_{\alpha\beta} \leftarrow W_{\alpha\beta}\; \frac{\sum_t X_{\beta t}\, \bigl[ Y_{\alpha t} / \hat{Y}_{\alpha t} \bigr]}{\sum_t X_{\beta t}}, \qquad X_{\beta t} \leftarrow X_{\beta t}\; \frac{\sum_\alpha W_{\alpha\beta}\, \bigl[ Y_{\alpha t} / \hat{Y}_{\alpha t} \bigr]}{\sum_\alpha W_{\alpha\beta}}. \qquad (6)$$

These alternating updates are guaranteed to converge to a local minimum of the total
reconstruction error; see [11] for further details.
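For concreteness, the updates in (6) can be transcribed directly into matrix form; the sketch below uses random data as a stand-in for the IF magnitude slices.

```python
# Sketch of the multiplicative updates (6) for the KL objective (5), in matrix
# form. Shapes: Y (dims x frames), W (dims x bases), X (bases x frames).
import numpy as np

def nmf_kl(Y, r, iters=200, eps=1e-12, seed=0):
    rng = np.random.default_rng(seed)
    d, t = Y.shape
    W = rng.random((d, r)) + eps
    X = rng.random((r, t)) + eps
    ones = np.ones((d, t))
    for _ in range(iters):
        R = Y / (W @ X + eps)                 # elementwise Y / Yhat
        W *= (R @ X.T) / (ones @ X.T)         # left update in (6)
        R = Y / (W @ X + eps)
        X *= (W.T @ R) / (W.T @ ones)         # right update in (6)
    return W, X

Y = np.random.default_rng(1).random((64, 100))
W, X = nmf_kl(Y, r=8)
Yhat = W @ X
G = np.sum(Y * np.log((Y + 1e-12) / (Yhat + 1e-12)) - Y + Yhat)
print("final generalized KL divergence:", G)
```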
In our application of NMF to pitch estimation, the vectors y_t store vertical "time slices"
of the IF representation in Fig. 1. Specifically, the elements of y_t store the magnitude
spectra |F(ω*_i(t), t)| of extracted partials at time t; the instantaneous frequency axis is
discretized on a log scale so that each element of y_t covers 1/36 octave of the frequency
spectrum. The columns of W store basis functions, or harmonic templates, for the magnitude
spectra of voiced speech with different fundamental frequencies. (An additional column
in W stores a non-harmonic template for unvoiced speech.) In this study, only one
harmonic template was used per fundamental frequency. The fundamental frequencies range
from 50 Hz to 400 Hz, spaced and discretized on a log scale. We constrained the harmonic
templates for different fundamental frequencies to be related by a simple translation on
the log-frequency axis. Tying the columns of W in this way greatly reduces the number
of parameters that must be estimated by a learning algorithm. Finally, the elements of x_t
store the coefficients that best reconstruct y_t by linearly superposing harmonic templates
of W. Note that only partials from the same source form harmonic relations. Thus, the
number of nonzero elements in x_t indicates the number of periodic sources at time t, while
the indices of nonzero elements indicate their fundamental frequencies. It is in this sense
that the reconstruction y_t ≈ W x_t provides an analysis of the auditory scene.
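The tying described above amounts to building every column of W as a log-frequency shift of one template. The sketch below illustrates the construction; the 1/h magnitude rolloff is a placeholder for the learned template shape, not the paper's.

```python
# Sketch: a tied basis matrix W whose columns are log-frequency shifts of a
# single harmonic template. Bin spacing of 1/36 octave and the 50-400 Hz pitch
# range follow the text; the template shape itself is a placeholder.
import numpy as np

bins_per_octave = 36
f_lo, f_hi = 50.0, 3200.0                     # log-frequency axis for partials
n_bins = int(np.round(bins_per_octave * np.log2(f_hi / f_lo)))

def harmonic_template(f0, n_harm=10):
    v = np.zeros(n_bins)
    for h in range(1, n_harm + 1):
        k = int(np.round(bins_per_octave * np.log2(h * f0 / f_lo)))
        if 0 <= k < n_bins:
            v[k] = 1.0 / h                    # placeholder rolloff
    return v

f0s = 50.0 * 2 ** (np.arange(3 * bins_per_octave) / bins_per_octave)  # 50-400 Hz
W = np.stack([harmonic_template(f) for f in f0s], axis=1)
# On a log axis each column is (up to edge effects) a shifted copy of the
# f0 = 50 Hz column, which is what ties the parameters together.
print(W.shape)
```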
3.1 Learning the basis functions of voiced speech
The harmonic templates in W were estimated from the voiced speech of (non-overlapping)
speakers in the Keele database [14]. The Keele database provides aligned pitch contours
derived from laryngograph recordings. The first halves of all utterances were used for
training, while the second halves were reserved for testing. Given the vectors y_t computed
by IF estimation in the front end, the problem of NMF is to estimate the columns of W and
the reconstruction coefficients x_t. Each x_t has only two nonzero elements (one indicating
the reference value for f0, the other corresponding to the non-harmonic template of the
basis matrix W); their magnitudes must still be estimated by NMF. The estimation was
performed by iterating the updates in eq. (6).
Fig. 2 (left) compares the harmonic template at 100 Hz before and after learning. While
the template is initialized with broad spectral peaks, it is considerably sharpened by the
NMF learning algorithm. Fig. 2 (right) shows four examples from the Keele database
(from snippets of voiced speech with f0 = 100 Hz) that were used to train this template.
Note that even among these four partial profiles there is considerable variance. The learned
template is derived to minimize the total reconstruction error over all segments of voiced
speech in the training data.
[Figure 2 appears here: left, a harmonic template before and after learning (magnitude vs. frequency in Hz); right, observed partial profiles labeled "female: cloak", "male: stronger", "male: travel", and "male: the".]
Figure 2: Left: harmonic template before and after learning for voiced speech at
f0 = 100 Hz. The learned template (bottom) has a much sharper spectral profile. Right:
observed partials from four speakers with f0 = 100 Hz.
3.2 Nonnegative deconvolution for estimating f0 of one or more voices
Once the basis functions in W have been estimated, computing x such that y ≈ Wx under
the measure of eq. (5) simplifies to the problem of nonnegative deconvolution. Nonnegative
deconvolution has been applied to problems in fundamental frequency estimation [16],
music analysis [17] and sound localization [12].

In our model, nonnegative deconvolution of y ≈ Wx yields an estimate of the number
of periodic sources in y as well as their f0 values. Ideally, the number of nonzero
reconstruction weights in x reveals the number of sources, and the corresponding columns in
the basis matrix W reveal their f0 values. In practice, the index of the largest component
of x is found, and its corresponding f0 value is deemed to be the dominant fundamental
frequency. The second largest component of x is then used to extract a secondary
fundamental frequency, and so on. A thresholding heuristic can be used to terminate the search
for additional sources. Unvoiced speech is detected by a simple frame-based classifier
trained to make voiced/unvoiced distinctions from the observation y and its nonnegative
deconvolution x.
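With W held fixed, nonnegative deconvolution reduces to iterating only the X-update of eq. (6) on a single slice y, followed by ranking the coefficients. A sketch follows; the 10% threshold and the sparse toy templates are our stand-ins for the heuristic and the learned basis.

```python
# Sketch of nonnegative deconvolution: iterate only the X-update of eq. (6)
# with W fixed, then rank the coefficients to pick sources.
import numpy as np

def nn_deconvolve(y, W, iters=100, eps=1e-12, seed=0):
    x = np.random.default_rng(seed).random(W.shape[1]) + eps
    wsum = W.sum(axis=0) + eps
    for _ in range(iters):
        x *= (W.T @ (y / (W @ x + eps))) / wsum
    return x

def pick_sources(x, rel_thresh=0.1):
    order = np.argsort(x)[::-1]               # dominant f0 first
    return [i for i in order if x[i] > rel_thresh * x[order[0]]]

rng = np.random.default_rng(2)
W = np.zeros((216, 108))
for j in range(108):                          # toy, well-separated templates
    W[rng.choice(216, size=5, replace=False), j] = 1.0
y = W[:, 30] + 0.5 * W[:, 57]                 # mixture of two "voices"
x = nn_deconvolve(y, W)
print("active columns:", pick_sources(x))     # expect 30 and 57 on top
```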
The pattern-matching computations in NMF are reminiscent of well-known models of
harmonic template matching [6]. Two main differences are worth noting. First, the templates
in NMF are learned from labeled speech data. We have found this to be essential in their
generalization to unseen cases. It is not obvious how to craft a harmonic template "by
hand" that manages the variability of partial profiles in Fig. 2 (right). Second, the template
matching in NMF is framed by nonnegativity constraints. Specifically, the algorithm models
observed partials by a nonnegative superposition of harmonic stacks. The cost function
in eq. (5) also diverges if ŷ_α = 0 when y_α is nonzero; this useful property ensures that
minima of eq. (5) must explain each observed partial by its attribution to one or more sources.
This property does not hold for traditional least-squares linear reconstructions.
4 Implementation and results
We have implemented both the IF estimation in section 2 and the nonnegative deconvolution
in section 3.2 in a real-time system for pitch tracking. The software runs on a laptop
computer with a visual display that shows the contour of estimated f0 values scrolling in
real-time. After the signal is downsampled to 4900 Hz, IF estimation is performed in 10 ms
shifts with an analysis window of 50 ms. Partials extracted from the fixed points of eq. (4)
are discretized on a log-frequency axis. The columns of the basis matrix W provide harmonic
templates for f0 = 50 Hz to f0 = 400 Hz with a step size of 1/36 octave. To
achieve real-time performance and reduce system latency, the system does not post-process
the f0 values obtained in each frame from nonnegative deconvolution: in particular,
there is no dynamic programming to smooth the pitch contour, as commonly done in many
pitch tracking algorithms [18]. We have found that our algorithm performs well and yields
smooth pitch contours (for non-overlapping voices) even without this postprocessing.

                  Keele database                     Edinburgh database
         VE (%)  UE (%)  GPE (%)  RMS (Hz)    VE (%)  UE (%)  GPE (%)  RMS (Hz)
  NMF      7.7     4.6     0.9      4.3         7.8     4.4     0.7      5.8
  RAPT     3.2     6.8     2.2      4.4         4.5     8.4     1.9      5.3

Table 1: Comparison between our algorithm and RAPT [18] on the test portion of the Keele
database (see text) and the full Edinburgh database, in terms of the percentages of voiced
errors (VE), unvoiced errors (UE), and gross pitch errors (GPE), as well as the root mean
square (RMS) deviation in Hz.
4.1 Pitch determination of clean speech signals
Table 1 compares the performance of our algorithm on clean speech to RAPT [18], a
state-of-the-art pitch tracker based on autocorrelation and dynamic programming. Four error
types are reported: the percentage of voiced frames misclassified as unvoiced (VE), the
percentage of unvoiced frames misclassified as voiced (UE), the percentage of voiced frames
with gross pitch errors (GPE) where predicted and reference f0 values differ by more than
20%, and the root-mean-squared (RMS) difference between predicted and reference f0
values when there are no gross pitch errors. The results were obtained on the second halves
of utterances reserved for testing in the Keele database, as well as the full set of utterances
in the Edinburgh database [3]. As shown in the table, the performance of our algorithm is
comparable to that of RAPT.
4.2 Pitch determination of overlapping voices and noisy speech
We have also examined the robustness of our system to noise and overlapping speakers.
Fig. 3 shows the f0 values estimated by our algorithm from a mixture of two voices: one
with ascending pitch, the other with descending pitch. Each voice spans one octave. The
dominant and secondary f0 values extracted in each frame by nonnegative deconvolution
are shown. The algorithm recovers the f0 values of the individual voices almost perfectly,
though it does not currently make any effort to track the voices through time. (This is a
subject for future work.)
Fig. 4 shows in more detail how IF estimation and nonnegative deconvolution are affected
by interfering speakers and noise. A clean signal from a single speaker is shown in the
top row of the plot, along with its log power spectra, partials extracted by IF estimation,
estimated f0 , and reconstructed harmonic stack. The second and third rows show the effects
of adding white noise and an overlapping speaker, respectively. Both types of interference
degrade the harmonic structure in the log power spectra and extracted partials. However,
nonnegative deconvolution is still able to recover the pitch of the original speaker, as well
as the pitch of the second speaker. On larger evaluations of the algorithm?s robustness, we
have obtained results comparable to RAPT over a wide range of SNRs (as low as 0 dB).
[Figure 3 appears here: left, a spectrogram (frequency in Hz vs. time in s); right, dominant and secondary pitch estimates (Hz) vs. time (s).]
Figure 3: Left: Spectrogram of a mixture of two voices with ascending and descending f0
contours. Right: f0 values estimated by NMF.
[Figure 4 appears here: a 3x5 grid with rows labeled "Clean", "White noise added", "Mix of two signals" and columns labeled "Waveform", "Log Power Spectra", "Y", "Deconvoluted X", "Reconstructed Y".]
Figure 4: Effect of white noise (middle row) and overlapping speaker (bottom row) on
clean speech (top row). Both types of interference degrade the harmonic structure in the log
power spectra (second column) and the partials extracted by IF estimation (third column).
The results of nonnegative deconvolution (fourth column), however, are fairly robust. Both
the pitch of the original speaker at f0 = 200 Hz and the overlapping speaker at f0 = 300 Hz
are recovered. The fifth column displays the reconstructed profile of extracted partials from
activated harmonic templates.
5 Discussion
There exists a large body of related work on fundamental frequency estimation of overlapping sources [5, 7, 9, 19, 20, 21]. Our contributions in this paper are to develop a new
framework based on recent advances in unsupervised learning and to study the problem
with the constraints imposed by real-time system building. Nonnegative deconvolution
is similar to EM algorithms [7] for harmonic template matching, but it does not impose
normalization constraints on spectral peaks as if they represented a probability distribution.
Important directions for future work are to train a richer set of harmonic templates by NMF,
to incorporate the frame-based computations of nonnegative deconvolution into a dynamical model, and to embed our real-time system in interactive agents that respond to the pitch
contours of human speech. All these directions are being actively pursued.
References
[1] T. Abe, T. Kobayashi, and S. Imai. Harmonics tracking and pitch extraction based on instantaneous frequency. In Proc. of ICASSP, pages 756-759, 1995.
[2] T. Abe, T. Kobayashi, and S. Imai. Robust pitch estimation with harmonics enhancement in noisy environments based on instantaneous frequency. In Proc. of ICSLP, pages 1277-1280, 1996.
[3] P. Bagshaw, S. M. Hiller, and M. A. Jack. Enhanced pitch tracking and the processing of f0 contours for computer aided intonation teaching. In Proc. of 3rd European Conference on Speech Communication and Technology, pages 1003-1006, 1993.
[4] A. S. Bregman. Auditory Scene Analysis: The Perceptual Organization of Sound. MIT Press, 2nd edition, 1999.
[5] A. de Cheveigné and H. Kawahara. Multiple period estimation and pitch perception model. Speech Communication, 27:175-185, 1999.
[6] J. Goldstein. An optimum processor theory for the central formation of the pitch of complex tones. J. Acoust. Soc. Am., 54:1496-1516, 1973.
[7] M. Goto. A robust predominant-F0 estimation method for real-time detection of melody and bass lines in CD recordings. In Proc. of ICASSP, pages 757-760, June 2000.
[8] H. Kawahara, H. Katayose, A. de Cheveigné, and R. D. Patterson. Fixed point analysis of frequency to instantaneous frequency mapping for accurate estimation of f0 and periodicity. In Proc. of EuroSpeech, pages 2781-2784, 1999.
[9] A. Klapuri, T. Virtanen, and J.-M. Holm. Robust multipitch estimation for the analysis and manipulation of polyphonic musical signals. In Proc. of COST-G6 Conference on Digital Audio Effects, Verona, Italy, 2000.
[10] D. D. Lee and H. S. Seung. Learning in intelligent embedded systems. In Proc. of USENIX Workshop on Embedded Systems, 1999.
[11] D. D. Lee and H. S. Seung. Learning the parts of objects with nonnegative matrix factorization. Nature, 401:788-791, 1999.
[12] Y. Lin, D. D. Lee, and L. K. Saul. Nonnegative deconvolution for time of arrival estimation. In Proc. of ICASSP, 2004.
[13] T. Nakatani and T. Irino. Robust fundamental frequency estimation against background noise and spectral distortion. In Proc. of ICSLP, pages 1733-1736, 2002.
[14] F. Plante, G. F. Meyer, and W. A. Ainsworth. A pitch extraction reference database. In Proc. of EuroSpeech, pages 837-840, 1995.
[15] L. K. Saul, D. D. Lee, C. L. Isbell, and Y. LeCun. Real time voice processing with audiovisual feedback: toward autonomous agents with perfect pitch. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15. MIT Press, 2003.
[16] L. K. Saul, F. Sha, and D. D. Lee. Statistical signal processing with nonnegativity constraints. In Proc. of EuroSpeech, pages 1001-1004, 2003.
[17] P. Smaragdis and J. C. Brown. Non-negative matrix factorization for polyphonic music transcription. In Proc. of IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, pages 177-180, 2003.
[18] D. Talkin. A robust algorithm for pitch tracking (RAPT). In W. B. Kleijn and K. K. Paliwal, editors, Speech Coding and Synthesis, chapter 14. Elsevier Science B.V., 1995.
[19] T. Tolonen and M. Karjalainen. A computationally efficient multipitch analysis model. IEEE Trans. on Speech and Audio Processing, 8(6):708-716, 2000.
[20] T. Virtanen and A. Klapuri. Separation of harmonic sounds using multipitch analysis and iterative parameter estimation. In Proc. of IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, pages 83-86, New Paltz, NY, USA, Oct 2001.
[21] M. Wu, D. Wang, and G. J. Brown. A multipitch tracking algorithm for noisy speech. IEEE Trans. on Speech and Audio Processing, 11:229-241, 2003.
1,797 | 2,632 | Distributed Information Regularization on
Graphs
Adrian Corduneanu
CSAIL MIT
Cambridge, MA 02139
[email protected]
Tommi Jaakkola
CSAIL MIT
Cambridge, MA 02139
[email protected]
Abstract
We provide a principle for semi-supervised learning based on optimizing
the rate of communicating labels for unlabeled points with side information. The side information is expressed in terms of identities of sets of
points or regions with the purpose of biasing the labels in each region
to be the same. The resulting regularization objective is convex, has a
unique solution, and the solution can be found with a pair of local propagation operations on graphs induced by the regions. We analyze the
properties of the algorithm and demonstrate its performance on document classification tasks.
1 Introduction
A number of approaches and algorithms have been proposed for semi-supervised learning
including parametric models [1], random field/walk models [2, 3], or discriminative (kernel
based) approaches [4]. The basic intuition underlying these methods is that the labels
should not change within clusters of points, where the definition of a cluster may vary from
one method to another.
We provide here an alternative information theoretic criterion and associated algorithms
for solving semi-supervised learning problems. Our formulation, an extension of [5, 6],
is based on the idea of minimizing the number of bits required to communicate labels for
unlabeled points, and involves no parametric assumptions. The communication scheme
inherent to the approach is defined in terms of regions, weighted sets of points, that are
shared between the sender and the receiver. The regions are important in capturing the
topology over the points to be labeled, and, through the communication criterion, bias the
labels to be the same within each region.
We start by defining the communication game and the associated regularization problem,
analyze properties of the regularizer, derive distributed algorithms for finding the unique
solution to the regularization problem, and demonstrate the method on a document classification task.
Figure 1: The topology imposed by the set of regions (squares) on unlabeled points (circles)
2 The communication problem
Let S = {x1, . . . , xn} be the set of unlabeled points and Y the set of possible labels. We assume that target labels are available only for a small subset Sl ⊆ S of the unlabeled points. The objective here is to find a conditional distribution Q(y|x) over the labels at each unlabeled point x ∈ S. The estimation is made possible by a regularization criterion over the conditionals, which we define here through a communication problem. The communication scheme relies on a set of regions R = {R1, . . . , Rm}, where each region R ∈ R is a subset of the unlabeled points S (cf. Figure 1). The weights of points within each region are expressed in terms of a conditional distribution P(x|R), with Σ_{x∈R} P(x|R) = 1, and each region has an a priori probability P(R). We require only that Σ_{R∈R} P(x|R)P(R) = 1/n for all x ∈ S. (Note: in our overloaded notation "R" stands both for the set of points and its identity as a set.)
The regions and the membership probabilities are set in an application-specific manner. For example, in a document classification setting we might define regions as sets of documents containing each word. The probabilities P(R) and P(x|R) could be subsequently derived from a word-frequency representation of documents: if f(w|x) is the frequency of word w in document x, then for each pair of w and the corresponding region R we can set P(R) = Σ_{x∈S} f(w|x)/n and P(x|R) = f(w|x)/(n P(R)).
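To make the construction concrete, here is a minimal NumPy sketch (our own illustration, not code from the paper; the function name and the row-normalization assumption are ours):

import numpy as np

def word_regions(freq):
    # freq: (n_docs, n_words) array of word frequencies f(w|x); each row is
    # assumed normalized to sum to 1 so that sum_R P(x|R) P(R) = 1/n holds.
    n_docs, _ = freq.shape
    P_R = freq.sum(axis=0) / n_docs                  # P(R) = sum_x f(w|x) / n
    P_xR = freq / (n_docs * np.maximum(P_R, 1e-12))  # P(x|R) = f(w|x) / (n P(R))
    return P_R, P_xR                                 # columns of P_xR sum to 1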
For any fixed conditionals {Q(y|x)} we define the communication problem as follows. The sender selects a region R ∈ R with probability P(R) and a point within the region according to P(x|R). Since Σ_{R∈R} P(x|R)P(R) = 1/n, each point x is overall equally likely to be selected. The label y is sampled from Q(y|x) and communicated to the receiver optimally using a coding scheme tailored to the region R (based on knowing P(x|R) and Q(y|x), x ∈ R). The receiver has access to x, R, and the region-specific coding scheme to reproduce y. The rate of information needed to be sent to the receiver in this scheme is given by
Jc(Q; R) = Σ_{R∈R} P(R) IR(x; y) = Σ_{R∈R} P(R) Σ_{x∈R} Σ_{y∈Y} P(x|R) Q(y|x) log [Q(y|x) / Q(y|R)]   (1)
where Q(y|R) = Σ_{x∈R} P(x|R) Q(y|x) is the overall probability of y within the region.
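For reference, the regularizer (1) can be evaluated directly; the following sketch is our own (the array layouts are assumptions) and is handy for monitoring the propagation algorithm of Section 5:

import numpy as np

def info_regularizer(Q, P_R, P_xR):
    # Q: (n, |Y|) conditionals Q(y|x); P_R: (m,) priors; P_xR: (n, m) with
    # column r holding P(.|R_r). Returns Jc(Q; R) as in (1).
    Q_R = P_xR.T @ Q                       # Q(y|R) = sum_x P(x|R) Q(y|x)
    J = 0.0
    for r in range(len(P_R)):
        # KL( Q(.|x) || Q(.|R_r) ) at every point, then weight by P(x|R_r)
        kl = (Q * np.log((Q + 1e-12) / (Q_R[r] + 1e-12))).sum(axis=1)
        J += P_R[r] * (P_xR[:, r] * kl).sum()
    return J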
3 The regularization problem
We use Jc (Q; R) to regularize the conditionals. This regularizer biases the conditional
distributions to be constant within each region so as to minimize the communication cost
IR (x; y). Put another way, by introducing a region R we bias the points in the region
to be labeled the same. By adding the cost of encoding the few available labeled points,
expressed here in terms of the empirical distribution P̂(y, x) whose support lies in Sl, the overall regularization criterion is given by
J(Q; λ) = −Σ_{x∈Sl} Σ_{y∈Y} P̂(y, x) log Q(y|x) + λ Jc(Q; R)   (2)
where λ > 0 is a regularization parameter. The following lemma guarantees that the solution is always unique:
Lemma 1 J(Q; λ) for λ > 0 is a strictly convex function of the conditionals {Q(y|x)} provided that 1) each point x ∈ S belongs to at least one region containing at least two points, and 2) the membership probabilities P(x|R) and P(R) are all non-zero.
The proof follows immediately from the strict convexity of mutual information [7] and the
fact that the two conditions guarantee that each Q(y|x) appears non-trivially in at least one
mutual information term.
4 Regularizer and the number of labelings
We consider here a simple setting where the labels are hard and binary, Q(y|x) ∈ {0, 1}, and seek to bound the number of possible binary labelings consistent with a cap on the regularizer.
We assume for simplicity that points in a region have uniform weights P(x|R). Let N(I) be the number of labelings of S consistent with an upper bound I on the regularizer Jc(Q, R). The goal is to show that N(I) is significantly less than 2^n and that N(I) → 2 as I → 0.
Theorem 2 log2 N(I) ≤ C(I) + I · n · t(R) / min_R P(R), where C(I) → 1 as I → 0, and t(R) is a property of R.
Proof Let f(R) be the fraction of positive samples in region R. Because the labels are binary, IR(x; y) is given by H(f(R)), where H is the entropy. If Σ_R P(R) H(f(R)) ≤ I then certainly H(f(R)) ≤ I/P(R). Since the binary entropy is concave and symmetric w.r.t. 0.5, this is equivalent to f(R) ≤ gR(I) or f(R) ≥ 1 − gR(I), where gR(I) is the inverse of H at I/P(R). We say that a region is mainly negative if the former condition holds, or mainly positive if the latter.
If two regions R1 and R2 overlap by a large amount, they must be mainly positive or mainly negative together. Specifically, this is the case if |R1 ∩ R2| > gR1(I)|R1| + gR2(I)|R2|. Consider a graph with vertices the regions, and edges whenever the above condition holds. Then regions in a connected component must be all mainly positive or all mainly negative together. Let C(I) be the number of connected components in this graph, and note that C(I) → 1 as I → 0.
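The graph used in this argument is straightforward to build explicitly; a sketch (ours, assuming the networkx package and regions given as Python sets of point indices):

import networkx as nx

def num_components(regions, g):
    # regions: list of sets of point indices; g[i] is the threshold g_{R_i}(I).
    G = nx.Graph()
    G.add_nodes_from(range(len(regions)))
    for i in range(len(regions)):
        for j in range(i + 1, len(regions)):
            overlap = len(regions[i] & regions[j])
            if overlap > g[i] * len(regions[i]) + g[j] * len(regions[j]):
                G.add_edge(i, j)   # the two regions must share polarity
    return nx.number_connected_components(G)   # this is C(I)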
We upper bound the number of labelings of the points spanned by a given connected component C, and subsequently combine the bounds. Consider the case in which all regions in C are mainly negative. For any subset C′ of C that still covers all the points spanned by C,
f(C) ≤ (1/|C|) Σ_{R∈C′} gR(I)|R| ≤ max_R gR(I) · (Σ_{R∈C′} |R|) / |C|   (3)
Thus f(C) ≤ t(C) max_R gR(I), where t(C) = min_{C′⊆C, C′ a cover} (Σ_{R∈C′} |R|) / |C| is the minimum average number of times a point in C is necessarily covered.
There are at most 2^{n f(R) log2(2/f(R))} labelings of a set of points of which at most n f(R) are positive.¹ Thus the number of feasible labelings of the connected component C is upper bounded by 2^{1 + n t(C) max_R gR(I) log2(2/(t(C) max_R gR(I)))}, where the 1 is because C can be either mainly positive or mainly negative. By cumulating the bounds over all connected components and upper bounding the entropy-like term with I/P(R), we achieve the stated result. □
Note that t(R), the average number of times a point is covered by a minimal subcovering of R, normally does not scale with |R| and is a covering-dependent constant.
5 Distributed propagation algorithm
We introduce here a local propagation algorithm for minimizing J(Q; λ) that is both easy to implement and provably convergent. The algorithm can be seen as a variant of the Blahut-Arimoto algorithm in rate-distortion theory [8], adapted to the more structured context here. We begin by rewriting each mutual information term IR(x; y) in the criterion
IR(x; y) = Σ_{x∈R} Σ_{y∈Y} P(x|R) Q(y|x) log [Q(y|x) / Q(y|R)]   (4)
         = min_{QR(·)} Σ_{x∈R} Σ_{y∈Y} P(x|R) Q(y|x) log [Q(y|x) / QR(y)]   (5)
where the variational distribution QR(y) can be chosen independently from Q(y|x), but the unique minimum is attained when QR(y) = Q(y|R) = Σ_{x∈R} P(x|R) Q(y|x). We can extend the regularizer over both {Q(y|x)} and {QR(y)} by defining
Jc(Q, QR; R) = Σ_{R∈R} P(R) Σ_{x∈R} Σ_{y∈Y} P(x|R) Q(y|x) log [Q(y|x) / QR(y)]   (6)
so that Jc(Q; R) = min_{QR(·), R∈R} Jc(Q, QR; R) recovers the original regularizer.
The local propagation algorithm follows from optimizing each Q(y|x) based on fixed {QR(y)} and subsequently finding each QR(y) given fixed {Q(y|x)}. We omit the straightforward derivation and provide only the resulting algorithm: for all points x ∈ S \ Sl (not labeled) and for all regions R ∈ R we perform the following complementary averaging updates
Q(y|x) ← (1/Zx) exp( Σ_{R: x∈R} [n P(R) P(x|R)] log QR(y) )   (7)
QR(y) ← Σ_{x∈R} P(x|R) Q(y|x)   (8)
where Zx is a normalization constant. In other words, Q(y|x) is obtained by taking
a weighted geometric average of the distributions associated with the regions, whereas
QR (y) is (as before) a weighted arithmetic average of the conditionals within each region. In terms of the document classification example discussed earlier, the weight
[nP (R)P (x|R)] appearing in the geometric average reduces to f (w|x), the frequency of
word w identified with region R in document x.
¹The result follows from Σ_{i=0}^{k} C(n, i) ≤ (2n/k)^k.
Updating Q(y|x) for each labeled point x ∈ Sl involves minimizing
−Σ_{y∈Y} P̂(y, x) log Q(y|x) − (λ/n) H(Q(·|x)) − λ Σ_{y∈Y} Q(y|x) Σ_{R: x∈R} P(R) P(x|R) log QR(y)   (9)
where H(Q(·|x)) is the Shannon entropy of the conditional. While the objective is strictly convex, the solution cannot be written in closed form and has to be found iteratively (e.g., via Newton-Raphson or simple bracketing when the labels are binary). A much simpler update, Q(y|x) = δ(y, ŷx), where ŷx is the observed label for x, may suffice in practice. This update results from taking the limit of small λ and approximates the iterative solution.
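Putting updates (7), (8), and the simplified labeled-point update together, a minimal sketch of the full loop (our own NumPy rendering, with our choice of initialization handling and iteration count) is:

import numpy as np

def propagate(Q, P_R, P_xR, labels, n_iters=200):
    # Q: (n, |Y|) initial conditionals; P_R: (m,); P_xR: (n, m) with columns
    # P(.|R); labels: dict {labeled point index: observed label index}.
    W = len(Q) * P_xR * P_R                    # weights n P(R) P(x|R); rows sum to 1
    for _ in range(n_iters):
        Q_R = P_xR.T @ Q                       # update (8): arithmetic average
        logQ = W @ np.log(Q_R + 1e-12)         # update (7): geometric average
        Q = np.exp(logQ - logQ.max(axis=1, keepdims=True))
        Q /= Q.sum(axis=1, keepdims=True)      # normalize by Z_x
        for x, y in labels.items():            # clamp labeled points: Q(.|x) = delta
            Q[x, :] = 0.0
            Q[x, y] = 1.0
    return Q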
6 Extensions
6.1 Structured labels and generalized propagation steps
Here we extend the regularization framework to the case where the labels represent more structured annotations of objects. Let y be a vector of elementary labels y = [y1, . . . , yk]′ associated with a single object x. We assume that the distribution Q(y|x) = Q(y1, . . . , yk|x), for any x, can be represented as a tree-structured graphical model, where the structure is the same for all x ∈ S. The model is appropriate, e.g., in the context of assigning topics to documents. While the regularization principle applies directly if we leave Q(y|x) unconstrained, the calculations would be potentially infeasible due to the number of elementary labels involved, and inefficient as we would not explicitly make use of the assumed structure. Consequently, we seek to extend the regularization framework to handle distributions of the form
QT(y|x) = Π_{i=1}^{k} Qi(yi|x) Π_{(i,j)∈T} [Qij(yi, yj|x) / (Qi(yi|x) Qj(yj|x))]   (10)
where T defines the edge set of the tree. The regularization problem will be formulated over {Qi(yi|x), Qij(yi, yj|x)} rather than the unconstrained Q(y|x).
The difficulty in this case arises from the fact that the arithmetic average (mixing) in eq (8) is not structure preserving (tree-structured models are not mean flat). We can, however, also constrain QR(y) to factor according to the same tree structure. By restricting the class of variational distributions QR(y) that we consider, we necessarily obtain an upper bound on the original information criterion. If we minimize this upper bound with respect to {QR(y)}, under the factorization constraint
QR,T(y) = Π_{i=1}^{k} QR,i(yi) Π_{(i,j)∈T} [QR,ij(yi, yj) / (QR,i(yi) QR,j(yj))]   (11)
given fixed {QT(y|x)}, we can replace eq (8) with simple "moment matching" updates
QR,ij(yi, yj) ← Σ_{x∈R} P(x|R) Qij(yi, yj|x)   (12)
The geometric update of Q(y|x) in eq (7) is structure preserving in the sense that if QR,T(y), R ∈ R, share the same tree structure, then so will the resulting conditional. The new updates will result in a monotonically decreasing bound on the original criterion.
Figure 2: Clusters correctly separated by information regularization given one label from each class
6.2 Complementary sets of regions
In many cases the points to be labeled may have alternative feature representations, each
leading to a different set of natural regions R(k) . For example, in web page classification
both the content of the page, and the type of documents that link to that page should be
correlated with its topic. The relationship between these heterogeneous features may be
complex, with some features more relevant to the classification task than others.
Let Jc(Q; R^(k)) denote the regularizer from the kth feature representation. Since the regularizers are on a common scale we can combine them linearly:
Jc(Q; K, α) = Σ_{k=1}^{K} αk Jc(Q; R^(k)) = Σ_{k=1}^{K} Σ_{R∈R^(k)} αk Pk(R) IR(x; y)   (13)
where αk ≥ 0 and Σ_k αk = 1. The result is a regularizer with regions K = ∪_k R^(k) and adjusted a priori weights αk Pk(R) over the regions. The solution can therefore be found as before provided that {αk} are known. When {αk} are unknown, we set them competitively. In other words, we minimize the worst information rate across the available representations. This gives rise to the following regularization problem:
max_{αk≥0, Σ_k αk=1} min_{Q(y|x)} J(Q; λ, α)   (14)
where J(Q; λ, α) is the overall objective that uses Jc(Q; K, α) as the regularizer. The maximum is well-defined since the objective is concave in {αk}. This follows immediately as the objective is a minimum of a collection of linear functions J(Q; λ, α) (linear in {αk}). At the optimum, all Jc(Q; R^(k)) for which αk > 0 have the same value (the same information rate). Other feature sets, those with αk = 0, do not contribute to the overall solution as their information rates are dominated by others.
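Since the objective in (14) is concave (in fact linear) in {αk}, one simple way to approximate the maximin solution is to alternate the inner solve for Q with a multiplicative-weights ascent step on α. The following step is our own sketch, not a procedure specified in the paper:

import numpy as np

def update_alpha(alpha, rates, lr=0.1):
    # rates[k] = current Jc(Q; R^(k)); shift weight toward the representation
    # with the worst (largest) information rate, staying on the simplex.
    alpha = alpha * np.exp(lr * np.asarray(rates))
    return alpha / alpha.sum()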
7 Experiments
We first illustrate the performance of information regularization on two generated binary
classification tasks in the plane. Here we can derive a region covering from the Euclidean
metric as spheres of a certain radius centered at each data point. On the data set in Figure 2 inspired from [3] the method correctly propagates the labels to the clusters starting
from a single labeled point in each class.
Figure 3: Ability of information regularization to correct the output of a prior classifier (left: before, right: after)
In the example in Figure 3 we demonstrate that
information regularization can be used as a post-processing to supervised classification and
improve error rates by taking advantage of the topology of the space. All points are a priori
labeled by a linear classifier that is non-optimal and places a decision boundary through the
negative and positive clusters. Information regularization (on a Euclidean region covering
defined as circles around each data point) is able to correct the mislabeling of the clusters.
Next we test the algorithm on a web document classification task, the WebKB data set of [1]. The data consists of 1051 pages collected from the websites of four universities. This particular subset of WebKB is a binary classification task into "course" and "non-course" pages. 22% of the documents are positive ("course"). The dataset is interesting because apart from the document contents we have information about the link structure of the documents. The two sources of information illustrate the capability of information regularization to combine heterogeneous unlabeled representations.
Both "text" and "link" features used here are bag-of-words representations of documents. To obtain "link" features we collect the text that appears under all links pointing to a page from other pages, and produce its bag-of-words representation. We employ no stemming or stop-word processing, but restrict the vocabulary to 2000 text words and 500 link words.
The experimental setup consists of 100 random selections of 3 positive labeled, 9 negative labeled, and the rest unlabeled. The test set includes all unlabeled documents. We report a naïve Bayes baseline based on the model that features of different words are independent given the document class. The naïve Bayes algorithm can be run on text features, link features, or can combine the two feature sets by assuming independence. We also quote the performance of the semi-supervised method obtained by combining naïve Bayes with the EM algorithm as in [9].
We measure the performance of the algorithms by the F-score equal to 2pr/(p+r), where p
and r are the precision and recall. A high F-score indicates that the precision and recall are
high and also close to each other. To compare algorithms independently of the probability
threshold that decides between positive and negative samples, the results reported are the
best F-scores for all possible settings of the threshold.
The key issue in applying information regularization is the derivation of a sound region
covering R. For document classification we obtained the best results by grouping all documents that share a certain word into the same region; thus each region is in fact a word,
and there are as many regions as the size of the vocabulary. Regions are weighted equally,
as well as the words belonging to the same region. The choice of λ is also task dependent. Here cross-validation selected an optimal value λ = 90.
Table 1: Web page classification comparison between naïve Bayes, information regularization (inforeg), and semi-supervised naïve Bayes+EM on text, link, and joint features

                 text     link     both
naïve Bayes      82.85    65.64    83.33
inforeg          85.10    82.85    86.15
naïve Bayes+EM   93.69    67.18    91.01
When running information regularization with both text and link features we combined the coverings with a weight of 0.5 rather than optimizing it in a min-max fashion.
All results are reported in Table 1. We observe that information regularization performs better than naïve Bayes on all types of features, that combining text and link features improves the performance of the regularization method, and that on link features the method performs better than the semi-supervised naïve Bayes+EM. Most likely the results do not reflect the full potential of information regularization, due to the ad-hoc choice of regions based on the vocabulary used by naïve Bayes.
8 Discussion
The regularization principle introduced here provides a general information theoretic approach to exploiting unlabeled points. The solution implied by the principle is unique and
can be found efficiently with distributed algorithms, performing complementary averages,
on the graph induced by the regions. The propagation algorithms also extend to more
structured settings. Our preliminary theoretical analysis concerning the number of possible labelings with bounded regularizer is suggestive but rather loose (tighter results can be
found). The effect of the choice of the regions (sets of points that ought to be labeled the
same) is critical in practice but not yet well-understood.
References
[1] A. Blum and T. Mitchell. Combining labeled and unlabeled data with co-training. In
Proceedings of the 1998 Conference on Computational Learning Theory, 1998.
[2] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In Machine Learning: Proceedings of the Twentieth International Conference, 2003.
[3] M. Szummer and T. Jaakkola. Partially labeled classification with Markov random walks. In Advances in Neural Information Processing Systems 14, 2001.
[4] O. Chapelle, J. Weston, and B. Schoelkopf. Cluster kernels for semi-supervised learning. In Advances in Neural Information Processing Systems 15, 2002.
[5] M. Szummer and T. Jaakkola. Information regularization with partially labeled data. In NIPS 2002, volume 15, 2003.
[6] A. Corduneanu and T. Jaakkola. On information regularization. In Proceedings of the
19th UAI, 2003.
[7] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley & Sons, New
York, 1991.
[8] R. E. Blahut. Computation of channel capacity and rate distortion functions. In IEEE Trans. Inform. Theory, volume 18, pages 460–473, July 1972.
[9] K. Nigam, A. K. McCallum, S. Thrun, and T. Mitchell. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39:103–134, 2000.
1,798 | 2,633 | Approximately Efficient Online Mechanism Design
David C. Parkes
DEAS, Maxwell-Dworkin
Harvard University
[email protected]
Satinder Singh
Comp. Science and Engin.
University of Michigan
[email protected]
Dimah Yanovsky
Harvard College
[email protected]
Abstract
Online mechanism design (OMD) addresses the problem of sequential
decision making in a stochastic environment with multiple self-interested
agents. The goal in OMD is to make value-maximizing decisions despite
this self-interest. In previous work we presented a Markov decision process (MDP)-based approach to OMD in large-scale problem domains.
In practice the underlying MDP needed to solve OMD is too large and
hence the mechanism must consider approximations. This raises the possibility that agents may be able to exploit the approximation for selfish
gain. We adopt sparse-sampling-based MDP algorithms to implement efficient policies, and retain truth-revelation as an approximate Bayesian-Nash equilibrium. Our approach is empirically illustrated in the context
of the dynamic allocation of WiFi connectivity to users in a coffeehouse.
1 Introduction
Mechanism design (MD) is concerned with the problem of providing incentives to implement desired system-wide outcomes in systems with multiple self-interested agents.
Agents are assumed to have private information, for example about their utility for different outcomes and about their ability to implement different outcomes, and act to maximize
their own utility. The MD approach to achieving multiagent coordination supposes the existence of a center that can receive messages from agents and implement an outcome and
collect payments from agents. The goal of MD is to implement an outcome with desired
system-wide properties in a game-theoretic equilibrium.
Classic mechanism design considers static systems in which all agents are present and a
one-time decision is made about an outcome. Auctions, used in the context of resourceallocation problems, are a standard example. Online mechanism design [1] departs from
this and allows agents to arrive and depart dynamically requiring decisions to be made with
uncertainty about the future. Thus, an online mechanism makes a sequence of decisions
without the benefit of hindsight about the valuations of the agents yet to arrive. Without the
issue of incentives, the online MD problem is a classic sequential decision problem.
In prior work [6], we showed that Markov decision processes (MDPs) can be used to define
an online Vickrey-Clarke-Groves (VCG) mechanism [2] that makes truth-revelation by the
agents (called incentive-compatibility) a Bayesian-Nash equilibrium [5] and implements a
policy that maximizes the expected summed value of all agents. This online VCG model
differs from the line of work in online auctions, introduced by Lavi and Nisan [4] in that it
assumes that the center has a model and it handles a general decision space and any decision
horizon. Computing the payments and allocations in the online VCG mechanism involves
solving the MDP that defines the underlying centralized (ignoring self-interest) decision
making problem. For large systems, the MDPs that need to be solved exactly become large
and thus computationally infeasible.
In this paper we consider the case where the underlying centralized MDPs are indeed too
large and thus must be solved approximately, as will be the case in most real applications.
Of course, there are several choices of methods for solving MDPs approximately. We show
that the sparse-sampling algorithm due to Kearns et al. [3] is particularly well suited to
online MD because it produces the needed local approximations to the optimal value and
action efficiently. More challengingly, regardless of our choice the agents in the system can
exploit their knowledge of the mechanism's approximation algorithm to try and "cheat" the
mechanism to further their own selfish interests. Our main contribution is to demonstrate
that our new approximate online VCG mechanism has truth-revelation by the agents as
an ε-Bayesian-Nash equilibrium (BNE). This approximate equilibrium supposes that each agent is indifferent to within an expected utility of ε, and will play a truthful strategy in best-response to truthful strategies of other agents if no other strategy can improve its utility by more than ε. For any ε, our online mechanism has a run-time that is independent of the number of states in the underlying MDP, provides an ε-BNE, and implements a policy with expected value within ε of the optimal policy's value.
Our approach is empirically illustrated in the context of the dynamic allocation of WiFi connectivity to users in a coffeehouse. We demonstrate the speed-up introduced with sparse-sampling (compared with policy calculation via value iteration), as well as the economic
value of adopting an MDP-based approach over a simple fixed-price approach.
2 Preliminaries
Here we formalize the multiagent sequential decision problem that defines the online mechanism design (OMD) problem. The approach is centralized. Each agent is asked to report
its private information (for instance about its value and its capabilities) to a central planner
or mechanism upon arrival. The mechanism implements a policy based on its view of the
state of the world (as reported by agents). The policy defines actions in each state, and the
assumption is that all agents acquiesce to the decisions of the planner. The mechanism also
collects payments from agents, which can themselves depend on the reports of agents.
Online Mechanism Design. We consider a finite-horizon problem with a set T of time points and a sequence of decisions k = {k1, . . . , kT}, where kt ∈ Kt and Kt is the set of feasible decisions in period t. Agent i ∈ I arrives at time ai ∈ T, departs at time li ∈ T, and has value vi(k) ≥ 0 for a sequence of decisions k. By assumption, an agent has no value for decisions outside of the interval [ai, li]. Agents also face payments, which can be collected after an agent's departure. Collectively, information θi = (ai, li, vi) defines the type of agent i, with θi ∈ Θ. Agent types are sampled i.i.d. from a probability distribution f(θ), assumed known to the agents and to the central mechanism. Multiple agents can arrive and depart at the same time. Agent i, with type θi, receives utility ui(k, p; θi) = vi(k; θi) − p, for decisions k and payment p. Agents are modeled as expected-utility maximizers.
Definition 1 (Online Mechanism Design) The OMD problem is to implement the sequence
of decisions that maximizes the expected summed value across all agents in equilibrium,
given self-interested agents with private information about valuations.
In economic terms, an optimal (value-maximizing) policy is the allocatively-efficient, or
simply the efficient policy. Note that nothing about the OMD models requires centralized
execution of the joint plan. Rather, the agents themselves can have capabilities to perform
actions and be asked to perform particular actions by the mechanism. The agents can also
have private information about the actions that they are able to perform.
Using MDPs to Solve Online Mechanism Design. In the MDP-based approach to solving the OMD problem, the sequential decision problem is formalized as an MDP, with the state at any time encapsulating both the current agent population and the constraints on current decisions as reflected by decisions made previously. The reward function in the MDP is simply defined to correspond with the total reported value of all agents for all sequences of decisions.
Given types θi ∼ f(θ) we define an MDP, Mf, as follows. Define the state of the MDP at time t as the history-vector ht = (θ1, . . . , θt; k1, . . . , kt−1), to include the reported types up to and including period t and the decisions made up to and including period t − 1.¹ The set of all possible states at time t is denoted Ht. The set of all possible states across all time is H = ∪_{t=1}^{T+1} Ht. The set of decisions available in state ht is Kt(ht). Given a decision kt ∈ Kt(ht) in state ht, there is some probability distribution Prob(ht+1 | ht, kt) over possible next states ht+1. In the setting of OMD, this probability distribution is determined by the uncertainty over new agent arrivals (as represented within f(θ)), together with departures and the impact of decision kt on state.
The payoff function for the induced MDP is defined to reflect the goal of maximizing the total expected reward across all agents. In particular, payoff Ri(ht, kt) = vi(k≤t; θi) − vi(k≤t−1; θi) becomes available from agent i upon taking action kt in state ht. With this, we have Σ_{t=1}^{τ} Ri(ht, kt) = vi(k≤τ; θi), for all periods τ, to provide the required correspondence with agent valuations. Let R(ht, kt) = Σ_i Ri(ht, kt) denote the payoff obtained from all agents at time t. Given a (nonstationary) policy π = {π1, π2, . . . , πT}, where πt : Ht → Kt, an MDP defines an MDP-value function V^π as follows: V^π(ht) is the expected value of the summed payoff obtained from state ht onwards under policy π, i.e., V^π(ht) = E_π{R(ht, π(ht)) + R(ht+1, π(ht+1)) + · · · + R(hT, π(hT))}. An optimal policy π* is one that maximizes the MDP-value of every state in H.
policy ? ? is one that maximizes the MDP-value of every state in H.
The optimal MDP-value function V ? can be
P computed by value-iteration, and is defined
so that V ? (h) = maxk?Kt (h) [R(h, k) + h0 ?Ht+1P rob(h0 |h, k)V ? (h0 )] for t = T ?
1, T ? 2, . . . , 1 and all h ? Ht , with V ? (h ? HT ) = maxk?KT (h) R(h, k). Given the
optimal MDP-value function, the optimal policy is derived
Pas follows: for t < T , policy
? ? (h ? Ht ) chooses action arg maxk?Kt (h) [R(h, k) + h0 ?Ht+1P rob(h0 |h, k)V ? (h0 )]
and ? ? (h ? HT ) = arg maxk?KT (h) R(h, k). Let ???t0 denote reported types up to and
including period t0 . Let Ri 0 (???t0 ; ?) denote the total reported reward to agent i up to and
?t
including period t0 . The commitment period for agent i is defined as the first period, mi ,
i
i
0
0
for which ?t ? mi , R?m
(???mi ; ?) = R?t
(???mi ? ?>m
; ?), for any types ?>m
still to
i
i
i
arrive. This is the earliest period in which agent i?s total value is known with certainty.
Let ht0 (???t0 ; ?) denote the state in period t0 given reports ???t0 . Let ???t0 \i = ???t0 \ ??i .
Definition 2 (Online VCG mechanism) Given history h ∈ H, mechanism Mvcg = (Θ; π, pvcg) implements policy π and collects payment
p^vcg_i(θ̂≤mi; π) = R^i≤mi(θ̂≤mi; π) − [V^π(h_ai(θ̂≤ai; π)) − V^π(h_ai(θ̂≤ai\i; π))]   (1)
from agent i in some period t′ ≥ mi.
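Given the value function, the payment (1) is a one-line computation; a sketch with hypothetical argument names:

def online_vcg_payment(R_i_to_m, V, h_arrival, h_arrival_without_i):
    # R_i_to_m: agent i's total reported reward through its commitment period;
    # V: optimal MDP-value function (mapping states to values); the two states
    # are the arrival-period states computed with and without agent i's report.
    return R_i_to_m - (V[h_arrival] - V[h_arrival_without_i])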
¹Using histories as state will make the state space very large. Often, there will be some function g for which g(h) is a sufficient statistic for all possible states h. We ignore this possibility here.
Agent i's payment is equal to its reported value discounted by the expected marginal value that it will contribute to the system, as determined by the MDP-value function for the policy in its arrival period. The incentive compatibility of the online VCG mechanism requires that the MDP satisfies stalling, which requires that the expected value from the optimal policy in every state in which an agent arrives is at least the expected value from following the optimal action in that state as though the agent had never arrived and then returning to the optimal policy. Clearly, property Kt(ht) ⊇ Kt(ht \ θi) in any period t in which θi has just arrived is sufficient for stalling. Stalling is satisfied whenever an agent's arrival does not force a change in action on a policy.
Theorem 1 (Parkes & Singh [6]) An online VCG mechanism, Mvcg = (Θ; π*, pvcg), based on an optimal policy π* for a correct MDP model that satisfies stalling is Bayesian-Nash incentive compatible and implements the optimal MDP policy.
3 Solving Very Large MDPs Approximately
From Equation 1, it can be seen that making outcome and payment decisions at any point in time in an online VCG mechanism does not require a global value function or a global policy. Unlike most methods for approximately solving MDPs, which compute global approximations, the sparse-sampling methods of Kearns et al. [3] compute approximate values and actions for a single state at a time. Furthermore, sparse-sampling methods provide approximation guarantees that will be important to establish equilibrium properties: they can compute an ε-approximation to the optimal value and action in a given state in time independent of the size of the state space (though polynomial in 1/ε and exponential in the time horizon). Thus, sparse-sampling methods are particularly suited to approximating online VCG and we adopt them here.
The sparse-sampling algorithm uses the MDP model Mf as a generative model, i.e., as a black box from which a sample of the next-state and reward distributions for any given state-action pair can be obtained. Given a state s and an approximation parameter ε, it computes an ε-accurate estimate of the optimal value for s as follows. We make the parameterization on ε explicit by writing sparse-sampling(ε). The algorithm builds out a depth-T sampled tree in which each node is a state and each node's children are obtained by sampling each action in that state m times (where m is chosen to guarantee an ε-approximation to the optimal value of s), and each edge is labeled with the sampled reward for that transition. The algorithm computes estimates of the optimal value for nodes in the tree working backwards from the leaves as follows. The leaf nodes have zero value. The value of a node is the maximum over the values for all actions in that node. The value of an action in a node is the summed value of the m rewards on the m outgoing edges for that action plus the summed value of the m children of that node, averaged over the m samples. The estimated optimal value of state s is the value of the root node of the tree. The estimated optimal action in state s is the action that leads to the largest value for the root node in the tree.
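A recursive rendering of the procedure (our own sketch; model.actions and model.sample are assumed interfaces to the generative model, not the authors' code):

def sparse_sample(model, h, t, T, m):
    # Returns (value estimate, greedy action) for state h at period t.
    if t > T:
        return 0.0, None
    best_v, best_a = float("-inf"), None
    for a in model.actions(h):
        total = 0.0
        for _ in range(m):                     # m samples per action
            r, h2 = model.sample(h, a)         # sampled reward and next state
            v2, _ = sparse_sample(model, h2, t + 1, T, m)
            total += r + v2
        if total / m > best_v:
            best_v, best_a = total / m, a
    return best_v, best_a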
Lemma 1 (Adapted from Kearns, Mansour & Ng [3]) The sparse-sampling(ε) algorithm, given access to a generative model for any n-action MDP M, takes as input any state s ∈ S and any ε > 0, outputs an action, and satisfies the following two conditions:
• (Running Time) The running time of the algorithm is O((nC)^T), where C = f′(n, 1/ε, Rmax, T) and f′ is a polynomial function of the approximation parameter 1/ε, the number of actions n, the largest expected reward in a state Rmax, and the horizon T. In particular, the running time has no dependence on the number of states.
• (Near-Optimality) The value function of the stochastic policy implemented by the sparse-sampling(ε) algorithm, denoted V^ss, satisfies |V*(s) − V^ss(s)| ≤ ε simultaneously for all states s ∈ S.
It is straightforward to derive the following corollary from the proof of Lemma 1 in [3].
Corollary 1 The value function computed by the sparse-sampling(ε) algorithm, denoted V̂^ss, is near-optimal in expectation, i.e., |V*(s) − E{V̂^ss(s)}| ≤ ε simultaneously for all states s ∈ S, where the expectation is over the randomness introduced by the sparse-sampling(ε) algorithm.
4 Approximately Efficient Online Mechanism Design
In this section, we define an approximate online VCG mechanism and consider the effect on incentives of substituting the sparse-sampling(ε) algorithm into the online VCG mechanism. We model agents as indifferent between decisions that differ by at most a utility of ε > 0, and consider an approximate Bayesian-Nash equilibrium. Let vi(θ; π) denote the final value to agent i after reports θ given policy π.
Definition 3 (approximate BNE) Mechanism Mvcg = (Θ, π, pvcg) is ε-Bayesian-Nash incentive compatible if
E_{θ|θ≤t′}{vi(θ; π) − p^vcg_i(θ; π)} + ε ≥ E_{θ|θ≤t′}{vi(θ−i, θ̂i; π) − p^vcg_i(θ−i, θ̂i; π)}   (2)
where agent i with type θi arrives in period t′, and with the expectation taken over future types given current reports θ≤t′.
In particular, when truth-telling is an ε-BNE we say that the mechanism is ε-BNE incentive compatible, and no agent can improve its expected utility by more than ε > 0, for any type, as long as other agents are bidding truthfully. Equivalently, one can interpret an ε-BNE as an exact equilibrium for agents that face a computational cost of at least ε to compute the exact BNE.
Definition 4 (approximate mechanism) A sparse-sampling(ε) based approximate online VCG mechanism, Mvcg(ε) = (Θ; π̂, p̂vcg), uses the sparse-sampling(ε) algorithm to implement stochastic policy π̂ and collects payment
p̂^vcg_i(θ̂≤mi; π̂) = R^i≤mi(θ̂≤mi; π̂) − [V̂^ss(h_ai(θ̂≤ai; π̂)) − V̂^ss(h_ai(θ̂≤ai\i; π̂))]
from agent i in some period t′ ≥ mi, for commitment period mi.
Our proof of incentive compatibility first demonstrates that an approximate delayed VCG mechanism [1, 6] is ε-BNE. With this, we demonstrate that the expected value of the payments in the approximate online VCG mechanism is within 3ε of the payments in the approximate delayed VCG mechanism. The delayed VCG mechanism makes the same decisions as the online VCG mechanism, except that payments are delayed until the final period and computed as:
p^Dvcg_i(θ̂; π) = R^i≤T(θ̂; π) − [R≤T(θ̂; π) − R≤T(θ̂−i; π)]   (3)
where the discount is computed ex post, once the effect of an agent on the system value is known. In an approximate delayed-VCG mechanism, the role of the sparse-sampling algorithm is to implement an approximate policy, as well as counterfactual policies for the worlds θ−i without each agent i in turn. The total reported reward to agents ≠ i over this counterfactual series of states is used to define the payment to agent i.
Lemma 2 Truthful bidding is an ε-Bayesian-Nash equilibrium in the sparse-sampling(ε) based approximate delayed-VCG mechanism.
Proof: Let π̂ denote the approximate policy computed by the sparse-sampling algorithm. Assume agents ≠ i are truthful. Now, if agent i bids truthfully its expected utility is
E_{θ|θ≤ai}{ vi(θ; π̂) + Σ_{j≠i} R^j≤T(θ; π̂) − Σ_{j≠i} R^j≤T(θ−i; π̂) }   (4)
where the expectation is over both the types yet to be reported and the randomness introduced by the sparse-sampling(ε) algorithm. Substituting R<ai(θ<ai; π̂) + V^ss(h_ai(θ≤ai; π̂)) for the first two terms in Equation (4) and R<ai(θ<ai; π̂) + V^ss(h_ai(θ≤ai\i; π̂)) for the third term, its expected utility is at least
V*(h_ai(θ≤ai; π̂)) − V^ss(h_ai(θ≤ai\i; π̂)) − ε   (5)
because V^ss(h_ai(θ≤ai; π̂)) ≥ V*(h_ai(θ≤ai; π̂)) − ε by Lemma 1. Now, ignore the term R≤T(θ−i; π̂) in Equation (4), which is independent of agent i's bid θ̂i, and consider the maximal expected utility to agent i from some non-truthful bid. The effect of θ̂i on the first two terms is indirect, through a change in the policy for periods ≥ ai. An agent can change the policy only indirectly, by changing the center's view of the state by misreporting its type. By definition, the agent can do no better than selecting optimal policy π*, which is defined to maximize the expected value of the first two terms. Putting this together, the expected utility from θ̂i is at most V*(h_ai(θ≤ai; π̂)) − V^ss(h_ai(θ≤ai\i; π̂)), and at most ε better than that from bidding truthfully.
Theorem 2 Truthful bidding is a 4ε-Bayesian-Nash equilibrium in the sparse-sampling(ε) based approximate online VCG mechanism.
Proof: Assume agents ≠ i bid truthfully, and consider report θ̂i. Clearly, the policy implemented in the approximate online-VCG mechanism is the same as in the delayed-VCG mechanism for all θ̂i. Left to show is that the expected values of the payments are within 3ε for all θ̂i. From this we conclude that the expected utility to agent i in the approximate-VCG mechanism is always within 3ε of that in the approximate delayed-VCG mechanism, and therefore 4ε-BNE by Lemma 2. The expected payment in the approximate online VCG mechanism is
E_{θ|θ≤ai}{R^i≤T(θ̂; π̂)} − [E{V̂^ss(h_ai(θ̂≤ai; π̂))} − E{V̂^ss(h_ai(θ̂≤ai\i; π̂))}]
The value function computed by the sparse-sampling(ε) algorithm is a random variable to agent i at the time of bidding, and the second and third expectations are over the randomness introduced by the sparse-sampling(ε) algorithm. The first term is the same as in the payment in the approximate delayed-VCG mechanism. By Corollary 1, the value function estimated by sparse-sampling(ε) is near-optimal in expectation, and the total of the second two terms is at least V*(h_ai(θ̂≤ai\i; π*)) − V*(h_ai(θ̂≤ai; π*)) − 2ε. Ignoring the first term in p^Dvcg_i, the expected payment in the approximate delayed-VCG mechanism is no more than V*(h_ai(θ̂≤ai\i; π*)) − (V*(h_ai(θ̂≤ai; π*)) − ε) because of the near-optimality of the value function of the stochastic policy (Lemma 1). Putting this together, we have a maximum difference in expected payments of 3ε. Similar analysis yields a maximum difference of 3ε when an upper bound is taken on the payment in the online VCG mechanism and compared with a lower bound on the payment in the delayed mechanism.
Theorem 3 For any parameter ε > 0, the sparse-sampling(ε) based approximate online VCG mechanism has ε-efficiency in a 4ε-BNE.
5 Empirical Evaluation of Approximate Online VCG
The WiFi Problem. The WiFi problem considers a fixed number of channels C with a random number of agents (at most A) that can arrive per period. The time horizon is T = 50. Agents demand a single channel and arrive with a per-unit value, distributed i.i.d. over V = {v1, . . . , vk}, and a duration in the system, distributed i.i.d. over D = {d1, . . . , dl}. The value model requires that any allocation to agent i must be for contiguous periods, and be made while the agent is present (i.e., during periods [t, ai + di], for arrival ai and duration di). An agent's value for an allocation of duration x is vi·x, where vi is its per-unit value. Let dmax denote the maximal possible allocated duration. We define the following MDP components:
State space: We use the following compact, sufficient statistic of history: a resource schedule, a (weakly non-decreasing) vector of length dmax that counts the number of channels available in the current period and the next dmax − 1 periods given previous actions (C channels are available after this); an agent vector of size (k × l) that provides a count of the number of agents present but not allocated, for each possible per-unit value and each possible duration (the duration is automatically decremented when an agent remains in the system for a period without receiving an allocation); and the time remaining until horizon T.
Action space: The policy can postpone an agent's allocation, or allocate an agent to a channel for the remaining duration of the agent's time in the system, if a channel is available and the remaining duration is not greater than dmax.
Payoff function: The reward at a time step is the summed value obtained from all agents
for which an allocation is made in this time step. This is the total value such an agent will
receive before it departs.
Transition probabilities: The change in resource schedule, and in the agent vector that
relates to agents currently present, is deterministic. The random new additions to the agent
vector at each step are unaffected by the actions and also independent of time.
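A compact encoding of this MDP (our own sketch; the field and function names are our choices, and the period advance with random arrivals would be handled by a separate transition step):

from dataclasses import dataclass

@dataclass(frozen=True)
class WifiState:
    schedule: tuple   # channels free in the current and next d_max - 1 periods
    waiting: tuple    # flattened k x l counts of unallocated agent types
    t_left: int       # periods remaining until the horizon T

def allocate(state, v_idx, d_idx, values, durations, l):
    # Allocate one waiting agent of type (v_idx, d_idx) for its remaining
    # duration d; the reward is the agent's total value, collected now.
    d = durations[d_idx]
    sched = list(state.schedule)
    assert all(sched[i] > 0 for i in range(d)), "needs a free channel throughout"
    for i in range(d):
        sched[i] -= 1
    waiting = list(state.waiting)
    waiting[v_idx * l + d_idx] -= 1
    reward = values[v_idx] * d
    return reward, WifiState(tuple(sched), tuple(waiting), state.t_left)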
Mechanisms. In the exact online VCG mechanism we compute an optimal policy, and optimal MDP values, offline using finite-horizon value iteration [7]. In the sparse-sampling(ε) approach, we define a sampling-tree depth L (perhaps < T) and sample each state m times. This limited sampling depth places a lower bound on the best possible approximation accuracy of the sparse-sampling algorithm. We also employ agent pruning, with the agent vector in the state representation pruned to remove dominated agents: consider an agent type with duration d and value v, and remove all but C − N agents of that type, where N is the number of agents that either have remaining duration ≥ d and value > v, or duration < d and value ≥ v. In computing payments we use factoring, and only determine VCG payments once for each type of agent to arrive. We compare performance with a simple fixed-price allocation scheme that, given a particular problem, computes offline a fixed number of periods and a fixed price (agents are queued and offered the price at random as resources become available) that yields the maximum expected total value.
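The dominance rule above translates directly into a filter on the agent vector; a sketch (ours):

def prune_agents(counts, C):
    # counts: dict {(value, duration): n_waiting}. Keep at most C - N agents
    # of each type, where N counts waiting agents of types that dominate
    # (value, duration) in the sense described above.
    kept = {}
    for (v, d), n in counts.items():
        N = sum(n2 for (v2, d2), n2 in counts.items()
                if (d2 >= d and v2 > v) or (d2 < d and v2 >= v))
        kept[(v, d)] = min(n, max(0, C - N))
    return kept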
Results. In the default model, we set C = 5, A = 5, define the set of values V = {1, 2, 3}, define the set of durations D = {1, 2, 6}, with lookahead L = 4 and sampling width m = 6. All results are averaged over at least 10 instances, and experiments were performed on a 3 GHz P4 with 512 MB RAM. Value and revenue are normalized by the total value demanded by all agents, i.e., the value with infinite capacity.² Looking first at economic
properties, Figure 1(A) shows the effect of varying the number of agents from 2 to 12,
comparing the value and revenue between the approximate online VCG mechanism and the
fixed price mechanism. Notice that the MDP method dominates the price-based scheme for
value, with a notable performance improvement over fixed price when demand is neither
very low (no contention) nor very high (lots of competition). Revenue is also generally
better from the MDP-based mechanism than in the fixed price scheme. Fig. 1(B) shows the
similar effect of varying the number of channels from 3 to 10.
Turning now to computational properties, Figure 1 (C) illustrates the effectiveness of
sparse-sampling, and also agent pruning, sampled over 100 instances. The model is very
²This explains why the value appears to drop as we scale up the number of agents: the total available value is increasing but supply remains fixed.
Figure 1: (A) Value and Revenue vs. Number of Agents. (B) Value and Revenue vs. Number of Channels. (C) Effect of Sampling Width. (D) Pruning speed-up.
small: A = 2, C = 2, D = {1, 2, 3}, V = {1, 2, 3} and L = 4, to allow a comparison with the compute time for an optimal policy. The sparse-sampling method is already
running in less than 1% of the time for optimal value-iteration (right-hand axis), with an
accuracy as high as 96% of the optimal. Pruning provides an incremental speed-up, and
actually improves accuracy, presumably by making better use of each sample. Figure 1 (D)
shows that pruning is extremely useful computationally (in comparison with plain sparse-sampling), for the default model parameters and as the number of agents is increased from
2 to 12. Pruning is effective, removing around 50% of agents (summed across all states in
the lookahead tree) at 5 agents.
Acknowledgments. David Parkes was funded by NSF grant IIS-0238147. Satinder Singh was funded by NSF grant CCF 0432027 and by a grant from DARPA's IPTO program.
References
[1] Eric Friedman and David C. Parkes. Pricing WiFi at Starbucks: Issues in online mechanism design. In Fourth ACM Conf. on Electronic Commerce (EC'03), pages 240–241, 2003.
[2] Matthew O. Jackson. Mechanism theory. In The Encyclopedia of Life Support Systems. EOLSS
Publishers, 2000.
[3] Michael Kearns, Yishay Mansour, and Andrew Y. Ng. A sparse sampling algorithm for near-optimal planning in large Markov decision processes. In Proc. 16th Int. Joint Conf. on Artificial Intelligence, pages 1324–1331, 1999. To appear in a special issue of Machine Learning.
[4] Ron Lavi and Noam Nisan. Competitive analysis of incentive compatible on-line auctions. In
Proc. 2nd ACM Conf. on Electronic Commerce (EC-00), 2000.
[5] Martin J. Osborne and Ariel Rubinstein. A Course in Game Theory. MIT Press, 1994.
[6] David C. Parkes and Satinder Singh. An MDP-based approach to Online Mechanism Design. In Proc. 17th Annual Conf. on Neural Information Processing Systems (NIPS'03), 2003.
[7] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, New York, 1994.
Nearly Tight Bounds for the Continuum-Armed
Bandit Problem
Robert Kleinberg*

* M.I.T. CSAIL, Cambridge, MA 02139. Email: rdk@csail.mit.edu. Supported by a Fannie and John Hertz Foundation Fellowship.
Abstract
In the multi-armed bandit problem, an online algorithm must choose
from a set of strategies in a sequence of n trials so as to minimize the
total cost of the chosen strategies. While nearly tight upper and lower
bounds are known in the case when the strategy set is finite, much less is
known when there is an infinite strategy set. Here we consider the case
when the set of strategies is a subset of Rd , and the cost functions are
continuous. In the d = 1 case, we improve on the best-known upper and
lower bounds, closing the gap to a sublogarithmic factor. We also consider the case where d > 1 and the cost functions are convex, adapting a
recent online convex optimization algorithm of Zinkevich to the sparser
feedback model of the multi-armed bandit problem.
1  Introduction
In an online decision problem, an algorithm must choose from among a set of strategies in
each of n consecutive trials so as to minimize the total cost of the chosen strategies. The
costs of strategies are specified by a real-valued function which is defined on the entire
strategy set and which varies over time in a manner initially unknown to the algorithm.
The archetypical online decision problems are the best expert problem, in which the entire cost function is revealed to the algorithm as feedback at the end of each trial, and the multi-armed bandit problem, in which the feedback reveals only the cost of the chosen strategy.
The names of the two problems are derived from the metaphors of combining expert advice
(in the case of the best expert problem) and learning to play the best slot machine in a casino
(in the case of the multi-armed bandit problem).
The applications of online decision problems are too numerous to be listed here. In addition to occupying a central position in online learning theory, algorithms for such problems have been applied in numerous other areas of computer science, such as paging and
caching [6, 14], data structures [7], routing [4, 5], wireless networks [19], and online auction mechanisms [8, 15]. Algorithms for online decision problems are also applied in a
broad range of fields outside computer science, including statistics (sequential design of
experiments [18]), economics (pricing [20]), game theory (adaptive game playing [13]),
and medical decision making (optimal design of clinical trials [10]).
Multi-armed bandit problems have been studied quite thoroughly in the case of a finite
strategy set, and the performance of the optimal algorithm (as a function of n) is known
up to a constant factor [3, 18]. In contrast, much less is known in the case of an infinite
strategy set. In this paper, we consider multi-armed bandit problems with a continuum of
strategies, parameterized by one or more real numbers. In other words, we are studying
online learning problems in which the learner designates a strategy in each time step by
specifying a d-tuple of real numbers (x1 , . . . , xd ); the cost function is then evaluated at
(x1 , . . . , xd ) and this number is reported to the algorithm as feedback. Recent progress on
such problems has been spurred by the discovery of new algorithms (e.g. [4, 9, 16, 21])
as well as compelling applications. Two such applications are online auction mechanism
design [8, 15], in which the strategy space is an interval of feasible prices, and online
oblivious routing [5], in which the strategy space is a flow polytope.
Algorithms for online decision problems are often evaluated in terms of their regret, defined as the difference in expected cost between the sequence of strategies chosen by the
algorithm and the best fixed (i.e. not time-varying) strategy. While tight upper and lower
bounds on the regret of algorithms for the K-armed bandit problem have been known
for many years [3, 18], our knowledge of such bounds for continuum-armed bandit problems is much less satisfactory. For a one-dimensional strategy space, the first algorithm
with sublinear regret appeared in [1], while the first polynomial lower bound on regret appeared in [15]. For Lipschitz-continuous cost functions (the case introduced in [1]), the
best known upper and lower bounds for this problem are currently O(n^{3/4}) and Ω(n^{1/2}), respectively [1, 15], leaving as an open question the problem of determining tight bounds for the regret as a function of n. Here, we solve this open problem by sharpening the upper and lower bounds to O(n^{2/3} log^{1/3}(n)) and Ω(n^{2/3}), respectively, closing the gap to a
sublogarithmic factor. Note that this requires improving the best known algorithm as well
as the lower bound technique.
Recently, and independently, Eric Cope [11] considered a class of cost functions obeying
a more restrictive condition on the shape of the function near its optimum, and for such
functions he obtained a sharper bound on regret than the bound proved here for uniformly
locally Lipschitz cost functions. Cope requires that each cost function C achieves its optimum at a unique point x*, and that there exist constants K₀ > 0 and p ≥ 1 such that for all x, |C(x) − C(x*)| ≥ K₀ ‖x − x*‖^p. For this class of cost functions (which is probably broad enough to capture most cases of practical interest) he proves that the regret of the optimal continuum-armed bandit algorithm is O(n^{1/2}), and that this bound is tight.
For a d-dimensional strategy space, any multi-armed bandit algorithm must suffer regret
depending exponentially on d unless the cost functions are further constrained. (This is
demonstrated by a simple counterexample in which the cost function is identically zero
in all but one orthant of ℝ^d, takes a negative value somewhere in that orthant, and does
not vary over time.) For the best-expert problem, algorithms whose regret is polynomial
in d and sublinear in n are known for the case of cost functions which are constrained to
be linear [16] or convex [21]. In the case of linear cost functions, the relevant algorithm
has been adapted to the multi-armed bandit setting in [4, 9]. Here we adapt the online
convex programming algorithm of [21] to the continuum-armed bandit setting, obtaining
the first known algorithm for this problem to achieve regret depending polynomially on
d and sublinearly on n. A remarkably similar algorithm was discovered independently
and simultaneously by Flaxman, Kalai, and McMahan [12]. Their algorithm and analysis
are superior to ours, requiring fewer smoothness assumptions on the cost functions and
producing a tighter upper bound on regret.
2  Terminology and Conventions
We will assume that a strategy set S ⊆ ℝ^d is given, and that it is a compact subset of ℝ^d. Time steps will be denoted by the numbers {1, 2, . . . , n}. For each t ∈ {1, 2, . . . , n} a cost function C_t : S → ℝ is given. These cost functions must satisfy a continuity property based on the following definition. A function f is uniformly locally Lipschitz with constant L (0 ≤ L < ∞), exponent α (0 < α ≤ 1), and restriction δ (δ > 0) if it is the case that for all u, u′ ∈ S with ‖u − u′‖ ≤ δ,

    |f(u) − f(u′)| ≤ L ‖u − u′‖^α.

(Here, ‖·‖ denotes the Euclidean norm on ℝ^d.) The class of all such functions f will be denoted by ulL(α, L, δ).
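For example (an illustration of ours, not from the paper), f(u) = u^{1/2} on S = [0, 1] belongs to ulL(1/2, 1, δ) for every δ > 0, since |√u − √u′|² = u + u′ − 2√(uu′) ≤ |u − u′|. The Python snippet below spot-checks such a membership claim numerically; the tolerance 1e-12 is ours, guarding floating-point error.

    import random

    def spot_check_ull(f, alpha, L, delta, trials=100_000):
        """Randomized check of |f(u) - f(v)| <= L * |u - v|**alpha whenever
        |u - v| <= delta, for f defined on S = [0, 1]. A sanity check, not a proof."""
        for _ in range(trials):
            u = random.random()
            v = min(1.0, max(0.0, u + random.uniform(-delta, delta)))
            if abs(f(u) - f(v)) > L * abs(u - v) ** alpha + 1e-12:
                return False
        return True

    print(spot_check_ull(lambda u: u ** 0.5, alpha=0.5, L=1.0, delta=1.0))   # True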
We will consider two models which may govern the cost functions. The first of these
is identical with the continuum-armed bandit problem considered in [1], except that [1]
formulates the problem in terms of maximizing reward rather than minimizing cost. The
second model concerns a sequence of cost functions chosen by an oblivious adversary.
Random  The functions C₁, . . . , C_n are independent, identically distributed random samples from a probability distribution on functions C : S → ℝ. The expected cost function C̄ : S → ℝ is defined by C̄(u) = E(C(u)), where C is a random sample from this distribution. This function C̄ is required to belong to ulL(α, L, δ) for some specified α, L, δ. In addition, we assume there exist positive constants σ, s₀ such that if C is a random sample from the given distribution on cost functions, then

    E(e^{s C(u)}) ≤ e^{σ² s² / 2}    for all |s| ≤ s₀, u ∈ S.

The "best strategy" u* is defined to be any element of arg min_{u ∈ S} C̄(u). (This set is non-empty, by the compactness of S.)
Adversarial  The functions C₁, . . . , C_n are a fixed sequence of functions in ulL(α, L, δ), taking values in [0, 1]. The "best strategy" u* is defined to be any element of arg min_{u ∈ S} Σ_{t=1}^n C_t(u). (Again, this set is non-empty by compactness.)
A multi-armed bandit algorithm is a rule for deciding which strategy to play at time t, given the outcomes of the first t − 1 trials. More formally, a deterministic multi-armed bandit algorithm U is a sequence of functions U₁, U₂, . . . such that U_t : (S × ℝ)^{t−1} → S. The interpretation is that U_t(u₁, x₁, u₂, x₂, . . . , u_{t−1}, x_{t−1}) defines the strategy to be chosen at time t if the algorithm's first t − 1 choices were u₁, . . . , u_{t−1} respectively, and their costs were x₁, . . . , x_{t−1} respectively. A randomized multi-armed bandit algorithm is a probability distribution over deterministic multi-armed bandit algorithms. (If the cost functions are random, we will assume their randomness is independent of the algorithm's random choices.) For a randomized multi-armed bandit algorithm, the n-step regret R_n is the expected difference in total cost between the algorithm's chosen strategies u₁, u₂, . . . , u_n and the best strategy u*, i.e.

    R_n = E[ Σ_{t=1}^n C_t(u_t) − C_t(u*) ].

Here, the expectation is over the algorithm's random choices and (in the random-costs model) the randomness of the cost functions.
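To make the definition concrete, here is a small illustration of ours (not from the paper) computing the realized regret of a finished run against the best fixed strategy on a finite grid approximating S:

    def empirical_regret(costs, plays, grid):
        """costs[t] is the cost function of step t+1; plays[t] is the strategy chosen
        then. The best fixed strategy is approximated by searching over `grid`."""
        realized = sum(C(u) for C, u in zip(costs, plays))
        best_fixed = min(sum(C(v) for C in costs) for v in grid)
        return realized - best_fixed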
3  Algorithms for the one-parameter case (d = 1)
The continuum-bandit algorithm presented in [1] is based on computing an estimate Ĉ of the expected cost function C̄ which converges almost surely to C̄ as n → ∞. This estimate is obtained by devoting a small fraction of the time steps (tending to zero as n → ∞) to sampling the random cost functions at an approximately equally-spaced sequence of "design points" in the strategy set, and combining these samples using a kernel estimator. When the algorithm is not sampling a design point, it chooses a strategy which minimizes expected cost according to the current estimate Ĉ. The convergence of Ĉ to C̄ ensures that the average cost in these "exploitation steps" converges to the minimum value of C̄.

A drawback of this approach is its emphasis on estimating the entire function C̄. Since the algorithm's goal is to minimize cost, its estimate of C̄ need only be accurate for strategies where C̄ is near its minimum. Elsewhere a crude estimate of C̄ would have sufficed, since such strategies may safely be ignored by the algorithm. The algorithm in [1] thus uses its sampling steps inefficiently, focusing too much attention on portions of the strategy interval where an accurate estimate of C̄ is unnecessary. We adopt a different approach which eliminates this inefficiency and also leads to a much simpler algorithm. First we discretize the strategy space by constraining the algorithm to choose strategies only from a fixed, finite set of K equally spaced design points {1/K, 2/K, . . . , 1}. (For simplicity, we are assuming here and for the rest of this section that S = [0, 1].) This reduces the continuum-armed bandit problem to a finite-armed bandit problem, and we may apply one of the standard algorithms for such problems. Our continuum-armed bandit algorithm is shown in Figure 1. The outer loop uses a standard doubling technique to transform a non-uniform algorithm to a uniform one. The inner loop requires a subroutine MAB which should implement a finite-armed bandit algorithm appropriate for the cost model under consideration. For example, MAB could be the algorithm UCB1 of [2] in the random case, or the algorithm Exp3 of [3] in the adversarial case. The semantics of MAB are as follows: it is initialized with a finite set of strategies; subsequently it recommends strategies in this set, waits to learn the feedback score for its recommendation, and updates its recommendation when the feedback is received.

The analysis of this algorithm will ensure that its choices have low regret relative to the best design point. The Lipschitz regularity of C̄ guarantees that the best design point performs nearly as well, on average, as the best strategy in S.
ALGORITHM CAB1

    T ← 1
    while T ≤ n
        K ← ⌈(T / log T)^{1/(2α+1)}⌉
        Initialize MAB with strategy set {1/K, 2/K, . . . , 1}.
        for t = T, T + 1, . . . , min(2T − 1, n)
            Get strategy u_t from MAB.
            Play u_t and discover C_t(u_t).
            Feed 1 − C_t(u_t) back to MAB.
        end
        T ← 2T
    end

Figure 1: Algorithm for the one-parameter continuum-armed bandit problem
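To make Figure 1 concrete, the following Python sketch is our own illustration (not part of the paper): it instantiates MAB with a minimal UCB1 [2]; the guard on log T at small T and the rescaling of costs into rewards 1 − C_t(u_t) ∈ [0, 1] are implementation choices we made.

    import math

    def make_ucb1(K):
        """Minimal UCB1 [2] over arms {0, ..., K-1}; rewards must lie in [0, 1]."""
        counts, sums, t = [0] * K, [0.0] * K, 0
        def recommend():
            nonlocal t
            t += 1
            for i in range(K):                     # play every arm once first
                if counts[i] == 0:
                    return i
            return max(range(K), key=lambda i:
                       sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i]))
        def feedback(arm, reward):
            counts[arm] += 1
            sums[arm] += reward
        return recommend, feedback

    def cab1(cost, n, alpha):
        """cost(t, u) returns C_t(u) in [0, 1]; returns the total cost over n steps."""
        total, T, t = 0.0, 1, 1
        while T <= n:
            K = max(1, math.ceil((T / max(math.log(T), 1.0)) ** (1.0 / (2 * alpha + 1))))
            recommend, feedback = make_ucb1(K)
            while t <= min(2 * T - 1, n):
                arm = recommend()
                u = (arm + 1) / K                  # strategy set {1/K, 2/K, ..., 1}
                c = cost(t, u)
                feedback(arm, 1.0 - c)             # MAB maximizes reward = 1 - cost
                total += c
                t += 1
            T *= 2
        return total

For instance, cab1(lambda t, u: min(1.0, (u - 0.3) ** 2 + 0.05 * random.random()), n=10000, alpha=1.0) runs the algorithm on a noisy cost minimized near u = 0.3 (import random first).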
Theorem 3.1. In both the random and adversarial models, the regret of algorithm CAB1 is O(n^{(α+1)/(2α+1)} log^{α/(2α+1)}(n)).

Proof Sketch. Let q = α/(2α+1), so that the regret bound is O(n^{1−q} log^q(n)). It suffices to prove that the regret in the inner loop is O(T^{1−q} log^q(T)); if so, then we may sum this bound over all iterations of the inner loop to get a geometric progression with constant ratio, whose largest term is O(n^{1−q} log^q(n)). So from now on assume that T is fixed and that K is defined as in Figure 1, and for simplicity renumber the T steps in this iteration of the inner loop so that the first is step 1 and the last is step T. Let u* be the best strategy in S, and let u′ be the element of {1/K, 2/K, . . . , 1} nearest to u*. Then |u′ − u*| < 1/K, so using the fact that C̄ ∈ ulL(α, L, δ) (or that (1/T) Σ_{t=1}^T C_t ∈ ulL(α, L, δ) in the adversarial case) we obtain

    E[ Σ_{t=1}^T C_t(u′) − C_t(u*) ] ≤ L T / K^α = O(T^{1−q} log^q(T)).

It remains to show that E[ Σ_{t=1}^T C_t(u_t) − C_t(u′) ] = O(T^{1−q} log^q(T)). For the adversarial model, this follows directly from Corollary 4.2 in [3], which asserts that the regret of Exp3 is O(√(TK log K)). For the random model, a separate argument is required. (The upper bound for the adversarial model doesn't directly imply an upper bound for the random model, since the cost functions are required to take values in [0, 1] in the adversarial model but not in the random model.) For u ∈ {1/K, 2/K, . . . , 1} let Δ(u) = C̄(u) − C̄(u′). Let Δ₀ = √(K log(T)/T), and partition the set {1/K, 2/K, . . . , 1} into two subsets A, B according to whether Δ(u) < Δ₀ or Δ(u) ≥ Δ₀. The time steps in which the algorithm chooses strategies in A contribute at most O(TΔ₀) = O(T^{1−q} log^q(T)) to the regret. For each strategy u ∈ B, one may prove that, with high probability, u is played only O(log(T)/Δ(u)²) times. (This parallels the corresponding proof in [2] and is omitted here. Our hypothesis on the moment generating function of the random variable C(u) is strong enough to imply the exponential tail inequality required in that proof.) This implies that the time steps in which the algorithm chooses strategies in B contribute at most O(K log(T)/Δ₀) = O(T^{1−q} log^q(T)) to the regret, which completes the proof.
4  Lower bounds for the one-parameter case
There are many reasons to expect that Algorithm CAB1 is an inefficient algorithm for the continuum-armed bandit problem. Chief among these is the fact that it treats the strategies {1/K, 2/K, . . . , 1} as an unordered set, ignoring the fact that experiments which sample the cost of one strategy j/K are (at least weakly) predictive of the costs of nearby strategies. In this section we prove that, contrary to this intuition, CAB1 is in fact quite close to the optimal algorithm. Specifically, in the regret bound of Theorem 3.1, the exponent of (α+1)/(2α+1) is the best possible: for any β < (α+1)/(2α+1), no algorithm can achieve regret O(n^β). This lower bound applies to both the randomized and adversarial models.
The lower bound relies on a function f : [0, 1] → [0, 1] defined as the sum of a nested family of "bump functions." Let B be a C^∞ bump function defined on the real line, satisfying 0 ≤ B(x) ≤ 1 for all x, B(x) = 0 if x ≤ 0 or x ≥ 1, and B(x) = 1 if x ∈ [1/3, 2/3]. For an interval [a, b], let B_{[a,b]} denote the bump function B((x − a)/(b − a)), i.e. the function B rescaled and shifted so that its support is [a, b] instead of [0, 1]. Define a random nested sequence of intervals [0, 1] = [a₀, b₀] ⊃ [a₁, b₁] ⊃ . . . as follows: for k > 0, the middle third of [a_{k−1}, b_{k−1}] is subdivided into intervals of width w_k = 3^{−k!}, and [a_k, b_k] is one of these subintervals chosen uniformly at random. Now let

    f(x) = 1/3 + (3^{α−1} − 1/3) Σ_{k=1}^∞ w_k^α B_{[a_k,b_k]}(x).

Finally, define a probability distribution on functions C : [0, 1] → [0, 1] by the following rule: sample λ uniformly at random from the open interval (0, 1) and put C(x) = λ^{f(x)}.
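The construction can be sampled numerically. The sketch below is our own, with two concessions: the C^∞ bump B is replaced by a piecewise-linear stand-in with the same support and plateau, and the sum is truncated at a small depth, since w_k = 3^{−k!} underflows double precision almost immediately.

    import math, random

    def sample_instance(alpha, depth=3):
        """Sample the nested intervals [a_k, b_k] and return the truncated f."""
        a, b = 0.0, 1.0
        intervals = []
        for k in range(1, depth + 1):
            w = 3.0 ** (-math.factorial(k))
            lo, hi = a + (b - a) / 3.0, b - (b - a) / 3.0   # middle third of [a, b]
            m = max(1, round((hi - lo) / w))                # number of width-w subintervals
            j = random.randrange(m)
            a, b = lo + j * w, lo + (j + 1) * w
            intervals.append((a, b, w))

        def bump(x):   # stand-in for B: 0 outside (0, 1), 1 on [1/3, 2/3]
            if x <= 0.0 or x >= 1.0:
                return 0.0
            return min(1.0, 3.0 * min(x, 1.0 - x))

        coeff = 3.0 ** (alpha - 1.0) - 1.0 / 3.0
        def f(x):
            return 1.0 / 3.0 + coeff * sum(
                w ** alpha * bump((x - ak) / (bk - ak)) for ak, bk, w in intervals)
        return f

    f = sample_instance(alpha=1.0)
    lam = random.random()            # one random cost function: C(x) = lam ** f(x)
    C = lambda x: lam ** f(x)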
The relevant technical properties of this construction are summarized in the following lemma.

Lemma 4.1. Let {u*} = ∩_{k=1}^∞ [a_k, b_k]. The function f(x) belongs to ulL(α, L, δ) for some constants L, δ, it takes values in [1/3, 2/3], and it is uniquely maximized at u*. For each λ ∈ (0, 1), the function C(x) = λ^{f(x)} belongs to ulL(α, L, δ) for some constants L, δ, and is uniquely minimized at u*. The same two properties are satisfied by the function C̄(x) = E_{λ∈(0,1)}[λ^{f(x)}] = (1 + f(x))^{−1}.
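As a sanity check of the closed form in Lemma 4.1 (our own illustration, reusing the f sampled above), C̄(x) can be estimated by Monte Carlo over λ:

    import random

    def cbar_estimate(f, x, samples=200_000):
        """Monte Carlo estimate of E[lambda ** f(x)] for lambda ~ Uniform(0, 1)."""
        return sum(random.random() ** f(x) for _ in range(samples)) / samples

    # For any fixed x, cbar_estimate(f, x) should be close to 1.0 / (1.0 + f(x)).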
Theorem 4.2. For any randomized multi-armed bandit algorithm, there exists a probability distribution on cost functions such that for all β < (α+1)/(2α+1), the algorithm's regret {R_n}_{n=1}^∞ in the random model satisfies

    lim sup_{n→∞} R_n / n^β = ∞.

The same lower bound applies in the adversarial model.
Proof sketch. The idea is to prove, using the probabilistic method, that there exists a nested sequence of intervals [0, 1] = [a₀, b₀] ⊃ [a₁, b₁] ⊃ . . ., such that if we use these intervals to define a probability distribution on cost functions C(x) as above, then R_n/n^β diverges as n runs through the sequence n₁, n₂, n₃, . . . defined by n_k = ⌈(1/k)(w_{k−1}/w_k) w_k^{−2α}⌉.

Assume that intervals [a₀, b₀] ⊃ . . . ⊃ [a_{k−1}, b_{k−1}] have already been specified. Subdivide [a_{k−1}, b_{k−1}] into subintervals of width w_k, and suppose [a_k, b_k] is chosen uniformly at random from this set of subintervals. For any u, u′ ∈ [a_{k−1}, b_{k−1}], the Kullback-Leibler distance KL(C(u)‖C(u′)) between the cost distributions at u and u′ is O(w_k^{2α}), and it is equal to zero unless at least one of u, u′ lies in [a_k, b_k]. This means, roughly speaking, that the algorithm must sample strategies in [a_k, b_k] at least w_k^{−2α} times before being able to identify [a_k, b_k] with constant probability. But [a_k, b_k] could be any one of w_{k−1}/w_k possible subintervals, and we don't have enough time to play w_k^{−2α} trials in even a constant fraction of these subintervals before reaching time n_k. Therefore, with constant probability, a constant fraction of the strategies chosen up to time n_k are not located in [a_k, b_k], and each of them contributes Ω(w_k^α) to the regret. This means the expected regret at time n_k is Ω(n_k w_k^α). From this, we obtain the stated lower bound using the fact that

    n_k w_k^α = n_k^{(α+1)/(2α+1) − o(1)}.

Although this proof sketch rests on a much more complicated construction than the lower bound proof for the finite-armed bandit problem given by Auer et al. in [3], one may follow essentially the same series of steps as in their proof to make the sketch given above into a rigorous proof. The only significant technical difference is that we are working with continuous-valued rather than discrete-valued random variables, which necessitates using the differential Kullback-Leibler distance¹ rather than working with the discrete Kullback-Leibler distance as in [3].

¹ Defined by the formula KL(P‖Q) = ∫ log(p(x)/q(x)) dp(x), for probability distributions P, Q with density functions p, q.
5  An online convex optimization algorithm
We turn now to continuum-armed bandit problems with a strategy space of dimension d > 1. As mentioned in the introduction, for any randomized multi-armed bandit algorithm there is a cost function C (with any desired degree of smoothness and boundedness) such that the algorithm's regret is Ω(2^d) when faced with the input sequence C₁ = C₂ = . . . = C_n = C. As a counterpoint to this negative result, we seek interesting classes of cost functions which admit a continuum-armed bandit algorithm whose regret is polynomial in d (and, as always, sublinear in n). A natural candidate is the class of convex, smooth functions on a closed, bounded, convex strategy set S ⊆ ℝ^d, since this is the most
general class of functions for which the corresponding best-expert problem is known to admit an efficient algorithm, namely Zinkevich's greedy projection algorithm [21]. Greedy projection is initialized with a sequence of learning rates η₁ > η₂ > . . .. It selects an arbitrary initial strategy u₁ ∈ S and updates its strategy in each subsequent time step t according to the rule u_{t+1} = P(u_t − η_t ∇C_t(u_t)), where ∇C_t(u_t) is the gradient of C_t at u_t and P : ℝ^d → S is the projection operator which maps each point of ℝ^d to the nearest point of S. (Here, distance is measured according to the Euclidean norm.)
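For reference, here is a minimal sketch (ours, not from [21]) of greedy projection in the full-information model, with projection onto the unit Euclidean ball standing in for a general compact convex S:

    import numpy as np

    def greedy_projection(grad, project, u1, etas, n):
        """Zinkevich's rule: u_{t+1} = P(u_t - eta_t * grad C_t(u_t)).
        grad(t, u) returns the gradient of C_t at u; project maps R^d onto S."""
        u = np.asarray(u1, dtype=float)
        for t in range(1, n + 1):
            u = project(u - etas(t) * grad(t, u))
        return u

    def project_ball(x):                             # P for S = the unit Euclidean ball
        r = np.linalg.norm(x)
        return x if r <= 1.0 else x / r

    # Example: C_t(x) = ||x - z_t||^2 with a slowly drifting target z_t.
    z = lambda t: 0.5 * np.array([np.cos(t / 50.0), np.sin(t / 50.0)])
    u = greedy_projection(grad=lambda t, u: 2.0 * (u - z(t)),
                          project=project_ball,
                          u1=np.zeros(2),
                          etas=lambda t: t ** -0.5,
                          n=200)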
Note that greedy projection is nearly a multi-armed bandit algorithm: if the algorithm's feedback when sampling strategy u_t were the vector ∇C_t(u_t) rather than the number C_t(u_t), it would have all the information required to run greedy projection. To adapt this algorithm to the multi-armed bandit setting, we use the following idea: group the timeline into phases of d + 1 consecutive steps, with a cost function C̄_φ for each phase φ defined by averaging the cost functions at each time step of φ. In each phase use trials at d + 1 affinely independent points of S, located at or near u_t, to estimate the gradient ∇C̄_φ(u_t).²

To describe the algorithm, it helps to assume that the convex set S is in isotropic position in ℝ^d. (If not, we may bring it into isotropic position by an affine transformation of the coordinate system. This does not increase the regret by a factor of more than d².) The algorithm, which we will call simulated greedy projection, works as follows. It is initialized with a sequence of "learning rates" η₁, η₂, . . . and "frame sizes" ε₁, ε₂, . . .. At the beginning of a phase φ, we assume the algorithm has determined a basepoint strategy u_φ. (An arbitrary u_φ may be used in the first phase.) The algorithm chooses a set of (d + 1) affinely independent points {x₀ = u_φ, x₁, x₂, . . . , x_d} with the property that for any y ∈ S, the difference y − x₀ may be expressed as a linear combination of the vectors {x_i − x₀ : 1 ≤ i ≤ d} using coefficients in [−2, 2]. (Such a set is called an approximate barycentric spanner, and may be computed efficiently using an algorithm specified in [4].) We then choose a random bijection σ mapping the time steps in phase φ into the set {0, 1, . . . , d}, and in step t we sample the strategy y_t = u_φ + ε_φ(x_{σ(t)} − u_φ). At the end of the phase we let B_φ denote the unique affine function whose values at the points y_t are equal to the costs observed during the phase at those points. The basepoint for the next phase φ′ is determined according to Zinkevich's update rule u_{φ′} = P(u_φ − η_φ ∇B_φ(u_φ)).³
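The phase structure can be sketched as follows (our own illustration, with two simplifications: the probe set {u_φ, u_φ + ε_φ e₁, . . . , u_φ + ε_φ e_d} replaces a true approximate barycentric spanner, and S is taken to be the unit ball):

    import numpy as np

    def simulated_greedy_projection(cost, d, n):
        """cost(t, x) returns C_t(x). Phases of d+1 steps: probe d+1 affinely
        independent points near the basepoint, fit the affine function B to the
        observed costs, and take a projected gradient step on B."""
        def project(x):                                # P for S = the unit Euclidean ball
            r = np.linalg.norm(x)
            return x if r <= 1.0 else x / r

        u = np.zeros(d)
        t, phase, total = 0, 0, 0.0
        while t < n:
            phase += 1
            eta, eps = phase ** -0.75, phase ** -0.25  # eta_k = k^(-3/4), eps_k = k^(-1/4)
            probes = [u.copy()] + [u + eps * e for e in np.eye(d)]
            observed = np.zeros(d + 1)
            for i in np.random.permutation(d + 1):     # the random bijection sigma
                t += 1
                observed[i] = cost(t, probes[i])
                total += observed[i]
                if t >= n:
                    return u, total                    # time ran out mid-phase
            grad_B = (observed[1:] - observed[0]) / eps  # gradient of the fitted affine B
            u = project(u - eta * grad_B)
        return u, total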
Theorem 5.1. Assume that S is in isotropic position and that the cost functions satisfy |C_t(x)| ≤ 1 for all x ∈ S, 1 ≤ t ≤ n, and that in addition the Hessian matrix of C_t(x) at each point x ∈ S has Frobenius norm bounded above by a constant. If η_k = k^{−3/4} and ε_k = k^{−1/4}, then the regret of the simulated greedy projection algorithm is O(d³ n^{3/4}).
Proof sketch. In each phase φ, let Y_φ = {y₀, . . . , y_d} be the set of points which were sampled, and define the following four functions: C̄_φ, the average of the cost functions in phase φ; ℓ_φ, the linearization of C̄_φ at u_φ, defined by the formula

    ℓ_φ(x) = ∇C̄_φ(u_φ) · (x − u_φ) + C̄_φ(u_φ);

L_φ, the unique affine function which agrees with C̄_φ at each point of Y_φ; and B_φ, the affine function computed by the algorithm at the end of phase φ. The algorithm is simply running greedy projection with respect to the simulated cost functions B_φ, and it consequently satisfies a low-regret bound with respect to those functions. The expected value of B_φ(u) is L_φ(u) for every u. (Proof: both are affine functions, and they agree on every point of Y_φ.) Hence we obtain a low-regret bound with respect to L_φ. To transfer this over to a low-regret bound for the original problem, we need to bound several additional terms: the regret experienced because the algorithm was using u_φ + ε_φ(x_{σ(t)} − u_φ) instead of u_φ, the difference between L_φ(u_φ) and ℓ_φ(u_φ), and the difference between ℓ_φ(u_φ) and C̄_φ(u_φ). In each case, the desired upper bound can be inferred from properties of barycentric spanners, or from the convexity of C̄_φ and the bounds on its first and second derivatives.

² Flaxman, Kalai, and McMahan [12], with characteristic elegance, supply an algorithm which counterintuitively obtains an unbiased estimate of the approximate gradient using only a single sample. Thus they avoid grouping the timeline into phases and improve the algorithm's convergence time by a factor of d.

³ Readers familiar with Kiefer-Wolfowitz stochastic approximation [17] will note the similarity with our algorithm. The random bijection σ (which is unnecessary in the Kiefer-Wolfowitz algorithm) is used here to defend against the oblivious adversary.
References
[1] R. Agrawal. The continuum-armed bandit problem. SIAM J. Control and Optimization, 33:1926-1951, 1995.
[2] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multi-armed bandit problem. Machine Learning, 47:235-256, 2002.
[3] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Proceedings of FOCS 1995.
[4] B. Awerbuch and R. Kleinberg. Near-Optimal Adaptive Routing: Shortest Paths and Geometric Generalizations. In Proceedings of STOC 2004.
[5] N. Bansal, A. Blum, S. Chawla, and A. Meyerson. Online oblivious routing. In Proceedings of SPAA 2003: 44-49.
[6] A. Blum, C. Burch, and A. Kalai. Finely-competitive paging. In Proceedings of FOCS 1999.
[7] A. Blum, S. Chawla, and A. Kalai. Static Optimality and Dynamic Search-Optimality in Lists and Trees. Algorithmica 36(3): 249-260 (2003).
[8] A. Blum, V. Kumar, A. Rudra, and F. Wu. Online learning in online auctions. In Proceedings of SODA 2003.
[9] A. Blum and H. B. McMahan. Online geometric optimization in the bandit setting against an adaptive adversary. In Proceedings of COLT 2004.
[10] D. Berry and L. Pearson. Optimal Designs for Two-Stage Clinical Trials with Dichotomous Responses. Statistics in Medicine 4:487-508, 1985.
[11] E. Cope. Regret and Convergence Bounds for Immediate-Reward Reinforcement Learning with Continuous Action Spaces. Preprint, 2004.
[12] A. Flaxman, A. Kalai, and H. B. McMahan. Online Convex Optimization in the Bandit Setting: Gradient Descent Without a Gradient. To appear in Proceedings of SODA 2005.
[13] Y. Freund and R. Schapire. Adaptive Game Playing Using Multiplicative Weights. Games and Economic Behavior 29:79-103, 1999.
[14] R. Gramacy, M. Warmuth, S. Brandt, and I. Ari. Adaptive Caching by Refetching. In Advances in Neural Information Processing Systems 15, 2003.
[15] R. Kleinberg and T. Leighton. The Value of Knowing a Demand Curve: Bounds on Regret for On-Line Posted-Price Auctions. In Proceedings of FOCS 2003.
[16] A. Kalai and S. Vempala. Efficient algorithms for the online decision problem. In Proceedings of COLT 2003.
[17] J. Kiefer and J. Wolfowitz. Stochastic Estimation of the Maximum of a Regression Function. Annals of Mathematical Statistics 23:462-466, 1952.
[18] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Adv. in Appl. Math. 6:4-22, 1985.
[19] C. Monteleoni and T. Jaakkola. Online Learning of Non-stationary Sequences. In Advances in Neural Information Processing Systems 16, 2004.
[20] M. Rothschild. A Two-Armed Bandit Theory of Market Pricing. Journal of Economic Theory 9:185-202, 1974.
[21] M. Zinkevich. Online Convex Programming and Generalized Infinitesimal Gradient Ascent. In Proceedings of ICML 2003, 928-936.