Time-Sensitive Recommendation From Recurrent User Activities
Nan Du, Yichen Wang, Niao He†, Le Song
College of Computing, Georgia Tech
†H. Milton Stewart School of Industrial & Systems Engineering, Georgia Tech
[email protected], [email protected], [email protected], [email protected]
Abstract
By making personalized suggestions, a recommender system plays a crucial role in improving the engagement of users in modern web services. However, most recommendation algorithms do not explicitly take into account the temporal behavior and the recurrent activities of users. Two central but less explored questions are how to recommend the most desirable item at the right moment, and how to predict the next returning time of a user to a service. To address these questions, we propose a novel framework which connects self-exciting point processes and low-rank models to capture the recurrent temporal patterns in a large collection of user-item consumption pairs. We show that the parameters of the model can be estimated via convex optimization, and furthermore, we develop an efficient algorithm that maintains an $O(1/t)$ convergence rate and scales up to problems with millions of user-item pairs and hundreds of millions of temporal events. Compared to other state-of-the-art methods on both synthetic and real datasets, our model achieves superb predictive performance on the two time-sensitive recommendation tasks. Finally, we point out that our formulation can incorporate extra context information about users, such as profile, textual, and spatial features.
1 Introduction
Delivering personalized user experiences is believed to play a crucial role in the long-term engagement of users with modern web services [26]. For example, making recommendations of proper items at the right moment can make personal assistant services on mainstream mobile platforms more competitive and usable, since people tend to have different activities depending on temporal/spatial contexts such as morning vs. evening and weekdays vs. weekends (see, for example, Figure 1(a)). Unfortunately, most existing recommendation techniques are mainly optimized for predicting users' one-time preference (often denoted by integer ratings) on items, while users' continuously time-varying preferences remain largely underexplored.

Besides, traditional user feedback signals (e.g., user-item ratings, click-through rates, etc.) have increasingly been argued to be ineffective at representing the real engagement of users due to the sparseness and noisiness of the data [26]. The temporal pattern at which users return to a service (item) thus becomes a more relevant metric to evaluate their satisfaction [12]. Furthermore, successfully predicting the returning time not only allows a service to keep track of evolving user preferences, but also helps a service provider to improve its marketing strategies. For most web companies, if we can predict when users will come back next, we can make ad bidding more economical by letting marketers bid on specific time slots instead of blindly bidding on all time slots indiscriminately. In the context of modern electronic health record data, patients may have several diseases with complicated dependencies on each other, as shown at the bottom of Figure 1(a). The occurrence of one disease could trigger the progression of another. Predicting the returning time of a certain disease can effectively help doctors take proactive steps to reduce the potential risks. However, since most models in the literature are particularly optimized for predicting ratings [16, 23, 15, 3, 25, 13, 21],
Figure 1: Time-sensitive recommendation. (a) In the top figure, one wants to predict the most desirable activity at a given time t for a user; in the bottom figure, one wants to predict the returning time to a particular disease for a patient. (b) The sequence of events induced from each user-item pair (u, i) is modeled as a temporal point process along time.
exploring the recurrent temporal dynamics of users' returning behaviors over time becomes more imperative and meaningful than ever before.

Although the aforementioned applications come from different domains, we seek to capture them in a unified framework by addressing the following two related questions: (1) how to recommend the most relevant item at the right moment, and (2) how to accurately predict the next returning time of users to existing services. More specifically, we propose a novel convex formulation of the problems by establishing an underexplored connection between self-exciting point processes and low-rank models. We also develop a new optimization algorithm to solve the low-rank point process estimation problem efficiently. Our algorithm blends proximal gradient and conditional gradient methods, and achieves the optimal $O(1/t)$ convergence rate. As further demonstrated by our numerical experiments, the algorithm scales up to millions of user-item pairs and hundreds of millions of temporal events, and achieves superb predictive performance on the two time-sensitive problems on both synthetic and real datasets. Furthermore, our model can be readily generalized to incorporate other contextual information by making the intensity function explicitly depend on additional spatial, textual, categorical, and user profile information.

Related Work. The recent work of Kapoor et al. [12, 11] is most related to our approach. They attempt to predict the returning time for a music streaming service based on survival analysis [1] and a hidden semi-Markov model. Although these methods explicitly consider the temporal dynamics of user-item pairs, a major limitation is that the models cannot generalize to recommend any new item at a future time, which is a crucial difference compared to our approach. Moreover, survival analysis is often suited to modeling a single terminal event [1], such as infection or death, by assuming that the inter-event times are independent. However, in many cases this assumption might not hold.
2 Background on Temporal Point Processes
This section introduces necessary concepts from the theory of temporal point processes [4, 5, 6]. A temporal point process is a random process whose realization is a sequence of events $\{t_i\}$, with $t_i \in \mathbb{R}_+$ and $i \in \mathbb{Z}_+$, abstracted as points on the time line. Let the history $\mathcal{T}$ be the list of event times $\{t_1, t_2, \ldots, t_n\}$ up to but not including the current time $t$. An important way to characterize temporal point processes is via the conditional intensity function, which is the stochastic model for the next event time given all previous events. Within a small window $[t, t + dt)$, $\lambda(t)\,dt = \mathbb{P}\{\text{event in } [t, t + dt) \mid \mathcal{T}\}$ is the probability for the occurrence of a new event given the history $\mathcal{T}$.

The functional form of the intensity $\lambda(t)$ is designed to capture the phenomena of interest [1]. For instance, a homogeneous Poisson process has a constant intensity over time, i.e., $\lambda(t) = \lambda_0 > 0$, which is independent of the history $\mathcal{T}$; the inter-event gap thus follows an exponential distribution with mean $1/\lambda_0$. Alternatively, for an inhomogeneous Poisson process, the intensity function is also assumed to be independent of the history $\mathcal{T}$ but can be a simple function of time, i.e., $\lambda(t) = g(t) > 0$. Given a sequence of events $\mathcal{T} = \{t_1, \ldots, t_n\}$ and any $t > t_n$, we characterize the conditional probability that no event happens during $[t_n, t)$ and the conditional density $f(t|\mathcal{T})$ that an event occurs at time $t$ as $S(t|\mathcal{T}) = \exp\left(-\int_{t_n}^{t} \lambda(\tau)\,d\tau\right)$ and $f(t|\mathcal{T}) = \lambda(t)\,S(t|\mathcal{T})$ [1]. Then, given a sequence of events $\mathcal{T} = \{t_1, \ldots, t_n\}$, we express its likelihood as
$$\ell(\{t_1, \ldots, t_n\}) = \prod_{t_i \in \mathcal{T}} \lambda(t_i) \cdot \exp\left(-\int_{0}^{T} \lambda(\tau)\,d\tau\right). \qquad (1)$$

3 Low Rank Hawkes Processes
In this section, we present our model in terms of low-rank self-exciting Hawkes processes, discuss its
possible extensions and provide solutions to our proposed time-sensitive recommendation problems.
3.1 Modeling Recurrent User Activities with Hawkes Processes
Figure 1(b) highlights the basic setting of our model. For each observed user-item pair $(u, i)$, we model the occurrences of user $u$'s past consumption events on item $i$ as a self-exciting Hawkes process [10] with the intensity
$$\lambda(t) = \lambda_0 + \alpha \sum_{t_i \in \mathcal{T}} \gamma(t, t_i), \qquad (2)$$
where $\gamma(t, t_i) > 0$ is the triggering kernel capturing temporal dependencies, $\alpha > 0$ scales the magnitude of the influence of each past event, $\lambda_0 > 0$ is a baseline intensity, and the summation of the kernel terms is history-dependent and thus a stochastic process by itself.
We have a twofold rationale behind this modeling choice. First, the baseline intensity $\lambda_0$ captures users' inherent and long-term preferences for items, regardless of the history. Second, the triggering kernel $\gamma(t, t_i)$ quantifies how the influence of each past event evolves over time, which makes the intensity function depend on the history $\mathcal{T}$. Thus, a Hawkes process is essentially a conditional Poisson process [14] in the sense that, conditioned on the history $\mathcal{T}$, the Hawkes process is a Poisson process formed by the superposition of a background homogeneous Poisson process with intensity $\lambda_0$ and a set of inhomogeneous Poisson processes with intensities $\gamma(t, t_i)$. However, because events in the past can affect the occurrence of events in the future, the Hawkes process is in general more expressive than a Poisson process, which makes it particularly useful for modeling repeated activities by keeping a balance between the long- and short-term aspects of users' preferences.
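To make the intensity in (2) concrete, the following is a minimal sketch of evaluating it at a single time point, assuming the exponential triggering kernel introduced later in Section 3.3 with an illustrative bandwidth $\sigma$; all parameter values below are hypothetical.

```python
import numpy as np

def hawkes_intensity(t, history, lam0, alpha, sigma=1.0):
    """Evaluate Eq. (2): lambda(t) = lam0 + alpha * sum_i gamma(t, t_i),
    with the exponential kernel gamma(t, t_i) = exp(-(t - t_i) / sigma).
    `history` holds past event times; only events before t contribute."""
    past = np.asarray([ti for ti in history if ti < t])
    return lam0 + alpha * np.sum(np.exp(-(t - past) / sigma))

# Three past events raise the rate above the baseline lam0 = 0.1.
print(hawkes_intensity(5.0, [1.0, 2.5, 4.0], lam0=0.1, alpha=0.8))
```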
3.2 Transferring Knowledge with Low Rank Models
So far, we have shown how to model a sequence of events from a single user-item pair. Since we cannot observe the events from all user-item pairs, the next step is to transfer the learned knowledge to unobserved pairs. Given $m$ users and $n$ items, we represent the intensity function between user $u$ and item $i$ as
$$\lambda^{u,i}(t) = \lambda_0^{u,i} + \alpha_{u,i} \sum_{t_j^{u,i} \in \mathcal{T}^{u,i}} \gamma(t, t_j^{u,i}),$$
where $\lambda_0^{u,i}$ and $\alpha_{u,i}$ are the $(u,i)$-th entries of the $m$-by-$n$ non-negative base intensity matrix $\Lambda_0$ and the self-exciting matrix $A$, respectively. However, the two matrices of coefficients $\Lambda_0$ and $A$ contain too many parameters. Since it is often believed that users' behaviors and items' attributes can be categorized into a limited number of prototypical types, we assume that $\Lambda_0$ and $A$ have low-rank structures; that is, the nuclear norms of these parameter matrices are small, $\|\Lambda_0\|_* \leqslant \theta$ and $\|A\|_* \leqslant \theta'$. Some researchers also explicitly assume that the two matrices factorize into products of low-rank factors. Here we adopt the above nuclear norm constraints in order to obtain convex parameter estimation procedures later.
3.3 Triggering Kernel Parametrization and Extensions
Because the triggering kernel is only required to be nonnegative and bounded, the feature vector $\psi^{u,i}$ in (3) often has an analytic form when $\gamma(t, t_j^{u,i})$ belongs to one of many flexible parametric families, such as the Weibull and log-logistic distributions [1]. In the simplest case, $\gamma(t, t_j^{u,i})$ takes the exponential form $\gamma(t, t_j^{u,i}) = \exp(-(t - t_j^{u,i})/\sigma)$. Alternatively, we can make the intensity function $\lambda^{u,i}(t)$ depend on other additional context information associated with each event. For instance, we can make the base intensity $\Lambda_0$ depend on user profiles and item contents [9, 7]. We might also extend $\Lambda_0$ and $A$ into tensors to incorporate location information. Furthermore, we can even learn the triggering kernel directly using nonparametric methods [8, 30]. Without loss of generality, we stick with the exponential form in later sections.
3.4 Time-Sensitive Recommendation
Once we have learned $\Lambda_0$ and $A$, we are ready to solve our proposed problems as follows:
(a) Item recommendation. At any given time $t$, for each user-item pair $(u, i)$, the intensity function $\lambda^{u,i}(t)$ indicates the tendency of user $u$ to consume item $i$ at time $t$. For each user $u$, we therefore recommend the proper items by the following procedure (see the sketch below):
1. Calculate $\lambda^{u,i}(t)$ for each item $i$.
2. Sort the items in descending order of $\lambda^{u,i}(t)$.
3. Return the top-$k$ items.
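A minimal sketch of this ranking procedure, assuming the exponential kernel; `histories`, `lam0_row`, and `alpha_row` are hypothetical containers for user $u$'s per-item event histories and the corresponding rows of $\Lambda_0$ and $A$.

```python
import numpy as np

def recommend_top_k(t, histories, lam0_row, alpha_row, k=10, sigma=1.0):
    """Steps 1-3: compute lambda_{u,i}(t) for every item i of one user u,
    sort in descending order of intensity, and return the top-k items."""
    scores = np.array([
        lam0_row[i] + alpha_row[i]
        * np.sum(np.exp(-(t - np.asarray(h, dtype=float)) / sigma))
        for i, h in enumerate(histories)
    ])
    return np.argsort(-scores)[:k]
```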
(b) Returning-time prediction. For each user-item pair $(u, i)$, the intensity function $\lambda^{u,i}(t)$ dominates the point patterns along time. Given the history $\mathcal{T}^{u,i} = \{t_1, t_2, \ldots, t_n\}$, we calculate the density of the next event time by $f(t|\mathcal{T}^{u,i}) = \lambda^{u,i}(t) \exp\left(-\int_{t_n}^{t} \lambda^{u,i}(\tau)\,d\tau\right)$, so we can use its expectation to predict the next event. Unfortunately, this expectation often has no analytic form due to the complexity of $\lambda^{u,i}(t)$ for a Hawkes process, so we approximate the returning time as follows (a sketch is given below):
1. Draw samples $t_{n+1}^1, \ldots, t_{n+1}^m \sim f(t|\mathcal{T}^{u,i})$ by Ogata's thinning algorithm [19].
2. Estimate the returning time by the sample average $\frac{1}{m}\sum_{i=1}^{m} t_{n+1}^i$.
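Below is a minimal sketch of the two steps, assuming the exponential kernel. Because the intensity decays monotonically between events, its current value upper-bounds it on the remaining horizon, which makes the thinning bound easy to maintain; the parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_next_event(history, lam0, alpha, sigma=1.0):
    """Step 1: draw one sample t_{n+1} ~ f(t | T) by Ogata's thinning [19]."""
    hist = np.asarray(history, dtype=float)

    def intensity(s):
        return lam0 + alpha * np.sum(np.exp(-(s - hist) / sigma))

    t = hist[-1]
    while True:
        lam_bar = intensity(t)               # valid bound on [t, infinity)
        t += rng.exponential(1.0 / lam_bar)  # propose the next candidate
        if rng.uniform() <= intensity(t) / lam_bar:
            return t                         # accept with prob lambda/lam_bar

def predict_returning_time(history, lam0, alpha, m=1000, sigma=1.0):
    """Step 2: approximate E[t_{n+1} | T] by the average of m samples."""
    return np.mean([sample_next_event(history, lam0, alpha, sigma)
                    for _ in range(m)])

print(predict_returning_time([1.0, 2.5, 4.0], lam0=0.1, alpha=0.5))
```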
4 Parameter Estimation
Having presented our model, in this section, we develop a new algorithm which blends proximal
gradient and conditional gradient methods to learn the model efficiently.
4.1 Convex Formulation
Let $\mathcal{T}^{u,i}$ be the set of events induced between $u$ and $i$. We express the log-likelihood of observing each sequence $\mathcal{T}^{u,i}$ based on Equation (1) as
$$\ell(\mathcal{T}^{u,i}|\Lambda_0, A) = \sum_{t_j^{u,i} \in \mathcal{T}^{u,i}} \log\left(\mathbf{w}_{u,i}^{\top} \boldsymbol{\phi}_j^{u,i}\right) - \mathbf{w}_{u,i}^{\top} \boldsymbol{\psi}^{u,i}, \qquad (3)$$
where $\mathbf{w}_{u,i} = (\Lambda_0(u,i), A(u,i))^{\top}$, $\boldsymbol{\phi}_j^{u,i} = \left(1, \sum_{t_k^{u,i} < t_j^{u,i}} \gamma(t_j^{u,i}, t_k^{u,i})\right)^{\top}$, and $\boldsymbol{\psi}^{u,i} = \left(T, \sum_{t_j^{u,i} \in \mathcal{T}^{u,i}} \int_{t_j^{u,i}}^{T} \gamma(t, t_j^{u,i})\,dt\right)^{\top}$. When $\gamma(t, t_j^{u,i})$ is the exponential kernel, $\boldsymbol{\psi}^{u,i}$ can be expressed as $\boldsymbol{\psi}^{u,i} = \left(T, \sum_{t_j^{u,i} \in \mathcal{T}^{u,i}} \sigma\left(1 - \exp(-(T - t_j^{u,i})/\sigma)\right)\right)^{\top}$. Then, the log-likelihood of observing all event sequences $\mathcal{O} = \{\mathcal{T}^{u,i}\}_{u,i}$ is simply the sum of the individual terms, $\ell(\mathcal{O}) = \sum_{\mathcal{T}^{u,i} \in \mathcal{O}} \ell(\mathcal{T}^{u,i})$. Finally, we have the following convex formulation:
$$\mathrm{OPT} = \min_{\Lambda_0, A} \; -\frac{1}{|\mathcal{O}|} \sum_{\mathcal{T}^{u,i} \in \mathcal{O}} \ell(\mathcal{T}^{u,i}|\Lambda_0, A) + \lambda \|\Lambda_0\|_* + \beta \|A\|_* \quad \text{subject to } \Lambda_0, A \geqslant 0, \qquad (4)$$
where the matrix nuclear norm $\|\cdot\|_*$, the sum of all singular values, is commonly used as a convex surrogate for the matrix rank function [24]. One off-the-shelf solution to (4) is proposed in [29] based on ADMM. However, the algorithm in [29] requires, at each iteration, a full SVD to compute the proximal operator, which is often prohibitive for large matrices. Alternatively, we might turn to more efficient conditional gradient algorithms [28], which instead require the much cheaper linear minimization oracle. However, the non-negativity constraints in our problem prevent the linear minimization from having a simple analytical solution.
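A minimal sketch of evaluating $\ell(\mathcal{T}^{u,i}|\Lambda_0, A)$ in (3) for a single pair, assuming the exponential kernel so that $\boldsymbol{\psi}^{u,i}$ has the closed form above; `lam0` and `alpha` stand for $\Lambda_0(u,i)$ and $A(u,i)$, and the inputs are illustrative.

```python
import numpy as np

def pair_loglik(events, T, lam0, alpha, sigma=1.0):
    """Eq. (3) for one user-item pair on the window [0, T]:
    sum_j log(w^T phi_j) - w^T psi, with
    phi_j = (1, sum_{t_k < t_j} exp(-(t_j - t_k)/sigma)) and
    psi   = (T, sum_j sigma * (1 - exp(-(T - t_j)/sigma)))."""
    events = np.asarray(events, dtype=float)
    w = np.array([lam0, alpha])
    ll = 0.0
    for j, tj in enumerate(events):
        phi_j = np.array(
            [1.0, np.sum(np.exp(-(tj - events[:j]) / sigma))])
        ll += np.log(w @ phi_j)
    psi = np.array(
        [T, np.sum(sigma * (1.0 - np.exp(-(T - events) / sigma)))])
    return ll - w @ psi

print(pair_loglik([1.0, 2.5, 4.0], T=10.0, lam0=0.1, alpha=0.5))
```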
4.2 Alternative Formulation
The difficulty of directly solving the original formulation (4) stems from the fact that the non-negativity constraints are entangled with the non-smooth nuclear norm penalty. To address this challenge, we approximate (4) using a simple penalty method. Specifically, given $\rho > 0$, we arrive at the formulation (5) by introducing two auxiliary variables $Z_1$ and $Z_2$ with a penalty function, such as the squared Frobenius norm:
$$\widehat{\mathrm{OPT}} = \min_{\Lambda_0, A, Z_1, Z_2} \; -\frac{1}{|\mathcal{O}|} \sum_{\mathcal{T}^{u,i} \in \mathcal{O}} \ell(\mathcal{T}^{u,i}|\Lambda_0, A) + \lambda \|Z_1\|_* + \beta \|Z_2\|_* + \rho \|\Lambda_0 - Z_1\|_F^2 + \rho \|A - Z_2\|_F^2 \quad \text{subject to } \Lambda_0, A \geqslant 0. \qquad (5)$$
Algorithm 1: Learning Hawkes-Recommender
Input: $\mathcal{O} = \{\mathcal{T}^{u,i}\}$, $\rho > 0$
Output: $Y_1 = [\Lambda_0; A]$
Initialize $X_1^0$, set $X_2^0 = X_1^0$ and $Y^0 = X^0$;
for $k = 1, 2, \ldots$ do
  $\theta_k = \frac{2}{k+1}$;
  $U^{k-1} = (1 - \theta_k) Y^{k-1} + \theta_k X^{k-1}$;
  $X_1^k = \mathrm{Prox}_{U^{k-1}}\left(\eta_k \nabla_1 f(U^{k-1})\right)$;
  $X_2^k = \mathrm{LMO}_{\varphi}\left(\nabla_2 f(U^{k-1})\right)$;
  $Y^k = (1 - \theta_k) Y^{k-1} + \theta_k X^k$;
end

Algorithm 2: $\mathrm{Prox}_{U^{k-1}}\left(\eta_k \nabla_1 f(U^{k-1})\right)$
$X_1^k = \left(U^{k-1} - \eta_k \nabla_1 f(U^{k-1})\right)_+$;

Algorithm 3: $\mathrm{LMO}_{\varphi}\left(\nabla_2 f(U^{k-1})\right)$
$(u_1, v_1), (u_2, v_2) \leftarrow$ top singular vector pairs of $-\nabla_2 f(U^{k-1})[Z_1]$ and $-\nabla_2 f(U^{k-1})[Z_2]$;
$X_2^k[Z_1] = u_1 v_1^{\top}$, $X_2^k[Z_2] = u_2 v_2^{\top}$;
find $\xi_1^k$ and $\xi_2^k$ by solving (6);
$X_2^k[Z_1] = \xi_1^k X_2^k[Z_1]$; $X_2^k[Z_2] = \xi_2^k X_2^k[Z_2]$;
We show in Theorem 1 that when $\rho$ is properly chosen, the two formulations lead to the same optimum. See the appendix for the complete proof. More importantly, the new formulation (5) allows us to handle the non-negativity constraints and the nuclear norm regularization terms separately.

Theorem 1. Under the condition $\rho \geqslant \rho^*$, the optimal value $\widehat{\mathrm{OPT}}$ of problem (5) coincides with the optimal value $\mathrm{OPT}$ of the problem (4) of interest, where $\rho^*$ is a problem-dependent threshold,
$$\rho^* = \max \frac{\lambda\left(\|\Lambda_0^*\|_* - \|Z_1^*\|_*\right) + \beta\left(\|A^*\|_* - \|Z_2^*\|_*\right)}{\|\Lambda_0^* - Z_1^*\|_F^2 + \|A^* - Z_2^*\|_F^2}.$$
4.3 Efficient Optimization: Proximal Method Meets Conditional Gradient
Now we are ready to present Algorithm 1 for solving (5) efficiently. Denote $X_1 = [\Lambda_0; A]$, $X_2 = [Z_1; Z_2]$, and $X = [X_1; X_2]$. We use the bracket notation $X_1[\Lambda_0]$, $X_1[A]$, $X_2[Z_1]$, $X_2[Z_2]$ to refer to the respective parts for simplicity. Let
$$f(X) := f(\Lambda_0, A, Z_1, Z_2) = -\frac{1}{|\mathcal{O}|} \sum_{\mathcal{T}^{u,i} \in \mathcal{O}} \ell(\mathcal{T}^{u,i}|\Lambda_0, A) + \rho\|\Lambda_0 - Z_1\|_F^2 + \rho\|A - Z_2\|_F^2.$$
The course of our action is straightforward: at each iteration, we apply a cheap gradient projection for block $X_1$ and a cheap linear minimization for block $X_2$, and maintain three interdependent sequences $\{U^k\}_{k \geqslant 1}$, $\{Y^k\}_{k \geqslant 1}$, and $\{X^k\}_{k \geqslant 1}$ based on the accelerated scheme in [17, 18]. To be more specific, the algorithm consists of two main subroutines:

Proximal Gradient. When updating $X_1$, we compute the associated proximal operator directly, which in our case reduces to the simple projection $X_1^k = \left(U^{k-1} - \eta_k \nabla_1 f(U^{k-1})\right)_+$, where $(\cdot)_+$ simply sets the negative coordinates to zero.
Conditional Gradient. When updating $X_2$, instead of computing the proximal operator, we call the linear minimization oracle ($\mathrm{LMO}_{\varphi}$): $X_2^k[Z_1] = \mathrm{argmin}_{Z_1} \left\{\langle p^k[Z_1], Z_1\rangle + \varphi(Z_1)\right\}$, where $p^k = \nabla_2 f(U^{k-1})$ is the partial derivative with respect to $X_2$ and $\varphi(Z_1) = \lambda\|Z_1\|_*$. We do similar updates for $X_2^k[Z_2]$. The overall performance clearly depends on the efficiency of this LMO, which can be computed efficiently in our case as illustrated in Algorithm 3. Following [27], the linear minimization in our situation requires only: (i) computing $X_2^k[Z_1] = \mathrm{argmin}_{\|Z_1\|_* \leqslant 1} \langle p^k[Z_1], Z_1\rangle$, where the minimizer is readily given by $X_2^k[Z_1] = u_1 v_1^{\top}$, with $u_1, v_1$ the top singular vectors of $-p^k[Z_1]$; and (ii) conducting a line search that produces a scaling factor $\xi_1^k = \mathrm{argmin}_{\xi_1 \geqslant 0}\, h(\xi_1)$, where
$$h(\xi_1) := \rho\left\|Y_1^{k-1}[\Lambda_0] - (1 - \theta_k) Y_2^{k-1}[Z_1] - \theta_k\, \xi_1 X_2^k[Z_1]\right\|_F^2 + \lambda \theta_k \xi_1 + C, \qquad (6)$$
with $C = \lambda(1 - \theta_k)\|Y_2^{k-1}[Z_1]\|_*$. The quadratic problem (6) admits a closed-form solution and thus can be computed efficiently. We repeat the same process for updating $\xi_2^k$ accordingly.
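The two subroutines reduce to a handful of linear algebra operations. Below is a minimal sketch of both, assuming dense gradient blocks and omitting the outer accelerated averaging and the closed-form line search (6).

```python
import numpy as np
from scipy.sparse.linalg import svds

def prox_step(U1, grad1, eta):
    """Algorithm 2: a gradient step on the X1 block followed by
    projection onto the non-negative orthant, (.)_+ ."""
    return np.maximum(U1 - eta * grad1, 0.0)

def lmo_step(grad_Z):
    """Algorithm 3 (one block): the minimizer of <p, Z> over
    ||Z||_* <= 1 is the rank-one matrix u1 v1^T built from the top
    singular pair of -p; the line search (6) then scales it."""
    u1, s1, v1t = svds(-grad_Z, k=1)   # top singular pair
    return u1 @ v1t
```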
4.4 Convergence Analysis
Denote $F(X) = f(X) + \varphi(X_2)$ as the objective in formulation (5), where $X = [X_1; X_2]$. We establish the following convergence result for Algorithm 1 when solving formulation (5). Please refer to the appendix for the complete proof.

Theorem 2. Let $\{Y^k\}$ be the sequence generated by Algorithm 1 with $\theta_k = 2/(k+1)$ and $\eta_k = (\theta_k)^{-1}/L$. Then for $k \geqslant 1$, we have
$$F(Y^k) - \widehat{\mathrm{OPT}} \leqslant \frac{4 L D_1}{k(k+1)} + \frac{2 L D_2}{k+1}, \qquad (7)$$
where $L$ corresponds to the Lipschitz constant of $\nabla f(X)$ and $D_1$ and $D_2$ are problem-dependent constants.

Remark. Let $g(\Lambda_0, A)$ denote the objective in formulation (4), which is the original problem of our interest. By invoking Theorem 1, we further have $g(Y^k[\Lambda_0], Y^k[A]) - \mathrm{OPT} \leqslant \frac{4 L D_1}{k(k+1)} + \frac{2 L D_2}{k+1}$.

The analysis builds upon the recursions of the proximal gradient and conditional gradient methods. As a result, the overall convergence rate comes from two parts, as reflected in (7). Interestingly, one can easily see that for both the proximal and the conditional gradient parts, we achieve the respective optimal convergence rates. When there is no nuclear norm regularization term, the results recover the well-known optimal $O(1/t^2)$ rate achieved by the accelerated proximal gradient method for smooth convex optimization. When there is no non-negativity constraint, the results recover the well-known $O(1/t)$ rate attained by the conditional gradient method for smooth convex minimization. When both the nuclear norm and the non-negativity constraints are present, the proposed algorithm is, to our knowledge, the first of its kind that achieves the best of both worlds, which could be of independent interest.
5 Experiments
We evaluate our algorithm by comparing with state-of-the-art competitors on both synthetic and real datasets. For each user, we randomly pick 20 percent of all the items she has consumed and hold out the entire sequence of events. For each sequence of the other 80 percent of items, we further split it into a pair of training/testing subsequences. For each testing event, we evaluate the predictive accuracy on two tasks:

(a) Item recommendation: suppose the testing event belongs to the user-item pair $(u, i)$. Ideally, item $i$ should rank at the top at the testing moment. We record its predicted rank among all items; a smaller value indicates better performance.

(b) Returning-time prediction: we predict the returning time from the learned intensity function and compute the absolute error with respect to the true time.

We repeat these two evaluations on all testing events. Because the predictive tasks on the entirely held-out sequences are much more challenging, we report the total mean absolute error (MAE) and that specific to the set of entirely held-out sequences separately.
5.1 Competitors
Poisson process is a relaxation of our model that assumes each user-item pair $(u, i)$ has only a constant base intensity $\lambda_0(u, i)$, regardless of the history. For task (a), it gives static ranks regardless of the time. For task (b), it produces an estimate of the average inter-event gap. In many cases, the Poisson process is a hard baseline in that the most popular items often have large base intensities, and recommending popular items is often a strong heuristic.
STiC [11] fits a hidden semi-Markov model to each observed user-item pair. Since it can only make recommendations specific to the few observed items visited before, rather than the large number of new items, we only evaluate its performance on the returning-time prediction task. For the set of entirely held-out sequences, we use the average predicted inter-event time from each observed item as the final prediction.
SVD is the classic matrix factorization model. The implicit user feedback is converted into an explicit rating using the frequency of item consumption [2]. Since it is not designed for predicting the returning time, we report its performance only on the time-sensitive recommendation task as a reference.
Tensor factorization generalizes matrix factorization to include time. We compare with the state-of-the-art method [3], which uses Poisson regression as the loss function to fit the number of events in each discretized time slot and shows better performance than alternatives based on the squared loss [25, 13, 22, 21]. We report the performance obtained by (1) using the parameters fitted only in the last interval, and (2) using the average parameters over all time intervals. We denote these two variants with varying numbers of intervals as Tensor-#-Last and Tensor-#-Avg.
Figure 2: Estimation error (a) by #iterations, (b) by #entries (1,000 events per entry), and (c) by
#events per entry (10,000 entries); (d) scalability by #entries (1,000 events per entry, 500 iterations);
(e) MAE of the predicted ranking; and (f) MAE of the predicted returning time.
5.2 Results
Synthetic data. We generate two 1,024-by-1,024 user-item matrices $\Lambda_0$ and $A$ with rank five as the ground truth. For each user-item pair, we simulate 1,000 events by Ogata's thinning algorithm [19] with an exponential triggering kernel, yielding 100 million events in total. The bandwidth of the triggering kernel is fixed to one. By Theorem 1, it is inefficient to directly estimate the exact threshold value for $\rho$; instead, we tune $\lambda$, $\beta$, and $\rho$ to give the best performance.

How does our algorithm converge? Figure 2(a) shows that it requires only a few hundred iterations to descend to a decent error for both $\Lambda_0$ and $A$, indicating that Algorithm 1 converges very fast. Since the true parameters are low-rank, Figures 2(b-c) verify that only a modest number of observed entries, each of which induces a small number of events (1,000), is needed to achieve good estimation performance. Figure 2(d) further illustrates that Algorithm 1 scales linearly as the training set grows.
What is the predictive performance? Figures 2(e-f) confirm that Algorithm 1 achieves the best predictive performance compared to the other baselines. In Figure 2(e), all temporal methods outperform the static SVD, since this classic baseline does not consider the underlying temporal dynamics of the observed sequences. In contrast, although the Poisson regression also produces static rankings of the items, it is equivalent to recommending the most popular items over time, and this simple heuristic can still give competitive performance. In Figure 2(f), since the occurrence of a new event depends on the whole past history rather than only the last event, the performance of STiC deteriorates vastly. The other tensor methods predict the returning time with information from different time intervals. However, because our method automatically adapts the contribution of each past event to the prediction of the next event, it achieves the best prediction performance overall.
Real data. We also evaluate the proposed method on real datasets. last.fm consists of the music streaming logs between 1,000 users and 3,000 artists; there are around 20,000 observed user-artist pairs with more than one million events in total. tmall.com contains around 100K shopping events between 26,376 users and 2,563 stores. The unit of time for both datasets is one hour. The MIMIC II medical dataset is a collection of de-identified clinical visit records of Intensive Care Unit patients over seven years. We filtered it down to 650 patients and 204 diseases. Each event records the time when a patient was diagnosed with a specific disease; the unit of time is one week. All model parameters $\lambda$, $\beta$, $\rho$, the kernel bandwidth, and the latent rank of the other baselines are tuned to give the best performance.
Does the history help? Because the true temporal dynamics governing the event patterns are unobserved, we first investigate whether our model assumption is reasonable. Our Hawkes model considers the self-exciting effects of past user activities, while the survival analysis applied in [11]
Figure 3: The quantile plots of different fitted processes, and the MAE of predicted rankings and returning times, on last.fm (top), tmall.com (middle), and MIMIC II (bottom), respectively.
assumes i.i.d. inter-event gaps, which might conform to an exponential (Poisson process) or Rayleigh distribution. According to the time-change theorem [6], given a sequence $\mathcal{T} = \{t_1, \ldots, t_n\}$ and a particular point process with intensity $\lambda(t)$, the set of samples $\left\{\int_{t_{i-1}}^{t_i} \lambda(t)\,dt\right\}_{i=1}^{n}$ should conform to a unit-rate exponential distribution if $\mathcal{T}$ is truly sampled from the process. Therefore, we compare the theoretical quantiles of the exponential distribution with the fits of different models to real sequences of (listening/shopping/visiting) events. The closer the slope is to one, the better a model matches the event patterns. Figure 3 clearly shows that our Hawkes model explains the observed data better than the survival-analysis alternatives.
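A minimal sketch of this time-change check for the exponential-kernel Hawkes model, using the closed-form compensator; `lam0`, `alpha`, and `sigma` stand for the fitted parameters of one sequence.

```python
import numpy as np

def time_change_residuals(events, lam0, alpha, sigma=1.0):
    """Residuals r_i = int_{t_{i-1}}^{t_i} lambda(t) dt.  By the
    time-change theorem, they are i.i.d. unit-rate exponential
    when the fitted model matches the data."""
    events = np.asarray(events, dtype=float)

    def compensator(t):  # Lambda(t) = int_0^t lambda(s) ds
        past = events[events < t]
        return lam0 * t + alpha * np.sum(
            sigma * (1.0 - np.exp(-(t - past) / sigma)))

    big_lambda = np.array([compensator(t) for t in events])
    return np.diff(np.concatenate(([0.0], big_lambda)))

# Q-Q check: plot np.sort(r) against the exponential quantiles
# -np.log(1 - (np.arange(1, n + 1) - 0.5) / n); unit slope = good fit.
```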
What is the predictive performance? Finally, we evaluate the prediction accuracy in the 2nd and 3rd columns of Figure 3. Since holding out an entire testing sequence is more challenging, the performance on the Heldout group is a little lower than that on the average Total group. However, across all cases, since the proposed model better captures the temporal dynamics of the observed event sequences, it achieves better performance on both tasks in the end.
6 Conclusions
We propose a novel convex formulation and an efficient learning algorithm to recommend relevant services at any given moment and to predict the next returning time of users to existing services. Empirical evaluations on large synthetic and real data demonstrate its superior scalability and predictive performance. Moreover, our optimization algorithm can be used for solving general nonnegative matrix rank minimization problems with other convex losses under mild assumptions, which may be of independent interest.
Acknowledgments
The research was supported in part by NSF IIS-1116886, NSF/NIH BIGDATA 1R01GM108341,
NSF CAREER IIS-1350983.
References
[1] O. Aalen, O. Borgan, and H. Gjessing. Survival and Event History Analysis: A Process Point of View. Springer, 2008.
[2] L. Baltrunas and X. Amatriain. Towards time-dependent recommendation based on implicit feedback, 2009.
[3] E. C. Chi and T. G. Kolda. On tensors, sparsity, and nonnegative factorizations, 2012.
[4] D. Cox and V. Isham. Point Processes, volume 12. Chapman & Hall/CRC, 1980.
[5] D. Cox and P. Lewis. Multivariate point processes. Selected Statistical Papers of Sir David Cox: Volume 1, Design of Investigations, Statistical Methods and Applications, 1:159, 2006.
[6] D. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes: Volume II: General Theory and Structure, volume 2. Springer, 2007.
[7] N. Du, M. Farajtabar, A. Ahmed, A. J. Smola, and L. Song. Dirichlet-Hawkes processes with applications to clustering continuous-time document streams. In KDD '15, 2015.
[8] N. Du, L. Song, A. Smola, and M. Yuan. Learning networks of heterogeneous influence. In Advances in Neural Information Processing Systems 25, pages 2789-2797, 2012.
[9] N. Du, L. Song, H. Woo, and H. Zha. Uncover topic-sensitive information diffusion networks. In Artificial Intelligence and Statistics (AISTATS), 2013.
[10] A. G. Hawkes. Spectra of some self-exciting and mutually exciting point processes. Biometrika, 58(1):83-90, 1971.
[11] K. Kapoor, K. Subbian, J. Srivastava, and P. Schrater. Just in time recommendations: Modeling the dynamics of boredom in activity streams. In WSDM, pages 233-242, 2015.
[12] K. Kapoor, M. Sun, J. Srivastava, and T. Ye. A hazard based approach to user return time prediction. In KDD '14, pages 1719-1728, 2014.
[13] A. Karatzoglou, X. Amatriain, L. Baltrunas, and N. Oliver. Multiverse recommendation: N-dimensional tensor factorization for context-aware collaborative filtering. In Proceedings of the 4th ACM Conference on Recommender Systems (RecSys), 2010.
[14] J. Kingman. On doubly stochastic Poisson processes. Mathematical Proceedings of the Cambridge Philosophical Society, pages 923-930, 1964.
[15] N. Koenigstein, G. Dror, and Y. Koren. Yahoo! music recommendations: Modeling music ratings with temporal dynamics and item taxonomy. In Proceedings of the Fifth ACM Conference on Recommender Systems, RecSys '11, pages 165-172, 2011.
[16] Y. Koren. Collaborative filtering with temporal dynamics. In Knowledge Discovery and Data Mining (KDD), pages 447-456, 2009.
[17] G. Lan. An optimal method for stochastic composite optimization. Mathematical Programming, 2012.
[18] G. Lan. The complexity of large-scale convex programming under a linear optimization oracle. arXiv preprint arXiv:1309.5550v2, 2014.
[19] Y. Ogata. On Lewis' simulation method for point processes. IEEE Transactions on Information Theory, 27(1):23-31, 1981.
[20] H. Ouyang, N. He, L. Q. Tran, and A. Gray. Stochastic alternating direction method of multipliers. In ICML, 2013.
[21] P. Bhargava, T. Phan, J. Zhou, and J. Lee. Who, what, when, and where: Multi-dimensional collaborative recommendations using tensor factorization on sparse user-generated data. In WWW, 2015.
[22] Y. Wang, R. Chen, J. Ghosh, J. Denny, A. Kho, Y. Chen, B. Malin, and J. Sun. Rubik: Knowledge guided tensor factorization and completion for health data analytics. In KDD, 2015.
[23] S. Rendle. Time-variant factorization models. In Context-Aware Ranking with Factorization Models, volume 330 of Studies in Computational Intelligence, chapter 9, pages 137-153. 2011.
[24] S. Sastry. Some NP-complete problems in linear algebra. Honors Projects, 1990.
[25] L. Xiong, X. Chen, T.-K. Huang, J. G. Schneider, and J. G. Carbonell. Temporal collaborative filtering with Bayesian probabilistic tensor factorization. In SDM, pages 211-222. SIAM, 2010.
[26] X. Yi, L. Hong, E. Zhong, N. N. Liu, and S. Rajan. Beyond clicks: Dwell time for personalization. In Proceedings of the 8th ACM Conference on Recommender Systems, RecSys '14, pages 113-120, 2014.
[27] A. W. Yu, W. Ma, Y. Yu, J. G. Carbonell, and S. Sra. Efficient structured matrix rank minimization. In NIPS, 2014.
[28] Z. Harchaoui, A. Juditsky, and A. Nemirovski. Conditional gradient algorithms for norm-regularized smooth convex optimization. Mathematical Programming, 2013.
[29] K. Zhou, H. Zha, and L. Song. Learning social infectivity in sparse low-rank networks using multi-dimensional Hawkes processes. In Artificial Intelligence and Statistics (AISTATS), 2013.
[30] K. Zhou, H. Zha, and L. Song. Learning triggering kernels for multi-dimensional Hawkes processes. In International Conference on Machine Learning (ICML), 2013.
A Hybrid Neural Net System for State-of-the-Art Continuous Speech Recognition
Y. Zhao
BBN Systems and Technologies
Cambridge, MA 02138
G. Zavaliagkos
Northeastern University
Boston MA 02115
R. Schwartz
BBN Systems and Technologies
Cambridge, MA 02138
J. Makhoul
BBN Systems and Technologies
Cambridge, MA 02138
Abstract
Until recently, state-of-the-art, large-vocabulary, continuous speech recognition (CSR) has employed Hidden Markov Modeling (HMM) to model speech sounds. In an attempt to improve over HMM, we developed a hybrid system that integrates HMM technology with neural networks. We present the concept of a "Segmental Neural Net" (SNN) for phonetic modeling in CSR. By taking into account all the frames of a phonetic segment simultaneously, the SNN overcomes the well-known conditional-independence limitation of HMMs. In several speaker-independent experiments with the DARPA Resource Management corpus, the hybrid system showed a consistent improvement in performance over the baseline HMM system.
1 INTRODUCTION
The current state of the art in continuous speech recognition (CSR) is based on the use of hidden Markov models (HMM) to model phonemes in context. Two main reasons for the popularity of HMMs are their high performance, in terms of recognition accuracy, and their computational efficiency. However, the limitations of HMMs in modeling the speech signal have been known for some time. Two such limitations are (a) the conditional-independence assumption, which prevents a HMM from taking full advantage of the correlation that exists among the frames of a phonetic segment, and (b) the awkwardness with which segmental features can be incorporated into HMM systems. We have developed the concept of Segmental Neural Nets (SNN) to overcome the two HMM limitations just mentioned for phonetic modeling in speech. A segmental neural net is a neural network that attempts to recognize a complete phonetic segment as a single unit, rather than a sequence of conditionally independent frames.

Neural nets are known to require a large amount of computation, especially for training. Also, there is no known efficient search technique for finding the best-scoring segmentation with neural nets in continuous speech. Therefore, we have developed a hybrid SNN/HMM system that is designed to take full advantage of the good properties of both methods. The two methods are integrated through a novel use of the N-best (multiple hypotheses) paradigm developed in conjunction with the BYBLOS system at BBN [1].
2 SEGMENTAL NEURAL NET MODELING
There have been several recent approaches to the use of neural nets in CSR. The SNN differs from these approaches in that it attempts to recognize each phoneme by using all the frames in a phonetic segment simultaneously to perform the recognition. By looking at a whole phonetic segment at once, we are able to take advantage of the correlation that exists among frames of a phonetic segment, thus ameliorating the limitations of HMMs.

Figure 1: The SNN model samples the frames and produces a single segment score.
The structure of a typical SNN is shown in Figure 1. The input to the network is a fixed-length representation of the speech segment, which is scored by the network. If the network was trained to minimize a mean squared error (MSE) or a relative entropy distortion measure, the output of the network will be an estimate of the posterior probability P(C|x) of the phonetic class C given the segment x [2, 3]. This property of the SNN allows a natural extension to CSR: we segment the utterance into phonetic segments and score each of them separately. The score of the utterance is simply the product of the scores of the individual segments.
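A minimal sketch of this scoring rule; `snn_outputs` is a hypothetical list of 53-dimensional posterior vectors, one per segment, and the log scores are summed to avoid numerical underflow on long utterances.

```python
import math

def utterance_log_score(snn_outputs, labels):
    """Product of per-segment posteriors P(C|x), computed in the
    log domain: labels[k] indexes the hypothesized phoneme of
    segment k within its posterior vector snn_outputs[k]."""
    return sum(math.log(post[lab])
               for post, lab in zip(snn_outputs, labels))
```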
The procedure described above requires the availability of some form of phonetic segmentation of the speech. We describe in Section 3 how we use the HMM to obtain likely candidate segmentations. Here, we shall assume that a phonetic segmentation has been made available and that each segment is represented by a sequence of frames of speech features. The actual number of such frames in a phonetic segment is variable. However, for input to the neural network, we need a fixed-length representation. Therefore, we have to convert the variable number of frames in each segment to a fixed number of frames. We have considered two approaches to cope with this problem: time sampling and the Discrete Cosine Transform (DCT).

In the first approach, the requisite time warping is performed by a quasi-linear sampling of the feature vectors comprising the segment down to a fixed number of frames (5 in our system). For example, in a 17-frame phonetic segment, we use frames 1, 5, 9, 13, and 17 as input to the SNN. The second approach uses the Discrete Cosine Transform (DCT), which can represent the frame sequence of a segment as follows. Consider the sequence of cepstral features across a segment as a time sequence and take its DCT. For an m-frame segment, this transform will result in a set of m DCT coefficients for each feature. Truncate this sequence to its first few coefficients (the more coefficients, the more precise the representation). To keep the number of features the same as in the quasi-linear sampling, we use only five coefficients. If the input segment has fewer than five frames, we first interpolate in time so that a five-point DCT is possible. Compared to quasi-linear sampling, the DCT has the advantage of using information from all input frames.
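A minimal sketch of both fixed-length representations, assuming `frames` is a (num_frames x num_features) array; the DCT variant uses SciPy's `dct` and linear interpolation for segments shorter than five frames.

```python
import numpy as np
from scipy.fftpack import dct

def quasi_linear_sample(frames, n_out=5):
    """Time sampling: pick n_out quasi-evenly spaced frames, e.g.
    frames 1, 5, 9, 13, 17 (1-indexed) of a 17-frame segment."""
    idx = np.round(np.linspace(0, len(frames) - 1, n_out)).astype(int)
    return frames[idx]

def dct_representation(frames, n_coef=5):
    """DCT: keep the first n_coef coefficients of each feature's
    trajectory; interpolate first if the segment is too short."""
    if len(frames) < n_coef:
        xs = np.linspace(0, len(frames) - 1, n_coef)
        old = np.arange(len(frames))
        frames = np.column_stack(
            [np.interp(xs, old, col) for col in frames.T])
    return dct(frames, axis=0, norm='ortho')[:n_coef]
```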
Duration: Because of the time-warping function, the SNN score for a segment is independent of the duration of the segment. In order to provide duration information to the SNN, we constructed a simple duration model. For each phoneme, a histogram was made of segment durations in the training data. This histogram was then smoothed by convolving it with a triangular window, and probabilities falling below a floor level were reset to that level. The duration score was multiplied by the neural net score to give an overall segment score.
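A minimal sketch of such a duration model; the window width, floor, and maximum duration are illustrative choices, not the paper's exact settings.

```python
import numpy as np

def duration_log_probs(durations, max_dur=100, win=5, floor=1e-4):
    """Histogram of training durations (in frames) for one phoneme,
    smoothed with a triangular window and floored; returned as log
    probabilities to be added to the log SNN segment score."""
    hist = np.bincount(durations, minlength=max_dur).astype(float)
    tri = np.bartlett(win)                     # triangular window
    smooth = np.convolve(hist, tri / tri.sum(), mode='same')
    probs = np.maximum(smooth / smooth.sum(), floor)
    return np.log(probs / probs.sum())

log_dur = duration_log_probs([3, 4, 4, 5, 6, 6, 7, 9])
```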
3 THE N-BEST RESCORING PARADIGM
Our hybrid system is based on the N-best rescoring paradigm [1], which allows us to design and test the SNN with little regard to the usual problem of searching for the segmentation when dealing with a large-vocabulary speech recognition system.

Figure 2 illustrates the hybrid system. Each utterance is decoded using the BBN BYBLOS system [4]. The decoding is done in two steps: first, N-best recognition is performed, producing a list of the candidate N best-scoring sentence hypotheses. In this stage, a relatively simple HMM is used for computational purposes. The length of the N-best list is chosen to be long enough to almost always include the correct answer. The second step is HMM rescoring, where a more sophisticated HMM is used. The recognition process may stop at this stage, selecting the top-scoring utterance of the list (HMM 1-best output).

To incorporate the SNN in the N-best paradigm, we use the HMM system to generate a segmentation for each N-best hypothesis, and the SNN to generate a score for the hypothesis using the HMM segmentation. The N-best list may be reordered based on the SNN scores alone. In this case, the recognition process stops by selecting the top-scoring utterance of the rescored list (NN 1-best output).
Figure 2: Schematic diagram of the hybrid SNN/HMM system
The last stage in the hybrid system is to combine several scores for each hypothesis, such as the SNN score, the HMM score, the grammar score, and the hypothesized number of words and phonemes. (The number of words and phonemes are included because they serve the same purpose as the word and phoneme insertion penalties in an HMM CSR system.) We form a composite score by taking a linear combination of the individual scores. The linear combination is determined by selecting the weights that give the best performance over a development test set; these weights can be chosen automatically [5]. After we have rescored the N-best list, we can reorder it according to the new composite scores. If the CSR system is required to output just a single hypothesis, the highest-scoring hypothesis is chosen (hybrid SNN/HMM top choice in Figure 2).
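A minimal sketch of the combination and reordering step; the score keys and weight values below are hypothetical placeholders for the scores named above.

```python
def rescore_nbest(hypotheses, weights):
    """Reorder an N-best list by a linear combination of per-
    hypothesis scores; `weights` holds the combination weights
    tuned on a development set."""
    def composite(h):
        return sum(weights[key] * h[key] for key in weights)
    return sorted(hypotheses, key=composite, reverse=True)

nbest = [
    {'snn': -42.0, 'hmm': -50.0, 'grammar': -8.0, 'n_words': 7, 'n_phones': 25},
    {'snn': -44.5, 'hmm': -48.0, 'grammar': -7.5, 'n_words': 8, 'n_phones': 27},
]
weights = {'snn': 1.0, 'hmm': 0.7, 'grammar': 0.3,
           'n_words': -0.5, 'n_phones': -0.1}
top_choice = rescore_nbest(nbest, weights)[0]
```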
4 SNN TRAINING
The training of the phonetic SNNs is done in two steps. In the first training step, we segment all of the training utterances into phonetic segments using the HMM models and the utterance transcriptions. Each segment then serves as a positive example for the SNN output corresponding to the phonetic label of the segment and as a negative example for all the other phonetic SNN outputs (we use a total of 53 phonetic outputs). We call this training method 1-best training.
The SNN is trained using the log-error distortion measure [6], which is an extension of the relative entropy measure to an M-class problem. To ensure that the outputs are in fact probabilities, we use a sigmoidal nonlinearity to restrict their range to [0, 1] and an output normalization layer to make them sum to one. The models are initialized by removing the sigmoids and using the MSE measure. Then we reinstate the sigmoids and proceed with four iterations of a quasi-Newton [7] error minimization algorithm. For the adopted error measure, when the neural net nonlinearity is the usual sigmoid function, there exists a unique minimum for single-layer nets [6].
The 1-best training described has one drawback: the training does not cover all the cases that the network will be required to encounter in the N-best rescoring paradigm. With 1-best training, given the correct segmentation, we train the network to discriminate between correct and incorrect labelings. However, the network will also be used to score N-best hypotheses with incorrect segmentations. Therefore, it is important to train based on the N-best lists, in what we call N-best training. During N-best training, we produce the N-best lists for all of the training sentences, and we then train positively on all the correct hypotheses and negatively on the "misrecognized" parts of the incorrect hypotheses.
4.1 Context Modelling
Some of the largest gains in accuracy for HMM CSR systems have been obtained with the use of context (i.e., the phonetic identity of neighboring segments). Consequently, we implemented a version of the SNN that provides a simple model of left context. In addition to the SNN previously described, which only models a segment's phonetic identity and makes no reference to context, we trained 53 additional left-context networks, each identical in structure to the context-independent SNN. In the recognition process, the segment score is obtained by combining the output of the context-independent SNN with the corresponding output of the SNN that models the left context of the segment. This combination is a weighted average of the two network outputs, where the weights are determined by the number of occurrences of the phoneme in the training data and the number of times the phoneme has its present context in the training data (a count-based sketch is given below).
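The paper does not give the exact weighting formula, so the following sketch uses one plausible count-based interpolation in which the context model gains weight as its training count grows; `tau` is a hypothetical smoothing constant.

```python
def combined_segment_score(p_ci, p_cd, n_context, tau=50.0):
    """Weighted average of the context-independent output p_ci and
    the left-context output p_cd for the same phonetic class; the
    weight on p_cd grows with the count n_context of this
    (phoneme, left-context) pair in the training data."""
    w = n_context / (n_context + tau)
    return (1.0 - w) * p_ci + w * p_cd
```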
4.1.1 Regularization Techniques for Context Models
During neural net training of context models, a decrease of the distortion on the training set often causes an increase of the distortion on the test set. This problem is called overtraining, and it typically occurs when the number of training samples is on the order of the number of model parameters. Regularization provides a class of smoothing techniques to ameliorate the overtraining problem. Instead of minimizing the distortion measure $D$ alone, we minimize the following objective function:
$$E = D + \frac{\lambda_1}{N_d} \sum_i |w_i|^{\eta_1} + \frac{\lambda_2}{N_d} \sum_i |w_i - w_{0,i}|^{\eta_2}, \qquad (1)$$
A Hybrid Neural Net System for State-of-the-Art Continuous Speech Recognition
where Wo is the set of weights corresponding to the context-independent model, Nd
is the number of data points, and >'1, >'2, 711, 712 are smoothing parameters. The first
regularization tenn is used to control the excursion of the weights in general and the other
to control the degree to which the context-dependent model is allowed to deviate from the
corresponding context-independent model (to achieve this first we initialize the contextdependent models with the context-independent model). In our initial experiments, we
used values of >'1 = >'2 = 1.0, 711 = I, 712 =2.
When there are very few training data for a particular context model, the regularization
terms in (1) prevail, constraining the model parameters to remain close to their initial
estimates. The regularization terms are gradually turned off with the presence of more data.
What we accomplish in this way is an automatic mechanism that controls overtraining.
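The body of equation (1) did not survive extraction; the sketch below implements one plausible form consistent with the surrounding description: the distortion plus two penalty terms, each scaled by an inverse power of Nd so that they fade as training data accumulate. The exact Nd-dependence is an assumption.

import numpy as np

def regularized_objective(distortion, w, w0, n_d,
                          lam1=1.0, lam2=1.0, eta1=1.0, eta2=2.0):
    # First term controls the excursion of the weights in general.
    term1 = (lam1 / n_d ** eta1) * np.sum(w ** 2)
    # Second term keeps the context-dependent weights w close to the
    # context-independent initialization w0.
    term2 = (lam2 / n_d ** eta2) * np.sum((w - w0) ** 2)
    return distortion + term1 + term2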
4.2 Elliptical Basis Functions
Our efforts to use multi-layer structures have been rather unsuccessful so far. The best
improvement we got was a mere 5% reduction in error rate over the single-layer performance, but with a 10-fold increase in both the number of parameters and the computation time.
We suspect that our training is getting trapped in bad local minima. Due to the above
considerations, we considered an alternative multi-layer structure, the Elliptical Basis
Function (EBF) network. EBFs are natural extensions of Radial Basis Functions, where
a full covariance matrix is introduced in the basis functions. As many researchers have
suggested, EBF networks provide modelling capabilities that are as powerful as multilayer perceptrons. An advantage of EBF is that there exist well established techniques
for estimating the elliptical basis layer. As a consequence, the problem of training an
EBF network can be reduced to a one-layer problem, i.e., training the second layer only.
Our approach with EBF is to initialize them with Maximum Likelihood (ML). ML training
allows us to use very detailed context models, such as triphones. The next step, which
is not yet implemented, is to either proceed with discriminative NN training, or use a
nonlinearity at the output layer and treat the second layer as a single-layer feedforward
model, or both.
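A sketch of one elliptical basis unit and its ML initialization (mean and full covariance of the frames assigned to the unit); the function names are ours:

import numpy as np

def ebf_log_activation(x, mean, cov_inv, log_det_cov):
    """Log-density of a full-covariance Gaussian basis unit at input x."""
    d = x - mean
    mahalanobis = d @ cov_inv @ d
    return -0.5 * (mahalanobis + log_det_cov + x.size * np.log(2 * np.pi))

def ml_initialize(frames):
    """ML estimate of one unit from the (n_frames x dim) data assigned to it."""
    mean = frames.mean(axis=0)
    cov = np.cov(frames, rowvar=False)
    return mean, np.linalg.inv(cov), np.linalg.slogdet(cov)[1]

With the basis layer fixed this way, only the second layer remains to be trained, which is the one-layer problem referred to above.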
5 EXPERIMENTAL CONDITIONS AND RESULTS
Experiments to test the performance of the hybrid system were performed on the speaker-independent (SI) portion of the DARPA 1000-word Resource Management speech corpus.
The training set consisted of utterances from 109 speakers, 2830 utterances from male
speakers and 1160 utterances from female speakers. We have tested our system with
5 different test sets. The Feb '89 set was used as a cross-validation set for the SNN
system. Feb '89 and Oct '89 were used as development sets whenever the weights for
the combination of two or more models were to be estimated. Feb '91 and the two Sep
'92 sets were used as independent test sets.
Both the NN and the HMM systems had 3 separate models made from male, female, and
combined data. During recognition all 3 models were used to score the utterances, and
the recognition answer was decided by a 3-way gender selection: For each utterance, the
model that produced the highest score was selected. The HMM used was the February
'91 version of the BBN BYBLOS system.
In the experiments, we used SNNs with 53 outputs, each representing one of the phonemes
in our system. The SNN was used to rescore N-best lists of length N = 20. The input
to the net is a fixed number of frames of speech features (5 frames in our system). The
features in each 10-ms frame consist of 16 scalar values: power, power difference, and 14
mel-warped cepstral coefficients. For the EBF, the differences of the cepstral parameters
were used also.
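A sketch of how a variable-length segment might be reduced to the fixed 5-frame, 80-dimensional network input; sampling frames evenly across the segment is our assumption about the warping scheme:

import numpy as np

def segment_to_input(segment_frames):
    """segment_frames: (n_frames x 16) array of per-frame features."""
    idx = np.linspace(0, len(segment_frames) - 1, 5).round().astype(int)
    return segment_frames[idx].reshape(-1)  # 5 frames x 16 features = 80 inputs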
Table 1: SNN development on February '89 test set

    System                       Word Error (%)
    Original SNN (MSE)                13.7
    + Log-Error Criterion             11.6
    + N-Best training                  9.0
    + Left Context                     7.4
    + Regularization                   6.6
    + word, phoneme penalties          5.7
    + EBF                              4.9
Table 1 shows the word error rates at the various stages of development. All the experiments mentioned below used the Feb '89 test set. The original 1-layer SNN was trained
using the I-best training algorithm and the MSE criterion, and gave a word error rate
of 13.7%. The incorporation of the duration and the adoption of the log-error training
criterion both resulted in some improvement, bringing the error rate down to 11.6%.
With N-best training the error rate dropped to 9.0%; adding left context models reduced
the word error rate down to 7.4%. When the context models were trained with the
regularization criterion the error rate dropped to 6.6%. All of the above results were obtained using the mean NN score (NN score divided by the number of segments). When
we used word and phone penalties, the performance was even better, a 5.7% word error
rate. For the same conditions, the performance for the EBF system was 4.9% word error
rate. We should mention that the implementation of training with regularization was not
complete at the time the hybrid system was tested on the September 92 test, so we will
exclude it from the NN results presented below.
The final hybrid system included the HMM, the SNN and EBF models, and Table 2
summarizes its performance (in this table, NN stands for the combination of SNN and
EBF). We notice that with the exception of the Sep '92 test sets the word error of the
HMM was roughly around 3.5% (3.8, 3.7 and 3.4%). For the same test sets, the NN had
a word error slightly higher than 4.0%, and the hybrid NN/HMM system a word error
rate of 2.7%. We are very happy to see the performance of our neural net approaching
the performance of the HMM. It is also worthwhile to mention that the performance of
the hybrid system for Feb '89, Oct '89 and Feb '91 is the best performance reported so
far for these sets.
Special mention has to be made for the Sep '92 test sets. These test sets proved to be
radically different from the previously released RM tests, resulting in almost a doubling of
the HMM word error rate. The deterioration in performance of the hybrid system was
bigger, and the improvement due to the hybrid system was less than 10% (compared
to an improvement of ≈ 25% for the other 3 sets). We have all been baffled by these
unexpected results, and although we are continuously looking for an explanation of this
                       Word Error %
    System      Feb '89   Oct '89   Feb '91   Sep '92
    HMM           3.7       3.8       3.4       6.0
    NN            4.0       4.2       4.1       7.4
    NN+HMM        2.7       2.7       2.7       5.5

Table 2: Hybrid Neural Net/HMM system.
strange behaviour, our efforts have not yet been successful.
6 CONCLUSIONS
We have presented the Segmental Neural Net as a method for phonetic modeling in large
vocabulary CSR systems and have demonstrated that, when combined with a conventional
HMM, the SNN gives a significant improvement over the performance of a state-of-the-art HMM CSR system. The hybrid system is based on the N-best rescoring paradigm
which, by providing the HMM segmentation, drastically reduces the computation for
our segmental models and provides a simple way of combining the best aspects of two
systems. The improvements achieved from the use of a hybrid system vary from less
than 10% to about 25 % reduction in word error rate, depending on the test set used.
References
[1] R. Schwartz and S. Austin, "A Comparison of Several Approximate Algorithms for
Finding Multiple (N-Best) Sentence Hypotheses," IEEE Int. Conf. Acoustics, Speech
and Signal Processing, Toronto, Canada, May 1991, pp. 701-704.
[2] A. Barron, "Statistical properties of artificial neural networks," IEEE Conf. Decision
and Control, Tampa, FL, pp. 280-285, 1989.
[3] H. Gish, "A probabilistic approach to the understanding and training of neural
network classifiers," IEEE Int. Conf. Acoust., Speech, Signal Processing, April 1990.
[4] M. Bates et al., "The BBN/HARC Spoken Language Understanding System," IEEE
Int. Conf. Acoust., Speech, Signal Processing, Minneapolis, MN, Apr. 1993.
[5] M. Ostendorf et al., "Integration of Diverse Recognition Methodologies Through
Reevaluation of N-Best Sentence Hypotheses," Proc. DARPA Speech and Natural
Language Workshop, Pacific Grove, CA, Morgan Kaufmann Publishers, February
1991.
[6] A. El-Jaroudi and J. Makhoul, "A New Error Criterion for Posterior Probability
Estimation with Neural Nets," International Joint Conference on Neural Networks,
San Diego, CA, June 1990, Vol. III, pp. 185-192.
[7] D. Luenberger, Linear and Nonlinear Programming, Addison-Wesley, Massachusetts, 1984.
[8] R. Schwartz et al., "Improved Hidden Markov Modeling of Phonemes for Continuous Speech Recognition," IEEE Int. Conf. Acoustics, Speech and Signal Processing,
San Diego, CA, March 1984, pp. 35.6.1-35.6.4.
5,502 | 5,980 | Parallel Recursive Best-First AND/OR Search for
Exact MAP Inference in Graphical Models
Akihiro Kishimoto
IBM Research, Ireland
Radu Marinescu
IBM Research, Ireland
Adi Botea
IBM Research, Ireland
[email protected]
[email protected]
[email protected]
Abstract
The paper presents and evaluates the power of parallel search for exact MAP
inference in graphical models. We introduce a new parallel shared-memory recursive best-first AND/OR search algorithm, called SPRBFAOO, that explores the
search space in a best-first manner while operating with restricted memory. Our
experiments show that SPRBFAOO is often superior to the current state-of-the-art
sequential AND/OR search approaches, leading to considerable speed-ups (up to
7-fold with 12 threads), especially on hard problem instances.
1 Introduction
Graphical models provide a powerful framework for reasoning with probabilistic information. These
models use graphs to capture conditional independencies between variables, allowing a concise
knowledge representation and efficient graph-based query processing algorithms. Combinatorial
maximization, or maximum a posteriori (MAP) tasks arise in many applications and often can be
efficiently solved by search schemes, especially in the context of AND/OR search spaces that are
sensitive to the underlying problem structure [1].
Recursive best-first AND/OR search (RBFAOO) is a recent yet very powerful scheme for exact MAP
inference that was shown to outperform current state-of-the-art depth-first and best-first methods by
several orders of magnitude on a variety of benchmarks [2]. RBFAOO explores the context minimal
AND/OR search graph associated with a graphical model in a best-first manner (even with nonmonotonic heuristics) while running within restricted memory. RBFAOO extends Recursive Best-First Search (RBFS) [3] to graphical models and thus uses a threshold controlling technique to drive
the search in a depth-first like manner while using the available memory for caching.
Up to now, search-based MAP solvers were developed primarily as sequential search algorithms.
However, parallel, multi-core processing can be a powerful approach to boosting the performance
of a problem solver. Now that multi-core computing systems are ubiquitous, one way to extract
substantial speed-ups from the hardware is to resort to parallel processing. Parallel search has been
successfully employed in a variety of AI areas, including planning [4], satisfiability [5], and game
playing [6, 7]. However, little research has been devoted to solving graphical models in parallel.
The only parallel search scheme for MAP inference in graphical models that we are aware of is
the distributed AND/OR Branch and Bound algorithm (daoopt) [8]. This assumes however a large
and distributed computational grid environment with hundreds of independent and loosely connected
computing systems, without access to a shared memory space for caching and reusing partial results.
Contribution In this paper, we take a radically different approach and explore the potential of
parallel search for MAP tasks in a shared-memory environment which, to our knowledge, has not
been attempted before. We introduce SPRBFAOO, a new parallelization of RBFAOO in sharedmemory environments. SPRBFAOO maintains a single cache table shared among the threads. In
this way, each thread can effectively reuse the search effort performed by others. Since all threads
start from the root of the search graph using the same search strategy, an effective load balancing is
(a) Primal graph
(b) Pseudo tree
(c) Context minimal AND/OR search graph
Figure 1: A simple graphical model and its associated AND/OR search graph.
obtained without using sophisticated schemes, as done in previous work [8]. An extensive empirical
evaluation shows that our new parallel recursive best-first AND/OR search scheme improves considerably over current state-of-the-art sequential AND/OR search approaches, in many cases leading to
considerable speed-ups (up to 7-fold using 12 threads) especially on hard problem instances.
2 Background
Graphical models (e.g., Bayesian Networks [9] or Markov Random Fields [10]) capture the factorization structure of a distribution over a set of variables. A graphical model is a tuple M = ⟨X, D, F⟩, where X = {Xi : i ∈ V} is a set of variables indexed by set V and D = {Di : i ∈ V} is the set of their finite domains of values. F = {ψα : α ∈ F} is a set of discrete positive real-valued local functions defined on subsets of variables, where F ⊆ 2^V is a set of variable subsets. We use α ⊆ V and Xα ⊆ X to indicate the scope of function ψα, i.e., Xα = var(ψα) = {Xi : i ∈ α}. The function scopes yield a primal graph whose vertices are the variables and whose edges connect any two variables that appear in the scope of the same function. The graphical model M defines a factorized probability distribution on X, as follows: P(X) = (1/Z) ∏α∈F ψα(Xα), where the partition function Z normalizes the probability.
An important inference task which appears in many real-world applications is maximum a posteriori (MAP, sometimes called maximum probable explanation or MPE). MAP/MPE finds a complete assignment to the variables that has the highest probability (i.e., a mode of the joint probability), namely: x* = argmaxx ∏α∈F ψα(xα). The task is NP-hard to solve in general [9]. In this paper we focus on solving MAP as a minimization problem by taking the negative logarithm of the local functions to avoid numerical issues, namely: x* = argminx Σα∈F -log(ψα(xα)).
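For concreteness, a small sketch of the resulting min-sum objective; the function representation (scopes plus value tables) is hypothetical:

import math

def assignment_cost(x, functions):
    """Cost of a complete assignment x under the min-sum formulation.

    x:         dict mapping each variable to a value
    functions: iterable of (scope, table) pairs; scope is a tuple of
               variables, table maps value tuples to positive reals.
    """
    return sum(-math.log(table[tuple(x[v] for v in scope)])
               for scope, table in functions)

# The MAP assignment minimizes assignment_cost over all complete x.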
Significant improvements for MAP inference have been achieved by using AND/OR search spaces,
which often capture problem structure far better than standard OR search methods [11]. A pseudo
tree of the primal graph captures the problem decomposition and is used to define the search space.
A pseudo tree of an undirected graph G = (V, E) is a directed rooted tree T = (V, E'), such that
every arc of G not included in E' is a back-arc in T, namely it connects a node in T to an ancestor
in T. The arcs in E' may not all be included in E.
Given a graphical model M = ⟨X, D, F⟩ with a primal graph G and a pseudo tree T of G, the
AND/OR search tree ST has alternating levels of OR nodes corresponding to the variables and AND
nodes corresponding to the values of the OR parent's variable, with edges weighted according to F.
We denote the weight on the edge from OR node n to AND node m by w(n, m). Identical subproblems, identified by their context (the partial instantiation that separates the sub-problem from
the rest of the problem graph), can be merged, yielding an AND/OR search graph [11]. Merging all
context-mergeable nodes yields the context minimal AND/OR search graph, denoted by CT . The
size of CT is exponential in the induced width of G along a depth-first traversal of T [11].
A solution tree T of CT is a subtree such that: (1) it contains the root node of CT ; (2) if an internal
AND node n is in T then all its children are in T ; (3) if an internal OR node n is in T then exactly
one of its children is in T ; (4) every tip node in T (i.e., nodes with no children) is a terminal node.
The cost of a solution tree is the sum of the weights associated with its edges.
2
Each node n in CT is associated with a value v(n) capturing the optimal solution cost of the conditioned sub-problem rooted at n. It was shown that v(n) can be computed recursively based on the
values of n's children: OR nodes by minimization, AND nodes by summation (see also [11]).
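This recursion can be written directly; the sketch below assumes explicit node objects with kind, children and OR-to-AND edge weights, a simplification of the cached search graph the algorithms below actually traverse:

def value(node):
    """v(n): minimize over children at OR nodes, sum them at AND nodes."""
    if not node.children:
        return 0.0                                   # terminal node
    if node.kind == 'OR':
        return min(node.weight[c] + value(c) for c in node.children)
    return sum(value(c) for c in node.children)      # AND node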
Example 1. Figure 1(a) shows the primal graph of a simple graphical model with 5 variables and
7 binary functions. Figure 1(c) displays the context minimal AND/OR search graph based on the
pseudo tree from Figure 1(b) (the contexts are shown next to the pseudo tree nodes). A solution tree
corresponding to the assignment (A = 0, B = 1, C = 1, D = 0, E = 0) is shown in red.
Current state-of-the-art sequential search methods for exact MAP inference perform either depth-first or best-first search. Prominent methods studied and evaluated extensively are the AND/OR
Branch and Bound (AOBB) [1] and Best-First AND/OR Search (AOBF) [12]. More recently, Recursive Best-First AND/OR Search (RBFAOO) [2] has emerged as the best performing algorithm
for exact MAP inference. RBFAOO belongs to the class of RBFS algorithms and employs a local
threshold controlling mechanism to explore the AND/OR search graph in a depth-first like manner
[3, 13]. RBFAOO maintains at each node n a lower-bound q(n) (called q-value) on v(n). During
search, RBFAOO improves and caches in a fixed size table q(n) which is calculated by propagating
back the q-values of n's children. RBFAOO stops when q(r) = v(r) at the root r or it proves that
there is no solution, namely q(r) = v(r) = ∞.
3 Our Parallel Algorithm
Algorithm 1 SPRBFAOO
for all i from 1 to nr CPU cores do
    root.th ← ∞; root.thub ← ∞
    launch tRBFS(root) on a separate thread
wait for threads to finish their work
return optimal cost (e.g., as root's q-value in the cache)
We now describe SPRBFAOO, a parallelization of RBFAOO in shared-memory environments.
SPRBFAOO's threads start from the root and run in parallel, as shown in Algorithm 1. Threads
share one cache table, allowing them to reuse the results of each other. An entry in the cache table,
corresponding to a node n, is a tuple with 6 fields: a q-value q(n), being a lower bound on the
optimal cost of node n; n.solved, a flag indicating whether n is solved optimally; a virtual q-value
vq(n), defined later in this section; a best known solution cost bs(n) for node n; the number of
threads currently working on n; and a lock. When accessing a cache entry, threads lock it temporarily for other threads. The method Ctxt(n) identifies the context of n, which is further used to access
the corresponding cache entry. Besides the cache, shared among threads, each thread will use two
threshold values, n.th and n.thub, for each node n. These are separated from one thread to another.
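A sketch of one such cache entry follows; the field and class names are ours:

import threading

class CacheEntry:
    """Entry of the shared cache table, keyed by a node's context Ctxt(n)."""
    def __init__(self, q):
        self.q = q                     # lower bound q(n) on v(n)
        self.solved = False            # has n been solved optimally?
        self.vq = q                    # virtual q-value, initially q(n)
        self.bs = float('inf')         # best known solution cost for n
        self.nr_threads = 0            # threads currently working on n
        self.lock = threading.Lock()   # held briefly on each access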
Algorithm 2 shows the procedure invoked on each thread. When a thread examines a node n, it
first increments in the cache the number of threads working on node n (line 1). Then it increases
vq(n) by an increment ε, and stores the new value in the cache (line 2). The virtual q-value vq(n) is
initially set to q(n). As more threads work on solving n, vq(n) grows due to the repeated increases
by ε. In effect, vq(n) reflects both the estimated cost of node n (through its q(n) component) and
the number of threads working on n. By computing vq(n) this way, our goal is to dynamically
control the degree to which threads overlap when exploring the search space. When a given area
of the search space is more promising than others, more than one thread is encouraged to work
together within that area. On the other hand, when several areas are roughly equally promising,
threads should diverge and work on different areas. Indeed, in Algorithm 2, the tests on lines 13 and
23 prevent a thread from working on a node n if n.th < vq(n). (Other conditions in these tests are
discussed later.) A large vq(n), which increases the likelihood that n.th < vq(n), may reflect a less
promising node (i.e., large q-value), or many threads working on n, or both. Thus, our strategy is
an automated and dynamic way of tuning the number of threads working on solving a node n as a
function of how promising that node is. We call this the thread coordination mechanism.
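Using the CacheEntry sketch above, the coordination mechanism amounts to a few lines executed when a thread enters and leaves a node (epsilon plays the role of ε in Algorithm 2 below):

def enter_node(entry, epsilon):
    """Lines 1-2 of Algorithm 2: register the thread and inflate vq(n)."""
    with entry.lock:
        entry.nr_threads += 1
        entry.vq += epsilon            # more threads on n => larger vq(n)

def leave_node(entry):
    """Lines 29-31 of Algorithm 2: reset vq(n) if n is solved or idle."""
    with entry.lock:
        entry.nr_threads -= 1
        if entry.solved or entry.nr_threads == 0:
            entry.vq = entry.q         # n no longer appears "crowded"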
Lines 4-7 address the case of nodes with no children, which are either terminal nodes or dead-ends.
In both cases, method Evaluate sets the solved flag to true. The q-value q is set to 0 for terminal
Algorithm 2 Method tRBFS. Handling locks skipped for clarity.
Require: node n
 1: IncrementNrThreadsInCache(Ctxt(n))
 2: IncreaseVQInCache(Ctxt(n), ε)
 3: if n has no children then
 4:     (q, solved) ← Evaluate(n)
 5:     SaveInCache(Ctxt(n), q, solved, q, q)
 6:     DecrementNrThreadsInCache(Ctxt(n))
 7:     return
 8: GenerateChildren(n)
 9: if n is an OR node then
10:     loop
11:         (cbest, vq, vq2, q, bs) ← BestChild(n)
12:         n.thub ← min(n.thub, bs)
13:         if n.th < vq ∨ q ≥ n.thub ∨ n.solved then
14:             break
15:         cbest.th ← min(n.th, vq2 + δ) − w(n, cbest)
16:         cbest.thub ← n.thub − w(n, cbest)
17:         tRBFS(cbest)
19: if n is an AND node then
20:     loop
21:         (q, vq, bs) ← Sum(n)
22:         n.thub ← min(n.thub, bs)
23:         if n.th < vq ∨ q ≥ n.thub ∨ n.solved then
24:             break
25:         (cbest, qcbest, vqcbest) ← UnsolvedChild(n)
26:         cbest.th ← n.th − (vq − vqcbest)
27:         cbest.thub ← n.thub − (q − qcbest)
28:         tRBFS(cbest)
29: if n.solved ∨ NrThreadsCache(Ctxt(n)) = 1 then
30:     vq ← q
31: DecrementNrThreadsInCache(Ctxt(n))
32: SaveInCache(Ctxt(n), q, n.solved, vq, bs)
nodes and to ∞ otherwise. Method SaveInCache takes as argument the context of the node, and four
values to be stored in order in these fields of the corresponding cache entry: q, solved, vq and bs.
Lines 10-17 and 20-28 show respectively the cases when the current node n is an OR node or an
AND node. Both these follow a similar high-level sequence of steps:
• Update vq, q, and bs for n from the children's values (lines 11, 21). Also update n.thub
(lines 12, 22), an upper bound for the best solution cost known for n so far. Methods
BestChild and Sum are shown in Algorithm 3. In these, child node information is either
retrieved from the cache, if available, or initialized with an admissible heuristic function h.
• Perform the backtracking test (lines 13-14 and 23-24). The thread backtracks to n's parent
if at least one of the following conditions holds: th(n) < vq(n), discussed earlier; q(n) ≥
n.thub, i.e., a solution containing n cannot possibly beat the best known solution (we call
this the suboptimality test); or the node is solved. The solved flag is true iff the node cost
has been proven to be optimal, or the node was proven not to have any solution.
• Otherwise, select a successor cbest to continue with (lines 11, 25). At OR nodes n, cbest
is the child with the smallest vq among all children not solved yet (see method BestChild).
At AND nodes, any unsolved child can be chosen. Then, update the thresholds of cbest
(lines 15-16 and 26-27), and recursively process cbest (lines 17, 28; see the sketch after
this list). The threshold n.th is updated in a similar way to RBFAOO, including the
overestimation parameter δ (see [2]). However, there are two key differences. First, we use
vq instead of q, to obtain the thread coordination mechanism presented earlier. Secondly,
we use two thresholds, th and thub, instead of just th, with thub being used to implement
the suboptimality test q(n) ≥ n.thub.
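A sketch of the OR-node step just described (lines 11-17 of Algorithm 2); the child statistics come from BestChild (Algorithm 3), and the node attributes are hypothetical names:

def or_node_step(n, child_stats, delta):
    """child_stats = (cbest, vq, vq2, q, bs) as returned by BestChild.
    Returns the child to recurse on, or None when the thread backtracks."""
    cbest, vq, vq2, q, bs = child_stats
    n.thub = min(n.thub, bs)                        # tighten upper bound
    if n.th < vq or q >= n.thub or n.solved:
        return None                                 # backtracking test
    cbest.th = min(n.th, vq2 + delta) - n.weight[cbest]
    cbest.thub = n.thub - n.weight[cbest]
    return cbest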
When a thread backtracks to n's parent, if either n's solved flag is set, or no other thread currently
examines n, the thread sets vq(n) to q(n) (lines 29-30 in Algorithm 2). In this way, SPRBFAOO
reduces the frequency of the scenarios where n is considered to be less promising. Finally, the thread
decrements in the cache the number of threads working on n (line 31), and saves in the cache the
recalculated vq(n), q(n), bs(n), and the solved flag (line 32).
Theorem 3.1. With an admissible heuristic in use, SPRBFAOO returns optimal solutions.
Proof sketch. SPRBFAOO's bs(r) at the root r is computed from a solution tree; therefore, bs(r) ≥
v(r). Additionally, SPRBFAOO determines solution optimality by using not vq(n) but q(n) saved
in the cache table. By an induction-based discussion similar to Theorem 3.1 in [2], q(n) ≤ v(n)
holds for any q(n) saved in the cache table with admissible h, which indicates q(r) ≤ v(r). When
SPRBFAOO returns a solution, bs(r) = q(r); therefore, bs(r) = q(r) = v(r).
We conjecture that SPRBFAOO is also complete, and leave a more in-depth analysis as future work.
Algorithm 3 Methods BestChild (left) and Sum (right)
Method BestChild:
Require: node n
 1: n.solved ← ⊥ (⊥ stands for false)
 2: initialize vq, vq2, q, bs to ∞
 3: for all ci child of n do
 4:     if Ctxt(ci) in cache then
 5:         (qci, sci, vqci, bsci) ← FromCache(Ctxt(ci))
 6:     else
 7:         (qci, sci, vqci, bsci) ← (h(ci), ⊥, h(ci), ∞)
 8:     qci ← w(n, ci) + qci
 9:     vqci ← w(n, ci) + vqci
10:     bs ← min(bs, w(n, ci) + bsci)
11:     if (qci < q) ∨ (qci = q ∧ ¬n.solved) then
12:         n.solved ← sci; q ← qci
13:     if vqci < vq ∧ ¬sci then
14:         vq2 ← vq; vq ← vqci; cbest ← ci
15:     else if vqci < vq2 ∧ ¬sci then
16:         vq2 ← vqci
17: return (cbest, vq, vq2, q, bs)

Method Sum:
Require: node n
 1: n.solved ← ⊤ (⊤ stands for true)
 2: initialize vq, q, bs to 0
 3: for all ci child of n do
 4:     if Ctxt(ci) in cache then
 5:         (qci, sci, vqci, bsci) ← FromCache(Ctxt(ci))
 6:     else
 7:         (qci, sci, vqci, bsci) ← (h(ci), ⊥, h(ci), ∞)
 8:     q ← q + qci
 9:     vq ← vq + vqci
10:     bs ← bs + bsci
11:     n.solved ← n.solved ∧ sci
12: return (q, vq, bs)
4 Experiments
We evaluate empirically our parallel SPRBFAOO and compare it against sequential RBFAOO and
AOBB. We also considered parallel shared-memory AOBB, denoted by SPAOBB, which uses a master thread to explore centrally the AND/OR search graph up to a certain depth and solves the remaining conditioned sub-problems in parallel using a set of worker threads. The cache table is shared
among the workers so that some workers may reuse partial search results recorded by others. In our
implementation, the search space explored by the master corresponds to the first m variables in the
pseudo tree. The performance of SPAOBB was very poor across all benchmarks due to noticeably
large search overhead as well as poor load balancing, and therefore its results are omitted hereafter.
All competing algorithms (SPRBFAOO, RBFAOO and AOBB) use the pre-compiled mini-bucket
heuristic [1] for guiding the search. The heuristic is controlled by a parameter called i-bound which
allows a trade-off between accuracy and time/space requirements: higher values of i yield a more
accurate heuristic but take more time and space to compute. The search algorithms were also restricted to a static variable ordering obtained as a depth-first traversal of a min-fill pseudo tree [1].
Our benchmark problems1 include three sets of instances from genetic linkage analysis (denoted
pedigree) [14], grid networks and protein side-chain interaction networks (denoted protein)
[15]. In total, we evaluated 21 pedigrees, 32 grids and 240 protein networks. The algorithms were
implemented in C++ (64-bit) and the experiments were run on a 2.6GHz 12-core processor with
80GB of RAM. Following [2], RBFAOO ran with a 10-20GB cache table (134,217,728 entries)
and overestimation parameter ? = 1. However, SPRBFAOO allocated only 95,869,805 entries with
the same amount of memory, due to extra information such as virtual q-values. We set ? = 0.01
throughout the experiments (except those where we vary ?). The time limit was set to 2 hours. We
also record typical ranges of problem specific parameters shown in Table 1 such as the number of
variables (n), maximum domain size (k), induced width (w*), and depth of the pseudo tree (h).
Table 1: Ranges (min-max) of the benchmark problems parameters.

    benchmark      n          k      w*       h
    grid           144-676    2      15-36    48-136
    pedigree       334-1289   3-7    15-33    51-140
    protein        26-177     81     6-16     15-43

Table 2: Number of unsolved problem instances (1 vs 12 cores).

                   grid           pedigree       protein
    method         i=6   i=14     i=6   i=14     i=2   i=4
    RBFAOO          9     5        8     6       41    16
    SPRBFAOO        7     5        7     3       32     9
The primary performance measures reported are the run time and node expansions during search.
When the run time of a solver is discussed, the total CPU time reported in seconds is one metric to
show overall performance. The total CPU time consists of the heuristic compilation time and search
1 http://graphmod.ics.uci.edu
Table 3: Total CPU time (sec) and nodes on grid and pedigree instances. Time limit 2 hours.
[Table 3 body not recoverable from the extraction. Rows: instances 75-22-5 (484,2,30,107), 75-24-5 (576,2,32,116), 90-30-5 (900,2,42,151), pedigree7 (1068,4,28,140), pedigree9 (1119,7,25,123) and pedigree19 (793,5,21,107); for each, the mini-bucket compilation time (mbe) and the time and expanded nodes of AOBB, RBFAOO and SPRBFAOO. Columns: i-bounds i = 8, 10, 12, 14.]

[Figure 2 residue removed. Six log-log scatter panels of total CPU time, RBFAOO (x-axis) vs. SPRBFAOO (y-axis): grids (i = 6), pedigree (i = 6), protein (i = 2) on top; grids (i = 14), pedigree (i = 14), protein (i = 4) on bottom.]
Figure 2: Total CPU time (sec) for RBFAOO vs. SPRBFAOO with smaller (top) and larger (bottom)
i-bounds. Time limit 2 hours. i ∈ {6, 14} for grid and pedigree, i ∈ {2, 4} for protein.
time. SPRBFAOO does not reduce the heuristic compilation time calculated sequentially. Note that
parallelizing the heuristic compilation is an important extension as future work.
Parallel versus sequential search. Table 3 shows detailed results (as total CPU time in seconds and
nodes expanded) for solving grid and pedigree instances using parallel and sequential search.
The columns are indexed by the i-bound. For each problem instance, we also record the mini-bucket
heuristic pre-compilation time, denoted by (mbe), corresponding to each i-bound. SPRBFAOO
ran with 12 threads. We can see that SPRBFAOO improves considerably over RBFAOO across
all reported i-bounds. The benefit of parallel search is more clearly observed at smaller i-bounds
that correspond to relatively weak heuristics. In this case, the heuristic is less likely to guide the
search towards more promising regions of the search space and therefore diversifying the search
via multiple parallel threads is key to achieving significant speed-ups. For example, on grid 75-22-5, SPRBFAOO(6) is almost 6 times faster than RBFAOO(6). Similarly, SPRBFAOO(8) solves the
pedigree7 instance while RBFAOO(8) runs out of time. This is important since on very hard problem
instances it may only be possible to compute rather weak heuristics given limited resources. Notice
6
[Figure 3 residue removed. Two panels over parameter ε ∈ {0.001, 0.01, 0.1}: total search time (sec) on the left and average speed-up on the right, with one curve each for grids, pedigree and protein.]
Figure 3: Total search time (sec) and average speed-up as a function of parameter ε. Time limit 2
hours. i = 14 for grid and pedigree, i = 4 for protein.
also that the pre-processing time (mbe) increases with the i-bound. Table 2 shows the number of
unsolved problems in each domain. Note that SPRBFAOO solved all instances solved by RBFAOO.
Figure 2 plots the total CPU time obtained by RBFAOO and SPRBFAOO using smaller (resp. larger)
i-bounds corresponding to relatively weak (resp. strong) heuristics. We selected i ∈ {6, 14} for
grid and pedigree, and i ∈ {2, 4} for protein. Specifically, i = 6 (grids, pedigrees) and
i = 2 (proteins) were the smallest i-bounds for which SPRBFAOO could solve at least two thirds of
instances within the 2 hour time limit, while i = 14 (grids, pedigrees) and i = 4 (proteins) were the
largest possible i-bounds for which we could compile the heuristics without running out of memory
on all instances. The data points shown in green correspond to problem instances that were solved
only by SPRBFAOO. As before, we notice the benefit of parallel search when using relatively weak
heuristics. The largest speed-up of 9.59 is obtained on the pdbilk protein instance with i = 2. As
the i-bound increases and the heuristics become more accurate, the difference between RBFAOO(i)
and SPRBFAOO(i) decreases because both algorithms are guided more effectively towards the subspace containing the optimal solution. In addition, the overhead associated with larger i-bounds,
which is calculated sequentially, offsets considerably the speed-up obtained by SPRBFAOO(i) over
RBFAOO(i) (see for example the plot for protein instances with i = 4).
We also observed that SPRBFAOO's speed-up over RBFAOO increases sublinearly as more threads
are used (we experimented with 3, 6, and 12 threads, respectively). In addition to search overhead,
synchronization overhead is another cause for achieving only sublinear speed-ups. The synchronization overhead can be estimated by checking the node expansion rate per thread. For example, in
case of SPRBFAOO with 12 threads, the node expansion rate per thread slows down to 47 %, 50 %,
and 61 % of RBFAOO in grid (i = 6), pedigree (i = 6), and protein (i = 2), respectively.
This implies that the overhead related to locks is large. Since these numbers with 6 threads are 73
%, 79 %, and 96 %, respectively, the slowdown becomes more severe with more threads. We hypothesize
that, due to the property of the virtual q-value, SPRBFAOO's threads tend to follow the same path
from the root until search directions are diversified, and frequently access the cache table entries of
these internal nodes located on that path, where lock contentions occur non-negligibly.
Finally, SPRBFAOO's load balance is quite stable in all domains, especially when all threads are
invoked and perform search after a while. For example, its load balance ranges between 1.005-1.064, 1.013-1.049, and 1.004-1.117 for grid (i = 6), pedigree (i = 6), and protein (i = 2),
especially on those instances where SPRBFAOO expands at least 1 million nodes with 12 threads.
Impact of parameter ε. In Figure 3 we analyze the performance of SPRBFAOO with 12 threads
as a function of the parameter ε which controls the way different threads are encouraged or discouraged to start exploring a specific subproblem (see also Section 3). For this purpose and to better
understand SPRBFAOO's scaling behavior, we ignore the heuristic compilation time. Therefore,
we show the total search time (in seconds) over the instances that all parallel versions solve, and the
search-time-based average speed-ups based on the instances where RBFAOO needs at least 1 second
to solve. We obtained these numbers for ε ∈ {0.001, 0.01, 0.1}. We see that all ε values lead to
improved speed-ups. This is important because, unlike the approach of [8] which involves a sophisticated scheme, it is considerably simpler yet extremely efficient and only requires tuning a single
parameter (ε). Of the three ε values, while SPRBFAOO with ε = 0.1 spends the largest total search
time, it yields the best speed-up. This indicates a trade-off about selecting ε. Since the instances
used to calculate speed-up values are solved by RBFAOO, they contain relatively easy instances.
Table 4: Total CPU time (sec) and node expansions for hard pedigree instances. SPRBFAOO ran
with 12 threads, i = 20 (type4b) and i = 16 (largeFam). Time limit 100 hours.
    instance         (n, k, w*, h)     (mbe)   RBFAOO                  SPRBFAOO
                                       time    time     nodes          time     nodes
    type4b-100-19    (7308,5,29,354)   400     132711   22243047591    42846    50509174040
    type4b-120-17    (7766,5,24,319)   191     210      4297063        195      6046663
    type4b-130-21    (8883,5,29,416)   281     290760   51481315386    149321   177393525747
    type4b-140-19    (9274,5,30,366)   488     248376   39920187143    74643    85152364623
    largeFam3-10-52  (1905,3,36,80)    13      154994   19363865449    50700    44073583335
On the other hand, several difficult instances solved by SPRBFAOO with 12 threads are included
in calculating the total search time. In case of ε = 0.1, because of increased search overhead,
SPRBFAOO needs more search time to solve these difficult instances. There is also one protein
instance unsolved with ε = 0.1 but solved with ε = 0.01 and 0.001. This phenomenon can be
explained as follows. With large ε, SPRBFAOO searches in more diversified directions which could
reduce lock contentions, resulting in improved speed-up values. However, due to larger diversification, when SPRBFAOO with ε = 0.1 solves difficult instances, it might focus on less promising
portions of the search space, resulting in increased total search time.
Summary of the experiments. In terms of search-time-based speed-ups, our parallel shared-memory method SPRBFAOO improved considerably over its sequential counterpart RBFAOO, by
up to 7 times using 12 threads. At relatively larger i-bounds, their corresponding computational
overhead typically outweighed the gains obtained by parallel search. Still, parallel search had an
advantage of solving additional instances unsolved by serial search. Finally, in Table 4 we report the
results obtained on 5 very hard pedigree instances from [2] (mbe records the heuristic compilation
time). We see again that SPRBFAOO improved over RBFAOO on all instances, while achieving a
total-time-based speed-up of 3 on two of them (i.e., type4b-100-19 and largeFam3-10-52).
5 Related Work
The distributed AOBB algorithm daoopt [8] which builds on the notion of parallel tree search
[16], explores centrally the search tree up to a certain depth and solves the remaining conditioned
sub-problems in parallel using a large grid of distributed processing units without a shared cache.
In parallel evidence propagation, the notion of pointer jumping has been used for exact probabilistic
inference. For example, Pennock [17] performs a theoretical analysis. Xia and Prasanna [18] split
a junction tree into chains where evidence propagation is performed in parallel using a distributedmemory environment, and the results are merged later on.
Proof-number search (PNS) in AND/OR spaces [19] and its parallel variants [20] have been shown
to be effective in two-player games. As PNS is suboptimal, it cannot be applied as is to exact
MAP inference. Kaneko [21] presents shared-memory parallel depth-first proof-number search with
virtual proof and disproof numbers (vpdn). These combine proof and disproof numbers [19] and the
number of threads examining a node. Thus, our vq(n) is closely related to vpdn. However, vpdn
has an over-counting problem, which we avoid due to the way we dynamically update vq(n). Saito
et al. [22] uses threads that probabilistically avoid the best-first strategy. Hoki et al. [23] adds small
random values the proof and disproof numbers of each thread without sharing any cache table.
6 Conclusion
We presented SPRBFAOO, a new shared-memory parallel recursive best-first AND/OR search
scheme in graphical models. Using the virtual q-values shared in a single cache table, SPRBFAOO
enables threads to work on promising regions of the search space with effective reuse of the search
effort performed by others. A homogeneous search mechanism across the threads achieves an effective load balancing without resorting to sophisticated schemes used in related work [8]. We prove the
correctness of the algorithm. In experiments, SPRBFAOO improves considerably over current state-of-the-art sequential AND/OR search approaches, in many cases leading to considerable speed-ups
(up to 7-fold using 12 threads) especially on hard problem instances. Ongoing and future research
directions include proving the completeness conjecture, extending SPRBFAOO to distributed memory environments, and parallelizing the mini-bucket heuristic for shared and distributed memory.
References
[1] R. Marinescu and R. Dechter. AND/OR branch-and-bound search for combinatorial optimization in graphical models. Artificial Intelligence, 173(16-17):1457-1491, 2009.
[2] A. Kishimoto and R. Marinescu. Recursive best-first AND/OR search for optimization in
graphical models. In International Conference on Uncertainty in Artificial Intelligence (UAI),
pages 400-409, 2014.
[3] R. Korf. Linear-space best-first search. Artificial Intelligence, 62(1):41-78, 1993.
[4] A. Kishimoto, A. Fukunaga, and A. Botea. Evaluation of a simple, scalable, parallel best-first
search strategy. Artificial Intelligence, 195:222-248, 2013.
[5] W. Chrabakh and R. Wolski. GradSAT: A parallel SAT solver for the Grid. Technical
report, University of California at Santa Barbara, 2003.
[6] M. Campbell, A. J. Hoane Jr., and F.-h. Hsu. Deep Blue. Artificial Intelligence, 134(1-2):57-83, 2002.
[7] M. Enzenberger, M. Müller, B. Arneson, and R. Segal. FUEGO - an open-source framework
for board games and Go engine based on Monte-Carlo tree search. IEEE Transactions on
Computational Intelligence and AI in Games, 2(4):259-270, 2010.
[8] L. Otten and R. Dechter. A case study in complexity estimation: Towards parallel branch-and-bound over graphical models. In Uncertainty in Artificial Intelligence (UAI), pages 665-674,
2012.
[9] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
[10] S. L. Lauritzen. Graphical Models. Clarendon Press, 1996.
[11] R. Dechter and R. Mateescu. AND/OR search spaces for graphical models. Artificial Intelligence, 171(2-3):73-106, 2007.
[12] R. Marinescu and R. Dechter. Memory intensive AND/OR search for combinatorial optimization in graphical models. Artificial Intelligence, 173(16-17):1492-1524, 2009.
[13] A. Nagai. Df-pn Algorithm for Searching AND/OR Trees and Its Applications. PhD thesis, The
University of Tokyo, 2002.
[14] M. Fishelson and D. Geiger. Exact genetic linkage computations for general pedigrees. Bioinformatics, 18(1):189-198, 2002.
[15] C. Yanover, O. Schueler-Furman, and Y. Weiss. Minimizing and learning energy functions for
side-chain prediction. Journal of Computational Biology, 15(7):899-911, 2008.
[16] A. Grama and V. Kumar. State of the art in parallel search techniques for discrete optimization
problems. IEEE Transactions on Knowledge and Data Engineering, 11(1):28-35, 1999.
[17] D. Pennock. Logarithmic time parallel Bayesian inference. In Uncertainty in Artificial Intelligence (UAI), pages 431-438, 1998.
[18] Y. Xia and V. K. Prasanna. Junction tree decomposition for parallel exact inference. In IEEE
International Symposium on Parallel and Distributed Processing (IPDPS), 2008.
[19] L. V. Allis, M. van der Meulen, and H. J. van den Herik. Proof-number search. Artificial
Intelligence, 66(1):91-124, 1994.
[20] A. Kishimoto, M. Winands, M. Müller, and J.-T. Saito. Game-tree search using proof numbers:
The first twenty years. ICGA Journal, 35(3):131-156, 2012.
[21] T. Kaneko. Parallel depth first proof number search. In AAAI Conference on Artificial Intelligence, pages 95-100, 2010.
[22] J.-T. Saito, M. H. M. Winands, and H. J. van den Herik. Randomized parallel proof-number
search. In Advances in Computer Games Conference (ACG'09), volume 6048 of Lecture
Notes in Computer Science, pages 75-87. Springer, 2010.
[23] K. Hoki, T. Kaneko, A. Kishimoto, and T. Ito. Parallel dovetailing and its application to depth-first proof-number search. ICGA Journal, 36(1):22-36, 2013.
5,503 | 5,981 | Bounding the Cost of Search-Based Lifted Inference
Vibhav Gogate
University of Texas At Dallas
800 W Campbell Rd, Richardson, TX 75080
[email protected]
David Smith
University of Texas At Dallas
800 W Campbell Rd, Richardson, TX 75080
[email protected]
Abstract
Recently, there has been growing interest in systematic search-based and importance sampling-based lifted inference algorithms for statistical relational models
(SRMs). These lifted algorithms achieve significant complexity reductions over
their propositional counterparts by using lifting rules that leverage symmetries in
the relational representation. One drawback of these algorithms is that they use
an inference-blind representation of the search space, which makes it difficult to
efficiently pre-compute tight upper bounds on the exact cost of inference without running the algorithm to completion. In this paper, we present a principled
approach to address this problem. We introduce a lifted analogue of the propositional And/Or search space framework, which we call a lifted And/Or schematic.
Given a schematic-based representation of an SRM, we show how to efficiently
compute a tight upper bound on the time and space cost of exact inference from
a current assignment and the remaining schematic. We show how our bounding
method can be used within a lifted importance sampling algorithm, in order to
perform effective Rao-Blackwellisation, and demonstrate experimentally that the
Rao-Blackwellised version of the algorithm yields more accurate estimates on
several real-world datasets.
1 Introduction
A myriad of probabilistic logic languages have been proposed in recent years [5, 12, 17]. These
languages can express elaborate models with a compact specification. Unfortunately, performing
efficient inference in these models remains a challenge. Researchers have attacked this problem
by ?lifting? propositional inference techniques; lifted algorithms identify indistinguishable random
variables and treat them as a single block at inference time, which can yield significant reductions
in complexity. Since the original proposal by Poole [15], a variety of lifted inference algorithms
have emerged. One promising approach is the class of search-based algorithms [8, 9, 16, 19, 20, 21],
which lift propositional weighted model counting [4, 18] to the first-order level by transforming the
propositional search space into a smaller lifted search space.
In general, exact lifted inference remains intractable. As a result, there has been a growing interest
in developing approximate algorithms that take advantage of symmetries. In this paper, we focus
on a class of such algorithms, called lifted sampling methods [9, 10, 13, 14, 22] and in particular on
the lifted importance sampling (LIS) algorithm [10]. LIS can be understood as a sampling analogue
of an exact lifted search algorithm called probabilistic theorem proving (PTP). PTP accepts an SRM
as input (as a Markov Logic Network (MLN) [17]), decides upon a lifted inference rule to apply
(conditioning, decomposition, partial grounding, etc.), constructs a set of reduced MLNs, recursively
calls itself on each reduced MLN in this set, and combines the returned values in an appropriate
manner. A drawback of PTP is that the MLN representation of the search space is inference unaware;
at any step in PTP, the cost of inference over the remaining model is unknown. This is problematic
because unlike (propositional) importance sampling algorithms for graphical models, which can
be Rao-Blackwellised [3] in a principled manner by sampling variables until the treewidth of the
remaining model is bounded by a small constant (called w-cutset sampling [1]), it is currently not
possible to Rao-Blackwellise LIS in a principled manner. To address these limitations, we make the
following contributions:
1. We propose an alternate, inference-aware representation of the lifted search space that allows
efficient computation of the cost of inference at any step of the PTP algorithm. Our approach
is based on the And/Or search space perspective [6]. Propositional And/Or search associates a
compact representation of a search space with a graphical model (called a pseudotree), and then
uses this representation to guide a weighted model counting algorithm over the full search space.
We extend this notion to Lifted And/Or search spaces. We associate with each SRM a schematic,
which describes the associated lifted search space in terms of lifted Or nodes, which represent
branching on counting assignments [8] to groups of indistinguishable variables, and lifted And
nodes, which represent decompositions over independent and (possibly) identical subproblems.
Our formal specification of lifted And/Or search spaces offers an intermediate representation of
SRMs that bridges the gap between high-level probabilistic logics such as Markov Logic [17] and
the search space representation that must be explored at inference time.
2. We use the intermediate specification to characterize the size of the search space associated with
an SRM without actually exploring it, providing tight upper bounds on the complexity of PTP.
This allows us, in principle, to develop advanced approximate lifted inference algorithms that
take advantage of exact lifted inference whenever they encounter tractable subproblems.
3. We demonstrate the utility of our lifted And/Or schematic and tight upper bounds by developing
a Rao-Blackwellised lifted importance sampling algorithm, enabling the user to systematically
explore the accuracy versus complexity trade-off. We demonstrate experimentally that it vastly
improves the accuracy of estimation on several real-world datasets.
2 Background and Terminology
And/Or Search Spaces. The And/Or search space model is a general perspective for searching over graphical models, including both probabilistic networks and constraint networks [6]. And/Or search spaces allow for many familiar graph notions to be used to characterize algorithmic complexity. Given a graphical model M = ⟨G, Φ⟩, where G = ⟨V, E⟩ is a graph and Φ is a set of features or potentials, and a rooted tree T that spans G in such a manner that the edges of G that are not in T are all back-edges (i.e., T is a pseudo tree [6]), the corresponding And/Or search space, denoted S_T(R), contains alternating levels of And nodes and Or nodes. Or nodes are labeled with X_i, where X_i ∈ vars(Φ). And nodes are labeled with x_i and correspond to assignments to X_i. The root of the And/Or search tree is an Or node corresponding to the root of T.
Intuitively, the pseudo tree can be viewed as a schematic for the structure of an And/Or search space associated with a graphical model, which denotes (1) the conditioning order on the set vars(Φ), and (2) the locations along this ordering at which the model decomposes into independent subproblems. Given a pseudotree, we can generate the corresponding And/Or search tree via a straightforward algorithm [6] that adds conditioning branches to the pseudo tree representation during a DFS walk over the structure. Adding a cache that stores the value of each subproblem (keyed by an assignment to its context) allows each subproblem to be computed just once, and converts the search tree into a search graph. Thus the cost of inference is encoded in the pseudo tree. In Section 3, we define a lifted analogue to the backbone pseudo tree, called a lifted And/Or schematic, and in Section 4, we use the definition to prove cost-of-inference bounds for probabilistic logic models.
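To make the recursion concrete, the following minimal Python sketch (ours; the toy chain model, potentials, and names are illustrative assumptions rather than anything from [6]) evaluates an And/Or search tree over a three-variable chain: each Or node sums over the values of its variable, and each And node multiplies the local weight by the values of its independent child subproblems.

```python
from math import prod

# Toy chain A - B - C with pseudotree A -> B -> C (assumed for illustration).
# Each pairwise potential is scored at the child end of its pseudotree edge,
# where both of its arguments have already been assigned.
children = {"A": ["B"], "B": ["C"], "C": []}
pots = {"B": lambda asg: 1.0 + asg["A"] + asg["B"],
        "C": lambda asg: 2.0 - asg["B"] * asg["C"]}

def or_value(var, asg):
    # Or node: branch on both values of `var` and sum the And-node values.
    return sum(and_value(var, v, asg) for v in (0, 1))

def and_value(var, val, asg):
    # And node: local potential weight times the product of the independent
    # child subproblems below `var` in the pseudotree.
    asg = {**asg, var: val}
    w = pots[var](asg) if var in pots else 1.0
    return w * prod(or_value(c, asg) for c in children[var])

Z = or_value("A", {})  # weighted count of the toy model
```

Caching the or_value results, keyed by the assignment to a node's context, would compute each subproblem once and turn this tree search into the graph search described above.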
First Order Logic. An entity (or a constant) is an object in the model about which we would like to reason. Each entity has an associated type, τ. The set of all unique types forms the set of base types for the model. A domain is a set of entities of the same type τ; we assume that each domain is finite and is disjoint from every other domain in the model. A variable, denoted by a lower-case letter, is a symbolic placeholder that specifies where a substitution may take place. Each variable is associated with a type τ; a valid substitution requires that a variable be replaced by an object (either an entity or another variable) with the same type. We denote the domain associated with a variable v by Δ_v.
We define a predicate, denoted by R(t_1 :: τ_1, ..., t_k :: τ_k), to be a k-ary functor that maps typed entities to binary-valued random variables (also called a parameterized random variable [15]). A substitution is an expression of the form {t_1 ← x_1, ..., t_k ← x_k} where the t_i are variables of type τ_i and the x_i are either entities or variables of type τ_i. Given a predicate R and a substitution θ = {t_1 ← x_1, ..., t_k ← x_k}, the application of θ to R yields another k-ary functor with each t_i replaced by x_i, called an atom. If all the x_i are entities, the application yields a random variable. In this case, we refer to θ as a grounding of R, and Rθ as a ground atom. We adopt the notation θ_i to refer to the i-th assignment of θ, i.e. θ_i = x_i.
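As a small illustration of these definitions, the sketch below (ours; the predicate encoding is an assumption made for the example) applies a substitution of entities to a predicate's typed arguments to obtain a ground atom, and enumerates all groundings.

```python
from itertools import product

# A predicate is encoded here as (name, argument_variables); a substitution
# maps each argument variable to an entity of the matching type.
def apply_substitution(pred, theta):
    name, args = pred
    return (name,) + tuple(theta[t] for t in args)

def groundings(pred, domains):
    # Enumerate every full substitution of entities, i.e. every ground atom.
    name, args = pred
    for entities in product(*(domains[t] for t in args)):
        yield apply_substitution(pred, dict(zip(args, entities)))

doms = {"x": ["a", "b"], "y": ["a", "b"]}
atoms = list(groundings(("S", ("x", "y")), doms))  # 4 ground atoms of S(x, y)
```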
Statistical Relational Models combine first-order logic and probabilistic graphical models. A popular SRM is Markov logic networks (MLNs) [17]. An MLN is a set of weighted first-order logic clauses. Given entities, the MLN defines a Markov network over all the ground atoms in its Herbrand base (cf. [7]), with a feature corresponding to each ground clause in the Herbrand base. (We assume Herbrand interpretations throughout this paper.) The weight of each feature is the weight of the corresponding first-order clause. The probability distribution associated with the Markov network is given by P(x) = (1/Z) exp(Σ_i w_i n_i(x)), where w_i is the weight of the i-th clause and n_i(x) is its number of true groundings in x, and Z = Σ_x exp(Σ_i w_i n_i(x)) is the partition function. In this paper, we focus on computing Z. It is known that many inference problems over MLNs can be reduced to computing Z.
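As a sanity check on this definition, the brute-force sketch below (ours; the clause, weight, and two-entity domain are made-up toy values) computes Z by enumerating every world of a tiny MLN with the single weighted clause Smokes(x) ∨ Cancer(x).

```python
from itertools import product
from math import exp

people, w = ["alice", "bob"], 1.5
atoms = [("Smokes", p) for p in people] + [("Cancer", p) for p in people]

def n_true_groundings(world):
    # n_i(x): number of true groundings of Smokes(x) v Cancer(x) in `world`
    return sum(world[("Smokes", p)] or world[("Cancer", p)] for p in people)

Z = 0.0
for bits in product([False, True], repeat=len(atoms)):
    world = dict(zip(atoms, bits))
    Z += exp(w * n_true_groundings(world))  # exp(sum_i w_i n_i(x)), one clause
```

Lifted inference aims to compute the same quantity without the exhaustive enumeration over all 2^(number of ground atoms) worlds performed here.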
Probabilistic Theorem Proving (PTP) [9] is an algorithm for computing Z in MLNs. It lifts the two main steps in propositional inference: conditioning (Or nodes) and decomposition (And nodes). In lifted conditioning, the set of truth assignments to ground atoms of a predicate R is partitioned into multiple parts such that in each part (1) all truth assignments have the same number of true atoms and (2) the MLNs obtained by applying the truth assignments are identical. Thus, if R has n ground atoms, the lifted search procedure will search over O(n + 1) new MLNs while the propositional search procedure will search over O(2^n) MLNs, an exponential reduction in complexity. In lifted decomposition, the MLN is partitioned into a set of MLNs that are not only identical (up to a renaming) but also disjoint in the sense that they do not share any ground atoms. Thus, unlike the propositional procedure which creates n disjoint MLNs and searches over each, the lifted procedure searches over just one of the n MLNs (since they are identical). Unfortunately, lifted decomposition and lifted conditioning cannot always be applied, and in such cases PTP resorts to propositional conditioning and decomposition. A drawback of PTP is that unlike propositional And/Or search, which has tight complexity guarantees (e.g., exponential in the treewidth and pseudotree height), there are no (tight) formal guarantees on the complexity of PTP.¹ We address this limitation in the next two sections.
3 Lifted And/Or Schematics
Our goal in this section is to define a lifted analogue of the pseudotree notion employed by the propositional And/Or framework. The structure must encode (1) all information contained in a propositional pseudotree (a conditioning order, conditional independence assumptions), as well as (2) additional information needed by the PTP algorithm in order to exploit the symmetries of the lifted model. Since the symmetries that can be exploited highly depend on the amount of evidence, we encode the SRM after evidence is instantiated, via a process called shattering [2]. Thus, while a pseudotree encodes a graphical model, a schematic encodes an (SRM, evidence set) pair.
[Figure 1: Possible schematics for (a) R(x) ∨ S(x), (b) R(x) ∨ S(x, y), and (c) R(x) ∨ R(y) ∨ S(x, y), with Δ_x = Δ_y = 2. UN stands for Unknown. Circles and diamonds represent lifted Or and And nodes respectively.]
Definition A lifted Or node is a vertex labeled by a 6-tuple ⟨R, Θ, κ, i, c, t⟩, where (1) R is a k-ary predicate, (2) Θ is a set of valid substitutions for R, (3) κ ∈ {1, ..., k} represents the counting argument for the predicate R(t_1 :: τ_1, ..., t_k :: τ_k) and specifies a domain Δ_κ to be counted over, (4) i is an identifier of the block of the partition being counted over, (5) c ∈ Z⁺ is the number of entities in block i, and (6) t ∈ {True, False, Unknown} is the truth value of the set of entities in block i.
Definition A lifted And node is a vertex labeled by F, a (possibly empty) set of formulas, where a formula f is a pair ({(O, θ, b)}, w), in which O is a lifted Or node ⟨R, Θ, κ, i, c, t⟩, θ ∈ Θ, b ∈ {True, False}, and w ∈ ℝ. Formulas are assumed to be in clausal form.
Definition A lifted And/Or schematic, S = ⟨V_S, E_S, v_r⟩, is a rooted tree comprised of lifted Or nodes and lifted And nodes. S must obey the following properties:
• Every lifted Or node O ∈ V_S has a single child node N ∈ V_S.
• Every lifted And node A ∈ V_S has a (possibly empty) set of children {N_1, ..., N_n} ⊆ V_S.
• For each pair of lifted Or nodes O, O′ ∈ V_S, with respective labels ⟨R, Θ, κ, i, c, t⟩ and ⟨R′, Θ′, κ′, i′, c′, t′⟩, (R, i) ≠ (R′, i′). Pairs (R, i) uniquely identify lifted Or nodes.
• For every lifted Or node O ∈ V_S with label ⟨R, Θ, κ, i, c, t⟩ and all θ, θ′ ∈ Θ, either (1) θ_κ = θ′_κ, or (2) θ_κ ∈ X, where X has appeared as the decomposer label [9] of some edge in path_S(O, v_r).
• For each formula f_i = ({(O, θ, b)}, w) appearing at a lifted And node A, every O in {(O, θ, b)} satisfies O ∈ path_S(v_r, A). We call the set of edges {(O, A) | O ∈ Formulas(A)} the back edges of S.
• Each edge between a lifted Or node O and its child node N is unlabeled. Each edge between a lifted And node A and its child node N may be (1) unlabeled or (2) labeled with a pair (X, c), where X is a set of variables, called a decomposer set, and c ∈ Z⁺ is the number of equivalent entities in the block of X represented by the subtree below. If it is labeled with a decomposer set X, then (a) for every substitution set Θ labeling a lifted Or node O′ appearing in the subtree rooted at N, ∃i s.t. ∀θ ∈ Θ, θ_i ∈ X, and (b) for all decomposer sets Y labeling edges in the subtree rooted at N, Y ∩ X = ∅.
¹ Although complexity bounds exist for related inference algorithms such as first-order decomposition trees [20], they are not as tight as the ones presented in this paper.
The lifted And/Or schematic is a general structure for specifying the inference procedure in SRMs. It can encode models specified in many formats, such as Markov Logic [17] and PRV models [15]. Given a model and evidence set, a schematic in canonical form is constructed via shattering [2, 11], whereby exchangeable variables are grouped together. Inference only requires information on the size of these groups, so the representation omits information on the specific variables in a given group. Figure 1 shows And/Or schematics for three MLNs.
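A structural sketch of these labels in Python follows (ours; the field names paraphrase the tuples defined above and are not from the paper's implementation).

```python
from dataclasses import dataclass, field

@dataclass
class LiftedOrNode:
    predicate: str        # R, a k-ary predicate symbol
    substitutions: list   # Theta, the valid substitutions for R
    counting_arg: int     # kappa, index of the counted argument of R
    block_id: int         # i, block of the partition being counted over
    block_size: int       # c, number of entities in block i
    truth: str            # 'True', 'False', or 'Unknown'
    child: object = None  # every lifted Or node has exactly one child

@dataclass
class LiftedAndNode:
    formulas: list = field(default_factory=list)  # pairs ({(O, theta, b)}, w)
    children: list = field(default_factory=list)  # edges may carry (X, c)
```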
Algorithm 1 Function evalNode(And)
 1: Input: a schematic T with And root node, a counting store cs
 2: Output: a real number w
 3: N ← root(T)
 4: for formula f ∈ N do
 5:   w ← w · calculateWeight(f, cs)
 6: for child N′ of T do
 7:   cs1 ← sumOutDoneAtoms(cs, N)
 8:   if (N, N′) has label ⟨V, b, c_b⟩ then
 9:     if ∄⟨(V, b), cc⟩ ∈ cs s.t. v ∈ V then
10:       cs2 ← cs1 ∪ {⟨(V, b), ⟨{}, {({}, c_b)}⟩⟩}
11:     ⟨P, M⟩ ← getCC(V, b, cs2)  // get the counting context for V
12:     for assignment (a_i, k_i) ∈ M do
13:       // give v its own entry in cs
14:       cs3 ← updateCCAtDecomposer(cs2, V, v, (a_i, 1))
15:       w ← w · evalNode(N′, cs3)^(k_i)
16:   else
17:     w ← w · evalNode(N′, cs)
18: return w

Algorithm 2 Function evalNode(Or)
 1: Input: a schematic T with Or root node, a counting store cs
 2: Output: a real number w
 3: if ⟨(root(T), cs), w⟩ ∈ cache then return w
 4: ⟨R, Θ, κ, b, c, t, P⟩ = root(T)
 5: T′ ← child(⟨R, Θ, κ, b, c, t⟩, T)
 6: V ← {v | θ ∈ Θ, θ_κ = v}
 7: ⟨P, {⟨a_i, k_i⟩}⟩ ← getCC(V, b)
 8: w ← 0
 9: if t ∈ {True, False} then
10:   cs1 = updateCC(⟨P, M⟩, R, t_v)
11:   w ← evalNode(T′, cs1)
12: else
13:   assigns = {{v_1, ..., v_n} | v_i ∈ {0, ..., k_i}}
14:   for {v_1, ..., v_n} ∈ assigns do
15:     cs1 = updateCC(⟨P, M⟩, R, {v_1, ..., v_n})
16:     w ← w + (∏_{i=1}^{n} C(k_i, v_i)) · evalNode(T′, cs1)
17: insertCache(⟨R, Θ, κ, b, c, t, P⟩, w)
18: return w
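The branching step on lines 13-16 of Algorithm 2 is easy to reproduce in isolation; the sketch below (ours) enumerates the counting assignments for a context with block sizes k_1, ..., k_n and attaches the symmetric-assignment weight ∏_i C(k_i, v_i) to each branch.

```python
from itertools import product
from math import comb, prod

def lifted_or_branches(block_sizes):
    # Branch on how many entities of each block are assigned True, and
    # weight each branch by the number of symmetric ground assignments.
    for v in product(*(range(k + 1) for k in block_sizes)):
        weight = prod(comb(k, vi) for k, vi in zip(block_sizes, v))
        yield v, weight

# A single block of 2 entities yields 3 weighted branches instead of the
# 4 propositional ones: (0,) with weight 1, (1,) with 2, (2,) with 1.
branches = list(lifted_or_branches([2]))
```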
3.1 Lifted Node Evaluation Functions. We describe the inference procedure in Algorithms 1 and 2. We require the notion of a counting store in order to track counting assignments over the variables in the model. A counting store is a set of pairs ⟨(V, i), cc⟩, where V is a set of variables that are counted over together, i is a block identifier, and cc is a counting context. A counting context (introduced in [16]) is a pair ⟨Pr, M⟩, where Pr is a list of m predicates and M : {True, False}^m → Z⁺ ∪ {0} is a map from truth assignments to Pr to a non-negative integer denoting the count of the number of entities in the i-th block of the partition of each v ∈ V that take that assignment. We initialize the algorithm by a call to Algorithm 1 with an appropriate schematic S and an empty counting store.
The lifted And node function (Algorithm 1) first computes the weight of any completely conditioned formulas. It then makes a set of evalNode calls for each of its children O; if (A, O) has decomposer label V, it makes a call for each assignment in each block of the partition of V; otherwise it makes a single call to O. The algorithm takes the product of the resulting terms along with the product of the weights and returns the result. The lifted Or node function (Algorithm 2) retrieves the set of all assignments previously made to its counting argument variable set; it then makes an evalNode call to its child for each completion of its assignment set that is consistent with its labeled truth value, and takes their weighted sum, where the weight is the number of symmetric assignments represented by each assignment completion.
The overall complexity depends on the number of entries in the counting store at each step of inference. Note that Algorithm 1 reduces the size of the store by summing out over atoms that leave context. Algorithm 2 increases the size of the store at atoms with unknown truth value by splitting the current assignment into True and False blocks w.r.t. its atom predicate. Atoms with known truth value leave the size of the store unchanged.
4 Complexity Analysis
Algorithms 1 and 2 describe a DFS-style traversal of the lifted search space associated with S. As our notion of complexity, we are interested in specifying the maximum number of times any node v ∈ V_S is replicated during instantiation of the search space. We describe this quantity as SS_N(S). Our goal in this section is to define the function SS_N(S), which we refer to as the induced lifted width of S.
4.1 Computing the Induced Lifted Width of a Schematic. In the propositional And/Or framework, the inference cost of a pseudotree T is determined by D_R, the tree decomposition of the graph G = ⟨Nodes(T), BackEdges(T)⟩ induced by the variable ordering attained by traversing T along any DFS ordering from root to leaves [6]. Inference is O(exp(w)), where w is the size of the largest cluster in D_R. The analogous procedure in lifted And/Or requires additional information be stored at each cluster. Lifted tree decompositions are identical to their propositional counterparts with two exceptions. First, each cluster C_i requires the ordering of its nodes induced by the original order of S. Second, each cluster C_i that contains a node which occurs after a decomposer label requires the inclusion of the decomposer label. Formally:
Definition The tree sequence T_S associated with schematic S is a partially ordered set such that: (1) O ∈ Nodes(S) ⇒ O ∈ T_S, (2) (A, N) with label l ∈ Edges(S) ⇒ (A, l) ∈ T_S, and (3) Anc(N_1, N_2, S) ⇒ N_1 ≺ N_2 in T_S.
Definition The path sequence P associated with tree sequence T_S of schematic S is any totally ordered subsequence of T_S.
Definition Given a schematic S and its tree sequence T_S, the Lifted Tree Decomposition of T_S, denoted D_S, is a pair (C, T) in which C is a set of path sequences and T is a tree whose nodes are the members of C satisfying the following properties: (1) ∀(O, A) ∈ BackEdges(P), ∃i s.t. O, A ∈ C_i; (2) ∀i, j, k s.t. C_k ∈ Path_T(C_i, C_j), C_i ∩ C_j ⊆ C_k; (3) ∀A ∈ T_S, O ∈ C_i, A ≺ O ⇒ A ∈ C_i.
Given the partial ordering of nodes defined by S, each schematic S induces a unique Lifted Tree Decomposition, D_S. Computing SS_N(S) requires computing max_{C_i ∈ C} SS_C(C_i). There exists a total ordering over the nodes in each C_i; hence the lifted structure in each C_i constitutes a path. We take the lifted search space generated by each cluster C to be a tree; hence computing the maximum node replication is equivalent to computing the number of leaves in SS_C.
In order to calculate the induced lifted width of a given path, we must first determine which Or nodes are counted over dependently. Let V_C = {v | ⟨R, Θ, κ, i, c, t⟩ ∈ C, θ ∈ Θ, θ_κ = v} be the set of variables that are counted over by an Or node in cluster C. Let 𝒱_C be a partition of V_C into its dependent variable counting sets; i.e., define the binary relation C_S = {(v_1, v_2) | ∃⟨R, Θ, κ, i, c, t⟩ ∈ V_S s.t. ∃θ, θ′ ∈ Θ, θ_κ = v_1, θ′_κ = v_2}. Then V = {v′ | (v, v′) ∈ C_S⁺}, where C_S⁺ is the transitive closure of C_S. Let 𝒱_C = {V_j | v_1, v_2 ∈ V_j ⟺ (v_1, v_2) ∈ C_S⁺}. Variables that appear in a set V_j ∈ 𝒱_C refer to the same set of objects; thus all have the same type τ_j and they all share the same partition of the entities of τ_j. Let P_j denote the partition of the entities of τ_j w.r.t. variable set V_j. Then each block p_ij ∈ P_j is counted over independently (we refer to each p_ij as a dependent counting path). Thus we can calculate the total leaves corresponding to cluster C by taking the product of the leaves of each p_ij block:
SS_C(C) = ∏_{V_j ∈ 𝒱_C} ∏_{p_ij ∈ P_j} SS_p(p_ij)    (1)
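The grouping of variables into dependent counting sets is a transitive-closure computation; a small sketch (ours; union-find is one convenient way to realize it) is given below.

```python
def dependent_sets(variables, related_pairs):
    # related_pairs: (v1, v2) whenever some Or node counts over both, i.e.
    # the relation C_S above; blocks of its transitive closure are returned.
    parent = {v: v for v in variables}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for a, b in related_pairs:
        parent[find(a)] = find(b)
    groups = {}
    for v in variables:
        groups.setdefault(find(v), set()).add(v)
    return list(groups.values())

# e.g. the pair (x, y) from S(x, y) gives one set {x, y};
# with no pairs, x and y are counted over independently.
```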
Analysis of lifted Or nodes that count over the same block p_ij depends on the structure of the decomposer sets over the structure. First, we consider the case in which C contains no decomposers.
4.2 Lifted Or Nodes with No Decomposer. Consider OR_{C,V_j,i}, the sequence of nodes in C that perform conditioning over the i-th block of the partition of the variables in V_j. The nodes in OR_{C,V_j,i} count over the same set of entities. A conditioning assignment at O assigns c_t ∈ {0, ..., c} entities to True and c_f = c − c_t entities to False w.r.t. its predicate, breaking the symmetry over the c elements in the block. Each O′ ∈ OR_{C,V_j,i} that occurs after O must perform counting over two sets of size c_t and c_f separately. The number of assignments for block (V_j, i) grows exponentially with the number of ancestors counting over (V_j, i) whose truth value is unknown. Formally, let c_ij be the size of the i-th block of the partition of V_j, and let n_ij = |{O | O ∈ OR_{C,V_j,i}, O = ⟨R, Θ, κ, i, c, Unknown⟩}|. For an initial domain size c_ij and predicate count n_ij, we must compute the number of possible ways to represent c_ij as a sum of 2^(n_ij) non-negative integers. Define k_ij = 2^(n_ij). We can count the number of leaf nodes generated by counting the number of weak compositions of c_ij into k_ij parts. Thus the number of search space leaves corresponding to p_ij generated by C is:
SS_p(p_ij) = W(c_ij, k_ij) = C(c_ij + k_ij − 1, k_ij − 1)    (2)
Example Consider the example in Figure 1(a). There is a single path from the root to a leaf. The set of variables appearing on the path is V = {x}, and hence the partition of V into variables that are counted over together yields {{x}}. Thus n_{1,1} = |{R_1(2, UN), S_1(2, UN)}| = 2, c_{1,1} = 2, and k_{1,1} = 4. So we can count the leaves of the model by the expression C(2 + 4 − 1, 4 − 1) = 5!/(3! 2!) = 10.
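Equation 2 is a one-line computation; the sketch below (ours) evaluates it and checks the Figure 1(a) example.

```python
from math import comb

def W(c, k):
    # number of weak compositions of c into k parts
    return comb(c + k - 1, k - 1)

# Figure 1(a): c = 2 entities, n = 2 unknown Or nodes, so k = 2**2 = 4
# and the path generates W(2, 4) = C(5, 3) = 10 leaves, as computed above.
assert W(2, 4) == 10
```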
4.3 Lifted Or Nodes with Decomposers. To determine the size of the search tree induced by a subsequence P that contains decomposers, we must consider whether the counting argument of each Or node is decomposed on.
4.3.1 Lifted Or Nodes with Decomposers as Non-Counting Arguments
We first consider the case when OR_{C,V_j,i} contains decomposer variables as non-counting arguments. For each parent-to-child edge (A, N) with label l, Algorithm 1 generates a child for each non-zero assignment in the counting store containing the decomposer variable. If a path subsequence over variable v of initial domain c has n Or nodes, k of which occur below the decomposer label, then we can compute the number of assignments in the counting store at each decomposer as 2^(n−k). Further, the number of non-zero leaves generated by each assignment can be computed as the difference in leaves between the model over n Or nodes and the model over k Or nodes. Hence the resulting model has
2^(n−k) · ( C(c + 2^n − 1, 2^n − 1) − C(c + 2^k − 1, 2^k − 1) )
leaves. This procedure can be repeated by recursively applying the rule to split each weak composition into a difference of weak compositions for each decomposer label present in the subsequence under consideration (Algorithm 3). The final result is a polynomial in c which, when given a domain size, returns the number of search space leaves generated by the path subsequence.

Algorithm 3 Function countPathLeaves
 1: Input: a subsequence path P
 2: Output: f(x) : Z⁺ → Z⁺, where x is a domain size and f(x) is the number of search space leaves generated by P
 3: // we represent the recursive polynomial a(wc1 − wc2) as a triple (a, wc1, wc2),
    // where a ∈ Z and wc1, wc2 are either weak compositions (base case) or triples of this type (recursive case)
 4: type WCP = WC INT | WCD (INT, WCP, WCP)
 5: // makePoly constructs the polynomial
 6: function MAKEPOLY((WC n), (t, a, s))
 7:   return WCD(n / 2^(t−a), WC n, WC 2^(t−a))
 8: function MAKEPOLY((WCD (c, wc1, wc2)), (t, a, s))
 9:   return WCD(a, makePoly wc1 (t, a, s), makePoly wc2 (t − s, a − s, s))
10: // applyDec divides out the Or nodes with counting variables that are decomposers
11: function APPLYDEC(d, (WC a))
12:   return WC (a / 2^d)
13: function APPLYDEC(d, (WCD (a, b, c)))
14:   return WCD(a, applyDec d b, applyDec d c)
15: // evalPoly creates a function that takes a domain and computes the differences of the constituent weak compositions
16: function EVALPOLY((WCD (a, b, c)), x)
17:   return a · (evalPoly b x − evalPoly c x)
18: function EVALPOLY((WC a), x)
19:   return C(x + a − 1, a − 1)
20: t = totalOrNodes(P)
21: dv = orNodesWithDecomposerCountingArgument(P)
22: poly = WC 2^t; orNodesAbove = 0; orNodesBetween = 0
23: for N of P do
24:   if N = (A, ⟨v, p, c⟩) then
25:     poly = makePoly poly (t, orNodesAbove, orNodesBetween)
26:     orNodesBetween = 0
27:   else
28:     orNodesAbove++; orNodesBetween++
29: return 2^dv · evalPoly (applyDec dv poly)
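The polynomial representation that Algorithm 3 manipulates can be sketched directly (ours; only the evaluation side is shown, with WC and WCD mirroring the WCP type above).

```python
from math import comb

class WC:
    # WC(k) stands for the weak-composition count W(x, k) as a function of x.
    def __init__(self, k): self.k = k
    def __call__(self, x): return comb(x + self.k - 1, self.k - 1)

class WCD:
    # WCD(a, p, q) stands for the difference polynomial a * (p(x) - q(x)).
    def __init__(self, a, p, q): self.a, self.p, self.q = a, p, q
    def __call__(self, x): return self.a * (self.p(x) - self.q(x))

# Figure 1(c), worked in the example that follows: f(x) = 2(W(x,4) - W(x,2)),
# so a domain of size 2 yields 2 * (10 - 3) = 14 leaves.
f = WCD(2, WC(4), WC(2))
assert f(2) == 14
```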
Example Consider the example in Figure 1(c). Again there is a single path from the root to a leaf. The set of variables appearing on the path is V = {x, y}. The partition of V into variables that are counted over together yields {{x, y}}. Algorithm 3 returns the polynomial f(x) = 2(W(x, 4) − W(x, 2)). So the search space contains 2 · ( C(2 + 4 − 1, 4 − 1) − C(2 + 2 − 1, 2 − 1) ) = 2 · (10 − 3) = 14 leaves.
4.3.2 Lifted Or Nodes with Decomposers as Counting Arguments
The procedure is similar for the case when P contains Or nodes that count over variables that have been decomposed, with one addition. Or nodes that count over a variable that has previously appeared as the decomposer label of an ancestor in the path have a domain size of 1 and hence always spawn W(1, 2) = 2 children instead of W(x, 2) children. If there are d Or nodes in P that count over decomposed variables, we must divide the k term of each weak composition in our polynomial by 2^d. Lines 11-14 of Algorithm 3 perform this operation.
Example Consider the example shown in Figure 1(b). Again there is one path from the root to leaf, with V = {x, y}; partitioning V into sets of variables that are counted over together yields {{x}, {y}}. Thus n_{1,1} = |{R_1(2, UN)}| = 1, c_{1,1} = 2, and k_{1,1} = 2. Similarly, n_{2,1} = |{S_1(2, UN)}| = 1, c_{2,1} = 2, and k_{2,1} = 2. Algorithm 3 returns the constant functions f_1(x) = f_2(x) = 2 · W(x, 1) = 2. Equation 1 indicates that we take the product of these functions. So the search space contains 4 leaves regardless of the domain sizes of x and y.
4.4 Overall Complexity. Detailed analysis, as well as a proof of correctness of Algorithm 3, is given in the supplemental material section. Here we give general complexity results.
Theorem 4.1 Given a lifted And/Or schematic S with associated tree decomposition D_S = (C, T), the overall time and space complexity of inference in S is O(max_{C_i ∈ C} SS_C(C_i)).
5 An Application: Rao-Blackwellised Importance Sampling
Rao-Blackwellisation [1, 3] is a variance-reduction technique which combines exact inference with sampling. The idea is to partition the ground atoms into two sets: a set of atoms, say X, that will be sampled, and a set of atoms, say Y, that will be summed out analytically using exact inference techniques. Typically, the accuracy improves (the variance decreases) as the cardinality of Y is increased. However, so does the cost of exact inference, which in turn decreases the accuracy because fewer samples are generated. Thus, there is a trade-off.
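A toy numeric sketch of this trade-off (ours; the two-atom model and weights are made up) samples one atom from a proposal and sums the other out exactly, so every sample already integrates over the exact part.

```python
import random
from math import exp

w1, w2 = 0.5, 1.2  # weights of two independent single-atom features X and Y

def rb_estimate(num_samples, q_true=0.5):
    # Rao-Blackwellised importance sampling: sample X, sum Y analytically.
    total = 0.0
    for _ in range(num_samples):
        x = random.random() < q_true
        q = q_true if x else 1.0 - q_true
        fx = exp(w1) if x else 1.0
        fy_summed = exp(w2) + 1.0     # exact sum over Y in {True, False}
        total += fx * fy_summed / q   # importance-weighted sample
    return total / num_samples        # unbiased estimate of Z
```

Here the estimator's variance comes only from the sampled atom; summing Y exactly removes its contribution entirely, at the price of the exact-inference work.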
Algorithm 4 Function makeRaoFunction
 1: Input: a schematic S
 2: Output: f(x) : CS → Z⁺
 3: // find the clusters of S
 4: (C, T) = findTreeDecomposition(S)
 5: sizef ← {}
 6: for C_i of C do
 7:   P = dependentCountingPaths(C_i)
 8:   cf ← {}
 9:   for (V_j, P_j) of P do
10:     f_j = countPathLeaves(P_j)
11:     cf.append(⟨V_j, f_j⟩)
12:   sizef.append(cf)
13: return sizef
Rao-Blackwellisation is particularly useful in lifted sampling schemes because subproblems over large sets of random variables are often tractable (e.g. subproblems containing 2^n assignments can often be summed out in O(n) time via lifted conditioning, or in O(1) time via lifted decomposition). The approach presented in Section 3 is ideal for this task because Algorithm 3 returns a function that is specified at the schematic level rather than the search space level. Computing the size of the remaining search space requires just the evaluation of a set of polynomials. In this section, we introduce our sampling scheme, which adds Rao-Blackwellisation to lifted importance sampling (LIS) (as detailed in [9, 10]). Technically, LIS is a minor modification of PTP, in which instead of searching over all possible truth assignments to ground atoms via lifted conditioning, the algorithm generates a random truth assignment (lifted sampling), and weighs it appropriately to yield an unbiased estimate of the partition function.

Algorithm 5 Function evalRaoFunction
 1: Input: a counting store cs, a list of lists of size functions sf
 2: Output: s ∈ Z⁺, the cost of exact inference
 3: clusterCosts ← {}
 4: for cf_i of sf do
 5:   clusterCost ← 1
 6:   for ⟨V_j, f_j⟩ of cf_i do
 7:     assigns ← getCC(V_j)
 8:     for s_k of assigns do
 9:       clusterCost ← clusterCost · f_j(s_k)
10:   clusterCosts.append(clusterCost)
11: return max(clusterCosts)
5.1 Computing the size bounding function. Given a schematic S = ⟨V_S, E_S, v_r⟩ to sample, we introduce a preprocessing step that constructs a size evaluation function for each v ∈ V_S. Algorithm 4 details the process of creating the function for one node. It takes as input the schematic S rooted at v. It first finds the tree decomposition of S. The algorithm then finds the dependent paths in each cluster; finally, it applies Algorithm 3 to each dependent path and wraps the resulting function with the variable dependency. It returns a list of lists of (variable, function) pairs.
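A compact sketch of how this output is consumed (ours; the data layout is an assumption that mirrors the list-of-lists description above) is:

```python
def eval_rao(size_functions, current_blocks):
    # size_functions: one inner list per cluster, each entry a (vars, f) pair
    # current_blocks: maps a variable set to the block sizes of the current
    # counting assignments (retrieved from the counting store).
    costs = []
    for cluster in size_functions:
        cost = 1
        for variables, f in cluster:
            for block_size in current_blocks[variables]:
                cost *= f(block_size)   # polynomial from countPathLeaves
        costs.append(cost)
    return max(costs)                   # bound on the cost of exact inference
```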
5.2 Importance Sampling at lifted Or Nodes. Importance sampling at lifted Or nodes is similar to its propositional analogue. Each lifted Or node is now specified by an 8-tuple ⟨R, Θ, κ, i, c, t, Q, sf⟩, in which Q is the proposal distribution for (R, i), and sf is the output of Algorithm 4. The sampling algorithm takes an additional input, cb, specifying the complexity bound for Rao-Blackwellisation. Given an Or node where t = Unknown, we first compute the cost of exact inference.
Algorithm 5 describes the procedure. It takes as input (1) the list of lists sf output by Algorithm 4, and (2) the counting store, detailing the counting assignments already made by the current sample. For each sublist in the input list, the algorithm evaluates each (variable, function) pair by (1) retrieving the list of current assignments from the counting store, (2) evaluating the function for the domain size of each assignment, and (3) computing the product of the results. Each of these values represents a bound on the cost of inference for a single cluster; Algorithm 5 returns c, the maximum of this list.
If c ≤ cb we call evalNode(S); otherwise we sample assignment i from Q with probability q_i, update the counting store with assignment i, and call sampleNode(S′), where S′ is the child schematic, yielding estimate ŵ of the partition function of S′. We then return Ẑ_S = ŵ/q_i as the estimate of the partition function at S.
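The budget-gated choice at an Or node can be sketched as follows (ours; eval_node, sample_node, and update_store are hypothetical stand-ins for the routines described above, and Q is a discrete proposal over counting assignments).

```python
import random

def sample_or_node(node, cs, budget, Q, eval_node, sample_node, eval_rao):
    # Finish by exact (Rao-Blackwellised) search when the bound fits the
    # budget; otherwise sample one assignment and importance-weight it.
    if eval_rao(node.size_functions, cs) <= budget:
        return eval_node(node, cs)
    i = random.choices(range(len(Q)), weights=Q)[0]
    w_hat = sample_node(node.child, update_store(cs, node, i))
    return w_hat / Q[i]   # unbiased estimate of the partition function here

def update_store(cs, node, i):
    # hypothetical helper: record counting assignment i for (R, block) in cs
    return {**cs, (node.predicate, node.block_id): i}
```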
5.3 Importance Sampling at lifted And Nodes. Importance sampling at lifted And nodes differs from its propositional counterpart in that a decomposer-labeled edge (A, T) represents d distributions that are not only independent but also identical. Let A be a lifted And node that we wish to sample, with children S_1, ..., S_k and corresponding decomposer labels d_1, ..., d_k (for each edge with no decomposer label take d_i = 1). Then the estimator for the partition function at A is:
Ẑ_A = ∏_{i ∈ {1..k}} ∏_{j ∈ {1..d_i}} Ẑ_{S_i} (equivalently, ∏_{i} Ẑ_{S_i}^(d_i), since the d_i copies are identical).
6 Experiments
We ran our Rao-Blackwellised importance sampler on three benchmark SRMs and datasets: (1) the Friends, Smokers and Asthma MLN and dataset described in [19], (2) the webKB MLN for collective classification, and (3) the Protein MLN, in which the task is to infer protein interactions from biological data. All models are available from www.alchemy.cs.washington.edu.
Setup. For each model, we set 10% randomly selected ground atoms as evidence, and designated them to have True value. We then estimated the partition function via our Rao-Blackwellised sampler with complexity bounds {0, 10, 100, 1000} (a bound of 0 yields the LIS algorithm). We used the uniform distribution as our proposal. We ran each sampler 50 times and computed the sample variance of the estimates.
Results. Figure 2 shows the sample variance of the estimators as a function of time. We see that the Rao-Blackwellised samplers typically have smaller variance than LIS. However, increasing the complexity bound typically does not improve the variance as a function of time (but the variance does improve as a function of the number of samples). Our results indicate that the structure of the model plays a role in determining the most efficient complexity bound for sampling. In general, models with large decomposers, especially near the bottom of the schematic, will benefit from a larger complexity bound, because it is often more efficient to perform exact inference over a decomposer node.
[Figure 2: Log sample variance as a function of time, for complexity bounds 0, 10, 100, and 1000. (a) Friends and Smokers, Asthma: 2600 objects, 10% evidence. (b) webKB: 410 objects, 10% evidence. (c) Protein: 550 objects, 10% evidence.]
7 Conclusions and Future Work
In this work, we have presented an inference-aware
representation of SRMs based on the And/Or framework. Using this framework, we have proposed an
accurate and efficient method for bounding the cost of inference for the family of lifted conditioning based algorithms, such as Probabilistic Theorem Proving. Given a shattered SRM, we have
shown how the method can be used to quickly identify tractable subproblems of the model. We
have presented one immediate application of the scheme by developing a Rao-Blackwellised Lifted
Importance Sampling Algorithm, which uses our bounding scheme as a variance reducer.
Acknowledgments
We gratefully acknowledge the support of the Defense Advanced Research Projects Agency (DARPA)
Probabilistic Programming for Advanced Machine Learning Program under Air Force Research
Laboratory (AFRL) prime contract no. FA8750-14-C-0005. Any opinions, findings, and conclusions
or recommendations expressed in this material are those of the author(s) and do not necessarily reflect
the view of DARPA, AFRL, or the US government.
References
[1] B. Bidyuk and R. Dechter. Cutset sampling for Bayesian networks. Journal of Artificial Intelligence Research, 28:1–48, 2007.
[2] R. Braz, Eyal Amir, and Dan Roth. Lifted first-order probabilistic inference. In Proceedings of the 19th International Joint Conference on Artificial Intelligence, pages 1319–1325. Citeseer, 2005.
[3] George Casella and Christian P. Robert. Rao-Blackwellisation of sampling schemes. Biometrika, 83(1):81–94, 1996.
[4] M. Chavira and A. Darwiche. On probabilistic inference by weighted model counting. Artificial Intelligence, 172(6-7):772–799, 2008.
[5] Luc De Raedt and Kristian Kersting. Probabilistic inductive logic programming. Springer, 2008.
[6] Rina Dechter and Robert Mateescu. And/Or search spaces for graphical models. Artificial Intelligence, 171(2):73–106, 2007.
[7] Michael R. Genesereth and Eric Kao. Introduction to Logic, Second Edition. Morgan & Claypool Publishers, 2013.
[8] Vibhav Gogate and Pedro Domingos. Exploiting logical structure in lifted probabilistic inference. In Statistical Relational Artificial Intelligence, 2010.
[9] Vibhav Gogate and Pedro Domingos. Probabilistic theorem proving. In Proceedings of the Twenty-Seventh Annual Conference on Uncertainty in Artificial Intelligence (UAI-11), pages 256–265, Corvallis, Oregon, 2011. AUAI Press.
[10] Vibhav Gogate, Abhay Kumar Jha, and Deepak Venugopal. Advances in lifted importance sampling. In AAAI, 2012.
[11] Abhay Jha, Vibhav Gogate, Alexandra Meliou, and Dan Suciu. Lifted inference seen from the other side: The tractable features. In Advances in Neural Information Processing Systems, pages 973–981, 2010.
[12] Brian Milch, Bhaskara Marthi, Stuart Russell, David Sontag, Daniel L. Ong, and Andrey Kolobov. BLOG: Probabilistic models with unknown objects. Statistical Relational Learning, page 373, 2007.
[13] M. Niepert. Lifted probabilistic inference: An MCMC perspective. In UAI 2012 Workshop on Statistical Relational Artificial Intelligence, 2012.
[14] M. Niepert. Symmetry-aware marginal density estimation. In Twenty-Seventh AAAI Conference on Artificial Intelligence, pages 725–731, 2013.
[15] David Poole. First-order probabilistic inference. In IJCAI, volume 3, pages 985–991. Citeseer, 2003.
[16] David Poole, Fahiem Bacchus, and Jacek Kisynski. Towards completely lifted search-based probabilistic inference. arXiv preprint arXiv:1107.4035, 2011.
[17] Matthew Richardson and Pedro Domingos. Markov logic networks. Machine Learning, 62(1-2):107–136, 2006.
[18] T. Sang, P. Beame, and H. Kautz. Solving Bayesian networks by weighted model counting. In Proceedings of the Twentieth National Conference on Artificial Intelligence, pages 475–482, 2005.
[19] Dan Suciu, Abhay Jha, Vibhav Gogate, and Alexandra Meliou. Lifted inference seen from the other side: The tractable features. In NIPS, 2010.
[20] Nima Taghipour, Jesse Davis, and Hendrik Blockeel. First-order decomposition trees. In Advances in Neural Information Processing Systems, pages 1052–1060, 2013.
[21] Guy Van den Broeck, Nima Taghipour, Wannes Meert, Jesse Davis, and Luc De Raedt. Lifted probabilistic inference by first-order knowledge compilation. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, Volume Three, pages 2178–2185. AAAI Press, 2011.
[22] Deepak Venugopal and Vibhav Gogate. On lifting the Gibbs sampling algorithm. In Advances in Neural Information Processing Systems, pages 1655–1663, 2012.
Resource Constrained Prediction
Joseph Wang
Department of Electrical
& Computer Engineering
Boston University,
Boston, MA 02215
[email protected]
Kirill Trapeznikov
Systems & Technology Research
Woburn, MA 01801
kirill.trapeznikov@
stresearch.com
Venkatesh Saligrama
Department of Electrical
& Computer Engineering
Boston University,
Boston, MA 02215
[email protected]
Abstract
We study the problem of reducing test-time acquisition costs in classification systems. Our goal is to learn decision rules that adaptively select sensors for each
example as necessary to make a confident prediction. We model our system as a
directed acyclic graph (DAG) where internal nodes correspond to sensor subsets
and decision functions at each node choose whether to acquire a new sensor or
classify using the available measurements. This problem can be posed as an empirical risk minimization over training data. Rather than jointly optimizing such
a highly coupled and non-convex problem over all decision nodes, we propose an
efficient algorithm motivated by dynamic programming. We learn node policies
in the DAG by reducing the global objective to a series of cost sensitive learning
problems. Our approach is computationally efficient and has proven guarantees of
convergence to the optimal system for a fixed architecture. In addition, we present
an extension to map other budgeted learning problems with large number of sensors to our DAG architecture and demonstrate empirical performance exceeding
state-of-the-art algorithms for data composed of both few and many sensors.
1
Introduction
Many scenarios involve classification systems constrained by measurement acquisition budget. In
this setting, a collection of sensor modalities with varying costs are available to the decision system.
Our goal is to learn adaptive decision rules from labeled training data that, when presented with an
unseen example, would select the most informative and cost-effective acquisition strategy for this
example. In contrast, non-adaptive methods [24] attempt to identify a common sparse subset of
sensors that can work well for all data. Our goal is an adaptive method that can classify typical cases
using inexpensive sensors while using expensive sensors only for atypical cases.
We propose an adaptive sensor acquisition system learned using labeled training examples. The
system, modeled as a directed acyclic graph (DAG), is composed of internal nodes, which contain
decision functions, and a single sink node (the only node with no outgoing edges), representing
the terminal action of stopping and classifying (SC). At each internal node, a decision function
routes an example along one of the outgoing edges. Sending an example to another internal node
represents acquisition of a previously unacquired sensor, whereas sending an example to the sink
node indicates that the example should be classified using the currently acquired set of sensors. The
goal is to learn these decision functions such that the expected error of the system is minimized
subject to an expected budget constraint.
First, we consider the case where the number of sensors available is small (as in [19, 23, 20]), though
the dimensionality of data acquired by each sensor may be large (such as an image taken in different
1
modalities). In this scenario, we construct a DAG that allows for sensors to be acquired in any order
and classification to occur with any set of sensors. In this regime, we propose a novel algorithm to
learn node decisions in the DAG by emulating dynamic programming (DP). In our approach, we
decouple a complex sequential decision problem into a series of tractable cost-sensitive learning
subproblems. Cost-sensitive learning (CSL) generalizes multi-decision learning by allowing decision costs to be data dependent [2]. Such reduction enables us to employ computationally efficient
CSL algorithms for iteratively learning node functions in the DAG. In our theoretical analysis, we
show that, given a fixed DAG architecture, the policy risk learned by our algorithm converges to the
Bayes risk as the size of the training set grows.
Next, we extend our formulation to the case where a large number of sensors exist, but the number
of distinct sensor subsets that are necessary for classification is small (as in [25, 11] where the depth
of the trees is fixed to 5). For this regime, we present an efficient subset selection algorithm based
on sub-modular approximation. We treat each sensor subset as a new ?sensor,? construct a DAG
over unions of these subsets, and apply our DP algorithm. Empirically, we show that our approach
outperforms state-of-the-art methods in both small and large scale settings.
Related Work: There is an extensive literature on adaptive methods for sensor selection for reducing
test-time costs. It arguably originated with detection cascades (see [26, 4] and references therein),
a popular method in reducing computation cost in object detection for cases with highly skewed
class imbalance and generic features. Computationally cheap features are used at first to filter out
negative examples and more expensive features are used in later stages.
Our technical approach is closely related to Trapeznikov et al. [19] and Wang et al. [23, 20].
Like us they formulate an ERM problem and generalize detection cascades to classifier cascades
and trees and handle balanced and/or multi-class scenarios. Trapeznikov et al. [19] propose a
similar training scheme for the case of cascades, however restrict their training to cascades and
simple decision functions which require alternating optimization to learn. Alternatively, Wang et
al. [21, 22, 23, 20] attempt to jointly solve the decision learning problem by formulating a linear
upper-bounding surrogate, converting the problem into a linear program (LP).
Conceptually, our work is closely related to Xu et al. [25]
and Kusner et al.[11], who introduce Cost-Sensitive Trees
of Classifiers (CSTC) and Approximately Submodular
Trees of Classifiers (ASTC), respectively, to reducing test
time costs. Like our paper they propose a global ERM
problem. They solve for the tree structure, internal decision rules and leaf classifiers jointly using alternative
minimization techniques. Recently, Kusner et al.[11]
propose Approximately Submodular Trees of Classifiers
(ASTC), a variation of CSTC which provides robust performance with significantly reduced training time and
greedy approximation, respectively. Recently, Nan et al.
[14] proposed random forests to efficiently learn budgeted
systems using greedy approximation over large data sets.
Figure 1: A simple example of a sensor
selection DAG for a three sensor system.
At each state, represented by a binary vector indicating measured sensors, a policy ?
chooses between either adding a new sensor
or stopping and classifying. Note that the
state sSC has been repeated for simplicity.
The subject of this paper is broadly related to other
adaptive methods in the literature. Generative methods
[17, 8, 9, 6] pose the problem as a POMDP, learn conditional probability models, and myopically select features
based information gain of unknown features. MDP-based methods [5, 10, 7, 3] encode current observations as state, unused features as action space, and formulate various reward functions to account
for classification error and costs. He et al. [7] apply imitation learning of a greedy policy with a single classification step as actions. Dulac-Arnold et al. [5] and Karayev et al. [10] apply reinforcement
learning to solve this MDP. Benbouzid et al.[3] propose classifier cascades with an additional skip
action within an MDP framework. Nan et al. [15] consider a nearest neighbor approach to feature
selection, with confidence driven by margin magnitude.
2
2 Adaptive Sensor Acquisition by DAG
In this section, we present our adaptive sensor acquisition DAG that during test-time sequentially decides which sensors should be acquired for every new example entering the system. Before formally describing the system and our learning approach, we first provide a simple illustration for a 3 sensor DAG shown in Fig. 1. The state indicating acquired sensors is represented by a binary vector, with a 0 indicating that a sensor measurement has not been acquired and a 1 representing an acquisition.
Consider a new example that enters the system. Initially, it has a state of [0, 0, 0]^T (as do all samples during test-time) since no sensors have been acquired. It is routed to the policy function π_0, which makes a decision to measure one of the three sensors or to stop and classify. Let us assume that the function π_0 routes the example to the state [1, 0, 0]^T, indicating that the first sensor is acquired. At this node, the function π_1 has to decide whether to acquire the second sensor, acquire the third, or classify using only the first. If π_1 chooses to stop and classify, then this example will be classified using only the first sensor.
Such a decision process is performed for every new example. The system adaptively collects sensors until the policy chooses to stop and classify (we assume that when all sensors have been collected, the decision function has no choice but to stop and classify, as shown for π_7 in Fig. 1).
Problem Formulation: A data instance, x ∈ X, consists of M sensor measurements, x = {x_1, x_2, ..., x_M}, and belongs to one of L classes indicated by its label y ∈ Y = {1, 2, ..., L}. Each sensor measurement, x_m, is not necessarily a scalar but may instead be multi-dimensional. Let the pair, (x, y), be distributed according to an unknown joint distribution D. Additionally, associated with each sensor measurement x_m is an acquisition cost, c_m.

To model the acquisition process, we define a state space S = {s_1, ..., s_K, s_SC}. The states {s_1, ..., s_K} represent subsets of sensors, and the stop-and-classify state s_SC represents the action of stopping and classifying with a current subset. Let X_s correspond to the space of sensor measurements in subset s. We assume that the state space includes all possible subsets¹, K = 2^M. For example in Fig. 1, the system contains all subsets of 3 sensors. We also introduce the state transition function, T : S → S, that defines a set of actions that can be taken from the current state. A transition from the current sensor subset to a new subset corresponds to an acquisition of new sensor measurements. A transition to the state s_SC corresponds to stopping and classifying using the available information. This terminal state, s_SC, has access to a classifier bank used to predict the label of an example. Since classification has to operate on any sensor subset, there is one classifier for every s_k: f_{s_1}, ..., f_{s_K} such that f_s : X_s → Y. We assume the classifier bank is given and pre-trained. Practically, the classifiers can be either unique for each subset or a missing feature (i.e. sensor) classification system as in [13]. We overload notation and use node, subset of sensors, and path leading up to that subset on the DAG interchangeably. In particular we let S denote the collection of subsets of nodes. Each subset is associated with a node on the DAG graph. We refer to each node as a state since it represents the "state-of-information" for an instance at that node.
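To make the state space concrete, the following is a minimal sketch (our own illustration, not code from the paper) of the exhaustive state space for M sensors: states are bit-tuples, and the transition function lists every superset reachable by adding one sensor, plus the stop-and-classify action.

```python
# Sketch of the exhaustive sensor-subset state space and transitions.
from itertools import product

M = 3
STOP = "SC"  # stands in for the terminal state s_SC

def transitions(state):
    """All actions available from `state`: add one unmeasured sensor, or stop."""
    next_states = []
    for m in range(M):
        if state[m] == 0:
            nxt = list(state)
            nxt[m] = 1
            next_states.append(tuple(nxt))
    next_states.append(STOP)  # stop-and-classify is always available
    return next_states

states = [s for s in product([0, 1], repeat=M)]
print(len(states))            # K = 2^M = 8 subset states (plus the shared stop state)
print(transitions((0, 0, 0))) # [(1,0,0), (0,1,0), (0,0,1), 'SC']
print(transitions((1, 1, 1))) # ['SC'] -- only stopping remains
```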
We define the loss associated with classifying an example/label pair (x, y) using the sensors in s_j as

L_{s_j}(x, y) = 1_{f_{s_j}(x) ≠ y} + Σ_{k ∈ s_j} c_k.   (1)

Using this convention, the loss is the sum of the empirical risk associated with classifier f_{s_j} and the cost of the sensors in the subset s_j. The expected loss over the data is defined as

L_D(π) = E_{(x,y)∼D} [ L_{π(x)}(x, y) ].   (2)
Our goal is to find a policy which adaptively selects subsets for examples such that their average loss is minimized:

min_{π ∈ Π} L_D(π),   (3)

where π : X → S is a policy selected from a family of policies Π and π(x) is the state selected by the policy π for example x. We denote by L_D^* the value of (3) when Π is the family of all measurable functions. L_D^* is the Bayes cost, representing the minimum possible cost for any function given the distribution of data.

¹ While enumerating all possible combinations is feasible for small M, for large M this problem becomes intractable. We will overcome this limitation in Section 3 by applying a novel sensor selection algorithm. For now, we remain in the small M regime.
In practice, the distribution D is unknown, and instead we are given training examples (x_1, y_1), ..., (x_n, y_n) drawn i.i.d. from D. The problem becomes an empirical risk minimization:

min_{π ∈ Π} Σ_{i=1}^n L_{π(x_i)}(x_i, y_i).   (4)
Recall that our sensor acquisition system is represented as a DAG. Each node in the graph corresponds to a state (i.e. sensor subset) in S, and the state transition function, T(s_j), defines the outgoing edges from every node s_j. We refer to the entire edge set in the DAG as E. In such a system, the policy π is parameterized by the set of decision functions π_1, ..., π_K at every node in the DAG. Each function, π_j : X → T(s_j), maps an example to a new state (node) from the set specified by outgoing edges.

Rather than directly minimizing the empirical risk in (4), we first define a step-wise cost associated with all edges (s_j, s_k) ∈ E:

C(x, y, s_j, s_k) = { Σ_{t ∈ s_k \ s_j} c_t   if s_k ≠ s_SC;   1_{f_{s_j}(x) ≠ y}   otherwise }.   (5)

C(·) is either the cost of acquiring new sensors, or the classification error induced by classifying with the current subset if s_k = s_SC. Using this step-wise cost, we define the empirical loss of the system w.r.t. a path for an example x:

R(x, y, π_1, ..., π_K) = Σ_{(s_j, s_{j+1}) ∈ path(x, π_1, ..., π_K)} C(x, y, s_j, s_{j+1}),   (6)

where path(x, π_1, ..., π_K) is the path on the DAG induced by the policy functions π_1, ..., π_K for example x. The empirical minimization equivalent to (4) for our DAG system is a sample average over all example specific path losses:

π_1^*, ..., π_K^* = argmin_{π_1, ..., π_K ∈ Π} Σ_{i=1}^n R(x_i, y_i, π_1, ..., π_K).   (7)

Next, we present a reduction to learn the functions π_1, ..., π_K that minimize the loss in (7).
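As a concrete illustration of the objective, the following sketch (our own, with hypothetical data structures) evaluates the path loss R of Eq. (6) for a single example: starting from the empty subset, it follows the policy functions until one stops, accumulating the acquisition costs of newly added sensors and, at the end, the 0-1 classification error.

```python
# Sketch: evaluate the path loss R(x, y, pi_1, ..., pi_K) for one example.
def path_loss(x, y, policies, classifiers, costs):
    """policies: dict mapping a state (bit-tuple) to a function of x that
    returns the next state or 'SC'; classifiers: dict mapping a state to a
    classifier f_s; costs: per-sensor acquisition costs c_1..c_M."""
    state = tuple([0] * len(costs))       # start with no sensors acquired
    total = 0.0
    while True:
        nxt = policies[state](x)
        if nxt == "SC":                    # stop: pay the classification error
            total += float(classifiers[state](x) != y)
            return total
        for m, (a, b) in enumerate(zip(state, nxt)):
            if b and not a:                # newly acquired sensor -> pay c_m
                total += costs[m]
        state = nxt
```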
2.1 Learning Policies in a DAG
Learning the functions π_1, ..., π_K that minimize the cost in (7) is a highly coupled problem. Learning a decision function π_j depends on the other functions in two ways: (a) π_j depends on functions at nodes downstream (nodes to which a path exists from π_j), as these determine the cost of each action taken by π_j on an individual example (the cost-to-go), and (b) π_j depends on functions at nodes upstream (nodes from which a path exists to π_j), as these determine the distribution of examples that π_j acts on. Consider a policy π_j at a node corresponding to state s_j such that all outgoing edges from j lead to leaves. Also, assume all examples pass through this node π_j (i.e., we ignore the upstream dependence (b)). This yields the following important lemma:

Lemma 2.1. Given the assumptions above, the problem of minimizing the risk in (6) w.r.t. a single policy function, π_j, is equivalent to solving a k-class cost-sensitive learning (CSL) problem.²
Proof. Consider the risk in (6) with π_j such that all outgoing edges from j lead to a leaf. Ignoring the effect of other policy functions upstream from j, the risk w.r.t. π_j is:

R(x, y, π_j) = Σ_{s_k ∈ T(s_j)} C(x, y, s_j, s_k) 1_{π_j(x) = s_k}   →   min_{π ∈ Π} Σ_{i=1}^n R(x_i, y_i, π_j).

Minimizing the risk over training examples yields the optimization problem on the right hand side. This is equivalent to a CSL problem over the space of "labels" T(s_j) with costs given by the transition costs C(x, y, s_j, s_k).
In order to learn the policy functions π_1, ..., π_K, we propose Algorithm 1, which iteratively learns policy functions using Lemma 2.1. We solve the CSL problem by using a filter-tree scheme [2] for Learn, which constructs a tree of binary classifiers. Each binary classifier can be trained using regularized risk minimization. For concreteness we define the Learn algorithm as

Learn((x_1, w_1), ..., (x_n, w_n)) ≜ FilterTree((x_1, w_1), ..., (x_n, w_n)),   (8)

where the binary classifiers in the filter tree are trained using an appropriately regularized calibrated convex loss function. Note that multiple schemes exist that map the CSL problem to binary classification.
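As a hedged illustration of the CSL step (not the filter-tree reduction of [2] itself), the sketch below uses a simpler regression-based reduction: one least-squares regressor per action predicts that action's cost, and the learned policy picks the action with the smallest predicted cost.

```python
# Sketch: a minimal cost-sensitive learner (regression reduction, not filter trees).
import numpy as np

def learn_csl(X, W):
    """X: n x d features; W: n x k per-action cost vectors (the weight vectors
    of Algorithm 1). Fits one regressor per action, then acts greedily."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # add a bias column
    coef, *_ = np.linalg.lstsq(Xb, W, rcond=None)   # (d+1) x k coefficients
    def policy(x):
        xb = np.append(np.asarray(x, dtype=float), 1.0)
        return int(np.argmin(xb @ coef))            # index of the chosen action
    return policy
```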
² We consider the k-class CSL problem formulated by Beygelzimer et al. [2], where an instance of the problem is defined by a distribution D over X × [0, ∞)^k, a space of features and associated costs for predicting each of the k labels for each realization of features. The goal is to learn a function which maps each element of X to a label in {1, ..., k} s.t. the expected cost is minimized.
A single iteration of Algorithm 1 proceeds as follows: (1) A node j is chosen whose outgoing edges connect only to leaf nodes. (2) The costs associated with each connected leaf node are found. (3) The policy π_j is trained on the entire set of training data according to these costs by solving a CSL problem. (4) The costs associated with taking the action π_j are computed for each example, and the costs of moving to state j are updated. (5) Outgoing edges from node j are removed (making it a leaf node), and (6) disconnected nodes (that were previously connected to node j) are removed. The algorithm iterates through these steps until all edges have been removed. We denote the policy functions trained on the empirical data using Alg. 1 as π_1^n, ..., π_K^n.

Algorithm 1 Graph Reduce Algorithm
Input: Data: (x_i, y_i)_{i=1}^n,
    DAG: (nodes S, edges E, costs C(x_i, y_i, e), ∀e ∈ E),
    CSL alg: Learn((x_1, w_1), ..., (x_n, w_n)) → π(·)
while Graph S is NOT empty do
    (1) Choose a node, j ∈ S, s.t. all children of j are leaf nodes
    for example i ∈ {1, ..., n} do
        (2) Construct the weight vector w_i of edge costs per action.
    end for
    (3) π_j ← Learn((x_1, w_1), ..., (x_n, w_n))
    (4) Evaluate π_j and update edge costs to node j:
        C(x_i, y_i, s_n, s_j) ← w_i^j(π_j(x_i)) + C(x_i, y_i, s_n, s_j)
    (5) Remove all outgoing edges from node j in E
    (6) Remove all disconnected nodes from S.
end while
Output: Policy functions, π_1, ..., π_K
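The following condensed sketch paraphrases Algorithm 1 in Python; the node ordering, edge-cost dictionary, and learner interface are our own assumptions rather than the authors' implementation.

```python
# Sketch of the Graph Reduce loop: sweep the DAG from leaves toward the root,
# solve a CSL problem at each node, then fold the realized cost of the learned
# policy back into the incoming edges and prune the node to a leaf.
import numpy as np

def graph_reduce(nodes_bottom_up, actions, edge_cost, X, learn_csl):
    """nodes_bottom_up: nodes ordered so children precede parents;
    actions[j]: list of actions (child nodes or 'SC') at node j;
    edge_cost[(j, a)]: length-n array of per-example costs of action a at j;
    learn_csl: a k-class cost-sensitive learner, e.g. the sketch above."""
    policies = {}
    n = X.shape[0]
    for j in nodes_bottom_up:
        W = np.column_stack([edge_cost[(j, a)] for a in actions[j]])
        pi = learn_csl(X, W)                                  # step (3)
        policies[j] = pi
        realized = np.array([W[i, pi(X[i])] for i in range(n)])
        for e in [e for e in edge_cost if e[1] == j]:         # edges into j
            edge_cost[e] = edge_cost[e] + realized            # step (4)
    return policies
```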
2.2 Analysis
Our goal is to show that the expected risk of the policy functions π_1, ..., π_K learned by Alg. 1 converges to the Bayes risk. We first state our main result:

Theorem 2.2. Alg. 1 is universally consistent, that is,

lim_{n→∞} L_D(π_1^n, ..., π_K^n) → L_D^*,   (9)

where π_1^n, ..., π_K^n are the policy functions learned using Alg. 1, which in turn uses Learn as described by Eq. (8).
Alg. 1 emulates a dynamic program applied in an empirical setting. Policy functions are decoupled and trained from leaf to root, conditioned on the output of descendant nodes.

To adapt to the empirical setting, we optimize at each stage over all examples in the training set. The key insight is the fact that universally consistent learners output optimal decisions over subsets of the space of data, that is, they are locally optimal. To illustrate this point, consider a standard classification problem. Let X′ ⊆ X be the support (or region) of examples induced by upstream deterministic decisions. d^* and f^*, the Bayes optimal classifiers w.r.t. the full space and the subset, respectively, are equal on the reduced support:

d^*(x) = argmin_d E[1_{d(x) ≠ y} | x] = f^*(x) = argmin_f E[1_{f(x) ≠ y} | x, x ∈ X′ ⊆ X]   ∀ x ∈ X′.

From this insight, we decouple learning problems while still training a system that converges to the Bayes risk. This can be achieved by training universally consistent CSL algorithms such as filter trees [2] that reduce the problem to binary classification. By learning consistent binary classifiers [1, 18], the risk of the cost-sensitive function can be shown to converge to the Bayes risk [2]. Proof of Theorem 2.2 is included in the Supplementary Material.
Computational Efficiency: Alg. 1 reduces the problem to solving a series of O(KM) binary classification problems, where K is the number of nodes in the DAG and M is the number of sensors. Finding each binary classifier is computationally efficient, reducing to a convex problem with O(n) variables. In contrast, nearly all previous approaches require solving a non-convex problem and resort to alternating optimization [25, 19] or greedy approximation [11]. Alternatively, convex surrogates proposed for the global problem [23, 20] require solving large convex programs with Ω(n) variables, even for simple linear decision functions. Furthermore, existing off-the-shelf algorithms cannot be applied to train these systems, often leading to less efficient implementations.
2.3 Generalization to Other Budgeted Learning Problems
Although we presented our algorithm in the context of supervised classification and a uniform linear sensor acquisition cost structure, the above framework holds for a wide range of problems.
In particular, any loss-based learning problem can be solved using the proposed DAG approach by generalizing the cost function:

C̃(x, y, s_j, s_k) = { c(x, y, s_j, s_k)   if s_k ≠ s_SC;   D(x, y, s_j)   otherwise },   (10)

where c(x, y, s_j, s_k) is the cost of acquiring the sensors in s_k \ s_j for example (x, y) given the current state s_j, and D(x, y, s_j) is some loss associated with applying sensor subset s_j to example (x, y).
This framework allows for significantly more complex budgeted learning problems to be handled. For example, the sensor acquisition cost, c(x, y, s_j, s_k), can be object dependent and non-linear, such as acquisition costs that increase with time (which can arise in image retrieval problems, where users are less likely to wait as time increases). The cost D(x, y, s_j) can include alternative losses such as ℓ_2 error in regression, precision error in ranking, or model error in structured learning. As in the supervised learning case, the learning functions and example labels do not need to be explicitly known. Instead, the system requires only empirical performance to be provided, allowing complex decision systems (such as humans) to be characterized, or systems to be learned where the classifiers and labels are sensitive information.
3 Adaptive Sensor Acquisition in High-Dimensions
So far, we considered the case where the DAG system allows for any subset of sensors to be acquired; however, this is often computationally intractable, as the number of nodes in the graph grows exponentially with the number of sensors. In practice, these complete systems are only feasible for data generated from a small set of sensors (10 or fewer).
3.1 Learning Sensor Subsets
Although constructing an exhaustive DAG for data with a large number of sensors is computationally intractable, in many cases this is unnecessary. Motivated by previous methods [6, 25, 11], we assume that the number of "active" nodes in the exhaustive graph is small, that is, these nodes are either not visited by any examples, or all examples that visit the node acquire the same next sensor. Equivalently, this can be viewed as the system needing only a small number of sensor subsets to classify all examples with low acquisition cost.

Rather than attempt to build the entire combinatorially sized graph, we instead use this assumption to first find these "active" subsets of sensors and construct a DAG to choose between unions of these subsets. The step of finding these sensor subsets can be viewed as a form of feature clustering, with the goal of grouping features that are jointly useful for classification. By doing so, the size of the DAG is reduced from exponential in the number of sensors, 2^M, to exponential in a much smaller, user-chosen number of subsets, 2^t. In experimental results, we limit t = 8, which allows for diverse subsets of sensors to be found while preserving computational tractability and efficiency.

Figure 2: An example of a DAG system using the 3 sensor subsets shown on the bottom left. The new states are the unions of these sensor subsets, with the system otherwise constructed in the same fashion as the small scale system.
Our goal is to learn sensor subsets with high classification performance and low acquisition cost (empirically low cost as defined in (1)). Ideally, we would jointly learn the subsets which minimize the empirical risk of the entire system as defined in (4); however, this presents a computationally intractable problem due to the exponential search space. Rather than attempt to solve this difficult problem directly, we minimize classification error over a collection of sensor subsets ω_1, ..., ω_t subject to a cost constraint on the total number of sensors used. We decouple the problem from the policy learning problem by assuming that each example is classified by the best possible subset. For a constant sensor cost, the problem can be expressed as a set constraint problem:

min_{ω_1, ..., ω_t} (1/N) Σ_{i=1}^N min_{j ∈ {1,...,t}} 1_{f_{ω_j}(x_i) ≠ y_i}   such that:   Σ_{j=1}^t |ω_j| ≤ B/δ,   (11)

where B is the total sensor budget over all sensor subsets and δ is the cost of a single sensor.
Although minimizing this loss is still computationally intractable, consider instead the equivalent problem of maximizing the "reward" (the event of a correct classification) of the subsets, defined as

G = Σ_{i=1}^N max_{j ∈ {1,...,t}} 1_{f_{ω_j}(x_i) = y_i},   max_{ω_1, ..., ω_t} (1/N) G(ω_1, ..., ω_t)   such that:   Σ_{j=1}^t |ω_j| ≤ B/δ.   (12)

This problem is related to the knapsack problem with a non-linear objective. Maximizing the reward in (12) is still computationally intractable; however, the reward function is structured to allow for efficient approximation.
Lemma 3.1. The objective of the maximization in (12) is sub-modular with respect to the set of subsets, such that adding any new set to the reward yields diminishing returns.

Theorem 3.2. Given that the empirical risk of each classifier f_{ω_k} is submodular and monotonically decreasing w.r.t. the elements in ω_k, and uniform sensor costs, the strategy in Alg. 2 is an O(1) approximation of the optimal reward in (12).

Proofs of these statements are included in the Supplementary Material and center on showing that the objective is sub-modular, and therefore applying a greedy strategy yields a (1 - 1/e) approximation of the optimal strategy [16].
3.2 Constructing DAG using Sensor Subsets

Algorithm 2 Sensor Subset Selection
Input: Number of subsets t, cost constraint B/δ
Output: Feature subsets ω_1, ..., ω_t
Initialize: ω_1, ..., ω_t = ∅
(i, j) = argmax_{i ∈ {1,...,t}} argmax_{j ∈ ω_i^C} G(ω_1, ..., ω_i ∪ j, ..., ω_t)
while Σ_{j=1}^t |ω_j| ≤ B/δ do
    ω_i = ω_i ∪ j
    (i, j) = argmax_{i ∈ {1,...,t}} argmax_{j ∈ ω_i^C} G(ω_1, ..., ω_i ∪ j, ..., ω_t)
end while

Alg. 2 requires computation of the reward G for only O((B/δ) tM) sensor subsets, where M is the number of sensors, to return a constant-order approximation to the NP-hard knapsack-type problem. Given the set of sensor subsets ω_1, ..., ω_t, we can now construct a DAG using all possible unions of these subsets, where each sensor subset ω_j is treated as a new single sensor, and apply the small scale system presented in Sec. 2. The result is an efficiently learned system with relatively low complexity yet a strong performance/cost trade-off. Additionally, this result can be extended to the case of non-uniform costs, where a simple extension of the greedy algorithm yields a constant-order approximation [12].

A simple case where three subsets are used is shown in Fig. 2. The three learned subsets of sensors are shown on the bottom left of Fig. 2, and these three subsets are then used to construct the entire DAG in the same fashion as in Fig. 1. At each stage, the state is represented by the union of the sensor subsets acquired. Grouping the sensors in this fashion reduces the size of the graph to 8 nodes, as opposed to the 64 nodes required if any subset of the 6 sensors can be selected. This approach allows us to map high-dimensional adaptive sensor selection problems to the small scale DAG of Sec. 2.
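A minimal sketch of the greedy step in Alg. 2 follows; the reward oracle G is assumed to be supplied by the user (e.g., validation accuracy when each example is classified by its best subset), and the uniform-cost budget is B/δ.

```python
# Sketch: greedy sensor-subset selection under a total-size budget.
def greedy_subsets(num_subsets, num_sensors, budget, reward):
    """reward(subsets) -> float, assumed monotone submodular (Lemma 3.1)."""
    subsets = [set() for _ in range(num_subsets)]
    while sum(len(w) for w in subsets) < budget:
        best = None
        for i in range(num_subsets):
            for j in range(num_sensors):
                if j in subsets[i]:
                    continue
                # Candidate: add sensor j to subset i, leave the rest unchanged.
                cand = [(w | {j}) if k == i else w for k, w in enumerate(subsets)]
                gain = reward(cand)
                if best is None or gain > best[0]:
                    best = (gain, i, j)
        if best is None:        # every sensor already in every subset
            break
        _, i, j = best
        subsets[i].add(j)
    return subsets
```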
4 Experimental Results
To demonstrate the performance of our DAG sensor acquisition system, we provide experimental results on data sets previously used in budgeted learning. Three data sets previously used for budget cascades [19, 23] are tested. In these data sets, examples are composed of a small number of sensors (under 4 sensors). To compare performance, we apply the LP approach to learning sensor trees [20] and construct trees containing all subsets of sensors, as opposed to fixed order cascades [19, 23].

Next, we examine performance of the DAG system using 3 higher dimensional sets of data previously used to compare budgeted learning performance [11]. In these cases, the dimensionality of the data (between 50 and 400 features) makes exhaustive subset construction computationally infeasible. We greedily construct sensor subsets using Alg. 2, then learn a DAG over all unions of these sensor subsets. We compare performance with CSTC [25] and ASTC [11].

For all experiments, we use cost sensitive filter trees [2], where each binary classifier in the tree is learned using logistic regression. Homogeneous polynomials are used as decision functions in the filter trees. For all experiments, uniform sensor costs were varied in the range [0, M] to achieve systems with different budgets. Performance between the systems is compared by plotting the average number of features acquired during test-time vs. the average test error.
4.1 Small Sensor Set Experiments
[Figure 3 appears here: three panels, (a) letter, (b) pima, (c) satimage, each plotting the average number of features used against the average test error for the LP Tree and DAG systems.]
Figure 3: Average number of sensors acquired vs. average test error comparison between LP tree systems and DAG systems.
We compare performance of our trained DAG with that of a complete tree trained using an LP surrogate [20] on the landsat, pima, and letter datasets. To construct each sensor DAG, we include all subsets of sensors (including the empty set) and connect any two nodes differing by a single sensor, with the edge directed from the smaller sensor subset to the larger sensor subset. By including the empty set, no initial sensor needs to be selected. 3rd-order homogeneous polynomials are used for both the classification and system functions in the LP and DAG.

As seen in Fig. 3, the systems learned with a DAG outperform the LP tree systems. Additionally, the performance of both systems is significantly better than previously reported performance on these data sets for budget cascades [19, 23]. This arises due to both the higher complexity of the classifiers and decision functions, as well as the flexibility of the sensor acquisition order in the DAG and LP tree compared to cascade structures. For this setting, the DAG approach appears superior to LP trees for learning budgeted systems.
4.2 Large Sensor Set Experiments
[Figure 4 appears here: three panels, (a) MiniBooNE, (b) Forest, (c) CIFAR, each plotting the average number of acquired features against the test error for ASTC, CSTC, and the DAG system.]
Figure 4: Comparison between CSTC, ASTC, and DAG of the average number of acquired features (x-axis) vs. test error (y-axis).
Next, we compare performance of our trained DAG with that of CSTC [25] and ASTC [11] on the MiniBooNE, Forest, and CIFAR datasets. We use the validation data to find the homogeneous polynomial that gives the best classification performance using all features (MiniBooNE: linear, Forest: 2nd order, CIFAR: 3rd order). These polynomial functions are then used for all classification and policy functions. For each data set, Alg. 2 was used to find 7 subsets, with an 8th subset of all features added. An exhaustive DAG was trained over all unions of these 8 subsets.

Fig. 4 shows performance comparing the average cost vs. average error of CSTC, ASTC, and our DAG system. The systems learned with a DAG outperform both CSTC and ASTC on the MiniBooNE and Forest data sets, with comparable performance on CIFAR at low budgets and superior performance at higher budgets.
Acknowledgments
This material is based upon work supported in part by the U.S. National Science Foundation Grant 1330008, by the Department of Homeland Security, Science and Technology Directorate, Office of University Programs, under Grant Award 2013-ST-061-ED0001, by ONR Grant 50202168 and US AF contract FA8650-14-C-1728. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the social policies, either expressed or implied, of the U.S. DHS, ONR or AF.
References
[1] P. Bartlett, M. Jordan, and J. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138-156, 2006.
[2] A. Beygelzimer, J. Langford, and P. Ravikumar. Multiclass classification with filter trees. 2007.
[3] R. Busa-Fekete, D. Benbouzid, and B. Kégl. Fast classification using sparse decision DAGs. In Proceedings of the 29th International Conference on Machine Learning, 2012.
[4] M. Chen, Z. Xu, K. Weinberger, O. Chapelle, and D. Kedem. Classifier cascade: Tradeoff between accuracy and feature evaluation cost. In International Conference on Artificial Intelligence and Statistics, 2012.
[5] G. Dulac-Arnold, L. Denoyer, P. Preux, and P. Gallinari. Datum-wise classification: a sequential approach to sparsity. In Machine Learning and Knowledge Discovery in Databases, pages 375-390. 2011.
[6] T. Gao and D. Koller. Active classification based on value of classifier. In Advances in Neural Information Processing Systems, volume 24, pages 1062-1070, 2011.
[7] H. He, H. Daumé III, and J. Eisner. Imitation learning by coaching. In Advances in Neural Information Processing Systems, pages 3158-3166, 2012.
[8] S. Ji and L. Carin. Cost-sensitive feature acquisition and classification. Pattern Recognition, 40(5), 2007.
[9] P. Kanani and P. Melville. Prediction-time active feature-value acquisition for cost-effective customer targeting. In Advances in Neural Information Processing Systems, 2008.
[10] S. Karayev, M. Fritz, and T. Darrell. Dynamic feature selection for classification on a budget. In International Conference on Machine Learning: Workshop on Prediction with Sequential Models, 2013.
[11] M. Kusner, W. Chen, Q. Zhou, Z. Xu, K. Weinberger, and Y. Chen. Feature-cost sensitive learning with submodular trees of classifiers. In Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014.
[12] J. Leskovec, A. Krause, C. Guestrin, C. Faloutsos, J. VanBriesen, and N. Glance. Cost-effective outbreak detection in networks. In International Conference on Knowledge Discovery and Data Mining, 2007.
[13] L. Maaten, M. Chen, S. Tyree, and K. Q. Weinberger. Learning with marginalized corrupted features. In Proceedings of the 30th International Conference on Machine Learning, 2013.
[14] F. Nan, J. Wang, and V. Saligrama. Feature-budgeted random forest. In Proceedings of the 32nd International Conference on Machine Learning, 2015.
[15] F. Nan, J. Wang, K. Trapeznikov, and V. Saligrama. Fast margin-based cost-sensitive classification. In International Conference on Acoustics, Speech and Signal Processing, 2014.
[16] G. Nemhauser, L. Wolsey, and M. Fisher. An analysis of approximations for maximizing submodular set functions - I. Mathematical Programming, 14(1):265-294, 1978.
[17] V. S. Sheng and C. X. Ling. Feature value acquisition in testing: A sequential batch test algorithm. In Proceedings of the 23rd International Conference on Machine Learning, pages 809-816, 2006.
[18] I. Steinwart. Consistency of support vector machines and other regularized kernel classifiers. IEEE Transactions on Information Theory, 51(1):128-142, 2005.
[19] K. Trapeznikov and V. Saligrama. Supervised sequential classification under budget constraints. In International Conference on Artificial Intelligence and Statistics, pages 581-589, 2013.
[20] J. Wang, T. Bolukbasi, K. Trapeznikov, and V. Saligrama. Model selection by linear programming. In European Conference on Computer Vision, pages 647-662, 2014.
[21] J. Wang and V. Saligrama. Local supervised learning through space partitioning. In Advances in Neural Information Processing Systems, pages 91-99. 2012.
[22] J. Wang and V. Saligrama. Locally-linear learning machines (L3M). In Asian Conference on Machine Learning, pages 451-466, 2013.
[23] J. Wang, K. Trapeznikov, and V. Saligrama. An LP for sequential learning under budgets. In International Conference on Artificial Intelligence and Statistics, pages 987-995, 2014.
[24] Z. Xu, O. Chapelle, and K. Weinberger. The greedy miser: Learning under test-time budgets. In Proceedings of the 29th International Conference on Machine Learning, 2012.
[25] Z. Xu, M. Kusner, M. Chen, and K. Weinberger. Cost-sensitive tree of classifiers. In Proceedings of the 30th International Conference on Machine Learning, pages 133-141, 2013.
[26] C. Zhang and Z. Zhang. A Survey of Recent Advances in Face Detection. Technical report, Microsoft Research, 2010.
5,505 | 5,983 | Estimating Jaccard Index with Missing Observations:
A Matrix Calibration Approach
Wenye Li
Macao Polytechnic Institute
Macao SAR, China
[email protected]
Abstract
The Jaccard index is a standard statistic for comparing the pairwise similarity between data samples. This paper investigates the problem of estimating a Jaccard index matrix when there are missing observations in data samples. Starting from
a Jaccard index matrix approximated from the incomplete data, our method calibrates the matrix to meet the requirement of positive semi-definiteness and other
constraints, through a simple alternating projection algorithm. Compared with
conventional approaches that estimate the similarity matrix based on the imputed
data, our method has a strong advantage in that the calibrated matrix is guaranteed to be closer to the unknown ground truth in the Frobenius norm than the
un-calibrated matrix (except in special cases they are identical). We carried out a
series of empirical experiments and the results confirmed our theoretical justification. The evaluation also reported significantly improved results in real learning
tasks on benchmark datasets.
1 Introduction
A critical task in data analysis is to determine how similar two data samples are. The applications
arise in many science and engineering disciplines. For example, in statistical and computing sciences, similarity analysis lays a foundation for cluster analysis, pattern classification, image analysis
and recommender systems [15, 8, 17].
A variety of similarity models have been established for different types of data. When data samples
can be represented as algebraic vectors, popular choices include cosine similarity model, linear
kernel model, and so on [24, 25]. When each vector element takes a value of zero or one, the
Jaccard index model is routinely applied, which measures the similarity by the ratio of the number
of unique elements common to two samples against the total number of unique elements in either of
them [14, 23].
Despite the wide applications, the Jaccard index model faces a non-trivial challenge when data
samples are not fully observed. As a treatment, imputation approaches may be applied, which
replace the missing observations with substituted values and then calculate the Jaccard index based
on the imputed data. Unfortunately, with a large portion of missing observations, imputing data
samples often becomes unreliable or even infeasible, as evidenced in our evaluation.
Instead of trying to fill in the missing values, this paper investigates a completely different approach
based on matrix calibration. Starting from an approximate Jaccard index matrix that is estimated
from incomplete samples, the proposed method calibrates the matrix to meet the requirement of
positive semi-definiteness and other constraints. The calibration procedure is carried out with a
simple yet flexible alternating projection algorithm.
The proposed method has a strong theoretical advantage. The calibrated matrix is guaranteed to be
better than, or at least identical to (in special cases), the un-calibrated matrix in terms of a shorter
Frobenius distance to the true Jaccard index matrix, which was verified empirically as well. Besides, our evaluation of the method also reported improved results in learning applications, and the
improvement was especially significant with a high portion of missing values.
A note on notation. Throughout the discussion, a data sample, A_i (1 ≤ i ≤ n), is treated as a set of features. Let F = {f_1, ..., f_d} be the set of all possible features. Without causing ambiguity, A_i also represents a binary-valued vector. If the j-th (1 ≤ j ≤ d) element of vector A_i is one, it means f_j ∈ A_i (feature f_j belongs to sample A_i); if the element is zero, f_j ∉ A_i; if the element is marked as missing, it remains unknown whether feature f_j belongs to sample A_i or not.
2 Background

2.1 The Jaccard index
The Jaccard index is a commonly used statistical indicator for measuring pairwise similarity [14, 23]. For two nonempty and finite sets A_i and A_j, it is defined to be the ratio of the number of elements in their intersection against the number of elements in their union:

J*_ij = |A_i ∩ A_j| / |A_i ∪ A_j|,

where |·| denotes the cardinality of a set.

The Jaccard index has a value of 0 when the two sets have no elements in common, 1 when they have exactly the same elements, and strictly between 0 and 1 otherwise. The two sets are more similar (have more common elements) when the value gets closer to 1.
For n
sets A1 , ? ? ? , An (n ? 2), the Jaccard index matrix is defined as an n ? n matrix J =
? n
Jij i,j=1 . The matrix is symmetric and all diagonal elements of the matrix are 1.
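For fully observed 0-1 data, the whole matrix can be computed with a couple of matrix operations; the following is a small illustrative sketch (our own, assuming every sample contains at least one feature).

```python
# Sketch: Jaccard index matrix for fully observed binary data.
import numpy as np

def jaccard_matrix(A):
    """A: n x d binary matrix; returns the n x n Jaccard index matrix J*."""
    A = A.astype(float)
    inter = A @ A.T                                   # |A_i ∩ A_j|
    row_sums = A.sum(axis=1)
    union = row_sums[:, None] + row_sums[None, :] - inter
    return inter / union

A = np.array([[1, 1, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]])
print(jaccard_matrix(A))   # diagonal is all ones; J_12 = 2/3, J_13 = 1/4, J_23 = 0
```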
2.2 Handling missing observations
When data samples are fully observed, the accurate Jaccard index can be obtained trivially by enumerating the intersection and the union between each pair of samples if both the number of samples
and the number of features are small. For samples with a large number of features, the index can
often be approximated by MinHash and related methods [5, 18], which avoid the explicit counting
of the intersection and the union of the two sets.
When data samples are not fully observed, however, obtaining the accurate Jaccard index generally
becomes infeasible. One na??ve approximation is to ignore the features with missing values. Only
those features that have no missing values in all samples are used to calculate the Jaccard index.
Obviously, for a large dataset with missing-at-random features, it is very likely that this method will
throw away all features and therefore does not work at all.
The mainstream work tries to replace the missing observations with substituted values, and then
calculates the Jaccard index based on the imputed data. Several simple approaches, including zero,
median and k-nearest neighbors (kNN) methods, are popularly used. A missing element is set to
zero, often implying the corresponding feature does not exist in a sample. It can also be set to the
median value (or the mean value) of the feature over all samples, or sometimes over a number of
nearest neighboring instances.
A more systematic imputation framework is based on the classical expectation maximization (EM)
algorithm [6], which generalizes maximum likelihood estimation to the case of incomplete data.
Assuming the existence of un-observed latent variables, the algorithm alternates between the expectation step and the maximization step, and finds maximum likelihood or maximum a posterior
estimates of the un-observed variables. In practice, the imputation is often carried out through iterating between learning a mixture of clusters of the filled data and re-filling missing values using
cluster means, weighted by the posterior probability that a cluster generates the samples [11].
3 Solution
Our work investigates the Jaccard index matrix estimation problem for incomplete data. Instead
of throwing away the un-observed features or imputing the missing values, a completely different
solution based on matrix calibration is designed.
3.1 Initial approximation
For a sample A_i, denote by O_i^+ the set of features that are known to be in A_i, and denote by O_i^- the set of features that are known to be not in A_i. Let O_i = O_i^+ ∪ O_i^-. If O_i = F, A_i is fully observed without missing values; otherwise, A_i is not fully observed and has missing values. The complement of O_i with respect to F, denoted by Ō_i, gives A_i's unknown features and missing values.
For two samples A_i and A_j with missing values, we approximate their Jaccard index by:

J^0_ij = |(O_i^+ ∩ O_j) ∩ (O_j^+ ∩ O_i)| / |(O_i^+ ∩ O_j) ∪ (O_j^+ ∩ O_i)| = |O_i^+ ∩ O_j^+| / |(O_i^+ ∩ O_j) ∪ (O_j^+ ∩ O_i)|.

Here we assume that each sample has at least one observed feature. It is obvious that J^0_ij is equal to the ground truth J*_ij if the samples are fully observed.
There exists an interval [α_ij, β_ij] in which the true value J*_ij lies, where

α_ij = 1 if i = j, and α_ij = |O_i^+ ∩ O_j^+| / |Ō_i ∪ Ō_j ∪ O_i^+ ∪ O_j^+| otherwise,

and

β_ij = 1 if i = j, and β_ij = |(Ō_i ∪ O_i^+) ∩ (Ō_j ∪ O_j^+)| / |(Ō_i ∩ Ō_j) ∪ O_i^+ ∪ O_j^+| otherwise.

The lower bound α_ij is obtained from the extreme case of setting the missing values in a way that the two sets have the fewest features in their intersection while having the most features in their union. On the contrary, the upper bound β_ij is obtained from the other extreme. When the samples are fully observed, the interval shrinks to a single point α_ij = β_ij = J*_ij.
3.2 Matrix calibration

Denote by J* = (J*_ij)_{i,j=1}^n the true Jaccard index matrix for a set of data samples {A_1, ..., A_n}; we have [2]:

Theorem 1. For a given set of data samples, its Jaccard index matrix J* is positive semi-definite.
For data samples with missing values, the matrix J^0 = (J^0_ij)_{i,j=1}^n often loses positive semi-definiteness. Nevertheless, it can be calibrated to ensure the property by seeking an n × n matrix J = (J_ij)_{i,j=1}^n to minimize:

L_0(J) = ||J - J^0||_F²

subject to the constraints:

J ⪰ 0, and α_ij ≤ J_ij ≤ β_ij (1 ≤ i, j ≤ n),

where J ⪰ 0 requires J to be positive semi-definite and ||·||_F denotes the Frobenius norm of a matrix, ||J||_F² = Σ_ij J_ij².
Let M_n be the set of n × n symmetric matrices. The feasible region defined by the constraints, denoted by R, is a nonempty closed and convex subset of M_n. Following standard results in optimization theory [20, 3, 10], the problem of minimizing L_0(J) is convex. Denote by P_R the projection onto R. The unique solution is given by the projection of J^0 onto R: J^0_R = P_R(J^0).

For J^0_R, we have:
Theorem 2. ||J* - J^0_R||_F² ≤ ||J* - J^0||_F². The equality holds iff J^0 ∈ R, i.e., J^0 = J^0_R.
Proof. Define an inner product on M_n that induces the Frobenius norm:

⟨X, Y⟩ = trace(X^T Y), for X, Y ∈ M_n.

Then

||J* - J^0||_F² = ||(J* - J^0_R) - (J^0 - J^0_R)||_F²
             = ||J* - J^0_R||_F² - 2⟨J* - J^0_R, J^0 - J^0_R⟩ + ||J^0 - J^0_R||_F²
             ≥ ||J* - J^0_R||_F² - 2⟨J* - J^0_R, J^0 - J^0_R⟩
             ≥ ||J* - J^0_R||_F².

The second "≥" holds due to the Kolmogorov criterion, which states that the projection of J^0 onto R, J^0_R, is unique and characterized by:

J^0_R ∈ R, and ⟨J - J^0_R, J^0 - J^0_R⟩ ≤ 0 for all J ∈ R.

The equality holds iff ||J^0 - J^0_R||_F² = 0 and ⟨J* - J^0_R, J^0 - J^0_R⟩ = 0, i.e., J^0 = J^0_R.
This key observation shows that projecting J^0 onto the feasible region R will produce an improved estimate towards J*, although this ground truth matrix remains unknown to us.
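The same Kolmogorov-criterion argument applies to the projection onto any closed convex set containing J*, so the effect is easy to check numerically. The toy sketch below (our own illustration) perturbs a positive semi-definite matrix and verifies that projecting back onto the positive semi-definite cone alone never increases the Frobenius distance to the truth.

```python
# Sketch: numerical check that a convex projection cannot move us away from J*.
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
J_true = B @ B.T                            # a PSD "ground truth"
J_noisy = J_true + 0.5 * rng.standard_normal((5, 5))
J_noisy = (J_noisy + J_noisy.T) / 2         # keep it symmetric

w, V = np.linalg.eigh(J_noisy)
J_proj = (V * np.clip(w, 0, None)) @ V.T    # projection onto the PSD cone

print(np.linalg.norm(J_true - J_noisy, "fro"))
print(np.linalg.norm(J_true - J_proj, "fro"))  # never larger than the line above
```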
3.3 Projection onto subsets
Based on the results in Section 3.2, we seek a minimizer of L_0(J) to improve the estimate J^0. Define two nonempty closed and convex subsets of M_n:

S = {X | X ∈ M_n, X ⪰ 0}

and

T = {X | X ∈ M_n, α_ij ≤ X_ij ≤ β_ij (1 ≤ i, j ≤ n)}.

Obviously R = S ∩ T. Now our minimization problem becomes finding the projection of J^0 onto the intersection of the two sets S and T with respect to the Frobenius norm. This can be done by studying the projection onto the two sets individually. Denote by P_S the projection onto S, and P_T the projection onto T. For projection onto T, a straightforward result based on the Kolmogorov criterion is:
Theorem 3. For a given matrix X ∈ M_n, its projection onto T, X_T = P_T(X), is given by

(X_T)_ij = X_ij if α_ij ≤ X_ij ≤ β_ij;  (X_T)_ij = α_ij if X_ij < α_ij;  (X_T)_ij = β_ij if X_ij > β_ij.
For projection onto S, a well known result is the following [12, 16, 13]:

Theorem 4. For X ∈ M_n and its singular value decomposition X = UΣV^T where Σ = diag(σ_1, ..., σ_n), the projection of X onto S is given by X_S = P_S(X) = UΣ⁺V^T, where Σ⁺ = diag(σ⁺_1, ..., σ⁺_n) and

σ⁺_i = σ_i if σ_i ≥ 0, and σ⁺_i = 0 otherwise.

The matrix X_S = P_S(X) gives the positive semi-definite matrix that most closely approximates X with respect to the Frobenius norm.
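Both projections are a few lines in practice; the sketch below is our own illustration, where the decomposition step of Theorem 4 is carried out as an eigendecomposition since the matrices here are symmetric.

```python
# Sketch: the two projections P_T (Theorem 3) and P_S (Theorem 4).
import numpy as np

def proj_T(X, alpha, beta):
    """Element-wise projection onto the interval constraints [alpha, beta]."""
    return np.clip(X, alpha, beta)

def proj_S(X):
    """Nearest PSD matrix in Frobenius norm: zero out negative eigenvalues."""
    X = (X + X.T) / 2                     # guard against numerical asymmetry
    w, V = np.linalg.eigh(X)
    return (V * np.clip(w, 0, None)) @ V.T
```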
3.4 Dykstra's algorithm
To study the orthogonal projection onto the intersection of subspaces, a classical result is von Neumann's alternating projection algorithm. Let H be a Hilbert space with two closed subspaces C_1 and C_2. The orthogonal projection onto the intersection C_1 ∩ C_2 can be obtained by the product of the two projections P_{C_1} P_{C_2} when the two projections commute (P_{C_1} P_{C_2} = P_{C_2} P_{C_1}). When they do not commute, the work shows that for each x^0 ∈ H, the projection of x^0 onto the intersection can be obtained as the limit point of a sequence of projections onto each subspace respectively: lim_{k→∞} (P_{C_2} P_{C_1})^k x^0 = P_{C_1 ∩ C_2} x^0. The algorithm generalizes to any finite number of subspaces and projections onto them.

Unfortunately, different from the application in [19], in our problem both S and T are not subspaces but subsets, and von Neumann's convergence result does not apply. The limit point of the generated sequence may converge to non-optimal points.
To handle the difficulty, Dykstra extended von Neumann's work and proposed an algorithm that works with subsets [9]. Consider the case of C = ∩_{i=1}^r C_i, where C is nonempty and each C_i is a closed and convex subset in H. Assume that for any x ∈ H, obtaining P_C(x) is hard, while obtaining each P_{C_i}(x) is easy. Starting from x^0 ∈ H, Dykstra's algorithm produces two sequences, the iterates x_i^k and the increments I_i^k. The two sequences are generated by:

x_0^k = x_r^{k-1},
x_i^k = P_{C_i}(x_{i-1}^k - I_i^{k-1}),
I_i^k = x_i^k - (x_{i-1}^k - I_i^{k-1}),

where i = 1, ..., r and k = 1, 2, .... The initial values are given by x_r^0 = x^0, I_i^0 = 0.
The sequence {x_i^k} converges to the optimal solution with a theoretical guarantee [9, 10].

Theorem 5. Let C_1, ..., C_r be closed and convex subsets of a Hilbert space H such that C = ∩_{k=1}^r C_k ≠ ∅. For any i = 1, ..., r and any x^0 ∈ H, the sequence {x_i^k} converges strongly to x_C^0 = P_C(x^0) (i.e., ||x_i^k - x_C^0|| → 0 as k → ∞).
The convergence rate of Dykstra's algorithm for polyhedral sets is linear [7], which coincides with the convergence rate of von Neumann's alternating projection method.
3.5 An iterative method
Based on the discussion in Section 3.4, we have a simple approach, shown in Algorithm 1, that finds the projection of an initial matrix J^0 onto the nonempty set R = S ∩ T. Here the projections onto S and T are given by the two theorems in Section 3.3. The algorithm stops when J^k falls into the feasible region or when a maximal number of iterations is reached. For practical implementation, a more robust stopping criterion can be adopted [1].
3.6 Related work
Finding a positive semi-definite matrix that is closest to a given matrix is a well-studied problem in mathematical optimization, and a number of methods have been proposed recently. The idea of the alternating projection method was first applied in a financial application [13]. The problem can also be phrased as a semi-definite programming (SDP) model [13] and solved via the interior-point method. In the work of [21] and [4], the quasi-Newton method and the projected gradient method were applied to the Lagrangian dual of the original problem, which reported faster results than the SDP formulation. An even faster Newton's method was developed in [22] by investigating the dual problem, which is unconstrained with a twice continuously differentiable objective function and has a quadratically convergent solution.
Algorithm 1 Projection onto R = S ∩ T
Require: Initial matrix J^0
k = 0
J_T^0 = J^0
I_S^0 = 0
I_T^0 = 0
while NOT CONVERGED do
    J_S^{k+1} = P_S(J_T^k - I_S^k)
    I_S^{k+1} = J_S^{k+1} - (J_T^k - I_S^k)
    J_T^{k+1} = P_T(J_S^{k+1} - I_T^k)
    I_T^{k+1} = J_T^{k+1} - (J_S^{k+1} - I_T^k)
    k = k + 1
end while
return J^k = J_T^k
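A runnable sketch of Algorithm 1 follows (our own illustration; the projections are restated for self-containment, and the simple iterate-difference stopping rule stands in for the more robust criterion of [1]).

```python
# Sketch: Dykstra's alternating projections onto S and T (Algorithm 1).
import numpy as np

def proj_T(X, alpha, beta):
    return np.clip(X, alpha, beta)

def proj_S(X):
    w, V = np.linalg.eigh((X + X.T) / 2)
    return (V * np.clip(w, 0, None)) @ V.T

def calibrate(J0, alpha, beta, max_iter=500, tol=1e-8):
    """Project the initial estimate J0 onto R = S ∩ T."""
    JT, IS, IT = J0.copy(), np.zeros_like(J0), np.zeros_like(J0)
    for _ in range(max_iter):
        JS = proj_S(JT - IS)          # project with the S-increment removed
        IS = JS - (JT - IS)           # update the S-increment
        JT_new = proj_T(JS - IT, alpha, beta)
        IT = JT_new - (JS - IT)       # update the T-increment
        if np.linalg.norm(JT_new - JT) < tol:
            return JT_new
        JT = JT_new
    return JT
```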
4 Evaluation
To evaluate the performance of the proposed method, four benchmark datasets were used in our experiments.

- MNIST: a grayscale image database of handwritten digits ("0" to "9"). After binarization, each image is represented as a 784-dimensional 0-1 vector.
- USPS: another grayscale image database of handwritten digits. After binarization, each image is represented as a 256-dimensional 0-1 vector.
- PROTEIN: a bioinformatics database with three classes of instances. Each instance is represented as a sparse 357-dimensional 0-1 vector.
- WEBSPAM: a dataset with both spam and non-spam web pages. Each page is represented as a 0-1 vector. The data are highly sparse. On average one vector has about 4,000 non-zero values out of more than 16 million features.
Our experiments have two objectives. One is to verify the effectiveness of the proposed method in estimating the Jaccard index matrix, by measuring the deviation of the calibrated matrix from the ground truth in the Frobenius norm. The other is to evaluate the performance of the calibrated matrix in general learning applications. The comparison is made against the popular imputation approaches listed in Section 2.2, including the zero, kNN and EM¹ approaches. (As the median approach gave very similar performance as the zero approach, its results are not reported separately.)
4.1 Jaccard index matrix estimation
The experiment was carried out under various settings. For each dataset, we experimented with
1, 000 and 10, 000 samples respectively. For each sample, different portions (from 10% to 90%)
of feature values were marked as missing, which was assumed to be ?missing at random? and all
features had the same probability of being marked.
As mentioned in Section 3, for the proposed calibration approach, an initial Jaccard index matrix
was firstly built based on the incomplete data. Then the matrix was calibrated to meet the positive
semi-definite requirement and the lower and upper bounds requirement. While for the imputation
approaches, the Jaccard index matrix was calculated directly from the imputed data.
Note that for the kNN approach, we iterated different k from 1 to 5 and the best result was collected,
which actually overestimated its performance. Under some settings, the results of the EM approach
were not available due to its prohibitive computational requirement to our platform.
The results are presented through the comparison of mean square deviations from the ground truth Jaccard index matrix J*.

¹ ftp://ftp.cs.toronto.edu/pub/zoubin/old/EMcode.tar.Z
[Figure 1 appears here: eight panels, (a) MNIST, (b) USPS, (c) PROTEIN, (d) WEBSPAM for 1,000 samples and (e) MNIST, (f) USPS, (g) PROTEIN, (h) WEBSPAM for 10,000 samples, each plotting the ratio of observed features against the mean square deviation (log-scale) for ZERO/MEDIAN, kNN, EM, NO_CALIBRATION and CALIBRATION.]
Figure 1: Mean square deviations from the ground truth on benchmark datasets by different methods. Horizontal: percentages of observed values (from 10% to 90%); Vertical: mean square deviations in log-scale. (a)-(d): 1,000 samples; (e)-(h): 10,000 samples. (For better visualization effect of the results shown in color, the reader is referred to the soft copy of this paper.)
For an n × n estimated matrix Ĵ, its mean square deviation from J* is defined as the square Frobenius distance between the two matrices, divided by the number of elements, i.e., (1/n²) Σ_{i,j=1}^n (Ĵ_ij - J*_ij)². In addition to the comparison with the popular approaches, the mean square deviation between the un-calibrated matrix J^0 and J*, shown as NO_CALIBRATION, is also reported as a baseline.
Figure 1 shows the results. It can be seen that the calibrated matrices reported the smallest deviation from the ground truth in nearly all experiments. The improvement is especially significant when the ratio of observed features is low (the missing ratio is high), and the calibrated matrix is guaranteed to be no worse than the un-calibrated matrix. As evidenced in the results, none of the imputation approaches comes with such a guarantee.
4.2 Supervised learning
Having seen that the proposed method reduces the deviation from the ground truth matrix, we would like to further investigate whether this improvement indeed benefits practical applications, specifically in supervised learning.
We applied the calibrated results in nearest-neighbor classification tasks. Given a training set of labeled samples, we tried to predict the labels of the samples in the testing set. For each testing sample, its label was determined by the label of the training sample that had the largest Jaccard index value with it (see the sketch below).
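A minimal sketch of this 1-NN protocol, written directly against a precomputed Jaccard index matrix; this is not the authors' implementation, and all variable names are illustrative:

```python
import numpy as np

def nn_predict(J, train_idx, test_idx, labels):
    """1-NN classification using a (calibrated) Jaccard index matrix J:
    each test sample takes the label of the training sample that has
    the largest Jaccard index value with it."""
    preds = np.empty(len(test_idx), dtype=labels.dtype)
    for k, i in enumerate(test_idx):
        nearest = train_idx[np.argmax(J[i, train_idx])]
        preds[k] = labels[nearest]
    return preds

# Toy usage with a random symmetric stand-in "similarity" matrix.
rng = np.random.default_rng(0)
J = rng.random((10, 10)); J = (J + J.T) / 2
labels = rng.integers(0, 2, size=10)
idx = rng.permutation(10)
train_idx, test_idx = idx[:9], idx[9:]   # 90%/10% split as in the text
err = np.mean(nn_predict(J, train_idx, test_idx, labels) != labels[test_idx])
```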
Similarly, the experiment was carried out with 1,000/10,000 samples and different portions of missing values from 10% to 90%, respectively. In each run, 90% of the samples were randomly chosen as the training set and the remaining 10% were used as the testing set. The mean and standard deviation of the classification errors over 1,000 runs were reported. As a reference, the results from the ground truth matrix J*, shown as FULLY_OBSERVED, were also included.
Figure 2 shows the results. Again, the matrix calibration method reported clearly improved results over the imputation approaches in most experiments. The improvement verifies the benefit brought by the reduced deviation from the true Jaccard index matrix, and therefore justifies the usefulness of the proposed method in learning applications.
[Figure 2 here: eight plots of classification error versus the ratio of observed features, comparing FULLY_OBSERVED, ZERO/MEDIAN, kNN, EM, NO_CALIBRATION, and CALIBRATION on (a) MNIST, (b) USPS, (c) PROTEIN, (d) WEBSPAM with 1,000 samples, and (e)-(h) the same datasets with 10,000 samples.]
Figure 2: Classification errors on benchmark datasets by different methods. Horizontal: percentage of observed values (from 10% to 90%); vertical: classification errors. (a)-(d): 1,000 samples; (e)-(h): 10,000 samples. (For better visualization of the results shown in color, the reader is referred to the soft copy of this paper.)
5 Discussion and conclusion
The Jaccard index measures the pairwise similarity between data samples and is routinely used in real applications. Unfortunately, in practice it is non-trivial to estimate the Jaccard index matrix for incomplete data samples. This paper investigates the problem and proposes a matrix calibration approach that differs completely from the existing methods. Instead of throwing away the unknown features or imputing the missing values, the proposed approach calibrates any approximate Jaccard index matrix by enforcing the positive semi-definiteness requirement on the matrix. It is theoretically shown and empirically verified that the approach indeed brings improvement in practical problems.
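To make the projection steps concrete, here is a simplified sketch of the two projections such a calibration alternates between. This is not the paper's algorithm: the paper uses Dykstra's procedure (which adds correction terms so the limit is the nearest feasible matrix), and the exact bound constraints are assumed here to be entries in [0, 1] with a unit diagonal:

```python
import numpy as np

def project_psd(A):
    """Project a symmetric matrix onto the PSD cone by clipping eigenvalues."""
    w, V = np.linalg.eigh((A + A.T) / 2)
    return (V * np.clip(w, 0, None)) @ V.T

def project_bounds(A):
    """Project onto assumed box constraints of a Jaccard matrix:
    entries in [0, 1] and a unit diagonal."""
    A = np.clip(A, 0.0, 1.0)
    np.fill_diagonal(A, 1.0)
    return A

def calibrate(J0, iters=200):
    """Simplified von Neumann-style alternating projections, for
    illustration only; Dykstra's algorithm would add correction terms."""
    J = J0.copy()
    for _ in range(iters):
        J = project_bounds(project_psd(J))
    return J
```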
One point that is not particularly addressed in this paper is the computational complexity issue. We adopted a simple alternating projection procedure based on Dykstra's algorithm. The computational complexity of the algorithm depends heavily on the successive matrix decompositions, which become expensive when the matrix is large. Calibrating a Jaccard index matrix for 1,000 samples can be finished in seconds on our platform, while calibrating a matrix for 10,000 samples quickly increases to more than an hour. Further investigation of faster solutions is thus necessary for scalability.
Actually, there is a simple divide-and-conquer heuristic to calibrate a large matrix: first divide the matrix into small sub-matrices, then calibrate each sub-matrix to meet the constraints, and finally merge the results (see the sketch below). Although the heuristic may not give the optimal result, it still guarantees to produce a matrix better than or identical to the un-calibrated one. The heuristic runs with high parallel efficiency and easily scales to very large matrices. The detailed discussion is omitted here due to the space limit.
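One plausible reading of the heuristic (calibrating principal diagonal sub-blocks independently, which is an assumption since the text does not specify the partitioning) can be sketched as:

```python
def calibrate_blockwise(J0, calibrate_fn, block=1000):
    """Divide-and-conquer heuristic: calibrate diagonal sub-blocks
    independently with calibrate_fn (e.g., the single-block routine
    sketched above) and merge; off-diagonal blocks are left untouched."""
    J = J0.copy()
    n = J.shape[0]
    for start in range(0, n, block):
        end = min(start + block, n)
        J[start:end, start:end] = calibrate_fn(J[start:end, start:end])
    return J
```

Because each block is calibrated independently, the loop body parallelizes trivially across blocks, which is the source of the high parallel efficiency mentioned above.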
Acknowledgments
The work is supported by The Science and Technology Development Fund (Project No.
006/2014/A), Macao SAR, China.
References
[1] E.G. Birgin and M. Raydan. Robust stopping criteria for Dykstra's algorithm. SIAM Journal on Scientific Computing, 26(4):1405-1414, 2005.
[2] M. Bouchard, A.L. Jousselme, and P.E. Doré. A proof for the positive definiteness of the Jaccard index matrix. International Journal of Approximate Reasoning, 54(5):615-626, 2013.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, New York, NY, USA, 2004.
[4] S. Boyd and L. Xiao. Least-squares covariance matrix adjustment. SIAM Journal on Matrix Analysis and Applications, 27(2):532-546, 2005.
[5] A.Z. Broder, M. Charikar, A.M. Frieze, and M. Mitzenmacher. Min-wise independent permutations. In Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing, pages 327-336. ACM, 1998.
[6] A.P. Dempster, N.M. Laird, and D.B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39:1-38, 1977.
[7] F. Deutsch. Best Approximation in Inner Product Spaces. Springer, New York, NY, USA, 2001.
[8] R.O. Duda and P.E. Hart. Pattern Classification. John Wiley and Sons, Hoboken, NJ, USA, 2000.
[9] R.L. Dykstra. An algorithm for restricted least squares regression. Journal of the American Statistical Association, 78(384):837-842, 1983.
[10] R. Escalante and M. Raydan. Alternating Projection Methods. SIAM, Philadelphia, PA, USA, 2011.
[11] Z. Ghahramani and M.I. Jordan. Supervised learning from incomplete data via an EM approach. In Advances in Neural Information Processing Systems, volume 6, pages 120-127. Morgan Kaufmann, 1994.
[12] G.H. Golub and C.F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, MD, USA, 1996.
[13] N.J. Higham. Computing the nearest correlation matrix - a problem from finance. IMA Journal of Numerical Analysis, 22:329-343, 2002.
[14] P. Jaccard. The distribution of the flora in the alpine zone. New Phytologist, 11(2):37-50, 1912.
[15] A.K. Jain, M.N. Murty, and P.J. Flynn. Data clustering: A review. ACM Computing Surveys, 31(3):264-323, 1999.
[16] D.L. Knol and J.M.F. ten Berge. Least-squares approximation of an improper correlation matrix by a proper one. Psychometrika, 54(1):53-61, 1989.
[17] J. Leskovec, A. Rajaraman, and J. Ullman. Mining of Massive Datasets. Cambridge University Press, New York, NY, USA, 2014.
[18] P. Li and A.C. König. Theory and applications of b-bit minwise hashing. Communications of the ACM, 54(8):101-109, 2011.
[19] W. Li, K.H. Lee, and K.S. Leung. Large-scale RLSC learning without agony. In Proceedings of the 24th International Conference on Machine Learning, pages 529-536. ACM, 2007.
[20] D.G. Luenberger. Optimization by Vector Space Methods. John Wiley & Sons, New York, NY, USA, 1969.
[21] J. Malick. A dual approach to semidefinite least-squares problems. SIAM Journal on Matrix Analysis and Applications, 26(1):272-284, 2004.
[22] H. Qi and D. Sun. A quadratically convergent Newton method for computing the nearest correlation matrix. SIAM Journal on Matrix Analysis and Applications, 28(2):360-385, 2006.
[23] D.J. Rogers and T.T. Tanimoto. A computer program for classifying plants. Science, 132(3434):1115-1118, 1960.
[24] G. Salton, A. Wong, and C.S. Yang. A vector space model for automatic indexing. Communications of the ACM, 18(11):613-620, 1975.
[25] B. Schölkopf and A.J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. The MIT Press, Cambridge, MA, USA, 2001.
5,506 | 5,984 | Sample Efficient Path Integral Control under
Uncertainty
Yunpeng Pan, Evangelos A. Theodorou, and Michail Kontitsis
Autonomous Control and Decision Systems Laboratory
Institute for Robotics and Intelligent Machines
School of Aerospace Engineering
Georgia Institute of Technology, Atlanta, GA 30332
{ypan37,evangelos.theodorou,kontitsis}@gatech.edu
Abstract
We present a data-driven optimal control framework that is derived using the path
integral (PI) control approach. We find iterative control laws analytically without a
priori policy parameterization, based on a probabilistic representation of the learned dynamics model. The proposed algorithm operates in a forward-backward manner, which differentiates it from other PI-related methods that perform forward sampling to find optimal controls. Our method uses significantly fewer samples to find
analytic control laws compared to other approaches within the PI control family
that rely on extensive sampling from given dynamics models or trials on physical
systems in a model-free fashion. In addition, the learned controllers can be generalized to new tasks without re-sampling based on the compositionality theory for
the linearly-solvable optimal control framework. We provide experimental results
on three different tasks and comparisons with state-of-the-art model-based methods to demonstrate the efficiency and generalizability of the proposed framework.
1 Introduction
Stochastic optimal control (SOC) is a general and powerful framework with applications in many
areas of science and engineering. However, despite the broad applicability, solving SOC problems
remains challenging for systems in high-dimensional continuous state-action spaces. Various function approximation approaches to optimal control are available [1, 2], but they are usually sensitive to model
uncertainty. Over the last decade, SOC based on exponential transformation of the value function has
demonstrated remarkable applicability in solving real world control and planning problems. In control theory the exponential transformation of the value function was introduced in [3, 4]. In the recent
decade it has been explored in terms of path integral interpretations and theoretical generalizations
[5, 6, 7, 8], discrete time formulations [9], and scalable RL/control algorithms [10, 11, 12, 13, 14].
The resulting stochastic optimal control frameworks are known as Path Integral (PI) control for continuous time, Kullback Leibler (KL) control for discrete time, or more generally Linearly Solvable
Optimal Control [9, 15].
One of the most attractive characteristics of PI control is that optimal control problems can be solved
with forward sampling of Stochastic Differential Equations (SDEs). While the process of sampling
with SDEs is more scalable than numerically solving partial differential equations, it still suffers
from the curse of dimensionality when performed in a naive fashion. One way to circumvent this
problem is to parameterize policies [10, 11, 14] and then perform optimization with sampling. However, in this case one has to impose the structure of the policy a priori, and therefore restrict the possible optimal control solutions to the assumed parameterization. In addition, the optimized policy parameters cannot be generalized to new tasks. In general, model-free PI policy search approaches require a large number of samples from trials performed on real physical systems. This sample inefficiency further restricts the applicability of PI control methods on physical systems with unknown or partially known dynamics.
Motivated by the aforementioned limitations, in this paper we introduce a sample-efficient, model-based approach to PI control. Different from existing PI control approaches, our method combines the benefits of PI control theory [5, 6, 7] and probabilistic model-based reinforcement learning methodologies [16, 17]. The main characteristics of our approach are summarized as follows:
• It extends the PI control theory [5, 6, 7] to the case of uncertain systems. The structural constraint is enforced between the control cost and the uncertainty of the learned dynamics, which can be viewed as a generalization of previous work [5, 6, 7].
• Different from parameterized PI controllers [10, 11, 14, 8], we find analytic control laws without any policy parameterization.
• Rather than keeping a fixed control cost weight [5, 6, 7, 10, 18], or ignoring the constraint between control authority and noise level [11], in this work the control cost weight is adapted based on the explicit uncertainty of the learned dynamics model.
• The algorithm operates in a different manner compared to existing PI-related methods that perform forward sampling [5, 6, 7, 10, 18, 11, 12, 14, 8]. More precisely, our method performs successive deterministic approximate inference and backward computation of the optimal control law.
• The proposed model-based approach is significantly more sample efficient than sampling-based PI control [5, 6, 7, 18]. In the RL setting our method is comparable to the state-of-the-art RL methods [17, 19] in terms of sample and computational efficiency.
• Thanks to the linearity of the backward Chapman-Kolmogorov PDE, the learned controllers can be generalized to new tasks without re-sampling by constructing composite controllers. In contrast, most policy search and trajectory optimization methods [10, 11, 14, 17, 19, 20, 21, 22] find policy parameters that cannot be generalized.
2 Iterative Path Integral Control for a Class of Uncertain Systems
2.1 Problem formulation
We consider a nonlinear stochastic system described by the following differential equation
$$dx = \big(f(x) + G(x)u\big)\,dt + B\,d\omega, \qquad (1)$$
with state $x \in \mathbb{R}^n$, control $u \in \mathbb{R}^m$, and standard Brownian motion noise $\omega \in \mathbb{R}^p$ with variance $\Sigma_\omega$. $f(x)$ is the unknown drift term (passive dynamics), $G(x) \in \mathbb{R}^{n \times m}$ is the control matrix and $B \in \mathbb{R}^{n \times p}$ is the diffusion matrix. Given some previous control $u^{old}$, we seek the optimal control correction term $\delta u$ such that the total control is $u = u^{old} + \delta u$. The original system becomes
$$dx = \big(f(x) + G(x)(u^{old} + \delta u)\big)\,dt + B\,d\omega = \underbrace{\big(f(x) + G(x)u^{old}\big)}_{\tilde{f}(x,\,u^{old})}\,dt + G(x)\,\delta u\,dt + B\,d\omega.$$
In this work we assume the dynamics under the previous control can be represented by a Gaussian process (GP) such that
$$f_{GP}(x) = \tilde{f}(x, u^{old})\,dt + B\,d\omega, \qquad (2)$$
where $f_{GP}$ is the GP representation of the biased drift term $\tilde{f}$ under the previous control. Now the original dynamical system (1) can be represented as follows:
$$dx = f_{GP} + G\,\delta u\,dt, \qquad f_{GP} \sim \mathcal{GP}(\mu_f, \Sigma_f), \qquad (3)$$
where $\mu_f$, $\Sigma_f$ are predictive mean and covariance functions, respectively. For the GP model we use a prior of zero mean and covariance function $K(x_i, x_j) = \sigma_s^2 \exp\big(-\frac{1}{2}(x_i - x_j)^T W (x_i - x_j)\big) + \delta_{ij}\sigma_\omega^2$, with $\sigma_s$, $\sigma_\omega$, $W$ the hyper-parameters. $\delta_{ij}$ is the Kronecker symbol that is one iff $i = j$ and zero otherwise. Samples over $f_{GP}$ can be drawn using a vector of i.i.d. Gaussian variables $\epsilon$:
$$\tilde{f}_{GP} = \mu_f + L_f\,\epsilon, \qquad (4)$$
where $L_f$ is obtained using Cholesky factorization such that $\Sigma_f = L_f L_f^T$. Note that generally $\epsilon$ is an infinite-dimensional vector and we can use the same sample to represent uncertainty during learning [23]. Without loss of generality we assume $\omega$ to be the standard zero-mean Brownian motion.
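Equation (4) is the standard reparameterization of a Gaussian draw. A minimal NumPy sketch follows; the jitter term is an implementation detail added here for numerical stability and is not in the text:

```python
import numpy as np

def sample_gp_increment(mu_f, Sigma_f, rng, jitter=1e-9):
    """Draw f_GP = mu_f + L_f @ eps, where Sigma_f = L_f L_f^T (Cholesky)."""
    L_f = np.linalg.cholesky(Sigma_f + jitter * np.eye(len(mu_f)))
    eps = rng.standard_normal(len(mu_f))
    return mu_f + L_f @ eps
```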
For the rest of the paper we use simplified notations with subscripts indicating the time step. The discrete-time representation of the system is $x_{t+dt} = x_t + \tilde{f}_t + G_t\,\delta u_t\,dt + L_{f_t}\epsilon_t\sqrt{dt}$, and the conditional probability of $x_{t+dt}$ given $x_t$ and $\delta u_t$ is a Gaussian $p(x_{t+dt}\,|\,x_t, \delta u_t) = \mathcal{N}(\mu_{t+dt}, \Sigma_{t+dt})$, where $\mu_{t+dt} = x_t + \tilde{f}_t + G_t\,\delta u_t$ and $\Sigma_{t+dt} = \Sigma_{f_t}$. In this paper we consider a finite-horizon stochastic optimal control problem
$$J(x_0) = \mathbb{E}\Big[q(x_T) + \int_{t=0}^{T} L(x_t, \delta u_t)\,dt\Big],$$
where the immediate cost is defined as $L(x_t, \delta u_t) = q(x_t) + \frac{1}{2}\delta u_t^T R_t\,\delta u_t$, and $q(x_t) = (x_t - x_t^d)^T Q (x_t - x_t^d)$ is a quadratic cost function where $x_t^d$ is the desired state. $R_t = R(x_t)$ is a state-dependent positive definite weight matrix. Next we show the linearized Hamilton-Jacobi-Bellman equation for this class of optimal control problems.
2.2 Linearized Hamilton-Jacobi-Bellman equation for uncertain dynamics
At each iteration the goal is to find the optimal control update $\delta u_t$ that minimizes the value function
$$V(x_t, t) = \min_{\delta u_t} \mathbb{E}\Big[\int_t^{t+dt} L(x_t, \delta u_t)\,dt + V(x_t + dx_t,\, t + dt)\,\Big|\,x_t\Big]. \qquad (5)$$
(5) is the Bellman equation. By approximating the integral for a small $dt$ and applying Itô's rule we obtain the Hamilton-Jacobi-Bellman (HJB) equation (detailed derivation is skipped):
$$-\partial_t V_t = \min_{\delta u_t}\Big(q_t + \frac{1}{2}\delta u_t^T R_t\,\delta u_t + (\tilde{f}_t + G_t\,\delta u_t)^T \nabla_x V_t + \frac{1}{2}\mathrm{Tr}(\Sigma_{f_t} \nabla_{xx} V_t)\Big).$$
To find the optimal control update, we take the gradient of the above expression (inside the parentheses) with respect to $\delta u_t$ and set it to 0. This yields $\delta u_t = -R_t^{-1} G_t^T \nabla_x V_t$. Inserting this expression into the HJB equation yields the following nonlinear and second-order PDE
$$-\partial_t V_t = q_t + (\nabla_x V_t)^T \tilde{f}_t - \frac{1}{2}(\nabla_x V_t)^T G_t R_t^{-1} G_t^T \nabla_x V_t + \frac{1}{2}\mathrm{Tr}(\Sigma_{f_t} \nabla_{xx} V_t). \qquad (6)$$
In order to solve the above PDE we use the exponential transformation of the value function $V_t = -\lambda \log \Psi_t$, where $\Psi_t = \Psi(x_t)$ is called the desirability of $x_t$. The corresponding partial derivatives are $\partial_t V_t = -\frac{\lambda}{\Psi_t}\partial_t \Psi_t$, $\nabla_x V_t = -\frac{\lambda}{\Psi_t}\nabla_x \Psi_t$ and $\nabla_{xx} V_t = \frac{\lambda}{\Psi_t^2}\nabla_x \Psi_t \nabla_x \Psi_t^T - \frac{\lambda}{\Psi_t}\nabla_{xx}\Psi_t$. Inserting these terms into (6) results in
$$\frac{\lambda}{\Psi_t}\partial_t \Psi_t = q_t - \frac{\lambda}{\Psi_t}(\nabla_x \Psi_t)^T \tilde{f}_t - \frac{\lambda^2}{2\Psi_t^2}(\nabla_x \Psi_t)^T G_t R_t^{-1} G_t^T \nabla_x \Psi_t + \frac{\lambda^2}{2\Psi_t^2}\mathrm{Tr}\big((\nabla_x \Psi_t)^T \Sigma_{f_t} \nabla_x \Psi_t\big) - \frac{\lambda}{2\Psi_t}\mathrm{Tr}(\nabla_{xx}\Psi_t\,\Sigma_{f_t}).$$
The quadratic terms in $\nabla_x \Psi_t$ cancel out under the assumption $\lambda G_t R_t^{-1} G_t^T = \Sigma_{f_t}$. This constraint is different from existing works in path integral control [5, 6, 7, 10, 18, 8], where the constraint is enforced between the additive noise covariance and the control authority, more precisely $\lambda G_t R_t^{-1} G_t^T = B \Sigma_\omega B^T$. The new constraint enables an adaptive update of the control cost weight based on the explicit uncertainty of the learned dynamics. In contrast, most existing works use a fixed control cost weight [5, 6, 7, 10, 18, 12, 14, 8]. This condition also leads to more exploration (more aggressive control) under high uncertainty and less exploration with more certain dynamics. Given the aforementioned assumption, the above PDE simplifies to
$$\partial_t \Psi_t = \frac{1}{\lambda} q_t \Psi_t - \tilde{f}_t^T \nabla_x \Psi_t - \frac{1}{2}\mathrm{Tr}(\nabla_{xx}\Psi_t\,\Sigma_{f_t}), \qquad (7)$$
subject to the terminal condition $\Psi_T = \exp(-\frac{1}{\lambda} q_T)$. The resulting Chapman-Kolmogorov PDE (7) is linear. In general, solving (7) analytically is intractable for nonlinear systems and cost functions. We apply the Feynman-Kac formula, which gives a probabilistic representation of the solution of the linear PDE (7):
$$\Psi_t = \lim_{dt \to 0} \int p(\tau_t\,|\,x_t)\,\exp\Big(-\frac{1}{\lambda}\sum_{j=t}^{T-dt} q_j\,dt\Big)\,\Psi_T\,d\tau_t, \qquad (8)$$
where $\tau_t$ is the state trajectory from time $t$ to $T$. The optimal control is obtained as
$$G_t\,\delta\hat{u}_t = -G_t R_t^{-1} G_t^T (\nabla_x V_t) = \lambda G_t R_t^{-1} G_t^T\,\frac{\nabla_x \Psi_t}{\Psi_t} = \Sigma_{f_t}\,\frac{\nabla_x \Psi_t}{\Psi_t}$$
$$\Longrightarrow\quad \hat{u}_t = u_t^{old} + \delta\hat{u}_t = u_t^{old} + G_t^{-1}\Sigma_{f_t}\,\frac{\nabla_x \Psi_t}{\Psi_t}. \qquad (9)$$
Rather than computing $\nabla_x \Psi_t$ and $\Psi_t$, the optimal control $\hat{u}_t$ can be approximated based on path costs of sampled trajectories. Next we briefly review some of the existing approaches.
2.3 Related works
According to the path integral control theory [5, 6, 7, 10, 18, 8], the stochastic optimal control problem becomes an approximation problem of a path integral (8). This problem can be solved by forward sampling of the uncontrolled ($u = 0$) SDE (1); the optimal control $\hat{u}_t$ is approximated based on path costs of sampled trajectories. Therefore the computation of optimal controls becomes a forward process. More precisely, when the control and noise act in the same subspace, the optimal control can be evaluated as the weighted average of the noise, $\hat{u}_t = \mathbb{E}_{p(\tau_t|x_t)}\big[d\omega_t\big]$, where the probability of a trajectory is $p(\tau_t\,|\,x_t) = \frac{\exp(-\frac{1}{\lambda}S(\tau_t|x_t))}{\int \exp(-\frac{1}{\lambda}S(\tau_t|x_t))\,d\tau_t}$, and $S(\tau_t|x_t)$ is defined as the path
cost computed by performing forward sampling. However, these approaches require a large number of samples from a given dynamics model, or extensive trials on physical systems when applied in model-free reinforcement learning settings. In order to improve sample efficiency, a nonparametric approach was developed by representing the desirability $\Psi_t$ in terms of linear operators in a reproducing kernel Hilbert space (RKHS) [12]. As a model-free approach, it allows sample re-use but relies on numerical methods to estimate the gradient of the desirability, i.e., $\nabla_x \Psi_t$, which can be computationally expensive. On the other hand, computing the analytic expressions of the path integral embedding is intractable and requires exact knowledge of the system dynamics. Furthermore, the control approximation is based on samples from the uncontrolled dynamics, which is usually not sufficient for highly nonlinear or underactuated systems.
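For contrast with the backward scheme proposed later, a bare-bones Monte Carlo version of this forward-sampling update might look as follows. This is a sketch under the usual assumptions (a dynamics simulator is available, and control and noise act in the same subspace); `simulate` and all other names are placeholders, not a real API:

```python
import numpy as np

def pi_control_mc(simulate, x0, num_rollouts, horizon, lam, rng):
    """Vanilla path-integral control: weight the injected noise by
    exp(-S / lambda) over forward rollouts of the uncontrolled SDE."""
    costs = np.empty(num_rollouts)
    noises = []
    for k in range(num_rollouts):
        # simulate() is assumed to return the path cost S and the
        # noise sequence dW it injected during the rollout.
        S, dW = simulate(x0, horizon, rng)
        costs[k] = S
        noises.append(dW)
    w = np.exp(-(costs - costs.min()) / lam)  # subtract min for stability
    w /= w.sum()
    return sum(wk * dW for wk, dW in zip(w, noises))  # weighted noise average
```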
Another class of PI-related methods is based on policy parameterization. Notable approaches include PI2 [10], PI2-CMA [11], PI-REPS [14] and the recently developed state-dependent PI [8]. The limitations of these methods are: 1) they do not take into account model uncertainty in the passive dynamics f(x); 2) the imposed policy parameterizations restrict the optimal control solutions; 3) the optimized policy parameters cannot be generalized to new tasks. A brief comparison of some of these methods can be found in Table 1. Motivated by the challenge of combining sample efficiency and generalizability, next we introduce a probabilistic model-based approach to compute the optimal control (9) analytically.
                          PI [5,6,7],              PI2 [10],        PI-REPS [14]   State feedback   Our method
                          iterative PI [18]        PI2-CMA [11]                    PI [8]
Structural constraint     λG_t R_t^{-1} G_t^T      same as PI       same as PI     same as PI       λG R^{-1} G^T = Σ_f
                          = B Σ_ω B^T
Dynamics model            model-based              model-free       model-based    model-based      GP model-based
Policy parameterization   No                       Yes              Yes            Yes              No
Table 1: Comparison with some notable and recent path integral-related approaches.
3 Proposed Approach
3.1 Analytic path integral control: a forward-backward scheme
In order to derive the proposed framework, we first learn the function $f_{GP}(x_t) = \tilde{f}(x, u^{old})\,dt + B\,d\omega$ from sampled data. Learning the continuous mapping from state to state transition can be viewed as an inference problem with the goal of inferring the state transition $d\tilde{x}_t = f_{GP}(x_t)$. The kernel function, defined in Sec. 2.1, can be interpreted as a similarity measure of random variables. More specifically, if the training inputs $x_i$ and $x_j$ are close to each other in the kernel space, their outputs $dx_i$ and $dx_j$ are highly correlated. Given a sequence of states $\{x_0, \ldots, x_T\}$ and the corresponding state transitions $\{d\tilde{x}_0, \ldots, d\tilde{x}_T\}$, the posterior distribution can be obtained by conditioning the joint prior distribution on the observations. In this work we make the standard assumption of independent outputs (no correlation between output dimensions).
To propagate the GP-based dynamics over a trajectory of time horizon $T$ we employ the moment matching approach [24, 17] to compute the predictive distribution. Given an input distribution over the state $\mathcal{N}(\mu_t, \Sigma_t)$, the predictive distribution over the state at $t + dt$ can be approximated as a Gaussian $p(x_{t+dt}) \approx \mathcal{N}(\mu_{t+dt}, \Sigma_{t+dt})$ such that
$$\mu_{t+dt} = \mu_t + \mu_{f_t}, \qquad \Sigma_{t+dt} = \Sigma_t + \Sigma_{f_t} + \mathrm{COV}[x_t, d\tilde{x}_t] + \mathrm{COV}[d\tilde{x}_t, x_t]. \qquad (10)$$
The above formulation is used to approximate one-step transition probabilities over the trajectory. Details regarding the moment matching method can be found in [24, 17]. All mean and variance terms can be computed analytically. The hyper-parameters $\sigma_s$, $\sigma_\omega$, $W$ are learned by maximizing the log-likelihood of the training outputs given the inputs [25]. Given the approximation of the transition
probability (10), we now introduce a Bayesian nonparametric formulation of path integral control based on a probabilistic representation of the dynamics. First, we perform approximate inference (forward propagation) to obtain the Gaussian belief (predictive mean and covariance of the state) over the trajectory. Since the exponential transformation of the state cost, $\exp(-\frac{1}{\lambda}q(x)\,dt)$, is an unnormalized Gaussian $\mathcal{N}\big(x^d, \frac{\lambda}{2\,dt}Q^{-1}\big)$, we can evaluate the following integral analytically:
$$\int \mathcal{N}(\mu_j, \Sigma_j)\,\exp\Big(-\frac{1}{\lambda}q_j\,dt\Big)\,dx_j = \Big|I + \frac{2\,dt}{\lambda}\Sigma_j Q\Big|^{-\frac{1}{2}} \exp\Big(-\frac{dt}{\lambda}(\mu_j - x_j^d)^T Q \big(I + \frac{2\,dt}{\lambda}\Sigma_j Q\big)^{-1}(\mu_j - x_j^d)\Big), \qquad (11)$$
for $j = t+dt, \ldots, T$. Thus, given the boundary condition $\Psi_T = \exp(-\frac{1}{\lambda}q_T)$ and the predictive distribution at the final step $\mathcal{N}(\mu_T, \Sigma_T)$, we can evaluate the one-step backward desirability $\Psi_{T-dt}$ analytically using the above expression (11). More generally, we use the following recursive rule
$$\Psi_{j-dt} = \Phi(x_j, \Psi_j) = \int \mathcal{N}(\mu_j, \Sigma_j)\,\exp\Big(-\frac{1}{\lambda}q_j\,dt\Big)\,\Psi_j\,dx_j, \qquad (12)$$
for $j = t+dt, \ldots, T-dt$. Since we use deterministic approximate inference based on (10) instead of explicitly sampling from the corresponding SDE, we approximate the conditional distribution $p(x_j\,|\,x_{j-dt})$ by the Gaussian predictive distribution $\mathcal{N}(\mu_j, \Sigma_j)$. Therefore the path integral becomes
$$\Psi_t = \int p(\tau_t\,|\,x_t)\,\exp\Big(-\frac{1}{\lambda}\sum_{j=t}^{T-dt} q_j\,dt\Big)\,\Psi_T\,d\tau_t$$
$$\approx \int \cdots \int \mathcal{N}(\mu_{t+dt}, \Sigma_{t+dt})\,\exp\Big(-\frac{1}{\lambda}q_{t+dt}\,dt\Big)\cdots \underbrace{\int \mathcal{N}(\mu_T, \Sigma_T)\,\exp\Big(-\frac{1}{\lambda}q_T\Big)\,dx_T}_{\Psi_{T-dt}}\;dx_{T-dt}\cdots dx_{t+dt}$$
$$= \Phi(x_{t+dt}, \Psi_{t+dt}). \qquad (13)$$
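Given the Gaussian beliefs from the forward pass, each backward step of (12) reduces to the closed form (11). The sketch below treats the incoming desirability as a scalar multiplier, which is a simplification of the fully recursive integral and uses the factor convention of the reconstruction above; it is for illustration only:

```python
import numpy as np

def backward_step(mu, Sigma, Psi_next, x_d, Q, lam, dt):
    """One backward step of (12) via the closed-form integral (11),
    with Psi_next treated as a scalar (a simplifying assumption)."""
    n = len(mu)
    A = np.eye(n) + (2 * dt / lam) * Sigma @ Q
    diff = mu - x_d
    quad = diff @ Q @ np.linalg.solve(A, diff)
    return np.linalg.det(A) ** -0.5 * np.exp(-(dt / lam) * quad) * Psi_next
```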
We evaluate the desirability $\Psi_t$ backward in time by successive computation using the above recursive expression. The optimal control law $\hat{u}_t$ (9) requires gradients of the desirability function with respect to the state, which can be computed backward in time as well. For simplicity we denote the function $\Phi(x_j, \Psi_j)$ by $\Phi_j$. Thus we compute the gradient of the recursive expression (13) as
$$\nabla_x \Psi_{j-dt} = \Phi_j \nabla_x \Psi_j + \Psi_j \nabla_x \Phi_j, \qquad (14)$$
where $j = t+dt, \ldots, T-dt$. Given the expression in (11), and writing $A_j = I + \frac{2\,dt}{\lambda}\Sigma_j Q$, we compute the gradient terms in (14) as
$$\nabla_x \Phi_j = \frac{d\Phi_j}{dp(x_j)}\frac{dp(x_j)}{dx_t} = \frac{\partial \Phi_j}{\partial \mu_j}\frac{d\mu_j}{dx_t} + \frac{\partial \Phi_j}{\partial \Sigma_j}\frac{d\Sigma_j}{dx_t}, \quad \text{where} \quad \frac{\partial \Phi_j}{\partial \mu_j} = -\Phi_j\,(\mu_j - x_j^d)^T\,\frac{2\,dt}{\lambda} Q A_j^{-1},$$
$$\frac{\partial \Phi_j}{\partial \Sigma_j} = \frac{\Phi_j}{2}\Big(\frac{2\,dt}{\lambda} Q A_j^{-1}(\mu_j - x_j^d)(\mu_j - x_j^d)^T - I\Big)\frac{2\,dt}{\lambda} Q A_j^{-1}, \quad \text{and}$$
$$\frac{d\{\mu_j, \Sigma_j\}}{dx_t} = \Big\{\frac{\partial \mu_j}{\partial \mu_{j-dt}}\frac{d\mu_{j-dt}}{dx_t} + \frac{\partial \mu_j}{\partial \Sigma_{j-dt}}\frac{d\Sigma_{j-dt}}{dx_t},\;\; \frac{\partial \Sigma_j}{\partial \mu_{j-dt}}\frac{d\mu_{j-dt}}{dx_t} + \frac{\partial \Sigma_j}{\partial \Sigma_{j-dt}}\frac{d\Sigma_{j-dt}}{dx_t}\Big\}.$$
The term $\nabla_x \Psi_{T-dt}$ is computed similarly. The partial derivatives $\frac{\partial \mu_j}{\partial \mu_{j-dt}}$, $\frac{\partial \mu_j}{\partial \Sigma_{j-dt}}$, $\frac{\partial \Sigma_j}{\partial \mu_{j-dt}}$, $\frac{\partial \Sigma_j}{\partial \Sigma_{j-dt}}$ can be computed analytically as in [17]. We compute all gradients using this scheme without any numerical method (finite differences, etc.). Given $\Psi_t$ and $\nabla_x \Psi_t$, the optimal control takes an analytic
form as in eq. (9). Since $\Psi_t$ and $\nabla_x \Psi_t$ are explicit functions of $x_t$, the resulting control law is essentially different from the feedforward control in sampling-based path integral control frameworks [5, 6, 7, 10, 18] as well as the parameterized state feedback PI control policies [14, 8]. Notice that at the current time step $t$, we update the control sequence $\hat{u}_{t,\ldots,T}$ using the presented forward-backward scheme. Only $\hat{u}_t$ is applied to the system to move to the next step, while the controls $\hat{u}_{t+dt,\ldots,T}$ are used for control updates at future steps. The transition sample recorded at each time step is incorporated to update the GP model of the dynamics. A summary of the proposed algorithm is shown in Algorithm 1.
Algorithm 1 Sample efficient path integral control under uncertain dynamics
1: Initialization: Apply random controls $\hat{u}_{0,\ldots,T}$ to the physical system (1), record data.
2: repeat
3:   for t = 0 : T do
4:     Incorporate transition sample to learn the GP dynamics model.
5:     repeat
6:       Approximate inference for predictive distributions using $u^{old}_{t,\ldots,T} = \hat{u}_{t,\ldots,T}$, see (10).
7:       Backward computation of optimal control updates $\delta\hat{u}_{t,\ldots,T}$, see (13), (14), (9).
8:       Update optimal controls $\hat{u}_{t,\ldots,T} = u^{old}_{t,\ldots,T} + \delta\hat{u}_{t,\ldots,T}$.
9:     until Convergence.
10:    Apply optimal control $\hat{u}_t$ to the system. Move one step forward and record data.
11:  end for
12: until Task learned.
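To make the control flow concrete, here is a schematic Python rendering of Algorithm 1. Every callable (`rollout_random`, `task_learned`, `learn_gp`, `forward_inference`, `backward_update`, `apply_control`) is a placeholder for the corresponding step described above, not a real library API:

```python
import numpy as np

def algorithm1(apply_control, rollout_random, task_learned, learn_gp,
               forward_inference, backward_update, T_steps, u_init,
               n_inner=10, tol=1e-4):
    """Schematic rendering of Algorithm 1 (all callables are placeholders)."""
    u = list(u_init)                                     # control sequence u_hat
    data = rollout_random(T_steps)                       # line 1: random controls
    while not task_learned():                            # line 2
        for t in range(T_steps):                         # line 3
            gp = learn_gp(data)                          # line 4: GP model update
            for _ in range(n_inner):                     # line 5: inner loop
                beliefs = forward_inference(gp, u, t)    # line 6: eq. (10)
                du = backward_update(gp, beliefs, u, t)  # line 7: (13), (14), (9)
                u = [uk + dk for uk, dk in zip(u, du)]   # line 8: control update
                if max(np.linalg.norm(d) for d in du) < tol:
                    break                                # line 9: convergence
            data.append(apply_control(u[t]))             # line 10: apply, record
    return u
```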
3.2 Generalization to unlearned tasks without sampling
In this section we describe how to generalize the learned controllers to new (unlearned) tasks without any interaction with the real system. The proposed approach is based on the compositionality theory [26] in linearly solvable optimal control (LSOC). We use superscripts to denote previously learned task indexes. First, we define a distance measure between the new target $\tilde{x}^d$ and the old targets $x^d_k$, $k = 1, \ldots, K$, i.e., a Gaussian kernel
$$\omega^k = \exp\Big(-\frac{1}{2}(\tilde{x}^d - x^d_k)^T P\,(\tilde{x}^d - x^d_k)\Big), \qquad (15)$$
where $P$ is a diagonal matrix (kernel width). The composite terminal cost $\tilde{q}(x_T)$ for the new task becomes
$$\tilde{q}(x_T) = -\lambda \log \frac{\sum_{k=1}^{K}\omega^k \exp(-\frac{1}{\lambda}q^k(x_T))}{\sum_{k=1}^{K}\omega^k}, \qquad (16)$$
where $q^k(x_T)$ is the terminal cost for the old tasks. For conciseness we define a normalized distance measure $\bar{\omega}^k = \frac{\omega^k}{\sum_{k=1}^{K}\omega^k}$, which can be interpreted as a probability weight. Based on (16) we have the composite terminal desirability for the new task, which is a linear combination of the $\Psi_T^k$:
$$\tilde{\Psi}_T = \exp\Big(-\frac{1}{\lambda}\tilde{q}(x_T)\Big) = \sum_{k=1}^{K}\bar{\omega}^k \Psi_T^k. \qquad (17)$$
Since $\Psi_t^k$ is the solution to the linear Chapman-Kolmogorov PDE (7), the linear combination of desirabilities (17) holds everywhere from $t$ to $T$ as long as it holds on the boundary (terminal time step). Therefore we obtain the composite control
$$\tilde{u}_t = \sum_{k=1}^{K}\frac{\bar{\omega}^k \Psi_t^k}{\sum_{k=1}^{K}\bar{\omega}^k \Psi_t^k}\,\hat{u}_t^k. \qquad (18)$$
The composite control law in (18) is essentially different from an interpolating control law [26]. It enables sample-free controllers constructed from learned controllers for different tasks (see the sketch below). This scheme cannot be adopted in policy search or trajectory optimization methods such as [10, 11, 14, 17, 19, 20, 21, 22]. Alternatively, generalization can be achieved by imposing task-dependent policies [27]. However, this approach might restrict the choice of optimal controls given the assumed structure of the control policy.
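Equation (18) is a simple weighted blend, as the following sketch shows; the numbers in the usage example are made up for illustration:

```python
import numpy as np

def composite_control(Psi, u_hat, w_bar):
    """Composite control (18): blend K learned controllers by their
    normalized target-distance weights and current desirabilities.

    Psi   : shape (K,)   -- desirabilities Psi_t^k at the current state
    u_hat : shape (K, m) -- controls u_hat_t^k of the learned tasks
    w_bar : shape (K,)   -- normalized weights from eq. (15)
    """
    mix = w_bar * Psi
    mix = mix / mix.sum()
    return mix @ u_hat

# Toy usage (illustrative numbers only).
Psi = np.array([0.2, 0.5, 0.3])
u_hat = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
w_bar = np.array([0.1, 0.6, 0.3])
u_new = composite_control(Psi, u_hat, w_bar)
```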
4 Experiments and Analysis
We consider 3 simulated RL tasks: cart-pole (CP) swing up, double pendulum on a cart (DPC)
swing up, and PUMA-560 robotic arm reaching. The CP and DPC systems consist of a cart and a
single/double-link pendulum. The tasks are to swing up the single/double-link pendulum from the initial position (pointing down). Both CP and DPC are under-actuated systems with only one control
acting on the cart. PUMA-560 is a 3D robotic arm that has 12 state dimensions, 6 degrees of
freedom with 6 actuators on the joints. The task is to steer the end-effector to the desired position
and orientation.
In order to demonstrate the performance, we compare the proposed control framework with three
related methods: iterative path integral control [18] with known dynamics model, PILCO [17] and
PDDP [19]. Iterative path integral control is a sampling-based stochastic control method. It is
based on importance sampling using controlled diffusion process rather than passive dynamics used
in standard path integral control [5, 6, 7]. Iterative PI control is used as a baseline with a given
dynamics model. PILCO is a model-based policy search method that features state-of-the-art data
efficiency in terms of number of trials required to learn a task. PILCO requires an extra optimizer
(such as BFGS) for policy improvement. PDDP is a Gaussian belief space trajectory optimization
approach. It performs dynamic programming based on local approximation of the learned dynamics
and value function. Both PILCO and PDDP are applied with unknown dynamics. In this work we do
not compare our method with model-free PI-related approaches such as [10, 11, 12, 14] since these
methods would certainly cost more samples than model-based methods such as PILCO and PDDP.
The reason for choosing these two methods for comparison is that our method adopts a similar model
learning scheme while other state-of-the-art methods, such as [20] is based on a different model.
In experiment 1 we demonstrate the sample efficiency of our method using the CP and DPC tasks. For both tasks we choose $T = 1.2$ and $dt = 0.02$ (60 time steps per rollout). The iterative PI [18] with a given dynamics model uses $10^3$/$10^4$ (CP/DPC) sample rollouts per iteration and 500 iterations at each time step. We initialize PILCO and the proposed method by collecting 2/6 sample rollouts (corresponding to 120/360 transition samples) for the CP/DPC tasks, respectively. At each trial (on the true dynamics model), we use 1 sample rollout for PILCO and our method. PDDP uses 4/5 rollouts (corresponding to 240/300 transition samples) for initialization as well as at each trial for the CP/DPC tasks. Fig. 1 shows the results in terms of $\Psi_T$ and computational time. For both
tasks our method shows higher desirability (lower terminal state cost) at each trial, which indicates higher sample efficiency for task learning. This is mainly because our method performs online re-optimization at each time step; in contrast, the other two methods do not use this scheme. However, we assume partial information of the dynamics (the G matrix) is given, whereas PILCO and PDDP perform optimization on entirely unknown dynamics. In many robotic systems G corresponds to the inverse of the inertia matrix, which can be identified from data as well. In terms of computational efficiency, our method outperforms PILCO since we compute the optimal control update analytically, while PILCO solves large-scale nonlinear optimization problems to obtain policy parameters. Our method is more computationally expensive than PDDP because PDDP seeks locally optimal controls that rely on linear approximations, while our method is a global optimal control approach. Despite the relatively higher computational burden than PDDP, our method offers reasonable efficiency in terms of the time required to reach the baseline performance.
In experiment 2 we demonstrate the generalizability of the learned controllers to new tasks using the composite control law (18) on the PUMA-560 system. We use $T = 2$ and $dt = 0.02$ (100 time steps per rollout). First, we learn 8 independent controllers using Algorithm 1. The target postures are shown in Fig. 2. For all tasks we initialize with 3 sample rollouts and use 1 sample at each trial. Blue bars in Fig. 2b show the desirabilities $\Psi_T$ after 3 trials. Next, we use the composite law (18) to construct controllers without re-sampling from the 7 other controllers learned using Algorithm 1. For instance, the composite controller for task #1 is found as $\tilde{u}_t^1 = \sum_{k=2}^{8}\frac{\bar{\omega}^k \Psi_t^k}{\sum_{k=2}^{8}\bar{\omega}^k \Psi_t^k}\,\hat{u}_t^k$. The performance comparison of the composite controllers with the controllers learned from trials is shown in Fig. 2. It can be seen that the composite controllers perform about as well as the independently learned controllers. The compositionality theory [26] generally does not apply to policy search methods and trajectory optimizers such as PILCO, PDDP, and other recent methods [20, 21, 22]. Our method benefits from the compositionality of control laws, which can be applied for multi-task control without re-sampling.
[Figure 1 here: for the cart-pole and double-pendulum-on-a-cart swing-up tasks, bar plots of the terminal desirability $\Psi_T$ per trial and of the computational time (in minutes) per trial, comparing iterative PI with the true model ($10^3$/$10^4$ samples per iteration), PILCO (1 sample/trial), PDDP (4/5 samples/trial), and our method (1 sample/trial).]
Figure 1: Comparison in terms of sample efficiency and computational efficiency for (a) cart-pole and (b) double pendulum on a cart swing-up tasks. Left subfigures show the terminal desirability $\Psi_T$ (for PILCO and PDDP, $\Psi_T$ is computed using terminal state costs) at each trial. Right subfigures show computational time (in minutes) at each trial.
[Figure 2 here: (a) the 8 target postures of the PUMA-560 arm; (b) bar plot of the terminal desirability $\Psi_T$ per task, comparing the independent controllers (1 sample/trial, 3 trials) with the composite controllers (no sampling).]
Figure 2: Results for the PUMA-560 tasks. (a) The 8 tasks tested in this experiment; each number indicates a corresponding target posture. (b) Comparison of the controllers learned independently from trials and the composite controllers obtained without sampling. Each composite controller is obtained via (18) from the 7 other independent controllers learned from trials.
5 Conclusion and Discussion
We presented an iterative learning control framework that can find optimal controllers under uncertain dynamics using a very small number of samples. This approach is closely related to the family of path integral (PI) control algorithms. Our method is based on a forward-backward optimization scheme, which differs significantly from current PI-related approaches. Moreover, it combines the attractive characteristics of probabilistic model-based reinforcement learning and linearly solvable optimal control theory: sample efficiency, optimality, and generalizability. By iteratively updating the control laws based on a probabilistic representation of the learned dynamics, our method demonstrated encouraging performance compared to state-of-the-art model-based methods. In addition, our method showed promising potential in performing multi-task control based on the compositionality of learned controllers. Besides the assumed structural constraint between the control cost weight and the uncertainty of the passive dynamics, the major limitation is that we have not taken into account the uncertainty in the control matrix G. Future work will focus on further generalization of this framework and applications to real systems.
Acknowledgments
This research is supported by NSF NRI-1426945.
References
[1] D.P. Bertsekas and J.N. Tsitsiklis. Neuro-Dynamic Programming (Optimization and Neural Computation Series, 3). Athena Scientific, 7:15-23, 1996.
[2] A.G. Barto, W. Powell, J. Si, and D.C. Wunsch. Handbook of Learning and Approximate Dynamic Programming. 2004.
[3] W.H. Fleming. Exit probabilities and optimal stochastic control. Applied Math. Optim, 9:329-346, 1971.
[4] W.H. Fleming and H.M. Soner. Controlled Markov Processes and Viscosity Solutions. Applications of Mathematics. Springer, New York, 1st edition, 1993.
[5] H.J. Kappen. Linear theory for control of nonlinear stochastic systems. Phys. Rev. Lett., 95:200201, 2005.
[6] H.J. Kappen. Path integrals and symmetry breaking for optimal control theory. Journal of Statistical Mechanics: Theory and Experiment, 11:P11011, 2005.
[7] H.J. Kappen. An introduction to stochastic control theory, path integrals and reinforcement learning. AIP Conference Proceedings, 887(1), 2007.
[8] S. Thijssen and H.J. Kappen. Path integral control and state-dependent feedback. Phys. Rev. E, 91:032104, Mar 2015.
[9] E. Todorov. Efficient computation of optimal actions. Proceedings of the National Academy of Sciences, 106(28):11478-11483, 2009.
[10] E. Theodorou, J. Buchli, and S. Schaal. A generalized path integral control approach to reinforcement learning. The Journal of Machine Learning Research, 11:3137-3181, 2010.
[11] F. Stulp and O. Sigaud. Path integral policy improvement with covariance matrix adaptation. In Proceedings of the 29th International Conference on Machine Learning (ICML), pages 281-288. ACM, 2012.
[12] K. Rawlik, M. Toussaint, and S. Vijayakumar. Path integral control by reproducing kernel Hilbert space embedding. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, IJCAI '13, pages 1628-1634, 2013.
[13] Y. Pan and E. Theodorou. Nonparametric infinite horizon Kullback-Leibler stochastic control. In 2014 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), pages 1-8. IEEE, 2014.
[14] V. Gómez, H.J. Kappen, J. Peters, and G. Neumann. Policy search for path integral control. In Machine Learning and Knowledge Discovery in Databases, pages 482-497. Springer, 2014.
[15] K. Dvijotham and E. Todorov. Linearly solvable optimal control. Reinforcement Learning and Approximate Dynamic Programming for Feedback Control, pages 119-141, 2012.
[16] M.P. Deisenroth, G. Neumann, and J. Peters. A survey on policy search for robotics. Foundations and Trends in Robotics, 2(1-2):1-142, 2013.
[17] M. Deisenroth, D. Fox, and C. Rasmussen. Gaussian processes for data-efficient learning in robotics and control. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27:75-90, 2015.
[18] E. Theodorou and E. Todorov. Relative entropy and free energy dualities: Connections to path integral and KL control. In 51st IEEE Conference on Decision and Control, pages 1466-1473, 2012.
[19] Y. Pan and E. Theodorou. Probabilistic differential dynamic programming. In Advances in Neural Information Processing Systems (NIPS), pages 1907-1915, 2014.
[20] S. Levine and P. Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In Advances in Neural Information Processing Systems (NIPS), pages 1071-1079, 2014.
[21] S. Levine and V. Koltun. Learning complex neural network policies with trajectory optimization. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 829-837, 2014.
[22] J. Schulman, S. Levine, P. Moritz, M.I. Jordan, and P. Abbeel. Trust region policy optimization. arXiv preprint arXiv:1502.05477, 2015.
[23] P. Hennig. Optimal reinforcement learning for Gaussian systems. In Advances in Neural Information Processing Systems (NIPS), pages 325-333, 2011.
[24] J. Quinonero Candela, A. Girard, J. Larsen, and C.E. Rasmussen. Propagation of uncertainty in Bayesian kernel models - application to multiple-step ahead forecasting. In IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003.
[25] C.K.I. Williams and C.E. Rasmussen. Gaussian Processes for Machine Learning. MIT Press, 2006.
[26] E. Todorov. Compositionality of optimal control laws. In Advances in Neural Information Processing Systems (NIPS), pages 1856-1864, 2009.
[27] M.P. Deisenroth, P. Englert, J. Peters, and D. Fox. Multi-task policy search for robotics. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), 2014.
5,507 | 5,985 | Efficient Thompson Sampling for Online
Matrix-Factorization Recommendation
Jaya Kawale, Hung Bui, Branislav Kveton
Adobe Research
San Jose, CA
{kawale, hubui, kveton}@adobe.com
Long Tran Thanh
University of Southampton
Southampton, UK
[email protected]
Sanjay Chawla
Qatar Computing Research Institute, Qatar
University of Sydney, Australia
[email protected]
Abstract
Matrix factorization (MF) collaborative filtering is an effective and widely used
method in recommendation systems. However, the problem of finding an optimal
trade-off between exploration and exploitation (otherwise known as the bandit
problem), a crucial problem in collaborative filtering from cold-start, has not been
previously addressed. In this paper, we present a novel algorithm for online MF
recommendation that automatically combines finding the most relevant items with
exploring new or less-recommended items. Our approach, called Particle Thompson sampling for MF (PTS), is based on the general Thompson sampling framework, but augmented with a novel efficient online Bayesian probabilistic matrix
factorization method based on the Rao-Blackwellized particle filter. Extensive experiments in collaborative filtering using several real-world datasets demonstrate
that PTS significantly outperforms the current state-of-the-arts.
1 Introduction
Matrix factorization (MF) techniques have emerged as a powerful tool to perform collaborative
filtering in large datasets [1]. These algorithms decompose a partially-observed matrix R ∈ R^{N×M} into a product of two smaller matrices, U ∈ R^{N×K} and V ∈ R^{M×K}, such that R ≈ UV^⊤.
A variety of MF-based methods have been proposed in the literature and have been successfully
applied to various domains. Despite their promise, one of the challenges faced by these methods
is recommending when a new user/item arrives in the system, also known as the problem of coldstart. Another challenge is recommending items in an online setting and quickly adapting to the
user feedback as required by many real world applications including online advertising, serving
personalized content, link prediction and product recommendations.
In this paper, we address these two challenges in the problem of online low-rank matrix completion
by combining matrix completion with bandit algorithms. This setting was introduced in the previous
work [2] but our work is the first satisfactory solution to this problem. In a bandit setting, we
can model the problem as a repeated game where the environment chooses row i of R and the
learning agent chooses column j. The Rij value is revealed and the goal (of the learning agent) is
to minimize the cumulative regret with respect to the optimal solution, the highest entry in each row
of R. The key design principle in a bandit setting is to balance between exploration and exploitation
which solves the problem of cold start naturally. For example, in online advertising, exploration
implies presenting new ads, about which little is known and observing subsequent feedback, while
exploitation entails serving ads which are known to attract high click through rate.
While many solutions have been proposed for bandit problems, in the last five years or so, there
has been a renewed interest in the use of Thompson sampling (TS) which was originally proposed
in 1933 [3, 4]. In addition to having competitive empirical performance, TS is attractive due to its
conceptual simplicity. An agent has to choose an action a (column) from a set of available actions so
as to maximize the reward r, but it does not know with certainty which action is optimal. Following
TS, the agent will select a with the probability that a is the best action. Let ? denotes the unknown
parameter governing reward structure, and O1:t the history of observations currently available to the
agent. The agent chooses a? = a with probability
∫ 1( E[r | a, θ] = max_{a′} E[r | a′, θ] ) P(θ | O_{1:t}) dθ,
which can be implemented by simply sampling θ from the posterior P(θ | O_{1:t}) and letting a* = arg max_{a′} E[r | a′, θ]. However, for many realistic scenarios (including matrix completion), sampling from P(θ | O_{1:t}) is not computationally efficient, and thus recourse to approximate methods is required to make TS practical.
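To make this template concrete, the following minimal Python sketch (ours, for illustration only) spells out one round of generic TS; sample_posterior and expected_reward are hypothetical placeholders for the model-specific pieces.

def thompson_step(actions, sample_posterior, expected_reward, history):
    # One round of generic Thompson sampling (illustrative sketch).
    # sample_posterior(history) draws theta ~ P(theta | O_{1:t});
    # expected_reward(a, theta) computes E[r | a, theta].
    theta = sample_posterior(history)
    # The action that is optimal under the sampled parameter gets chosen
    # with exactly the posterior probability that it is the best action.
    return max(actions, key=lambda a: expected_reward(a, theta))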
We propose a computationally-efficient algorithm for solving our problem, which we call Particle
Thompson sampling for matrix factorization (PTS). PTS is a combination of particle filtering for
online Bayesian parameter estimation and TS in the non-conjugate case when the posterior does
not have a closed form. Particle filtering uses a set of weighted samples (particles) to estimate
the posterior density. In order to overcome the problem of the huge parameter space, we utilize
Rao-Blackwellization and design a suitable Monte Carlo kernel to come up with a computationally
and statistically efficient way to update the set of particles as new data arrives in an online fashion.
Unlike the prior work [2] which approximates the posterior of the latent item features by a single
point estimate, our approach can maintain a much better approximation of the posterior of the latent
features by a diverse set of particles. Our results on five different real datasets show a substantial
improvement in the cumulative regret vis-a-vis other online methods.
2 Probabilistic Matrix Factorization
We first review the probabilistic matrix factorization approach to
the low-rank matrix completion problem. In matrix completion, a
portion R^o of the N × M matrix R = (r_ij) is observed, and the
goal is to infer the unobserved entries of R. In probabilistic matrix
factorization (PMF) [5], R is assumed to be a noisy perturbation of a rank-K matrix R̂ = UV^⊤, where U_{N×K} and V_{M×K} are termed the user and item latent features (K is typically small). The full generative model of PMF is

U_i ~iid N(0, σ_u² I_K),   V_j ~iid N(0, σ_v² I_K),   r_ij | U, V ~iid N(U_i^⊤ V_j, σ²),   (1)

[Figure 1: graphical model of PMF, with plates over the N users and M items; σ_u and σ_v govern U_i and V_j, which generate R_ij.]
Figure 1: Graphical model of the probabilistic matrix factorization model.
where the variances (σ², σ_U², σ_V²) are the parameters of the model.
We also consider a full Bayesian treatment where the variances σ_U² and σ_V² are drawn from an inverse Gamma prior (while σ² is held fixed), i.e., λ_U = σ_U^{−2} ∼ Γ(α, β); λ_V = σ_V^{−2} ∼ Γ(α, β) (this is a special case of the Bayesian PMF [6] where we only consider isotropic Gaussians)¹. Given this generative model, from the observed ratings R^o, we would like to estimate the parameters U and V, which will allow us to "complete" the matrix R. PMF is a MAP point estimate which finds U, V to maximize Pr(U, V | R^o, σ, σ_U, σ_V) via (stochastic) gradient ascent (alternating least squares can also be used [1]). Bayesian PMF [6] attempts to approximate the full posterior Pr(U, V | R^o, σ, α, β). The joint posterior of U and V is intractable; however, the structure of the graphical model (Fig. 1) can be exploited to derive an efficient Gibbs sampler.
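As a concrete illustration of the MAP point estimate just described, here is a minimal stochastic-gradient sketch for PMF (ours, not the implementation of [5]); the learning rate, initialization scale and epoch count are arbitrary illustrative choices.

import numpy as np

def pmf_sgd(ratings, N, M, K, sigma2=0.5, sigma_u2=1.0, sigma_v2=1.0,
            lr=0.01, epochs=20, seed=0):
    # MAP estimation for PMF by stochastic gradient ascent on
    # log Pr(U, V | R^o): squared error plus Gaussian (L2) priors.
    # `ratings` is a list of observed triples (i, j, r).
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((N, K))
    V = 0.1 * rng.standard_normal((M, K))
    for _ in range(epochs):
        for i, j, r in ratings:
            err = r - U[i] @ V[j]
            # Gradients of the log-posterior w.r.t. U_i and V_j:
            gU = err * V[j] / sigma2 - U[i] / sigma_u2
            gV = err * U[i] / sigma2 - V[j] / sigma_v2
            U[i] += lr * gU
            V[j] += lr * gV
    return U, V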
We now provide the expressions for the conditional probabilities of interest. Suppose that V and σ_U are known. Then the vectors U_i are independent for each user i. Let rts(i) = {j | r_ij ∈ R^o} be the set of items rated by user i, and observe that the ratings {R^o_ij | j ∈ rts(i)} are generated i.i.d. from U_i
¹ [6] considers the full covariance structure, but they also noted that isotropic Gaussians are effective enough.
following a simple conditional linear Gaussian model. Thus, the posterior of U_i has the closed form

Pr(U_i | V, R^o, σ, σ_U) = Pr(U_i | V_{rts(i)}, R^o_{i,rts(i)}, σ_U, σ) = N(U_i | μ^u_i, (Λ^u_i)^{−1}),   (2)

where μ^u_i = (1/σ²)(Λ^u_i)^{−1} ζ^u_i,   Λ^u_i = (1/σ²) Σ_{j∈rts(i)} V_j V_j^⊤ + (1/σ_u²) I_K,   ζ^u_i = Σ_{j∈rts(i)} r^o_ij V_j.   (3)
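Equations (2)-(3) translate directly into a sampling routine; the sketch below (our illustration in the paper's notation, not the authors' code) draws U_i from its conditional posterior.

import numpy as np

def sample_user_factor(V, rated_items, ratings, sigma2, sigma_u2, rng):
    # Draw U_i ~ N(mu_i^u, inv(Lambda_i^u)) given item factors V, the index
    # set rts(i) (`rated_items`) and the matching observed ratings r_ij^o,
    # following Eqs. (2)-(3).
    K = V.shape[1]
    Vs = V[rated_items]                                   # |rts(i)| x K
    Lam = Vs.T @ Vs / sigma2 + np.eye(K) / sigma_u2       # Lambda_i^u
    zeta = Vs.T @ np.asarray(ratings, dtype=float)        # zeta_i^u
    mu = np.linalg.solve(Lam, zeta) / sigma2              # mu_i^u
    return rng.multivariate_normal(mu, np.linalg.inv(Lam))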
The conditional posterior of V, Pr(V | U, R^o, σ_V, σ), is similarly factorized into ∏_{j=1}^M N(V_j | μ^v_j, (Λ^v_j)^{−1}), where the mean and precision are similarly defined. The posterior of the precision λ_U = σ_U^{−2} given U (and similarly for λ_V) is obtained from the conjugacy of the Gamma prior and the isotropic Gaussian:

Pr(λ_U | U, α, β) = Γ(λ_U | NK/2 + α, ‖U‖²_F/2 + β).   (4)
Although not required for Bayesian PMF, we give the likelihood expression

Pr(R_ij = r | V, R^o, σ_U, σ) = N(r | V_j^⊤ μ^u_i, σ² + V_j^⊤ (Λ^u_i)^{−1} V_j).   (5)
The advantage of the Bayesian approach is that uncertainty in the estimates of U and V is available, which is crucial for exploration in a bandit setting. However, the bandit setting requires maintaining online estimates of the posterior as the ratings arrive over time, which makes it rather awkward for MCMC. In this paper, we instead employ a sequential Monte Carlo (SMC) method for online Bayesian inference [7, 8]. Similar to the Gibbs sampler [6], we exploit the above closed-form updates to design an efficient Rao-Blackwellized particle filter [9] for maintaining the posterior over time.
3 Matrix-Factorization Recommendation Bandit
In a typical deployed recommendation system, users and observed ratings (also called rewards)
arrive over time, and the task of the system is to recommend item for each user so as to maximize
the accumulated expected rewards. The bandit setting arises from the fact that the system needs to
learn over time what items have the best ratings (for a given user) to recommend, and at the same
time sufficiently explore all the items.
We formulate the matrix factorization bandit as follows. We assume that ratings are generated
following Eq. (1) with fixed but unknown latent features (U*, V*). At time t, the environment chooses user i_t and the system (learning agent) needs to recommend an item j_t. The user then rates the recommended item with rating r_{i_t,j_t} ∼ N(U*_{i_t}^⊤ V*_{j_t}, σ²), and the agent receives this rating as a reward. We abbreviate this as r^o_t = r_{i_t,j_t}. The system recommends item j_t using a policy
that takes into account the history of the observed ratings prior to time t, r^o_{1:t−1}, where r^o_{1:t} = {(i_k, j_k, r^o_k)}_{k=1}^t. The highest expected reward the system can earn at time t is max_j U_{i_t}^{*⊤} V_j^*, and this is achieved if the optimal item j*(i) = arg max_j U_i^{*⊤} V_j^* is recommended. Since (U*, V*) are unknown, the optimal item j*(i) is also not known a priori. The quality of the recommendation
system is measured by its expected cumulative regret:

CR = E[ Σ_{t=1}^n (r^o_t − r_{i_t, j*(i_t)}) ] = E[ Σ_{t=1}^n (r^o_t − max_j U_{i_t}^{*⊤} V_j^*) ],   (6)
where the expectation is taken with respect to the choice of the user at time t and also the randomness
in the choice of the recommended items by the algorithm.
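As a sanity check on Eq. (6), the snippet below (our illustration) computes the empirical cumulative regret of a recommendation history against known ground truth; we write it with the conventional nonnegative orientation (optimal minus achieved) and with the rating noise averaged out, so expected rewards replace realized ratings.

import numpy as np

def cumulative_regret(history, U_star, V_star):
    # history: iterable of (i_t, j_t) pairs of served recommendations.
    best = (U_star @ V_star.T).max(axis=1)   # max_j U_i^{*T} V_j^* per user
    return sum(best[i] - U_star[i] @ V_star[j] for i, j in history)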
3.1 Particle Thompson Sampling for Matrix Factorization Bandit
While it is difficult to optimize the cumulative regret directly, TS has been shown to work well in practice for contextual linear bandits [3]. To use TS for the matrix factorization bandit, the main difficulty is to incrementally update the posterior of the latent features (U, V) which control the reward structure. In this subsection, we describe an efficient Rao-Blackwellized particle filter (RBPF) designed to exploit the specific structure of the probabilistic matrix factorization model. Let Θ = (σ, α, β) be the control parameters, and let the posterior at time t be p_t = Pr(U, V, λ_U, λ_V | r^o_{1:t}, Θ). The standard
Algorithm 1 Particle Thompson Sampling for Matrix Factorization (PTS)
Global control params: σ, σ_U, σ_V; for the Bayesian version (PTS-B): σ, α, β
1:  p̂_0 ← InitializeParticles()
2:  R^o ← ∅
3:  for t = 1, 2, . . . do
4:      i ← current user
5:      Sample d ∼ p̂_{t−1}.w
6:      V̂ ← p̂_{t−1}.V^(d)
7:      [If PTS-B] λ̂_U ← p̂_{t−1}.λ_U^(d)
8:      Sample Û_i ∼ Pr(U_i | V̂, λ̂_U, σ, r^o_{1:t−1})        ▷ sample a new U_i due to Rao-Blackwellization
9:      ĵ ← arg max_j Û_i^⊤ V̂_j
10:     Recommend ĵ for user i and observe rating r
11:     r^o_t ← (i, ĵ, r)
12:     p̂_t ← UpdatePosterior(p̂_{t−1}, r^o_{1:t})
13: end for
14: procedure UpdatePosterior(p̂, r^o_{1:t})
15:     ▷ p̂ has the structure (w, particles), where particles[d] = (U^(d), V^(d), λ_U^(d), λ_V^(d))
16:     (i, j, r) ← r^o_t
17:     ∀d: Λ^u_i(d) ← Λ^u_i(V^(d), r^o_{1:t−1}), ζ^u_i(d) ← ζ^u_i(V^(d), r^o_{1:t−1})        ▷ see Eq. (3)
18:     ∀d: w_d ← Pr(R_ij = r | V^(d), λ_U^(d), σ, r^o_{1:t−1}), normalized so that Σ_d w_d = 1        ▷ Reweighting; see Eq. (5)
19:     ∀d: i′ ∼ p̂.w, p̂′.particles[d] ← p̂.particles[i′]; ∀d: p̂′.w_d ← 1/D        ▷ Resampling
20:     for all d do        ▷ Move
21:         Λ^u_i(d) ← Λ^u_i(d) + (1/σ²) V_j V_j^⊤; ζ^u_i(d) ← ζ^u_i(d) + r V_j
22:         p̂′.U_i^(d) ∼ Pr(U_i | p̂′.V^(d), p̂′.λ_U^(d), σ, r^o_{1:t})        ▷ see Eq. (2)
23:         [If PTS-B] Update the norm of p̂′.U^(d)
24:         Λ^v_j(d) ← Λ^v_j(U^(d), r^o_{1:t}), ζ^v_j(d) ← ζ^v_j(U^(d), r^o_{1:t})
25:         p̂′.V_j^(d) ∼ Pr(V_j | p̂′.U^(d), p̂′.λ_V^(d), σ, r^o_{1:t})
26:         [If PTS-B] p̂′.λ_U^(d) ∼ Pr(λ_U | p̂′.U^(d), α, β)        ▷ see Eq. (4)
27:     end for
28:     return p̂′
29: end procedure
particle filter would sample all of the parameters (U, V, λ_U, λ_V). Unfortunately, in our experiments, degeneracy is highly problematic for such a vanilla particle filter (PF), even when λ_U, λ_V are assumed known (see Fig. 4(b)). Our RBPF algorithm maintains the posterior distribution p_t as follows. Each particle conceptually represents a point mass at (V, λ_U) (U and λ_V are integrated out analytically whenever possible)². Thus, p_t(V, λ_U) is approximated by

p̂_t = (1/D) Σ_{d=1}^D δ_{(V^(d), λ_U^(d))},

where D is the number of particles.
Crucially, since the particle filter needs to estimate a set of non-time-varying parameters, having an effective and efficient MCMC-kernel move K_t(V′, λ′_U; V, λ_U) stationary w.r.t. p_t is essential. Our design of the move kernel K_t is based on two observations. First, we can make use of U and λ_V as auxiliary variables, effectively sampling U, λ_V | V, λ_U ∼ p_t(U, λ_V | V, λ_U), and then V′, λ′_U | U, λ_V ∼ p_t(V′, λ′_U | U, λ_V). However, this move would be highly inefficient due to the number of variables that need to be sampled at each update. Our second observation is the key to an efficient implementation. Note that the latent features of all users except the current user, U_{−i_t}, are independent of the current observed rating r^o_t: p_t(U_{−i_t} | V, λ_U) = p_{t−1}(U_{−i_t} | V, λ_U); therefore, at time t we only have to resample U_{i_t}. Furthermore, it suffices to resample the latent feature of the current item, V_{j_t}. This leads to an efficient implementation of the RBPF where each particle in fact stores³ U, V, λ_U, λ_V, where (U, λ_V) are auxiliary variables, and for the kernel move K_t we sample U_{i_t} | V, λ_U, then V′_{j_t} | U, λ_V, and then λ′_U | U, α, β.
The PTS algorithm is given in Algo. 1. At each time t, the complexity is O(((Ñ + M̃)K² + K³)D), where Ñ and M̃ are the maximum number of users who have rated the same item and the maximum number of items rated by the same user, respectively. The dependency on K³ arises from having to invert the precision matrix, but this is not a concern since the rank K is typically small. Line 24 can be replaced by an incremental update with caching: after line 22, we can incrementally update Λ^v_j and ζ^v_j for all items j previously rated by the current user i. This reduces the complexity to O((M̃K² + K³)D), a potentially significant improvement in real recommendation systems where each user tends to rate a small number of items.
² When there are fewer users than items, a similar strategy can be derived to integrate out U and λ_V instead.
³ This is not inconsistent with our previous statement that conceptually a particle represents only a point-mass distribution δ_{(V,λ_U)}.
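To summarize how the pieces fit together, here is a compact, runnable sketch (ours, deliberately simplified) of one UpdatePosterior step for the fixed-precision variant of Algorithm 1: it uses dense per-particle arrays, brute-force scans of a shared rating history instead of the cached sufficient statistics described above, and none of the PTS-B steps.

import numpy as np

def factor_posterior(F, rs, sigma2, prior_prec):
    # Precision and mean of the conditional in Eqs. (2)-(3):
    # Lambda = F^T F / sigma^2 + prior_prec * I, mu = inv(Lambda) F^T rs / sigma^2.
    K = F.shape[1]
    Lam = F.T @ F / sigma2 + prior_prec * np.eye(K)
    mu = np.linalg.solve(Lam, F.T @ rs / sigma2)
    return Lam, mu

def pts_update(particles, hist, i, j, r, sigma2, lam_u, lam_v, rng):
    # particles: list of dicts {"U": N x K, "V": M x K}; hist: shared list
    # of past (user, item, rating) triples; (i, j, r): the new observation.
    D, K = len(particles), particles[0]["U"].shape[1]
    w = np.empty(D)
    past = [(jj, rr) for (ii, jj, rr) in hist if ii == i]
    for d, p in enumerate(particles):            # reweighting via Eq. (5)
        F = p["V"][[jj for jj, _ in past], :]
        Lam, mu = factor_posterior(F, np.array([rr for _, rr in past]),
                                   sigma2, lam_u)
        var = sigma2 + p["V"][j] @ np.linalg.solve(Lam, p["V"][j])
        w[d] = np.exp(-(r - p["V"][j] @ mu) ** 2 / (2 * var)) / np.sqrt(var)
    w /= w.sum()
    # Resample, then move only U_i and V_j (the Rao-Blackwellized kernel).
    particles = [{k: v.copy() for k, v in particles[d].items()}
                 for d in rng.choice(D, size=D, p=w)]
    hist.append((i, j, r))
    for p in particles:
        by_i = [(jj, rr) for (ii, jj, rr) in hist if ii == i]
        Lam, mu = factor_posterior(p["V"][[jj for jj, _ in by_i], :],
                                   np.array([rr for _, rr in by_i]),
                                   sigma2, lam_u)
        p["U"][i] = rng.multivariate_normal(mu, np.linalg.inv(Lam))
        by_j = [(ii, rr) for (ii, jj, rr) in hist if jj == j]
        Lam, mu = factor_posterior(p["U"][[ii for ii, _ in by_j], :],
                                   np.array([rr for _, rr in by_j]),
                                   sigma2, lam_v)
        p["V"][j] = rng.multivariate_normal(mu, np.linalg.inv(Lam))
    return particles, hist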
4 Analysis
We believe that the regret of PTS can be bounded. However, the existing work on TS and bandits does not provide sufficient tools for a proper analysis of our algorithm. In particular, while existing techniques can provide O(log T) (or O(√T) gap-independent) regret bounds for our problem, these bounds are typically linear in the number of entries of the observation matrix R (or at least linear in the number of users), which is typically very large compared to T. Thus, an ideal regret bound in our setting is one that has sub-linear dependency (or no dependency at all) on the number of users. A key obstacle to achieving this is that, while the conditional posteriors of U and
number of users. A key obstacle of achieving this is that, while the conditional posteriors of U and
V are Gaussians, neither their marginal and joint posteriors belong to well behaved classes (e.g.,
conjugate posteriors, or having closed forms). Thus, novel tools, that can handle generic posteriors,
are needed for efficient analysis. Moreover, in the general setting, the correlation between Ro and
the latent features U and V are non-linear (see, e.g., [10, 11, 12] for more details). As existing
techniques are typically designed for efficiently learning linear regressions, they are not suitable for
our problem. Nevertheless, we show how to bound the regret of TS in a very specific case of n ? m
rank-1 matrices, and we leave the generalization of these results for future work.
In particular, we analyze the regret of PTS in the setting of Gopalan et al. [13]. We model our problem as follows. The parameter space is Θ_u × Θ_v, where Θ_u = {d, 2d, . . . , 1}^{N×1} and Θ_v = {d, 2d, . . . , 1}^{M×1} are discretizations of the parameter spaces of the rank-1 factors u and v, for some integer 1/d. For the sake of theoretical analysis, we assume that PTS can sample from the full posterior. We also assume that r_{i,j} ∼ N(u*_i v*_j, σ²) for some u* ∈ Θ_u and v* ∈ Θ_v. Note that in this setting, the highest-rated item in expectation is the same for all users. We denote this item by j* = arg max_{1≤j≤M} v*_j and assume that it is uniquely optimal, v*_{j*} > v*_j for any j ≠ j*. We leverage these properties in our analysis. The random variable X_t at time t is a pair of a random rating matrix R_t = {r_{i,j}}_{i=1,j=1}^{N,M} and a random row 1 ≤ i_t ≤ N. The action A_t at time t is a column 1 ≤ j_t ≤ M. The observation is Y_t = (i_t, r_{i_t,j_t}). We bound the regret of PTS as follows.
Theorem 1. For any δ ∈ (0, 1) and ε ∈ (0, 1), there exists T* such that PTS on Θ_u × Θ_v recommends items j ≠ j* in T ≥ T* steps at most (2M ((1+ε)/(1−ε)) (σ²/d⁴) log T + B) times with probability at least 1 − δ, where B is a constant independent of T.
Proof. By Theorem 1 of Gopalan et al. [13], the number of recommendations j ≠ j* is bounded by C(log T) + B, where B is a constant independent of T. Now we bound C(log T) by counting the number of times that PTS selects models that cannot be distinguished from (u*, v*) after observing Y_t under the optimal action j*. Let:

Θ_j = {(u, v) ∈ Θ_u × Θ_v : ∀i : u_i v_{j*} = u*_i v*_{j*},  v_j ≥ max_{k≠j} v_k}

be the set of such models where action j is optimal. Suppose that our algorithm chooses model (u, v) ∈ Θ_j. Then the KL divergence between the distributions of ratings r_{i,j} under models (u, v) and (u*, v*) is bounded from below as:

D_KL(u_i v_j ‖ u*_i v*_j) = (u_i v_j − u*_i v*_j)² / (2σ²) ≥ d⁴ / (2σ²)

for any i; the equality is the standard KL divergence between two Gaussians with the same variance σ². The last inequality follows from the fact that u_i v_j ≥ u_i v_{j*} = u*_i v*_{j*} > u*_i v*_j, because j* is uniquely optimal in (u*, v*), and we know that u_i v_j − u*_i v*_j ≥ d² because the granularity of our discretization is d. Let i_1, . . . , i_n be any n row indices. Then the KL divergence between the distributions of ratings in positions (i_1, j), . . . , (i_n, j) under models (u, v) and (u*, v*) is Σ_{t=1}^n D_KL(u_{i_t} v_j ‖ u*_{i_t} v*_j) ≥ n d⁴/(2σ²). By Theorem 1 of Gopalan et al. [13], the models (u, v) ∈ Θ_j are unlikely to be chosen by PTS in T steps when Σ_{t=1}^n D_KL(u_{i_t} v_j ‖ u*_{i_t} v*_j) ≥ log T. This happens after at most n ≤ 2 ((1+ε)/(1−ε)) (σ²/d⁴) log T selections of (u, v) ∈ Θ_j. Now we apply the same argument to all Θ_j, M − 1 in total, and sum up the corresponding regrets.
Remarks: Note that Theorem 1 implies an O(2M ((1+ε)/(1−ε)) (σ²/d⁴) log T) regret bound that holds with high probability. Here, d² plays the role of a gap Δ, the smallest possible difference between the expected ratings of items j ≠ j* in any row i. In this sense, our result is O((1/Δ²) log T) and is of a similar magnitude as the results in Gopalan et al. [13]. While we restrict u*, v* ∈ (0, 1]^{K×1} in the proof, this does not affect the algorithm. In fact, the proof only focuses on high-probability events where the samples from the posterior are concentrated around the true parameters, and thus are within (0, 1]^{K×1} as well. Extending our proof to the general setting is not trivial. In particular, moving from discretized parameters to continuous space introduces the above-mentioned ill-behaved posteriors, while increasing the value of K would violate the fact that the best item is the same for all users, which allowed us to eliminate N from the regret bound.
5 Experiments and Results
The goal of our experimental evaluation is twofold: (i) evaluate the PTS algorithm for making online
recommendations with respect to various baseline algorithms on several real-world datasets and (ii)
understand the qualitative performance and intuition of PTS.
5.1 Dataset description
We use a synthetic dataset and five real world datasets to evaluate our approach. The synthetic
dataset is generated as follows: at first we generate the user and item latent features (U and V) of rank K by drawing from Gaussian distributions N(0, σ_u²) and N(0, σ_v²), respectively. The true rating matrix is then R* = UV^T. We generate the observed rating matrix R from R* by adding Gaussian noise N(0, σ²) to the true ratings. We use five real-world datasets: Movielens 100k, Movielens 1M, Yahoo Music⁴, Book crossing⁵ and EachMovie, as shown in Table 1.
            Movielens 100k   Movielens 1M   Yahoo Music   Book crossing   EachMovie
# users     943              6040           15400         6841            36656
# items     1682             3900           1000          5644            1621
# ratings   100k             1M             311,704       90k             2.58M
Table 1: Characteristics of the datasets used in our study.
5.2 Baseline measures
There are no current approaches available that simultaneously learn both the user and item factors
by sampling from the posterior in a bandit setting. From the currently available algorithms, we
choose two kinds of baseline methods - one that sequentially updates the the posterior of the user
features only while fixing the item features to a point estimate (ICF) and another that updates the
MAP estimates of user and item features via stochastic gradient descent (SGD-Eps). A key challenge in online algorithms is unbiased offline evaluation. One problem in the offline setting is the
partial information available about user feedback, i.e., we only have information about the items
that the user rated. In our experiment, we restrict the recommendation space of all the algorithms
to recommend among the items that the user rated in the entire dataset which makes it possible to
empirically measure regret at every interaction. The baseline measures are as follows:
1) Random : At each iteration, we recommend a random movie to the user.
2) Most Popular : At each iteration, we recommend the most popular movie restricted to the movies
rated by the user on the dataset. Note that this is an unrealistically optimistic baseline for an online
algorithm as it is not possible to know the global popularity of the items beforehand.
3) ICF: The ICF algorithm [2] proceeds by first estimating the user and item latent factors (U and
V ) on a initial training period and then for every interaction thereafter only updates the user features
(U ) assuming the item features (V ) as fixed. We run two scenarios for the ICF algorithm one in
which we use 20% (icf-20) and 50% (icf-50) of the data as the training period respectively. During
this period of training, we randomly recommend a movie to the user to compute the regret. We use
the PMF implementation by [5] for estimating the U and V .
4) SGD-Eps: We learn the latent factors using an online variant of the PMF algorithm [5]. We use stochastic gradient descent to update the latent factors with a mini-batch size of 50. In order to make a recommendation, we use the ε-greedy strategy: we recommend the item with the highest U_i V^T entry with probability ε and make a random recommendation otherwise. (ε is set to 0.95 in our experiments.)
⁴ http://webscope.sandbox.yahoo.com/
⁵ http://www.bookcrossing.com
5.3 Results on Synthetic Dataset
We generated the synthetic dataset as mentioned earlier and run the PTS algorithm with 100 particles for recommendations. We simulate the setting as mentioned in Section 3 and assume that at time t, a random user i_t arrives and the system recommends an item j_t. The user rates the recommended item r_{i_t,j_t} and we evaluate the performance of the model by computing the expected cumulative regret defined in Eq. (6). Fig. 2 shows the cumulative regret of the algorithm on the synthetic data averaged over 100 runs using different sizes of the matrix and latent features K. The cumulative regret increases sub-linearly with the number of interactions, and this gives us confidence that our approach works well on the synthetic dataset.
[Figure 2: five panels of cumulative regret vs. iterations — (a) N, M=10, K=1; (b) N, M=20, K=1; (c) N, M=30, K=1; (d) N, M=10, K=2; (e) N, M=10, K=3.]
Figure 2: Cumulative regret on different sizes of the synthetic data and K averaged over 100 runs.
5.4 Results on Real Datasets
[Figure 3: five panels of cumulative regret vs. iterations on (a) Movielens 100k, (b) Movielens 1M, (c) Yahoo Music, (d) Book Crossing, (e) EachMovie, comparing PTS, random, popular, icf-20, icf-50, sgd-eps and PTS-B.]
Figure 3: Comparison with baseline methods on five datasets.
Next, we evaluate our algorithms on five real datasets and compare them to the various baseline
algorithms. We subtract the mean ratings from the data to centre it at zero. To simulate an extreme
cold-start scenario we start from an empty set of user and rating. We then iterate over the datasets
and assume that a random user i_t has arrived at time t and the system recommends an item j_t
constrained to the items rated by this user in the dataset. We use K = 2 for all the algorithms and
use 30 particles for our approach. For PTS we set σ² = 0.5 and σ_u² = σ_v² = 1. For PTS-B (Bayesian version, see Algo. 1 for more details), we set σ² = 0.5 and the initial shape parameters of the Gamma distribution as α = 2 and β = 0.5. For both ICF-20 and ICF-50, we set σ² = 0.5 and σ_u² = 1. Fig. 3 shows the cumulative regret of all the algorithms on the five datasets⁶.
Our approach performs significantly better as compared to the baseline algorithms on this diverse
set of datasets. PTS-B with no parameter tuning performs slightly better than PTS and achieves the
best regret. It is important to note that both PTS and PTS-B perform comparably to or even better than the "most popular" baseline despite not knowing the global popularity in advance. Note that ICF is very sensitive to the length of the initial training period; it is not clear how to set this a priori.
⁶ ICF-20 fails to run on the Bookcrossing dataset as the 20% data is too sparse for the PMF implementation.
[Figure 4: three panels — (a) test error vs. iterations ×1000 on Movielens 1M; (b) MSE of the RB particle filter vs. a vanilla PF; (c) movie feature vectors for ICF-20, PTS-20 and PTS-100.]
Figure 4: (a) MSE on the Movielens 1M dataset; the red line is the MSE of the PMF algorithm. (b) Performance of the RBPF (blue line) compared to a vanilla PF (red line) on a synthetic dataset with N, M = 10. (c) Movie feature vectors for a movie with 384 ratings; the red dot is the feature vector from the ICF-20 algorithm (using 73 ratings), PTS-20 is the feature vector at 20% of the data (green dots) and PTS-100 at 100% (blue dots).
We also evaluate the performance of our model in an offline setting as follows: We divide the
datasets into training and test set and iterate over the training data triplets (it , jt , rt ) by pretending
that jt is the movie recommended by our approach and update the latent factors according to RBPF.
We compute the recovered matrix R̂ as the average prediction UV^T from the particles at each time step, and compute the mean squared error (MSE) on the test dataset at each iteration. Unlike the
batch method such as PMF which takes multiple passes over the data, our method was designed to
have bounded update complexity at each iteration. We ran the algorithm using 80% data for training
and the rest for testing and computed the MSE by averaging the results over 5 runs. Fig. 4(a) shows
the average MSE on the movielens 1M dataset. Our MSE (0.7925) is comparable to the PMF MSE
(0.7718) as shown by the red line. This demonstrates that the RBPF is performing reasonably well
for matrix factorization. In addition, Fig. 4(b) shows that on the synthetic dataset, the vanilla PF
suffers from degeneration as seen by the high variance. To understand the intuition why fixing the
latent item features V as done in the ICF does not work, we perform an experiment as follows: We
run the ICF algorithm on the movielens 100k dataset in which we use 20% of the data for training.
At this point the ICF algorithm fixes the item features V and only updates the user features U . Next,
we run our algorithm and obtain the latent features. We examined the features for one selected movie
from the particles at two time intervals: one when the ICF algorithm fixes them at 20%, and another at the end, as shown in Fig. 4(c). It shows that the movie features have evolved into a different location, and hence fixing them early is not a good idea.
6 Related Work
Probabilistic matrix completion in a bandit setting setting was introduced in the previous work by
Zhao et al. [2]. The ICF algorithm in [2] approximates the posterior of the latent item features by
a single point estimate. Several other bandit algorithms for recommendations have been proposed.
Valko et al. [14] proposed a bandit algorithm for content-based recommendations. In this approach,
the features of the items are extracted from a similarity graph over the items, which is known in
advance. The preferences of each user for the features are learned independently by regressing the
ratings of the items from their features. The key difference in our approach is that we also learn
the features of the items. In other words, we learn both the user and item factors, U and V , while
[14] learn only U . Kocak et al. [15] combine the spectral bandit algorithm in [14] with TS. Gentile
et al. [16] propose a bandit algorithm for recommendations that clusters users in an online fashion
based on the similarity of their preferences. The preferences are learned by regressing the ratings of
the items from their features. The features of the items are the input of the learning algorithm and
they only learn U . Maillard et al. [17] study a bandit problem where the arms are partitioned into
unknown clusters unlike our work which is more general.
7 Conclusion
We have proposed an efficient method for carrying out matrix factorization (M ≈ UV^T) in a bandit
setting. The key novelty of our approach is the combined use of Rao-Blackwellized particle filtering
and Thompson sampling (PTS) in matrix factorization recommendation. This allows us to simultaneously update the posterior probability of U and V in an online manner while minimizing the
cumulative regret. The state of the art, till now, was to either use point estimates of U and V or use
a point estimate of one of the factors (e.g., U) and update the posterior probability of the other (V).
PTS results in substantially better performance on a wide variety of real world data sets.
References
[1] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30–37, 2009.
[2] Xiaoxue Zhao, Weinan Zhang, and Jun Wang. Interactive collaborative filtering. In Proceedings of the 22nd ACM International Conference on Information & Knowledge Management, pages 1411–1420. ACM, 2013.
[3] Olivier Chapelle and Lihong Li. An empirical evaluation of Thompson sampling. In NIPS, pages 2249–2257, 2011.
[4] Shipra Agrawal and Navin Goyal. Thompson sampling for contextual bandits with linear payoffs. In ICML (3), pages 127–135, 2013.
[5] Ruslan Salakhutdinov and Andriy Mnih. Probabilistic matrix factorization. In NIPS, volume 1, pages 2–1, 2007.
[6] Ruslan Salakhutdinov and Andriy Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In ICML, pages 880–887, 2008.
[7] Nicolas Chopin. A sequential particle filter method for static models. Biometrika, 89(3):539–552, 2002.
[8] Pierre Del Moral, Arnaud Doucet, and Ajay Jasra. Sequential Monte Carlo samplers. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(3):411–436, 2006.
[9] Arnaud Doucet, Nando de Freitas, Kevin Murphy, and Stuart Russell. Rao-Blackwellised particle filtering for dynamic Bayesian networks. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 176–183. Morgan Kaufmann Publishers Inc., 2000.
[10] A. Gelman and X. L. Meng. A note on bivariate distributions that are conditionally normal. Amer. Statist., 45:125–126, 1991.
[11] B. C. Arnold, E. Castillo, J. M. Sarabia, and L. Gonzalez-Vega. Multiple modes in densities with normal conditionals. Statist. Probab. Lett., 49:355–363, 2000.
[12] B. C. Arnold, E. Castillo, and J. M. Sarabia. Conditionally specified distributions: An introduction. Statistical Science, 16(3):249–274, 2001.
[13] Aditya Gopalan, Shie Mannor, and Yishay Mansour. Thompson sampling for complex online problems. In Proceedings of the 31st International Conference on Machine Learning, pages 100–108, 2014.
[14] Michal Valko, Rémi Munos, Branislav Kveton, and Tomáš Kocák. Spectral bandits for smooth graph functions. In 31st International Conference on Machine Learning, 2014.
[15] Tomáš Kocák, Michal Valko, Rémi Munos, and Shipra Agrawal. Spectral Thompson sampling. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014.
[16] Claudio Gentile, Shuai Li, and Giovanni Zappella. Online clustering of bandits. arXiv preprint arXiv:1401.8257, 2014.
[17] Odalric-Ambrym Maillard and Shie Mannor. Latent bandits. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21–26 June 2014, pages 136–144, 2014.
| 5985 |@word exploitation:3 version:2 norm:1 nd:1 d2:1 crucially:1 covariance:1 p0:2 sgd:7 initial:3 qatar:2 series:1 renewed:1 outperforms:1 existing:3 freitas:1 current:7 com:3 contextual:2 wd:3 discretization:1 recovered:1 michal:2 subsequent:1 realistic:1 shape:1 designed:3 update:16 resampling:1 stationary:1 generative:2 fewer:1 greedy:1 item:51 rts:5 selected:1 intelligence:2 isotropic:3 mannor:2 location:1 preference:3 zhang:1 five:7 blackwellized:4 ik:4 qualitative:1 combine:2 manner:1 ascend:1 expected:5 blackwellization:2 discretized:1 salakhutdinov:2 automatically:1 little:1 pf:3 increasing:1 estimating:2 bounded:4 moreover:1 factorized:1 mass:1 what:1 evolved:1 kind:1 substantially:1 finding:2 unobserved:1 certainty:1 blackwellised:1 every:2 interactive:1 biometrika:1 ro:10 rm:1 qm:1 uk:2 control:3 demonstrates:1 tends:1 despite:2 ak:2 meng:1 au:1 china:1 examined:1 factorization:20 smc:1 statistically:1 averaged:2 practical:1 testing:1 kveton:3 practice:1 regret:32 yehuda:1 goyal:1 cold:3 procedure:2 empirical:2 discretizations:1 bell:1 significantly:2 adapting:1 confidence:1 word:1 cannot:1 selection:1 gelman:1 optimize:1 branislav:2 map:2 www:1 thanh:1 yt:2 independently:1 thompson:12 formulate:1 simplicity:1 handle:1 pt:51 suppose:1 play:1 user:46 yishay:1 olivier:1 us:1 crossing:2 approximated:1 jk:1 observed:7 role:1 preprint:1 rij:9 wang:1 mend:1 degeneration:1 russell:1 trade:1 kawale:2 highest:4 ran:1 substantial:1 intuition:2 environment:2 pd:1 ui:34 complexity:3 reward:7 mentioned:2 dynamic:1 carrying:1 solving:1 algo:2 shipra:2 joint:2 various:3 pdate:1 effective:3 describe:1 monte:4 artificial:2 kevin:1 emerged:1 widely:1 drawing:1 otherwise:2 rto:3 noisy:1 online:20 advantage:1 agrawal:2 propose:2 tran:1 interaction:3 product:2 relevant:1 combining:1 till:1 supposed:1 sixteenth:1 description:1 empty:1 cluster:2 r1:13 extending:1 incremental:1 leave:1 derive:1 ac:1 fixing:3 completion:6 measured:1 eq:7 solves:1 sydney:2 implemented:1 auxiliary:2 implies:2 come:1 filter:8 stochastic:3 exploration:4 nando:1 australia:1 maxa0:1 suffices:1 generalization:1 sandbox:1 decompose:1 fix:2 exploring:1 hold:1 sufficiently:1 around:1 normal:2 achieves:1 early:1 smallest:1 resample:3 estimation:1 osterior:1 ruslan:2 currently:2 sensitive:1 successfully:1 tool:3 weighted:1 gaussian:4 rather:1 caching:1 cr:1 pn:2 claudio:1 derived:1 focus:1 june:1 datasets6:1 improvement:2 jaya:1 rank:7 likelihood:1 vk:1 baseline:9 sense:1 inference:1 attract:1 accumulated:1 entire:1 typically:5 a0:1 integrated:1 unlikely:1 bandit:26 eliminate:1 chopin:1 selects:1 i1:2 iu:7 arg:4 among:1 ill:1 priori:1 yahoo:4 art:2 special:1 constrained:1 marginal:1 apriori:1 having:4 sampling:16 represents:2 stuart:1 icml:3 future:1 recommend:9 employ:1 randomly:1 gamma:3 divergence:2 simultaneously:2 murphy:1 maxj:3 replaced:1 sarabia:2 maintain:1 attempt:1 interest:2 huge:1 highly:2 mnih:2 evaluation:3 regressing:2 introduces:1 arrives:3 extreme:1 held:1 chain:1 kt:3 beforehand:1 partial:1 divide:1 pmf:13 theoretical:1 column:3 earlier:1 obstacle:1 rao:7 southampton:2 entry:3 too:1 dependency:3 params:1 synthetic:9 chooses:5 combined:1 st:1 density:2 international:4 probabilistic:9 off:1 vm:1 quickly:1 earn:1 squared:1 aaai:1 management:1 choose:2 book:3 inefficient:1 zhao:2 return:1 li:2 account:1 de:1 inc:1 ad:2 vi:2 tion:1 closed:4 optimistic:1 observing:2 analyze:1 portion:1 start:4 competitive:1 maintains:1 red:4 weinan:1 collaborative:5 minimize:1 square:1 variance:3 who:1 efficiently:1 
characteristic:1 kaufmann:1 conceptually:2 bayesian:12 carlo:4 advertising:2 randomness:1 history:2 suffers:1 koc:2 whenever:1 volinsky:1 naturally:1 proof:4 soton:1 degeneracy:1 sampled:1 static:1 dataset:16 treatment:1 popular:8 subsection:1 knowledge:1 maillard:2 originally:1 methodology:1 awkward:1 tom:2 amer:1 done:1 furthermore:1 governing:1 shuai:1 correlation:1 receives:1 navin:1 reweighting:1 incrementally:2 del:1 mode:1 quality:1 behaved:2 believe:1 true:3 unbiased:1 analytically:1 hence:1 arnaud:2 satisfactory:1 attractive:1 conditionally:2 game:1 during:1 uniquely:2 noted:1 d4:5 presenting:1 arrived:1 complete:1 demonstrate:1 performs:3 novel:3 vega:1 empirically:1 volume:1 belong:1 approximates:2 eps:7 significant:1 gibbs:2 tuning:1 vanilla:3 similarly:2 particle:31 centre:1 dot:3 lihong:1 moving:1 chapelle:1 entail:1 similarity:2 posterior:27 scenario:3 termed:1 inequality:1 exploited:1 seen:1 morgan:1 gentile:2 novelty:1 maximize:3 period:4 recommended:6 ii:1 full:5 violate:1 multiple:2 infer:1 reduces:1 eachmovie:3 smooth:1 long:1 dkl:3 adobe:2 prediction:2 variant:1 regression:1 ajay:1 expectation:2 arxiv:2 iteration:16 kernel:4 achieved:1 invert:1 addition:2 unrealistically:1 conditionals:1 addressed:1 interval:1 crucial:2 publisher:1 rest:1 unlike:3 webscope:1 pass:1 cummulative:10 shie:2 inconsistent:1 call:1 integer:1 leverage:1 ideal:1 revealed:1 counting:1 enough:1 recommends:3 granularity:1 variety:2 affect:1 iterate:2 restrict:2 click:1 andriy:2 idea:1 knowing:1 expression:2 moral:1 icf:26 action:7 remark:1 gopalan:5 clear:1 statist:2 concentrated:1 generate:2 http:2 problematic:1 popularity:2 rb:3 serving:2 diverse:2 blue:2 promise:1 key:6 thereafter:1 nevertheless:1 achieving:1 drawn:1 jv:2 neither:1 utilize:1 graph:2 year:1 sum:1 beijing:1 run:8 jose:1 inverse:1 powerful:1 uncertainty:2 arrive:2 gonzalez:1 comparable:2 bound:8 koren:1 ri:4 personalized:1 sake:1 simulate:2 argument:1 emi:2 performing:1 jasra:1 according:1 alternate:1 combination:1 conjugate:2 smaller:1 slightly:1 partitioned:1 making:1 happens:1 restricted:1 pr:13 taken:1 recourse:1 computationally:3 conjugacy:1 previously:2 vjt:1 needed:1 know:3 end:4 available:6 gaussians:3 apply:1 observe:2 v2:5 generic:1 spectral:3 chawla:2 pierre:1 distinguished:1 batch:2 denotes:1 clustering:1 graphical:2 maintaining:1 music:2 exploit:2 society:1 move:5 strategy:2 rt:5 abovementioned:1 gradient:3 link:1 chris:1 considers:1 odalric:1 trivial:1 assuming:1 length:1 o1:4 index:1 mini:1 balance:1 minimizing:1 difficult:1 unfortunately:1 robert:1 statement:1 potentially:1 design:4 implementation:4 proper:1 policy:1 unknown:4 perform:2 pretending:1 recommender:1 twenty:1 observation:5 datasets:13 markov:1 descent:2 t:10 payoff:1 rn:2 perturbation:1 mansour:1 rating:25 introduced:2 pair:1 required:3 kl:2 extensive:1 specified:1 learned:2 nip:2 address:1 proceeds:1 sanjay:2 below:1 eighth:1 challenge:4 including:2 max:3 green:1 royal:1 suitable:2 event:1 difficulty:1 zappella:1 valko:3 abbreviate:1 arm:1 movie:10 rated:9 jun:1 faced:1 prior:4 literature:1 review:1 probab:1 kf:1 filtering:9 integrate:1 agent:8 rbpf:6 sufficient:1 principle:1 row:5 last:2 offline:3 allow:1 understand:2 ambrym:1 institute:1 wide:1 arnold:2 munos:2 sparse:1 feedback:3 overcome:1 lett:1 world:6 cumulative:10 uit:5 giovanni:1 san:1 ec:1 approximate:2 bui:1 global:3 sequentially:1 doucet:2 conceptual:1 assumed:2 recommending:2 un:1 latent:18 continuous:1 triplet:1 why:1 table:2 ku:1 learn:7 maxk6:1 ca:1 reasonably:1 nicolas:1 mse:9 
complex:1 domain:1 vj:40 main:1 linearly:1 noise:1 repeated:1 allowed:1 augmented:1 fig:7 fashion:2 deployed:1 precision:3 sub:2 position:1 fails:1 rk:1 theorem:4 specific:2 xt:1 jt:11 concern:1 bivariate:1 intractable:1 essential:1 exists:1 sequential:3 effectively:1 adding:1 magnitude:1 nk:1 gap:2 mf:5 subtract:1 simply:1 explore:1 aditya:1 partially:1 recommendation:18 u2:4 extracted:1 acm:2 conditional:4 goal:3 twofold:1 content:2 typical:1 except:1 movielens:10 sampler:3 averaging:1 called:2 total:1 castillo:2 experimental:1 select:1 rit:5 arises:2 evaluate:5 mcmc:2 hung:1 |
Parallelizing MCMC with Random Partition Trees
Xiangyu Wang
Dept. of Statistical Science
Duke University
[email protected]
Fangjian Guo
Dept. of Computer Science
Duke University
[email protected]
Katherine A. Heller
Dept. of Statistical Science
Duke University
[email protected]
David B. Dunson
Dept. of Statistical Science
Duke University
[email protected]
Abstract
The modern scale of data has brought new challenges to Bayesian inference. In
particular, conventional MCMC algorithms are computationally very expensive
for large data sets. A promising approach to solve this problem is embarrassingly
parallel MCMC (EP-MCMC), which first partitions the data into multiple subsets
and runs independent sampling algorithms on each subset. The subset posterior
draws are then aggregated via some combining rules to obtain the final approximation. Existing EP-MCMC algorithms are limited by approximation accuracy and
difficulty in resampling. In this article, we propose a new EP-MCMC algorithm
PART that solves these problems. The new algorithm applies random partition
trees to combine the subset posterior draws, which is distribution-free, easy to resample from and can adapt to multiple scales. We provide theoretical justification
and extensive experiments illustrating empirical performance.
1 Introduction
Bayesian methods are popular for their success in analyzing complex data sets. However, for large
data sets, Markov Chain Monte Carlo (MCMC) algorithms, widely used in Bayesian inference, can
suffer from huge computational expense. With large data, there is increasing time per iteration, increasing time to convergence, and difficulties with processing the full data on a single machine due
to memory limits. To ameliorate these concerns, various methods such as stochastic gradient Monte
Carlo [1] and sub-sampling based Monte Carlo [2] have been proposed. Among directions that have
been explored, embarrassingly parallel MCMC (EP-MCMC) seems most promising. EP-MCMC
algorithms typically divide the data into multiple subsets and run independent MCMC chains simultaneously on each subset. The posterior draws are then aggregated according to some rules to
produce the final approximation. This approach is clearly more efficient as now each chain involves
a much smaller data set and the sampling is communication-free. The key to a successful EP-MCMC
algorithm lies in the speed and accuracy of the combining rule.
Existing EP-MCMC algorithms can be roughly divided into three categories. The first relies on
asymptotic normality of posterior distributions. [3] propose a "Consensus Monte Carlo" algorithm,
which produces final approximation by a weighted averaging over all subset draws. This approach is
effective when the posterior distributions are close to Gaussian, but could suffer from huge bias when
skewness and multi-modes are present. The second category relies on calculating an appropriate
variant of a mean or median of the subset posterior measures [4, 5]. These approaches rely on
asymptotics (size of data increasing to infinity) to justify accuracy, and lack guarantees in finite
samples. The third category relies on the product density equation (PDE) in (1). Assuming X is the
observed data and θ is the parameter of interest, when the observations are iid conditioned on θ, for any partition of X = X^(1) ∪ X^(2) ∪ ⋯ ∪ X^(m), the following identity holds:

p(θ | X) ∝ π(θ) p(X | θ) ∝ p(θ | X^(1)) p(θ | X^(2)) ⋯ p(θ | X^(m)),   (1)

if the priors on the full data and subsets satisfy π(θ) = ∏_{i=1}^m π_i(θ). [6] propose using kernel density
estimation on each subset posterior and then combining via (1). They use an independent Metropolis
sampler to resample from the combined density. [7] apply the Weierstrass transform directly to (1)
and developed two sampling algorithms based on the transformed density. These methods guarantee
the approximation density converges to the true posterior density as the number of posterior draws
increase. However, as both are kernel-based, the two methods are limited by two major drawbacks.
The first is the inefficiency of resampling. Kernel density estimators are essentially mixture distributions. Assuming we have collected 10,000 posterior samples on each machine, then multiplying
just two densities already yields a mixture distribution containing 10⁸ components, each of which
is associated with a different weight. The resampling requires the independent Metropolis sampler
to search over an exponential number of mixture components and it is likely to get stuck at one
?good? component, resulting in high rejection rates and slow mixing. The second is the sensitivity
to bandwidth choice, with one bandwidth applied to the whole space.
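Before turning to histograms, identity (1) is easy to verify numerically in a conjugate toy case; the sketch below (our illustration) checks it for a normal mean with known noise variance, splitting the N(0, τ²) prior as π_i = N(0, mτ²) so that the subset priors multiply back to the full prior.

import numpy as np

def normal_posterior(x, prior_var, noise_var=1.0):
    # Posterior N(mu, v) of a normal mean under a N(0, prior_var) prior.
    v = 1.0 / (len(x) / noise_var + 1.0 / prior_var)
    return v * x.sum() / noise_var, v

rng = np.random.default_rng(1)
m, tau2 = 4, 10.0
x = rng.normal(2.0, 1.0, size=400)
mu_full, v_full = normal_posterior(x, tau2)
# Multiply the m subset posteriors (a product of Gaussians is Gaussian:
# precisions add, and the mean is the precision-weighted average).
precs, means = zip(*[(1.0 / v, mu / v) for mu, v in
                     (normal_posterior(s, m * tau2) for s in np.split(x, m))])
v_agg = 1.0 / sum(precs)
mu_agg = v_agg * sum(means)
assert np.allclose([mu_full, v_full], [mu_agg, v_agg])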
In this article, we propose a novel EP-MCMC algorithm termed "parallel aggregation random trees"
(PART), which solves the above two problems. The algorithm inhibits the explosion of mixture
components so that the aggregated density is easy to resample. In addition, the density estimator is
able to adapt to multiple scales and thus achieve better approximation accuracy. In Section 2, we
motivate the new methodology and present the algorithm. In Section 3, we present error bounds and
prove consistency of PART in the number of posterior draws. Experimental results are presented in
Section 4. Proofs and part of the numerical results are provided in the supplementary materials.
2 Method
Recall the PDE identity (1) in the introduction. When data set X is partitioned into m subsets
X = X^(1) ∪ ⋯ ∪ X^(m), the posterior distribution of the ith subset can be written as

f^(i)(θ) ∝ π(θ)^{1/m} p(X^(i) | θ),   (2)
where π(θ) is the prior assigned to the full data set. Assuming observations are iid given θ, the relationship between the full-data posterior and the subset posteriors is captured by

p(θ | X) ∝ π(θ) ∏_{i=1}^m p(X^(i) | θ) ∝ ∏_{i=1}^m f^(i)(θ).   (3)
Due to the flaws of applying kernel-based density estimation to (3) mentioned above, we propose to use random partition trees or multi-scale histograms. Let F_K be the collection of all R^p partitions formed by K disjoint rectangular blocks, where a rectangular block takes the form A_k := (l_{k,1}, r_{k,1}] × (l_{k,2}, r_{k,2}] × ⋯ × (l_{k,p}, r_{k,p}] ⊆ R^p for some l_{k,q} < r_{k,q}. A K-block histogram is then defined as

f̂^(i)(θ) = Σ_{k=1}^K [n_k^(i) / (N |A_k|)] 1(θ ∈ A_k),   (4)
where {A_k : k = 1, 2, ..., K} ∈ F_K are the blocks, and N, n_k^(i) are the total number of posterior samples on the ith subset and the number of those inside block A_k, respectively (assuming the same N across subsets). We use |·| to denote the area of a block. Assuming each subset posterior is approximated
by a K-block histogram, if the partition {Ak } is restricted to be the same across all subsets, then the
aggregated density after applying (3) is still a K-block histogram (illustrated in the supplement),
p̂(θ | X) = (1/Z) ∏_{i=1}^m f̂^(i)(θ) = (1/Z) Σ_{k=1}^K ∏_{i=1}^m [n_k^(i) / |A_k|] 1(θ ∈ A_k) = Σ_{k=1}^K w_k g_k(θ),   (5)
where Z = Σ_{k=1}^K ∏_{i=1}^m n_k^(i) / |A_k|^{m−1} is the normalizing constant, the w_k's are the updated weights, and g_k(θ) = unif(θ; A_k) is the block-wise uniform distribution. Common histogram blocks across subsets
control the number of mixture components, leading to simple aggregation and resampling procedures. Our PART algorithm consists of space partitioning followed by density aggregation, with
aggregation simply multiplying densities across subsets for each block and then normalizing.
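Operationally, the aggregation in (5) is a product of counts followed by a renormalization, as in this sketch (ours; log-space products would be preferable for large m, and a within-block draw is uniform over the rectangle A_k):

import numpy as np

def aggregate_blocks(counts, areas, n_draws, rng):
    # counts: m x K array of per-subset block counts n_k^(i);
    # areas: length-K array of block areas |A_k|.
    m = counts.shape[0]
    w = counts.prod(axis=0, dtype=float) / areas ** (m - 1)
    w /= w.sum()                                 # divide by Z, as in Eq. (5)
    return rng.choice(len(areas), size=n_draws, p=w)  # block index per draw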
2.1 Space Partitioning
To find good partitions, our algorithm recursively bisects (not necessarily evenly) a previous block
along a randomly selected dimension, subject to certain rules. Such partitioning is multi-scale and
related to wavelets [8]. Assume we are currently splitting the block A along the dimension q and
denote the posterior samples in A by {θ_j^(i)}_{j∈A} for the ith subset. The cut point on dimension q is determined by a partition rule φ({θ^(1)_{j,q}}, {θ^(2)_{j,q}}, ..., {θ^(m)_{j,q}}). The resulting two blocks are subject to further bisecting under the same procedure until one of the following stopping criteria is met: (i) n_k/N < δ_ρ, or (ii) the area of the block |A_k| becomes smaller than δ_{|A|}. The algorithm returns a tree with K leaves, each corresponding to a block A_k. Details are provided in Algorithm 1.
Algorithm 1 Partition tree algorithm
1:  procedure BuildTree({θ_j^(1)}, {θ_j^(2)}, ..., {θ_j^(m)}, φ(·), δ_ρ, δ_a, N, L, R)
2:      D ← {1, 2, ..., p}
3:      while D not empty do
4:          Draw q uniformly at random from D.        ▷ Randomly choose the dimension to cut
5:          θ*_q ← φ({θ^(1)_{j,q}}, {θ^(2)_{j,q}}, ..., {θ^(m)_{j,q}}),  T.n^(i) ← cardinality of {θ_j^(i)} for all i
6:          if θ*_q − L_q > δ_a, R_q − θ*_q > δ_a and min(Σ_j 1(θ^(i)_{j,q} ≤ θ*_q), Σ_j 1(θ^(i)_{j,q} > θ*_q)) > N δ_ρ for all i then
7:              L′ ← L, L′_q ← θ*_q, R′ ← R, R′_q ← θ*_q        ▷ Update left and right boundaries
8:              T.L ← BuildTree({θ_j^(1) : θ^(1)_{j,q} ≤ θ*_q}, ..., {θ_j^(m) : θ^(m)_{j,q} ≤ θ*_q}, ..., N, L, R′)
9:              T.R ← BuildTree({θ_j^(1) : θ^(1)_{j,q} > θ*_q}, ..., {θ_j^(m) : θ^(m)_{j,q} > θ*_q}, ..., N, L′, R)
10:             return T
11:         else
12:             D ← D \ {q}        ▷ Try cutting at another dimension
13:         end if
14:     end while
15:     T.L ← NULL, T.R ← NULL, return T        ▷ Leaf node
16: end procedure
In Algorithm 1, the area criterion δ_{|A|} becomes a minimum edge length of a block, δ_a (possibly different across dimensions). The quantities L, R ∈ R^p are the left and right boundaries of the samples respectively, which take the sample minimum/maximum when the support is unbounded. We consider two choices for the partition rule φ(·): maximum (empirical) likelihood partition (ML) and median/KD-tree partition (KD).
Maximum Likelihood Partition (ML) ML-partition searches for partitions by greedily maximizing the empirical log-likelihood at each iteration. For m = 1 we have

θ*_q = φ_ML({θ_{j,q}, j = 1, ..., n}) = arg max_{n_1+n_2=n, A_1∪A_2=A} (n_1/(n|A_1|))^{n_1} (n_2/(n|A_2|))^{n_2},   (6)

where n_1 and n_2 are counts of posterior samples in A_1 and A_2, respectively. The solution to (6) falls inside the set {θ_j}. Thus, a simple linear search after sorting the samples suffices (by book-keeping the ordering, sorting the whole block once is enough for the entire procedure). For m > 1, we have
φ_{q,ML}(·) = arg max_{θ* ∈ ∪_i {θ_j^(i)}} ∏_{i=1}^m (n_1^(i)/(n^(i)|A_1|))^{n_1^(i)} (n_2^(i)/(n^(i)|A_2|))^{n_2^(i)},   (7)

similarly solved by a linear search. This is dominated by sorting and takes O(n log n) time.
Median/KD-Tree Partition (KD) Median/KD-tree partition cuts at the empirical median of the posterior samples. When there are multiple subsets, the median is taken over pooled samples to force {A_k} to be the same across subsets. Searching for the median takes O(n) time [9], which is faster than ML-partition, especially when the number of posterior draws is large. The same partitioning strategy is adopted by KD-trees [10].
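Both rules are short in code; the sketch below (our one-dimensional illustration, with m = 1 for the ML rule) mirrors the sort-then-scan search described above, where left and right play the roles of L_q and R_q.

import numpy as np

def cut_kd(pooled):
    # Median/KD rule: cut at the empirical median of the pooled samples.
    return np.median(pooled)

def cut_ml(x, left, right):
    # ML rule for m = 1 (Eq. (6)): after one sort, scan candidate cut points
    # and maximize the empirical log-likelihood of the two-block histogram.
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    best_cut, best_ll = None, -np.inf
    for k in range(1, n):                 # cut just after x[k-1]
        c, n1, n2 = x[k - 1], k, n - k
        a1, a2 = c - left, right - c
        if a1 <= 0 or a2 <= 0:
            continue
        ll = n1 * np.log(n1 / (n * a1)) + n2 * np.log(n2 / (n * a2))
        if ll > best_ll:
            best_cut, best_ll = c, ll
    return best_cut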
2.2 Density Aggregation
Given a common partition, Algorithm 2 aggregates all subsets in one stage. However, assuming a
single "good" partition for all subsets is overly restrictive when m is large. Hence, we also consider
pairwise aggregation [6, 7], which recursively groups subsets into pairs, combines each pair with
Algorithm 2, and repeats until one final set is obtained. Run time of PART is dominated by space
partitioning (B UILD T REE), with normalization and resampling very fast.
Algorithm 2 Density aggregation algorithm (drawing N′ samples from the aggregated posterior)
1:  procedure OneStageAggregate({θ_j^(1)}, {θ_j^(2)}, ..., {θ_j^(m)}, φ(·), δ_ρ, δ_a, N, N′, L, R)
2:      T ← BuildTree({θ_j^(1)}, {θ_j^(2)}, ..., {θ_j^(m)}, φ(·), δ_ρ, δ_a, N, L, R),  Z ← 0
3:      ({A_k}, {n_k^(i)}) ← TraverseLeaf(T)
4:      for k = 1, 2, ..., K do
5:          w̃_k ← ∏_{i=1}^m n_k^(i) / |A_k|^{m−1},  Z ← Z + w̃_k        ▷ Multiply inside each block
6:      end for
7:      w_k ← w̃_k / Z for all k        ▷ Normalize
8:      for t = 1, 2, ..., N′ do
9:          Draw k with weights {w_k} and then draw θ_t ∼ g_k(θ)
10:     end for
11:     return {θ_1, θ_2, ..., θ_{N′}}
12: end procedure
2.3 Variance Reduction and Smoothing
Random Tree Ensemble. Inspired by random forests [11, 12], the full posterior is estimated by
averaging T independent trees output by Algorithm 1. Smoothing and averaging can reduce variance and yield better approximation accuracy. The trees can be built in parallel and resampling in
Algorithm 2 only additionally requires picking a tree uniformly at random.
Local Gaussian Smoothing. As another approach to increase smoothness, the blockwise uniform distribution in (5) can be replaced by a Gaussian distribution g_k = N(θ; μ_k, Σ_k), with mean and covariance estimated "locally" by samples within the block. A multiplied Gaussian approximation is used: Σ_k = (Σ_{i=1}^m Σ̂_k^{(i)−1})^{−1}, μ_k = Σ_k (Σ_{i=1}^m Σ̂_k^{(i)−1} μ̂_k^{(i)}), where μ̂_k^{(i)} and Σ̂_k^{(i)} are estimated with the ith subset. We apply both random tree ensembles and local Gaussian smoothing in all applications of PART in this article unless explicitly stated otherwise.
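The multiplied Gaussian above is the standard product-of-Gaussians combination, computed blockwise as in this sketch (ours):

import numpy as np

def multiplied_gaussian(mus, Sigmas):
    # Combine per-subset local Gaussians N(mu_k^(i), Sigma_k^(i)) for one
    # block: Sigma_k = inv(sum_i inv(Sigma^(i))),
    #        mu_k    = Sigma_k @ sum_i inv(Sigma^(i)) @ mu^(i).
    precs = [np.linalg.inv(S) for S in Sigmas]
    Sigma = np.linalg.inv(sum(precs))
    mu = Sigma @ sum(P @ mu_i for P, mu_i in zip(precs, mus))
    return mu, Sigma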
3 Theory
In this section, we provide consistency theory (in the number of posterior samples) for histograms
and the aggregated density. We do not consider the variance reduction and smoothing modifications
in these developments for simplicity in exposition, but extensions are possible. Section 3.1 provides error bounds on ML and KD-tree partitioning-based histogram density estimators constructed
from N independent samples from a single joint posterior; modified bounds can be obtained for
MCMC samples incorporating the mixing rate, but will not be considered here. Section 3.2 then
provides corresponding error bounds for our PART-aggregrated density estimators in the one-stage
and pairwise cases. Detailed proofs are provided in the supplementary materials.
Let f(θ) be a p-dimensional posterior density function. Assume f is supported on a measurable set Ω ⊆ R^p. Since one can always transform Ω to a bounded region by scaling, we simply assume Ω = [0, 1]^p as in [8, 13] without loss of generality. We also assume that f ∈ C¹(Ω).
3.1 Space partitioning
Maximum likelihood partition (ML) For a given K, the ML partition solves the following problem:

f̂_ML = arg max (1/N) Σ_{k=1}^K n_k log[n_k / (N |A_k|)],   s.t. n_k/N ≥ c_0 δ, |A_k| ≥ δ/D,   (8)

for some c_0 and δ, where D = ‖f‖_∞ < ∞. We have the following result.
Theorem 1. Choose δ = 1/K^{1+1/(2p)}. For any ε > 0, if the sample size satisfies N > 2(1 − c_0)^{−2} K^{1+1/(2p)} log(2K/ε), then with probability at least 1 − ε, the optimal solution to (8) satisfies

D_KL(f ‖ f̂_ML) ≤ (C_1 + 2 log K) K^{−1/(2p)} + C_2 max{log D, 2 log K} √((K/N) log(3eN/K) log(8/ε)),

where C_1 = log D + 4pLD with L = ‖f′‖_∞, and C_2 = 48√(p+1).
When multiple densities f^(1)(θ), ..., f^(m)(θ) are presented, our goal of imposing the same partition on all functions requires solving a different problem,

(f̂_ML^(i))_{i=1}^m = arg max Σ_{i=1}^m (1/N_i) Σ_{k=1}^K n_k^(i) log[n_k^(i) / (N_i |A_k|)],   s.t. n_k^(i)/N_i ≥ c_0 δ, |A_k| ≥ δ/D,   (9)

where N_i is the number of posterior samples for function f^(i). A result similar to Theorem 1 for (9) is provided in the supplementary materials.
Median partition/KD-tree (KD) The KD-tree f?KD cuts at the empirical median for different
dimensions. We have the following result.
Theorem 2. For any $\delta > 0$, define $r_\delta = \log_2\big(1 + \frac{1}{2+3L/\delta}\big)$. For any $\varepsilon > 0$, if $N > 32e^2(\log K)^2 K \log(2K/\varepsilon)$, then with probability at least $1-\varepsilon$, we have
$$\|\hat f_{KD} - f\|_1 \le \delta + pLK^{-r_\delta/p} + 4e\log K\sqrt{\frac{2K}{N}\log\frac{2K}{\varepsilon}}.$$
If $f(\theta)$ is further lower bounded by some constant $b_0 > 0$, we can then obtain an upper bound on the KL-divergence. Define $r_{b_0} = \log_2\big(1 + \frac{1}{2+3L/b_0}\big)$ and we have
$$D_{KL}(f\,\|\,\hat f_{KD}) \le \frac{pLD}{b_0}K^{-r_{b_0}/p} + 8e\log K\sqrt{\frac{2K}{N}\log\frac{2K}{\varepsilon}}.$$
When there are multiple functions and the median partition is performed on the pooled data, the partition might not happen at the empirical median of each subset. However, as long as the partition quantiles are upper and lower bounded by $\xi$ and $1-\xi$ for some $\xi \in [1/2, 1)$, we can establish results similar to Theorem 2. The result is provided in the supplementary materials.
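For intuition, a minimal Python sketch of the KD rule on the unit cube is given below: it cycles through the dimensions, splits at the empirical median, and returns the leaf blocks with their sample counts. It omits the minimum block-probability and block-size safeguards that the actual estimator enforces.

import numpy as np

def kd_partition(samples, depth_limit):
    # samples: (N, p) array assumed to lie in [0, 1]^p.
    p = samples.shape[1]

    def recurse(idx, lo, hi, depth):
        if depth == depth_limit or len(idx) <= 1:
            return [(lo.copy(), hi.copy(), len(idx))]   # one leaf block
        d = depth % p                                    # cycle dimensions
        med = np.median(samples[idx, d])                 # empirical median cut
        left = idx[samples[idx, d] <= med]
        right = idx[samples[idx, d] > med]
        hi_l, lo_r = hi.copy(), lo.copy()
        hi_l[d] = med
        lo_r[d] = med
        return (recurse(left, lo, hi_l, depth + 1)
                + recurse(right, lo_r, hi, depth + 1))

    return recurse(np.arange(len(samples)), np.zeros(p), np.ones(p), 0)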
3.2 Posterior aggregation
The previous section provides estimation error bounds on individual posterior densities, through which we can bound the distance between the true posterior conditional on the full data set and the aggregated density via (3). Assume we have $m$ density functions $\{f^{(i)}, i = 1, 2, \cdots, m\}$ and intend to approximate their aggregated density $f_I = \prod_{i\in I} f^{(i)} / \int \prod_{i\in I} f^{(i)}$, where $I = \{1, 2, \cdots, m\}$. Notice that for any $I' \subseteq I$, $f_{I'} = p(\theta \mid \bigcup_{i\in I'} X^{(i)})$. Let $D = \max_{I'\subseteq I} \|f_{I'}\|_\infty$, i.e., $D$ is an upper bound on all posterior densities formed by a subset of $X$. Also define $Z_{I'} = \int \prod_{i\in I'} f^{(i)}$. These quantities depend only on the model and the observed data (not the posterior samples). We denote $\hat f_{ML}$ and $\hat f_{KD}$ by $\hat f$, as the following results apply similarly to both methods.
The "one-stage" aggregation (Algorithm 2) first obtains an approximation $\hat f^{(i)}$ for each $f^{(i)}$ (via either ML-partition or KD-partition) and then computes $\hat f_I = \prod_{i\in I}\hat f^{(i)} / \int \prod_{i\in I}\hat f^{(i)}$.
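When all m estimates share the same K blocks, this product is a one-liner per block: multiply inside each block, then normalize. A sketch with assumed array inputs:

import numpy as np

def one_stage_aggregate(block_densities, volumes):
    # block_densities: (m, K) array; entry [i, k] is fhat^(i) on block A_k.
    # volumes: (K,) array of block volumes |A_k|.
    prod = np.prod(block_densities, axis=0)   # multiply inside each block
    Z = np.sum(prod * volumes)                # normalizing constant
    return prod / Z                           # aggregated blockwise density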
Theorem 3 (One-stage aggregation). Denote the average total variation distance between $f^{(i)}$ and $\hat f^{(i)}$ by $\varepsilon$. Assume the conditions in Theorems 1 and 2 and, for the ML-partition,
$$N \ge 32\,c_0^{-2}\sqrt{2(p+1)}\;K^{3+\frac{1}{2p}}\sqrt{\log\frac{3eN}{K}\log\frac{8}{\varepsilon}},$$
and for the KD-partition $N > 128e^2K(\log K)^2\log(K/\varepsilon)$. Then with high probability the total variation distance between $f_I$ and $\hat f_I$ is bounded by $\|f_I - \hat f_I\|_1 \le \frac{2}{Z_I}\,m\,(2D)^{m-1}\varepsilon$, where $Z_I$ is a constant that does not depend on the posterior samples.
The approximation error of Algorithm 2 increases dramatically with the number of subsets. To ameliorate this, we introduce the pairwise aggregation strategy in Section 2, for which we have the following result.
Theorem 4 (Pairwise aggregation). Denote the average total variation distance between $f^{(i)}$ and $\hat f^{(i)}$ by $\varepsilon$. Assume the conditions in Theorem 3. Then with high probability the total variation distance between $f_I$ and $\hat f_I$ is bounded by $\|f_I - \hat f_I\|_1 \le (4C_0D)^{\log_2 m + 1}\varepsilon$, where $C_0 = \max_{I''\subset I'\subseteq I} \frac{Z_{I''}\,Z_{I'\setminus I''}}{Z_{I'}}$ is a constant that does not depend on the posterior samples.
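The pairwise strategy can be sketched as a binary-tree reduction over the subsets, which is why the error in Theorem 4 grows with log2(m) stages rather than m. Here combine stands in for one PART run on a pair of sample sets and is an assumed black box:

def pairwise_aggregate(subset_samples, combine):
    # Merge adjacent pairs of subset posteriors stage by stage.
    while len(subset_samples) > 1:
        nxt = [combine(subset_samples[i], subset_samples[i + 1])
               for i in range(0, len(subset_samples) - 1, 2)]
        if len(subset_samples) % 2 == 1:    # carry an odd leftover forward
            nxt.append(subset_samples[-1])
        subset_samples = nxt
    return subset_samples[0]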
4 Experiments
In this section, we evaluate the empirical performance of PART^1 and compare the two algorithms PART-KD and PART-ML to the following posterior aggregation algorithms.
1. Simple averaging (average): each aggregated sample is an arithmetic average of M samples coming from M subsets.
2. Weighted averaging (weighted): also called the Consensus Monte Carlo algorithm [3], where each aggregated sample is a weighted average of M samples. The weights are optimally chosen for a Gaussian posterior.
3. Weierstrass rejection sampler (Weierstrass): subset posterior samples are passed through a rejection sampler based on the Weierstrass transform to produce the aggregated samples [7]. We use its R package^2 for experiments.
4. Parametric density product (parametric): aggregated samples are drawn from a multivariate Gaussian, which is a product of Laplacian approximations to subset posteriors [6].
5. Nonparametric density product (nonparametric): the aggregated posterior is approximated by a product of kernel density estimates of subset posteriors [6]. Samples are drawn with an independent Metropolis sampler.
6. Semiparametric density product (semiparametric): similar to the nonparametric method, but with subset posteriors estimated semiparametrically [6, 14].
All experiments except the two toy examples use adaptive MCMC [15, 16]^3 for posterior sampling. For PART-KD/ML, one-stage aggregation (Algorithm 2) is used only for the toy examples (results from pairwise aggregation are provided in the supplement). For the other experiments, pairwise aggregation is used, which draws 50,000 samples for intermediate stages and halves $\delta_\rho$ after each stage to refine the resolution (the value of $\delta_\rho$ listed below is for the final stage). The random ensemble of PART consists of 40 trees.
4.1 Two Toy Examples
The two toy examples highlight the performance of our methods in terms of (i) recovering multiple modes and (ii) correctly locating posterior mass when subset posteriors are heterogeneous. The PART-KD/PART-ML results are obtained from Algorithm 2 without local Gaussian smoothing.
^1 MATLAB implementation available from https://github.com/richardkwo/random-tree-parallel-MCMC
^2 https://github.com/wwrechard/weierstrass
^3 http://helios.fmi.fi/~lainema/mcmc/
Bimodal Example Figure 1 shows an example consisting of $m = 10$ subsets. Each subset consists of 10,000 samples drawn from a mixture of two univariate normals $0.27\,\mathcal{N}(\mu_{i,1}, \sigma_{i,1}^2) + 0.73\,\mathcal{N}(\mu_{i,2}, \sigma_{i,2}^2)$, with the means and standard deviations slightly different across subsets, given by $\mu_{i,1} = -5 + \epsilon_{i,1}$, $\mu_{i,2} = 5 + \epsilon_{i,2}$ and $\sigma_{i,1} = 1 + |\delta_{i,1}|$, $\sigma_{i,2} = 4 + |\delta_{i,2}|$, where $\epsilon_{i,l} \sim \mathcal{N}(0, 0.5)$ and $\delta_{i,l} \sim \mathcal{N}(0, 0.1)$ independently for $i = 1, \cdots, 10$ and $l = 1, 2$. The resulting true combined posterior (red solid) consists of two modes with different scales. In Figure 1, the left panel shows the subset posteriors (dashed) and the true posterior; the right panel compares the results of the various methods to the truth. A few are omitted in the graph: average and weighted average overlap with parametric, and Weierstrass overlaps with PART-KD/PART-ML.
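Under one reading of the setup (treating the second parameter of N(·, ·) as the standard deviation), the subset samples can be generated as follows; this is an illustrative sketch, not the paper's code.

import numpy as np

rng = np.random.default_rng(0)
subsets = []
for i in range(10):                         # m = 10 subsets
    e1, e2 = rng.normal(0, 0.5, 2)          # epsilon_{i,l}
    d1, d2 = rng.normal(0, 0.1, 2)          # delta_{i,l}
    mu1, mu2 = -5 + e1, 5 + e2
    s1, s2 = 1 + abs(d1), 4 + abs(d2)
    comp = rng.random(10000) < 0.27         # mixture weights 0.27 / 0.73
    x = np.where(comp, rng.normal(mu1, s1, 10000), rng.normal(mu2, s2, 10000))
    subsets.append(x)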
[Figure 1: Bimodal posterior combined from 10 subsets. Left: the true posterior and subset posteriors (dashed). Right: aggregated posterior output by various methods (PART-KD, PART-ML, parametric, nonparametric, semiparametric) compared to the truth. Results are based on 10,000 aggregated samples.]
Rare Bernoulli Example We consider $N = 10{,}000$ Bernoulli trials $x_i \overset{iid}{\sim} \mathrm{Ber}(\theta)$ split into $m = 15$ subsets. The parameter $\theta$ is chosen to be $2m/N$ so that on average each subset only contains 2 successes. By random partitioning, the subset posteriors are rather heterogeneous, as plotted in dashed lines in the left panel of Figure 2. The prior is set as $\pi(\theta) = \mathrm{Beta}(\theta; 2, 2)$. The right panel of Figure 2 compares the results of the various methods. PART-KD, PART-ML and Weierstrass capture the true posterior shape, while parametric, average and weighted average are all biased. The nonparametric and semiparametric methods produce flat densities near zero (not visible in Figure 2 due to the scale).
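Because the Beta prior is conjugate for Bernoulli data, each subset posterior here has a closed form, which makes the heterogeneity easy to reproduce. A sketch (ignoring any prior tempering across subsets, which is not detailed here):

import numpy as np

rng = np.random.default_rng(1)
N, m = 10000, 15
theta = 2 * m / N                               # about 2 successes per subset
x = rng.random(N) < theta
splits = np.array_split(rng.permutation(x), m)  # random partitioning
# With a Beta(2, 2) prior, subset i's posterior is Beta(2 + s_i, 2 + n_i - s_i):
subset_posteriors = [(2 + s.sum(), 2 + len(s) - s.sum()) for s in splits]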
[Figure 2: The posterior for the probability θ of a rare event. Left: the full posterior (solid) and m = 15 subset posteriors (dashed). Right: aggregated posterior output by various methods, including PART-KD and PART-ML. All results are based on 20,000 aggregated samples.]
4.2 Bayesian Logistic Regression
Synthetic dataset The dataset $\{(x_i, y_i)\}_{i=1}^N$ consists of $N = 50{,}000$ observations in $p = 50$ dimensions. All features $x_i \in \mathbb{R}^{p-1}$ are drawn from $\mathcal{N}_{p-1}(0, \Sigma)$ with $p = 50$ and $\Sigma_{k,l} = 0.9^{|k-l|}$. The model intercept is set to $-3$ and the other coefficients $\beta_j^*$ are drawn randomly from $\mathcal{N}(0, 5^2)$. Conditional on $x_i$, $y_i \in \{0, 1\}$ follows $p(y_i = 1) = 1/(1 + \exp(-\beta^{*T}[1, x_i]))$. The dataset is randomly split into $m = 40$ subsets. For both the full chain and the subset chains, we run adaptive MCMC for 200,000 iterations after 100,000 burn-in. Thinning by 4 results in $T = 50{,}000$ samples.
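A sketch of the synthetic data generation under the stated model (the random seed and generator choices are ours):

import numpy as np

rng = np.random.default_rng(2)
N, p = 50000, 50
idx = np.arange(p - 1)
Sigma = 0.9 ** np.abs(np.subtract.outer(idx, idx))        # Sigma_{k,l} = 0.9^{|k-l|}
X = rng.multivariate_normal(np.zeros(p - 1), Sigma, size=N)
beta = np.concatenate(([-3.0], rng.normal(0, 5, p - 1)))  # intercept -3
logits = np.column_stack([np.ones(N), X]) @ beta
y = rng.random(N) < 1 / (1 + np.exp(-logits))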
The samples from the full chain (denoted as $\{\theta_j\}_{j=1}^T$) are treated as the ground truth. To compare the accuracy of the different methods, we resample $T$ points $\{\hat\theta_j\}$ from each aggregated posterior and then compare them using the following metrics: (1) the RMSE of the posterior mean, $\frac{1}{\sqrt{p}}\big\|\frac{1}{T}\sum_j \hat\theta_j - \frac{1}{T}\sum_j \theta_j\big\|_2$; (2) the approximate KL divergences $D_{KL}(p(\theta)\,\|\,\hat p(\theta))$ and $D_{KL}(\hat p(\theta)\,\|\,p(\theta))$, where $\hat p$ and $p$ are both approximated by multivariate Gaussians; (3) the posterior concentration ratio, defined as $r = \sqrt{\sum_j \|\hat\theta_j - \theta^*\|_2^2 \,/\, \sum_j \|\theta_j - \theta^*\|_2^2}$, which measures how the posterior spreads out around the true value (with $r = 1$ being ideal). The results are provided in Table 1. Figure 4 shows $D_{KL}(p\,\|\,\hat p)$ versus the length of the subset chains supplied to the aggregation algorithm. The results of PART are obtained with $\delta_\rho = 0.001$, $\delta_a = 0.0001$ and 40 trees. Figure 3 showcases the aggregated posterior for two parameters in terms of joint and marginal distributions.
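Metrics (2) and (3) are straightforward to compute; a sketch, with metric (2) under the Gaussian approximation:

import numpy as np

def gauss_kl(mu0, S0, mu1, S1):
    # KL( N(mu0, S0) || N(mu1, S1) ), used to approximate metric (2).
    d = len(mu0)
    S1inv = np.linalg.inv(S1)
    return 0.5 * (np.trace(S1inv @ S0)
                  + (mu1 - mu0) @ S1inv @ (mu1 - mu0)
                  - d + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def concentration_ratio(samples_hat, samples, theta_star):
    # Metric (3): r = sqrt( sum ||theta_hat - theta*||^2 / sum ||theta - theta*||^2 ).
    num = np.sum((samples_hat - theta_star) ** 2)
    den = np.sum((samples - theta_star) ** 2)
    return np.sqrt(num / den)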
Method         | RMSE  | DKL(p‖p̂)    | DKL(p̂‖p)    | r
PART (KD)      | 0.587 | 3.95 × 10^2 | 6.45 × 10^2 | 3.94
PART (ML)      | 1.399 | 8.05 × 10^1 | 5.47 × 10^2 | 9.17
average        | 29.93 | 2.53 × 10^3 | 5.41 × 10^4 | 184.62
weighted       | 38.28 | 2.60 × 10^4 | 2.53 × 10^5 | 236.15
Weierstrass    | 6.47  | 7.20 × 10^2 | 2.62 × 10^3 | 39.96
parametric     | 10.07 | 2.46 × 10^3 | 6.12 × 10^3 | 62.13
nonparametric  | 25.59 | 3.40 × 10^4 | 3.95 × 10^4 | 157.86
semiparametric | 25.45 | 2.06 × 10^4 | 3.90 × 10^4 | 156.97

Table 1: Accuracy of posterior aggregation on logistic regression.
[Figure 3: Posterior of β1 and β17 (joint and marginals), comparing PART-KD and PART-ML to the full chain.]
Real datasets We also run experiments on two real datasets: (1) the Covertype dataset^4 [17] consists of 581,012 observations in 54 dimensions, and the task is to predict the type of forest cover from cartographic measurements; (2) the MiniBooNE dataset^5 [18, 19] consists of 130,065 observations in 50 dimensions, where the task is to distinguish electron neutrinos from muon neutrinos using experimental data. For both datasets, we reserve 1/5 of the data as the test set. The training set is randomly split into m = 50 and m = 25 subsets for Covertype and MiniBooNE, respectively. Figure 5 shows the prediction accuracy versus total runtime (parallel subset MCMC + aggregation time) for the different methods. For each MCMC chain, the first 20% of iterations are discarded as burn-in before aggregation. The aggregated chain is required to be of the same length as the subset chains. As a reference, we also plot the results for the full chain and for lasso [20] run on the full training set.
[Figure 4: Approximate KL divergence between the full chain and the combined posterior versus the length of subset chains (PART-KD, PART-ML).]
[Figure 5: Prediction accuracy versus total runtime (running chain + aggregation) on the Covertype and MiniBooNE datasets for PART-KD, PART-ML, Weierstrass, average, weighted, parametric and nonparametric, with the full chain and lasso as references (semiparametric is not compared due to its long running time). Plots against the length of chain are provided in the supplement.]
5 Conclusion
In this article, we propose a new embarrassingly-parallel MCMC algorithm, PART, that can efficiently draw posterior samples for large data sets. PART is simple to implement, efficient in subset combining and has theoretical guarantees. Compared to existing EP-MCMC algorithms, PART has substantially improved performance. Possible future directions include (1) exploring other multi-scale density estimators which share similar properties with partition trees but have better approximation accuracy, and (2) developing a tuning procedure for choosing good $\delta_\rho$ and $\delta_a$, which are essential to the performance of PART.
^4 http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html
^5 https://archive.ics.uci.edu/ml/machine-learning-databases/00199
References
[1] Max Welling and Yee W Teh. Bayesian learning via stochastic gradient Langevin dynamics. In
Proceedings of the 28th International Conference on Machine Learning (ICML-11), 2011.
[2] Dougal Maclaurin and Ryan P Adams. Firefly Monte Carlo: Exact MCMC with subsets of
data. Proceedings of the conference on Uncertainty in Artificial Intelligence (UAI), 2014.
[3] Steven L Scott, Alexander W Blocker, Fernando V Bonassi, Hugh A Chipman, Edward I
George, and Robert E McCulloch. Bayes and big data: The consensus Monte Carlo algorithm.
In EFaBBayes 250 conference, volume 16, 2013.
[4] Stanislav Minsker, Sanvesh Srivastava, Lizhen Lin, and David Dunson. Scalable and robust
bayesian inference via the median posterior. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), 2014.
[5] Sanvesh Srivastava, Volkan Cevher, Quoc Tran-Dinh, and David B Dunson. WASP: Scalable
Bayes via barycenters of subset posteriors. In Proceedings of the 18th International Conference
on Artificial Intelligence and Statistics (AISTATS), volume 38, 2015.
[6] Willie Neiswanger, Chong Wang, and Eric Xing. Asymptotically exact, embarrassingly parallel MCMC. In Proceedings of the Thirtieth Annual Conference on Uncertainty in
Artificial Intelligence (UAI-14), pages 623?632, Corvallis, Oregon, 2014. AUAI Press.
[7] Xiangyu Wang and David B Dunson. Parallel MCMC via Weierstrass sampler. arXiv preprint
arXiv:1312.4605, 2013.
[8] Linxi Liu and Wing Hung Wong. Multivariate density estimation based on adaptive partitioning: Convergence rate, variable selection and spatial adaptation. arXiv preprint
arXiv:1401.2597, 2014.
[9] Manuel Blum, Robert W Floyd, Vaughan Pratt, Ronald L Rivest, and Robert E Tarjan. Time
bounds for selection. Journal of Computer and System Sciences, 7(4):448?461, 1973.
[10] Jon Louis Bentley. Multidimensional binary search trees used for associative searching. Communications of the ACM, 18(9):509?517, 1975.
[11] Leo Breiman. Random forests. Machine Learning, 45(1):5?32, 2001.
[12] Leo Breiman. Bagging predictors. Machine Learning, 24(2):123?140, 1996.
[13] Xiaotong Shen and Wing Hung Wong. Convergence rate of sieve estimates. The Annals of
Statistics, pages 580?615, 1994.
[14] Nils Lid Hjort and Ingrid K Glad. Nonparametric density estimation with a parametric start.
The Annals of Statistics, pages 882?904, 1995.
[15] Heikki Haario, Marko Laine, Antonietta Mira, and Eero Saksman. DRAM: efficient adaptive
MCMC. Statistics and Computing, 16(4):339?354, 2006.
[16] Heikki Haario, Eero Saksman, and Johanna Tamminen. An adaptive Metropolis algorithm.
Bernoulli, pages 223?242, 2001.
[17] Jock A Blackard and Denis J Dean. Comparative accuracies of neural networks and discriminant analysis in predicting forest cover types from cartographic variables. In Proc. Second
Southern Forestry GIS Conf, pages 189?199, 1998.
[18] Byron P Roe, Hai-Jun Yang, Ji Zhu, Yong Liu, Ion Stancu, and Gordon McGregor. Boosted
decision trees as an alternative to artificial neural networks for particle identification. Nuclear
Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 543(2):577?584, 2005.
[19] M. Lichman. UCI machine learning repository, 2013.
[20] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal
Statistical Society. Series B (Methodological), pages 267?288, 1996.
5,509 | 5,987 | Fast Lifted MAP Inference via Partitioning
Somdeb Sarkhel
The University of Texas at Dallas
Parag Singla
I.I.T. Delhi
Vibhav Gogate
The University of Texas at Dallas
Abstract
Recently, there has been growing interest in lifting MAP inference algorithms for
Markov logic networks (MLNs). A key advantage of these lifted algorithms is that
they have much smaller computational complexity than propositional algorithms
when symmetries are present in the MLN and these symmetries can be detected
using lifted inference rules. Unfortunately, lifted inference rules are sound but
not complete and can often miss many symmetries. This is problematic because
when symmetries cannot be exploited, lifted inference algorithms ground the MLN,
and search for solutions in the much larger propositional space. In this paper, we
present a novel approach, which cleverly introduces new symmetries at the time of
grounding. Our main idea is to partition the ground atoms and force the inference
algorithm to treat all atoms in each part as indistinguishable. We show that by
systematically and carefully refining (and growing) the partitions, we can build
advanced any-time and any-space MAP inference algorithms. Our experiments
on several real-world datasets clearly show that our new algorithm is superior to
previous approaches and often finds useful symmetries in the search space that
existing lifted inference rules are unable to detect.
Markov logic networks (MLNs) [5] allow application designers to compactly represent and reason
about relational and probabilistic knowledge in a large number of application domains including
computer vision and natural language understanding using a few weighted first-order logic formulas.
These formulas act as templates for generating large Markov networks ? the undirected probabilistic
graphical model. A key reasoning task over MLNs is maximum a posteriori (MAP) inference, which
is defined as the task of finding an assignment of values to all random variables in the Markov network
that has the maximum probability. This task can be solved using propositional (graphical model)
inference techniques. Unfortunately, these techniques are often impractical because the Markov
networks can be quite large, having millions of variables and features.
Recently, there has been growing interest in developing lifted inference algorithms [4, 6, 17, 22]
for solving the MAP inference task [1, 2, 3, 7, 13, 14, 16, 18, 19]. These algorithms work, as much
as possible, on the much smaller first-order specification, grounding or propositionalizing only as
necessary and can yield significant complexity reductions in practice. At a high level, lifted algorithms
can be understood as algorithms that identify symmetries in the first-order specification using lifted
inference rules [9, 13, 19], and then use these symmetries to simultaneously infer over multiple
symmetric objects. Unfortunately, in a vast majority of cases, the inference rules are unable to identify
several useful symmetries (the rules are sound but not complete), either because the symmetries are
approximate or because the symmetries are domain-specific and do not belong to a known type. In
such cases, lifted inference algorithms partially ground some atoms in the MLN and search for a
solution in this much larger partially propositionalized space.
In this paper, we propose the following straight-forward yet principled approach for solving this
partial grounding problem [21, 23]: partition the ground atoms into groups and force the inference
algorithm to treat all atoms in each group as indistinguishable (symmetric). For example, consider
a first-order atom R(x) and assume that x can be instantiated to the following set of constants:
{1, 2, 3, 4, 5}. If the atom possesses the so-called non-shared or single-occurrence symmetry [13, 19],
then the lifted inference algorithm will search over only two assignments: all five groundings of R(x)
are either all true or all false, in order to find the MAP solution. When no identifiable symmetries
exist, the lifted algorithm will inefficiently search over all possible 32 truth assignments to the 5
1
ground atoms and will be equivalent in terms of (worst-case) complexity to a propositional algorithm.
In our approach, we would partition the domain, say as {{1, 3}, {2, 4, 5}}, and search over only
the following 4 assignments: all groundings in each part can be either all true or all false. Thus, if
we are lucky and the MAP solution is one of the 4 assignments, our approach will yield significant
reductions in complexity even though no identifiable symmetries exist in the problem.
Our approach is quite general and includes the fully lifted and fully propositional approaches as
special cases. For instance, setting the partition size k to 1 and n respectively where n is the number
of constants will yield exactly the same solution as the one output by the fully lifted and fully
propositional approach. Setting k to values other than 1 and n yields a family of inference schemes
that systematically explores the regime between these two extremes. Moreover, by controlling the
size k of each partition we can control the size of the ground theory, and thus the space and time
complexity of our algorithm.
We prove properties and improve upon our basic idea in several ways. First, we prove that our
proposed approach yields a consistent assignment that is a lower-bound on the MAP value. Second,
we show how to improve the lower bound and thus the quality of the MAP solution by systematically
refining the partitions. Third, we show how to further improve the complexity of our refinement
procedure by exploiting the exchangeability property of successive refinements. Specifically, we show
that the exchangeable refinements can be arranged on a lattice, which can then be searched via a
heuristic search procedure to yield an efficient any-time, any-space algorithm for MAP inference.
Finally, we demonstrate experimentally that our method is highly scalable and yields close to optimal
solutions in a fraction of the time as compared to existing approaches. In particular, our results show
that for even small values of k (k bounds the partition size), our algorithm yields close to optimal
MAP solutions, clearly demonstrating the power of our approach.
1 Notation And Background
Partition of a Set. A collection of sets C is a partition of a set X if and only if each set in C is nonempty, the sets are pairwise disjoint, and their union equals X. The sets in C are called the cells or parts of the partition. If two elements a, b of the set appear in the same cell of a partition Γ, we denote this by a ∼_Γ b. A partition Δ of a set X is a refinement of a partition Γ of X if every element of Δ is a subset of some element of Γ. Informally, this means that Δ is a further fragmentation of Γ. We say that Δ is finer than Γ (or Γ is coarser than Δ) and denote it as Δ ≺ Γ. We will also use the notation Δ ⪯ Γ to denote that either Δ is finer than Γ, or Δ is the same as Γ. For example, let Γ = {{1, 2}, {3}} be a partition of the set X = {1, 2, 3} containing the two cells {1, 2} and {3}, and let Δ = {{1}, {2}, {3}} be another partition of X; then Δ is a refinement of Γ, namely, Δ ≺ Γ.
First-order logic. We will use a strict subset of first-order logic that has no function symbols, equality constraints or existential quantifiers. Our subset consists of (1) constants, denoted by upper case letters (e.g., X, Y, etc.), which model objects in the domain; (2) logical variables, denoted by lower case letters (e.g., x, y, etc.), which can be substituted with objects; (3) logical operators such as ∨ (disjunction), ∧ (conjunction), ⇒ (implication) and ⇔ (equivalence); (4) universal (∀) and existential (∃) quantifiers; and (5) predicates, which model properties of and relationships between objects. A predicate consists of a predicate symbol, denoted by typewriter fonts (e.g., Friends, R, etc.), followed by a parenthesized list of arguments. A term is a logical variable or a constant. A literal is a predicate or its negation. A formula in first-order logic is an atom (a predicate), or any complex sentence that can be constructed from atoms using logical operators and quantifiers. For example, ∀x Smokes(x) ⇒ Asthma(x) is a formula. A clause is a disjunction of literals. Throughout, we will assume that all formulas are clauses and their variables are standardized apart.
A ground atom is an atom containing only constants. A ground formula is a formula obtained by substituting all of its variables with constants, namely a formula containing only ground atoms. For example, the groundings of ¬Smokes(x) ∨ Asthma(x), where Δx = {Ana, Bob}, are the two propositional formulas ¬Smokes(Ana) ∨ Asthma(Ana) and ¬Smokes(Bob) ∨ Asthma(Bob).
Markov logic. A Markov logic network (MLN) is a set of weighted clauses in first-order logic. We will assume that all logical variables in all formulas are universally quantified (and therefore we will drop the quantifiers from all formulas), are typed and can be instantiated to a finite set of constants (for a variable x, this set will be denoted by Δx), and that there is a one-to-one mapping between the constants and objects in the domain (Herbrand interpretations). Note that the class of MLNs we are assuming is not restrictive at all, because almost all MLNs used in application domains such as natural language processing and the Web fall in this class. Given a finite set of constants, the MLN represents a (ground) Markov network that has one random variable for each ground atom in its Herbrand base and a weighted feature for each ground clause in the Herbrand base. The weight of each feature is the weight of the corresponding first-order clause. Given a world ω, which is a truth assignment to all the ground atoms, the Markov network represents the following probability distribution: $P(\omega) = Z^{-1}\exp\big(\sum_i w_i N(f_i, \omega)\big)$, where $(f_i, w_i)$ is a weighted first-order formula, $N(f_i, \omega)$ is the number of true groundings of $f_i$ in $\omega$ and $Z$ is the partition function.
For simplicity, we will assume that the MLN is in normal form, which is defined as an MLN that satisfies the following two properties: (i) there are no constants in any formula; and (ii) if two distinct atoms of predicate R have variables x and y in the same argument position of R, then Δx = Δy. Because of the second condition, in normal MLNs we can associate domains with each argument of a predicate. Let i_R denote the i-th argument of predicate R and let D(i_R) denote the number of elements in the domain of i_R. We will also assume that all domains are of the form {1, ..., D(i_R)}. Since domain sizes are finite, any domain can be converted to this form.
A common optimization inference task over MLNs is finding the most probable state of the world ω, that is, finding a complete assignment to all ground atoms which maximizes the probability. Formally,
$$\arg\max_\omega P_{\mathcal{M}}(\omega) = \arg\max_\omega \frac{1}{Z(\mathcal{M})}\exp\Big(\sum_i w_i N(f_i, \omega)\Big) = \arg\max_\omega \sum_i w_i N(f_i, \omega). \qquad (1)$$
From Eq. (1), we can see that the MAP problem reduces to finding a truth assignment that maximizes the sum of the weights of satisfied clauses. Therefore, any weighted satisfiability solver, such as MaxWalkSAT [20], can be used to solve it. However, MaxWalkSAT is a propositional solver and is unable to exploit symmetries in the first-order representation, and as a result can be quite inefficient.
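A minimal sketch of the propositional objective in Eq. (1): given ground clauses as (literals, weight) pairs, score a candidate truth assignment. The encoding of clauses and atoms below is assumed for illustration, not Alchemy's actual format.

def map_objective(ground_clauses, assignment):
    # ground_clauses: list of (literals, weight); each literal is (atom, sign),
    # where sign=True means the positive literal.  assignment: dict atom -> bool.
    total = 0.0
    for literals, w in ground_clauses:
        if any(assignment[atom] == sign for atom, sign in literals):
            total += w                       # clause satisfied; collect weight
    return total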
Alternatively, the MAP problem can be solved in a lifted manner by leveraging various lifted inference
rules such as the decomposer, the binomial rule [6, 9, 22] and the recently proposed single occurrence
rule [13, 19]. A schematic of such a procedure is given in Algorithm 1. Before presenting the
algorithm, we will describe some required definitions. Let iR denote the i-th argument of predicate R.
Given an MLN, two arguments iR and jS of its predicates R and S respectively are called unifiable
if they share a logical variable in an MLN formula. Being symmetric and transitive, the unifiable
relation splits the arguments of all the predicates into a set of domain equivalence classes.
Example 1. Consider a normal MLN M having two weighted formulas (R(x) ∨ S(x, y), w1) and (R(z) ∨ T(z), w2). Here, we have two sets of domain equivalence classes {1_R, 1_S, 1_T} and {2_S}.
Algorithm 1 LMAP(MLN M)
    // base case
    if M is empty return 0
    Simplify(M)
    // Propositional decomposition
    if M has disjoint MLNs M1, ..., Mk then
        return Σ_{i=1}^{k} LMAP(Mi)
    // Lifted decomposition
    if M has a liftable domain equivalence class U then
        return LMAP(M|U)
    // Lifted conditioning
    if M has a singleton atom A then
        return max_{i=0}^{D(1_A)} LMAP(M|(A, i)) + w(A, i)
    // Partial grounding
    Heuristically select a domain equivalence class U and ground it, yielding a new MLN M'
    return LMAP(M')

Algorithm 1 has five recursive steps and returns the optimal MAP value. The first two lines are the base case and the simplification step, in which the MLN is simplified by deleting redundant formulas, rewriting predicates by removing constants (so that lifted conditioning can be applied) and assigning values to ground atoms whose values can be inferred from the assignments made so far. The second step is the propositional decomposition step, in which the algorithm recurses over disjoint MLNs (if any) and returns the sum of their MAP values. In the lifted decomposition step, the algorithm finds a domain equivalence class U such that in the MAP solution all ground atoms of the predicates that have elements of U as arguments are either all true or all false. To find such a class, the rules given in [9, 13, 19] can be used. In the algorithm, M|U denotes the MLN obtained by setting the domain of all elements of U to 1 and updating the formula weights accordingly. In the lifted conditioning step, if there is an atom having just one argument (a singleton atom), then the algorithm partitions the possible truth assignments to the groundings of A such that, within each part, all truth assignments have the same number of true atoms. In the algorithm, M|(A, i) denotes the MLN obtained by setting i groundings of A to true and the remaining to false, and w(A, i) is the total weight of the ground formulas satisfied by the assignment.
The final step in LMAP is the partial grounding step and is executed only when the algorithm is unable to apply lifted inference rules. In this step, the algorithm heuristically selects a domain equivalence class U and grounds it completely. For example:
Example 2. Consider an MLN with two formulas: (R(x, y) ∨ S(y, z), w1) and (S(a, b) ∨ T(a, c), w2). Let D(2_R) = 2. After grounding the equivalence class {2_R, 1_S, 1_T}, we get an MLN having four formulas: (R(x1, 1) ∨ S(1, z1), w1), (R(x2, 2) ∨ S(2, z2), w1), (S(1, b1) ∨ T(1, c1), w2) and (S(2, b2) ∨ T(2, c2), w2).^1
Scaling up the Partial Grounding Step using Set Partitioning
Partial grounding often yields a much bigAlgorithm 2 Constrained-Ground
ger MLN than the original MLN and is the
(MLN M , Size k and domain equivalence class U )
chief reason for the inefficiency and poor
M0 = M
scalability of Algorithm LMAP. To address
Create a partition ? of size k of ?iR where iR ? U
this problem, we propose a novel approach
foreach predicate R such that ? iR ? U do
to speed up inference by adding additional
foreach cell ?j of ? do
constraints to the existing lifted MAP forAdd all possible hard formulas of the form
mulation. Our idea is as follows: reduce the
R(x1 , . . . , xr ) ? R(y1 , . . . , yr )
number of ground atoms by partitioning them
such that xi = yi if iR ?
/ U and
and treating all atoms in each part as indistinx
=
X
,
y
=
X
if
i
? U where Xa , Xb ? ?j .
i
a
i
R
b
guishable. Thus, instead of introducing O(tn)
0
return
M
new ground atoms where t is the cardinality
of the domain equivalence class and n is the number of constants, our approach will only introduce
O(tk) ground atoms where k << n.
Our new, approximate partial grounding method (which will replace the partial grounding step in Algorithm 1) is formally described in Algorithm 2. The algorithm takes as input an MLN M, an integer k > 0 and a domain equivalence class U, and outputs a new MLN M'. The algorithm first partitions the domain of the class U into k cells, yielding a partition γ. Then, for each cell γ_j of γ and each predicate R such that one or more of its arguments is in U, the algorithm adds all possible constraints of the form R(x1, ..., xr) ⇔ R(y1, ..., yr) such that for each i: (1) we add the equality constraint between the logical variables x_i and y_i if the i-th argument of the predicate is not in U; and (2) we set x_i = X_a and y_i = X_b if the i-th argument of R is in U, where X_a, X_b ∈ γ_j. Since adding constraints restricts the feasible solutions of the optimization problem, it is easy to show that:
Proposition 1. Let M' = Constrained-Ground(M, k), where M is an MLN and k > 0 is an integer, be the MLN used in the partial grounding step of Algorithm 1 (instead of the partial grounding step described in the algorithm). Then, the MAP value returned by the modified algorithm will be smaller than or equal to the one returned by Algorithm 1.
The following example demonstrates how Algorithm 2 constructs a new MLN.
Example 3. Consider the MLN in Example 2. Let {{1, ..., D(2_R)}} (i.e., {{1, 2}}) be a 1-partition of the domain of U. Then, after applying Algorithm 2, the new MLN will have the following three hard formulas in addition to the formulas given in Example 2: (1) R(x3, 1) ⇔ R(x3, 2), (2) S(1, x4) ⇔ S(2, x4) and (3) T(1, x5) ⇔ T(2, x5).
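The constraints of Example 3 can be enumerated mechanically: for each cell and each pair of constants in it, one equivalence constraint per argument position in U. A schematic Python sketch (the triple representation is ours, not the system's internal one):

from itertools import combinations

def equivalence_constraints(cells, u_positions):
    # cells: the partition of the domain of U, e.g., [{1, 2}].
    # u_positions: argument positions (predicate, index) belonging to U.
    constraints = []
    for cell in cells:
        for Xa, Xb in combinations(sorted(cell), 2):
            for pos in u_positions:
                constraints.append((pos, Xa, Xb))  # R(..., Xa, ...) <-> R(..., Xb, ...)
    return constraints

# Example 3: equivalence_constraints([{1, 2}], [('R', 2), ('S', 1), ('T', 1)])
# yields the three hard formulas listed above.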
Although adding constraints reduces the search space of the MAP problem, Algorithm 2 still needs to ground the MLN, which can be time consuming. Alternatively, we can group indistinguishable atoms together without grounding the MLN, using the following definition:
Definition 1. Let U be a domain equivalence class and let γ be its partition. Two ground atoms R(x1, ..., xr) and R(y1, ..., yr) of a predicate R such that ∃ i_R ∈ U are equivalent if x_i = y_i when i_R ∉ U, and x_i = X_a, y_i = X_b when i_R ∈ U, where X_a, X_b ∈ γ_j. We denote this by R(x1, ..., xr) ∼_γ R(y1, ..., yr).
Notice that the relation ∼_γ is symmetric and reflexive. Thus, we can group all the ground atoms corresponding to the transitive closure of this relation, yielding a "meta ground atom" such that if the meta atom is assigned true (false), all the ground atoms in the transitive closure will be true (false).
^1 The constants can be removed by renaming the predicates, yielding a normal MLN. For example, we can rename R(x1, 1) as R1(x1). This renaming occurs in the simplification step.
This yields the partition-ground algorithm described as Algorithm 3. The algorithm starts by creating a k-partition of the domain of U. It then updates the domain of U so that it only contains k values, grounds all arguments of predicates that are in the set U and updates the formula weights appropriately. The formula weights should be updated because, when the domain is compressed, several ground formulas are replaced by just one ground formula. Intuitively, if t (partially) ground formulas having weight w are replaced by one (partially) ground formula (f, w'), then w' should be equal to wt. The two for loops in Algorithm 3 accomplish this. We can show that:
Proposition 2. The MAP value output by replacing the partial grounding step in Algorithm 1 with Algorithm Partition-Ground is the same as the one output by replacing the partial grounding step in Algorithm 1 with Algorithm Constrained-Ground.

Algorithm 3 Partition-Ground(MLN M, size k and domain equivalence class U)
    M' = M
    Create a partition γ of size k of Δi_R where i_R ∈ U
    Update the domain Δi_R to {1, ..., k} in M'
    Ground all predicates R such that i_R ∈ U
    foreach formula (f', w') in M' such that f' contains an atom of R where i_R ∈ U do
        Let f be the formula in M from which f' was derived
        foreach logical variable in f that was substituted by the j-th value in Δi_R to yield f' do
            w' = w' · |γ_j|, where γ_j is the j-th cell of γ
    return M'

The key advantage of using Algorithm Partition-Ground is that the lifted algorithm (LMAP) will have much smaller space complexity than the one using Algorithm Constrained-Ground. Specifically, unlike the latter, which yields O(n|U|) ground atoms (assuming each predicate has only one argument in U), where n is the number of constants in the domain of U, the former generates only O(k|U|) ground atoms, where k << n.
The following example illustrates how Algorithm Partition-Ground constructs a new MLN.
Example 4. Consider an MLN M with two formulas: (R(x, y) ∨ S(y, z), w1) and (S(a, b) ∨ T(a, c), w2). Let D(2_R) = 3 and γ = {{1, 2}, {3}} = {γ1, γ2}. After grounding 2_R with respect to γ, we get an MLN M' having four formulas: (R_γ1(x1) ∨ S_γ1(z1), 2w1), (R_γ2(x2) ∨ S_γ2(z2), w1), (S_γ1(b1) ∨ T_γ1(c1), 2w2) and (S_γ2(b2) ∨ T_γ2(c2), w2). The total weight of the groundings in M is 3w1·D(1_R)·D(2_S) + 3w2·D(2_T)·D(2_S), which is the same as in M'.
The following example illustrates how the algorithm constructs a new MLN in the presence of self-joins.
Example 5. Consider an MLN M with the single formula: ¬R(x, y) ∨ R(y, x), w. Let D(1_R) = D(2_R) = 3 and γ = {{1, 2}, {3}} = {γ1, γ2}. After grounding 1_R (and also 2_R, as they belong to the same domain equivalence class) with respect to γ, we get an MLN M' having the following four formulas: (¬R_γ1,γ1 ∨ R_γ1,γ1, 4w), (¬R_γ1,γ2 ∨ R_γ2,γ1, 2w), (¬R_γ2,γ1 ∨ R_γ1,γ2, 2w) and (¬R_γ2,γ2 ∨ R_γ2,γ2, w).
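The weight update in Algorithm 3 (and in Example 4) amounts to multiplying a formula's weight by the sizes of the cells substituted for its variables. A minimal sketch:

def partition_ground_weight(w, cell_sizes, substituted_cells):
    # cell_sizes[j] = |gamma_j|; substituted_cells lists the cell index chosen
    # for each logical variable that was ground with respect to gamma.
    for j in substituted_cells:
        w *= cell_sizes[j]
    return w

# Example 4 with gamma = {{1, 2}, {3}}: cell_sizes = [2, 1].  The formula copy
# that substitutes cell 0 for y gets weight partition_ground_weight(w1, [2, 1], [0])
# = 2*w1, while the copy using cell 1 keeps weight w1.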
2.1 Generalizing the Partition Grounding Approach
Algorithm Partition-Ground allows us to group the equivalent atoms with respect to a partition and has much smaller space and time complexity than the partial grounding strategy described in Algorithm 1. However, it yields a lower bound on the MAP value. In this section, we show how to improve the lower bound using refinements of the partition. The basis of our generalization is the following theorem:
Theorem 1. Given two partitions Δ and Γ of U such that Δ ⪯ Γ, the MAP value of the partially ground MLN with respect to Γ is less than or equal to the MAP value of the partially ground MLN with respect to Δ.
Proof. Sketch: Since the partition Δ is a finer refinement of Γ, any candidate MAP assignment corresponding to the MLN obtained via Γ is already included among the candidate assignments corresponding to the MLN obtained via Δ, and since the MAP values of both of these MLNs are lower bounds of the original MAP value, the theorem follows.
We can use Theorem 1 to devise a new any-time MAP algorithm which refines the partitions to get a
better estimate of MAP values. Our approach is presented in Algorithm 4.
The algorithm begins by identifying all non-liftable domains, namely domains U_i that will be partially grounded during the execution of Algorithm 1, and associating a 1-partition γ_i with each domain. Then, until there is a timeout, it iterates through the following two steps. First, it runs the LMAP algorithm, which uses the pair (U_i, γ_i) in Algorithm Partition-Ground during the i-th partial grounding step, yielding a MAP solution ω. Second, it heuristically selects a partition γ_j and refines it. From Theorem 1, it is clear that as the number of iterations increases, the MAP solution will either improve or remain the same. Thus, Algorithm Refine-MAP is an anytime algorithm.
Alternatively, we can also devise an any-space algorithm using the following idea. We will first determine k, the maximum size of a partition that we can fit in memory. As different partitions of size k will give us different MAP values, we can search through them to find the best possible MAP solution.

Algorithm 4 Refine-MAP(MLN M)
    Let U = {U_i} be the non-liftable domains
    Set γ_i = {Δj_R} where j_R ∈ U_i, for all U_i ∈ U
    ω = ∅
    while timeout has not occurred do
        ω = LMAP(M)
        /* LMAP uses the pair (U_i, γ_i) and Algorithm Partition-Ground for its i-th partial grounding step. */
        Heuristically select a partition γ_j and refine it
    return ω

A drawback of the any-space approach is that it explores a prohibitively large search space. In particular, the number of possible partitions of size k of a set of size n (denoted by $\left\{{n \atop k}\right\}$) is given by the Stirling numbers of the second kind, which grow exponentially with n. (The total number of partitions of a set is given by the Bell number, $B_n = \sum_{k=1}^{n} \left\{{n \atop k}\right\}$.) Clearly, searching over all the possible partitions of size k is not practical. Luckily, we can exploit symmetries in the MLN representation to substantially reduce the number of partitions we have to consider, since many of them will give us the same MAP value. Formally,
Theorem 2. Given two k-partitions γ = {γ1, ..., γk} and λ = {λ1, ..., λk} of U such that |γ_i| = |λ_i| for all i, the MAP value of the partially ground MLN with respect to γ is equal to the MAP value of the partially ground MLN with respect to λ.
Proof. Sketch: A formula f, when ground on an argument i_R with respect to a partition γ, creates |γ| copies of the formula. Since |γ| = |λ| = k, grounding on i_R with respect to λ also creates the same number of formulas, which are identical up to a renaming of constants. Furthermore, since |γ_i| = |λ_i| (each of their parts has identical cardinality) and as the weight of a ground formula is determined by the cell sizes (see Algorithm Partition-Ground), the ground formulas obtained using γ and λ will have the same weights as well. As a result, the MLNs obtained by grounding any argument i_R with respect to γ and λ are indistinguishable (subject to renaming of variables and constants) and the proof follows.
From Theorem 2, it follows that the number of elements in the cells and the number of cells of a partition are sufficient to define a partially ground MLN with respect to that partition. Consecutive refinements of such partitions will thus yield a lattice, which we will refer to as the Exchangeable Partition Lattice. The term "exchangeable" refers to the fact that two partitions containing the same number of elements in their cells and the same number of cells are exchangeable with each other (in terms of MAP solution quality). Figure 1 shows the Exchangeable Partition Lattice corresponding to the domain {1, 2, 3, 4}. If we do not use exchangeability, the number of partitions in the lattice would have been $B_4 = \left\{{4 \atop 1}\right\} + \left\{{4 \atop 2}\right\} + \left\{{4 \atop 3}\right\} + \left\{{4 \atop 4}\right\} = 1 + 7 + 6 + 1 = 15$. On the other hand, the lattice has 5 elements.
[Figure 1: Exchangeable Partition Lattice corresponding to the domain {1, 2, 3, 4}: {{1}, {2}, {3}, {4}} refined down through {{1}, {2}, {3, 4}}, then {{1}, {2, 3, 4}} and {{1, 2}, {3, 4}}, to {{1, 2, 3, 4}}.]
B4 = 41 + 42 + 43 + 44 = 1 + 7 + 6 + 1 = 15. On the other hand, the lattice has 5 elements.
Different traversal strategies of this exchangeable partition lattice will give rise to different lifted
MAP algorithms. For example, a greedy depth-first traversal of the lattice yields Algorithm 4. We can
also explore the lattice using systematic depth-limited search and return the maximum solution found
for a particular depth limit d. This yields an improved version of our any-space approach described
earlier. We can even combine the two strategies by traversing the lattice in some heuristic order. For
our experiments, we use greedy depth-limited search, because full depth-limited search was very
expensive. Note that although our algorithm assumes normal MLNs, which are pre-shattered, we can
easily extend it to use shattering as needed [10]. Moreover by clustering evidence atoms together
[21, 23] we can further reduce the size of the shattered theory [4].
3 Experiments
We implemented our algorithm on top of the lifted MAP algorithm of Sarkhel et al. [18], which
reduces lifted MAP inference to an integer polynomial program (IPP). We will call our algorithm
P-IPP (which stands for partition-based IPP). We performed two sets of experiments. The first set
measures the impact of increasing the partition size k on the quality of the MAP solution output
by our algorithm. The second set compares the performance and scalability of our algorithm with
several algorithms from literature. All of our experiments were run on a third generation i7 quad-core
machine having 8GB RAM.
We used the following five MLNs in our experimental study: (1) an MLN which we call Equivalence, consisting of the following three formulas: Equals(x,x), Equals(x,y) ⇒ Equals(y,x), and Equals(x,y) ∧ Equals(y,z) ⇒ Equals(x,z); (2) the Student MLN from [18, 19], consisting of four formulas and three predicates; (3) the Relationship MLN from [18], consisting of four formulas and three predicates; (4) the WebKB MLN [11] from the Alchemy web page, consisting of three predicates and seven formulas; and (5) the Citation Information-Extraction (IE) MLN from the Alchemy web page [11], consisting of five predicates and fourteen formulas.
We compared the solution quality and scalability of our approach with the following algorithms and systems: Alchemy (ALY) [11], Tuffy (TUFFY) [15], ground inference based on integer linear programming (ILP), and the IPP algorithm of Sarkhel et al. [18]. Alchemy and Tuffy are two state-of-the-art open-source software packages for learning and inference in MLNs. Both of them ground the MLN and then use an approximate solver, MaxWalkSAT [20], to compute the MAP solution. Unlike Alchemy, Tuffy uses clever database tricks to speed up computation and in principle can be much more scalable than Alchemy. ILP is obtained by converting the MAP problem over the ground Markov network to an integer linear program. We ran each algorithm on the aforementioned MLNs for varying time bounds and recorded the solution quality, which is measured using the total weight of the false clauses in the (approximate) MAP solution, also referred to as the cost. The smaller the cost, the better the MAP solution. For a fair comparison, we used a parallelized integer linear programming solver called Gurobi [8] to solve the integer linear programs generated by our algorithm as well as by the other competing algorithms.
Figure 2 shows our experimental results. Note that if the curve for an algorithm is not present in a plot,
then it means that the corresponding algorithm ran out of either memory or time on the MLN and did
not output any solution. We observe that Tuffy and Alchemy are the worst performing systems both in
terms of solution quality and scalability. ILP scales slightly better than Tuffy and Alchemy. However,
it is unable to handle MLNs having more than 30K clauses. We can see that our new algorithm P-IPP,
run as an anytime scheme, by refining partitions, not only finds higher quality MAP solutions but also
scales better in terms of time complexity than IPP. In particular, IPP could not scale to the equivalence
MLN having roughly 1 million ground clauses and the relation MLN having roughly 125.8M ground
clauses. The reason is that these MLNs have self-joins (same predicate appearing multiple times in
a formula), which IPP is unable to lift. On the other hand, our new approach is able to find useful
approximate symmetries in these hard MLNs.
To measure the impact of varying the partition size on the MAP solution quality, we conducted the following experiment. We first ran the IPP algorithm until completion to compute the optimum MAP value. Then, we ran our algorithm multiple times, until completion as well, and recorded the solution quality achieved in each run for different partition sizes. Figure 3 plots the average cost across the various runs as a function of k (the error bars show the standard deviation). For brevity, we only show results for the IE and Equivalence MLNs. The optimum solutions for the three MLNs were found in (a) 20 minutes, (b) 6 hours and (c) 8 hours, respectively. On the other hand, our new approach P-IPP yields close to optimal solutions in a fraction of the time, and for relatively small values of k (approximately 5-10).
4 Summary and Future Work
Lifted inference techniques have gained popularity in recent years, and have quickly become the
approach of choice to scale up inference in MLNs. A pressing issue with existing lifted inference
technology is that most algorithms only exploit exact, identifiable symmetries and resort to grounding
or propositional inference when such symmetries are not present. This is problematic because
grounding can blow up the search space. In this paper, we proposed a principled, approximate
approach to solve this grounding problem. The main idea in our approach is to partition the ground
atoms into a small number of groups and then treat all ground atoms in a group as indistinguishable
7
[Figure 2: Cost vs Time: cost of unsatisfied clauses (smaller is better) vs time for different domain sizes, comparing TUFFY, ALY, P-IPP, IPP and ILP. Panels: (a) IE(3.2K, 1M), (b) IE(380K, 15.6B), (c) IE(3.02M, 302B), (d) Equivalence(100, 1.2K), (e) Equivalence(900, 28.8K), (f) Equivalence(10K, 1.02M), (g) WebKB(3.2K, 1M), (h) Student(3M, 1T), (i) Relation(750K, 125.8M). Notation used to label each panel: MLN(numvariables, numclauses). Note: the quantities reported are for the ground Markov network associated with the MLN. Standard deviation is plotted as error bars.]
[Figure 3: Cost vs Partition Size: cost of the P-IPP solution as a function of the partition size k, compared against the optimum. Panels: (a) IE(3.2K, 1M), (b) IE(82.8K, 731.6M), (c) Equivalence(100, 1.2K). Notation used to label each panel: MLN(numvariables, numclauses).]
This simple idea introduces new, approximate symmetries which can help speed up the inference process. Although our proposed approach is inherently approximate, we proved that it has nice theoretical properties in that it is guaranteed to yield a consistent assignment that is a lower bound on the MAP value. We further described an any-time algorithm which can improve this lower bound through systematic refinement of the partitions. Finally, based on the exchangeability property of the refined partitions, we demonstrated a method for organizing the partitions in a lattice structure which can be traversed heuristically to yield efficient any-time as well as any-space lifted MAP inference algorithms. Our experiments on a wide variety of benchmark MLNs clearly demonstrate the power of our new approach. Future work includes connecting this work to the work on the Sherali-Adams hierarchy [2]; deriving a variational principle for our method [14]; and developing novel branch and bound [12] as well as weight learning algorithms based on our partitioning approach.
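To make the core idea concrete, here is a minimal sketch of partitioning ground atoms and restricting MAP search to one shared truth value per group, so the search runs over 2^k candidate assignments instead of 2^|atoms| and the best value found lower-bounds the true MAP value. The toy MLN, the scoring function, and the round-robin grouping rule are hypothetical illustrations, not the partitioning heuristic used in the paper.

```python
import itertools

def partition_atoms(atoms, k):
    """Assign ground atoms to k groups round-robin (an arbitrary, illustrative rule)."""
    groups = [[] for _ in range(k)]
    for i, atom in enumerate(atoms):
        groups[i % k].append(atom)
    return groups

def approx_map(atoms, score, k):
    """Search over one shared truth value per group: 2^k candidate assignments.

    `score` maps a full {atom: bool} assignment to the total weight of
    satisfied clauses; the best value found lower-bounds the true MAP value.
    """
    groups = partition_atoms(atoms, k)
    best_val, best_assign = float("-inf"), None
    for bits in itertools.product([False, True], repeat=k):
        assign = {a: bits[g] for g, grp in enumerate(groups) for a in grp}
        val = score(assign)
        if val > best_val:
            best_val, best_assign = val, assign
    return best_val, best_assign

# Toy MLN with atoms S(a), S(b), F(a,b) and one soft clause
# "S(a) and F(a,b) => S(b)" of weight 1.5.
atoms = ["S(a)", "S(b)", "F(a,b)"]
score = lambda x: 1.5 * (not (x["S(a)"] and x["F(a,b)"]) or x["S(b)"])
print(approx_map(atoms, score, k=2))
```

Splitting a group strictly enlarges the set of representable assignments, which is the sense in which refining the partition can only improve the lower bound.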
Acknowledgments: This work was supported in part by the DARPA Probabilistic Programming for
Advanced Machine Learning Program under AFRL prime contract number FA8750-14-C-0005.
References
[1] U. Apsel and R. Braman. Exploiting uniform assignments in first-order MPE. In Proceedings of the
Twenty-Sixth AAAI Conference on Artificial Intelligence, pages 74–83, 2012.
[2] U. Apsel, K. Kersting, and M. Mladenov. Lifting Relational MAP-LPs Using Cluster Signatures. In
Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, 2014.
[3] H. Bui, T. Huynh, and S. Riedel. Automorphism groups of graphical models and lifted variational inference.
In Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, 2013.
[4] R. de Salvo Braz. Lifted First-Order Probabilistic Inference. PhD thesis, University of Illinois, Urbana-Champaign, IL, 2007.
[5] P. Domingos and D. Lowd. Markov Logic: An Interface Layer for Artificial Intelligence. Morgan &
Claypool, 2009.
[6] V. Gogate and P. Domingos. Probabilistic Theorem Proving. In Proceedings of the Twenty-Seventh
Conference on Uncertainty in Artificial Intelligence, pages 256–265. AUAI Press, 2011.
[7] F. Hadiji and K. Kersting. Reduce and Re-Lift: Bootstrapped Lifted Likelihood Maximization for MAP. In
Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, 2013.
[8] Gurobi Optimization Inc. Gurobi Optimizer Reference Manual, 2014.
[9] A. Jha, V. Gogate, A. Meliou, and D. Suciu. Lifted Inference from the Other Side: The tractable Features.
In Proceedings of the 24th Annual Conference on Neural Information Processing Systems, 2010.
[10] J. Kisynski and D. Poole. Constraint Processing in Lifted Probabilistic Inference. In Proceedings of the
Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 293–302, 2009.
[11] S. Kok, M. Sumner, M. Richardson, P. Singla, H. Poon, D. Lowd, J. Wang, and P. Domingos. The Alchemy
System for Statistical Relational AI. Technical report, Department of Computer Science and Engineering,
University of Washington, Seattle, WA, 2008. http://alchemy.cs.washington.edu.
[12] R. Marinescu and R. Dechter. AND/OR Branch-and-Bound Search for Combinatorial Optimization in
Graphical Models. Artificial Intelligence, 173(16-17):1457–1491, 2009.
[13] H. Mittal, P. Goyal, V. Gogate, and P. Singla. New Rules for Domain Independent Lifted MAP Inference.
In Advances in Neural Information Processing Systems, 2014.
[14] M. Mladenov, A. Globerson, and K. Kersting. Efficient Lifting of MAP LP Relaxations Using k-Locality.
Proceedings of the 17th International Conference on Artificial Intelligence and Statistics, 2014.
[15] F. Niu, C. Ré, A. Doan, and J. Shavlik. Tuffy: Scaling up Statistical Inference in Markov Logic Networks
Using an RDBMS. Proceedings of the VLDB Endowment, 2011.
[16] J. Noessner, M. Niepert, and H. Stuckenschmidt. RockIt: Exploiting Parallelism and Symmetry for MAP
Inference in Statistical Relational Models. In Proceedings of the Twenty-Seventh AAAI Conference on
Artificial Intelligence, 2013.
[17] D. Poole. First-Order Probabilistic Inference. In Proceedings of the Eighteenth International Joint
Conference on Artificial Intelligence, pages 985–991, Acapulco, Mexico, 2003. Morgan Kaufmann.
[18] S. Sarkhel, D. Venugopal, P. Singla, and V. Gogate. An Integer Polynomial Programming Based Framework
for Lifted MAP Inference. In Advances in Neural Information Processing Systems, 2014.
[19] S. Sarkhel, D. Venugopal, P. Singla, and V. Gogate. Lifted MAP inference for Markov Logic Networks.
Proceedings of the 17th International Conference on Artificial Intelligence and Statistics, 2014.
[20] B. Selman, H. Kautz, and B. Cohen. Local Search Strategies for Satisfiability Testing. In Cliques, Coloring,
and Satisfiability: Second DIMACS Implementation Challenge. 1996.
[21] G. Van den Broeck and A. Darwiche. On the Complexity and Approximation of Binary Evidence in Lifted
Inference. In Advances in Neural Information Processing Systems, 2013.
[22] G. Van den Broeck, N. Taghipour, W. Meert, J. Davis, and L. De Raedt. Lifted Probabilistic Inference by
First-Order Knowledge Compilation. In Proceedings of the Twenty Second International Joint Conference
on Artificial Intelligence, pages 2178–2185, 2011.
[23] D. Venugopal and V. Gogate. Evidence-based Clustering for Scalable Inference in Markov Logic. In
Machine Learning and Knowledge Discovery in Databases. 2014.
| 5987 |@word version:1 polynomial:2 open:1 heuristically:4 d2:1 closure:2 vldb:1 bn:1 decomposition:4 reduction:2 inefficiency:1 contains:2 sherali:1 bootstrapped:1 fa8750:1 existing:4 z2:2 yet:1 assigning:1 dechter:1 refines:2 partition:67 drop:1 treating:1 update:3 plot:2 v:3 greedy:2 intelligence:13 yr:4 braz:1 mln:58 accordingly:1 core:1 iterates:1 successive:1 five:4 constructed:1 c2:2 become:1 prove:2 consists:3 combine:1 introduce:1 darwiche:1 manner:1 pairwise:1 roughly:2 growing:3 alchemy:10 curse:1 quad:1 solver:4 cardinality:2 increasing:1 begin:1 webkb:2 moreover:2 notation:4 maximizes:2 kind:1 substantially:1 finding:4 impractical:1 decomposer:1 every:1 act:1 auai:1 exactly:1 prohibitively:1 demonstrates:1 partitioning:4 control:1 exchangeable:6 appear:1 before:1 understood:1 engineering:1 treat:3 dallas:2 limit:1 local:1 niu:1 quantified:1 equivalence:21 limited:3 lmap:11 propositionalizing:1 lowerbound:1 practical:1 acknowledgment:1 globerson:1 testing:1 practice:1 union:1 recursive:1 goyal:1 x3:2 xr:4 procedure:3 universal:1 lucky:1 bell:1 pre:1 refers:1 renaming:4 get:4 cannot:1 close:3 clever:1 operator:3 applying:1 equivalent:3 map:57 demonstrated:1 eighteenth:1 sumner:1 simplicity:1 identifying:1 rule:12 parti:1 deriving:1 proving:1 searching:1 handle:1 updated:1 stuckenschmidt:1 controlling:1 hierarchy:1 exact:1 programming:5 us:3 domingo:3 associate:1 element:9 trick:1 expensive:1 updating:1 coarser:1 database:2 solved:2 wang:1 worst:2 automorphism:1 removed:1 ran:4 principled:2 meert:1 complexity:11 ui:6 traversal:2 signature:1 solving:2 upon:1 creates:2 completely:1 basis:1 compactly:1 easily:1 darpa:1 joint:2 various:2 instantiated:2 fast:1 distinct:1 describe:1 detected:1 artificial:13 lift:2 mladenov:2 refined:1 disjunction:2 quite:3 heuristic:2 larger:2 solve:3 whose:1 say:2 numvariables:2 compressed:1 statistic:2 richardson:1 final:1 timeout:2 advantage:2 pressing:1 propose:2 unifiable:2 loop:1 organizing:1 poon:1 scalability:4 exploiting:3 seattle:1 empty:1 optimum:5 r1:1 cluster:1 generating:1 adam:1 object:5 tk:1 tions:1 friend:1 completion:2 help:1 urbanachampaign:1 measured:1 eq:1 implemented:1 c:1 mulation:1 drawback:1 luckily:1 rdbms:1 ana:3 exchange:1 parag:1 generalization:1 proposition:2 probable:1 acapulco:1 traversed:1 ground:64 normal:5 exp:2 claypool:1 mapping:1 substituting:1 m0:2 optimizer:1 consecutive:1 mlns:22 label:2 combinatorial:1 singla:5 mittal:1 create:1 weighted:6 noessner:1 clearly:4 sarkhel:5 modified:1 lifted:40 exchangeability:3 varying:2 kersting:3 conjunction:1 derived:1 refining:3 likelihood:1 detect:1 posteriori:1 inference:45 marinescu:1 shattered:2 relation:5 selects:2 arg:3 aforementioned:1 issue:1 denoted:5 stateof:1 constrained:2 special:1 art:1 equal:11 construct:3 having:11 maxwalksat:3 atom:39 extraction:1 x4:2 represents:2 identical:2 shattering:1 washington:2 future:2 report:1 few:1 simultaneously:1 replaced:2 consisting:4 negation:1 interest:2 highly:1 numclauses:2 introduces:2 extreme:1 yielding:5 suciu:1 compilation:1 xb:5 implication:1 partial:14 necessary:1 lution:1 typewriter:1 traversing:1 re:1 plotted:1 theoretical:1 mk:1 instance:1 increased:1 earlier:1 stirling:1 assignment:17 lattice:12 maximization:1 cost:18 introducing:1 reflexive:1 subset:3 deviation:2 raedt:1 uniform:1 predicate:26 conducted:1 seventh:3 reported:1 accomplish:1 broeck:2 explores:2 international:4 ie:7 probabilistic:8 systematic:2 contract:1 meliou:1 together:2 quickly:1 connecting:1 w1:8 thesis:1 aaai:4 satisfied:2 recorded:2 
containing:4 literal:2 creating:1 resort:1 inefficient:1 return:10 converted:1 singleton:2 blow:1 de:2 b2:2 student:2 includes:3 jha:1 inc:1 tion:1 performed:1 mpe:1 start:1 kautz:1 il:1 ir:22 kaufmann:1 yield:19 identify:2 finer:3 straight:1 bob:3 manual:1 definition:3 sixth:1 typed:1 proof:3 associated:1 proved:1 logical:8 knowledge:3 anytime:2 satisfiability:3 liftable:3 carefully:1 coloring:1 afrl:1 higher:1 improved:1 arranged:1 though:1 niepert:1 furthermore:1 just:2 xa:5 asthma:4 until:3 sketch:2 hand:3 web:3 replacing:2 smoke:4 quality:9 lowd:2 vibhav:1 grows:1 grounding:33 true:8 former:1 equality:2 assigned:1 symmetric:4 indistinguishable:5 x5:2 self:2 during:2 huynh:1 davis:1 tuffy:10 dimacs:1 presenting:1 complete:3 demonstrate:2 tn:1 interface:1 reasoning:1 kisynski:1 variational:2 novel:3 recently:3 fi:6 superior:1 common:1 clause:11 fourteen:1 conditioning:3 foreach:4 b4:1 million:2 belong:2 interpretation:1 m1:1 occurred:1 extend:1 significant:2 refer:1 ai:1 pm:1 illinois:1 language:2 specification:2 etc:3 base:4 add:2 j:1 recent:1 apart:1 prime:1 meta:2 binary:1 yi:5 exploited:1 devise:2 morgan:2 additional:1 converting:1 parallelized:1 determine:1 ii:1 branch:2 multiple:3 sound:2 full:1 infer:1 reduces:3 ing:1 technical:1 schematic:1 impact:2 scalable:3 basic:1 vision:1 iteration:1 represent:1 grounded:1 achieved:1 cell:12 c1:2 background:1 addition:1 source:1 appropriately:1 w2:8 unlike:2 posse:1 strict:1 subject:1 undirected:1 leveraging:1 integer:8 call:2 presence:1 split:1 easy:1 variety:1 fit:1 associating:1 competing:1 reduce:4 idea:6 texas:2 i7:1 gb:1 returned:2 useful:3 clear:1 informally:1 kok:1 http:1 exist:2 restricts:1 problematic:2 notice:1 taghipour:1 designer:1 disjoint:3 popularity:1 herbrand:3 group:8 key:3 four:5 demonstrating:1 rewriting:1 vast:1 ram:1 relaxation:1 fraction:2 sum:2 year:1 run:5 package:1 letter:2 uncertainty:3 family:1 throughout:1 almost:1 scaling:2 bound:10 layer:1 followed:1 simplification:2 guaranteed:1 refine:3 identifiable:3 annual:1 constraint:7 riedel:1 constrain:1 x2:2 software:1 generates:1 speed:3 argument:15 performing:1 relatively:1 department:1 developing:2 poor:1 cohen:1 cleverly:1 smaller:7 remain:1 jr:2 slightly:1 across:1 wi:4 lp:2 intuitively:1 quantifier:4 den:2 nonempty:1 ilp:6 needed:1 hadiji:1 tractable:1 apply:1 observe:1 upto:1 occurrence:2 appearing:1 inefficiently:1 original:2 standardized:1 binomial:1 denotes:2 remaining:1 assumes:1 graphical:4 clustering:2 top:1 exploit:3 restrictive:1 build:1 already:1 quantity:1 occurs:1 font:1 strategy:4 unable:6 majority:1 w0:5 seven:1 reason:3 assuming:2 relationship:2 gogate:7 mexico:1 unfortunately:3 executed:1 rise:1 implementation:1 twenty:8 upper:1 markov:15 datasets:1 benchmark:1 finite:3 relational:4 y1:4 ninth:1 aly:4 inferred:1 propositional:11 namely:3 required:1 pair:2 gurobi:3 sentence:1 z1:2 delhi:1 hour:2 salvo:1 address:1 able:2 bar:2 poole:2 parallelism:1 eighth:1 regime:1 challenge:1 program:3 including:1 max:3 memory:2 deleting:1 power:2 rockit:1 natural:2 force:2 advanced:2 scheme:2 improve:6 technology:1 transitive:3 existential:2 nice:1 understanding:1 literature:1 discovery:1 unsatisfied:1 fully:4 generation:1 ger:1 sufficient:1 consistent:2 doan:1 principle:2 systematically:3 share:1 endowment:1 summary:1 supported:1 copy:1 apsel:2 allow:1 side:1 shavlik:1 fall:1 template:1 wide:1 fifth:1 van:2 curve:1 depth:5 world:3 stand:1 forward:1 collection:1 refinement:9 universally:1 simplified:1 made:1 ipp:28 far:1 selman:1 approximate:8 citation:1 bui:1 
logic:13 clique:1 b1:2 consuming:1 xi:5 alternatively:3 search:16 propositionalized:1 chief:1 parenthesized:1 inherently:1 symmetry:21 complex:1 domain:39 substituted:2 venugopal:3 did:1 pk:1 main:2 fair:1 x1:8 referred:1 join:2 candidate:2 third:2 formula:46 theorem:8 minute:1 specific:1 symbol:2 list:1 maxi:1 evidence:3 false:7 adding:3 gained:1 fragmentation:1 lifting:3 phd:1 execution:1 illustrates:2 nk:1 locality:1 generalizing:1 explore:1 partially:10 truth:5 satisfies:1 somdeb:1 shared:1 replace:1 feasible:1 experimentally:1 hard:3 specifically:2 determined:1 wt:1 miss:1 called:5 total:4 experimental:2 formally:3 select:2 rename:1 searched:1 latter:1 brevity:1 |
5,510 | 5,988 | Active Learning from Weak and Strong Labelers
Chicheng Zhang
UC San Diego
[email protected]
Kamalika Chaudhuri
UC San Diego
[email protected]
Abstract
An active learner is given a hypothesis class, a large set of unlabeled examples and the ability to interactively query labels to an oracle on a subset of these examples; the goal of the learner is to learn a hypothesis in the class that fits the data well by making as few label queries as possible.
This work addresses active learning with labels obtained from strong and weak labelers, where in addition to the standard active learning setting, we have an extra weak labeler which may occasionally provide incorrect labels. An example is learning to classify medical images where either expensive labels may be obtained from a physician (oracle or strong labeler), or cheaper but occasionally incorrect labels may be obtained from a medical resident (weak labeler). Our goal is to learn a classifier with low error on data labeled by the oracle, while using the weak labeler to reduce the number of label queries made to this oracle. We provide an active learning algorithm for this setting, establish its statistical consistency, and analyze its label complexity to characterize when it can provide label savings over using the strong labeler alone.
1 Introduction
An active learner is given a hypothesis class, a large set of unlabeled examples and the ability to interactively make label queries to an oracle on a subset of these examples; the goal of the learner is to learn a hypothesis in the class that fits the data well by making as few oracle queries as possible.
As labeling examples is a tedious task for any one person, many applications of active learning involve synthesizing labels from multiple experts who may have slightly different labeling patterns. While a body of recent empirical work [27, 28, 29, 25, 26, 11] has developed methods for combining labels from multiple experts, little is known on the theory of actively learning with labels from multiple annotators. For example, what kind of assumptions are needed for methods that use labels from multiple sources to work, when these methods are statistically consistent, and when they can yield benefits over plain active learning are all open questions.
This work addresses these questions in the context of active learning from strong and weak labelers. Specifically, in addition to unlabeled data and the usual labeling oracle in standard active learning, we have an extra weak labeler. The labeling oracle is a gold standard (an expert on the problem domain) and it provides high quality but expensive labels. The weak labeler is cheap, but may provide incorrect labels on some inputs. An example is learning to classify medical images where either expensive labels may be obtained from a physician (oracle), or cheaper but occasionally incorrect labels may be obtained from a medical resident (weak labeler). Our goal is to learn a classifier in a hypothesis class whose error with respect to the data labeled by the oracle is low, while exploiting the weak labeler to reduce the number of queries made to this oracle. Observe that in our model the weak labeler can be incorrect anywhere, and does not necessarily provide uniformly noisy labels everywhere, as was assumed by some previous works [7, 23].
A plausible approach in this framework is to learn a difference classifier to predict where the weak labeler differs from the oracle, and then use a standard active learning algorithm which queries the weak labeler when this difference classifier predicts agreement. Our first key observation is that this approach is statistically inconsistent; false negative errors (that predict no difference when O and W differ) lead to biased annotation for the target classification task. We address this problem by learning instead a cost-sensitive difference classifier that ensures that false negative errors rarely occur. Our second key observation is that as existing active learning algorithms usually query labels in localized regions of space, it is sufficient to train the difference classifier restricted to this region and still maintain consistency. This process leads to significant label savings. Combining these two ideas, we get an algorithm that is provably statistically consistent and that works under the assumption that there is a good difference classifier with low false negative error.
We analyze the label complexity of our algorithm as measured by the number of label requests to the labeling oracle. In general we cannot expect any consistent algorithm to provide label savings under all circumstances, and indeed our worst case asymptotic label complexity is the same as that of active learning using the oracle alone. Our analysis characterizes when we can achieve label savings, and we show that this happens for example if the weak labeler agrees with the labeling oracle for some fraction of the examples close to the decision boundary. Moreover, when the target classification task is agnostic, the number of labels required to learn the difference classifier is of a lower order than the number of labels required for active learning; thus in realistic cases, learning the difference classifier adds only a small overhead to the total label requirement, and overall we get label savings over using the oracle alone.
Related Work. There has been a considerable amount of empirical work on active learning where multiple annotators can provide labels for the unlabeled examples. One line of work assumes a generative model for each annotator's labels. The learning algorithm learns the parameters of the individual labelers, and uses them to decide which labeler to query for each example. [28, 29, 12] consider separate logistic regression models for each annotator, while [18, 19] assume that each annotator's labels are corrupted with a different amount of random classification noise. A second line of work [11, 15], which includes Pro-Active Learning, assumes that each labeler is an expert over an unknown subset of categories, and uses data to measure the class-wise expertise in order to optimally place label queries. In general, it is not known under what conditions these algorithms are statistically consistent, particularly when the modeling assumptions do not strictly hold, and under what conditions they provide label savings over regular active learning.
[24], the first theoretical work to consider this problem, considers a model where the weak labeler is more likely to provide incorrect labels in heterogeneous regions of space where similar examples have different labels. Their formalization is orthogonal to ours: while theirs is more natural in a non-parametric setting, ours is more natural for fitting classifiers in a hypothesis class. In a NIPS 2014 Workshop paper, [20] have also considered learning from strong and weak labelers; unlike ours, their work is in the online selective sampling setting, and applies only to linear classifiers and robust regression. [10] study learning from multiple teachers in the online selective sampling setting in a model where different labelers have different regions of expertise.
Finally, there is a large body of theoretical work [1, 8, 9, 13, 30, 2, 4] on learning a binary classifier based on interactive label queries made to a single labeler. In the realizable case, [21, 8] show that a generalization of binary search provides an exponential improvement in label complexity over passive learning. The problem is more challenging, however, in the more realistic agnostic case, where such approaches lead to inconsistency. The two styles of algorithms for agnostic active learning are disagreement-based active learning (DBAL) [1, 9, 13, 4] and the more recent margin-based or confidence-based active learning [2, 30]. Our algorithm builds on recent work in DBAL [4, 14].
2 Preliminaries
The Model. We begin with a general framework for actively learning from weak and strong labelers. In the standard active learning setting, we are given unlabelled data drawn from a distribution U over an input space X, a label space Y = {−1, 1}, a hypothesis class H, and a labeling oracle O to which we can make interactive queries.
In our setting, we additionally have access to a weak labeling oracle W which we can query interactively. Querying W is significantly cheaper than querying O; however, querying W generates a label y_W drawn from a conditional distribution P_W(y_W | x) which is not the same as the conditional distribution P_O(y_O | x) of O.
Let D be the data distribution over labelled examples such that: P_D(x, y) = P_U(x) P_O(y | x). Our goal is to learn a classifier h in the hypothesis class H such that with probability ≥ 1 − δ over the sample, we have: P_D(h(x) ≠ y) ≤ min_{h′ ∈ H} P_D(h′(x) ≠ y) + ε, while making as few (interactive) queries to O as possible.
Observe that in this model W may disagree with the oracle O anywhere in the input space; this is unlike previous frameworks [7, 23] where labels assigned by the weak labeler are corrupted by random classification noise with a higher variance than the labeling oracle. We believe this feature makes our model more realistic.
Second, unlike [24], mistakes made by the weak labeler do not have to be close to the decision boundary. This keeps the model general and simple, and allows greater flexibility to weak labelers. Our analysis shows that if W is largely incorrect close to the decision boundary, then our algorithm will automatically make more queries to O in its later stages.
Finally note that O is allowed to be non-realizable with respect to the target hypothesis class H.
Background on Active Learning Algorithms. The standard active learning setting is very similar to ours, the only difference being that we have access to the weak oracle W. There has been a long line of work on active learning [1, 6, 8, 13, 2, 9, 4, 30]. Our algorithms are based on a style called disagreement-based active learning (DBAL). The main idea is as follows. Based on the examples seen so far, the algorithm maintains a candidate set V_t of classifiers in H that is guaranteed with high probability to contain h*, the classifier in H with the lowest error. Given a randomly drawn unlabeled example x_t, if all classifiers in V_t agree on its label, then this label is inferred; observe that with high probability, this inferred label is h*(x_t). Otherwise, x_t is said to be in the disagreement region of V_t, and the algorithm queries O for its label. V_t is updated based on x_t and its label, and the algorithm continues.
Recent works in DBAL [9, 4] have observed that it is possible to determine if an x_t is in the disagreement region of V_t without explicitly maintaining V_t. Instead, a labelled dataset S_t is maintained; the labels of the examples in S_t are obtained by either querying the oracle or direct inference. To determine whether an x_t lies in the disagreement region of V_t, two constrained ERM procedures are performed; empirical risk is minimized over S_t while constraining the classifier to output the label of x_t as 1 and −1 respectively. If these two classifiers have similar training errors, then x_t lies in the disagreement region of V_t; otherwise the algorithm infers a label for x_t that agrees with the label assigned by h*.
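A minimal sketch of this implicit test, assuming a finite hypothesis class and 0-1 empirical error; the function names and the `slack` threshold are illustrative stand-ins for the generalization-error terms used by the actual algorithms.

```python
def erm(hypotheses, data, constraints=()):
    """Hypothesis with smallest empirical 0-1 error on `data`, among those
    consistent with every (x, y) pair in `constraints`."""
    feasible = [h for h in hypotheses
                if all(h(x) == y for x, y in constraints)]
    return min(feasible,
               key=lambda h: sum(h(x) != y for x, y in data),
               default=None)

def in_disagreement_region(hypotheses, data, x, slack):
    """DBAL test: x is in the disagreement region iff forcing each of the two
    possible labels at x yields near-equal constrained training errors."""
    errors = []
    for y in (-1, +1):
        h = erm(hypotheses, data, constraints=[(x, y)])
        if h is None:  # no hypothesis can output y at x: all of them agree
            return False
        errors.append(sum(h(xx) != yy for xx, yy in data))
    return abs(errors[0] - errors[1]) <= slack
```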
More Definitions and Notation. The error of a classifier h under a labelled data distribution Q is defined as: err_Q(h) = P_{(x,y)∼Q}(h(x) ≠ y); we use the notation err(h, S) to denote its empirical error on a labelled data set S. We use the notation h* to denote the classifier with the lowest error under D and ν to denote its error err_D(h*), where D is the target labelled data distribution.
Our active learning algorithm implicitly maintains a (1 − δ)-confidence set for h* throughout the algorithm. Given a set S of labelled examples, a set of classifiers V(S) ⊆ H is said to be a (1 − δ)-confidence set for h* with respect to S if h* ∈ V with probability ≥ 1 − δ over S.
The disagreement between two classifiers h1 and h2 under an unlabelled data distribution U, denoted by ρ_U(h1, h2), is P_{x∼U}(h1(x) ≠ h2(x)). Observe that the disagreements under U form a pseudometric over H. We use B_U(h, r) to denote a ball of radius r centered around h in this metric. The disagreement region of a set V of classifiers, denoted by DIS(V), is the set of all examples x ∈ X such that there exist two classifiers h1 and h2 in V for which h1(x) ≠ h2(x).
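All three of these quantities are easy to estimate from an unlabeled sample standing in for U; a small sketch:

```python
def disagreement(h1, h2, xs):
    """Empirical estimate of the pseudometric rho_U(h1, h2)."""
    return sum(h1(x) != h2(x) for x in xs) / len(xs)

def ball(hypotheses, h, r, xs):
    """Empirical version of B_U(h, r): hypotheses within distance r of h."""
    return [g for g in hypotheses if disagreement(g, h, xs) <= r]

def dis_region(V, xs):
    """Points of xs on which at least two classifiers in V disagree."""
    return [x for x in xs if len({h(x) for h in V}) > 1]
```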
3 Algorithm
Our main algorithm is a standard single-annotator DBAL algorithm with a major modification: when the DBAL algorithm makes a label query, we use an extra sub-routine to decide whether this query should be made to the oracle or the weak labeler, and make it accordingly. How do we make this decision? We try to predict if the weak labeler differs from the oracle on this example; if so, we query the oracle; otherwise, we query the weak labeler.
Key Idea 1: Cost-Sensitive Difference Classifier. How do we predict if the weak labeler differs from the oracle? A plausible approach is to learn a difference classifier h^df in a hypothesis class H^df to determine if there is a difference. Our first key observation is that when the region where O and W differ cannot be perfectly modeled by H^df, the resulting active learning algorithm is statistically inconsistent. Any false negative errors (that is, incorrectly predicting no difference) made by the difference classifier lead to biased annotation for the target classification task, which in turn leads to inconsistency. We address this problem by instead learning a cost-sensitive difference classifier, and we assume that a classifier with low false negative error exists in H^df. While training, we constrain the false negative error of the difference classifier to be low, and minimize the number of predicted positives (or disagreements between W and O) subject to this constraint. This ensures that the annotated data used by the active learning algorithm has diminishing bias, thus ensuring consistency.
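A sketch of this cost-sensitive step over a finite difference-hypothesis class: among candidates whose empirical false-negative rate stays within a budget, pick the one predicting the fewest positives. The budget argument is an illustrative stand-in; the paper's Algorithm 2 derives it from the target error and an estimated disagreement mass.

```python
def train_difference_classifier(candidates, triples, fn_budget):
    """`triples` holds (x, y_O, y_W) with labels from both labelers.
    A false negative is predicting 'no difference' (-1) where y_O != y_W."""
    m = len(triples)
    feasible = [
        h for h in candidates
        if sum(h(x) == -1 and yo != yw for x, yo, yw in triples) <= fn_budget * m
    ]
    # Among low-false-negative candidates, minimize predicted positives, i.e.
    # the fraction of points on which we will fall back to the oracle.
    return min(feasible,
               key=lambda h: sum(h(x) == +1 for x, _, _ in triples),
               default=None)
```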
Key Idea 2: Localized Difference Classifier Training. Unfortunately, even with cost-sensitive training, directly learning a difference classifier accurately is expensive. If d′ is the VC dimension of the difference hypothesis class H^df, then to learn a target classifier to excess error ε, we need a difference classifier with false negative error O(ε), which, from standard generalization theory, requires Õ(d′/ε) labels [5, 22]! Our second key observation is that we can save on labels by training the difference classifier in a localized manner, because the DBAL algorithm that builds the target classifier only makes label queries in the disagreement region of the current confidence set for h*. Therefore we train the difference classifier only on this region and still maintain consistency. Additionally this provides label savings: while training the target classifier to excess error ε, we need to train a difference classifier with only Õ(d′φ_k/ε) labels, where φ_k is the probability mass of this disagreement region. The localized training process leads to an additional technical challenge: as the confidence set for h* is updated, its disagreement region changes. We address this through an epoch-based DBAL algorithm, where the confidence set is updated and a fresh difference classifier is trained in each epoch.
Main Algorithm. Our main algorithm (Algorithm 1) combines these two key ideas, and like [4], implicitly maintains the (1 − δ)-confidence set for h* through a labeled dataset Ŝ_k. In epoch k, the target excess error is ε_k ≈ 2^{−k}, and the goal of Algorithm 1 is to generate a labeled dataset Ŝ_k that implicitly represents a (1 − δ_k)-confidence set on h*. Additionally, Ŝ_k has the property that the empirical risk minimizer over it has excess error ≤ ε_k.
A naive way to generate such an Ŝ_k is by drawing Õ(d/ε_k²) labeled examples, where d is the VC dimension of H. Our goal, however, is to generate Ŝ_k using a much smaller number of label queries, which is accomplished by Algorithm 5. This is done in two ways. First, like standard DBAL, we infer the label of any x that lies outside the disagreement region of the current confidence set for h*. Algorithm 4 identifies whether an x lies in this region. Second, for any x in the disagreement region, we determine whether O and W agree on x using a difference classifier; if there is agreement, we query W; else we query O. The difference classifier used to determine agreement is retrained in the beginning of each epoch by Algorithm 2, which ensures that the annotation has low bias.
The algorithms use a constrained ERM procedure CONS-LEARN. Given a hypothesis class H, a labeled dataset S and a set of constraining examples C, CONS-LEARN_H(C, S) returns a classifier in H that minimizes the empirical error on S subject to h(x_i) = y_i for each (x_i, y_i) ∈ C.
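For a finite hypothesis class, CONS-LEARN is a one-line constrained minimization; a sketch (the naming is ours):

```python
def cons_learn(hypotheses, C, S):
    """Constrained ERM: minimize error on S subject to h(x) = y for all (x, y) in C."""
    return min((h for h in hypotheses if all(h(x) == y for x, y in C)),
               key=lambda h: sum(h(x) != y for x, y in S),
               default=None)
```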
Identifying the Disagreement Region. Algorithm 4 (deferred to the Appendix) identifies if an unlabeled example x lies in the disagreement region of the current (1 − δ)-confidence set for h*; recall that this confidence set is implicitly maintained through Ŝ_k. The identification is based on two ERM queries. Let ĥ be the empirical risk minimizer on the current labeled dataset Ŝ_{k−1}, and ĥ′ be the empirical risk minimizer on Ŝ_{k−1} under the constraint that ĥ′(x) = −ĥ(x). If the training errors of ĥ and ĥ′ are very different, then all classifiers with training error close to that of ĥ assign the same label to x, and x lies outside the current disagreement region.
Training the Difference Classifier. Algorithm 2 trains a difference classifier on a random set of examples which lie in the disagreement region of the current confidence set for h*. The training process is cost-sensitive, and is similar to [16, 17, 5, 22]. A hard bound is imposed on the false-negative error, which translates to a bound on the annotation bias for the target task. The number of positives (i.e., the number of examples where W and O differ) is minimized subject to this constraint; this amounts to (approximately) minimizing the fraction of queries made to O.
The number of labeled examples used in training is large enough to ensure false negative error O(ε_k/φ_k) over the disagreement region of the current confidence set; here φ_k is the probability mass of this disagreement region under U. This ensures that the overall annotation bias introduced by this procedure in the target task is at most O(ε_k). As φ_k is small and typically diminishes with k, this requires fewer labels than training the difference classifier globally, which would have required Õ(d′/ε_k) queries to O.
Algorithm 1 Active Learning Algorithm from Weak and Strong Labelers
1: Input: Unlabeled distribution U, target excess error ε, confidence δ, labeling oracle O, weak oracle W, hypothesis class H, hypothesis class for difference classifier H^df.
2: Output: Classifier ĥ in H.
3: Initialize: initial error ε_0 = 1, confidence δ_0 = δ/4. Total number of epochs k_0 = ⌈log(1/ε)⌉.
4: Initial number of examples n_0 = O((1/ε_0²)(d ln(1/ε_0²) + ln(1/δ_0))).
5: Draw a fresh sample and query O for its labels Ŝ_0 = {(x_1, y_1), ..., (x_{n_0}, y_{n_0})}. Let σ_0 = σ(n_0, δ_0).
6: for k = 1, 2, ..., k_0 do
7:   Set target excess error ε_k = 2^{−k}, confidence δ_k = δ/(4(k + 1)²).
8:   # Train Difference Classifier
9:   ĥ_k^df ← Call Algorithm 2 with inputs unlabeled distribution U, oracles W and O, target excess error ε_k, confidence δ_k/2, previously labeled dataset Ŝ_{k−1}.
10:  # Adaptive Active Learning using Difference Classifier
11:  σ_k, Ŝ_k ← Call Algorithm 5 with inputs unlabeled distribution U, oracles W and O, difference classifier ĥ_k^df, target excess error ε_k, confidence δ_k/2, previously labeled dataset Ŝ_{k−1}.
12: end for
13: return ĥ ← CONS-LEARN_H(∅, Ŝ_{k_0}).
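For concreteness, the epoch schedule set up in lines 3 and 7 above can be tabulated directly; a small sketch assuming the logarithm in k_0 is base 2 (so that ε_{k_0} ≤ ε), e.g. with ε = 0.01 and δ = 0.05:

```python
import math

eps, delta = 0.01, 0.05
k0 = math.ceil(math.log2(1 / eps))          # 7 epochs
for k in range(1, k0 + 1):
    eps_k = 2.0 ** (-k)                     # target excess error this epoch
    delta_k = delta / (4 * (k + 1) ** 2)    # per-epoch confidence budget
    print(k, eps_k, delta_k)
```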
Adaptive Active Learning using the Difference Classifier. Finally, Algorithm 5 (deferred to the Appendix) is our main active learning procedure, which generates a labeled dataset Ŝ_k that is implicitly used to maintain a tighter (1 − δ)-confidence set for h*. Specifically, Algorithm 5 generates an Ŝ_k such that the set V_k defined as:
    V_k = {h : err(h, Ŝ_k) ≤ min_{h′ ∈ H} err(h′, Ŝ_k) + 3ε_k/4}
has the property that:
    {h : err_D(h) − err_D(h*) ≤ ε_k/2} ⊆ V_k ⊆ {h : err_D(h) − err_D(h*) ≤ ε_k}.
This is achieved by labeling, through inference or query, a large enough sample of unlabeled data drawn from U. Labels are obtained from three sources: direct inference (if x lies outside the disagreement region as identified by Algorithm 4), querying O (if the difference classifier predicts a difference), and querying W. How large should the sample be to reach the target excess error? If err_D(h*) = ν, then achieving an excess error of ε_k requires Õ(dν/ε_k²) samples, where d is the VC dimension of the hypothesis class. As ν is unknown in advance, we use a doubling procedure in lines 4-14 to iteratively determine the sample size.
¹ Note that if in Algorithm 3, the upper confidence bound of P_{x∼U}(in_disagr_region(T̂, 3ε/2, x) = 1) is lower than ε/64, then we can halt Algorithm 2 and return an arbitrary h^df in H^df. Using this h^df will still guarantee the correctness of Algorithm 1.
Algorithm 2 Training Algorithm for Difference Classifier
1: Input: Unlabeled distribution U, oracles W and O, target error ε, hypothesis class H^df, confidence δ, previous labeled dataset T̂.
2: Output: Difference classifier ĥ^df.
3: Let p̂ be an estimate of P_{x∼U}(in_disagr_region(T̂, 3ε/2, x) = 1), obtained by calling Algorithm 3 (deferred to the Appendix) with failure probability δ/3.¹
4: Let U′ = ∅, i = 1, and
     m = (64·1024·p̂/ε)(d′ ln(512·1024·p̂/ε) + ln(72/δ))    (1)
5: repeat
6:   Draw an example x_i from U.
7:   if in_disagr_region(T̂, 3ε/2, x_i) = 1 then  # x_i is inside the disagreement region
8:     query both W and O for labels to get y_{i,W} and y_{i,O}.
9:   end if
10:  U′ = U′ ∪ {(x_i, y_{i,O}, y_{i,W})}
11:  i = i + 1
12: until |U′| = m
13: Learn a classifier ĥ^df ∈ H^df through the following constrained empirical risk minimization:
     ĥ^df = argmin_{h^df ∈ H^df} Σ_{i=1}^m 1(h^df(x_i) = +1),  s.t.  Σ_{i=1}^m 1(h^df(x_i) = −1 and y_{i,O} ≠ y_{i,W}) ≤ mε/(256 p̂)    (2)
14: return ĥ^df.
4 Performance Guarantees
We now examine the performance of our algorithm, which is measured by the number of label queries made to the oracle O. Additionally we require our algorithm to be statistically consistent, which means that the true error of the output classifier should converge to the true error of the best classifier in H on the data distribution D.
Since our framework is very general, we cannot expect any statistically consistent algorithm to achieve label savings over using O alone under all circumstances. For example, if labels provided by W are the complete opposite of O, no algorithm will achieve both consistency and label savings. We next provide an assumption under which Algorithm 1 works and yields label savings.
Assumption. The following assumption states that the difference hypothesis class contains a good cost-sensitive predictor of when O and W differ in the disagreement region of B_U(h*, r); a predictor is good if it has low false-negative error and predicts a positive label with low frequency. If there is no such predictor, then we cannot expect an algorithm similar to ours to achieve label savings.
Assumption 1. Let D be the joint distribution: P_D(x, y_O, y_W) = P_U(x) P_W(y_W | x) P_O(y_O | x). For any r, η > 0, there exists an h^df_{η,r} ∈ H^df with the following properties:
    P_D(h^df_{η,r}(x) = −1, x ∈ DIS(B_U(h*, r)), y_O ≠ y_W) ≤ η    (3)
    P_D(h^df_{η,r}(x) = 1, x ∈ DIS(B_U(h*, r))) ≤ α(r, η)    (4)
Note that (3), which states there is an h^df ∈ H^df with low false-negative error, is minimally restrictive, and is trivially satisfied if H^df includes the constant classifier that always predicts 1. Theorem 1 shows that (3) is sufficient to ensure statistical consistency.
(4) in addition states that the number of positives predicted by the classifier h^df_{η,r} is upper bounded by α(r, η). Note α(r, η) ≤ P_U(DIS(B_U(h*, r))) always; performance gain is obtained when α(r, η) is lower, which happens when the difference classifier predicts agreement on a significant portion of DIS(B_U(h*, r)).
Consistency. Provided Assumption 1 holds, we next show that Algorithm 1 is statistically consistent. Establishing consistency is non-trivial for our algorithm as the output classifier is trained on labels from both O and W.
Theorem 1 (Consistency). Let h* be the classifier that minimizes the error with respect to D. If Assumption 1 holds, then with probability ≥ 1 − δ, the classifier ĥ output by Algorithm 1 satisfies: err_D(ĥ) ≤ err_D(h*) + ε.
Label Complexity. The label complexity of standard DBAL is measured in terms of the disagreement coefficient. The disagreement coefficient θ(r) at scale r is defined as:
    θ(r) = sup_{h ∈ H} sup_{r′ ≥ r} P_U(DIS(B_U(h, r′)))/r′;
intuitively, this measures the rate of shrinkage of the disagreement region with the radius of the ball B_U(h, r) for any h in H. It was shown by [9] that the label complexity of DBAL for target excess generalization error ε is Õ(dθ(2ν + ε)(1 + ν²/ε²)), where the Õ notation hides factors logarithmic in 1/ε and 1/δ. In contrast, the label complexity of our algorithm can be stated in Theorem 2. Here we use the Õ notation for convenience; we have the same dependence on log 1/ε and log 1/δ as the bounds for DBAL.
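For intuition about θ(r), the disagreement mass in its numerator can be approximated by Monte Carlo. The sketch below does this for homogeneous linear separators under the uniform distribution on the sphere, where ρ_U(h_u, h_v) = angle(u, v)/π; sampling only finitely many hypotheses from the ball under-counts the true region, so this is an illustration rather than an estimator with guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    return v / np.linalg.norm(v)

def dis_mass_linear(w_star, r, n_x=20000, n_h=200, d=5):
    """Approximate P(DIS(B_U(h_{w*}, r))) for homogeneous linear separators
    under the uniform distribution on the unit sphere."""
    # Sample hypothesis directions at angle <= r*pi from w_star.
    hs = []
    for _ in range(n_h):
        ang = rng.uniform(0, r * np.pi)
        v = unit(rng.standard_normal(d))
        v = unit(v - v.dot(w_star) * w_star)          # direction orthogonal to w_star
        hs.append(np.cos(ang) * w_star + np.sin(ang) * v)
    H = np.array(hs)
    X = np.array([unit(rng.standard_normal(d)) for _ in range(n_x)])
    signs = np.sign(X @ H.T)                          # n_x by n_h label matrix
    base = np.sign(X @ w_star)
    in_dis = (signs != base[:, None]).any(axis=1)     # some sampled h flips the label
    return in_dis.mean()

w_star = unit(np.ones(5))
print(dis_mass_linear(w_star, r=0.1))
```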
Theorem 2 (Label Complexity). Let d be the VC dimension of H and let d′ be the VC dimension of H^df. If Assumption 1 holds, and if the error of the best classifier in H on D is ν, then with probability ≥ 1 − δ, the following hold:
1. The number of label queries made by Algorithm 1 to the oracle O in epoch k is at most:
    m_k = Õ( d(2ν + ε_{k−1})(α(2ν + ε_{k−1}, ε_{k−1}/1024) + ε_{k−1})/ε_k² + d′ P(DIS(B_U(h*, 2ν + ε_{k−1})))/ε_k )    (5)
2. The total number of label queries made by Algorithm 1 to the oracle O is at most:
    Õ( sup_{r ≥ ε} [(α(2ν + r, r/1024) + r)/(2ν + r)] · d(ν²/ε² + 1) + θ(2ν + ε) d′ (ν/ε + 1) )    (6)
4.1 Discussion
The first terms in (5) and (6) represent the labels needed to learn the target classifier, and the second terms represent the overhead in learning the difference classifier.
In the realistic agnostic case (where ν > 0), as ε → 0, the second terms are lower order compared to the label complexity of DBAL. Thus even if d′ is somewhat larger than d, fitting the difference classifier does not incur an asymptotically high overhead in the more realistic agnostic case. In the realizable case, when d′ ≈ d, the second terms are of the same order as the first; therefore we should use a simpler difference hypothesis class H^df in this case. We believe that the lower order overhead term comes from the fact that there exists a classifier in H^df whose false negative error is very low.
Comparing Theorem 2 with the corresponding results for DBAL, we observe that instead of θ(2ν + ε), we have the term sup_{r ≥ ε} (α(2ν + r, r/1024) + r)/(2ν + r). Since sup_{r ≥ ε} (α(2ν + r, r/1024) + r)/(2ν + r) ≤ θ(2ν + ε), the worst case asymptotic label complexity is the same as that of standard DBAL. This label complexity may be considerably better, however, if sup_{r ≥ ε} (α(2ν + r, r/1024) + r)/(2ν + r) is less than the disagreement coefficient. As we expect, this will happen when the region of difference between W and O restricted to the disagreement regions is relatively small, and this region is well-modeled by the difference hypothesis class H^df.
An interesting case is when the weak labeler differs from O close to the decision boundary and agrees with O away from this boundary. In this case, any consistent algorithm should switch to querying O close to the decision boundary. Indeed, in earlier epochs α is low, and our algorithm obtains a good difference classifier and achieves label savings. In later epochs α is high, the difference classifiers always predict a difference, and the label complexity of the later epochs of our algorithm is of the same order as that of DBAL. In practice, if we suspect that we are in this case, we can switch to plain active learning once ε_k is small enough.
Case Study: Linear Classification under Uniform Distribution. We provide a simple example where our algorithm provides a better asymptotic label complexity than DBAL. Let H be the class of homogeneous linear separators on the d-dimensional unit ball and let H^df = {hΔh′ : h, h′ ∈ H}. Furthermore, let U be the uniform distribution over the unit ball.
[Figure 1 here: a two-dimensional illustration showing the decision boundaries of h_{w*} and W over the unit ball, the shaded region {x : h_{w*}(x) ≠ y_O} of probability ν, the region {x : P(y_O ≠ y_W | x) > 0}, and the set {x : ĥ^df(x) = 1} of probability g = o(√d·ν).]
Figure 1: Linear classification over the unit ball with d = 2. Left: decision boundary of labeler O and h* = h_{w*}; the region where O differs from h* is shaded, and has probability ν. Middle: decision boundary of weak labeler W. Right: ĥ^df, W and O. Note that {x : P(y_O ≠ y_W | x) > 0} ⊆ {x : ĥ^df(x) = 1}.
Suppose that O is a deterministic labeler such that err_D(h*) = ν > 0. Moreover, suppose that W is such that there exists a difference classifier ĥ^df with false negative error 0 for which P_U(ĥ^df(x) = 1) ≤ g. Additionally, we assume that g = o(√d·ν); observe that this is not a strict assumption on H^df, as ν could be as much as a constant. Figure 1 shows an example in d = 2 that satisfies these assumptions. In this case, as ε → 0, Theorem 2 gives the following label complexity bound.
Corollary 1. With probability ≥ 1 − δ, the number of label queries made to oracle O by Algorithm 1 is Õ( d·max(g/ν, 1)(ν²/ε² + 1) + d^{3/2}(1 + ν/ε) ), where the Õ notation hides factors logarithmic in 1/ε and 1/δ.
As g = o(√d·ν), this improves over the label complexity of DBAL, which is Õ(d^{3/2}(1 + ν²/ε²)).
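To see where the improvement comes from, substitute the assumption g = o(√d·ν) into the dominant term of Corollary 1 (a short check, using only the two displayed bounds):

```latex
\tilde{O}\!\Big(d\,\max\!\big(\tfrac{g}{\nu},\,1\big)\,\tfrac{\nu^2}{\varepsilon^2}\Big)
  \;=\; \tilde{O}\!\Big(d \cdot o(\sqrt{d}) \cdot \tfrac{\nu^2}{\varepsilon^2}\Big)
  \;=\; o\!\Big(d^{3/2}\,\tfrac{\nu^2}{\varepsilon^2}\Big)
  \;\ll\; \tilde{O}\!\Big(d^{3/2}\big(1+\tfrac{\nu^2}{\varepsilon^2}\big)\Big),
```

since g/ν = o(√d); the remaining d^{3/2}(1 + ν/ε) term of Corollary 1 is of lower order than d^{3/2}·ν²/ε² as ε → 0 whenever ν > 0.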
Conclusion. In this paper, we take a step towards a theoretical understanding of active learning from multiple annotators through a learning-theoretic formalization for learning from weak and strong labelers. Our work shows that multiple annotators can be successfully combined to do active learning in a statistically consistent manner under a general setting with few assumptions; moreover, under reasonable conditions, this kind of learning can provide label savings over plain active learning.
An avenue for future work is to explore a more general setting where we have multiple labelers with expertise on different regions of the input space. Can we combine inputs from such labelers in a statistically consistent manner? Second, our algorithm is intended for a setting where W is biased, and performs suboptimally when the label generated by W is a random corruption of the label provided by O. How can we account for both random noise and bias in active learning from weak and strong labelers?
Acknowledgements
We thank NSF under IIS 1162581 for research support and Jennifer Dy for introducing us to the problem of active learning from multiple labelers.
References
[1] M.-F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. J. Comput. Syst.
Sci., 75(1):78–89, 2009.
[2] M.-F. Balcan and P. M. Long. Active and passive learning of linear separators under log-concave distributions. In COLT, 2013.
[3] A. Beygelzimer, D. Hsu, J. Langford, and T. Zhang. Active learning with an ERM oracle,
2009.
[4] A. Beygelzimer, D. Hsu, J. Langford, and T. Zhang. Agnostic active learning without constraints. In NIPS, 2010.
[5] Nader H. Bshouty and Lynn Burroughs. Maximizing agreements with one-sided error with
applications to heuristic learning. Machine Learning, 59(1-2):99–123, 2005.
[6] D. A. Cohn, L. E. Atlas, and R. E. Ladner. Improving generalization with active learning.
Machine Learning, 15(2), 1994.
[7] K. Crammer, M. J. Kearns, and J. Wortman. Learning from data of variable quality. In NIPS,
2005.
[8] S. Dasgupta. Coarse sample complexity bounds for active learning. In NIPS, 2005.
[9] S. Dasgupta, D. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. In
NIPS, 2007.
[10] O. Dekel, C. Gentile, and K. Sridharan. Selective sampling and active learning from single
and multiple teachers. JMLR, 13:2655–2697, 2012.
[11] P. Donmez and J. Carbonell. Proactive learning: Cost-sensitive active learning with multiple
imperfect oracles. In CIKM, 2008.
[12] Meng Fang, Xingquan Zhu, Bin Li, Wei Ding, and Xindong Wu. Self-taught active learning
from crowds. In ICDM, pages 858–863. IEEE, 2012.
[13] S. Hanneke. A bound on the label complexity of agnostic active learning. In ICML, 2007.
[14] D. Hsu. Algorithms for Active Learning. PhD thesis, UC San Diego, 2010.
[15] Panagiotis G Ipeirotis, Foster Provost, Victor S Sheng, and Jing Wang. Repeated labeling using
multiple noisy labelers. Data Mining and Knowledge Discovery, 28(2):402?441, 2014.
[16] Adam Tauman Kalai, Varun Kanade, and Yishay Mansour. Reliable agnostic learning. J.
Comput. Syst. Sci., 78(5):1481–1495, 2012.
[17] Varun Kanade and Justin Thaler. Distribution-independent reliable learning. In COLT, 2014.
[18] C. H. Lin, Mausam, and D. S. Weld. To re(label), or not to re(label). In HCOMP, 2014.
[19] C.H. Lin, Mausam, and D.S. Weld. Reactive learning: Actively trading off larger noisier
training sets against smaller cleaner ones. In ICML Workshop on Crowdsourcing and Machine
Learning and ICML Active Learning Workshop, 2015.
[20] L. Malago, N. Cesa-Bianchi, and J. Renders. Online active learning with strong and weak
annotators. In NIPS Workshop on Learning from the Wisdom of Crowds, 2014.
[21] R. D. Nowak. The geometry of generalized binary search. IEEE Transactions on Information
Theory, 57(12):7893–7906, 2011.
[22] Hans Ulrich Simon. PAC-learning in the presence of one-sided classification noise. Ann. Math. Artif. Intell., 71(4):283–300, 2014.
[23] S. Song, K. Chaudhuri, and A. D. Sarwate. Learning from data with heterogeneous noise using
SGD. In AISTATS, 2015.
[24] R. Urner, S. Ben-David, and O. Shamir. Learning from weak teachers. In AISTATS, pages
1252–1260, 2012.
[25] S. Vijayanarasimhan and K. Grauman. What's it going to cost you?: Predicting effort vs. informativeness for multi-label image annotations. In CVPR, pages 2262–2269, 2009.
[26] S. Vijayanarasimhan and K. Grauman. Cost-sensitive active visual category learning. IJCV,
91(1):24–44, 2011.
[27] P. Welinder, S. Branson, S. Belongie, and P. Perona. The multidimensional wisdom of crowds.
In NIPS, pages 2424–2432, 2010.
[28] Y. Yan, R. Rosales, G. Fung, and J. G. Dy. Active learning from crowds. In ICML, pages
1161–1168, 2011.
[29] Y. Yan, R. Rosales, G. Fung, F. Farooq, B. Rao, and J. G. Dy. Active learning from multiple
knowledge sources. In AISTATS, pages 1350–1357, 2012.
[30] C. Zhang and K. Chaudhuri. Beyond disagreement-based agnostic active learning. In NIPS,
2014.
| 5988 |@word middle:1 pw:2 dekel:1 tedious:1 open:1 eng:1 sgd:1 initial:2 contains:1 ours:5 existing:1 err:3 current:7 com:1 comparing:1 beygelzimer:3 realistic:5 happen:1 cant:2 cheap:1 atlas:1 n0:2 v:1 alone:4 generative:1 accordingly:1 beginning:1 provides:4 coarse:1 math:1 simpler:1 zhang:4 direct:2 incorrect:7 ijcv:1 overhead:4 combine:2 inside:1 manner:3 indeed:2 examine:1 multi:1 globally:1 automatically:1 little:1 begin:1 provided:3 moreover:3 notation:6 bounded:1 agnostic:11 mass:2 lowest:2 what:4 kind:2 minimizes:2 developed:1 guarantee:2 multidimensional:1 interactive:3 grauman:2 k2:2 unit:3 medical:4 positive:4 mistake:1 establishing:1 meng:1 approximately:1 minimally:1 challenging:1 shaded:1 branson:1 statistically:10 practice:1 differs:5 procedure:5 empirical:9 yan:2 regular:1 get:3 cannot:4 unlabeled:11 close:6 convenience:1 context:1 risk:5 vijayanarasimhan:2 imposed:1 deterministic:1 maximizing:1 identifying:1 fang:1 hd:8 updated:3 tting:2 diego:3 target:19 suppose:2 yishay:1 shamir:1 homogeneous:1 us:2 hypothesis:19 agreement:5 expensive:4 particularly:1 continues:1 nader:1 nitions:1 predicts:5 labeled:12 observed:1 ding:1 wang:1 worst:2 region:35 ensures:4 pd:6 complexity:17 trained:2 incur:1 learner:4 po:3 joint:1 k0:3 train:5 query:34 labeling:12 outside:3 crowd:4 whose:2 heuristic:1 larger:2 plausible:2 cvpr:1 drawing:1 otherwise:3 ability:2 noisy:2 online:3 mausam:2 combining:2 chaudhuri:3 achieve:4 gold:1 exploiting:1 rst:5 requirement:1 jing:1 adam:1 ben:1 measured:3 bshouty:1 strong:11 predicted:2 signi:3 come:1 trading:1 rosales:2 differ:4 radius:2 annotated:1 vc:5 centered:1 bin:1 require:1 assign:1 generalization:4 preliminary:1 tighter:1 strictly:1 hold:5 around:1 considered:1 predict:5 major:1 achieves:1 diminishes:1 panagiotis:1 label:96 sensitive:7 agrees:3 correctness:1 successfully:1 always:3 kalai:1 shrinkage:1 corollary:1 yo:7 improvement:1 vk:3 contrast:1 realizable:3 inference:3 typically:1 diminishing:1 perona:1 selective:3 going:1 provably:1 overall:2 colt:2 denoted:2 constrained:2 initialize:1 uc:3 once:1 saving:13 sampling:3 labeler:28 represents:1 icml:4 future:1 minimized:2 few:4 randomly:1 modi:1 intell:1 individual:1 cheaper:3 intended:1 geometry:1 maintain:3 satis:3 mining:1 deferred:3 nowak:1 orthogonal:1 supr:4 re:3 theoretical:3 mk:1 classify:2 modeling:1 earlier:1 rao:1 learnh:2 cost:8 introducing:1 subset:3 predictor:3 uniform:2 wortman:1 welinder:1 characterize:1 optimally:1 teacher:3 corrupted:2 considerably:1 combined:1 person:1 st:3 cantly:1 bu:8 physician:2 off:1 thesis:1 cesa:1 interactively:3 expert:4 style:2 return:4 actively:3 syst:2 account:1 li:1 de:4 includes:2 explicitly:1 later:3 performed:1 h1:5 try:1 proactive:1 analyze:2 characterizes:1 portion:1 sup:1 maintains:3 errd:9 annotation:6 simon:1 chicheng:1 minimize:1 variance:1 who:1 largely:1 yield:2 wisdom:2 weak:34 accurately:1 expertise:3 hanneke:1 corruption:1 cation:10 reach:1 coef:3 monteleoni:1 ed:2 urner:1 failure:1 against:1 frequency:1 burroughs:1 con:24 gain:1 hsu:4 dataset:9 recall:1 knowledge:2 infers:1 improves:1 routine:1 higher:1 varun:2 wei:1 done:1 furthermore:1 anywhere:2 stage:1 until:1 langford:3 sheng:1 cohn:1 resident:2 logistic:1 costsensitive:1 quality:2 xindong:1 quire:1 believe:2 artif:1 contain:1 true:2 assigned:2 iteratively:1 self:1 maintained:2 generalized:1 complete:1 plexity:1 theoretic:1 performs:1 pro:1 passive:2 balcan:2 image:3 wise:1 donmez:1 sarwate:1 theirs:1 consistency:9 trivially:1 access:2 han:1 labelers:15 add:1 pu:5 recent:4 
hide:2 occasionally:3 binary:3 vt:8 inconsistency:2 accomplished:1 yi:8 victor:1 seen:1 greater:1 additional:1 somewhat:1 gentile:1 speci:2 determine:6 converge:1 ii:1 multiple:14 infer:1 hcomp:1 technical:1 unlabelled:2 long:2 lin:2 icdm:1 halt:1 ensuring:1 regression:2 heterogeneous:2 circumstance:2 metric:1 represent:2 achieved:1 addition:3 background:1 else:1 source:3 extra:3 biased:3 unlike:3 strict:1 subject:3 suspect:1 logconcave:1 inconsistent:2 sridharan:1 call:2 presence:1 constraining:2 enough:3 switch:2 perfectly:1 opposite:1 reduce:2 idea:5 imperfect:1 avenue:1 translates:1 whether:4 effort:1 song:1 render:1 yw:7 involve:1 cleaner:1 amount:3 category:2 generate:3 exist:1 nsf:1 cikm:1 dasgupta:2 taught:1 key:7 achieving:1 drawn:4 asymptotically:1 fraction:2 everywhere:1 you:1 place:1 throughout:1 reasonable:1 decide:2 wu:1 draw:2 decision:8 appendix:3 dy:3 bound:7 guaranteed:1 oracle:36 occur:1 constraint:4 constrain:1 calling:1 dence:21 weld:2 generates:3 min:1 pseudometric:1 px:2 relatively:1 ned:3 fung:2 request:1 marginbased:1 ball:5 smaller:2 slightly:1 making:3 happens:2 intuitively:1 restricted:2 erm:4 sided:2 ln:4 agree:2 previously:2 jennifer:1 turn:1 needed:2 end:2 observe:6 away:1 disagreement:30 save:1 assumes:2 ensure:2 maintaining:1 cally:2 restrictive:1 build:2 establish:1 question:2 parametric:1 dependence:1 usual:1 said:2 separate:1 thank:1 sci:2 carbonell:1 trivial:1 fresh:2 suboptimally:1 modeled:2 minimizing:1 unfortunately:1 lynn:1 negative:12 stated:1 synthesizing:1 unknown:2 bianchi:1 disagree:1 upper:2 observation:4 ladner:1 minh:1 t:3 incorrectly:1 y1:1 mansour:1 ucsd:2 provost:1 arbitrary:1 retrained:1 inferred:2 introduced:1 david:1 required:3 bene:1 identi:4 yn0:1 nip:8 address:5 justin:1 beyond:1 usually:1 pattern:1 challenge:1 rf:4 max:1 reliable:2 natural:2 predicting:2 ipeirotis:1 zhu:1 thaler:1 naive:1 epoch:9 understanding:1 acknowledgement:1 discovery:1 asymptotic:3 expect:4 suf:2 suph:1 interesting:1 querying:7 localized:4 annotator:9 h2:5 consistent:10 informativeness:1 foster:1 ulrich:1 repeat:1 dis:7 bias:5 tauman:1 boundary:8 plain:3 dimension:5 made:11 adaptive:2 san:3 far:1 transaction:1 excess:11 obtains:1 implicitly:5 keep:1 active:56 xingquan:1 assumed:1 belongie:1 xi:8 search:2 additionally:5 kanade:2 learn:12 robust:1 improving:1 necessarily:1 separator:2 domain:1 aistats:3 main:5 noise:5 allowed:1 repeated:1 body:2 x1:1 cient:5 formalization:2 sub:1 exponential:1 comput:2 candidate:1 lie:8 jmlr:1 learns:1 hw:2 theorem:6 xt:9 pac:1 er:73 dk:2 workshop:4 exists:4 false:12 kamalika:2 phd:1 logarithmic:2 likely:1 explore:1 visual:1 doubling:1 applies:1 minimizer:4 conditional:2 goal:7 ann:1 towards:1 labelled:6 considerable:1 change:1 hard:1 uniformly:1 classi:80 kearns:1 total:3 called:1 e:4 xn0:1 rarely:1 support:1 crammer:1 noisier:1 reactive:1 exibility:1 crowdsourcing:1 |
5,511 | 5,989 | Learnability of Influence in Networks
Harikrishna Narasimhan
David C. Parkes
Yaron Singer
Harvard University, Cambridge, MA 02138
[email protected], {parkes, yaron}@seas.harvard.edu
Abstract
We show PAC learnability of influence functions for three common influence models, namely, the Linear Threshold (LT), Independent Cascade (IC) and Voter models, and present concrete sample complexity results in each case. Our results for
the LT model are based on interesting connections with neural networks; those for
the IC model are based on an interpretation of the influence function as an expectation over a random draw of a subgraph and use covering number arguments; and
those for the Voter model are based on a reduction to linear regression. We show
these results for the case in which the cascades are only partially observed and we
do not see the time steps in which a node has been influenced. We also provide
efficient polynomial time learning algorithms for a setting with full observation,
i.e. where the cascades also contain the time steps in which nodes are influenced.
1 Introduction
For several decades there has been much interest in understanding the manner in which ideas, language, and information cascades spread through society. With the advent of social networking
technologies in recent years, digital traces of human interactions are becoming available, and the
problem of predicting information cascades from these traces has gained enormous practical value.
For example, this is critical in applications like viral marketing, where one needs to maximize awareness about a product by selecting a small set of influential users [1].
To this end, the spread of information in networks is modeled as an influence function which maps
a set of seed nodes who initiate the cascade to (a distribution on) the set of individuals who will be
influenced as a result [2]. These models are parametrized by variables that are unknown and need
to be estimated from data. There has been much work on estimating the parameters of influence
models (or the structure of the underlying social graph) from observed cascades of influence spread,
and on using the estimated parameters to predict influence for a given seed set [3, 4, 5, 6, 7, 8].
These parameter estimation techniques make use of local influence information at each node, and
there has been a recent line of work devoted to providing sample complexity guarantees for these
local estimation techniques [9, 10, 11, 12, 13].
However, one cannot locally estimate the influence parameters when the cascades are not completely
observed (e.g. when the cascades do not contain the time at which the nodes are influenced). Moreover, influence functions can be sensitive to errors in model parameters, and existing results do not
tell us to what accuracy the individual parameters need to be estimated to obtain accurate influence
predictions. If the primary goal in an application is to predict influence accurately, it is natural to
ask for algorithms that have learnability guarantees on the influence function itself. A benchmark
for studying such questions is the Probably Approximately Correct (PAC) learning framework [14]:
Are influence functions PAC learnable?
While many influence models have been popularized due to their approximation guarantees for
influence maximization [2, 15, 16], learnability of influence is an equally fundamental property.
Part of this work was done when HN was a PhD student at the Indian Institute of Science, Bangalore.
In this paper, we show PAC learnability for three well-studied influence models: the Linear Threshold, the Independent Cascade, and the Voter models. We primarily consider a setting where the
cascades are partially observed, i.e. where only the nodes influenced and not the time steps at which
they were influenced are observed. This is a setting where existing local estimation techniques cannot be applied to obtain parameter estimates. Additionally, for a fully observed setting where the
time of influence is also observed, we show polynomial time learnability; our methods here are akin
to using local estimation techniques, but come with guarantees on the global influence function.
Main results. Our learnability results are summarized below.
• Linear threshold (LT) model: Our result here is based on an interesting observation that
LT influence functions can be seen as multi-layer neural network classifiers, and proceed by
bounding their VC-dimension. The method analyzed here picks a function with zero training
error. While this can be computationally hard to implement under partial observation, we
provide a polynomial time algorithm for the full observation case using local computations.
• Independent cascade (IC) model: Our result uses an interpretation of the influence function
as an expectation over random draw of a subgraph [2]; this allows us to show that the function is
Lipschitz and invoke covering number arguments. The algorithm analyzed for partial observation is based on global maximum likelihood estimation. Under full observation (and additional
assumptions), we show polynomial time learnability using a local estimation technique.
• Voter model: Our result follows from a reduction of the learning problem to a linear regression
problem; the resulting learning algorithm can be implemented in polynomial time for both the
full and partial observation settings.
Related work. A related problem to ours is that of inferring the structure of the underlying social
graph from cascades [6]. There has been a series of results on polynomial sample complexity guarantees for this problem under variants of the IC model [9, 12, 10, 11]. Most of these results make
specific assumptions on the cascades/graph structure, and assume a full observation setting. On the
other hand, in our problem, the structure of the social graph is assumed to be known, and the goal
is to provably learn the underlying influence function. Our results do not depend on assumptions on
the network structure, and primarily apply to the more challenging partial observation setting.
The work that is most related to ours is that of Du et al. [13], who show polynomial sample complexity results for learning influence in the LT and IC models (under partial observation). However, their
approach uses approximations to influence functions and consequently requires a strong technical
condition to hold, which is not necessarily satisfied in general. Our results for the LT and IC models
are somewhat orthogonal. While the authors in [13] trade off assumptions on learnability and gain
efficient algorithms that work well in practice, our goal is to show unconditional sample complexity
for learning influence. We do this at the expense of the efficiency of the learning algorithms in the
partial observation setting. Moreover, the technical approach we take is substantially different.
There has also been work on learnability of families of discrete functions such as submodular [17]
and coverage functions [18], under the PAC and the variant PMAC frameworks. These results
assume availability of a training sample containing exact values of the target function on the given
input sets. While IC influence functions can be seen as coverage functions [2], the previous results do
not directly apply to the IC class, as in practice, the true (expected) value of an IC influence function
on a seed set is never observed, and only a random realization is seen. In contrast, our learnability
result for IC functions do not require the exact function values to be known. Moreover, the previous
results require strict assumptions on the input distribution. Since we focus on learnability of specific
function classes rather than large families of discrete functions, we are able to handle general seed
distributions for most part. Other results relevant to our work include learnability of linear influence
games [19], where the techniques used bear some similarity to our analysis for the LT model.
2 Preliminaries
Influence models. We represent a social network as a finite graph G = (V, E), where the nodes
$V = \{1, \ldots, n\}$ represent a set of $n$ individuals and edges $E \subseteq V \times V$ represent their social links. Let $|E| = r$. The graph is assumed to be directed unless otherwise specified. Each edge $(u, v) \in E$ is associated with a weight $w_{uv} \in \mathbb{R}_+$ that indicates the strength of influence of node $v$ on node
u. We consider a setting where each node in the network holds an opinion in {0, 1} and opinions
disseminate in the network. This dissemination process begins with a small subset of nodes called
the seed which have opinion 1 while the rest have opinion 0, and continues in discrete time steps.
In every time step, a node may change its opinion from 0 to 1 based on the opinion of its neighbors,
and according to some local model of influence; if this happens, we say that the node is influenced.
We will use N (u) to denote the set of neighbors of node u, and At to denote the set of nodes that
are influenced at time step t. We consider three well-studied models:
• Linear threshold (LT) model: Each node $u$ holds a threshold $k_u \in \mathbb{R}_+$, and is influenced at time $t$ if the total incoming weight from its neighbors that were influenced at the previous time step $t-1$ exceeds the threshold: $\sum_{v \in N(u) \cap A_{t-1}} w_{uv} \ge k_u$. Once influenced, node $u$ can then influence its neighbors for one time step, and never changes its opinion to 0.¹
• Independent cascade (IC) model: Restricting edge weights $w_{uv}$ to be in $[0, 1]$, a node $u$ is influenced at time $t$ independently by each neighbor $v$ who was influenced at time $t-1$. The node can then influence its neighbors for one time step, and never changes its opinion to 0.
• Voter model: The graph is assumed to be undirected (with self-loops); at time step $t$, a node $u$ adopts the opinion of its neighbor $v$ with probability $w_{uv} / \sum_{v' \in N(u) \cup \{u\}} w_{uv'}$. Unlike the LT and IC models, here a node may change its opinion from 1 to 0 or 0 to 1 at every step.
We stress that a node is influenced at time t if it changes its opinion from 0 to 1 exactly at t. Also, in
both the LT and IC models, no node gets influenced more than once and hence an influence cascade
can last for at most n time steps. For simplicity, we shall consider in all our definitions only cascades
of length n. While revisiting the Voter model in Section 5, we will look at more general cascades.
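To make these dynamics concrete, here is a minimal Python sketch (the names and data structures are our own illustrative choices, not the paper's notation) that simulates one LT cascade and one IC cascade; in both, a node fires based only on the nodes influenced at the previous step, and no node fires twice.

import random

def lt_cascade(in_neighbors, w, k, seed):
    # in_neighbors[u]: in-neighbors of u; w[(u, v)]: influence of v on u;
    # k[u]: threshold of u. A node fires once the weight from nodes
    # influenced at the previous step reaches its threshold.
    influenced, frontier = set(seed), set(seed)
    while frontier:
        new = set()
        for u in in_neighbors:
            if u not in influenced and \
               sum(w[(u, v)] for v in in_neighbors[u] if v in frontier) >= k[u]:
                new.add(u)
        influenced |= new
        frontier = new
    return influenced

def ic_cascade(in_neighbors, w, seed, rng=random.Random(0)):
    # Each neighbor v influenced at step t-1 gets one independent chance
    # to influence u at step t, succeeding with probability w[(u, v)].
    influenced, frontier = set(seed), set(seed)
    while frontier:
        new = {u for u in in_neighbors if u not in influenced and
               any(v in frontier and rng.random() < w[(u, v)]
                   for v in in_neighbors[u])}
        influenced |= new
        frontier = new
    return influenced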
Definition 1 (Influence function). Given an influence model, a (global) influence function $F : 2^V \to [0,1]^n$ maps an initial set of nodes $X \subseteq V$ seeded with opinion 1 to a vector of probabilities $[F_1(X), \ldots, F_n(X)] \in [0,1]^n$, where the $u$th coordinate indicates the probability of node $u \in V$ being influenced during any time step of the corresponding influence cascades.
Note that for the LT model, the influence process is deterministic, and the influence function simply
outputs a binary vector in {0, 1}n . Let FG denote the class of all influence functions under an
influence model over G, obtained for different choices of parameters (edge weights/thresholds) in
the model. We will be interested in learning the influence function for a given parametrization of
this influence model. We shall assume that the initial set of nodes that are seeded with opinion 1 at
the start of the influence process, or the seed set, is chosen i.i.d. according to a distribution $\mu$ over all subsets of nodes. We are given a training sample consisting of draws of initial seed sets from $\mu$,
along with observations of nodes influenced in the corresponding influence process. Our goal is to
then learn from FG an influence function that best captures the observed influence process.
Measuring Loss. To measure quality of the learned influence function, we define a loss function
$\ell : 2^V \times [0,1]^n \to \mathbb{R}_+$ that for any subset of influenced nodes $Y \subseteq V$ and predicted influence probabilities $p \in [0,1]^n$ assigns a value $\ell(Y, p)$ measuring discrepancy between $Y$ and $p$. We define the error of a learned function $F \in \mathcal{F}_G$ for a given seed distribution $\mu$ and model parametrization as the expected loss incurred by $F$:
$$\mathrm{err}_\ell[F] = \mathbb{E}_{X,Y}\big[\ell(Y, F(X))\big],$$
where the above expectation is over a random draw of the seed set $X$ from distribution $\mu$ and over the corresponding subsets of nodes $Y$ influenced during the cascade.
We will be particularly interested in the difference between the error of an influence function $F_S \in \mathcal{F}_G$ learned from a training sample $S$ and the minimum possible error achievable over all influence functions in $\mathcal{F}_G$: $\mathrm{err}_\ell[F_S] - \inf_{F \in \mathcal{F}_G} \mathrm{err}_\ell[F]$, and would like to learn influence functions for which
this difference is guaranteed to be small (using only polynomially many training examples).
Full and partial observation. We primarily work in a setting in which we observe the nodes
influenced in a cascade, but not the time step at which they were influenced. In other words, we
assume availability of a partially observed training sample $S = \{(X^1, Y^1), \ldots, (X^m, Y^m)\}$, where $X^i$ denotes the seed set of cascade $i$ and $Y^i$ is the set of nodes influenced in that cascade. We will also consider a refined notion of full observation in which we are provided a training sample $S = \{(X^1, Y^1_{1:n}), \ldots, (X^m, Y^m_{1:n})\}$, where $Y^i_{1:n} = \{Y^i_1, \ldots, Y^i_n\}$ and $Y^i_t$ is the set of nodes in
¹ In settings where the node thresholds are unknown, it is common to assume that they are chosen randomly by each node [2]. In our setup, the thresholds are parameters that need to be learned from cascades.
cascade $i$ who were influenced precisely at time step $t$. Notice that here the complete set of nodes influenced in cascade $i$ is given by $\bigcup_{t=1}^n Y^i_t$. This setting is particularly of interest when discussing
learnability in polynomial time. The structure of the social graph is always assumed to be known.
PAC learnability of influence functions. Let $\mathcal{F}_G$ be the class of all influence functions under an influence model over an $n$-node social network $G = (V, E)$. We say $\mathcal{F}_G$ is probably approximately correct (PAC) learnable w.r.t. loss $\ell$ if there exists an algorithm s.t. the following holds for all $\epsilon, \delta \in (0, 1)$, for all parametrizations of the model, and for all (or a subset of) distributions $\mu$ over seed sets: when the algorithm is given a partially observed training sample $S = \{(X^1, Y^1), \ldots, (X^m, Y^m)\}$ with $m \ge \mathrm{poly}(1/\epsilon, 1/\delta)$ examples, it outputs an influence function $F_S \in \mathcal{F}_G$ for which
$$\mathbb{P}_S\Big(\mathrm{err}_\ell[F_S] - \inf_{F \in \mathcal{F}_G} \mathrm{err}_\ell[F] \le \epsilon\Big) \ge 1 - \delta,$$
where the above probability is over the randomness in $S$. Moreover, $\mathcal{F}_G$ is efficiently PAC learnable under this setting if the running time of the algorithm in the above definition is polynomial in $m$ and in the size of $G$. We say $\mathcal{F}_G$ is (efficiently) PAC learnable under full observation if the above definition holds with a fully observed training sample $S = \{(X^1, Y^1_{1:n}), \ldots, (X^m, Y^m_{1:n})\}$.
Sensitivity of influence functions to parameter errors. A common approach to predicting influence under full observation is to estimate the model parameters using local influence information at
each node. However, an influence function can be highly sensitive to errors in estimated parameters.
E.g. consider an IC model on a chain of n nodes where all edge parameters are 1; if the parameters
have all been underestimated with a constant error of $\epsilon$, the estimated probability of the last node being influenced is $(1-\epsilon)^n$, which is exponentially smaller than the true value 1 for large $n$. Our results for full observation provide concrete sample complexity guarantees for learning influence functions using local estimation, to any desired accuracy; in particular, for the above example, our results prescribe that $\epsilon$ be driven below $1/n$ for accurate predictions (see Section 4 on the IC model).
Of course, under partial observation, we do not see enough information to locally estimate the individual model parameters, and the influence function needs to be learned directly from cascades.
3 The Linear Threshold model
We start with learnability in the Linear Threshold (LT) model. Given that the influence process is
deterministic and the influence function outputs binary values, we use the 0-1 loss for evaluation; for
any subset of nodes $Y \subseteq V$ and predicted boolean vector $q \in \{0,1\}^n$, this is the fraction of nodes on which the prediction is wrong: $\ell_{0\text{-}1}(Y, q) = \frac{1}{n}\sum_{u=1}^n \mathbf{1}(\chi_u(Y) \ne q_u)$, where $\chi_u(Y) = \mathbf{1}(u \in Y)$.
Theorem 1 (PAC learnability under LT model). The class of influence functions under the LT
model is PAC learnable w.r.t. $\ell_{0\text{-}1}$ and the corresponding sample complexity is $\tilde{O}\big(\epsilon^{-1}(r + n)\big)$. Furthermore, in the full observation setting the influence functions can be learned in polynomial time.
The proof is in Appendix A and we give an outline here. Let $F^w$ denote an LT influence function with parameters $w \in \mathbb{R}^{r+n}$ (edge weights and thresholds), and let us focus on the partial observation setting (only a node, and not its time of influence, is observed). Consider a simple algorithm that outputs an influence function with zero error on the training sample $S = \{(X^1, Y^1), \ldots, (X^m, Y^m)\}$:
$$\frac{1}{m}\sum_{i=1}^m \ell_{0\text{-}1}\big(Y^i, F^w(X^i)\big) = \frac{1}{mn}\sum_{i=1}^m \sum_{u=1}^n \mathbf{1}\big(\chi_u(Y^i) \ne F_u^w(X^i)\big). \qquad (1)$$
Such a function always exists as the training cascades are generated using the LT model. We will
shortly look at computational issues in implementing this algorithm. We now explain our PAC
learnability result for this algorithm. The main idea is in interpreting LT influence functions as
neural networks with linear threshold activations. The proof follows by bounding the VC-dimension
of the class of all functions Fuw for node u, and using standard arguments in showing learnability
under finite VC-dimension [20]. We sketch the neural network (NN) construction in two steps (local
influence as a two-layer NN, and the global influence as a multilayer network; see Figure 1), where a
crucial part is in ensuring that no node gets influenced more than once during the influence process:
1. Local influence as a two-layer NN. Recall that the (local) influence at a node $u$ for previously influenced nodes $Z$ is given by $\mathbf{1}\big(\sum_{v \in N(u) \cap Z} w_{uv} \ge k_u\big)$. This can be modeled as a linear (binary)
classifier, or equivalently as a two-layer NN with linear threshold activations. Here the input layer
contains a unit for each node in the network and takes a binary value indicating whether the node
Figure 1: Modeling a single time step t of the influence
process $F_{t,u} : 2^V \to \{0, 1\}$ as a neural network ($t \ge 2$):
the portion in black computes whether or not node u is influenced in the current time step t, while that in red/blue
enforces the constraint that u does not get influenced more
than once during the influence process. Here ?t,u is 1 when
a node has been influenced previously and 0 otherwise.
The dotted red edges represent strong negative signals (has
a large negative weight) and the dotted blue edges represent
strong positive signals. The initial input to each node u in
the input layer is 1(u ? X), while that for the auxiliary
nodes (in red) is 0.
is present in Z; the output layer contains a binary unit indicating whether u is influenced after one
time step; the connections between the two layers correspond to the edges between u and other
nodes; and the threshold term on the output unit is the threshold parameter ku . Thus the first step
of the influence process can be modeled using a NN with two n-node layers (the input layer takes
information about the seed set, and the binary output indicates which nodes got influenced).
2. From local to global: the multilayer network. The two-layer NN can be extended to multiple
time steps by replicating the output layer once for each step. However, the resulting NN will allow a
node to get influenced more than once during the influence process. To avoid this, we introduce an
additional binary unit $u'$ for each node $u$ in a layer, which will record whether node $u$ was influenced in previous time steps. In particular, whenever node $u$ is influenced in a layer, a strong positive signal is sent to activate $u'$ in the next layer, which in turn will send out strong negative signals to ensure $u$ is never activated in subsequent layers²; we use additional connections to ensure that $u'$ remains active thereafter. Note that a node $u$ in layer $t+1$ is 1 whenever $u$ is influenced at time step $t$;
let $F^w_{t,u} : 2^V \to \{0,1\}$ denote this function computed at $u$ for a given seed set. The LT influence function $F^w_u$ (which for seed set $X$ is 1 whenever $u$ is influenced in any one of the $n$ time steps) is then given by $F^w_u(X) = \sum_{t=1}^n F^w_{t,u}(X)$. Clearly, $F^w_u$ can be modeled as a NN with $n+1$ layers.
A naive application of classic VC-dimension results for NN [21] will give us that the VC-dimension of the class of functions $F_u$ is $\tilde{O}(n(r+n))$ (counting $r+n$ parameters for each layer). Since the same parameters are repeated across layers, this can be tightened to $\tilde{O}(r+n)$. The remaining proof involves standard uniform convergence arguments [20] and a union bound over all nodes.
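The unrolled network can be mirrored directly in code. Below is a minimal numpy sketch of the construction (our own encoding; the constant big stands in for the "strong signal" connections):

import numpy as np

def lt_influence_nn(W, k, X):
    # W[u, v] = w_uv (influence of v on u); k: thresholds; X: boolean
    # seed indicator of shape (n,). Returns the 0/1 vector F^w(X) by
    # unrolling n layers of linear-threshold units, with an auxiliary
    # "already influenced" bit per node.
    n = len(k)
    frontier = X.astype(float)      # nodes influenced at the previous step
    ever = X.astype(bool)           # auxiliary units u'
    big = W.sum() + k.max() + 1.0   # outweighs all other incoming signals
    for _ in range(n):
        pre = W @ frontier - big * ever   # strong negative signal from u'
        new = (pre >= k) & ~ever          # linear-threshold activation
        ever |= new
        frontier = new.astype(float)
    return ever.astype(int)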
3.1 Efficient computation
Having shown PAC learnability, we turn to efficient implementation of the prescribed algorithm.
Partial observation. In the case where the training set does not specify the time at which each
node was infected, finding an influence function with zero training error is computationally hard
in general (as this is similar to learning a recurrent neural network). In practice, however, we can
leverage the neural network construction, and solve the problem approximately by replacing linear
threshold activation functions with sigmoidal activations and the 0-1 loss with a suitable continuous
surrogate loss, and apply back-propagation based methods used for neural network learning.
Full observation. Here it turns out that the algorithm can be implemented in polynomial time using
local computations. Given a fully observed sample $S = \{(X^1, Y^1_{1:n}), \ldots, (X^m, Y^m_{1:n})\}$, the loss of an influence function $F$ for any $(X, Y_{1:n})$ is given by $\ell_{0\text{-}1}\big(\bigcup_{t=1}^n Y_t, F(X)\big)$ and as before measures the fraction of mispredicted nodes. The prescribed algorithm then seeks to find parameters $w$ for which the corresponding training error is 0. Given that the time of influence is observed, this problem can be decoupled into a set of linear programs (LPs) at each node; this is akin to locally estimating the parameters at each node. In particular, let $w_u$ denote the parameters local to node $u$ (incoming weights and threshold), and let $f_u(Z; w_u) = \mathbf{1}\big(\sum_{v \in N(u) \cap Z} w_{uv} \ge k_u\big)$ denote the local influence at $u$ for a set $Z$ of previously influenced nodes. Let $\hat{\epsilon}_{1,u}(w_u) = \frac{1}{m}\sum_{i=1}^m \mathbf{1}\big(\chi_u(Y^i_1) \ne f_u(X^i; w_u)\big)$ and $\hat{\epsilon}_{t,u}(w_u) = \frac{1}{m}\sum_{i=1}^m \mathbf{1}\big(\chi_u(Y^i_t) \ne f_u(Y^i_{t-1}; w_u)\big)$, $t \ge 2$, which, given the set of nodes $Y^i_{t-1}$ influenced at time $t-1$, measure the local prediction error at time $t$. Since the training sample was
² By a strong signal, we mean a large positive/negative connection weight which will outweigh signals from other connections. Indeed such connections can be created when the weights are all bounded.
generated by an LT model, there always exist parameters such that $\hat{\epsilon}_{t,u}(w_u) = 0$ for each $t$ and $u$,
which also implies that the overall training error is 0. Such a set of parameters can be obtained by
formulating a suitable LP that can be solved in polynomial time. The details are in Appendix A.2.
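A sketch of one such per-node program, assuming scipy's linprog is available; since an LP cannot encode the strict inequality of the "did not fire" case, we use the common workaround of a small margin (the margin value is our own choice, not from the paper):

import numpy as np
from scipy.optimize import linprog

def fit_node_lt(examples, margin=1e-6):
    # examples: (z, y) pairs for one node u, where z is a 0/1 vector over
    # u's in-neighbors marking who fired at the previous step, and y says
    # whether u fired next. Variables are x = [w_1, ..., w_d, k]; we need
    # z.w >= k when y = 1 and z.w <= k - margin when y = 0.
    d = len(examples[0][0])
    A_ub, b_ub = [], []
    for z, y in examples:
        row = np.append(np.asarray(z, float), -1.0)   # encodes z.w - k
        if y:
            A_ub.append(-row); b_ub.append(0.0)       # -(z.w - k) <= 0
        else:
            A_ub.append(row); b_ub.append(-margin)    # z.w - k <= -margin
    res = linprog(np.zeros(d + 1), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (d + 1))       # feasibility LP
    return res.x if res.success else None             # [weights..., threshold]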
4 The Independent Cascade model
We now address the question of learnability in the Independent Cascade (IC) model. Since the
influence functions here have probabilistic outputs, the proof techniques we shall use will be different from the previous section, and will rely on arguments based on covering numbers. In this case, we use the squared loss, which for any $Y \subseteq V$ and $q \in [0,1]^n$ is given by: $\ell_{sq}(Y, q) = \frac{1}{n}\sum_{u=1}^n \big[\chi_u(Y)(1-q_u)^2 + (1-\chi_u(Y))q_u^2\big]$. We shall make a mild assumption that the edge probabilities are bounded away from 0 and 1, i.e. $w \in [\lambda, 1-\lambda]^r$ for some $\lambda \in (0, 0.5)$.
Theorem 2 (PAC learnability under IC model). The class of influence functions under the IC model is PAC learnable w.r.t. $\ell_{sq}$ and the sample complexity is $m = \tilde{O}(\epsilon^{-2} n^3 r)$. Furthermore, in the full observation setting, under additional assumptions (see Assumption 1), the influence functions can be learned in polynomial time with sample complexity $\tilde{O}(\epsilon^{-2} n r^3)$.
The proof is given in Appendix B. As noted earlier, an IC influence function can be sensitive to errors
in estimated parameters. Hence before discussing our algorithms and analysis, we seek to understand
the extent to which changes in the IC parameters can produce changes in the influence function, and
in particular, check if the function is Lipschitz. For this, we use the closed-form interpretation of
the IC function as an expectation of an indicator term over a randomly drawn subset of edges from
the network (see [2]). More specifically, the IC cascade process can be seen as activating a subset
of edges in the network; since each edge can be activated at most once, the active edges can be seen
as having been chosen apriori using independent Bernoulli draws. Consider a random subgraph of
active edges obtained by choosing each edge (u, v) ? E independently with probability wuv . For
a given subset of such edges A ? E and seed set X ? V , let ?u (A, X) be an indicator function
that evaluates to 1 if u is reachable from a node in X via edges in A and 0 otherwise. Then the IC
influence function can be written as an expectation of ? over random draw of the subgraph:
X Y
Y
Fuw (X) =
wab
(1 ? wab ) ?u (A, X).
(2)
A?E (a,b)?A
(a,b)?A
/
While the above definition involves an exponential number of terms, it can be verified that the
corresponding gradient is bounded, thus implying that the IC function is Lipschitz.3
Lemma 3. Fix $X \subseteq V$. For any $w, w' \in \mathbb{R}^r$ with $\|w - w'\|_1 \le \epsilon$, $\big|F_u^w(X) - F_u^{w'}(X)\big| \le \epsilon$.
This result tells us how small the parameter errors need to be to obtain accurate influence predictions
and will be crucially used in our learnability results. Note that for the chain example in Section 2,
this tells us that the errors need to be less than 1/n for meaningful influence predictions.
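The representation in Eq. (2) also yields the standard sampling approach mentioned in footnote 3: draw live-edge subgraphs and average the reachability indicator. A minimal sketch with names of our own choosing:

import random

def ic_influence_mc(edges, w, X, n, samples=2000, rng=random.Random(0)):
    # edges: list of (u, v), meaning v can influence u with probability
    # w[(u, v)]. Each sample keeps every edge independently and counts
    # sigma_u(A, X): whether u is reachable from the seed set X.
    counts = [0] * n
    for _ in range(samples):
        live = {}                          # live[v]: nodes v reaches directly
        for (u, v) in edges:
            if rng.random() < w[(u, v)]:
                live.setdefault(v, []).append(u)
        reached, stack = set(X), list(X)   # BFS/DFS from the seeds
        while stack:
            for u in live.get(stack.pop(), []):
                if u not in reached:
                    reached.add(u)
                    stack.append(u)
        for u in reached:
            counts[u] += 1
    return [c / samples for c in counts]   # estimates of F^w_u(X)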
We are now ready to provide the PAC learning algorithm for the partial observation setting with
sample $S = \{(X^1, Y^1), \ldots, (X^m, Y^m)\}$; we shall sketch the proof here. The full observation case is outlined in Section 4.1, where we shall make use of a different approach based on local
estimation. Let F w denote the IC influence function with parameters w. The algorithm that we
consider for partial observation resorts to a maximum likelihood (ML) estimation of the (global) IC
function. Let $\chi_u(Y) = \mathbf{1}(u \in Y)$. Define the (global) log-likelihood for a cascade $(X, Y)$ as:
$$L(X, Y; w) = \sum_{u=1}^n \chi_u(Y) \ln F_u^w(X) + (1 - \chi_u(Y)) \ln\big(1 - F_u^w(X)\big),$$
The prescribed algorithm then solves the following optimization problem, and outputs an IC influence function F w from the solution w obtained.
$$\max_{w \in [\lambda, 1-\lambda]^r}\ \sum_{i=1}^m L(X^i, Y^i; w). \qquad (3)$$
³ In practice, IC influence functions can be computed through suitable sampling approaches. Also, note that a function class can be PAC learnable even if the individual functions cannot be computed efficiently.
To provide learnability guarantees for the above ML-based procedure, we construct a finite $\epsilon$-cover over the space of IC influence functions, i.e. show that the class can be approximated to a factor of $\epsilon$ (in the infinity norm sense) by a finite set of IC influence functions. We first construct an $\epsilon$-cover of size $O((r/\epsilon)^r)$ over the space of parameters $[\lambda, 1-\lambda]^r$, and use Lipschitzness to translate this to an $\epsilon$-cover of the same size over the IC class. Following this, standard uniform convergence arguments [20] can be used to derive a sample complexity guarantee on the expected likelihood with a logarithmic dependence on the cover size; this then implies the desired learnability result w.r.t. $\ell_{sq}$:
Lemma 4 (Sample complexity guarantee on the log-likelihood objective). Fix $\epsilon, \delta \in (0, 1)$ and $m = \tilde{O}(\epsilon^{-2} n^3 r)$. Let $\hat{w}$ be the parameters obtained from ML estimation. Then w.p. $\ge 1 - \delta$,
$$\sup_{w \in [\lambda, 1-\lambda]^r} \frac{1}{n}\mathbb{E}\big[L(X, Y; w)\big] - \frac{1}{n}\mathbb{E}\big[L(X, Y; \hat{w})\big] \le \epsilon.$$
Compared to results for the LT model, the sample complexity in Theorem 2 has a square dependence on $1/\epsilon$. This is not surprising, as unlike the LT model, where the optimal 0-1 error is zero, the optimal squared error here is non-zero in general; in fact, there are standard sample complexity lower bound results that show that for similar settings, one cannot obtain a tighter bound in terms of $1/\epsilon$ [20].
We wish to also note that the approach of Du et al. (2014) for learning influence under partial
observation [13] uses the same interpretation of the IC influence function as in Eq. (2), but rather
than learning the parameters of the model, they seek to learn the weights on the individual indicator
functions. Since there are exponentially many indicator terms, they resort to constructing approximations to the influence function, for which a strong technical condition needs to be satisfied; this
condition need not however hold in most settings. In contrast, our result applies to general settings.
4.1 Efficient computation
Partial observation. The optimization problem in Eq. (3) that we need to solve for the partial observation case is non-convex in general. Of course, in practice, this can be solved approximately using
gradient-based techniques, using sample-based gradient computations to deal with the exponential
number of terms in the definition of F w in the objective (see Appendix B.5).
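For toy instances one can avoid sampling altogether by enumerating the $2^r$ live-edge subgraphs of Eq. (2), which makes objective (3) an exact, smooth function of $w$ that an off-the-shelf optimizer can handle. A sketch under that (exponential-cost) assumption; the helper names are ours:

import itertools
import numpy as np
from scipy.optimize import minimize

def ic_influence_exact(w, edges, X, n):
    # Eq. (2) by brute force over all live-edge subgraphs (toy use only).
    F = np.zeros(n)
    for keep in itertools.product([0, 1], repeat=len(edges)):
        p = np.prod([w[i] if b else 1.0 - w[i] for i, b in enumerate(keep)])
        live = {}
        for b, (u, v) in zip(keep, edges):
            if b:
                live.setdefault(v, []).append(u)
        reached, stack = set(X), list(X)
        while stack:
            for u in live.get(stack.pop(), []):
                if u not in reached:
                    reached.add(u); stack.append(u)
        for u in reached:
            F[u] += p            # this subgraph reaches u with probability p
    return F

def neg_log_lik(w, sample, edges, n):
    total = 0.0
    for X, Y in sample:
        F = np.clip(ic_influence_exact(w, edges, X, n), 1e-9, 1 - 1e-9)
        total -= sum(np.log(F[u]) if u in Y else np.log(1 - F[u])
                     for u in range(n))
    return total

# lam = 0.05   # box constraint from the assumption w in [lam, 1 - lam]^r
# res = minimize(neg_log_lik, np.full(len(edges), 0.5),
#                args=(sample, edges, n), bounds=[(lam, 1 - lam)] * len(edges))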
Full observation. On the other hand, when the training sample $S = \{(X^1, Y^1_{1:n}), \ldots, (X^m, Y^m_{1:n})\}$
contains fully observed cascades, we are able to show polynomial time learnability. For the LT
model, we were assured of a set of parameters that would yield zero 0-1 error on the training sample,
and hence the same procedure prescribed for partial information could be implemented under the
full observation in polynomial time by reduction to local computations. This is not the case with the
IC model, where we resort to the common approach of learning influence by estimating the model
parameters through a local maximum likelihood (ML) estimation technique. This method is similar
to the maximum likelihood procedure used in [9] for solving a different problem of recovering the
structure of an unknown network from cascades. For the purpose of showing learnability, we find it
sufficient to apply this procedure to only the first time step of the cascade.
Our analysis first provides guarantees on the estimated parameters, and uses the Lipschitz property
in Lemma 3 to translate them to guarantees on the influence function. Since we now wish to give
guarantees in the parameter space, we will require that there exists unique set of parameters that
explains the IC cascade process; for this, we will need stricter assumptions. We assume that all edges
have a minimum influence strength, and that even when all neighbors of a node u are influenced in
a time step, there is a small probability of u not being influenced in the next step; we consider a
specific seed distribution, where each node has a non-zero probability of (not) being a seed node.
Assumption 1. Let $w^*$ denote the parameters of the underlying IC model. Then there exist $\lambda, \kappa \in (0, 0.5)$ such that $w^*_{uv} \ge \lambda$ for all $(u, v) \in E$ and $\prod_{v \in N(u)} (1 - w^*_{uv}) \ge \kappa$ for all $u \in V$. Also, each node in $V$ is chosen independently in the initial seed set with probability $\gamma \in (0, 1)$.
We first define the local log-likelihood for a given seed set $X$ and nodes $Y_1$ influenced at $t = 1$:
$$L(X, Y_1; \theta) = \sum_{u \notin X} \chi_u(Y_1) \ln\Big(1 - \exp\Big(-\!\!\sum_{v \in N(u) \cap X}\!\! \theta_{uv}\Big)\Big) - (1 - \chi_u(Y_1)) \!\!\sum_{v \in N(u) \cap X}\!\! \theta_{uv},$$
where we have used log-transformed parameters $\theta_{uv} = -\ln(1 - w_{uv})$, so that the objective is concave in $\theta$. The prescribed algorithm then solves the following maximization problem over all
parameters that satisfy Assumption 1 and constructs an IC influence function from the parameters.
$$\max_{\theta \in \mathbb{R}^r_+}\ \sum_{i=1}^m L(X^i, Y_1^i; \theta) \quad \text{s.t.} \quad \forall (u, v) \in E,\ \theta_{uv} \ge \ln\frac{1}{1-\lambda}, \qquad \forall u \in V,\ \sum_{v \in N(u)} \theta_{uv} \le \ln\frac{1}{\kappa}.$$
This problem breaks down into smaller convex problems and can be solved efficiently (see [9]).
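A per-node sketch of this estimator, assuming scipy; the coupling constraint on the sum of thetas rules out a purely box-constrained solver, so we use SLSQP here (our choice, not the paper's):

import numpy as np
from scipy.optimize import minimize

def fit_node_ic(cascades, nbrs, u, lam, kappa):
    # cascades: (X, Y1) pairs; nbrs: in-neighbors of u. Minimizes the
    # negated local log-likelihood in theta_uv = -log(1 - w_uv), which
    # is convex, subject to the two constraint families above.
    idx = {v: i for i, v in enumerate(nbrs)}

    def nll(theta):
        total = 0.0
        for X, Y1 in cascades:
            if u in X:
                continue
            s = sum(theta[idx[v]] for v in nbrs if v in X)
            if s <= 0.0:
                continue               # no seeded in-neighbor this cascade
            if u in Y1:
                total -= np.log(max(1.0 - np.exp(-s), 1e-12))
            else:
                total += s
        return total

    lo = np.log(1.0 / (1.0 - lam))     # per-edge lower bound on theta_uv
    cons = [{'type': 'ineq', 'fun': lambda th: np.log(1.0 / kappa) - th.sum()}]
    # Assumption 1 guarantees len(nbrs) * lo <= log(1/kappa), so this
    # starting point is feasible.
    res = minimize(nll, np.full(len(nbrs), lo), method='SLSQP',
                   bounds=[(lo, None)] * len(nbrs), constraints=cons)
    return 1.0 - np.exp(-res.x)        # recover the w_uv estimates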
Proposition 5 (PAC learnability under IC model with full observation). Under full observation and Assumption 1, the class of IC influence functions is PAC learnable in polynomial time through local ML estimation. The corresponding sample complexity is $\tilde{O}\big(n r^3 (\lambda^2 (1-\lambda)^4 \kappa^2 \gamma^2 \epsilon^2)^{-1}\big)$.
The proof is provided in Appendix B.6 and proceeds through the following steps: (1) we use covering number arguments to show that the local log-likelihood for the estimated parameters is close to
the optimal value; (2) we then show that under Assumption 1, the expected log-likelihood is strongly
concave, which gives us that closeness to the true model parameters in terms of the likelihood also
implies closeness to the true parameters in the parameter space; (3) we finally use the Lipschitz
property in Lemma 3 to translate this to guarantees on the global influence function.
Note that the sample complexity here has a worse dependence on the number of edges r compared
to the partial observation case; this is due to the two-step approach of requiring guarantees on the
individual parameters, and then transferring them to the influence function. The better dependence
on the number of nodes n is a consequence of estimating parameters locally. It would be interesting
to see if tighter results can be obtained by using influence information from all time steps, and
making different assumptions on the model parameters (e.g. correlation decay assumption in [9]).
5 The Voter model
Before closing, we sketch our learnability results for the Voter model, where unlike the previous
models the graph is undirected (with self-loops). Here we shall be interested in learning influence
for a fixed number of K time steps as the cascades can be longer than n. With the squared loss again
as the loss function, this problem almost immediately reduces to linear least squares regression.
Let $W \in [0,1]^{n \times n}$ be a matrix of normalized edge weights with $W_{uv} = w_{uv} / \sum_{v' \in N(u) \cup \{u\}} w_{uv'}$ if $(u, v) \in E$ and 0 otherwise. Note that $W$ can be seen as a one-step probability transition matrix. Then for an initial seed set $X \subseteq V$, the probability of a node $u$ being influenced under this model after one time step can be verified to be $\mathbf{1}_u^\top W \mathbf{1}_X$, where $\mathbf{1}_X \in \{0,1\}^n$ is a column vector containing 1 in entries corresponding to nodes in $X$, and 0 everywhere else. Similarly, for calculating the probability of a node $u$ being influenced after $K$ time steps, one can use the $K$-step transition matrix: $F_u(X) = \mathbf{1}_u^\top W^K \mathbf{1}_X$. Now setting $b = (W^K)^\top \mathbf{1}_u$, we have $F_u(X) = b^\top \mathbf{1}_X$, which is essentially a linear function parametrized by $n$ weights.
Thus learning influence in the Voter model (for a fixed cascade length) can be posed as $n$ independent linear regressions (one per node) with $n$ coefficients each. This can be solved in polynomial time
even with partially observed data. We then have the following from standard results [20].
Theorem 6 (PAC learnability under Voter model). The class of influence functions under the Voter model is PAC learnable w.r.t. $\ell_{sq}$ in polynomial time and the sample complexity is $\tilde{O}(\epsilon^{-2} n)$.
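Both the prediction and the learning step are a few lines of numpy. In the sketch below (illustrative names; each training cascade is summarized by its seed set and a node's opinion after K steps), ordinary least squares recovers the coefficient vector b:

import numpy as np

def voter_influence(W, X, K):
    # W: row-stochastic matrix of normalized weights. Returns the vector
    # of probabilities 1_u^T W^K 1_X for all nodes u at once.
    one_X = np.zeros(W.shape[0])
    one_X[list(X)] = 1.0
    return np.linalg.matrix_power(W, K) @ one_X

def fit_voter_node(cascades, n):
    # cascades: (X, y) pairs where y in {0, 1} is one node's opinion after
    # K steps. Least squares recovers b = (W^K)^T 1_u for that node.
    A = np.zeros((len(cascades), n))
    y = np.zeros(len(cascades))
    for i, (X, y_i) in enumerate(cascades):
        A[i, list(X)] = 1.0       # regress on the seed indicator vector
        y[i] = y_i
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return b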
6 Conclusion
We have established PAC learnability of some of the most celebrated models of influence in social
networks. Our results point towards interesting connections between learning theory and the literature on influence in networks. Beyond the practical implications of the ability to learn influence
functions from cascades, the fact that the main models of influence are PAC learnable, serves as further evidence of their potent modeling capabilities. It would be interesting to see if our results extend
to generalizations of the LT and IC models, and to investigate sample complexity lower bounds.
Acknowledgements. Part of this work was carried out while HN was visiting Harvard as a part of a student visit
under the Indo-US Joint Center for Advanced Research in Machine Learning, Game Theory & Optimization
supported by the Indo-US Science & Technology Forum. HN thanks Kevin Murphy, Shivani Agarwal and
Harish G. Ramaswamy for helpful discussions. YS and DP were supported by NSF grant CCF-1301976 and
YS by CAREER CCF-1452961 and a Google Faculty Research Award.
References
[1] Pedro Domingos and Matthew Richardson. Mining the network value of customers. In KDD,
2001.
[2] David Kempe, Jon M. Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In KDD, 2003.
[3] Amit Goyal, Francesco Bonchi, and Laks VS Lakshmanan. Learning influence probabilities
in social networks. In KDD, 2010.
[4] Manuel Gomez-Rodriguez, David Balduzzi, and Bernhard Schölkopf. Uncovering the temporal dynamics of diffusion networks. In ICML, 2011.
[5] Nan Du, Le Song, Alexander J. Smola, and Ming Yuan. Learning networks of heterogeneous
influence. In NIPS, 2012.
[6] Manuel Gomez-Rodriguez, Jure Leskovec, and Andreas Krause. Inferring networks of diffusion and influence. ACM Transactions on Knowledge Discovery from Data, 5(4):21, 2012.
[7] Nan Du, Le Song, Manuel Gomez-Rodriguez, and Hongyuan Zha. Scalable influence estimation in continuous-time diffusion networks. In NIPS, 2013.
[8] Abir De, Sourangshu Bhattacharya, Parantapa Bhattacharya, Niloy Ganguly, and Soumen
Chakrabarti. Learning a linear influence model from transient opinion dynamics. In CIKM,
2014.
[9] Praneeth Netrapalli and Sujay Sanghavi. Learning the graph of epidemic cascades. In SIGMETRICS, 2012.
[10] Hadi Daneshmand, Manuel Gomez-Rodriguez, Le Song, and Bernhard Schölkopf. Estimating
diffusion network structures: Recovery conditions, sample complexity & soft-thresholding
algorithm. In ICML, 2014.
[11] Jean Pouget-Abadie and Thibaut Horel. Inferring graphs from cascades: A sparse recovery
framework. ICML, 2015.
[12] Bruno D. Abrahao, Flavio Chierichetti, Robert Kleinberg, and Alessandro Panconesi. Trace
complexity of network inference. In KDD, 2013.
[13] Nan Du, Yingyu Liang, Maria-Florina Balcan, and Le Song. Influence function learning in
information diffusion networks. In ICML, 2014.
[14] Leslie G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.
[15] Elchanan Mossel and Sébastien Roch. On the submodularity of influence in social networks.
In STOC, 2007.
[16] Eyal Even-Dar and Asaf Shapira. A note on maximizing the spread of influence in social
networks. Information Processing Letters, 111(4):184–187, 2011.
[17] Maria-Florina Balcan and Nicholas J.A. Harvey. Learning submodular functions. In STOC,
2011.
[18] Vitaly Feldman and Pravesh Kothari. Learning coverage functions and private release of
marginals. In COLT, 2014.
[19] Jean Honorio and Luis Ortiz. Learning the structure and parameters of large-population graphical games from behavioral data. Journal of Machine Learning Research, 16:1157–1210, 2015.
[20] Martin Anthony and Peter L. Bartlett. Neural network learning: Theoretical foundations.
Cambridge University Press, 1999.
[21] Peter L. Bartlett and Wolfgang Maass. Vapnik Chervonenkis dimension of neural nets. Handbook of Brain Theory and Neural Networks, pages 1188?1192, 1995.
[22] Tong Zhang. Statistical behaviour and consistency of classification methods based on convex
risk minimization. Annals of Mathematical Statistics, 32:56–134, 2004.
Harmonic Grammars for Formal Languages
Paul Smolensky
Department of Computer Science &
Institute of Cognitive Science
University of Colorado
Boulder, Colorado 80309-0430
Abstract
Basic connectionist principles imply that grammars should take the
form of systems of parallel soft constraints defining an optimization
problem the solutions to which are the well-formed structures in
the language. Such Harmonic Grammars have been successfully
applied to a number of problems in the theory of natural languages.
Here it is shown that formal languages too can be specified by
Harmonic Grammars, rather than by conventional serial re-write
rule systems.
1 HARMONIC GRAMMARS
In collaboration with Geraldine Legendre, Yoshiro Miyata, and Alan Prince, I have
been studying how symbolic computation in human cognition can arise naturally
as a higher-level virtual machine realized in appropriately designed lower-level connectionist networks. The basic computational principles of the approach are these:
(1)
a. When analyzed at the lower level, mental representations are distributed patterns of connectionist activity; when analyzed at a higher
level, these same representations constitute symbolic structures. The
particular symbolic structure s is characterized as a set of filler/role
bindings $\{f_i/r_i\}$, using a collection of structural roles $\{r_i\}$, each of which may be occupied by a filler $f_i$ (itself a constituent symbolic structure). The corresponding lower-level description is an activity vector $\mathbf{s} = \sum_i \mathbf{f}_i \otimes \mathbf{r}_i$. These tensor product representations can be defined recursively: fillers which are themselves complex structures are represented by vectors which in turn are recursively defined as tensor product representations. (Smolensky, 1987; Smolensky, 1990). (A small worked example of such a representation follows this list.)
b. When analyzed at the lower level, mental processes are massively parallel numerical activation spreading; when analyzed at a higher level,
these same processes constitute a form of symbol manipulation in which
entire structures, possibly involving recursive embedding, are manipulated in parallel. (Dolan and Smolensky, 1989; Legendre et al., 1991a;
Smolensky, 1990).
c. When the lower-level description of the activation spreading processes
satisfies certain mathematical properties, this process can be analyzed
on a higher level as the construction of that symbolic structure including the given input structure which maximizes Harmony (equivalently,
minimizes 'energy'. The Harmony can be computed either at the lower
level as a particular mathematical function of the numbers comprising
the activation pattern, or at the higher level as a function of the symbolic constituents comprising the structure. In the simplest cases, the
core of the Harmony function can be written at the lower, connectionist level simply as the quadratic form $H = \mathbf{a}^{\mathrm{T}} W \mathbf{a}$, where $\mathbf{a}$ is the network's activation vector and $W$ its connection weight matrix. At the higher level, $H = \sum_{c_1, c_2} H_{c_1; c_2}$; each $H_{c_1; c_2}$ is the Harmony of having the two symbolic constituents $c_1$ and $c_2$ in the same structure (the $c_i$ are constituents in particular structural roles, and may be the same). (Cohen and Grossberg, 1983; Golden, 1986; Golden, 1988; Hinton and Sejnowski, 1983; Hinton and Sejnowski, 1986; Hopfield, 1982; Hopfield, 1984; Hopfield, 1987; Legendre et al., 1990a; Smolensky, 1983; Smolensky, 1986).
(linguistic well-formedness), the following results (Ic) (Legendre et al., 1990a):
(2)
a. The explicit form of the Harmony function can be computed to be a
sum of terms each of which measures the well-formedness arising from
the coexistence, within a single structure, of a pair of constituents in
their particular structural roles.
b. A (descriptive) grammar can thus be identified as a set of soft rules each of the form:
If a linguistic structure $S$ simultaneously contains constituent $c_1$ in structural role $r_1$ and constituent $c_2$ in structural role $r_2$, then add to $H(S)$, the Harmony value of $S$, the quantity $H_{c_1,r_1;c_2,r_2}$ (which may be positive or negative).
A set of such soft rules (or "constraints," or "preferences") defines a
Harmonic Grammar.
c. The constituents in the soft rules include both those that are given
in the input and the "hidden" constituents that are assigned to the
input by the grammar. The problem for the parser (computational
grammar) is to construct that structure S, containing both input and
"hidden" constituents, with the highest overall Harmony H(S).
Harmonic Grammar (HG) is a formal development of conceptual ideas linking Harmony to linguistics which were first proposed in Lakoff's cognitive phonology (Lakoff,
1988; Lakoff, 1989) and Goldsmith's harmonic phonology (Goldsmith, 1990; Goldsmith, in press). For an application of HG to natural language syntax/semantics,
see (Legendre et al., 1990a; Legendre et al., 1990b; Legendre et al., 1991b; Legendre
et al., in press). Harmonic Grammar has more recently evolved into a non-numerical
formalism called Optimality Theory which has been successfully applied to a range
of problems in phonology (Prince and Smolensky, 1991; Prince and Smolensky, in
preparation). For a comprehensive discussion of the overall research program see
(Smolensky et al., 1992).
2 HGs FOR FORMAL LANGUAGES
One means for assessing the expressive power of Harmonic Grammar is to apply
it to the specification of formal languages. Can, e.g., any Context-Free Language
(CFL) L be specified by an HG? Can a set of soft rules of the form (2b) be given so that a string s ∈ L iff the maximum-Harmony tree with s as terminals has, say, H ≥ 0? A crucial limitation of these soft rules is that each may only refer to a
pair of constituents: in this sense, they are only second order. (It simplifies the
exposition to describe as "pairs" those in which both constituents are the same;
these actually correspond to first order soft rules, which also exist in HG.)
For a CFL, a tree is well-formed iff all of its local trees are--where a local tree is
just some node and all its children. Thus the HG rules need only refer to pairs of
nodes which fall in a single local tree, i.e., parent-child pairs and/or sibling pairs.
The H value of the entire tree is just the sum of all the numbers for each such pair of nodes given by the soft rules defining the HG.
It is clear that for a general context-free grammar (CFG), pairwise evalu-
ation doesn't suffice.
Consider, e.g., the following CFG fragment, G0 = {A→B C, A→D E, F→B E}, and the ill-formed local tree (A ; (B E)) (here, A is the parent, B and E the two children). Pairwise well-formedness checks fail to detect the ill-formedness, since the first rule says B can be a left child of A, the second that E can be a right child of A, and the third that B can be a left sibling of E. The ill-formedness can be detected only by examining all three nodes simultaneously, and seeing that this triple is not licensed by any single rule.
One possible approach would be to extend HG to rules higher than second order,
involving more than two constituents; this corresponds to H functions of degree
higher than 2. Such H functions go beyond standard connectionist networks with
pairwise connectivity, requiring networks defined over hypergraphs rather than ordinary graphs. There is a natural alternative, however, that requires no change at
all in I1G, but instead adopts a special kind of grammar for the CFL. The basic
trick is a modification of an idea taken from Generalized Phrase Structure Grammar
(Gazdar et al., 1985), a theory that adapts CFGs to the study of natural languages.
It is useful to introduce a new normal form for CFGs, Harmonic Normal Form
(HNF). In HNF, all rules are of three types: A[i]→B C, A→a, and A→A[i]; and
there is the further requirement that there can be only one branching rule with a
given left hand side-the unique branching condition. Here we use lowercase letters
to denote terminal symbols, and have two sorts of non-terminals: general symbols
like A and subcategorized symbols like A[1], A[2], ..., A[i]. To see that every CFL L
does indeed have an HNF grammar, it suffices to first take a CFG for L in Chomsky
Normal Form, and, for each (necessarily binary) branching rule A→B C, (i) replace
the symbol A on the left hand side with A[i], using a different value of i for each
branching rule with a given left hand side, and (ii) add the rule A→A[i].
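This conversion is mechanical; a short Python sketch (rules encoded as tuples, our own convention):

def cnf_to_hnf(branching, lexical):
    # branching: (A, B, C) triples for CNF rules A -> B C; lexical:
    # (A, a) pairs for rules A -> a. Subcategorizes each branching rule
    # so that every A[i] has exactly one branching expansion.
    count, rules = {}, []
    for A, B, C in branching:
        i = count[A] = count.get(A, 0) + 1
        rules.append(("%s[%d]" % (A, i), (B, C)))  # A[i] -> B C
        rules.append((A, ("%s[%d]" % (A, i),)))    # A -> A[i]
    rules.extend((A, (a,)) for A, a in lexical)    # A -> a
    return rules

# The fragment G0 from above:
print(cnf_to_hnf([('A', 'B', 'C'), ('A', 'D', 'E'), ('F', 'B', 'E')], []))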
Subcategorizing the general category A, which may have several legal branching
expansions, into the specialized subcategories A[i], each of which has only one legal
branching expansion, makes it possible to determine the well-formedness of an entire
tree simply by examining each parent/child pair separately: an entire tree is wellformed iff every parent/child pair is. The unique branching condition enables us
to evaluate the Harmony of a tree simply by adding up a collection of numbers
(specified by the soft rules of an HG), one for each node and one for each link of
the tree. Now, any CFL L can be specified by a Harmonic Grammar. First, find an HNF grammar G_HNF for L; from it, generate a set of soft rules defining a Harmonic Grammar G_H via the correspondences:
a ⇒ R_a: If a is at any node, add -1 to H
A ⇒ R_A: If A is at any node, add -2 to H
A[i] ⇒ R_A[i]: If A[i] is at any node, add -3 to H
start symbol S ⇒ R_root: If S is at the root, add +1 to H
A→α (α = a or A[i]) ⇒ If α is a left child of A, add +2 to H
A[i]→B C ⇒ If B is a left child of A[i], add +2 to H; If C is a right child of A[i], add +2 to H
The soft rules R_a, R_A, R_A[i] and R_root are first-order and evaluate tree nodes; the remaining second-order soft rules are legal domination rules evaluating tree links. This HG assigns H = 0 to any legal parse tree (with S at the root), and H < 0 for any other tree; thus s ∈ L iff the maximal-Harmony completion of s to a tree has H ≥ 0.
Proof. We evaluate the Harmony of any tree by conceptually breaking up its nodes and links into pieces each of which contributes either +1 or -1 to H. In legal trees, there will be complete cancellation of the positive and negative contributions; illegal trees will have uncancelled -1s leading to a total H < 0.
The decomposition of nodes and links proceeds as follows. Replace
each (undirected) link in the tree with a pair of directed links, one
pointing up to the parent, the other down to the child. If the link
joins a legal parent/child pair, the corresponding legal domination
rule will contribute +2 to H; break this +2 into two contributions
of +1, one for each of the directed links. We similarly break up the
non-terminal nodes into sub-nodes. A non-terminal node labelled
Harmonic Grammars for Formal Languages
A[i] has two children in legal trees, and we break such a node into
three sub-nodes, one corresponding to each downward link to a
child and one corresponding to the upward link to the parent of
A[i]. According to soft rule R_A[i], the contribution of this node A[i] to H is -3; this is distributed as three contributions of -1,
one for each sub-node. Similarly, a non-terminal node labelled A
has only one child in a legal tree, so we break it into two sub-nodes,
one for the downward link to the only child, one for the upward
link to the parent of A. The contribution of -2 dictated by soft
rule R_A is similarly decomposed into two contributions of -1, one
for each sub-node. There is no need to break up terminal nodes,
which in legal trees have only one outgoing link, upward to the
parent; the contribution from Ra is already just -1.
We can evaluate the Harmony of any tree by examining each node,
now decomposed into a set of sub-nodes, and determining the contribution to H made by the node and its outgoing directed links.
We will not double-count link contributions this way; half the contribution of each original undirected link is counted at each of the
nodes it connects.
Consider first a non-terminal node n labelled by A[i]; if it has a
legal parent, it will have an upward link to the parent that contributes +1, which cancels the -1 contributed by n's corresponding
sub-node. If n has a legal left child, the downward link to it will
contribute +1, cancelling the -1 contributed by n's corresponding
sub-node. Similarly for the right child. Thus the total contribution
of this node will be 0 if it has a legal parent and two legal children.
For each missing legal child or parent, the node contributes an uncancelled -1, so the contribution of this node n in the general case
is:
(3)  Hn = -(the number of missing legal children and parents of node n)
The same result (3) holds of the non-branching non-terminals labelled A; the only difference is that now the only child that could
be missing is a legal left child. If A happens to be a legal start symbol in root position, then the -1 of the sub-node corresponding to
the upward link to a parent is cancelled not by a legal parent, as
usual, but rather by the +1 of the soft rule Rroot. The result (3)
still holds even in this case, if we simply agree to count the root
position itself as a legal parent for start symbols. And finally, (3)
holds of a terminal node n labelled a; such a node can have no
missing child, but might have a missing legal parent.
Thus the total Harmony of a tree is H = Σn Hn, with Hn given
by (3). That is, H is minus the total number of missing legal
children and parents for all nodes in the tree. Thus, H = 0 if each
node has a legal parent and all its required legal children; otherwise
H < 0. Because the grammar is in Harmonic Normal Form, a parse
tree is legal iff every node has a legal parent and its required
number of legal children, where "legal" parent/child dominations
are defined only pairwise, in terms of the parent and one child,
blind to any other children that might be present or absent. Thus
we have established the desired result, that the maximum-Harmony
parse of a string s has H ≥ 0 iff s ∈ L.
We can also now see how to understand the soft rules of G_H, and
how to generalize beyond Context-Free Languages. The soft rules
say that each node makes a negative contribution equal to its valence, while each link makes a positive contribution equal to its
valence (2); where the "valence" of a node (or link) is just the
number of links (or nodes) it is attached to in a legal tree. The
negative contributions of the nodes are made any time the node
is present; these are cancelled by positive contributions from the
links only when the link constitutes a legal domination, sanctioned
by the grammar.
So in order to apply the same strategy to unrestricted grammars,
we will simply set the magnitude of the (negative) contributions of
nodes equal to their valence, as determined by the grammar. □
We can illustrate the technique by showing how HNF solves the problem with
the simple three-rule grammar fragment G_0 introduced early in this section. The
corresponding HNF grammar fragment G_HNF given by the above construction is
A[1] -> B C, A -> A[1], A[2] -> D E, A -> A[2], F[1] -> B E, F -> F[1]. To avoid extraneous complications from adding a start node above and terminal nodes below,
suppose that both A and F are valid start symbols and that B, C, D, E are terminal
nodes. Then the corresponding HG G_H assigns to the ill-formed tree (A ; (B E))
the Harmony -4, since, according to G_HNF, B and E are both missing a legal parent and A is missing two legal children. Introducing a now-necessary subcategorized
version of A helps, but not enough: (A ; (A[1] ; (B E))) and (A ; (A[2] ; (B E)))
both have H = -2 since in each, one leaf node is missing a legal parent (E and B,
respectively), and the A[i] node is missing the corresponding legal child. But the
correct parse of the string B E, (F ; (F[1] ; (B E))), has H = 0.
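As a sanity check, the soft rules of G_HNF can be evaluated mechanically. The following Python sketch is our own transcription of the rules above, with the tree encoding and all names chosen purely for illustration; it reproduces the Harmony values of the two subcategorized trees and of the correct parse discussed in the text:

    TERMINALS = {"B", "C", "D", "E"}
    STARTS = {"A", "F"}
    LEFT = {("A", "A[1]"), ("A", "A[2]"), ("F", "F[1]"),
            ("A[1]", "B"), ("A[2]", "D"), ("F[1]", "B")}
    RIGHT = {("A[1]", "C"), ("A[2]", "E"), ("F[1]", "E")}

    def penalty(label):
        # R_a: -1 per terminal; R_A: -2; R_A[i]: -3 (one -1 per valence slot)
        return -1 if label in TERMINALS else (-3 if "[" in label else -2)

    def harmony(tree, is_root=True):
        # A tree is a label (terminal) or a tuple (label, child, ...).
        label = tree if isinstance(tree, str) else tree[0]
        kids = () if isinstance(tree, str) else tree[1:]
        labels = [k if isinstance(k, str) else k[0] for k in kids]
        h = penalty(label)
        if is_root and label in STARTS:
            h += 1                                    # R_root
        if labels and (label, labels[0]) in LEFT:
            h += 2                                    # legal left-child domination
        if len(labels) == 2 and (label, labels[1]) in RIGHT:
            h += 2                                    # legal right-child domination
        return h + sum(harmony(k, is_root=False) for k in kids)

    print(harmony(("A", ("A[1]", "B", "E"))))         # -2: E lacks a legal parent
    print(harmony(("A", ("A[2]", "B", "E"))))         # -2: B lacks a legal parent
    print(harmony(("F", ("F[1]", "B", "E"))))         # 0: the legal parse of "B E"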
This technique can be generalized from context-free to unrestricted (type 0) formal
languages, which are equivalent to Turing Machines in the languages they generate
(e.g., (Hopcroft and Ullman, 1979)). The ith production rule in an unrestricted
grammar, Ri: α1 α2 ··· αni -> β1 β2 ··· βmi, is replaced by the two rules: Ri':
α1 ··· αni -> r[i] and Ri'': r[i] -> β1 β2 ··· βmi, introducing new non-terminal
symbols r[i]. The corresponding soft rules in the Harmonic Grammar are then: "If
the kth parent of r[i] is αk, add +2 to H" and "If βk is the kth child of r[i], add
+2 to H"; there is also the rule Rr[i]: "If r[i] is at any node, add -ni - mi to H."
There are also soft rules Ra, RA, and Rroot, defined as in the context-free case.
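For a concrete (hypothetical) instance of this construction, the type-0 rule R1: A B -> B A (so n1 = m1 = 2) is split into R1': A B -> r[1] and R1'': r[1] -> B A; the resulting soft rules are "If the 1st parent of r[1] is A, add +2 to H", "If the 2nd parent of r[1] is B, add +2 to H", "If B is the 1st child of r[1], add +2 to H", and "If A is the 2nd child of r[1], add +2 to H", together with Rr[1]: "If r[1] is at any node, add -4 to H".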
Acknowledgements
I am grateful to Geraldine Legendre, Yoshiro Miyata, and Alan Prince for many
helpful discussions. The research presented here has been supported in part by
NSF grant BS-9209265 and by the University of Colorado at Boulder Council on
Research and Creative Work.
References
Cohen, M. A. and Grossberg, S. (1983). Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE
Transactions on Systems, Man, and Cybernetics, 13:815-825.
Dolan, C. P. and Smolensky, P. (1989). Tensor Product Production System: A
modular architecture and representation. Connection Science, 1:53-68.
Gazdar, G., Klein, E., Pullum, G., and Sag, 1. (1985). Generalized Phrase Structure
Grammar. Harvard University Press, Cambridge, MA.
Golden, R. M. (1986). The "Brain-State-in-a-Box" neural model is a gradient descent algorithm. Mathematical Psychology, 30-31:73-80.
Golden, R. M. (1988). A unified framework for connectionist systems. Biological
Cybernetics, 59:109-120.
Goldsmith, J. A. (1990). Autosegmental and Metrical Phonology. Basil Blackwell,
Oxford.
Goldsmith, J. A. (In press). Phonology as an intelligent system. In Napoli, D. J. and
Kegl, J. A., editors, Bridges between Psychology and Linguistics: A Swarthmore
Festschrift for Lila Gleitman. Cambridge University Press, Cambridge.
Hinton, G. E. and Sejnowski, T. J. (1983). Analyzing cooperative computation. In
Proceedings of the Fifth Annual Conference of the Cognitive Science Society,
Rochester, NY. Erlbaum Associates.
Hinton, G. E. and Sejnowski, T. J. (1986). Learning and relearning in Boltzmann
machines. In Rumelhart, D. E., McClelland, J. L., and the PDP Research
Group, editors, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations, chapter 7, pages 282-317. MIT
Press/Bradford Books, Cambridge, MA.
Hopcroft, J. E. and Ullman, J. D. (1979). Introduction to Automata Theory, Languages, and Computation. Addison-Wesley, Reading, MA.
Hopfield, J. J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences,
USA, 79:2554-2558.
Hopfield, J. J. (1984). Neurons with graded response have collective computational
properties like those of two-state neurons. Proceedings of the National Academy
of Sciences, USA, 81:3088-3092.
Hopfield, J. J. (1987). Learning algorithms and probability distributions in feedforward and feed-back networks. Proceedings of the National Academy of Sciences, USA, 84:8429-8433.
Lakoff, G. (1988). A suggestion for a linguistics with connectionist foundations. In
Touretzky, D., Hinton, G. E., and Sejnowski, T. J., editors, Proceedings of the
Connectionist Models Summer School, pages 301-314, San Mateo, CA. Morgan
Kaufmann.
Lakoff, G. (1989). Cognitive phonology. Paper presented at the UC-Berkeley Workshop on Rules and Constraints.
Legendre, G., Miyata, Y., and Smolensky, P. (1990a). Harmonic Grammar-A formal multi-level connectionist theory of linguistic well-formedness: Theoretical
foundations. In Proceedings of the Twelfth Annual Conference of the Cognitive
Science Society, pages 388-395, Cambridge, MA. Lawrence Erlbaum.
Legendre, G., Miyata, Y., and Smolensky, P. (1990b). Harmonic Grammar-A
formal multi-level connectionist theory of linguistic well-formedness: An application. In Proceedings of the Twelfth Annual Conference of the Cognitive
Science Society, pages 884-891, Cambridge, MA. Lawrence Erlbaum.
Legendre, G., Miyata, Y., and Smolensky, P. (1991a). Distributed recursive structure processing. In Touretzky, D. S. and Lippman, R., editors, Advances in Neural Information Processing Systems 3, pages 591-597, San Mateo, CA. Morgan
Kaufmann. Slightly expanded version in Brian Mayoh, editor, Scandinavian
Conference on Artificial Intelligence-91, pages 47-53. IOS Press, Amsterdam.
Legendre, G., Miyata, Y., and Smolensky, P. (1991b). Unifying syntactic and semantic approaches to unaccusativity: A connectionist approach. In Sutton, L.
and Johnson (with Ruth Shields), C., editors, Proceedings of the Seventeenth
Annual Meeting of the Berkeley Linguistics Society, pages 156-167, Berkeley,
CA.
Legendre, G., Miyata, Y., and Smolensky, P. (In press). Can connectionism contribute to syntax? Harmonic Grammar, with an application. In Deaton, K.,
Noske, M., and Ziolkowski, M., editors, Proceedings of the 26th Meeting of the
Chicago Linguistic Society, Chicago, IL.
Prince, A. and Smolensky, P. (1991). Notes on connectionism and Harmony Theory
in linguistics. Technical report, Department of Computer Science, University
of Colorado at Boulder. Technical Report CU-CS-533-91.
Prince, A. and Smolensky, P. (In preparation). Optimality Theory: Constraint
interaction in generative grammar.
Smolensky, P. (1983). Schema selection and stochastic inference in modular environments. In Proceedings of the National Conference on Artificial Intelligence,
pages 378-382, Washington, DC.
Smolensky, P. (1986). Information processing in dynamical systems: Foundations
of Harmony Theory. In Rumelhart, D. E., McClelland, J. L., and the PDP
Research Group, editors, Parallel Distributed Processing: Explorations in the
Microstructure of Cognition. Volume 1: Foundations, chapter 6, pages 194-281.
MIT Press/Bradford Books, Cambridge, MA.
Smolensky, P. (1987). On variable binding and the representation of symbolic structures in connectionist systems. Technical report, Department of Computer
Science, University of Colorado at Boulder. Technical Report CU-CS-355-87.
Smolensky, P. (1990). Tensor product variable binding and the representation of
symbolic structures in connectionist networks. Artificial Intelligence, 46:159-216.
Smolensky, P., Legendre, G., and Miyata, Y. (1992). Principles for an integrated
connectionist/symbolic theory of higher cognition. Technical report, Department of Computer Science, University of Colorado at Boulder. Technical Report
CU-CS-600-92.
in Noisy ICA
James Voss
The Ohio State University
[email protected]
Mikhail Belkin
The Ohio State University
[email protected]
Luis Rademacher
The Ohio State University
[email protected]
Abstract
Independent Component Analysis (ICA) is a popular model for blind signal separation. The ICA model assumes that a number of independent source signals are
linearly mixed to form the observed signals. We propose a new algorithm, PEGI
(for pseudo-Euclidean Gradient Iteration), for provable model recovery for ICA
with Gaussian noise. The main technical innovation of the algorithm is to use a
fixed point iteration in a pseudo-Euclidean (indefinite "inner product") space. The
use of this indefinite "inner product" resolves technical issues common to several
existing algorithms for noisy ICA. This leads to an algorithm which is conceptually
simple, efficient and accurate in testing.
Our second contribution is combining PEGI with the analysis of objectives for
optimal recovery in the noisy ICA model. It has been observed that the direct
approach of demixing with the inverse of the mixing matrix is suboptimal for signal
recovery in terms of the natural Signal to Interference plus Noise Ratio (SINR)
criterion. There have been several partial solutions proposed in the ICA literature.
It turns out that any solution to the mixing matrix reconstruction problem can be
used to construct an SINR-optimal ICA demixing, despite the fact that SINR itself
cannot be computed from data. That allows us to obtain a practical and provably
SINR-optimal recovery method for ICA with arbitrary Gaussian noise.
1   Introduction
Independent Component Analysis refers to a class of methods aiming at recovering statistically
independent signals by observing their unknown linear combination. There is an extensive literature
on this and a number of related problems [7].
In the ICA model, we observe n-dimensional realizations x(1), . . . , x(N) of a latent variable model
X = Σ_{k=1}^m S_k A_k = AS, where A_k denotes the kth column of the n × m mixing matrix A and
S = (S_1, . . . , S_m)^T is the unseen latent random vector of "signals". It is assumed that S_1, . . . , S_m
are independent and non-Gaussian. The source signals and entries of A may be either real- or
complex-valued. For simplicity, we will assume throughout that S has zero mean, as this may be
achieved in practice by centering the observed data.
Many ICA algorithms use the preprocessing "whitening" step whose goal is to orthogonalize the
independent components. In the noiseless case, this is commonly done by computing the square
root of the covariance matrix of X. Consider now the noisy ICA model X = AS + η with additive
0-mean noise η independent of S. It turns out that the introduction of noise makes accurate recovery
of the signals significantly more involved. Specifically, whitening using the covariance matrix does
not work in the noisy ICA model as the covariance matrix combines both signal and noise. For
the case when the noise is Gaussian, matrices constructed from higher order statistics (specifically,
cumulants) can be used instead of the covariance matrix. However, these matrices are not in general
positive definite and thus the square root cannot always be extracted. This limits the applicability of
several previous methods, such as [1, 2, 9]. The GI-ICA algorithm proposed in [21] addresses this
issue by using a complicated quasi-orthogonalization step followed by an iterative method.
In this paper (section 2), we develop a simple and practical one-step algorithm, PEGI (for pseudo-Euclidean Gradient Iteration) for provably recovering A (up to the unavoidable ambiguities of the
model) in the case when the noise is Gaussian (with an arbitrary, unknown covariance matrix). The
main technical innovation of our approach is to formulate the recovery problem as a fixed point
method in an indefinite (pseudo-Euclidean) "inner product" space.
The second contribution of the paper is combining PEGI with the analysis of objectives for optimal
recovery in the noisy ICA model. In most applications of ICA (e.g., speech separation [18], MEG/EEG
artifact removal [20] and others) one cares about recovering the signals s(1), . . . , s(N ). This is known
as the source recovery problem. This is typically done by first recovering the matrix A (up to an
appropriate scaling of the column directions).
At first, source recovery and recovering the mixing matrix A appear to be essentially equivalent. In
the noiseless ICA model, if A is invertible¹ then s(t) = A^{-1}x(t) recovers the sources. On the other
hand, in the noisy model, the exact recovery of the latent sources s(t) becomes impossible even if A
is known exactly. Part of the "noise" can be incorporated into the "signal" preserving the form of the
model. Even worse, neither A nor S are defined uniquely as there is an inherent ambiguity in the
setting. There could be many equivalent decompositions of the observed signal as X = A′S′ + η′
(see the discussion in section 3).
We consider recovered signals of the form Ŝ(B) := BX for a choice of m × n demixing matrix B.
Signal recovery is considered optimal if the coordinates of Ŝ(B) = (Ŝ_1(B), . . . , Ŝ_m(B)) maximize
Signal to Interference plus Noise Ratio (SINR) within any fixed model X = AS + η. Note that
the value of SINR depends on the decomposition of the observed data into "noise" and "signal":
X = A′S′ + η′.
Surprisingly, the SINR optimal demixing matrix does not depend on the decomposition of data into
signal plus noise. As such, SINR optimal ICA recovery is well defined given access to data despite
the inherent ambiguity in the model. Further, it will be seen that the SINR optimal demixing can be
constructed from cov(X) and the directions of the columns of A (which are also well-defined across
signal/noise decompositions).
Our SINR-optimal demixing approach combined with the PEGI algorithm provides a complete
SINR-optimal recovery algorithm in the ICA model with arbitrary Gaussian noise. We note that the
ICA papers of which we are aware that discuss optimal demixing do not observe that SINR optimal
demixing is invariant to the choice of signal/noise decomposition. Instead, they propose more limited
strategies for improving the demixing quality within a fixed ICA model. For instance, Joho et al.
[14] show how SINR-optimal demixing can be approximated with extra sensors when assuming a
white additive noise, and Koldovský and Tichavský [16] discuss how to achieve asymptotically low
bias ICA demixing assuming white noise within a fixed ICA model. However, the invariance of the
SINR-optimal demixing matrix appears in the array sensor systems literature [6].
Finally, in section 4, we demonstrate experimentally that our proposed algorithm for ICA outperforms
existing practical algorithms at the task of noisy signal recovery, including those specifically designed
for beamforming, when given sufficiently many samples. Moreover, most existing practical algorithms
for noisy source recovery have a bias and cannot recover the optimal demixing matrix even with
infinite samples. We also show that PEGI requires significantly fewer samples than GI-ICA [21] to
perform ICA accurately.
1.1 The Indeterminacies of ICA
Notation: We use M* to denote the entry-wise complex conjugate of a matrix M, M^T to denote its
transpose, M^H to denote its conjugate transpose, and M† to denote its Moore-Penrose pseudoinverse.
Before proceeding with our results, we discuss the somewhat subtle issue of indeterminacies in ICA.
These ambiguities arise from the fact that the observed X may have multiple decompositions into
ICA models X = AS + η and X = A′S′ + η′.
¹ A^{-1} can be replaced with A† (A's pseudoinverse) in the discussion below for over-determined ICA.
Noise-free ICA has two natural indeterminacies. For any nonzero constant α, the contribution of
the kth component A_k S_k to the model can equivalently be obtained by replacing A_k with αA_k and
S_k with the rescaled signal (1/α)S_k. To lessen this scaling indeterminacy, we use the convention² that
cov(S) = I throughout this paper. As such, each source S_k (or equivalently each A_k) is defined up
to a choice of sign (a unit modulus factor in the complex case). In addition, there is an ambiguity
in the order of the latent signals. For any permutation π of [m] (where [m] := {1, . . . , m}), the
ICA models X = Σ_{k=1}^m S_k A_k and X = Σ_{k=1}^m S_{π(k)} A_{π(k)} are indistinguishable. In the noise free
setting, A is said to be recovered if we recover each column of A up to a choice of sign (or up to a unit
modulus factor in the complex case) and an unknown permutation. As the sources S_1, . . . , S_m are
only defined up to the same indeterminacies, inverting the recovered matrix Ã to obtain a demixing
matrix works for signal recovery.
In the noisy ICA setting, there is an additional indeterminacy in the definition of the sources. Consider
a 0-mean axis-aligned Gaussian random vector ξ. Then, the noisy ICA model X = A(S + ξ) + η in
which ξ is considered part of the latent source signal S′ = S + ξ, and the model X = AS + (Aξ + η)
in which ξ is part of the noise are indistinguishable. In particular, the latent source S and its covariance
are ill-defined. Due to this extra indeterminacy, the lengths of the columns of A no longer have a fully
defined meaning even when we assume cov(S) = I. In the noisy setting, A is said to be recovered if
we obtain the columns of A up to non-zero scalar multiplicative factors and an arbitrary permutation.
The last indeterminacy is the most troubling as it suggests that the power of each source signal is itself
ill-defined in the noisy setting. Despite this indeterminacy, it is possible to perform an SINR-optimal
demixing without additional assumptions about what portion of the signal is source and what portion
is noise. In section 3, we will see that SINR-optimal source recovery takes on a simple form: Given
any solution Ã which recovers A up to the inherent ambiguities of noisy ICA, then ÃH cov(X)† is
an SINR-optimal demixing matrix.
1.2 Related Work and Contributions
Independent Component Analysis is probably the most used model for Blind Signal Separation.
It has seen numerous applications and has generated a vast literature, including in the noisy and
underdetermined settings. We refer the reader to the books [7, 13] for a broad overview of the subject.
It was observed early on by Cardoso [4] that ICA algorithms based solely on higher order cumulant
statistics are invariant to additive Gaussian noise. This observation has allowed the creation of many
algorithms for recovering the ICA mixing matrix in the noisy and often underdetermined settings.
Despite the significant work on noisy ICA algorithms, they remain less efficient, more specialized, or
less practical than the most popular noise free ICA algorithms.
Research on cumulant-based noisy ICA can largely be split into several lines of work which we only
highlight here. Some algorithms such as FOOBI [4] and BIOME [1] directly use the tensor structure
of higher order cumulants. In another line of work, De Lathauwer et al. [8] and Yeredor [23] have
suggested algorithms which jointly diagonalize cumulant matrices in a manner reminiscent of the
noise-free JADE algorithm [3]. In addition, Yeredor [22] and Goyal et al. [11] have proposed ICA
algorithms based on random directional derivatives of the second characteristic function.
Each line of work has its advantages and disadvantages. The joint diagonalization algorithms and
the tensor based algorithms tend to be practical in the sense that they use redundant cumulant information in order to achieve more accurate results. However, they have a higher memory complexity
than popular noise free ICA algorithms such as FastICA [12]. While the tensor methods (FOOBI
and BIOME) can be used when there are more sources than the dimensionality of the space (the
underdetermined ICA setting), they require all the latent source signals to have positive order 2k
cumulants (k ≥ 2, a predetermined fixed integer) as they rely on taking a matrix square root. Finally,
the methods based on random directional derivatives of the second characteristic function rely heavily
upon randomness in a manner not required by the most popular noise free ICA algorithms.
We continue a line of research started by Arora et al. [2] and Voss et al. [21] on fully determined noisy
ICA which addresses some of these practical issues by using a deflationary approach reminiscent
of FastICA. Their algorithms thus have lower memory complexity and are more scalable to high
dimensional data than the joint diagonalization and tensor methods. However, both works require
² Alternatively, one may place the scaling information in the signals by setting ‖A_k‖ = 1 for each k.
a preprocessing step (quasi-orthogonalization) to orthogonalize the latent signals which is based
on taking a matrix square root. Arora et al. [2] require each latent signal to have positive fourth
cumulant in order to carry out this preprocessing step. In contrast, Voss et al. [21] are able to
perform quasi-orthogonalization with source signals of mixed sign fourth cumulants; but their quasi-orthogonalization step is more complicated and can run into numerical issues under sampling error.
We demonstrate that quasi-orthogonalization is unnecessary. We introduce the PEGI algorithm to
work within a (not necessarily positive definite) inner product space instead. Experimentally, this
leads to improved demixing performance. In addition, we handle the case of complex signals.
Finally, another line of work attempts to perform SINR-optimal source recovery in the noisy ICA
setting. It was noted by Koldovský and Tichavský [15] that for noisy ICA, traditional ICA algorithms
such as FastICA and JADE actually outperform algorithms which first recover A in the noisy setting
and then use the resulting approximation of A† to perform demixing. It was further observed that
A† is not the optimal demixing matrix for source recovery. Later, Koldovský and Tichavský [17]
proposed an algorithm based on FastICA which performs a low SINR-bias beamforming.
2   Pseudo-Euclidean Gradient Iteration ICA
In this section, we introduce the PEGI algorithm for recovering A in the "fully determined" noisy
ICA setting where m ≤ n. PEGI relies on the idea of Gradient Iteration introduced by Voss et al. [21].
However, unlike the GI-ICA of Voss et al. [21], PEGI does not require the source signals to be orthogonalized. As such, PEGI does not require the complicated quasi-orthogonalization preprocessing step of
GI-ICA which can be inaccurate to compute in practice. We sketch the Gradient Iteration algorithm
in Section 2.1, and then introduce PEGI in Section 2.2. For simplicity, we limit this discussion to
the case of real-valued signals. A mild variation of our PEGI algorithm works for complex-valued
signals, and its construction is provided in the supplementary material.
In this section we assume a noisy ICA model X = AS + η such that η is arbitrary Gaussian and
independent of S. We also assume that m ≤ n, that m is known, and that the columns of A are
linearly independent.
2.1 Gradient Iteration with Orthogonality
The gradient iteration relies on the properties of cumulants. We will focus on the fourth cumulant,
though similar constructions may be given using other even order cumulants of higher order. For
a zero-mean random variable X, the fourth order cumulant may be defined as κ4(X) := E[X⁴] −
3E[X²]² [see 7, Chapter 5, Section 1.2]. Higher order cumulants have nice algebraic properties
which make them useful for ICA. In particular, κ4 has the following properties: (1) (Independence) If
X and Y are independent, then κ4(X + Y) = κ4(X) + κ4(Y). (2) (Homogeneity) If α is a scalar,
then κ4(αX) = α⁴κ4(X). (3) (Vanishing Gaussians) If X is normally distributed then κ4(X) = 0.
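These three properties are easy to verify numerically. The following sketch is not from the paper; the distributions, sample size, and seed are arbitrary illustrative choices (κ4 = 12 for a unit-scale Laplace and −2/15 for Uniform(−1, 1)):

    import numpy as np

    def kappa4(x):
        # Empirical fourth cumulant of (approximately) zero-mean samples x.
        return np.mean(x**4) - 3 * np.mean(x**2) ** 2

    rng = np.random.default_rng(0)
    n = 1_000_000
    x = rng.laplace(size=n)                      # kappa4 = 12
    y = rng.uniform(-1, 1, size=n)               # kappa4 = -2/15
    g = rng.standard_normal(n)
    print(kappa4(x + y), kappa4(x) + kappa4(y))  # independence: both near 11.87
    print(kappa4(2 * x), 2**4 * kappa4(x))       # homogeneity: alpha**4 scaling
    print(kappa4(g))                             # vanishing for Gaussians: near 0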
We consider the following function defined on the unit sphere: f(u) := κ4(⟨X, u⟩). Expanding f(u)
using the above properties we obtain:

    f(u) = κ4( Σ_{k=1}^m ⟨A_k, u⟩ S_k + ⟨u, η⟩ ) = Σ_{k=1}^m ⟨A_k, u⟩⁴ κ4(S_k).

Taking derivatives we obtain:

    ∇f(u) = 4 Σ_{k=1}^m ⟨A_k, u⟩³ κ4(S_k) A_k                            (1)
    Hf(u) = 12 Σ_{k=1}^m ⟨A_k, u⟩² κ4(S_k) A_k A_k^T = AD(u)A^T          (2)

where D(u) is a diagonal matrix with entries D(u)_kk = 12⟨A_k, u⟩² κ4(S_k). We also note that f(u),
∇f(u), and Hf(u) have natural sample estimates (see [21]).
Voss et al. [21] introduced GI-ICA as a fixed point algorithm under the assumption that the
columns of A are orthogonal but not necessarily unit vectors. The main idea is that the update
u ← ∇f(u)/‖∇f(u)‖ is a form of a generalized power iteration. From equation (1), each A_k may
be considered as a direction in a hidden orthogonal basis of the space. During each iteration, the A_k
coordinate of u is raised to the 3rd power and multiplied by a constant. Treating this iteration as a
fixed point update, it was shown that given a random starting point, this iterative procedure converges
rapidly to one of the columns of A (up to a choice of sign). The rate of convergence is cubic.
However, the GI-ICA algorithm requires a somewhat complicated preprocessing step called
quasi-orthogonalization to linearly transform the data to make columns of A orthogonal. Quasiorthogonalization makes use of evaluations of Hessians of the fourth cumulant function to construct
a matrix of the form C = ADA^T where D has all positive diagonal entries (a task which is complicated by the possibility that the latent signals S_i may have fourth order cumulants of differing
signs) and requires taking the matrix square root of a positive definite matrix of this form. However, the matrix constructed for C under sampling error is not always positive definite in
practice, which can make the preprocessing step fail. We will show how our PEGI algorithm makes
quasi-orthogonalization unnecessary, in particular, resolving this issue.
2.2 Gradient Iteration in a Pseudo-Euclidean Space
We now show that the gradient iteration can be performed in a pseudo-Euclidean space
in which the columns of A are orthogonal. The natural candidate for the "inner product space"
would be to use ⟨·, ·⟩∗ defined as ⟨u, v⟩∗ := u^T(AA^T)†v. Clearly, ⟨A_i, A_j⟩∗ = δ_ij gives the
desired orthogonality property. However, there are two issues with this "inner product space":
First, it is only an inner product space when A is invertible. This turns out not to be a major
issue, and we move forward largely ignoring this point. The second issue is more fundamental: We only have access to AA^T in the noise free setting where cov(X) = AA^T. In the noisy
setting, we have access to matrices of the form Hf(u) = AD(u)A^T from equation (2) instead.
Algorithm 1 Recovers a column of A up to a scaling factor if u0 is generically chosen.
    Inputs: Unit vector u0, C, ∇f
    k ← 1
    repeat
        u_k ← ∇f(C†u_{k-1}) / ‖∇f(C†u_{k-1})‖
        k ← k + 1
    until Convergence (up to sign)
    return u_k

We consider a pseudo-Euclidean inner product defined as follows: Let C = ADA^T where D is a
diagonal matrix with non-zero diagonal entries, and define ⟨·, ·⟩_C by ⟨u, v⟩_C = u^T C† v. When D contains negative entries, this is not a proper inner product since C is not positive definite. In particular,
⟨A_k, A_k⟩_C = A_k^T(ADA^T)†A_k = d_kk^{-1} may be negative. Nevertheless, when k ≠ j, ⟨A_k, A_j⟩_C =
A_k^T(ADA^T)†A_j = 0 gives that the columns of A
are orthogonal in this space.
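To make the update concrete, here is a minimal NumPy sketch of Algorithm 1 (our own transcription, not the authors' implementation; the sample-based gradient estimate, the fixed iteration count, and all variable names are illustrative assumptions, and C is assumed given, e.g. as constructed later in this section):

    import numpy as np

    def grad_f(u, X):
        # Sample estimate of the gradient of kappa4(<X, u>); X has shape (n, N).
        s = X.T @ u
        return 4 * (X @ s**3) / s.size - 12 * np.mean(s**2) * (X @ s) / s.size

    def pegi_column(C, X, u0, iters=30):
        # Algorithm 1: fixed point iteration recovering one column direction of A.
        C_pinv = np.linalg.pinv(C)
        u = u0 / np.linalg.norm(u0)
        for _ in range(iters):
            v = grad_f(C_pinv @ u, X)
            u = v / np.linalg.norm(v)   # renormalize; convergence is up to sign
        return u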
We define functions α_k : R^n → R by α_k(u) = (A†u)_k such that for any u ∈ span(A_1, . . . , A_m),
then u = Σ_{i=1}^m α_i(u)A_i is the expansion of u in its A_i basis. Continuing from equation (1), for any
u ∈ S^{n-1} we see

    ∇f(C†u) = 4 Σ_{k=1}^m ⟨A_k, C†u⟩³ κ4(S_k) A_k = 4 Σ_{k=1}^m ⟨A_k, u⟩_C³ κ4(S_k) A_k

is the gradient iteration recast in the ⟨·, ·⟩_C space. Expanding u in its A_k basis, we obtain

    ∇f(C†u) = 4 Σ_{k=1}^m (α_k(u)⟨A_k, A_k⟩_C)³ κ4(S_k) A_k = 4 Σ_{k=1}^m α_k(u)³ (d_kk^{-3} κ4(S_k)) A_k,    (3)

which is a power iteration in the unseen A_k coordinate system. As no assumptions are made upon the
κ4(S_k) values, the d_kk^{-3} scalings which were not present in eq. (1) cause no issues. Using this update,
we obtain Alg. 1, a fixed point method for recovering a single column of A up to an unknown scaling.
Before proceeding, we should clarify the notion of fixed point convergence in Algorithm 1. We say
that the sequence {u_k}_{k=0}^∞ converges to v up to sign if there exists a sequence {c_k}_{k=0}^∞ such that
each c_k ∈ {±1} and c_k u_k → v as k → ∞. We have the following convergence guarantee.
Theorem 1. If u0 is chosen uniformly at random from S^{n-1}, then with probability 1, there exists
ℓ ∈ [m] such that the sequence {u_k}_{k=0}^∞ defined as in Algorithm 1 converges to A_ℓ/‖A_ℓ‖ up to sign.
Further, the rate of convergence is cubic.
Due to limited space, we omit the proof of Theorem 1. It is similar to the proof of [21, Theorem 4].
In practice, we test near convergence by checking if we are still making significant progress. In
particular, for some predefined ε > 0, if there exists a sign value c_k ∈ {±1} such that ‖u_k −
c_k u_{k-1}‖ < ε, then we declare convergence achieved and return the result. As there are only two
choices for c_k, this is easily checked, and we exit the loop if this condition is met.
Full ICA Recovery Via the Pseudo-Euclidean GI-Update. We are able to recover a single column
of A up to its unknown scale. However, for full recovery of A, we would like (given recovered
columns A_{ℓ1}, . . . , A_{ℓj}) to be able to recover a column A_k such that k ∉ {ℓ1, . . . , ℓj} on demand.
The idea behind the simultaneous recovery of all columns of A is two-fold. First, instead of just
finding columns of A using Algorithm 1, we simultaneously find rows of A†. Then, using the
recovered columns of A and rows of A†, we project u onto the orthogonal complement of the
recovered columns of A within the ⟨·, ·⟩_C pseudo-inner product space.
Recovering rows of A†. Suppose we have access to a column A_k (which may be achieved using
Algorithm 1). Let A†_{k·} denote the kth row of A†. Then, we note that C†A_k = (ADA^T)†A_k =
d_kk^{-1}(A†_{k·})^T recovers A†_{k·} up to an arbitrary, unknown constant d_kk^{-1}. However, the
constant d_kk^{-1} may be recovered by noting that ⟨A_k, A_k⟩_C = (C†A_k)^T A_k = d_kk^{-1}. As such, we may
estimate A†_{k·} as [C†A_k/((C†A_k)^T A_k)]^T.
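In code, this estimate is a direct transcription of the displayed formula (function and variable names are ours; C_pinv is the pseudoinverse of C and a_k a recovered column):

    import numpy as np

    def pseudoinverse_row(C_pinv, a_k):
        # b is proportional to the k-th row of A-dagger with unknown scale 1/d_kk;
        # b @ a_k equals <A_k, A_k>_C = 1/d_kk, which cancels that unknown scale.
        b = C_pinv @ a_k
        return b / (b @ a_k)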
Algorithm 2 Full ICA matrix recovery algorithm. Returns two matrices: (1) Ã is the recovered mixing matrix for the noisy ICA model X = AS + η, and (2) B̃ is a running estimate of Ã†.
    1: Inputs: C, ∇f
    2: Ã ← 0, B̃ ← 0
    3: for j ← 1 to m do
    4:     Draw u uniformly at random from S^{n-1}.
    5:     repeat
    6:         u ← u − ÃB̃u
    7:         u ← ∇f(C†u)/‖∇f(C†u)‖.
    8:     until Convergence (up to sign)
    9:     Ã_j ← u
    10:    B̃_{j·} ← [C†Ã_j/((C†Ã_j)^T Ã_j)]^T
    11: end for
    12: return Ã, B̃

Enforcing Orthogonality During the GI Update. Given access to a vector u = Σ_{k=1}^m α_k(u)A_k + P_{A⊥}u (where P_{A⊥} is the projection onto the orthogonal complement of the range of A), some recovered columns A_{ℓ1}, . . . , A_{ℓr}, and corresponding rows of A†, we may zero out the components of u corresponding to the recovered columns of A. Letting u′ = u − Σ_{j=1}^r A_{ℓj} A†_{ℓj·} u, then u′ = Σ_{k∈[m]\{ℓ1,...,ℓr}} α_k(u)A_k + P_{A⊥}u. In particular, u′ is orthogonal (in the ⟨·, ·⟩_C space) to the previously recovered columns of A. This allows the non-orthogonal gradient iteration algorithm to recover a new column of A.

Using these ideas, we obtain Algorithm 2, which is the PEGI algorithm for recovery of the mixing matrix A in noisy ICA up to the inherent ambiguities of the problem. Within this Algorithm, step 6 enforces orthogonality with previously found columns of A, guaranteeing convergence to a new column of A.

Practical Construction of C. In our implementation, we set C = (1/12) Σ_{k=1}^n Hf(e_k), as it can be shown from equation (2) that (1/12) Σ_{k=1}^n Hf(e_k) = ADA^T with d_kk = ‖A_k‖² κ4(S_k). This deterministically guarantees that each latent signal has a significant contribution to C.

3   SINR Optimal Recovery in Noisy ICA
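For completeness, here is a compact end-to-end NumPy sketch of Algorithm 2, including the practical construction of C described above. This is again our own illustrative transcription under the model assumptions of this section (m ≤ n, fourth-order-based estimators); the sample estimators, fixed iteration count, and random seeding are assumptions, not the authors' code:

    import numpy as np

    def pegi(X, m, iters=30):
        # Recover m column directions of A and a running estimate of A-dagger
        # from samples X of shape (n, N).
        n, N = X.shape

        def grad_f(u):
            s = X.T @ u
            return 4 * (X @ s**3) / N - 12 * np.mean(s**2) * (X @ s) / N

        def hess_f(u):
            # Sample estimate of the Hessian of kappa4(<X, u>).
            s = X.T @ u
            Exx, EsX = X @ X.T / N, X @ s / N
            return (12 * (X * s**2) @ X.T / N
                    - 12 * np.mean(s**2) * Exx - 24 * np.outer(EsX, EsX))

        # Practical construction: C = (1/12) sum_k Hf(e_k) = A D A^T.
        C = sum(hess_f(np.eye(n)[k]) for k in range(n)) / 12.0
        C_pinv = np.linalg.pinv(C)

        A_hat = np.zeros((n, m))
        B_hat = np.zeros((m, n))              # running estimate of A-dagger
        rng = np.random.default_rng(0)
        for j in range(m):
            u = rng.standard_normal(n)
            u /= np.linalg.norm(u)
            for _ in range(iters):
                u = u - A_hat @ (B_hat @ u)   # project out recovered columns
                v = grad_f(C_pinv @ u)
                u = v / np.linalg.norm(v)
            A_hat[:, j] = u
            b = C_pinv @ u
            B_hat[j] = b / (b @ u)            # estimated row of A-dagger
        return A_hat, B_hat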
In this section, we demonstrate how to perform SINR optimal ICA within the noisy ICA framework
given access to an algorithm (such as PEGI) to recover the directions of the columns of A. To this
end, we first discuss the SINR optimal demixing solution within any decomposition of the ICA model
into signal and noise as X = AS + η. We then demonstrate that the SINR optimal demixing matrix
is actually the same across all possible model decompositions, and that it can be recovered. The
results in this section hold in greater generality than in section 2. They hold even if m ≥ n (the
underdetermined setting) and even if the additive noise η is non-Gaussian.
Consider B an m × n demixing matrix, and define Ŝ(B) := BX the resulting approximation to
S. It will also be convenient to estimate the source signal S one coordinate at a time: Given a row
vector b, we define Ŝ(b) := bX. If b = B_{k·} (the kth row of B), then Ŝ(b) = [Ŝ(B)]_k = Ŝ_k(B)
is our estimate to the kth latent signal S_k. Within a specific ICA model X = AS + η, signal to
interference-plus-noise ratio (SINR) is defined by the following equation:

    SINR_k(b) := var(bA_k S_k) / (var(bAS − bA_k S_k) + var(bη)) = var(bA_k S_k) / (var(bX) − var(bA_k S_k)).    (4)

SINR_k is the variance of the contribution of the kth source divided by the variance of the noise and
interference contributions within the signal.
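For intuition, SINR_k(b) can be estimated empirically from a known synthetic model. A small sketch (names ours) that simply evaluates the variance ratio in eq. (4) for every source:

    import numpy as np

    def empirical_sinr(b, A, S, eta):
        # SINR_k(b) of eq. (4) for each k; S is m x N, eta is n x N.
        X = A @ S + eta
        total = np.var(b @ X)   # var(bX) = signal + interference + noise
        sig = np.array([np.var((b @ A[:, k]) * S[k]) for k in range(A.shape[1])])
        return sig / (total - sig)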
Given access to the mixing matrix A, we define B_opt = A^H(AA^H + cov(η))†. Since cov(X) =
AA^H + cov(η), it follows that B_opt = A^H cov(X)†. Here, cov(X)† may be estimated from data,
but due to the ambiguities of the noisy ICA model, A (and specifically its column norms) cannot be.
Koldovský and Tichavský [15] observed that when η is a white Gaussian noise, B_opt jointly maximizes SINR_k for each k ∈ [m], i.e., SINR_k takes on its maximal value at (B_opt)_{k·}. We generalize
this result in Proposition 2 below to include arbitrary non-spherical, potentially non-Gaussian noise.
[Figure 1: SINR performance comparison of ICA algorithms. (a) Accuracy under additive Gaussian noise. (b) Bias under additive Gaussian noise.]
It is interesting to note that even after the data is whitened, i.e. cov(X) = I, the optimal SINR
solution is different from the optimal solution in the noiseless case unless A is an orthogonal matrix,
i.e. A† = A^H. This is generally not the case, even if η is white Gaussian noise.
Proposition 2. For each k ∈ [m], (B_opt)_{k·} is a maximizer of SINR_k.
The proof of Proposition 2 can be found in the supplementary material.
Since SINR is scale invariant, Proposition 2 implies that any matrix of the form DB_opt =
DA^H cov(X)† where D is a diagonal scaling matrix (with non-zero diagonal entries) is an SINR-optimal demixing matrix. More formally, we have the following result.
Theorem 3. Let Ã be an n × m matrix containing the columns of A up to scale and an arbitrary
permutation. Then, (ÃH cov(X)†)_{π(k)·} is a maximizer of SINR_k.
By Theorem 3, given access to a matrix Ã which recovers the directions of the columns of A, then
ÃH cov(X)† is the SINR-optimal demixing matrix. For ICA in the presence of Gaussian noise, the
directions of the columns of A are well defined simply from X, that is, the directions of the columns
of A do not depend on the decomposition of X into signal and noise (see the discussion in section 1.1
on ICA indeterminacies). The problem of SINR optimal demixing is thus well defined for ICA in
the presence of Gaussian noise, and the SINR optimal demixing matrix can be estimated from data
without any additional assumptions on the magnitude of the noise in the data.
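Given any such Ã (for instance the output of Algorithm 2), the SINR-optimal demixing matrix of Theorem 3 can be formed directly from data. A small sketch (function and variable names are ours):

    import numpy as np

    def sinr_optimal_demixing(A_hat, X):
        # B = A_hat^H cov(X)^+ ; only the directions of A_hat's columns matter,
        # since column scales and ordering do not affect SINR (Theorem 3).
        Xc = X - X.mean(axis=1, keepdims=True)
        cov_X = Xc @ Xc.conj().T / X.shape[1]
        return A_hat.conj().T @ np.linalg.pinv(cov_X)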
Finally, we note that in the noise-free case, the SINR-optimal source recovery simplifies to be Æ.
Corollary 4. Suppose that X = AS is a noise free (possibly underdetermined) ICA model. Suppose
that à ∈ R^{n×m} contains the columns of A up to scale and permutation, i.e., there exists a diagonal
matrix D with non-zero entries and a permutation matrix Π such that à = ADΠ. Then Æ is an
SINR-optimal demixing matrix.
Corollary 4 is consistent with known beamforming results. In particular, it is known that A† is optimal
(in terms of minimum mean squared error) for underdetermined ICA [19, section 3B].
4   Experimental Results
We compare the proposed PEGI algorithm with existing ICA algorithms. In addition to qorth+GI-ICA
(i.e., GI-ICA with quasi-orthogonalization for preprocessing), we use the following baselines:
JADE [3] is a popular fourth cumulant based ICA algorithm designed for the noise free setting. We
use the implementation of Cardoso and Souloumiac [5].
FastICA [12] is a popular ICA algorithm designed for the noise free setting based on a deflationary
approach of recovering one component at a time. We use the implementation of Gävert et al. [10].
1FICA [16, 17] is a variation of FastICA with the tanh contrast function designed to have low bias
for performing SINR-optimal beamforming in the presence of Gaussian noise.
Ainv performs oracle demixing using A† as the demixing matrix.
SINR-opt performs oracle demixing using A^H cov(X)† to achieve SINR-optimal demixing.
We compare these algorithms on simulated data with n = m. We constructed mixing matrices A
with condition number 3 via a reverse singular value decomposition (A = UΣV^T). The matrices U
and V were random orthogonal matrices, and Σ was chosen to have 1 as its minimum and 3 as its
maximum singular values, with the intermediate singular values chosen uniformly at random. We
drew data from a noisy ICA model X = AS + η where cov(η) was chosen to be malaligned
with cov(AS) = AA^T. We set cov(η) = p(10I − AA^T) where p is a constant defining the noise power.
It can be shown that p = max_v var(v^T η) / max_v var(v^T AS) is the ratio of the maximum directional noise variance to
the maximum directional signal variance. We generated 100 matrices A for our experiments with
100 corresponding ICA data sets for each sample size and noise power. When reporting results, we
apply each algorithm to each of the 100 data sets for the corresponding sample size and noise power
and we report the mean performance. The source distributions used in our ICA experiments were the
Laplace and Bernoulli distribution with parameters 0.05 and 0.5 respectively, the t-distribution with
3 and 5 degrees of freedom respectively, the exponential distribution, and the uniform distribution.
Each distribution was normalized to have unit variance, and the distributions were each used twice to
create 14-dimensional data. We compare the algorithms using either SINR or the SINR loss from the
optimal demixing matrix (defined by SINR Loss = [Optimal SINR − Achieved SINR]).
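The construction above can be reproduced with a few lines of NumPy (a sketch under the stated design; the helper name and random generator are ours, and n ≥ 2 is assumed):

    import numpy as np

    def synthetic_model(n, p, rng):
        # Mixing matrix with singular values in [1, 3] (condition number 3) and
        # malaligned noise covariance p * (10 I - A A^T), which is positive
        # definite because the largest singular value of A is 3.
        U, _ = np.linalg.qr(rng.standard_normal((n, n)))
        V, _ = np.linalg.qr(rng.standard_normal((n, n)))
        svals = np.concatenate(([1.0, 3.0], rng.uniform(1.0, 3.0, n - 2)))
        A = U @ np.diag(svals) @ V.T
        cov_eta = p * (10 * np.eye(n) - A @ A.T)
        return A, cov_eta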
In Figure 1b, we compare our proposed ICA algorithm with various ICA algorithms for signal
recovery. In the PEGI-κ4+SINR algorithm, we use PEGI-κ4 to estimate A, and then perform
demixing using the resulting estimate of A^H cov(X)^{-1}, the formula for SINR-optimal demixing. It
is apparent that when given sufficient samples, PEGI-κ4+SINR provides the best SINR demixing.
JADE, FastICA-tanh, and 1FICA each have a bias in the presence of additive Gaussian noise which
keeps them from being SINR-optimal even when given many samples.
In Figure 1a, we compare algorithms at various sample sizes. The PEGI-κ4+SINR algorithm relies more heavily on accurate estimates
of fourth order statistics than JADE, and the FastICA-tanh and 1FICA algorithms do not require the estimation of fourth order statistics.
For this reason, PEGI-κ4+SINR requires more samples than the other algorithms in order to be
run accurately. However, once sufficient samples are taken, PEGI-κ4+SINR outperforms the
other algorithms including 1FICA, which is designed to have low SINR bias. We also note
that while not reported in order to avoid clutter, the kurtosis-based FastICA performed very
similarly to FastICA-tanh in our experiments.

[Figure 2: Accuracy comparison of PEGI using pseudo-inner product spaces and GI-ICA using quasi-orthogonalization.]

In order to avoid clutter, we did not include qorth+GI-ICA-κ4+SINR (the SINR optimal
demixing estimate constructed using qorth+GI-ICA-κ4 to estimate A) in Figures 1b and 1a. It is also asymptotically unbiased in estimating
the directions of the columns of A, and similar conclusions could be drawn using qorth+GI-ICA-κ4
in place of PEGI-κ4. However, in Figure 2, we see that PEGI-κ4+SINR requires fewer samples
than qorth+GI-ICA-κ4+SINR to achieve good performance. This is particularly highlighted in the
medium sample regime.
On the Performance of Traditional ICA Algorithms for Noisy ICA. An interesting observation
[first made in 15] is that the popular noise free ICA algorithms JADE and FastICA perform reasonably
well in the noisy setting. In Figures 1b and 1a, they significantly outperform demixing using A^{-1} for
source recovery. It turns out that this may be explained by a shared preprocessing step. Both JADE
and FastICA rely on a whitening preprocessing step in which the data are linearly transformed to
have identity covariance. It can be shown in the noise free setting that after whitening, the mixing
matrix A is a rotation matrix. These algorithms proceed by recovering an orthogonal matrix Ã to
approximate the true mixing matrix A. Demixing is performed using Ã^{-1} = ÃH. Since the data is
white (has identity covariance), then the demixing matrix ÃH = ÃH cov(X)^{-1} is an estimate of the
SINR-optimal demixing matrix. Nevertheless, the traditional ICA algorithms give a biased estimate
of A under additive Gaussian noise.
References
[1] L. Albera, A. Ferréol, P. Comon, and P. Chevalier. Blind identification of overcomplete mixtures of sources (BIOME). Linear Algebra and its Applications, 391:3–30, 2004.
[2] S. Arora, R. Ge, A. Moitra, and S. Sachdeva. Provable ICA with unknown Gaussian noise, with implications for Gaussian mixtures and autoencoders. In NIPS, pages 2384–2392, 2012.
[3] J. Cardoso and A. Souloumiac. Blind beamforming for non-Gaussian signals. In Radar and Signal Processing, IEE Proceedings F, volume 140(6), pages 362–370. IET, 1993.
[4] J.-F. Cardoso. Super-symmetric decomposition of the fourth-order cumulant tensor. Blind identification of more sources than sensors. In ICASSP, pages 3109–3112. IEEE, 1991.
[5] J.-F. Cardoso and A. Souloumiac. Matlab JADE for real-valued data v 1.8. http://perso.telecom-paristech.fr/~cardoso/Algo/Jade/jadeR.m, 2005. [Online; accessed 8-May-2013].
[6] P. Chevalier. Optimal separation of independent narrow-band sources: Concept and performance 1. Signal Processing, 73(1-2):27–47, 1999. ISSN 0165-1684.
[7] P. Comon and C. Jutten, editors. Handbook of Blind Source Separation. Academic Press, 2010.
[8] L. De Lathauwer, B. De Moor, and J. Vandewalle. Independent component analysis based on higher-order statistics only. In Statistical Signal and Array Processing, 1996. Proceedings., 8th IEEE Signal Processing Workshop on, pages 356–359. IEEE, 1996.
[9] L. De Lathauwer, J. Castaing, and J. Cardoso. Fourth-order cumulant-based blind identification of underdetermined mixtures. Signal Processing, IEEE Transactions on, 55(6):2965–2973, June 2007. ISSN 1053-587X. doi: 10.1109/TSP.2007.893943.
[10] H. Gävert, J. Hurri, J. Särelä, and A. Hyvärinen. Matlab FastICA v 2.5. http://research.ics.aalto.fi/ica/fastica/code/dlcode.shtml, 2005. [Online; accessed 1-May-2013].
[11] N. Goyal, S. Vempala, and Y. Xiao. Fourier PCA and robust tensor decomposition. In STOC, pages 584–593, 2014.
[12] A. Hyvärinen and E. Oja. Independent component analysis: Algorithms and applications. Neural Networks, 13(4-5):411–430, 2000.
[13] A. Hyvärinen, J. Karhunen, and E. Oja. Independent Component Analysis. John Wiley & Sons, 2001.
[14] M. Joho, H. Mathis, and R. H. Lambert. Overdetermined blind source separation: Using more sensors than source signals in a noisy mixture. In Proc. International Conference on Independent Component Analysis and Blind Signal Separation, Helsinki, Finland, pages 81–86, 2000.
[15] Z. Koldovský and P. Tichavský. Methods of fair comparison of performance of linear ICA techniques in presence of additive noise. In ICASSP, pages 873–876, 2006.
[16] Z. Koldovský and P. Tichavský. Asymptotic analysis of bias of FastICA-based algorithms in presence of additive noise. Technical report, 2007.
[17] Z. Koldovský and P. Tichavský. Blind instantaneous noisy mixture separation with best interference-plus-noise rejection. In Independent Component Analysis and Signal Separation, pages 730–737. Springer, 2007.
[18] S. Makino, T.-W. Lee, and H. Sawada. Blind Speech Separation. Springer, 2007.
[19] B. D. Van Veen and K. M. Buckley. Beamforming: A versatile approach to spatial filtering. IEEE ASSP Magazine, 5(2):4–24, 1988.
[20] R. Vigário, J. Särelä, V. Jousmäki, M. Hämäläinen, and E. Oja. Independent component approach to the analysis of EEG and MEG recordings. Biomedical Engineering, IEEE Transactions on, 47(5):589–593, 2000.
[21] J. R. Voss, L. Rademacher, and M. Belkin. Fast algorithms for Gaussian noise invariant independent component analysis. In Advances in Neural Information Processing Systems 26, pages 2544–2552, 2013.
[22] A. Yeredor. Blind source separation via the second characteristic function. Signal Processing, 80(5):897–902, 2000.
[23] A. Yeredor. Non-orthogonal joint diagonalization in the least-squares sense with application in blind source separation. Signal Processing, IEEE Transactions on, 50(7):1545–1553, 2002.
Yining Wang, Yu-Xiang Wang and Aarti Singh
Machine Learning Department, Carnegie Mellon University, Pittsburgh, USA
{yiningwa,yuxiangw,aarti}@cs.cmu.edu
Abstract
Subspace clustering is an unsupervised learning problem that aims at grouping
data points into multiple "clusters" so that data points in a single cluster lie approximately on a low-dimensional linear subspace. It is originally motivated by
3D motion segmentation in computer vision, but has recently been generically
applied to a wide range of statistical machine learning problems, which often involve sensitive datasets about human subjects. This raises a dire concern for
data privacy. In this work, we build on the framework of differential privacy
and present two provably private subspace clustering algorithms. We demonstrate
via both theory and experiments that one of the presented methods enjoys formal
privacy and utility guarantees; the other one asymptotically preserves differential
privacy while having good performance in practice. Along the course of the proof,
we also obtain two new provable guarantees for the agnostic subspace clustering
and the graph connectivity problem, which might be of independent interest.
1 Introduction
Subspace clustering was originally proposed to solve very specific computer vision problems having
a union-of-subspace structure in the data, e.g., motion segmentation under an affine camera model
[11] or face clustering under Lambertian illumination models [15]. As it gains increasing attention
in the statistics and machine learning community, people have started to use it as an agnostic learning tool in
social network [5], movie recommendation [33] and biological datasets [19]. The growing applicability of subspace clustering in these new domains inevitably raises the concern of data privacy, as
many such applications involve dealing with sensitive information. For example, [19] applies subspace clustering to identify diseases from personalized medical data and [33] in fact uses subspace
clustering as an effective tool to conduct linkage attacks on individuals in movie rating datasets. Nevertheless, privacy issues in subspace clustering have been less explored in the past literature, with
the only exception of a brief analysis and discussion in [29]. However, the algorithms and analysis
presented in [29] have several notable deficiencies. For example, data points are assumed to be incoherent, and they only protect the differential privacy of any single feature of a user rather than the entire user
profile in the database. The latter means it is possible for an attacker to infer with high confidence
whether a particular user is in the database, given sufficient side information.
It is perhaps understandable that there is little work focusing on private subspace clustering, which
is by all means a challenging task. For example, a negative result in [29] shows that if utility is
measured in terms of exact clustering, then no private subspace clustering algorithm exists when
neighboring databases are allowed to differ on an entire user profile. In addition, state-of-the-art
subspace clustering methods like Sparse Subspace Clustering (SSC, [11]) lack a complete analysis of
their clustering output, thanks to the notorious "graph connectivity" problem [21]. Finally, clustering
could have high global sensitivity even if only cluster centers are released, as depicted in Figure 1.
As a result, general private data releasing schemes like output perturbation [7, 8, 2] do not apply.
In this work, we present a systematic and principled treatment of differentially private subspace
clustering. To circumvent the negative result in [29], we use the perturbation of the recovered low-dimensional subspaces from the ground truth as the utility measure. Our contributions are two-fold.
First, we analyze two efficient algorithms based on the sample-aggregate framework [22] and establish formal privacy and utility guarantees when data are generated from some stochastic model or
satisfy certain deterministic separation conditions. New results on (non-private) subspace clustering
are obtained along our analysis, including a fully agnostic subspace clustering on well-separated
datasets using stability arguments and exact clustering guarantee for thresholding-based subspace
clustering (TSC, [14]) in the noisy setting. In addition, we employ the exponential mechanism [18]
and propose a novel Gibbs sampler for sampling from this distribution, which involves a novel tweak
in sampling from a matrix Bingham distribution. The method works well in practice and we show it
is closely related to the well-known mixtures of probabilistic PCA model [27].
Related work Subspace clustering can be thought of as a generalization of PCA and k-means clustering. The former aims at finding a single low-dimensional subspace and the latter uses zero-dimensional subspaces as cluster centers. There has been extensive research on private PCA
[2, 4, 10] and k-means [2, 22, 26]. Perhaps the most similar work to ours is [22, 4]. [22] applies the
sample-aggregate framework to k-means clustering and [4] employs the exponential mechanism to
recover private principal vectors. In this paper we give non-trivial generalizations of both works to the
private subspace clustering setting.
2 Preliminaries
2.1 Notations
For a vector $x \in \mathbb{R}^d$, its $p$-norm is defined as $\|x\|_p = (\sum_i x_i^p)^{1/p}$. If $p$ is not explicitly specified then the 2-norm is used. For a matrix $A \in \mathbb{R}^{n\times m}$, we use $\sigma_1(A) \ge \cdots \ge \sigma_n(A) \ge 0$ to denote its singular values (assuming without loss of generality that $n \le m$). We use $\|\cdot\|_\xi$ to denote matrix norms, with $\xi = 2$ the matrix spectral norm and $\xi = F$ the Frobenius norm; that is, $\|A\|_2 = \sigma_1(A)$ and $\|A\|_F = \sqrt{\sum_{i=1}^n \sigma_i(A)^2}$. For a $q$-dimensional subspace $S \subseteq \mathbb{R}^d$, we associate with it a basis $U \in \mathbb{R}^{d\times q}$, where the $q$ columns of $U$ are orthonormal and $S = \mathrm{range}(U)$. We use $\mathcal{S}_q^d$ to denote the set of all $q$-dimensional subspaces of $\mathbb{R}^d$.
Given $x \in \mathbb{R}^d$ and $S \subseteq \mathbb{R}^d$, the distance $d(x, S)$ is defined as $d(x, S) = \inf_{y \in S} \|x - y\|_2$. If $S$ is a subspace associated with a basis $U$, then we have $d(x, S) = \|x - P_S(x)\|_2 = \|x - UU^\top x\|_2$, where $P_S(\cdot)$ denotes the projection operator onto subspace $S$. For two subspaces $S, S'$ of dimension $q$, the distance $d(S, S')$ is defined as the Frobenius norm of the sine of the matrix of principal angles, i.e.,
$$d(S, S') = \|\sin\Theta(S, S')\|_F = \|UU^\top - U'U'^\top\|_F, \qquad (1)$$
where $U, U'$ are orthonormal bases associated with $S$ and $S'$, respectively.
2.2 Subspace clustering
Given $n$ data points $x_1, \ldots, x_n \in \mathbb{R}^d$, the task of subspace clustering is to cluster the data points into $k$ clusters so that data points within a single cluster lie approximately on a low-dimensional subspace. Without loss of generality, we assume $\|x_i\|_2 \le 1$ for all $i = 1, \ldots, n$. We also use $\mathcal{X} = \{x_1, \ldots, x_n\}$ to denote the dataset and $X \in \mathbb{R}^{d\times n}$ to denote the data matrix obtained by stacking all data points in columnwise order. Subspace clustering seeks to find $k$ $q$-dimensional subspaces $\hat{\mathcal{C}} = \{\hat{S}_1, \ldots, \hat{S}_k\}$ so as to minimize the Wasserstein distance, or its square, defined as
$$d_W^2(\hat{\mathcal{C}}, \mathcal{C}^*) = \min_{\pi:[k]\to[k]} \sum_{i=1}^{k} d^2(\hat{S}_i, S^*_{\pi(i)}), \qquad (2)$$
where $\pi$ ranges over all permutations on $[k]$ and the $S^*_i$ are the optimal/ground-truth subspaces. In a model based approach, $\mathcal{C}^*$ is fixed and the data points $\{x_i\}_{i=1}^n$ are generated either deterministically or stochastically from one of the ground-truth subspaces in $\mathcal{C}^*$ with noise corruption; in a completely agnostic setting, $\mathcal{C}^*$ is defined as the minimizer of the $k$-means subspace clustering objective:
$$\mathcal{C}^* := \operatorname*{argmin}_{\mathcal{C}=\{S_1,\ldots,S_k\}\subseteq\mathcal{S}_q^d} \mathrm{cost}(\mathcal{C}; \mathcal{X}) = \operatorname*{argmin}_{\mathcal{C}=\{S_1,\ldots,S_k\}\subseteq\mathcal{S}_q^d} \frac{1}{n}\sum_{i=1}^{n}\min_j d^2(x_i, S_j). \qquad (3)$$
To simplify notation, we use $\Delta_k^2(\mathcal{X}) = \mathrm{cost}(\mathcal{C}^*; \mathcal{X})$ to denote the cost of the optimal solution.
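To make these objects concrete, the following is a minimal NumPy sketch (our illustration, not the paper's Matlab code) of the clustering cost in Eq. (3) and the squared Wasserstein distance in Eq. (2); the brute-force search over permutations is only intended for small k.

```python
import numpy as np
from itertools import permutations

def residual_sq(x, U):
    # d(x, S)^2 = ||x - U U^T x||_2^2 for an orthonormal basis U of S
    r = x - U @ (U.T @ x)
    return float(r @ r)

def clustering_cost(X, subspaces):
    # cost(C; X) = (1/n) sum_i min_j d^2(x_i, S_j); X holds points as columns
    n = X.shape[1]
    return sum(min(residual_sq(X[:, i], U) for U in subspaces)
               for i in range(n)) / n

def subspace_dist_sq(U, V):
    # d^2(S, S') = ||U U^T - V V^T||_F^2 = 2 (q - ||U^T V||_F^2)
    q = U.shape[1]
    return 2.0 * (q - np.linalg.norm(U.T @ V, "fro") ** 2)

def wasserstein_dist_sq(C_hat, C_star):
    # Eq. (2): minimize over permutations; brute force is fine for small k
    k = len(C_hat)
    return min(sum(subspace_dist_sq(C_hat[i], C_star[pi[i]]) for i in range(k))
               for pi in permutations(range(k)))
```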
Algorithm 1 The sample-aggregate framework [22]
1: Input: $\mathcal{X} = \{x_i\}_{i=1}^n \subseteq \mathbb{R}^d$, number of subsets $m$, privacy parameters $\varepsilon, \delta$; $f$, $d_M$.
2: Initialize: $s = \sqrt{m}$, $\beta = \varepsilon/(5\sqrt{2\ln(2/\delta)})$, $\varepsilon' = \varepsilon/(4(D + \ln(2/\delta)))$.
3: Subsampling: Select $m$ random subsets of size $n/m$ of $\mathcal{X}$ independently and uniformly at random without replacement. Repeat this step until no single data point appears in more than $\sqrt{m}$ of the sets. Mark the subsampled subsets $\mathcal{X}_{S_1}, \ldots, \mathcal{X}_{S_m}$.
4: Separate queries: Compute $B = \{s_i\}_{i=1}^m \subseteq \mathbb{R}^D$, where $s_i = f(\mathcal{X}_{S_i})$.
5: Aggregation: Compute $g(B) = s_{i^*}$, where $i^* = \operatorname{argmin}_{i=1}^m r_i(t_0)$ with $t_0 = \frac{m+s}{2} + 1$. Here $r_i(t_0)$ denotes the distance $d_M(\cdot,\cdot)$ between $s_i$ and the $t_0$-th nearest neighbor to $s_i$ in $B$.
6: Noise calibration: Compute $S(B) = 2\max_k\big(\rho(t_0 + (k+1)s)\cdot e^{-\beta k}\big)$, where $\rho(t)$ is the mean of the top $\lfloor s/\beta \rfloor$ values in $\{r_1(t), \ldots, r_m(t)\}$.
7: Output: $\mathcal{A}(\mathcal{X}) = g(B) + \frac{S(B)}{\varepsilon'} u$, where $u$ is a standard Gaussian random vector.
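For intuition, here is a hedged Python sketch of the sample-aggregate pattern above. It simplifies Algorithm 1: subsampling uses disjoint blocks (which trivially satisfy the repetition constraint in step 3), and a single `noise_scale` argument stands in for the smooth-sensitivity calibration $S(B)/\varepsilon'$ of steps 6 and 7; `f` and `dist` play the roles of the solver and the semimetric $d_M$ from the text.

```python
import numpy as np

def sample_aggregate(X, f, dist, m, noise_scale, seed=0):
    """X: list of data points; f: non-private solver, subset -> R^D vector;
    dist: semimetric on R^D; noise_scale: stand-in for S(B)/eps' in Alg. 1."""
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(len(X)), m)   # m disjoint subsets
    B = np.stack([f([X[i] for i in blk]) for blk in blocks])
    t0 = min(int((m + np.sqrt(m)) / 2) + 1, m - 1)
    def r(i):                    # distance to the t0-th nearest neighbour
        d = sorted(dist(B[i], B[j]) for j in range(m) if j != i)
        return d[t0 - 1]
    g = B[min(range(m), key=r)]  # the aggregation step (step 5)
    return g + noise_scale * rng.standard_normal(g.shape)
```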
2.3 Differential privacy
Definition 2.1 (Differential privacy, [7, 8]). A randomized algorithm $\mathcal{A}$ is $(\varepsilon,\delta)$-differentially private if for all $\mathcal{X}, \mathcal{Y}$ satisfying $d(\mathcal{X}, \mathcal{Y}) = 1$ and all sets $\mathcal{S}$ of possible outputs the following holds:
$$\Pr[\mathcal{A}(\mathcal{X}) \in \mathcal{S}] \le e^{\varepsilon}\Pr[\mathcal{A}(\mathcal{Y}) \in \mathcal{S}] + \delta. \qquad (4)$$
In addition, if $\delta = 0$ then the algorithm $\mathcal{A}$ is $\varepsilon$-differentially private.
In our setting, the distance $d(\cdot,\cdot)$ between two datasets $\mathcal{X}$ and $\mathcal{Y}$ is defined as the number of differing columns in $X$ and $Y$. Differential privacy ensures that the output distribution is obfuscated to the point that every user has plausible deniability about being in the dataset, and in addition any inferences about an individual user will have nearly the same confidence before and after the private release.
3 Sample-aggregation based private subspace clustering
In this section we first summarize the sample-aggregate framework introduced in [22] and argue
why it should be preferred to conventional output perturbation mechanisms [7, 8] for subspace clustering. We then analyze two efficient algorithms based on the sample-aggregate framework and
prove formal privacy and utility guarantees. We also prove new results in our analysis regarding
the stability of k-means subspace clustering (Lem. 3.3) and graph connectivity (i.e., consistency) of
noisy threshold-based subspace clustering (TSC, [14]) under a stochastic model (Lem. 3.5).
3.1 Smooth local sensitivity and the sample-aggregate framework
Most existing privacy frameworks [7, 8] are based on the idea of global sensitivity, which is defined as the maximum output perturbation $\|f(\mathcal{X}_1) - f(\mathcal{X}_2)\|_\xi$, where the maximum is over all neighboring databases $\mathcal{X}_1, \mathcal{X}_2$ and $\xi = 1$ or $2$. Unfortunately, the global sensitivity of clustering problems is usually high even if only cluster centers are released. For example, Figure 1 shows that the global sensitivity of $k$-means subspace clustering could be as high as $O(1)$, which ruins the algorithm's utility.

Figure 1: Illustration of instability of $k$-means subspace clustering solutions ($d = 2$, $k = 2$, $q = 1$). Blue dots represent evenly spaced data points on the unit circle; blue crosses indicate an additional data point. Red lines are optimal solutions.

To circumvent the above-mentioned challenges, Nissim et al. [22] introduce the sample-aggregate framework based on the concept of a smooth version of local sensitivity. Unlike global sensitivity, local sensitivity measures the maximum perturbation $\|f(\mathcal{X}) - f(\mathcal{X}')\|_\xi$ over all databases $\mathcal{X}'$ neighboring the input database $\mathcal{X}$. The proposed sample-aggregate framework (pseudocode in Alg. 1) enjoys low local sensitivity and comes with the following guarantee:
Theorem 3.1 ([22], Theorem 4.2). Let $f : \mathcal{D} \to \mathbb{R}^D$ be an efficiently computable function, where $\mathcal{D}$ is the collection of all databases and $D$ is the output dimension. Let $d_M(\cdot,\cdot)$ be a semimetric on the output space of $f$.¹ Set $\varepsilon > 2D/\sqrt{m}$ and $m = \Omega(\log^2 n)$. The sample-aggregate algorithm $\mathcal{A}$ in Algorithm 1 is an efficient $(\varepsilon,\delta)$-differentially private algorithm. Furthermore, if $f$ and $m$ are chosen such that the $\ell_1$ norm of the output of $f$ is bounded by $\Lambda$ and
$$\Pr_{\mathcal{X}_S \subseteq \mathcal{X}}\big[d_M(f(\mathcal{X}_S), c) \le r\big] \ge \frac{3}{4} \qquad (5)$$
for some $c \in \mathbb{R}^D$ and $r > 0$, then the standard deviation of the Gaussian noise added is upper bounded by $O(r/\varepsilon) + \frac{\Lambda}{\varepsilon} e^{-\Omega(\sqrt{m}/D)}$. In addition, when $m$ satisfies $m = \Omega(D^2\log^2(r/\Lambda)/\varepsilon^2)$, with high probability each coordinate of $\mathcal{A}(\mathcal{X}) - \bar{c}$ is upper bounded by $O(r/\varepsilon)$, where $\bar{c}$ depends on $\mathcal{A}(\mathcal{X})$ and satisfies $d_M(c, \bar{c}) = O(r)$.
Let $f$ be any subspace clustering solver that outputs $k$ estimated low-dimensional subspaces and let $d_M$ be the Wasserstein distance as defined in Eq. (2). Theorem 3.1 provides a privacy guarantee for an efficient meta-algorithm with any $f$. In addition, a utility guarantee holds under some additional assumptions on the input dataset $\mathcal{X}$. In the following sections we establish such utility guarantees. The main idea is to prove stability results as outlined in Eq. (5) for particular subspace clustering solvers and then apply Theorem 3.1.
3.2 The agnostic setting
We first consider the setting where the data points $\{x_i\}_{i=1}^n$ are arbitrarily placed. Under such an agnostic setting the optimal solution $\mathcal{C}^*$ is defined as the one that minimizes the $k$-means cost as in Eq. (3). The solver $f$ is taken to be any $(1+\epsilon)$-approximation² of optimal $k$-means subspace clustering; that is, $f$ always outputs subspaces $\hat{\mathcal{C}}$ satisfying $\mathrm{cost}(\hat{\mathcal{C}}; \mathcal{X}) \le (1+\epsilon)\,\mathrm{cost}(\mathcal{C}^*; \mathcal{X})$. Efficient core-set based approximation algorithms exist, for example, in [12]. The key task of this section is to identify assumptions under which the stability condition in Eq. (5) holds with respect to an approximate solver $f$. The example given in Figure 1 also suggests that an identifiability issue arises when the input data $\mathcal{X}$ itself cannot be well clustered. For example, no two straight lines could well approximate data uniformly distributed on a circle. To circumvent the above-mentioned difficulty, we impose the following well-separation condition on the input data $\mathcal{X}$:
following well-separation condition on the input data X :
Definition 3.2 (Well-separation condition for k-means subspace clustering). A dataset X is
(?, ?, ?)-well separated if there exist constants ?, ? and ?, all between 0 and 1, such that
?2k (X ) ? min ?2 ?2k?1 (X ), ?2k,? (X ) ? ?, ?2k,+ (X ) + ? ,
(6)
where ?k?1 , ?k,? and ?k,+ are defined as ?2k?1 (X ) = minS1:k?1 ?Sdq cost({Si }; X ); ?2k,? (X ) =
minS1 ?Sdq?1 ,S2:k ?Sdq cost({Si }; X ); and ?2k,+ (X ) = minS1 ?Sdq+1 ,S2:k ?Sdq cost({Si }; X ).
The first condition in Eq. (6), $\Delta_k^2(\mathcal{X}) \le \phi^2\Delta_{k-1}^2(\mathcal{X})$, asserts that the input dataset $\mathcal{X}$ cannot be well clustered using $k-1$ instead of $k$ clusters. It was introduced in [23] to analyze the stability of $k$-means solutions. For subspace clustering, we need two further conditions regarding the intrinsic dimension of each subspace. The condition $\Delta_k^2(\mathcal{X}) \le \Delta_{k,-}^2(\mathcal{X}) - \eta$ asserts that replacing a $q$-dimensional subspace with a $(q-1)$-dimensional one is not sufficient, while $\Delta_k^2(\mathcal{X}) \le \Delta_{k,+}^2(\mathcal{X}) + \psi$ means an additional subspace dimension does not help much with clustering $\mathcal{X}$.
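In principle these three quantities can be estimated empirically. The sketch below illustrates the check in Eq. (6); `cost_with_dims` is a hypothetical helper (not defined in the paper) returning the approximately optimal k-means subspace clustering cost when the subspace dimensions are fixed to the given list.

```python
def is_well_separated(X, k, q, phi, eta, psi, cost_with_dims):
    # cost_with_dims(X, dims): best achievable cost with subspaces of the
    # listed dimensions (hypothetical; e.g. a (1+eps)-approximate solver).
    d_k  = cost_with_dims(X, [q] * k)            # Delta_k^2(X)
    d_k1 = cost_with_dims(X, [q] * (k - 1))      # Delta_{k-1}^2(X)
    d_km = cost_with_dims(X, [q - 1] + [q] * (k - 1))  # Delta_{k,-}^2(X)
    d_kp = cost_with_dims(X, [q + 1] + [q] * (k - 1))  # Delta_{k,+}^2(X)
    return d_k <= min(phi ** 2 * d_k1, d_km - eta, d_kp + psi)
```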
The following lemma is our main stability result for subspace clustering on well-separated datasets. It states that when a candidate clustering $\hat{\mathcal{C}}$ is close to the optimal clustering $\mathcal{C}^*$ in terms of clustering cost, they are also close in terms of the Wasserstein distance defined in Eq. (2).
Lemma 3.3 (Stability of agnostic $k$-means subspace clustering). Assume $\mathcal{X}$ is $(\phi,\eta,\psi)$-well separated with $\phi^2 < 1/1602$ and $\eta > \psi$. Suppose a candidate clustering $\hat{\mathcal{C}} = \{\hat{S}_1, \ldots, \hat{S}_k\} \subseteq \mathcal{S}_q^d$ satisfies $\mathrm{cost}(\hat{\mathcal{C}};\mathcal{X}) \le a\cdot\mathrm{cost}(\mathcal{C}^*;\mathcal{X})$ for some $a < \frac{1-802\phi^2}{800\phi^2}$. Then the following holds:
$$d_W(\hat{\mathcal{C}}, \mathcal{C}^*) \le \frac{600\sqrt{2}\,\phi^2\sqrt{k}}{(1-150\phi^2)(\eta-\psi)}. \qquad (7)$$
The following theorem is then a simple corollary, with a complete proof in Appendix B.
¹ $d_M(\cdot,\cdot)$ satisfies $d_M(x,y) \ge 0$, $d_M(x,x) = 0$ and $d_M(x,y) \le d_M(x,z) + d_M(y,z)$ for all $x, y, z$.
² Here $\epsilon$ is an approximation constant and is not related to the privacy parameter $\varepsilon$.
Algorithm 2 Threshold-based subspace clustering (TSC), a simplified version
1: Input: $\mathcal{X} = \{x_i\}_{i=1}^n \subseteq \mathbb{R}^d$, number of clusters $k$ and number of neighbors $s$.
2: Thresholding: construct $G \in \{0,1\}^{n\times n}$ by connecting $x_i$ to the other $s$ data points in $\mathcal{X}$ with the largest absolute inner products $|\langle x_i, x'\rangle|$. Complete $G$ so that it is undirected.
3: Clustering: Let $\mathcal{X}^{(1)}, \ldots, \mathcal{X}^{(\ell)}$ be the connected components in $G$. Construct $\tilde{\mathcal{X}}^{(\ell)}$ by sampling $q$ points from $\mathcal{X}^{(\ell)}$ uniformly at random without replacement.
4: Output: subspaces $\hat{\mathcal{C}} = \{\hat{S}^{(\ell)}\}_{\ell=1}^k$; $\hat{S}^{(\ell)}$ is the subspace spanned by the $q$ points in $\tilde{\mathcal{X}}^{(\ell)}$.
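A minimal NumPy/SciPy rendering of Algorithm 2 follows; it is our sketch rather than the authors' implementation, and it assumes the thresholded graph indeed splits into (at least) k connected components.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

def tsc(X, k, q, s, seed=0):
    """X: d x n matrix with (near) unit-norm columns; s: number of neighbours."""
    n = X.shape[1]
    C = np.abs(X.T @ X)                  # |<x_i, x_j>| for all pairs
    np.fill_diagonal(C, -np.inf)         # exclude self-edges
    G = np.zeros((n, n), dtype=bool)
    for i in range(n):
        G[i, np.argpartition(C[i], -s)[-s:]] = True   # s strongest neighbours
    G = G | G.T                          # complete G so it is undirected
    _, labels = connected_components(G, directed=False)
    rng = np.random.default_rng(seed)
    subspaces = []
    for l in range(k):
        pts = np.flatnonzero(labels == l)
        sample = rng.choice(pts, size=min(q, len(pts)), replace=False)
        Q, _ = np.linalg.qr(X[:, sample])   # orthonormal basis of their span
        subspaces.append(Q)
    return subspaces, labels
```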
Theorem 3.4. Fix a $(\phi,\eta,\psi)$-well separated dataset $\mathcal{X}$ with $n$ data points, $\phi^2 < 1/1602$ and $\eta > \psi$. Suppose $\mathcal{X}_S \subseteq \mathcal{X}$ is a subset of $\mathcal{X}$ with size $m$, sampled uniformly at random without replacement. Let $\hat{\mathcal{C}} = \{\hat{S}_1, \ldots, \hat{S}_k\}$ be a $(1+\epsilon)$-approximation of optimal $k$-means subspace clustering computed on $\mathcal{X}_S$. If $m = \Omega\big(\frac{kqd\log(qd/\epsilon'\Delta_k^2(\mathcal{X}))}{\epsilon'^2\Delta_k^4(\mathcal{X})}\big)$ with $\epsilon' < \frac{1-802\phi^2}{800\phi^2} - 2(1+\epsilon)$, then we have:
$$\Pr_{\mathcal{X}_S}\left[d_W(\hat{\mathcal{C}}, \mathcal{C}^*) \le \frac{600\sqrt{2}\,\phi^2\sqrt{k}}{(1-150\phi^2)(\eta-\psi)}\right] \ge \frac{3}{4}, \qquad (8)$$
where $\mathcal{C}^* = \{S_1^*, \ldots, S_k^*\}$ is the optimal clustering on $\mathcal{X}$; that is, $\mathrm{cost}(\mathcal{C}^*;\mathcal{X}) = \Delta_k^2(\mathcal{X})$.
Consequently, applying Theorem 3.4 together with the sample-aggregate framework, we obtain a weak polynomial-time $\varepsilon$-differentially private algorithm for agnostic $k$-means subspace clustering, with the additional amount of per-coordinate Gaussian noise upper bounded by $O\big(\frac{\sqrt{\phi^2 k}}{\varepsilon(\eta-\psi)}\big)$. Our bound is comparable to the one obtained in [22] for private $k$-means clustering, except for the $(\eta-\psi)$ term, which characterizes the well-separatedness under the subspace clustering scenario.
3.3 The stochastic setting
We further consider the case when data points are stochastically generated from some underlying "true" subspace set $\mathcal{C}^* = \{S_1^*, \ldots, S_k^*\}$. Such settings were extensively investigated in previous development of subspace clustering algorithms [24, 25, 14]. Below we give a precise definition of the considered stochastic subspace clustering model:
The stochastic model. For every cluster $\ell$ associated with subspace $S_\ell^*$, a data point $x_i^{(\ell)} \in \mathbb{R}^d$ belonging to cluster $\ell$ can be written as $x_i^{(\ell)} = y_i^{(\ell)} + \varepsilon_i^{(\ell)}$, where $y_i^{(\ell)}$ is sampled uniformly at random from $\{y \in S_\ell^* : \|y\|_2 = 1\}$ and $\varepsilon_i^{(\ell)} \sim \mathcal{N}(0, \sigma^2/d \cdot I_d)$ for some noise parameter $\sigma$.
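Sampling from this model is straightforward; a one-function sketch (assuming NumPy) is:

```python
import numpy as np

def sample_cluster(U, n_l, sigma, rng):
    """U: d x q orthonormal basis of S_l*; returns d x n_l noisy points."""
    d, q = U.shape
    Y = rng.standard_normal((q, n_l))
    Y /= np.linalg.norm(Y, axis=0)       # uniform on {y in S_l*: ||y||_2 = 1}
    W = rng.standard_normal((d, n_l)) * sigma / np.sqrt(d)  # N(0, sigma^2/d I_d)
    return U @ Y + W
```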
Under the stochastic setting we consider the solver f to be the Threshold-based Subspace Clustering
(TSC, [14]) algorithm. A simplified version of TSC is presented in Alg. 2. An alternative idea is to
apply the results of the previous section, since the stochastic model implies a well-separated dataset when the noise level $\sigma$ is small. However, the running time of TSC is $O(n^2 d)$, which is much more efficient
than core-set based methods. TSC is provably correct in that the similarity graph G has no false
connections and is connected per cluster, as shown in the following lemma:
Lemma 3.5 (Connectivity of TSC). Fix $\alpha > 1$ and assume $\max_\ell 0.04 n_\ell \le s \le \min_\ell n_\ell/6$. If, for every $\ell \in \{1, \ldots, k\}$, the number of data points $n_\ell$ and the noise level $\sigma$ satisfy
$$\frac{n_\ell}{\log n_\ell} > \frac{2q(12\pi)^{q-1}}{0.01(q/2-1)(q-1)}; \qquad \frac{\bar{\sigma}(1+\bar{\sigma})\sqrt{q}}{\sqrt{d}} \le \frac{1}{15\log n}\left(1 - \min_{\ell\ne\ell'}\sqrt{1 - \frac{d^2(S_\ell^*, S_{\ell'}^*)}{q}}\right);$$
$$\sigma < \frac{\sqrt{d}}{24\log n}\left[\cos\left(12\pi\left(\frac{\sqrt{2\pi}\,q\log n_\ell}{n_\ell}\right)^{\frac{1}{q-1}}\right) - \cos\left(\left(\frac{0.01(q/2-1)(q-1)}{n_\ell}\right)^{\frac{1}{q-1}}\right)\right],$$
where $\bar{\sigma} = 2\sqrt{5}\,\sigma + \sigma^2$, then with probability at least
$$1 - n^2 e^{-\sqrt{d}} - n\sum_\ell e^{-n_\ell/400} - \sum_\ell \frac{n_\ell^{1-\alpha}}{\alpha\log n_\ell} - \frac{12}{n} - \sum_\ell n_\ell e^{-c(n_\ell - 1)},$$
the connected components in $G$ correspond exactly to the $k$ subspaces.
Conditions in Lemma 3.5 characterize the interaction between the sample complexity $n_\ell$, the noise level $\sigma$ and the "signal" level $\min_{\ell\ne\ell'} d(S_\ell^*, S_{\ell'}^*)$. Theorem 3.6 is then a simple corollary of Lemma 3.5. Complete proofs are deferred to Appendix C.
Theorem 3.6 (Stability of TSC on stochastic data). Assume the conditions in Lemma 3.5 hold with respect to $n_0 = n/m$ for $\Omega(\log^2 n) \le m \le o(n)$. Assume in addition that $\lim_{n\to\infty} n_\ell = \infty$ for all $\ell = 1, \ldots, L$ and that the failure probability does not exceed $1/8$. Then for every $\epsilon > 0$ we have
$$\lim_{n\to\infty}\Pr_{\mathcal{X}_S}\left[d_W(\hat{\mathcal{C}}, \mathcal{C}^*) > \epsilon\right] = 0. \qquad (9)$$
Compared to Theorem 3.4 for the agnostic model, Theorem 3.6 shows that one can achieve consistent estimation of the underlying subspaces under a stochastic model. It is an interesting question to derive finite sample bounds for the differentially private TSC algorithm.
3.4 Discussion
It is worth noting that the sample-aggregate framework is an $(\varepsilon,\delta)$-differentially private mechanism for any computational subroutine $f$. However, the utility claim (i.e., the $O(r/\varepsilon)$ bound on each coordinate of $\mathcal{A}(\mathcal{X}) - \bar{c}$) requires the stability of the particular subroutine $f$, as outlined in Eq. (5). It is unfortunately hard to theoretically argue for the stability of state-of-the-art subspace clustering methods such as Sparse Subspace Clustering (SSC, [11]) due to the "graph connectivity" issue [21].³ Nevertheless, we observe satisfactory performance of SSC based algorithms in simulations (see Sec. 5). It remains an open question to derive a utility guarantee for (user) differentially private SSC.
4 Private subspace clustering via the exponential mechanism
In Section 3 we analyzed two algorithms with provable privacy and utility guarantees for subspace clustering based on the sample-aggregate framework. However, empirical evidence shows that sample-aggregate based private clustering suffers from poor utility in practice [26]. In this section, we propose a practical private subspace clustering algorithm based on the exponential mechanism [18]. In particular, given the dataset $\mathcal{X}$ with $n$ data points, we propose to sample parameters $\theta = (\{S_\ell\}_{\ell=1}^k, \{z_i\}_{i=1}^n)$, where $S_\ell \in \mathcal{S}_q^d$ and $z_i \in \{1, \ldots, k\}$, from the following distribution:
$$p(\theta; \mathcal{X}) \propto \exp\left(-\frac{\varepsilon}{2}\sum_{i=1}^{n} d^2(x_i, S_{z_i})\right), \qquad (10)$$
where $\varepsilon > 0$ is the privacy parameter. The following proposition shows that exact sampling from the distribution in Eq. (10) results in a provably differentially private algorithm. Its proof is trivial and is deferred to Appendix D.1. Note that unlike sample-aggregate based methods, the exponential mechanism can privately release the clustering assignment $z$. This does not violate the lower bound in [29] because the released clustering assignment $z$ is not guaranteed to be exactly correct.
Proposition 4.1. The randomized algorithm $\mathcal{A} : \mathcal{X} \mapsto \theta$ that outputs one sample from the distribution defined in Eq. (10) is $\varepsilon$-differentially private.
4.1 A Gibbs sampling implementation
It is hard in general to sample parameters from distributions as complicated as in Eq. (10). We present a Gibbs sampler that iteratively samples subspaces $\{S_\ell\}$ and cluster assignments $\{z_i\}$ from their conditional distributions.

Update of $z_i$: When $\{S_\ell\}$ and $z_{-i}$ are fixed, the conditional distribution of $z_i$ is
$$p(z_i \mid \{S_\ell\}_{\ell=1}^k, z_{-i}; \mathcal{X}) \propto \exp(-\varepsilon/2 \cdot d^2(x_i, S_{z_i})). \qquad (11)$$
Since $d(x_i, S_{z_i})$ can be efficiently computed (given an orthonormal basis of $S_{z_i}$), the update of $z_i$ can easily be done by sampling from a categorical distribution.

Update of $S_\ell$: Let $\tilde{\mathcal{X}}^{(\ell)} = \{x_i \in \mathcal{X} : z_i = \ell\}$ denote the data points assigned to cluster $\ell$ and $\tilde{n}_\ell = |\tilde{\mathcal{X}}^{(\ell)}|$. Denote by $\tilde{X}^{(\ell)} \in \mathbb{R}^{d\times\tilde{n}_\ell}$ the matrix with columns corresponding to all data points in $\tilde{\mathcal{X}}^{(\ell)}$. The distribution over $S_\ell$ conditioned on $z$ can then be written as
$$p(S_\ell = \mathrm{range}(U_\ell) \mid z; \mathcal{X}) \propto \exp(\varepsilon/2 \cdot \mathrm{tr}(U_\ell^\top A_\ell U_\ell)); \quad U_\ell \in \mathbb{R}^{d\times q},\ U_\ell^\top U_\ell = I_{q\times q}, \qquad (12)$$
where $A_\ell = \tilde{X}^{(\ell)}\tilde{X}^{(\ell)\top}$ is the unnormalized sample covariance matrix. A distribution of the form in Eq. (12) is a special case of the matrix Bingham distribution, which admits a Gibbs sampler [16]. We give implementation details in Appendix D.2, with modifications so that the resulting Gibbs sampler is empirically more efficient for a wide range of parameter settings.
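Putting Eqs. (11) and (12) together, one Gibbs sweep can be sketched as below. Sampling from the matrix Bingham conditional is delegated to a hypothetical helper `sample_matrix_bingham`, standing in for the sampler of [16] with the Appendix D.2 modifications; the assignment updates are exact.

```python
import numpy as np

def gibbs_sweep(X, U, z, eps, rng, sample_matrix_bingham):
    """X: d x n data; U: list of k orthonormal d x q bases; z: length-n
    integer array of assignments; sample_matrix_bingham(A, q, rng): helper
    drawing U_l with density exp(tr(U^T A U)) on the Stiefel manifold."""
    d, n = X.shape
    k = len(U)
    # --- update cluster assignments z_i, Eq. (11) ---
    for i in range(n):
        x = X[:, i]
        d2 = np.array([x @ x - np.linalg.norm(Ul.T @ x) ** 2 for Ul in U])
        logp = -0.5 * eps * d2
        p = np.exp(logp - logp.max())        # stabilized softmax
        z[i] = rng.choice(k, p=p / p.sum())
    # --- update subspaces S_l, Eq. (12) ---
    for l in range(k):
        Xl = X[:, z == l]
        A = Xl @ Xl.T                        # unnormalized sample covariance
        U[l] = sample_matrix_bingham(0.5 * eps * A, U[l].shape[1], rng)
    return U, z
```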
³ Recently [28] established a full clustering guarantee for SSC, however, under strong assumptions.
4.2 Discussion
The proposed Gibbs sampler resembles the k-plane algorithm for subspace clustering [3]. It is in fact a "probabilistic" version of k-plane, since sampling is performed at each iteration rather than deterministic updates. Furthermore, the proposed Gibbs sampler can be viewed as posterior sampling for the following generative model: first sample $U_\ell$ uniformly at random from $\mathcal{S}_q^d$ for each subspace $S_\ell$; afterwards, cluster assignments $\{z_i\}_{i=1}^n$ are sampled such that $\Pr[z_i = j] = 1/k$, and $x_i$ is set as $x_i = U_\ell y_i + P_{U_\ell^\perp} w_i$, where $y_i$ is sampled uniformly at random from the $q$-dimensional unit ball and $w_i \sim \mathcal{N}(0, I_d/\varepsilon)$. The connection between the above-mentioned generative model and the Gibbs sampler is formally justified in Appendix D.3. The generative model is strikingly similar to the well-known mixtures of probabilistic PCA (MPPCA, [27]) model, obtained by setting the variance parameters $\sigma_\ell$ in MPPCA to $\sqrt{1/\varepsilon}$. The only difference is that the $y_i$ are sampled uniformly at random from a unit ball⁴ and the noise $w_i$ is constrained to $U_\ell^\perp$, the complement space of $U_\ell$. Note that this is closely related to the earlier observation that "posterior sampling is private" [20, 6, 31], but different in that we constructed a model from a private procedure rather than the other way round.

As the privacy parameter $\varepsilon \to \infty$ (i.e., no privacy guarantee), we arrive immediately at the exact k-plane algorithm and the posterior distribution concentrates around the optimal k-means solution $(\mathcal{C}^*, z^*)$. This behavior is similar to what a small-variance asymptotic analysis of MPPCA models reveals [30]. On the other hand, the proposed Gibbs sampler is significantly different from previous Bayesian probabilistic PCA formulations [34, 30] in that the subspaces are sampled from a matrix Bingham distribution. Finally, we remark that the proposed Gibbs sampler is only asymptotically private because Proposition 4.1 requires exact (or nearly exact [31]) sampling from Eq. (10).
5 Numerical results
We provide numerical results of both the sample-aggregate and Gibbs sampling algorithms on synthetic and real-world datasets. We also compare with a baseline method implemented based on the
k-plane algorithm [3] with perturbed sample covariance matrix via the SuLQ framework [2] (details presented in Appendix E). Three solvers are considered for the sample-aggregate framework:
threshold-based subspace clustering (TSC, [14]), which has a provable utility guarantee with sample-aggregation on stochastic models, along with sparse subspace clustering (SSC, [11]) and low-rank
representation (LRR, [17]), the two state-of-the-art methods for subspace clustering. For Gibbs
sampling, we use non-private SSC and LRR solutions as initialization for the Gibbs sampler. All
methods are implemented using Matlab.
For synthetic datasets, we first generate $k$ random $q$-dimensional linear subspaces. Each subspace is generated by first sampling a $d\times q$ random Gaussian matrix and then recording its column space. $n$ data points are then assigned to one of the $k$ subspaces (clusters) uniformly at random. To generate a data point $x_i$ assigned to subspace $S_\ell$, we first sample $y_i \in \mathbb{R}^q$ with $\|y_i\|_2 = 1$ uniformly at random from the $q$-dimensional unit sphere. Afterwards, $x_i$ is set as $x_i = U_\ell y_i + w_i$, where $U_\ell \in \mathbb{R}^{d\times q}$ is an orthonormal basis associated with $S_\ell$ and $w_i \sim \mathcal{N}(0, \sigma^2 I_d)$ is a noise vector.
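Reusing the sketches introduced earlier (`sample_cluster`, `tsc`, `clustering_cost`, `wasserstein_dist_sq`), a hedged non-private version of this synthetic pipeline looks like:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k, q, sigma = 1000, 10, 3, 3, 0.1
bases = [np.linalg.qr(rng.standard_normal((d, q)))[0] for _ in range(k)]
X = np.hstack([sample_cluster(U, n // k, sigma, rng) for U in bases])
C_hat, _ = tsc(X, k, q, s=n // (6 * k))     # s within the Lemma 3.5 range
print("k-means cost:", clustering_cost(X, C_hat))
print("squared Wasserstein distance:", wasserstein_dist_sq(C_hat, bases))
```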
Figure 2 compares the utility (measured in terms of the k-means objective $\mathrm{cost}(\hat{\mathcal{C}};\mathcal{X})$ and the Wasserstein distance $d_W(\hat{\mathcal{C}}, \mathcal{C}^*)$) of sample aggregation, Gibbs sampling and SuLQ subspace clustering. As shown in the plots, sample-aggregation algorithms have poor utility unless the privacy parameter $\varepsilon$ is truly large (which means very little privacy protection). On the other hand, both Gibbs sampling and SuLQ subspace clustering give reasonably good performance. Figure 2 also shows that SuLQ scales poorly with the ambient dimension $d$. This is because SuLQ subspace clustering requires calibrating noise to a $d\times d$ sample covariance matrix, which induces much error when $d$ is large. Gibbs sampling seems to be robust to various $d$ settings.

We also experiment on real-world datasets. The right two plots in Figure 2 report utility on a subset of the extended Yale Face Dataset B [13] for face clustering. 5 random individuals are picked, forming a subset of the original dataset with $n = 320$ data points (images). The dataset is preprocessed by projecting each individual onto a 9D affine subspace via PCA. Such a preprocessing step was adopted in [32, 29] and was theoretically justified in [1]. Afterwards, the ambient dimension of the entire dataset is reduced to $d = 50$ by random Gaussian projection. The plots show that Gibbs sampling significantly outperforms the other algorithms.
⁴ In MPPCA the latent variables $y_i$ are sampled from a normal distribution $\mathcal{N}(0, \sigma^2 I_q)$.
[Figure 2: six panels plotting k-means cost (top row) and Wasserstein distance (bottom row) against $\log_{10}\varepsilon$, with curves for s.a. SSC/TSC/LRR, exp. SSC/LRR, SuLQ-10 and SuLQ-50.]

Figure 2: Utility under fixed privacy budget $\varepsilon$. Top row shows k-means cost and bottom row shows the Wasserstein distance $d_W(\hat{\mathcal{C}}, \mathcal{C}^*)$. From left to right: synthetic dataset, $n = 5000$, $d = 5$, $k = 3$, $q = 3$, $\sigma = 0.01$; $n = 1000$, $d = 10$, $k = 3$, $q = 3$, $\sigma = 0.1$; extended Yale Face Dataset B (a subset), $n = 320$, $d = 50$, $k = 5$, $q = 9$, $\sigma = 0.01$. $\delta$ is set to $1/(n\ln n)$ for $(\varepsilon,\delta)$-privacy algorithms. "s.a." stands for smooth sensitivity and "exp." stands for exponential mechanism. "SuLQ-10" and "SuLQ-50" stand for the SuLQ framework performing 10 and 50 iterations. Gibbs sampling is run for 10000 iterations and the mean of the last 100 samples is reported.
[Figure 3: three panels plotting the test statistic, k-means cost and Wasserstein distance against the number of Gibbs iterations (×100), with curves for $\varepsilon = 0.1, 1, 10, 100$.]

Figure 3: Test statistic, k-means cost and $d_W(\hat{\mathcal{C}}, \mathcal{C}^*)$ of 8 trials of the Gibbs sampler under different privacy settings. Synthetic dataset setting: $n = 1000$, $d = 10$, $k = 3$, $q = 3$, $\sigma = 0.1$.
In Figure 3 we investigate the mixing behavior of the proposed Gibbs sampler. For multiple trials of Gibbs sampling we plot the k-means objective, the Wasserstein distance and a test statistic $\frac{1}{\sqrt{kq}}\big(\sum_{\ell=1}^{k}\|\frac{1}{T}\sum_{t=1}^{T} U_\ell^{(t)}\|_F^2\big)^{1/2}$, where $U_\ell^{(t)}$ is a basis sample of $S_\ell$ at the $t$-th iteration. The test statistic has mean zero under the distribution in Eq. (10), and a similar statistic was used in [4] as a diagnostic of the mixing behavior of another Gibbs sampler. Figure 3 shows that under various privacy parameter settings, the proposed Gibbs sampler mixes quite well after 10000 iterations.
6 Conclusion
In this paper we consider subspace clustering subject to formal differential privacy constraints. We
analyzed two sample-aggregate based algorithms with provable utility guarantees under agnostic and
stochastic data models. We also propose a Gibbs sampling subspace clustering algorithm based on
the exponential mechanism that works well in practice. Some interesting future directions include
utility bounds for state-of-the-art subspace clustering algorithms like SSC or LRR.
Acknowledgement This research is supported in part by grant NSF CAREER IIS-1252412, NSF
Award BCS-0941518, and a grant by Singapore National Research Foundation under its International Research Centre @ Singapore Funding Initiative administered by the IDM Programme Office.
References
[1] R. Basri and D. Jacobs. Lambertian reflectance and linear subspaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(2):218–233, 2003.
[2] A. Blum, C. Dwork, F. McSherry, and K. Nissim. Practical privacy: the SuLQ framework. In PODS, 2005.
[3] P. S. Bradley and O. L. Mangasarian. k-plane clustering. Journal of Global Optimization, 16(1), 2000.
[4] K. Chaudhuri, A. Sarwate, and K. Sinha. Near-optimal algorithms for differentially private principal components. In NIPS, 2012.
[5] Y. Chen, A. Jalali, S. Sanghavi, and H. Xu. Clustering partially observed graphs via convex optimization. The Journal of Machine Learning Research, 15(1):2213–2238, 2014.
[6] C. Dimitrakakis, B. Nelson, A. Mitrokotsa, and B. I. Rubinstein. Robust and private Bayesian inference. In Algorithmic Learning Theory, pages 291–305. Springer, 2014.
[7] C. Dwork, K. Kenthapadi, F. McSherry, I. Mironov, and M. Naor. Our data, ourselves: Privacy via distributed noise generation. In EUROCRYPT, 2006.
[8] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In TCC, 2006.
[9] C. Dwork and A. Roth. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3–4):211–407, 2014.
[10] C. Dwork, K. Talwar, A. Thakurta, and L. Zhang. Analyze Gauss: Optimal bounds for privacy-preserving principal component analysis. In STOC, 2014.
[11] E. Elhamifar and R. Vidal. Sparse subspace clustering: Algorithm, theory and applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(11):2765–2781, 2013.
[12] D. Feldman, M. Schmidt, and C. Sohler. Turning big data into tiny data: Constant-size coresets for k-means, PCA and projective clustering. In SODA, 2013.
[13] A. Georghiades, P. Belhumeur, and D. Kriegman. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(6):643–660, 2001.
[14] R. Heckel and H. Bölcskei. Robust subspace clustering via thresholding. arXiv:1307.4891, 2013.
[15] J. Ho, M.-H. Yang, J. Lim, K.-C. Lee, and D. Kriegman. Clustering appearances of objects under varying illumination conditions. In CVPR, 2003.
[16] P. Hoff. Simulation of the matrix Bingham-von Mises-Fisher distribution, with applications to multivariate and relational data. Journal of Computational and Graphical Statistics, 18(2):438–456, 2009.
[17] G. Liu, Z. Lin, S. Yan, J. Sun, Y. Ma, and Y. Yu. Robust recovery of subspace structures by low-rank representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):171–184, 2012.
[18] F. McSherry and K. Talwar. Mechanism design via differential privacy. In FOCS, 2007.
[19] B. McWilliams and G. Montana. Subspace clustering of high-dimensional data: a predictive approach. Data Mining and Knowledge Discovery, 28(3):736–772, 2014.
[20] D. J. Mir. Differential privacy: an exploration of the privacy-utility landscape. PhD thesis, Rutgers University, 2013.
[21] B. Nasihatkon and R. Hartley. Graph connectivity in sparse subspace clustering. In CVPR, 2011.
[22] K. Nissim, S. Raskhodnikova, and A. Smith. Smooth sensitivity and sampling in private data analysis. In STOC, 2007.
[23] R. Ostrovsky, Y. Rabani, L. Schulman, and C. Swamy. The effectiveness of Lloyd-type methods for the k-means problem. In FOCS, 2006.
[24] M. Soltanolkotabi, E. J. Candès, et al. A geometric analysis of subspace clustering with outliers. The Annals of Statistics, 40(4):2195–2238, 2012.
[25] M. Soltanolkotabi, E. Elhamifar, and E. Candès. Robust subspace clustering. The Annals of Statistics, 42(2):669–699, 2014.
[26] D. Su, J. Cao, N. Li, E. Bertino, and H. Jin. Differentially private k-means clustering. arXiv, 2015.
[27] M. Tipping and C. Bishop. Mixtures of probabilistic principal component analyzers. Neural Computation, 11(2):443–482, 1999.
[28] Y. Wang, Y.-X. Wang, and A. Singh. Clustering consistent sparse subspace clustering. arXiv, 2015.
[29] Y. Wang, Y.-X. Wang, and A. Singh. A deterministic analysis of noisy sparse subspace clustering for dimensionality-reduced data. In ICML, 2015.
[30] Y. Wang and J. Zhu. DP-space: Bayesian nonparametric subspace clustering with small-variance asymptotic analysis. In ICML, 2015.
[31] Y.-X. Wang, S. Fienberg, and A. Smola. Privacy for free: Posterior sampling and stochastic gradient Monte Carlo. In ICML, 2015.
[32] Y.-X. Wang and H. Xu. Noisy sparse subspace clustering. In ICML, pages 89–97, 2013.
[33] A. Zhang, N. Fawaz, S. Ioannidis, and A. Montanari. Guess who rated this movie: Identifying users through subspace clustering. arXiv, 2012.
[34] Z. Zhang, K. L. Chan, J. Kwok, and D.-Y. Yeung. Bayesian inference on principal component analysis using reversible jump Markov chain Monte Carlo. In AAAI, 2004.
5,515 | 5,992 | Compressive spectral embedding: sidestepping the SVD
Upamanyu Madhow
[email protected]
ECE Department, UC Santa Barbara
Dinesh Ramasamy
[email protected]
ECE Department, UC Santa Barbara
Abstract
Spectral embedding based on the Singular Value Decomposition (SVD) is a
widely used "preprocessing" step in many learning tasks, typically leading to dimensionality reduction by projecting onto a number of dominant singular vectors
and rescaling the coordinate axes (by a predefined function of the singular value).
However, the number of such vectors required to capture problem structure grows
with problem size, and even partial SVD computation becomes a bottleneck. In
this paper, we propose a low-complexity compressive spectral embedding algorithm, which employs random projections and finite order polynomial expansions
to compute approximations to SVD-based embedding. For an $m\times n$ matrix with $T$
non-zeros, its time complexity is O ((T + m + n) log(m + n)), and the embedding dimension is O(log(m + n)), both of which are independent of the number
of singular vectors whose effect we wish to capture. To the best of our knowledge,
this is the first work to circumvent this dependence on the number of singular vectors for general SVD-based embeddings. The key to sidestepping the SVD is the
observation that, for downstream inference tasks such as clustering and classification, we are only interested in using the resulting embedding to evaluate pairwise
similarity metrics derived from the ?2 -norm, rather than capturing the effect of the
underlying matrix on arbitrary vectors as a partial SVD tries to do. Our numerical
results on network datasets demonstrate the efficacy of the proposed method, and
motivate further exploration of its application to large-scale inference tasks.
1 Introduction
Inference tasks encountered in natural language processing, graph inference and manifold learning
employ the singular value decomposition (SVD) as a first step to reduce dimensionality while retaining useful structure in the input. Such spectral embeddings go under various guises: Principal
Component Analysis (PCA), Latent Semantic Indexing (natural language processing), Kernel Principal Component Analysis, commute time and diffusion embeddings of graphs, to name a few. In
this paper, we present a compressive approach for accomplishing SVD-based dimensionality reduction, or embedding, without actually performing the computationally expensive SVD step.
The setting is as follows. The input is represented in matrix form. This matrix could represent the
adjacency matrix or the Laplacian of a graph, the probability transition matrix of a random walker
on the graph, a bag-of-words representation of documents, the action of a kernel on a set of l points
$\{x(p) \in \mathbb{R}^d : p = 1, \ldots, l\}$ (kernel PCA) [1][2], such as
$$A(p,q) = e^{-\|x(p)-x(q)\|^2/2\sigma^2} \quad \text{(or)} \quad A(p,q) = \mathbb{I}(\|x(p)-x(q)\| < \epsilon), \quad 1 \le p, q \le l, \qquad (1)$$
where $\mathbb{I}(\cdot)$ denotes the indicator function, or matrices derived from K-nearest-neighbor graphs constructed from $\{x(p)\}$. We wish to compute a transformation of the rows of this $m\times n$ matrix
$A$ which succinctly captures the global structure of $A$ via Euclidean distances (or similarity metrics derived from the $\ell_2$-norm, such as normalized correlations). A common approach is to compute a partial SVD of $A$, $\sum_{l=1}^{k}\sigma_l u_l v_l^\top$, $k \ll n$, and to use it to embed the rows of $A$ into a $k$-dimensional space using the rows of $E = [f(\sigma_1)u_1\ f(\sigma_2)u_2\ \cdots\ f(\sigma_k)u_k]$, for some function $f(\cdot)$. The embedding of the variable corresponding to the $l$-th row of the matrix $A$ is the $l$-th row of $E$. For example, $f(x) = x$ corresponds to Principal Component Analysis (PCA): the $k$-dimensional rows of $E$ are projections of the $n$-dimensional rows of $A$ along the first $k$ principal components, $\{v_l,\ l = 1, \ldots, k\}$. Other important choices include $f(x) = \mathrm{constant}$, used to cut graphs [3], and $f(x) = 1/\sqrt{1-x}$ for the commute time embedding of graphs [4]. Inference tasks such as (unsupervised) clustering and (supervised) classification are performed using $\ell_2$-based pairwise similarity metrics on the embedded coordinates (rows of $E$) instead of the ambient data (rows of $A$).
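For concreteness, a plain (non-compressive) NumPy reference for this embedding, i.e., the object whose geometry we will later approximate, might look as follows; the full SVD call is exactly the bottleneck the proposed method avoids.

```python
import numpy as np

def spectral_embedding(A, k, f):
    """Rows of E = [f(s_1) u_1, ..., f(s_k) u_k] via a (costly) partial SVD."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)  # O(m n^2): the bottleneck
    return U[:, :k] * f(s[:k])                       # scale columns by f(sigma)

# e.g. f = lambda s: s gives PCA scores; f = lambda s: 1/np.sqrt(1-s) (on a
# normalized adjacency matrix with s < 1) gives a commute-time-style embedding.
```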
Beyond the obvious benefit of dimensionality reduction from n to k, embeddings derived from the
leading partial-SVD can often be interpreted as denoising, since the "noise" in matrices arising from
real-world data manifests itself via the smaller singular vectors of A (e.g., see [5], which analyzes
graph adjacency matrices). This is often cited as a motivation for choosing PCA over "isotropic"
dimensionality reduction techniques such as random embeddings, which, under the setting of the
Johnson-Lindenstrauss (JL) lemma, can also preserve structure.
The number of singular vectors $k$ needed to capture the structure of an $m\times n$ matrix grows with its
size, and two bottlenecks emerge as we scale: (a) The computational effort required to extract a large
number of singular vectors using conventional iterative methods such as Lanczos or simultaneous
iteration or approximate algorithms like Nystrom [6], [7] and Randomized SVD [8] for computation
of partial SVD becomes prohibitive (scaling as $\Omega(kT)$, where $T$ is the number of non-zeros in $A$);
(b) the resulting k-dimensional embedding becomes unwieldy for use in subsequent inference steps.
Approach and Contributions: In this paper, we tackle these scalability bottlenecks by focusing on
what embeddings are actually used for: computing $\ell_2$-based pairwise similarity metrics typically
used for supervised or unsupervised learning. For example, K-means clustering uses pairwise Euclidean distances, and SVM-based classification uses pairwise inner products. We therefore ask the
following question: "Is it possible to compute an embedding which captures the pairwise Euclidean distances between the rows of the spectral embedding $E = [f(\sigma_1)u_1 \cdots f(\sigma_k)u_k]$, while sidestepping the computationally expensive partial SVD?" We answer this question in the affirmative by
presenting a compressive algorithm which directly computes a low-dimensional embedding.
There are two key insights that drive our algorithm:
• By approximating $f(\cdot)$ by a low-order ($L \ll \min\{m,n\}$) polynomial, we can compute the embedding iteratively using matrix-vector products of the form $Aq$ or $A^\top q$.
• The iterations can be computed compressively: by virtue of the celebrated JL lemma, the embedding geometry is approximately captured by a small number $d = O(\log(m+n))$ of randomly picked starting vectors.
The number of passes over $A$ and $A^\top$ and the time complexity of the algorithm are $L$, $L$ and $O(L(T+m+n)\log(m+n))$, respectively. These are all independent of the number of singular vectors $k$ whose
effect we wish to capture via the embedding. This is in stark contrast to embedding directly based on
the partial SVD. Our algorithm lends itself to parallel implementation as a sequence of $2L$ matrix-vector products interlaced with vector additions, run in parallel across $d = O(\log(m+n))$ randomly
chosen starting vectors. This approach significantly reduces both computational complexity and
embedding dimensionality relative to partial SVD. A freely downloadable Python implementation
of the proposed algorithm that exploits this inherent parallelism can be found in [9].
2 Related work
As discussed in Section 3.1, the concept of compressive measurements forms a key ingredient in our
algorithm, and is based on the JL lemma [10]. The latter, which provides probabilistic guarantees on
approximate preservation of the Euclidean geometry for a finite collection of points under random
projections, forms the basis for many other applications, such as compressive sensing [11].
We now mention a few techniques for exact and approximate SVD computation, before discussing
algorithms that sidestep the SVD as we do. The time complexity of the full SVD of an $m\times n$ matrix is $O(mn^2)$ (for $m > n$). Partial SVDs are computed using iterative methods for eigendecompositions of symmetric matrices derived from $A$, such as $AA^\top$ and $\big[\,0\ \ A^\top;\ A\ \ 0\,\big]$ [12]. The complexity of standard iterative eigensolvers such as simultaneous iteration [13] and the Lanczos method scales as $\Omega(kT)$ [12], where $T$ denotes the number of non-zeros of $A$.
The leading $k$ (singular value, singular vector) triplets $\{(\sigma_l, u_l, v_l),\ l = 1, \ldots, k\}$ minimize the matrix reconstruction error under a rank-$k$ constraint: they are a solution to the optimization problem $\arg\min \|A - \sum_{l=1}^{k}\sigma_l u_l v_l^\top\|_F^2$, where $\|\cdot\|_F$ denotes the Frobenius norm. Approximate SVD algorithms strive to reduce this error while also placing constraints on the computational budget and/or the number of passes over $A$. A commonly employed approximate eigendecomposition algorithm is the Nystrom method [6], [7], based on random sampling of $s$ columns of $A$, which has time complexity $O(ksn + s^3)$. A number of variants of the Nystrom method for kernel matrices like (1) have been proposed in the literature. These aim to improve accuracy using preprocessing steps such as K-means clustering [14] or random projection trees [15]. Methods to reduce the complexity of the Nystrom algorithm to $O(ksn + k^3)$ [16], [17] enable Nystrom sketches that see more columns of $A$. The complexity of all of these grows as $\Omega(ksn)$. Other randomized algorithms, involving iterative computations, include the Randomized SVD [8]. Since all of these algorithms set out to recover $k$ leading eigenvectors (exact or otherwise), their complexity scales as $\Omega(kT)$.
We now turn to algorithms that sidestep SVD computation. In [18], [19], vertices of a graph are
embedded based on diffusion of probability mass in random walks on the graph, using the power
iteration run independently on random starting vectors, and stopping "prior to convergence." While
this approach is specialized to probability transition matrices (unlike our general framework) and
does not provide explicit control on the nature of the embedding as we do, a feature in common with
the present paper is that the time complexity of the algorithm and the dimensionality of the resulting
embedding are independent of the number of eigenvectors k captured by it. A parallel implementation of this algorithm was considered in [20]; similar parallelization directly applies to our algorithm.
Another specific application that falls within our general framework is the commute time embedding on a graph, based on the normalized adjacency matrix and weighing function $f(x) = 1/\sqrt{1-x}$ [4], [21]. Approximate commute time embeddings have been computed using Spielman-Teng solvers [22], [23] and the JL lemma in [24]. The complexity of the latter algorithm and the dimensionality of the resulting embedding are comparable to ours, but the method is specially designed for the normalized adjacency matrix and the weighing function $f(x) = 1/\sqrt{1-x}$. Our more general framework would, for example, provide the flexibility of suppressing small eigenvectors from contributing to the embedding (e.g., by setting $f(x) = \mathbb{I}(x > \epsilon)/\sqrt{1-x}$).
Thus, while randomized projections are extensively used in the embedding literature, to the best
of our knowledge, the present paper is the first to develop a general compressive framework for
spectral embeddings derived from the SVD. It is interesting to note that methods similar to ours
have been used in a different context, to estimate the empirical distribution of eigenvalues of a large
Hermitian matrix [25], [26]. These methods use a polynomial approximation of indicator functions $f(\lambda) = \mathbb{I}(a \le \lambda \le b)$ and random projections to compute an approximate histogram of the number of eigenvectors across different bands of the spectrum $[a, b] \subseteq [\lambda_{\min}, \lambda_{\max}]$.
3 Algorithm
We first present the algorithm for a symmetric $n\times n$ matrix $S$. Later, in Section 3.5, we show how to handle a general $m\times n$ matrix by considering a related $(m+n)\times(m+n)$ symmetric matrix. Let $\lambda_l$ denote the eigenvalues of $S$ sorted in descending order and $v_l$ their corresponding unit-norm eigenvectors (chosen to be orthogonal in case of repeated eigenvalues). For any function $g(x) : \mathbb{R} \to \mathbb{R}$, we denote by $g(S)$ the $n\times n$ symmetric matrix $g(S) = \sum_{l=1}^{n} g(\lambda_l) v_l v_l^\top$. We now develop an $O(n\log n)$ algorithm to compute a $d = O(\log n)$-dimensional embedding which approximately captures pairwise Euclidean distances between the rows of the embedding $E = [f(\lambda_1)v_1\ f(\lambda_2)v_2\ \cdots\ f(\lambda_n)v_n]$.
Rotations are inconsequential: We first observe that rotation of basis does not alter $\ell_2$-based similarity metrics. Since $V = [v_1 \cdots v_n]$ satisfies $VV^T = V^TV = I_n$, pairwise distances between the rows of E are equal to corresponding pairwise distances between the rows of $EV^T = \sum_{l=1}^{n} f(\lambda_l)\, v_l v_l^T = f(S)$. We use this observation to compute embeddings of the rows of $f(S)$ rather than those of E.
3.1 Compressive embedding
Suppose now that we know $f(S)$. This constitutes an n-dimensional embedding, and similarity queries between two "vertices" (we refer to the variables corresponding to rows of S as vertices, as we would for matrices derived from graphs) require $O(n)$ operations. However, we can reduce this time to $O(\log n)$ by using the JL lemma, which informs us that pairwise distances can be approximately captured by compressive projection onto $d = O(\log n)$ dimensions.
Specifically, for $d > (4 + 2\beta) \log n / (\epsilon^2/2 - \epsilon^3/3)$, let $\Psi$ denote an $n \times d$ matrix with i.i.d. entries drawn uniformly at random from $\{\pm 1/\sqrt{d}\}$. According to the JL lemma, pairwise distances between the rows of $f(S)\Psi$ approximate pairwise distances between the rows of $f(S)$ with high probability. In particular, the following statement holds with probability at least $1 - n^{-\beta}$: $(1-\epsilon)\|u - v\|^2 \le \|(u - v)\Psi\|^2 \le (1+\epsilon)\|u - v\|^2$, for any two rows u, v of $f(S)$.
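As a quick illustration, the dimension bound above can be evaluated directly. A minimal sketch (the function name is ours, and we assume the natural-logarithm form of the bound, as in Achlioptas's statement):

```python
import math

def jl_embedding_dim(n, eps, beta):
    """Smallest integer d with d > (4 + 2*beta) * ln(n) / (eps^2/2 - eps^3/3)."""
    return math.floor((4 + 2 * beta) * math.log(n) / (eps**2 / 2 - eps**3 / 3)) + 1

# e.g., jl_embedding_dim(317080, eps=0.5, beta=1.0) -> 913
```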
The key take-aways are that (a) we can reduce the embedding dimension to $d = O(\log n)$, since we are only interested in pairwise similarity measures, and (b) we do not need to compute $f(S)$; we only need to compute $f(S)\Psi$. We now discuss how to accomplish the latter efficiently.
3.2 Polynomial approximation of embedding
Direct computation of $E_\Psi = f(S)\Psi$ from the eigenvectors and eigenvalues of S, as $f(S) = \sum_l f(\lambda_l)\, v_l v_l^T$ would suggest, is expensive ($O(n^3)$). However, we now observe that computation of $\phi(S)\Psi$ is easy when $\phi(\cdot)$ is a polynomial. In this case, $\phi(S) = \sum_{p=0}^{L} b_p S^p$ for some $b_p \in \mathbb{R}$, so that $\phi(S)\Psi$ can be computed as a sequence of L matrix-vector products interlaced with vector additions run in parallel for each of the d columns of $\Psi$. Therefore, they only require $LdT + O(Ldn)$ flops. Our strategy is to approximate $E_\Psi = f(S)\Psi$ by $\tilde{E} = \tilde{f}_L(S)\Psi$, where $\tilde{f}_L(x)$ is an L-th order polynomial approximation of $f(x)$. We defer the details of computing a "good" polynomial approximation to Section 3.4. For now, we assume that one such approximation $\tilde{f}_L(\cdot)$ is available and give bounds on the loss in fidelity as a result of this approximation.
3.3 Performance guarantees
The spectral norm of the "error matrix" $Z = f(S) - \tilde{f}_L(S) = \sum_{r=1}^{n} \big(f(\lambda_r) - \tilde{f}_L(\lambda_r)\big) v_r v_r^T$ satisfies $\|Z\| = \delta = \max_l |f(\lambda_l) - \tilde{f}_L(\lambda_l)| \le \max_x |f(x) - \tilde{f}_L(x)|$, where the spectral norm of a matrix B, denoted by $\|B\|$, refers to the induced $\ell_2$-norm. For symmetric matrices, $\|B\| \le \eta \iff |\beta_l| \le \eta\ \forall l$, where $\beta_l$ are the eigenvalues of B. Letting $i_p$ denote the unit vector along the p-th coordinate of $\mathbb{R}^n$, the distance between the p, q-th rows of $\tilde{f}_L(S)$ can be written as
$$\|\tilde{f}_L(S)(i_p - i_q)\| = \|f(S)(i_p - i_q) - Z(i_p - i_q)\| \le \|E^T(i_p - i_q)\| + \delta\sqrt{2}. \qquad (2)$$
Similarly, we have that $\|\tilde{f}_L(S)(i_p - i_q)\| \ge \|E^T(i_p - i_q)\| - \delta\sqrt{2}$. Thus pairwise distances between the rows of $\tilde{f}_L(S)$ approximate those between the rows of E. However, the distortion term $\delta\sqrt{2}$ is additive and must be controlled by carefully choosing $\tilde{f}_L(\cdot)$, as discussed in Section 4.
Applying the JL lemma [10] to the rows of $\tilde{f}_L(S)$, we have that when $d > O(\epsilon^{-2} \log n)$, with i.i.d. entries drawn uniformly at random from $\{\pm 1/\sqrt{d}\}$, the embedding $\tilde{E} = \tilde{f}_L(S)\Psi$ captures pairwise distances between the rows of $\tilde{f}_L(S)$ up to a multiplicative distortion of $1 \pm \epsilon$ with high probability:
$$\|\tilde{E}^T(i_p - i_q)\| = \|\Psi^T \tilde{f}_L(S)(i_p - i_q)\| \le \sqrt{1+\epsilon}\, \|\tilde{f}_L(S)(i_p - i_q)\|.$$
Using (2), we can show that $\|\tilde{E}^T(i_p - i_q)\| \le \sqrt{1+\epsilon}\,\big(\|E^T(i_p - i_q)\| + \delta\sqrt{2}\big)$. Similarly, $\|\tilde{E}^T(i_p - i_q)\| \ge \sqrt{1-\epsilon}\,\big(\|E^T(i_p - i_q)\| - \delta\sqrt{2}\big)$. We state this result in Theorem 1.
Theorem 1. Let $\tilde{f}_L(x)$ denote an L-th order polynomial such that $\delta = \max_l |f(\lambda_l) - \tilde{f}_L(\lambda_l)| \le \max_x |f(x) - \tilde{f}_L(x)|$, and let $\Psi$ be an $n \times d$ matrix with entries drawn independently and uniformly at random from $\{\pm 1/\sqrt{d}\}$, where d is an integer satisfying $d > (4 + 2\beta) \log n / (\epsilon^2/2 - \epsilon^3/3)$. Let $g: \mathbb{R}^n \to \mathbb{R}^d$ denote the mapping from the i-th row of $E = [f(\lambda_1)\,v_1 \cdots f(\lambda_n)\,v_n]$ to the i-th row of $\tilde{E} = \tilde{f}_L(S)\Psi$. The following statement is true with probability at least $1 - n^{-\beta}$:
$$\sqrt{1-\epsilon}\,\big(\|u - v\| - \delta\sqrt{2}\big) \le \|g(u) - g(v)\| \le \sqrt{1+\epsilon}\,\big(\|u - v\| + \delta\sqrt{2}\big)$$
for any two rows u, v of E. Furthermore, there exists an algorithm to compute each of the $d = O(\log n)$ columns of $\tilde{E}$ in $O(L(T + n))$ flops independent of its other columns, which makes L passes over S (T is the number of non-zeros in S).
3.4 Choosing the polynomial approximation
We restrict attention to matrices which satisfy $\|S\| \le 1$, which implies that $|\lambda_l| \le 1$. We observe that we can trivially center and scale the spectrum of any matrix to satisfy this assumption when we have the bounds $\lambda_l \le \lambda_{\max}$ and $\lambda_l \ge \lambda_{\min}$, via the rescaling and centering operation $S^* = 2S/(\lambda_{\max} - \lambda_{\min}) - (\lambda_{\max} + \lambda_{\min})\, I_n/(\lambda_{\max} - \lambda_{\min})$ and by modifying $f(x)$ to $f^*(x) = f\big(x(\lambda_{\max} - \lambda_{\min})/2 + (\lambda_{\max} + \lambda_{\min})/2\big)$.
In order to compute a polynomial approximation of $f(x)$, we need to define the notion of "good" approximation. We showed in Section 3.3 that the errors introduced by the polynomial approximation can be summarized by furnishing a bound on the spectral norm of the error matrix $Z = f(S) - \tilde{f}_L(S)$: since $\|Z\| = \delta = \max_l |f(\lambda_l) - \tilde{f}_L(\lambda_l)|$, what matters is how well we approximate the function $f(\cdot)$ at the eigenvalues $\{\lambda_l\}$ of S. Indeed, if we know the eigenvalues, we can minimize $\|Z\|$ by minimizing $\max_l |f(\lambda_l) - \tilde{f}_L(\lambda_l)|$. This is not a particularly useful approach, since computing the eigenvalues is expensive. However, we can use our prior knowledge of the domain from which the matrix S comes to penalize deviations from $f(\cdot)$ differently for different values of $\lambda$. For example, if we know the distribution $p(x)$ of the eigenvalues of S, we can minimize the average error $\Delta_L = \int_{-1}^{1} p(\lambda)\, |f(\lambda) - \tilde{f}_L(\lambda)|^2\, d\lambda$. In our examples, for the sake of concreteness, we assume that the eigenvalues are uniformly distributed over $[-1, 1]$ and give a procedure to compute an L-th order polynomial approximation of $f(x)$ that minimizes $\Delta_L = (1/2)\int_{-1}^{1} |f(x) - \tilde{f}_L(x)|^2\, dx$.
A numerically stable procedure to generate finite order polynomial approximations of a function over $[-1, 1]$ with the objective of minimizing $\int_{-1}^{1} |f(x) - \tilde{f}_L(x)|^2\, dx$ is via Legendre polynomials $p(r, x)$, $r = 0, 1, \ldots, L$. They satisfy the recursion $p(r, x) = (2 - 1/r)\, x\, p(r-1, x) - (1 - 1/r)\, p(r-2, x)$ and are orthogonal: $\int_{-1}^{1} p(k, x)\, p(l, x)\, dx = 2 I(k = l)/(2k + 1)$. Therefore we set $\tilde{f}_L(x) = \sum_{r=0}^{L} a(r)\, p(r, x)$, where $a(r) = (r + 1/2) \int_{-1}^{1} p(r, x) f(x)\, dx$. We give a method in Algorithm 1 that uses the Legendre recursion to compute $p(r, S)\Psi$, $r = 0, 1, \ldots, L$ using $Ld$ matrix-vector products and vector additions. The coefficients $a(r)$ are used to compute $\tilde{f}_L(S)\Psi$ by adding weighted versions of $p(r, S)\Psi$.
Algorithm 1 Proposed algorithm to compute an approximate d-dimensional eigenvector embedding of an $n \times n$ symmetric matrix S (such that $\|S\| \le 1$) using the $n \times d$ random projection matrix $\Psi$.
1: Procedure FASTEMBEDEIG(S, f(x), L, $\Psi$):
2: //* Compute polynomial approximation $\tilde{f}_L(x)$ which minimizes $\int_{-1}^{1} |f(x) - \tilde{f}_L(x)|^2\, dx$ *//
3: for r = 0, ..., L do
4:   $a(r) \leftarrow (r + 1/2) \int_{x=-1}^{1} f(x)\, p(r, x)\, dx$   //* p(r, x): order-r Legendre polynomial *//
5: $Q(0) \leftarrow \Psi$, $Q(-1) \leftarrow 0$, $\tilde{E} \leftarrow a(0)\, Q(0)$
6: for r = 1, 2, ..., L do
7:   $Q(r) \leftarrow (2 - 1/r)\, S\, Q(r-1) - (1 - 1/r)\, Q(r-2)$   //* $Q(r) = p(r, S)\Psi$ *//
8:   $\tilde{E} \leftarrow \tilde{E} + a(r)\, Q(r)$   //* $\tilde{E}$ now holds $\tilde{f}_r(S)\Psi$ *//
9: return $\tilde{E}$   //* $\tilde{E} = \tilde{f}_L(S)\Psi$ *//
As described in Section 4, if we have prior knowledge of the distribution of eigenvalues (as we do for many commonly encountered large matrices), then we can "boost" the performance of the generic Algorithm 1 based on the assumption of eigenvalues uniformly distributed over $[-1, 1]$.
3.5 Embedding general matrices
We complete the algorithm description by generalizing to any $m \times n$ matrix A (not necessarily symmetric) such that $\|A\| \le 1$. The approach is to utilize Algorithm 1 to compute an approximate d-dimensional embedding of the symmetric matrix $S = [0\ A^T;\ A\ 0]$. Let $\{(\sigma_l, u_l, v_l) : l = 1, \ldots, \min\{m, n\}\}$ be an SVD of $A = \sum_l \sigma_l u_l v_l^T$ ($\|A\| \le 1 \Longrightarrow \sigma_l \le 1$). Consider the following spectral mapping of the rows of A to the rows of $E_{\text{row}} = [f(\sigma_1)u_1 \cdots f(\sigma_m)u_m]$ and the columns of A to the rows of $E_{\text{col}} = [f(\sigma_1)v_1 \cdots f(\sigma_n)v_n]$. It can be shown that the unit-norm orthogonal eigenvectors of S take the form $[v_l;\ u_l]/\sqrt{2}$ and $[v_l;\ -u_l]/\sqrt{2}$, $l = 1, \ldots, \min\{m, n\}$, and their corresponding eigenvalues are $\sigma_l$ and $-\sigma_l$ respectively. The remaining $|m - n|$ eigenvalues of S are equal to 0. Therefore, we call $\tilde{E}_{\text{all}} \leftarrow$ FASTEMBEDEIG(S, $f^*(x)$, L, $\Psi$) with $f^*(x) = f(x)I(x \ge 0) - f(-x)I(x < 0)$ and $\Psi$ an $(m+n) \times d$, $d = O(\log(m+n))$ matrix (entries drawn independently and uniformly at random from $\{\pm 1/\sqrt{d}\}$). Let $\tilde{E}_{\text{col}}$ and $\tilde{E}_{\text{row}}$ denote the first n and last m rows of $\tilde{E}_{\text{all}}$. From Theorem 1, we know that, with overwhelming probability, pairwise distances between any two rows of $\tilde{E}_{\text{row}}$ approximate those between corresponding rows of $E_{\text{row}}$. Similarly, pairwise distances between any two rows of $\tilde{E}_{\text{col}}$ approximate those between corresponding rows of $E_{\text{col}}$.
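A sketch of this reduction, reusing the fast_embed_eig routine sketched after Algorithm 1 (the function name and the SciPy block construction are ours):

```python
import numpy as np
import scipy.sparse as sp

def fast_embed_svd(A, f, L, d, rng=None):
    """Row and column embeddings of an m x n matrix A (||A|| <= 1) via the
    symmetric dilation S = [[0, A^T], [A, 0]] and Algorithm 1."""
    m, n = A.shape
    S = sp.bmat([[None, A.T], [A, None]], format="csr")
    # Odd extension f*(x) = f(x) I(x >= 0) - f(-x) I(x < 0).
    f_star = lambda x: np.where(x >= 0, f(x), -f(-x))
    E_all = fast_embed_eig(S, f_star, L, d, rng=rng)
    E_col, E_row = E_all[:n], E_all[n:]  # first n rows: columns of A; last m: rows of A
    return E_row, E_col
```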
4 Implementation considerations
We now briefly go over implementation considerations before presenting numerical results in Section 5.
Spectral norm estimates: In order to ensure that the eigenvalues of S are within $[-1, 1]$ as we have assumed, we scale the matrix by its spectral norm ($\|S\| = \max_l |\lambda_l|$). To this end, we obtain a tight lower bound (and a good approximation) on the spectral norm using power iteration (20 iterates on $6 \log n$ randomly chosen starting vectors), and then scale this up by a small factor (1.01) for our estimate (typically an upper bound) of $\|S\|$.
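A sketch of this estimator (our naming; the 20 iterations, roughly 6 log n random starts, and the 1.01 inflation factor follow the text):

```python
import numpy as np

def spectral_norm_estimate(S, n_iter=20, rng=None):
    """Estimate ||S|| = max |lambda_l| by power iteration on ~6 log n random starts."""
    rng = np.random.default_rng(rng)
    n = S.shape[0]
    X = rng.standard_normal((n, max(1, int(6 * np.log(n)))))
    for _ in range(n_iter):
        X = S @ X
        X /= np.linalg.norm(X, axis=0)  # keep columns normalized
    # Lower bound from the last iterate, inflated slightly to get an upper estimate.
    return 1.01 * np.max(np.linalg.norm(S @ X, axis=0))
```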
Polynomial approximation order L: The error in approximating $f(\cdot)$ by $\tilde{f}_L(\cdot)$, as measured by $\Delta_L = \int_{-1}^{1} |f(x) - \tilde{f}_L(x)|^2\, dx$, is a non-increasing function of the polynomial order L. Reduction in $\Delta_L$ often corresponds to a reduction in $\delta$, which appears as a bound on distortion in Theorem 1. "Smooth" functions generally admit a lower order approximation for the same target error $\Delta_L$, and hence yield considerable savings in algorithm complexity, which scales linearly with L.
Polynomial approximation method: The rate at which $\delta$ decreases as we increase L depends on the weighting function $p(\lambda)$ used to compute $\tilde{f}_L(\cdot)$ (by minimizing $\Delta_L = \int p(\lambda)\, |f(\lambda) - \tilde{f}_L(\lambda)|^2\, d\lambda$). The choice $p(\lambda) \equiv 1$ yields the Legendre recursion used in Algorithm 1, whereas $p(\lambda) \propto 1/\sqrt{1 - \lambda^2}$ corresponds to the Chebyshev recursion, which is known to result in fast convergence. We defer to future work a detailed study of the impact of alternative choices for $p(\lambda)$ on $\delta$.
Denoising by cascading: In large-scale problems, it may be necessary to drive the contribution from certain singular vectors to zero. In many settings, singular vectors with smaller singular values correspond to noise. The number of such singular values can scale as fast as $O(\min\{m, n\})$. Therefore, when we place nulls (zeros) in $f(\cdot)$, it is desirable to ensure that these nulls are pronounced after we approximate $f(\cdot)$ by $\tilde{f}_L(\cdot)$. We do this by computing $\big(\tilde{g}_{L/b}(S)\big)^b\, \Psi$, where $\tilde{g}_{L/b}(\cdot)$ is an $L/b$-th order approximation of $g(\cdot) = f^{1/b}(\cdot)$. The small residual values in the polynomial approximation of $f^{1/b}(\cdot)$ which correspond to $f(\cdot) = 0$ (the nulls which we have set) are driven further toward zero when passed through the $x^b$ non-linearity, making the nulls more pronounced.
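A sketch of the cascade, assuming the Legendre coefficients a_root for the (L/b)-th order approximation of f^(1/b) have already been computed as in Algorithm 1 (helper names are ours):

```python
import numpy as np

def apply_legendre_poly(S, a, X):
    """Compute (sum_r a[r] * p(r, S)) @ X via the three-term Legendre recursion."""
    Q_prev, Q = np.zeros_like(X), X
    Y = a[0] * Q
    for r in range(1, len(a)):
        Q, Q_prev = (2 - 1 / r) * (S @ Q) - (1 - 1 / r) * Q_prev, Q
        Y = Y + a[r] * Q
    return Y

def cascaded_embed(S, a_root, b, Psi):
    """Approximate f(S) @ Psi by applying g~(S) b times to Psi, where g~ ~ f^(1/b)."""
    E = Psi
    for _ in range(b):
        E = apply_legendre_poly(S, a_root, E)
    return E
```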
5 Numerical results
While the proposed approach is particularly useful for large problems in which exact eigendecomposition is computationally infeasible, for the purpose of comparison, our results are restricted to smaller settings where the exact solution can be computed. We compute the exact partial eigendecomposition using the ARPACK library (called from MATLAB).
[Figure 1: DBLP collaboration network normalized correlations. (a) Change in normalized inner product vs. embedding dimension d, shown as 1st/5th/25th/50th/75th/95th/99th percentile curves. (b) Effect of cascading: compressive embedding vs. eigenvector embedding normalized correlation percentiles, for b = 1 (left) and b = 2 (right).]
For a given choice of weighing function $f(\lambda)$, the associated embedding $E = [f(\lambda_1)v_1 \cdots f(\lambda_n)v_n]$ is compared with the compressive embedding $\tilde{E}$ returned by Algorithm 1. The latter was implemented in Python using SciPy's sparse matrix-multiplication routines and is available for download from [9].
We consider two real world undirected graphs in [27] for our evaluation, and compute embeddings for the normalized adjacency matrix $\tilde{A} = D^{-1/2} A D^{-1/2}$, where D is a diagonal matrix with the row sums of the adjacency matrix A; the eigenvalues of $\tilde{A}$ lie in $[-1, 1]$. We study the accuracy of embeddings by comparing pairwise normalized correlations between the i, j-th rows of E, given by $\langle E(i,:), E(j,:)\rangle / (\|E(i,:)\|\, \|E(j,:)\|)$, with those predicted by the approximate embedding, $\langle \tilde{E}(i,:), \tilde{E}(j,:)\rangle / (\|\tilde{E}(i,:)\|\, \|\tilde{E}(j,:)\|)$ ($E(i,:)$ is short-hand for the i-th row of E).
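For reference, the preprocessing step and the similarity score can be written as follows (a sketch; function names are ours, and A is assumed to be a SciPy sparse adjacency matrix):

```python
import numpy as np
import scipy.sparse as sp

def normalized_adjacency(A):
    """D^{-1/2} A D^{-1/2} for a sparse adjacency matrix A."""
    deg = np.asarray(A.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(np.where(deg > 0, 1.0 / np.sqrt(np.maximum(deg, 1)), 0.0))
    return d_inv_sqrt @ A @ d_inv_sqrt

def normalized_correlation(E, i, j):
    """<E(i,:), E(j,:)> / (||E(i,:)|| ||E(j,:)||)."""
    u, v = E[i], E[j]
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
```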
DBLP collaboration network [27] is an undirected graph on n = 317080 vertices with 1049866 edges. We compute the leading 500 eigenvectors of the normalized adjacency matrix $\tilde{A}$. The smallest of the five hundred eigenvalues is 0.98, so we set $f(\lambda) = I(\lambda \ge 0.98)$ and $S = \tilde{A}$ in Algorithm 1, and compare the resulting embedding $\tilde{E}$ with $E = [v_1 \cdots v_{500}]$. We demonstrate the dependence of the quality of the embedding $\tilde{E}$ returned by the proposed algorithm on two parameters: (i) the number of random starting vectors d, which gives the dimensionality of the embedding, and (ii) the boosting/cascading parameter b, using this dataset.
Dependence on the number of random projections d: In Figure (1a), d ranges from 1 to $120 \approx 9 \log n$, and we plot the 1st, 5th, 25th, 50th, 75th, 95th and 99th percentile values of the deviation between the compressive normalized correlation (from the rows of $\tilde{E}$) and the corresponding exact normalized correlation (rows of E). The deviation decreases with increasing d, corresponding to $\ell_2$-norm concentration (JL lemma), but this payoff saturates for large values of d as polynomial approximation errors start to dominate. From the 5th and 95th percentile curves, we see that a significant fraction (90%) of pairwise normalized correlations in $\tilde{E}$ lie within $\pm 0.2$ of their corresponding values in E when $d = 80 \approx 6 \log n$. For Figure (1a), we use L = 180 matrix-vector products for each randomly picked starting vector and set the cascading parameter b = 2 for the algorithm in Section 4.
Dependence on cascading parameter b: In Section 4 we described how cascading can help suppress the contribution to the embedding $\tilde{E}$ of the eigenvectors whose eigenvalues lie in regions where we have set $f(\lambda) = 0$. We illustrate the importance of this boosting procedure by comparing the quality of the embedding $\tilde{E}$ for b = 1 and b = 2 (keeping the other parameters of the algorithm in Section 4 fixed: L = 180 matrix-vector products for each of d = 80 randomly picked starting vectors). We report the results in Figure (1b), where we plot percentile values of compressive normalized correlation (from the rows of $\tilde{E}$) for different values of the exact normalized correlation (rows of E). For b = 1, the polynomial approximation of $f(\lambda)$ does not suppress small eigenvectors. As a result, we notice a deviation (bias) of the 50-percentile curve (green) from the ideal y = x dotted line (Figure 1b, left). This disappears for b = 2 (Figure 1b, right).
The running time for our algorithm on a standard workstation was about two orders of magnitude smaller than partial SVD using off-the-shelf sparse eigensolvers (e.g., the 80-dimensional embedding of the leading 500 eigenvectors of the DBLP graph took 1 minute, whereas their exact computation took 105 minutes). A more detailed comparison of running times is beyond the scope of this paper, but it is clear that the promised gains in computational complexity are realized in practice.
Application to graph clustering for the Amazon co-purchasing network [27]: This is an undirected graph on n = 334863 vertices with 925872 edges. We illustrate the potential downstream benefits of our algorithm by applying K-means clustering on embeddings (exact and compressive) of this network. For the purpose of our comparisons, we compute the first 500 eigenvectors for $\tilde{A}$ explicitly using an exact eigensolver, and use an 80-dimensional compressive embedding $\tilde{E}$ which captures the effect of these, with $f(\lambda) = I(\lambda \ge \lambda_{500})$, where $\lambda_{500}$ is the 500th eigenvalue. We compare this against the usual spectral embedding using the first 80 eigenvectors of $\tilde{A}$: $E = [v_1 \cdots v_{80}]$. We keep the dimension fixed at 80 in the comparison because K-means complexity scales linearly with it, and quickly becomes the bottleneck. Indeed, our ability to embed a large number of eigenvectors directly into a low dimensional space ($d \approx 6 \log n$) has the added benefit of dimensionality reduction within the subspace of interest (in this case the span of the largest 500 eigenvectors). A sketch of this pipeline follows.
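The following glue code is ours (a sketch using scikit-learn's K-means and NetworkX's modularity score; E_tilde denotes the embedding matrix with node i mapped to row i, and G the graph with nodes labeled 0..n-1):

```python
import numpy as np
from networkx.algorithms.community import modularity
from sklearn.cluster import KMeans

def cluster_and_score(G, E_tilde, k=200):
    """K-means on embedding rows, scored by graph modularity (larger is better)."""
    labels = KMeans(n_clusters=k).fit_predict(E_tilde)
    communities = [set(np.flatnonzero(labels == c)) for c in range(k)]
    communities = [c for c in communities if c]  # drop any empty clusters
    return modularity(G, communities)
```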
We consider 25 instances of K-means clustering with K = 200 throughout, reporting the median of a commonly used graph clustering score, modularity [28] (larger values translate to better clustering solutions). The median modularity for clustering based on our embedding $\tilde{E}$ is 0.87. This is significantly better than that for E, which yields median modularity of 0.835. In addition, the computational cost for $\tilde{E}$ is one-fifth that for E (1.5 minutes versus 10 minutes). When we replace the exact eigenvector embedding E with approximate eigendecomposition using Randomized SVD [8] (parameters: power iterates q = 5 and excess dimensionality l = 10), the time taken reduces from 10 minutes to 17 seconds, but this comes at the expense of inference quality: median modularity drops to 0.748. On the other hand, the median modularity increases to 0.845 when we consider exact partial SVD embedding with 120 eigenvectors. This indicates that our compressive embedding yields better clustering quality because it is able to concisely capture more eigenvectors (500 in this example, compared to 80 and 120 with conventional partial SVD). It is worth pointing out that, even for known eigenvectors, the number of dominant eigenvectors k that yields the best inference performance is often unknown a priori, and is treated as a hyper-parameter. For compressive spectral embedding $\tilde{E}$, an elegant approach for implicitly optimizing over k is to use the embedding function $f(\lambda) = I(\lambda \ge c)$, with c as a hyper-parameter.
6 Conclusion
We have shown that random projections and polynomial expansions provide a powerful approach for spectral embedding of large matrices: for an $m \times n$ matrix A, our $O((T + m + n)\log(m + n))$ algorithm computes an $O(\log(m+n))$-dimensional compressive embedding that provably approximates pairwise distances between points in the desired spectral embedding. Numerical results for several real-world data sets show that our method provides good approximations for embeddings based on partial SVD, while incurring much lower complexity. Moreover, our method can also approximate spectral embeddings which depend on the entire SVD, since its complexity does not depend on the number of dominant vectors whose effect we wish to model. A glimpse of this potential is provided by the example of K-means based clustering for estimating sparse cuts of the Amazon graph, where our method yields much better performance (using graph metrics) than a partial SVD with significantly higher complexity. This motivates further investigation into applications of this approach for improving downstream inference tasks in a variety of large-scale problems.
Acknowledgments
This work is supported in part by DARPA GRAPHS (BAA-12-01) and by Systems on Nanoscale
Information fabriCs (SONIC), one of the six SRC STARnet Centers, sponsored by MARCO and
DARPA. Any opinions, findings, and conclusions or recommendations expressed in this material
are those of the authors and do not necessarily reflect the views of the funding agencies.
References
[1] B. Schölkopf, A. Smola, and K.-R. Müller, "Kernel principal component analysis," in Artificial Neural Networks ICANN'97, ser. Lecture Notes in Computer Science, W. Gerstner, A. Germond, M. Hasler, and J.-D. Nicoud, Eds. Springer Berlin Heidelberg, 1997, pp. 583-588.
[2] S. Mika, B. Schölkopf, A. J. Smola, K.-R. Müller, M. Scholz, and G. Rätsch, "Kernel PCA and de-noising in feature spaces," in Advances in Neural Information Processing Systems, 1999.
[3] S. White and P. Smyth, "A spectral clustering approach to finding communities in graphs," in SDM, vol. 5. SIAM, 2005.
[4] F. Göbel and A. A. Jagers, "Random walks on graphs," Stochastic Processes and their Applications, 1974.
[5] R. R. Nadakuditi and M. E. J. Newman, "Graph spectra and the detectability of community structure in networks," Physical Review Letters, 2012.
[6] C. Fowlkes, S. Belongie, F. Chung, and J. Malik, "Spectral grouping using the Nyström method," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 2, 2004.
[7] P. Drineas and M. W. Mahoney, "On the Nyström method for approximating a Gram matrix for improved kernel-based learning," Journal of Machine Learning Research, 2005.
[8] N. Halko, P. G. Martinsson, and J. A. Tropp, "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions," SIAM Review, 2011.
[9] "Python implementation of FastEmbed." [Online]. Available: https://bitbucket.org/dineshkr/fastembed/src/NIPS2015
[10] D. Achlioptas, "Database-friendly random projections," in Proceedings of the Twentieth ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, ser. PODS '01, 2001.
[11] E. Candes and M. Wakin, "An introduction to compressive sampling," IEEE Signal Processing Magazine, March 2008.
[12] L. N. Trefethen and D. Bau, Numerical Linear Algebra. SIAM, 1997.
[13] S. F. McCormick and T. Noe, "Simultaneous iteration for the matrix eigenvalue problem," Linear Algebra and its Applications, vol. 16, no. 1, pp. 43-56, 1977.
[14] K. Zhang, I. W. Tsang, and J. T. Kwok, "Improved Nyström low-rank approximation and error analysis," in Proceedings of the 25th International Conference on Machine Learning, ser. ICML '08. ACM, 2008.
[15] D. Yan, L. Huang, and M. I. Jordan, "Fast approximate spectral clustering," in Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD '09. ACM, 2009.
[16] M. Li, J. T. Kwok, and B.-L. Lu, "Making large-scale Nyström approximation possible," in ICML, 2010.
[17] S. Kumar, M. Mohri, and A. Talwalkar, "Ensemble Nyström method," in Advances in Neural Information Processing Systems, 2009.
[18] F. Lin and W. W. Cohen, "Power iteration clustering," in Proceedings of the 27th International Conference on Machine Learning (ICML-10), 2010.
[19] F. Lin, "Scalable methods for graph-based unsupervised and semi-supervised learning," Ph.D. dissertation, Carnegie Mellon University, 2012.
[20] W. Yan, U. Brahmakshatriya, Y. Xue, M. Gilder, and B. Wise, "PIC: Parallel power iteration clustering for big data," Journal of Parallel and Distributed Computing, 2013.
[21] L. Lovász, "Random walks on graphs: A survey," Combinatorics, Paul Erdős is Eighty, vol. 2, no. 1, pp. 1-46, 1993.
[22] D. A. Spielman and S.-H. Teng, "Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems," in Proceedings of the Thirty-sixth Annual ACM Symposium on Theory of Computing, ser. STOC '04. New York, NY, USA: ACM, 2004.
[23] D. Spielman and S. Teng, "Nearly linear time algorithms for preconditioning and solving symmetric, diagonally dominant linear systems," SIAM Journal on Matrix Analysis and Applications, vol. 35, Jan. 2014.
[24] D. Spielman and N. Srivastava, "Graph sparsification by effective resistances," SIAM Journal on Computing, 2011.
[25] R. N. Silver, H. Roeder, A. F. Voter, and J. D. Kress, "Kernel polynomial approximations for densities of states and spectral functions," Journal of Computational Physics, vol. 124, no. 1, pp. 115-130, Mar. 1996.
[26] E. Di Napoli, E. Polizzi, and Y. Saad, "Efficient estimation of eigenvalue counts in an interval," arXiv:1308.4275 [cs], Aug. 2013.
[27] J. Yang and J. Leskovec, "Defining and evaluating network communities based on ground-truth," in 2012 IEEE 12th International Conference on Data Mining (ICDM), Dec. 2012.
[28] S. Fortunato, "Community detection in graphs," Physics Reports, vol. 486, no. 3-5, Feb. 2010.
5,516 | 5,993 | Generalization in Adaptive Data Analysis and Holdout Reuse*
Cynthia Dwork
Microsoft Research
Toniann Pitassi
University of Toronto
Vitaly Feldman
IBM Almaden Research Center†
Omer Reingold
Samsung Research America
Moritz Hardt
Google Research
Aaron Roth
University of Pennsylvania
Abstract
Overfitting is the bane of data analysts, even when data are plentiful. Formal
approaches to understanding this problem focus on statistical inference and generalization of individual analysis procedures. Yet the practice of data analysis is
an inherently interactive and adaptive process: new analyses and hypotheses are
proposed after seeing the results of previous ones, parameters are tuned on the
basis of obtained results, and datasets are shared and reused. An investigation of
this gap has recently been initiated by the authors in [7], where we focused on the
problem of estimating expectations of adaptively chosen functions.
In this paper, we give a simple and practical method for reusing a holdout (or
testing) set to validate the accuracy of hypotheses produced by a learning algorithm
operating on a training set. Reusing a holdout set adaptively multiple times can
easily lead to overfitting to the holdout set itself. We give an algorithm that enables
the validation of a large number of adaptively chosen hypotheses, while provably
avoiding overfitting. We illustrate the advantages of our algorithm over the standard
use of the holdout set via a simple synthetic experiment.
We also formalize and address the general problem of data reuse in adaptive data
analysis. We show how the differential-privacy based approach given in [7] is
applicable much more broadly to adaptive data analysis. We then show that a
simple approach based on description length can also be used to give guarantees of
statistical validity in adaptive settings. Finally, we demonstrate that these incomparable approaches can be unified via the notion of approximate max-information
that we introduce. This, in particular, allows the preservation of statistical validity guarantees even when an analyst adaptively composes algorithms which have
guarantees based on either of the two approaches.
1 Introduction
The goal of machine learning is to produce hypotheses or models that generalize well to the unseen
instances of the problem. More generally, statistical data analysis is concerned with estimating
properties of the underlying data distribution, rather than properties that are specific to the finite data
set at hand. Indeed, a large body of theoretical and empirical research was developed for ensuring
generalization in a variety of settings. In this work, it is commonly assumed that each analysis
procedure (such as a learning algorithm) operates on a freshly sampled dataset ? or if not, is validated
on a freshly sampled holdout (or testing) set.
* See [6] for the full version of this work.
† Part of this work done while visiting the Simons Institute, UC Berkeley.
Unfortunately, learning and inference can be more difficult in practice, where data samples are often
reused. For example, a common practice is to perform feature selection on a dataset, and then use
the features for some supervised learning task. When these two steps are performed on the same
dataset, it is no longer clear that the results obtained from the combined algorithm will generalize.
Although not usually understood in these terms, "Freedman's paradox" is an elegant demonstration of the powerful (negative) effect of adaptive analysis on the same data [10]. In Freedman's simulation, variables with significant t-statistic are selected and linear regression is performed on this adaptively chosen subset of variables, with famously misleading results: when the relationship between the dependent and explanatory variables is non-existent, the procedure overfits, erroneously declaring significant relationships.
Most of machine learning practice does not rely on formal guarantees of generalization for learning
algorithms. Instead a dataset is split randomly into two (or sometimes more) parts: the training set
and the testing, or holdout, set. The training set is used for learning a predictor, and then the holdout
set is used to estimate the accuracy of the predictor on the true distribution (additional averaging over different partitions is used in cross-validation). Because the predictor is independent of the holdout
dataset, such an estimate is a valid estimate of the true prediction accuracy (formally, this allows
one to construct a confidence interval for the prediction accuracy on the data distribution). However,
in practice the holdout dataset is rarely used only once, and as a result the predictor may not be
independent of the holdout set, resulting in overfitting to the holdout set [17, 16, 4]. One well-known
reason for such dependence is that the holdout data is used to test a large number of predictors and
only the best one is reported. If the set of all tested hypotheses is known and independent of the
holdout set, then it is easy to account for such multiple testing.
However such static approaches do not apply if the estimates or hypotheses tested on the holdout are
chosen adaptively: that is, if the choice of hypotheses depends on previous analyses performed on the
dataset. One prominent example in which a holdout set is often adaptively reused is hyperparameter tuning (e.g., [5]). Similarly, the holdout set in a machine learning competition, such as the famous
ImageNet competition, is typically reused many times adaptively. Other examples include using
the holdout set for feature selection, generation of base learners (in aggregation techniques such as
boosting and bagging), checking a stopping condition, and analyst-in-the-loop decisions. See [13] for
a discussion of several subtle causes of overfitting.
The concrete practical problem we address is how to ensure that the holdout set can be reused to
perform validation in the adaptive setting. Towards addressing this problem we also ask the more
general question of how one can ensure that the final output of adaptive data analysis generalizes
to the underlying data distribution. This line of research was recently initiated by the authors in [7],
where we focused on the case of estimating expectations of functions from i.i.d. samples (these are
also referred to as statistical queries).
1.1 Our Results
We propose a simple and general formulation of the problem of preserving statistical validity in
adaptive data analysis. We show that the connection between differentially private algorithms
and generalization from [7] can be extended to this more general setting, and show that similar
(but sometimes incomparable) guarantees can be obtained from algorithms whose outputs can be
described by short strings. We then define a new notion, approximate max-information, that unifies
these two basic techniques and gives a new perspective on the problem. In particular, we give an
adaptive composition theorem for max-information, which gives a simple way to obtain generalization
guarantees for analyses in which some of the procedures are differentially private and some have
short description length outputs. We apply our techniques to the problem of reusing the holdout set
for validation in the adaptive setting.
A reusable holdout: We describe a simple and general method, together with two specific instantiations, for reusing a holdout set for validating results while provably avoiding overfitting to the
holdout set. The analyst can perform any analysis on the training dataset, but can only access the
holdout set via an algorithm that allows the analyst to validate her hypotheses against the holdout set.
Crucially, our algorithm prevents overfitting to the holdout set even when the analyst's hypotheses are chosen adaptively on the basis of the previous responses of our algorithm.
Our first algorithm, referred to as Thresholdout, derives its guarantees from differential privacy and the results in [7, 14]. For any function $\phi: X \to [0, 1]$ given by the analyst, Thresholdout uses the holdout set to validate that $\phi$ does not overfit to the training set; that is, it checks that the mean value of $\phi$ evaluated on the training set is close to the mean value of $\phi$ evaluated on the distribution P from which the data was sampled. The standard approach to such validation would be to compute the mean value of $\phi$ on the holdout set. The use of the holdout set in Thresholdout differs from the standard use in that it exposes very little information about the mean of $\phi$ on the holdout set: if $\phi$ does not overfit to the training set, then the analyst receives only the confirmation of closeness, that is, just a single bit. On the other hand, if $\phi$ overfits, then Thresholdout returns the mean value of $\phi$ on the training set perturbed by carefully calibrated noise.
Using results from [7, 14] we show that for datasets consisting of i.i.d. samples these modifications provably prevent the analyst from constructing functions that overfit to the holdout set. This ensures correctness of Thresholdout's responses. Naturally, the specific guarantees depend on the number of samples n in the holdout set. The number of queries that Thresholdout can answer is exponential in n as long as the number of times that the analyst overfits is at most quadratic in n.
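The following is a minimal sketch of this mechanism, following the description above (class and parameter names are ours, and the default threshold, noise scales, and budget are illustrative rather than the paper's exact calibration):

```python
import numpy as np

class Thresholdout:
    """Sketch of Thresholdout: answers mean-value queries phi: X -> [0, 1]."""

    def __init__(self, train, holdout, threshold=0.04, sigma=0.01, budget=1000, rng=None):
        self.train, self.holdout = train, holdout
        self.T, self.sigma, self.budget = threshold, sigma, budget
        self.rng = np.random.default_rng(rng)
        self.gamma = self.rng.laplace(scale=2 * sigma)

    def query(self, phi):
        train_mean = np.mean([phi(x) for x in self.train])
        holdout_mean = np.mean([phi(x) for x in self.holdout])
        # Noisy comparison: release holdout information only on apparent overfitting.
        if abs(holdout_mean - train_mean) > self.T + self.gamma + self.rng.laplace(scale=4 * self.sigma):
            if self.budget <= 0:
                raise RuntimeError("overfitting budget exhausted")
            self.budget -= 1
            self.gamma = self.rng.laplace(scale=2 * self.sigma)
            return holdout_mean + self.rng.laplace(scale=self.sigma)
        return train_mean  # no overfitting detected: a single bit of information
```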
Our second algorithm SparseValidate is based on the idea that if most of the time the analyst's procedures generate results that do not overfit, then validating them against the holdout set does not reveal much information about the holdout set. Specifically, the generalization guarantees of this method follow from the observation that the transcript of the interaction between a data analyst and the holdout set can be described concisely. More formally, this method allows the analyst to pick any Boolean function $\psi$ of a dataset (described by an algorithm) and receive back its value on the holdout set. A simple example of such a function would be whether the accuracy of a predictor on the holdout set is at least a certain value. (Unlike in the case of Thresholdout, here there is no need to assume that the function that measures the accuracy has a bounded range or is even Lipschitz, making it qualitatively different from the kinds of results achievable subject to differential privacy.) A more involved example of validation would be to run an algorithm on the holdout dataset to select an hypothesis and check if the hypothesis is similar to that obtained on the training set (for any desired notion of similarity). Such validation can be applied to other results of analysis; for example, one could check if the variables selected on the holdout set have large overlap with those selected on the training set. An instantiation of the SparseValidate algorithm has already been applied to the problem of answering statistical (and more general) queries in the adaptive setting [1]. A minimal sketch of the interface follows.
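Names and the budget mechanics below are ours; soundness rests on the transcript being compressible when positive answers are rare:

```python
class SparseValidate:
    """Answers Boolean dataset queries; intended for use where 'True' answers are rare."""

    def __init__(self, holdout, max_positives):
        self.holdout = holdout
        self.remaining = max_positives  # bound on the number of 'True' answers allowed

    def validate(self, psi):
        """psi: a Boolean function of a dataset, e.g. 'holdout accuracy >= v'."""
        answer = bool(psi(self.holdout))
        if answer:
            if self.remaining <= 0:
                raise RuntimeError("positive-answer budget exhausted")
            self.remaining -= 1
        return answer
```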
We describe a simple experiment on synthetic data that illustrates the danger of reusing a standard
holdout set, and how this issue can be resolved by our reusable holdout. The design of this experiment
is inspired by Freedman's classical experiment, which demonstrated the dangers of performing
variable selection and regression on the same data [10].
Generalization in adaptive data analysis: We view adaptive analysis on the same dataset as an execution of a sequence of steps $A_1 \to A_2 \to \cdots \to A_m$. Each step is described by an algorithm $A_i$ that takes as input a fixed dataset $S = (x_1, \ldots, x_n)$ drawn from some distribution D over $X^n$, which remains unchanged over the course of the analysis. Each algorithm $A_i$ also takes as input the outputs of the previously run algorithms $A_1$ through $A_{i-1}$ and produces a value in some range $Y_i$. The dependence on previous outputs represents all the adaptive choices that are made at step i of data analysis. For example, depending on the previous outputs, $A_i$ can run different types of analysis on S. We note that at this level of generality, the algorithms can represent the choices of the data analyst, and need not be explicitly specified. We assume that the analyst uses algorithms which individually are known to generalize when executed on a fresh dataset sampled independently from a distribution D. We formalize this by assuming that for every fixed value $y_1, \ldots, y_{i-1} \in Y_1 \times \cdots \times Y_{i-1}$, with probability at least $1 - \beta_i$ over the choice of S according to distribution D, the output of $A_i$ on inputs $y_1, \ldots, y_{i-1}$ and S has a desired property relative to the data distribution D (for example, has low generalization error). Note that in this assumption $y_1, \ldots, y_{i-1}$ are fixed and independent of the choice of S, whereas the analyst will execute $A_i$ on values $Y_1, \ldots, Y_{i-1}$, where $Y_j = A_j(S, Y_1, \ldots, Y_{j-1})$. In other words, in the adaptive setup, the algorithm $A_i$ can depend on the previous outputs, which depend on S, and thus the set S given to $A_i$ is no longer an independently sampled dataset. Such dependence invalidates the generalization guarantees of individual procedures, potentially leading to overfitting.
Differential privacy: First, we spell out how the differential privacy based approach from [7] can
be applied to this more general setting. Specifically, a simple corollary of results in [7] is that for
a dataset consisting of i.i.d. samples any output of a differentially-private algorithm can be used in
subsequent analysis while controlling the risk of overfitting, even beyond the setting of statistical
queries studied in [7]. A key property of differential privacy in this context is that it composes
adaptively: namely if each of the algorithms used by the analyst is differentially private, then the
whole procedure will be differentially private (albeit with worse privacy parameters). Therefore, one
way to avoid overfitting in the adaptive setting is to use algorithms that satisfy (sufficiently strong)
guarantees of differential-privacy.
Description length: We then show how description length bounds can be applied in the context of guaranteeing generalization in the presence of adaptivity. If the total length of the outputs of algorithms $A_1, \ldots, A_{i-1}$ can be described with k bits, then there are at most $2^k$ possible values of the input $y_1, \ldots, y_{i-1}$ to $A_i$. For each of these individual inputs, $A_i$ generalizes with probability $1 - \beta_i$. Taking a union bound over failure probabilities implies generalization with probability at least $1 - 2^k \beta_i$. (For instance, if $\beta_i = 2^{-30}$ and the earlier outputs total k = 10 bits, the failure probability in the adaptive setting is still at most $2^{-20}$.) Occam's Razor famously implies that shorter hypotheses have lower generalization error. Our observation is that shorter hypotheses (and the results of analysis more generally) are also better in the adaptive setting, since they reveal less about the dataset and lead to better generalization of subsequent analyses. Note that this result makes no assumptions about the data distribution D. In the full version we also show that description length-based analysis suffices for obtaining an algorithm (albeit not an efficient one) that can answer an exponentially large number of adaptively chosen statistical queries. This provides an alternative proof for one of the results in [7].
Approximate max-information: Our main technical contribution is the introduction and analysis of a new information-theoretic measure, which unifies the generalization arguments that come from both differential privacy and description length, and which quantifies how much information has been learned about the data by the analyst. Formally, for jointly distributed random variables (S, Y), the max-information is the maximum of the logarithm of the factor by which uncertainty about S is reduced given the value of Y, namely
$$I_\infty(S; Y) = \log \max \frac{P[\mathbf{S} = S \mid Y = y]}{P[\mathbf{S} = S]},$$
where the maximum is taken over all S in the support of $\mathbf{S}$ and y in the support of Y. Approximate max-information is a relaxation of max-information. In our use, $\mathbf{S}$ denotes a dataset drawn randomly from the distribution D and Y denotes the output of a (possibly randomized) algorithm on $\mathbf{S}$. We prove that approximate max-information has the following properties:
- An upper bound on (approximate) max-information gives generalization guarantees.
- Differentially private algorithms have low max-information for any distribution D over datasets. A stronger bound holds for approximate max-information on i.i.d. datasets. These bounds apply only to so-called pure differential privacy (the $\delta = 0$ case).
- Bounds on the description length of the output of an algorithm give bounds on the approximate max-information of the algorithm for any D.
- Approximate max-information composes adaptively.
Composition properties of approximate max-information imply that one can easily obtain generalization guarantees for adaptive sequences of algorithms, some of which are differentially private,
and others of which have outputs with short description length. These properties also imply that
differential privacy can be used to control generalization for any distribution D over datasets, which
extends its generalization guarantees beyond the restriction to datasets drawn i.i.d. from a fixed
distribution, as in [7].
We remark that (pure) differential privacy and description length are otherwise incomparable. Bounds on max-information or differential privacy of an algorithm can, however, be translated to bounds on randomized description length for a different algorithm with statistically indistinguishable output. Here we say that a randomized algorithm has randomized description length of k if for every fixing of the algorithm's random bits, it has description length of k. Details of these results and additional discussion appear in Section 2 and the full version.
1.2 Related Work
This work complements [7] where we initiated the formal study of adaptivity in data analysis. The
primary focus of [7] is the problem of answering adaptively chosen statistical queries. The main
technique is a strong connection between differential privacy and generalization: differential privacy
guarantees that the distribution of outputs does not depend too much on any one of the data samples,
and thus, differential privacy gives a strong stability guarantee that behaves well under adaptive data
analysis. The link between generalization and approximate differential privacy made in [7] has been subsequently strengthened, both qualitatively (by [1], who make the connection for a broader range of queries) and quantitatively (by [14] and [1], who give tighter quantitative bounds). These papers, among other results, give methods for accurately answering exponentially (in the dataset size) many adaptively chosen queries, but the algorithms for this task are not efficient. It turns out this is for fundamental reasons: Hardt and Ullman [11] and Steinke and Ullman [19] prove that, under cryptographic assumptions, no efficient algorithm can answer more than quadratically many statistical queries chosen adaptively by an adversary who knows the true data distribution.
The classical approach in theoretical machine learning to ensure that empirical estimates generalize
to the underlying distribution is based on the various notions of complexity of the set of functions
output by the algorithm, most notably the VC dimension. If one has a sample of data large enough
to guarantee generalization for all functions in some class of bounded complexity, then it does not
matter whether the data analyst chooses functions in this class adaptively or non-adaptively. Our goal,
in contrast, is to prove generalization bounds without making any assumptions about the class from
which the analyst can output functions.
An important line of work [3, 15, 18] establishes connections between the stability of a learning
algorithm and its ability to generalize. Stability is a measure of how much the output of a learning
algorithm is perturbed by changes to its input. It is known that certain stability notions are necessary
and sufficient for generalization. Unfortunately, the stability notions considered in these prior works
do not compose in the sense that running multiple stable algorithms sequentially and adaptively may
result in a procedure that is not stable. The measure we introduce in this work (max information),
like differential privacy, has the strength that it enjoys adaptive composition guarantees. This makes
it amenable to reasoning about the generalization properties of adaptively applied sequences of
algorithms, while having to analyze only the individual components of these algorithms. Connections
between stability, empirical risk minimization and differential privacy in the context of learnability
have been recently explored in [21].
Numerous techniques have been developed by statisticians to address common special cases of
adaptive data analysis. Most of them address a single round of adaptivity such as variable selection
followed by regression on selected variables or model selection followed by testing and are optimized
for specific inference procedures (the literature is too vast to adequately cover here, see Ch. 7 in [12]
for a textbook introduction and [20] for a survey of some recent work). In contrast, our framework
addresses multiple stages of adaptive decisions, possible lack of a predetermined analysis protocol
and is not restricted to any specific procedures.
Finally, inspired by our work, Blum and Hardt [2] showed how to reuse the holdout set to maintain
an accurate leaderboard in a machine learning competition that allows the participants to submit
adaptively chosen models in the process of the competition (such as those organized by Kaggle Inc.).
Their analysis also relies on the description length-based technique we used to analyze SparseValidate.
2 Max-Information
Preliminaries: In the discussion below, log refers to the binary logarithm and ln refers to the natural logarithm. For two random variables X and Y over the same domain $\mathcal{X}$, the max-divergence of X from Y is defined as
$$D_\infty(X \| Y) = \log \max_{x \in \mathcal{X}} \frac{P[X = x]}{P[Y = x]}.$$
The $\beta$-approximate max-divergence is defined as
$$D_\infty^\beta(X \| Y) = \log \max_{O \subseteq \mathcal{X},\, P[X \in O] > \beta} \frac{P[X \in O] - \beta}{P[Y \in O]}.$$
Definition 1. [9, 8] A randomized algorithm A with domain $X^n$ for $n > 0$ is $(\epsilon, \delta)$-differentially private if for all pairs of datasets that differ in a single element, $S, S' \in X^n$: $D_\infty^\delta(A(S) \| A(S')) \le \log(e^\epsilon)$. The case when $\delta = 0$ is sometimes referred to as pure differential privacy, and in this case we may say simply that A is $\epsilon$-differentially private.
Consider two algorithms $\mathcal{A} : \mathcal{X}^n \to \mathcal{Y}$ and $\mathcal{B} : \mathcal{X}^n \times \mathcal{Y} \to \mathcal{Y}'$ that are composed adaptively and assume that for every fixed input $y \in \mathcal{Y}$, $\mathcal{B}$ generalizes for all but a $\beta$ fraction of datasets. Here we are speaking of generalization informally: our definitions will support any property of the input $y \in \mathcal{Y}$ and dataset $S$. Intuitively, to preserve generalization of $\mathcal{B}$ we want to make sure that the output of $\mathcal{A}$ does not reveal too much information about the dataset $S$. We demonstrate that this intuition can be captured via a notion of max-information and its relaxation, approximate max-information.
For two random variables $X$ and $Y$ we use $X \otimes Y$ to denote the random variable obtained by drawing $X$ and $Y$ independently from their probability distributions.
Definition 2. Let $X$ and $Y$ be jointly distributed random variables. The max-information between $X$ and $Y$ is defined as $I_\infty(X; Y) = D_\infty((X, Y) \| X \otimes Y)$. The $\beta$-approximate max-information is defined as $I_\infty^\beta(X; Y) = D_\infty^\beta((X, Y) \| X \otimes Y)$.
In our use, $(X, Y)$ is going to be a joint distribution $(S, \mathcal{A}(S))$, where $S$ is a random $n$-element dataset and $\mathcal{A}$ is a (possibly randomized) algorithm taking a dataset as an input.
Definition 3. We say that an algorithm $\mathcal{A}$ has $\beta$-approximate max-information of $k$ if for every distribution $\mathcal{S}$ over $n$-element datasets, $I_\infty^\beta(S; \mathcal{A}(S)) \le k$, where $S$ is a dataset chosen randomly according to $\mathcal{S}$. We denote this by $I_\infty^\beta(\mathcal{A}, n) \le k$.
An immediate corollary of our definition of approximate max-information is that it controls the probability of "bad events" that can happen as a result of the dependence of $\mathcal{A}(S)$ on $S$.
Theorem 4. Let $S$ be a random dataset in $\mathcal{X}^n$ and $\mathcal{A}$ be an algorithm with range $\mathcal{Y}$ such that for some $\beta \ge 0$, $I_\infty^\beta(S; \mathcal{A}(S)) = k$. Then for any event $O \subseteq \mathcal{X}^n \times \mathcal{Y}$,
$$\Pr[(S, \mathcal{A}(S)) \in O] \le 2^k \cdot \Pr[S \otimes \mathcal{A}(S) \in O] + \beta.$$
In particular, $\Pr[(S, \mathcal{A}(S)) \in O] \le 2^k \cdot \max_{y \in \mathcal{Y}} \Pr[(S, y) \in O] + \beta$.
We remark that mutual information between $S$ and $\mathcal{A}(S)$ would not suffice for ensuring that bad events happen with tiny probability. For example, mutual information of $k$ allows $\Pr[(S, \mathcal{A}(S)) \in O]$ to be as high as $k / (2 \log(1/\delta))$, where $\delta = \Pr[S \otimes \mathcal{A}(S) \in O]$.
Approximate max-information satisfies the following adaptive composition property:
Lemma 5. Let $\mathcal{A} : \mathcal{X}^n \to \mathcal{Y}$ be an algorithm such that $I_\infty^{\beta_1}(\mathcal{A}, n) \le k_1$, and let $\mathcal{B} : \mathcal{X}^n \times \mathcal{Y} \to \mathcal{Z}$ be an algorithm such that for every $y \in \mathcal{Y}$, $\mathcal{B}(\cdot, y)$ has $\beta_2$-approximate max-information $k_2$. Let $\mathcal{C} : \mathcal{X}^n \to \mathcal{Z}$ be defined such that $\mathcal{C}(S) = \mathcal{B}(S, \mathcal{A}(S))$. Then $I_\infty^{\beta_1 + \beta_2}(\mathcal{C}, n) \le k_1 + k_2$.
Bounds on Max-Information: Description length $k$ gives the following bound on max-information.
Theorem 6. Let $\mathcal{A}$ be a randomized algorithm taking as an input an $n$-element dataset and outputting a value in a finite set $\mathcal{Y}$. Then for every $\beta > 0$, $I_\infty^\beta(\mathcal{A}, n) \le \log(|\mathcal{Y}| / \beta)$.
Next we prove a simple bound on the max-information of differentially private algorithms that applies to all distributions over datasets.
Theorem 7. Let $\mathcal{A}$ be an $\varepsilon$-differentially private algorithm. Then $I_\infty(\mathcal{A}, n) \le \log e \cdot \varepsilon n$.
Finally, we prove a stronger bound on approximate max-information for datasets consisting of i.i.d. samples using the technique from [7].
Theorem 8. Let $\mathcal{A}$ be an $\varepsilon$-differentially private algorithm with range $\mathcal{Y}$. For a distribution $\mathcal{P}$ over $\mathcal{X}$, let $S$ be a random variable drawn from $\mathcal{P}^n$. Let $Y = \mathcal{A}(S)$ denote the random variable output by $\mathcal{A}$ on input $S$. Then for any $\beta > 0$, $I_\infty^\beta(S; \mathcal{A}(S)) \le \log e \left( \varepsilon^2 n / 2 + \varepsilon \sqrt{n \ln(2/\beta)/2} \right)$.
One way to apply a bound on max-information is to start with a concentration-of-measure result which ensures that the estimate of a predictor's accuracy is correct with high probability when the predictor is chosen independently of the samples. For example, for a loss function with range $[0, 1]$, Hoeffding's bound implies that for a dataset consisting of i.i.d. samples the empirical estimate is not within $\tau$ of the true accuracy with probability at most $2e^{-2\tau^2 n}$. Now, given a bound of $\log e \cdot \tau^2 n$ on the $\beta$-approximate max-information of the algorithm that produces the estimator, Thm. 4 implies that the produced estimate is not within $\tau$ of the true accuracy with probability at most $2^{\log e \cdot \tau^2 n} \cdot 2e^{-2\tau^2 n} + \beta \le 2e^{-\tau^2 n} + \beta$. Thm. 7 implies that any $\tau^2$-differentially private algorithm has max-information of at most $\log e \cdot \tau^2 n$. For a dataset consisting of i.i.d. samples, Thm. 8 implies that a $\tau$-differentially private algorithm has $\beta$-approximate max-information of $1.25 \log e \cdot \tau^2 n$ for $\beta = 2e^{-\tau^2 n}$.
3 Reusable Holdout
We describe two simple algorithms that enable validation of an analyst's queries in the adaptive setting.
Thresholdout: Our first algorithm Thresholdout follows the approach in [7] where differentially
private algorithms are used to answer adaptively chosen statistical queries. This approach can also be
applied to any low-sensitivity functions of the dataset but for simplicity we present the results for
statistical queries. Here we address an easier problem in which the analyst's queries only need to be answered when they overfit. Also, unlike in [7], the analyst has full access to the training set and the
holdout algorithm only prevents overfitting to holdout dataset. As a result, unlike in the general query
answering setting, our algorithm can efficiently validate an exponential in n number of queries as
long as a relatively small number of them overfit.
For a function $\phi : \mathcal{X} \to \mathbb{R}$ and a dataset $S = (x_1, \ldots, x_n)$, let $\mathcal{E}_S[\phi] = \frac{1}{n} \sum_{i=1}^n \phi(x_i)$. Thresholdout is given access to the training dataset $S_t$ and the holdout dataset $S_h$ and a budget limit $B$. It allows any query of the form $\phi : \mathcal{X} \to [0, 1]$ and its goal is to provide an estimate of $\mathbf{P}[\phi]$. To achieve this, the algorithm gives an estimate of $\mathcal{E}_{S_h}[\phi]$ in a way that prevents overfitting of functions generated by the analyst to the holdout set. In other words, responses of Thresholdout are designed to ensure that, with high probability, $\mathcal{E}_{S_h}[\phi]$ is close to $\mathbf{P}[\phi]$, and hence an estimate of $\mathcal{E}_{S_h}[\phi]$ gives an estimate of the true expectation $\mathbf{P}[\phi]$.
Given a function $\phi$, Thresholdout first checks if the difference between the average value of $\phi$ on the training set $S_t$ (or $\mathcal{E}_{S_t}[\phi]$) and the average value of $\phi$ on the holdout set $S_h$ (or $\mathcal{E}_{S_h}[\phi]$) is below a certain threshold $T + \eta$. Here, $T$ is a fixed number such as 0.01 and $\eta$ is a Laplace noise variable whose standard deviation needs to be chosen depending on the desired guarantees (the Laplace distribution is a symmetric exponential distribution). If the difference is below the threshold, then the algorithm returns $\mathcal{E}_{S_t}[\phi]$. If the difference is above the threshold, then the algorithm returns $\mathcal{E}_{S_h}[\phi] + \xi$ for another Laplace noise variable $\xi$. Each time the difference is above the threshold, the "overfitting" budget $B$ is reduced by one. Once it is exhausted, Thresholdout stops answering queries. We provide the pseudocode of Thresholdout below.
We provide the pseudocode of Thresholdout below.
Input: Training set St , holdout set Sh , threshold T, noise rate ?, budget B
1. sample ? ? Lap(2 ? ?); T? ? T + ?
2. For each query ? do
(a) if B < 1 output ???
(b) else
i. sample ? ? Lap(4 ? ?)
ii. if |ESh [?] ? ESt [?]| > T? + ?
A. sample ? ? Lap(?), ? ? Lap(2 ? ?)
B. B ? B ? 1 and T? ? T + ?
C. output ESh [?] + ?
iii. else output ESt [?].
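For concreteness, the following is a minimal Python sketch of the Thresholdout logic above. It is our own illustration, not code from the paper: the class interface, the representation of queries as callables mapping a sample to [0, 1], and the NumPy-based noise sampling are all assumptions.

import numpy as np

class Thresholdout:
    """Minimal sketch of the Thresholdout mechanism described above.
    Queries are functions phi: x -> [0, 1]; answers estimate E_P[phi]
    while protecting the holdout set from overfitting."""

    def __init__(self, train, holdout, T=0.04, sigma=0.01, budget=1000, rng=None):
        self.train = train          # list/array of training examples
        self.holdout = holdout      # list/array of holdout examples
        self.T = T                  # base threshold
        self.sigma = sigma          # noise rate
        self.budget = budget        # overfitting budget B
        self.rng = rng or np.random.default_rng()
        self.T_hat = T + self.rng.laplace(scale=2 * sigma)

    def query(self, phi):
        if self.budget < 1:
            return None             # budget exhausted: refuse to answer
        mean_train = np.mean([phi(x) for x in self.train])
        mean_hold = np.mean([phi(x) for x in self.holdout])
        eta = self.rng.laplace(scale=4 * self.sigma)
        if abs(mean_hold - mean_train) > self.T_hat + eta:
            # answer from the holdout with noise, pay one budget unit,
            # and refresh the noisy threshold
            self.budget -= 1
            self.T_hat = self.T + self.rng.laplace(scale=2 * self.sigma)
            return mean_hold + self.rng.laplace(scale=self.sigma)
        return mean_train           # no overfitting detected: answer from training set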
We now establish the formal generalization guarantees that Thresholdout enjoys.
Theorem 9. Let $\tau, \beta > 0$ and $m \ge B > 0$. We set $T = 3\tau/4$ and $\sigma = \tau / (96 \ln(4m/\beta))$. Let $S$ denote a holdout dataset of size $n$ drawn i.i.d. from a distribution $\mathcal{P}$ and $S_t$ be any additional dataset over $\mathcal{X}$. Consider an algorithm that is given access to $S_t$ and adaptively chooses functions $\phi_1, \ldots, \phi_m$ while interacting with Thresholdout, which is given datasets $S, S_t$ and values $\sigma, B, T$. For every $i \in [m]$, let $a_i$ denote the answer of Thresholdout on function $\phi_i : \mathcal{X} \to [0, 1]$. Further, for every $i \in [m]$, we define the counter of overfitting $Z_i = |\{ j \le i : |\mathbf{P}[\phi_j] - \mathcal{E}_{S_t}[\phi_j]| > \tau/2 \}|$. Then
$$\Pr\big[\exists i \in [m] : Z_i < B \ \&\ |a_i - \mathbf{P}[\phi_i]| \ge \tau\big] \le \beta$$
whenever $n \ge n_0 = O\!\left(\frac{\ln(m/\beta)}{\tau^2} \cdot \min\left\{B,\ \sqrt{B \ln(\ln(m/\beta)/\tau)}\right\}\right)$.
SparseValidate: We now present a general algorithm for validation on the holdout set that can validate many arbitrary queries as long as few of them fail the validation. More formally, our algorithm allows the analyst to pick any Boolean function of a dataset $\phi$ (or even any algorithm that outputs a single bit) and provides back the value of $\phi$ on the holdout set, $\phi(S_h)$. SparseValidate has a budget $m$ for the total number of queries that can be asked and a budget $B$ for the number of queries that returned 1. Once either of the budgets is exhausted, no additional answers are given. We now give a general description of the guarantees of SparseValidate.
Theorem 10. Let $S$ denote a randomly chosen holdout set of size $n$. Let $\mathcal{A}$ be an algorithm that is given access to SparseValidate$(m, B)$ and outputs queries $\phi_1, \ldots, \phi_m$ such that each $\phi_i$ is in some set $\Phi_i$ of functions from $\mathcal{X}^n$ to $\{0, 1\}$. Assume that for every $i \in [m]$ and $\phi_i \in \Phi_i$, $\Pr[\phi_i(S) = 1] \le \beta_i$. Let $\psi_i$ be the random variable equal to the $i$-th query of $\mathcal{A}$ on $S$. Then $\Pr[\psi_i(S) = 1] \le \ell_i \cdot \beta_i$, where $\ell_i = \sum_{j=0}^{\min\{i-1, B\}} \binom{i}{j}$.
In this general formulation it is the analyst's responsibility to use the budgets economically and
pick query functions that do not fail validation often. At the same time, SparseValidate ensures
that (for the appropriate values of the parameters) the analyst can think of the holdout set as a fresh
sample for the purposes of validation. Hence the analyst can pick queries in such a way that failing
the validation reliably indicates overfitting. An example of the application of SparseValidate for
answering statistical and low-sensitivity queries that is based on our analysis can be found in [1]. The
analysis of generalization on the holdout set in [2] and the analysis of the Median Mechanism we
give in the full version also rely on this sparsity-based technique.
Experiments: In our experiment the analyst is given a $d$-dimensional labeled data set $S$ of size $2n$ and splits it randomly into a training set $S_t$ and a holdout set $S_h$ of equal size. We denote an element of $S$ by a tuple $(x, y)$ where $x$ is a $d$-dimensional vector and $y \in \{-1, 1\}$ is the corresponding class label. The analyst wishes to select variables to be included in her classifier. For various values of the number of variables to select $k$, she picks the $k$ variables with the largest absolute correlations with the label. However, she verifies the correlations (with the label) on the holdout set and uses only those variables whose correlation agrees in sign with the correlation on the training set and both correlations are larger than some threshold in absolute value. She then creates a simple linear threshold classifier on the selected variables using only the signs of the correlations of the selected variables. A final test evaluates the classification accuracy of the classifier on both the training set and the holdout set.
In our first experiment, each attribute of $x$ is drawn independently from the normal distribution $N(0, 1)$ and we choose the class label $y \in \{-1, 1\}$ uniformly at random, so that there is no correlation between the data point and its label. We chose $n = 10{,}000$, $d = 10{,}000$ and varied the number of selected variables $k$. In this scenario no classifier can achieve true accuracy better than 50%. Nevertheless, reusing a standard holdout results in reported accuracy of over 63% for $k = 500$ on both the training set and the holdout set (the standard deviation of the error is less than 0.5%). The average and standard deviation of results obtained from 100 independent executions of the experiment are plotted above. For comparison, the plot also includes the accuracy of the classifier on another fresh data set of size $n$ drawn from the same distribution. We then executed the same algorithm with our reusable holdout. Thresholdout was invoked with $T = 0.04$ and $\sigma = 0.01$, explaining why the accuracy of the classifier reported by Thresholdout is off by up to 0.04 whenever the accuracy on the holdout set is within 0.04 of the accuracy on the training set. We also used Gaussian noise instead of Laplace noise as it has stronger concentration properties. Thresholdout prevents the algorithm from overfitting to the holdout set and gives a valid estimate of classifier accuracy. Additional experiments and discussion are presented in the full version.
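To make the analyst's procedure concrete, the following sketch (our own reconstruction, reusing the hypothetical Thresholdout class above) mirrors the structure of this experiment: correlation-based variable selection on the training set, sign and magnitude checks against the holdout answers, and a sign-based linear threshold classifier. The helper names and the cutoff value t are assumptions.

import numpy as np

def correlation_query(j):
    # phi estimates E[x_j * y]; values are rescaled from [-1, 1] to [0, 1]
    # so they fit the [0, 1]-valued query interface of the Thresholdout sketch
    def phi(example):
        x, y = example
        return (np.clip(x[j] * y, -1.0, 1.0) + 1.0) / 2.0
    return phi

def select_and_classify(thresholdout, train, d, k, t=0.02):
    """Pick the k attributes most correlated with the label on the training
    set, keep only those validated via the reusable holdout, and build a
    sign-based linear threshold classifier. The cutoff t is hypothetical."""
    X = np.array([x for x, _ in train])
    y = np.array([lab for _, lab in train])
    corr_train = X.T @ y / len(y)       # approximate correlations for N(0,1) attributes
    weights = np.zeros(d)
    for j in np.argsort(-np.abs(corr_train))[:k]:
        ans = thresholdout.query(correlation_query(j))
        if ans is None:                 # overfitting budget exhausted
            break
        corr_hold = 2.0 * ans - 1.0     # undo the rescaling
        # keep variables whose correlations agree in sign and are both large enough
        if np.sign(corr_hold) == np.sign(corr_train[j]) and \
           min(abs(corr_hold), abs(corr_train[j])) > t:
            weights[j] = np.sign(corr_train[j])
    # linear threshold classifier built from the signs; ties broken toward +1
    return lambda x: 1.0 if x @ weights >= 0 else -1.0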
References
[1] Raef Bassily, Adam Smith, Thomas Steinke, and Jonathan Ullman. More general queries and less generalization error in adaptive data analysis. CoRR, abs/1503.04843, 2015.
[2] Avrim Blum and Moritz Hardt. The ladder: A reliable leaderboard for machine learning competitions. CoRR, abs/1502.04585, 2015.
[3] Olivier Bousquet and André Elisseeff. Stability and generalization. JMLR, 2:499–526, 2002.
[4] Gavin C. Cawley and Nicola L. C. Talbot. On over-fitting in model selection and subsequent selection bias in performance evaluation. Journal of Machine Learning Research, 11:2079–2107, 2010.
[5] Chuong B. Do, Chuan-Sheng Foo, and Andrew Y. Ng. Efficient multiple hyperparameter learning for log-linear models. In NIPS, pages 377–384, 2007.
[6] Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Aaron Roth. Generalization in adaptive data analysis and holdout reuse. CoRR, abs/1506. Extended abstract to appear in NIPS 2015.
[7] Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Aaron Roth. Preserving statistical validity in adaptive data analysis. CoRR, abs/1411.2664, 2014. Extended abstract in STOC 2015.
[8] Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data, ourselves: Privacy via distributed noise generation. In EUROCRYPT, pages 486–503, 2006.
[9] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography, pages 265–284. Springer, 2006.
[10] David A. Freedman. A note on screening regression equations. The American Statistician, 37(2):152–155, 1983.
[11] Moritz Hardt and Jonathan Ullman. Preventing false discovery in interactive data analysis is hard. In FOCS, pages 454–463, 2014.
[12] Trevor Hastie, Robert Tibshirani, and Jerome H. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics. Springer, 2009.
[13] John Langford. Clever methods of overfitting. http://hunch.net/?p=22, 2005.
[14] Kobbi Nissim and Uri Stemmer. On the generalization properties of differential privacy. CoRR, abs/1504.05800, 2015.
[15] Tomaso Poggio, Ryan Rifkin, Sayan Mukherjee, and Partha Niyogi. General conditions for predictivity in learning theory. Nature, 428(6981):419–422, 2004.
[16] R. Bharat Rao and Glenn Fung. On the dangers of cross-validation. An experimental evaluation. In International Conference on Data Mining, pages 588–596. SIAM, 2008.
[17] Juha Reunanen. Overfitting in making comparisons between variable selection methods. Journal of Machine Learning Research, 3:1371–1382, 2003.
[18] Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro, and Karthik Sridharan. Learnability, stability and uniform convergence. The Journal of Machine Learning Research, 11:2635–2670, 2010.
[19] Thomas Steinke and Jonathan Ullman. Interactive fingerprinting codes and the hardness of preventing false discovery. arXiv preprint arXiv:1410.1228, 2014.
[20] Jonathan Taylor and Robert J. Tibshirani. Statistical learning and selective inference. Proceedings of the National Academy of Sciences, 112(25):7629–7634, 2015.
[21] Yu-Xiang Wang, Jing Lei, and Stephen E. Fienberg. Learning with differential privacy: Stability, learnability and the sufficiency and necessity of ERM principle. CoRR, abs/1502.06309, 2015.
Online F-Measure Optimization
Róbert Busa-Fekete
Department of Computer Science
University of Paderborn, Germany
[email protected]
Balázs Szörényi
Technion, Haifa, Israel /
MTA-SZTE Research Group on
Artificial Intelligence, Hungary
[email protected]
Krzysztof Dembczyński
Institute of Computing Science
Pozna?n University of Technology, Poland
[email protected]
Eyke Hüllermeier
Department of Computer Science
University of Paderborn, Germany
[email protected]
Abstract
The F-measure is an important and commonly used performance metric for binary prediction tasks. By combining precision and recall into a single score, it
avoids disadvantages of simple metrics like the error rate, especially in cases of
imbalanced class distributions. The problem of optimizing the F-measure, that
is, of developing learning algorithms that perform optimally in the sense of this
measure, has recently been tackled by several authors. In this paper, we study
the problem of F-measure maximization in the setting of online learning. We
propose an efficient online algorithm and provide a formal analysis of its convergence properties. Moreover, first experimental results are presented, showing that
our method performs well in practice.
1 Introduction
Being rooted in information retrieval [16], the so-called F-measure is nowadays routinely used as a
performance metric in various prediction tasks. Given predictions $\hat{\mathbf{y}} = (\hat{y}_1, \ldots, \hat{y}_t) \in \{0, 1\}^t$ of $t$ binary labels $\mathbf{y} = (y_1, \ldots, y_t)$, the F-measure is defined as
$$F(\mathbf{y}, \hat{\mathbf{y}}) = \frac{2 \sum_{i=1}^t y_i \hat{y}_i}{\sum_{i=1}^t y_i + \sum_{i=1}^t \hat{y}_i} = \frac{2 \cdot \mathrm{precision}(\mathbf{y}, \hat{\mathbf{y}}) \cdot \mathrm{recall}(\mathbf{y}, \hat{\mathbf{y}})}{\mathrm{precision}(\mathbf{y}, \hat{\mathbf{y}}) + \mathrm{recall}(\mathbf{y}, \hat{\mathbf{y}})} \in [0, 1], \qquad (1)$$
where $\mathrm{precision}(\mathbf{y}, \hat{\mathbf{y}}) = \sum_{i=1}^t y_i \hat{y}_i / \sum_{i=1}^t \hat{y}_i$, $\mathrm{recall}(\mathbf{y}, \hat{\mathbf{y}}) = \sum_{i=1}^t y_i \hat{y}_i / \sum_{i=1}^t y_i$, and where $0/0 = 1$ by definition. Compared to measures like the error rate in binary classification, maximizing the F-measure enforces a better balance between performance on the minority and majority class; therefore, it is more suitable in the case of imbalanced data. Optimizing for such an imbalanced measure is very important in many real-world applications where positive labels are significantly less frequent than negative ones. It can also be generalized to a weighted harmonic average of precision and recall. Yet, for the sake of simplicity, we stick to the unweighted mean, which is often referred to as the F1-score or the F1-measure.
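As a quick illustration (ours, not from the paper), (1) can be computed as follows; note the 0/0 = 1 convention for the degenerate case with no positive labels and no positive predictions.

def f_measure(y, y_hat):
    """F-measure of binary predictions y_hat against labels y, as in (1)."""
    tp = sum(yi * yhi for yi, yhi in zip(y, y_hat))
    denom = sum(y) + sum(y_hat)
    return 1.0 if denom == 0 else 2.0 * tp / denom

# e.g. f_measure([1, 0, 1, 1], [1, 0, 0, 1]) == 2*2 / (3+2) == 0.8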
Given the importance and usefulness of the F-measure, it is natural to look for learning algorithms
that perform optimally in the sense of this measure. However, optimizing the F-measure is a quite
challenging problem, especially because the measure is not decomposable over the binary predictions. This problem has received increasing attention in recent years and has been tackled by several
authors [19, 20, 18, 10, 11]. However, most of this work has been done in the standard setting of
batch learning.
In this paper, we study the problem of F-measure optimization in the setting of online learning
[4, 2], which is becoming increasingly popular in machine learning. In fact, there are many applications in which training data is arriving progressively over time, and models need to be updated and
maintained incrementally. In our setting, this means that in each round $t$ the learner first outputs a prediction $\hat{y}_t$ and then observes the true label $y_t$. Formally, the protocol in round $t$ is as follows:
1. first an instance $x_t \in \mathcal{X}$ is observed by the learner,
2. then the predicted label $\hat{y}_t$ for $x_t$ is computed on the basis of the first $t$ instances $(x_1, \ldots, x_t)$, the $t - 1$ labels $(y_1, \ldots, y_{t-1})$ observed so far, and the corresponding predictions $(\hat{y}_1, \ldots, \hat{y}_{t-1})$,
3. finally, the label $y_t$ is revealed to the learner.
The goal of the learner is then to maximize
$$F_{(t)} = F\big((y_1, \ldots, y_t), (\hat{y}_1, \ldots, \hat{y}_t)\big) \qquad (2)$$
over time. Optimizing the F-measure in an online fashion is challenging mainly because of the non-decomposability of the measure, and the fact that $\hat{y}_t$ cannot be changed after round $t$.
As a potential application of online F-measure optimization consider the recommendation of news
from RSS feeds or tweets [1]. Besides, it is worth mentioning that online methods are also relevant
in the context of big data and large-scale learning, where the volume of data, despite being finite,
prevents from processing each data point more than once [21, 7]. Treating the data as a stream, online
algorithms can then be used as single-pass algorithms. Note, however, that single-pass algorithms
are evaluated only at the end of the training process, unlike online algorithms that are supposed to
learn and predict simultaneously.
We propose an online algorithm for F-measure optimization, which is not only very efficient but also
easy to implement. Unlike other methods, our algorithm does not require extra validation data for
tuning a threshold (that separates between positive and negative predictions), and therefore allows
the entire data to be used for training. We provide a formal analysis of the convergence properties
of our algorithm and prove its statistical consistency under different assumptions on the learning
process. Moreover, first experimental results are presented, showing that our method performs well
in practice.
2 Formal Setting
In this paper, we consider a stochastic setting in which $(x_1, y_1), \ldots, (x_t, y_t)$ are assumed to be i.i.d. samples from some unknown distribution $\rho(\cdot)$ on $\mathcal{X} \times \mathcal{Y}$, where $\mathcal{Y} = \{0, 1\}$ is the label space and $\mathcal{X}$ is some instance space. We denote the marginal distribution of the feature vector $X$ by $\mu(\cdot)$.¹ Then, the posterior probability of the positive class, i.e., the conditional probability that $Y = 1$ given $X = x$, is $\eta(x) = \mathbf{P}(Y = 1 \mid X = x) = \frac{\rho(x, 1)}{\rho(x, 0) + \rho(x, 1)}$. The prior distribution of class 1 can be written as $\pi_1 = \mathbf{P}(Y = 1) = \int_{x \in \mathcal{X}} \eta(x)\, d\mu(x)$.
Let $\mathcal{B} = \{f : \mathcal{X} \to \{0, 1\}\}$ be the set of all binary classifiers over the set $\mathcal{X}$. The F-measure of a binary classifier $f \in \mathcal{B}$ is calculated as
$$F(f) = \frac{2 \int_{\mathcal{X}} \eta(x) f(x)\, d\mu(x)}{\int_{\mathcal{X}} \eta(x)\, d\mu(x) + \int_{\mathcal{X}} f(x)\, d\mu(x)} = \frac{2\, \mathbf{E}[\eta(X) f(X)]}{\mathbf{E}[\eta(X)] + \mathbf{E}[f(X)]}.$$
According to [19], the expected value of (1) converges to $F(f)$ with $t \to \infty$ when $f$ is used to calculate $\hat{y}$, i.e., $\hat{y}_t = f(x_t)$. Thus, $\lim_{t \to \infty} \mathbf{E}\big[F\big((y_1, \ldots, y_t), (f(x_1), \ldots, f(x_t))\big)\big] = F(f)$.
Now, let $\mathcal{G} = \{g : \mathcal{X} \to [0, 1]\}$ denote the set of all probabilistic binary classifiers over the set $\mathcal{X}$, and let $\mathcal{T} \subseteq \mathcal{B}$ denote the set of binary classifiers that are obtained by thresholding a classifier $g \in \mathcal{G}$, that is, classifiers of the form
$$g^\tau(x) = [\![g(x) \ge \tau]\!] \qquad (3)$$
for some threshold $\tau \in [0, 1]$, where $[\![\cdot]\!]$ is the indicator function that evaluates to 1 if its argument is true and 0 otherwise.
¹ $\mathcal{X}$ is assumed to exhibit the required measurability properties.
According to [19], the optimal F-score computed as $\max_{f \in \mathcal{B}} F(f)$ can be achieved by a thresholded classifier. More precisely, let us define the thresholded F-measure as
$$F(\tau) = F(\eta^\tau) = \frac{2 \int_{\mathcal{X}} \eta(x) [\![\eta(x) \ge \tau]\!]\, d\mu(x)}{\int_{\mathcal{X}} \eta(x)\, d\mu(x) + \int_{\mathcal{X}} [\![\eta(x) \ge \tau]\!]\, d\mu(x)} = \frac{2\, \mathbf{E}\big[\eta(X)\, [\![\eta(X) \ge \tau]\!]\big]}{\mathbf{E}[\eta(X)] + \mathbf{E}\big[[\![\eta(X) \ge \tau]\!]\big]} \qquad (4)$$
Then the optimal threshold $\tau^*$ can be obtained as
$$\tau^* = \operatorname*{argmax}_{0 \le \tau \le 1} F(\tau). \qquad (5)$$
Clearly, for the classifier in the form of (3) with $g(x) = \eta(x)$ and $\tau = \tau^*$, we have $F(g^\tau) = F(\tau^*)$.
Then, as shown by [19] (see their Theorem 4), the performance of any binary classifier $f \in \mathcal{B}$ cannot exceed $F(\tau^*)$, i.e., $F(f) \le F(\tau^*)$ for all $f \in \mathcal{B}$. Therefore, estimating posteriors first and adjusting a threshold afterward appears to be a reasonable strategy. In practice, this seems to be the most popular way of maximizing the F-measure in a batch mode; we call it the 2-stage F-measure maximization approach, or 2S for short. More specifically, the 2S approach consists of two steps: first, a classifier is trained for estimating the posteriors, and second, a threshold is tuned on the posterior estimates. For the time being, we are not interested in the training of this classifier but focus on the second step, that is, the labelling of instances via thresholding posterior probabilities. For doing this, suppose a finite set $\mathcal{D}_N = \{(x_i, y_i)\}_{i=1}^N$ of labeled instances is given as training information. Moreover, suppose estimates $\hat{p}_i = g(x_i)$ of the posterior probabilities $p_i = \eta(x_i)$ are provided by a classifier $g \in \mathcal{G}$. Next, one might define the F-score obtained by applying the threshold classifier $g^\tau$ on the data $\mathcal{D}_N$ as follows:
$$F(\tau; g, \mathcal{D}_N) = \frac{2 \sum_{i=1}^N y_i [\![\tau \le g(x_i)]\!]}{\sum_{i=1}^N y_i + \sum_{i=1}^N [\![\tau \le g(x_i)]\!]} \qquad (6)$$
In order to find an optimal threshold $\tau_N \in \operatorname*{argmax}_{0 \le \tau \le 1} F(\tau; g, \mathcal{D}_N)$, it suffices to search the finite set $\{\hat{p}_1, \ldots, \hat{p}_N\}$, which requires time $O(N \log N)$. In [19], it is shown that $F(\tau; g, \mathcal{D}_N) \xrightarrow{P} F(g^\tau)$ as $N \to \infty$ for any $\tau \in (0, 1)$, and [11] provides an even stronger result: If a classifier $g_{\mathcal{D}_N}$ is induced from $\mathcal{D}_N$ by an $L_1$-consistent learner,² and a threshold $\tau_N$ is obtained by maximizing (6) on an independent set $\mathcal{D}'_N$, then $F(g_{\mathcal{D}_N}^{\tau_N}) \xrightarrow{P} F(\tau^*)$ as $N \to \infty$ (under mild assumptions on the data distribution).
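A minimal sketch of this threshold-tuning step (our own illustration, ignoring ties among scores for simplicity): after sorting the predicted scores once, (6) can be evaluated at every candidate threshold, giving O(N log N) overall.

import numpy as np

def tune_threshold(scores, labels):
    """Return the threshold maximizing F(tau; g, D_N) from (6), searched
    over the finite candidate set of predicted scores."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    order = np.argsort(-scores)          # descending by predicted score
    tp = np.cumsum(labels[order])        # true positives when the i+1
                                         # highest-scoring points are predicted positive
    pred_pos = np.arange(1, len(labels) + 1)
    f = 2.0 * tp / (labels.sum() + pred_pos)
    best = int(np.argmax(f))
    return scores[order][best], f[best]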
3 Maximizing the F-Measure on a Population Level
In this section we assume that the data distribution is known. According to the analysis in the
previous section, optimizing the F-measure boils down to finding the optimal threshold ? ? . At this
point, an observation is in order.
Remark 1. In general, the function $F(\tau)$ is neither convex nor concave. For example, when $\mathcal{X}$ is finite, then the denominator and numerator of (4) are step functions, whence so is $F(\tau)$. Therefore, gradient methods cannot be applied for finding $\tau^*$.
Nevertheless, $\tau^*$ can be found based on a recent result of [20], who show that finding the root of
$$h(\tau) = \int_{x \in \mathcal{X}} \max(0, \eta(x) - \tau)\, d\mu(x) - \tau \pi_1 \qquad (7)$$
is a necessary and sufficient condition for optimality. Note that $h(\tau)$ is continuous and strictly decreasing, with $h(0) = \pi_1$ and $h(1) = -\pi_1$. Therefore, $h(\tau) = 0$ has a unique solution, which is $\tau^*$. Moreover, [20] also prove an interesting relationship between the optimal threshold and the F-measure induced by that threshold: $F(\tau^*) = 2\tau^*$.
The marginal distribution of the feature vectors, $\mu(\cdot)$, induces a distribution $\nu(\cdot)$ on the posteriors: $\nu(p) = \int_{x \in \mathcal{X}} [\![\eta(x) = p]\!]\, d\mu(x)$ for all $p \in [0, 1]$. By definition, $[\![\eta(x) = p]\!]$ is the Radon–Nikodym derivative $\frac{d\nu}{d\mu}$, and $\nu(p)$ the density of observing an instance $x$ for which the probability of the positive label is $p$. We shall write concisely $d\nu(p) = \nu(p)\, dp$. Since $\nu(\cdot)$ is an induced probability measure, the measurable transformation allows us to rewrite the notions introduced above in terms of $\nu(\cdot)$ instead of $\mu(\cdot)$; see, for example, Section 1.4 in [17]. For example, the prior probability $\int_{\mathcal{X}} \eta(x)\, d\mu$ can be written equivalently as $\int_0^1 p\, d\nu(p)$. Likewise, (7) can be rewritten as follows:
$$h(\tau) = \int_0^1 \max(0, p - \tau)\, d\nu(p) - \tau \int_0^1 p\, d\nu(p) = \int_\tau^1 (p - \tau)\, d\nu(p) - \tau \int_0^1 p\, d\nu(p) = \int_\tau^1 p\, d\nu(p) - \tau \left( \int_\tau^1 1\, d\nu(p) + \int_0^1 p\, d\nu(p) \right) \qquad (8)$$
² A learning algorithm, viewed as a map from samples $\mathcal{D}_N$ to classifiers $g_{\mathcal{D}_N}$, is called $L_1$-consistent w.r.t. the data distribution $\rho$ if $\lim_{N \to \infty} \mathbf{P}_{\mathcal{D}_N \sim \rho}\left( \int_{x \in \mathcal{X}} |g_{\mathcal{D}_N}(x) - \eta(x)|\, d\mu(x) > \epsilon \right) = 0$ for all $\epsilon > 0$.
Equation (8) will play a central role in our analysis. Note that precise knowledge of $\nu(\cdot)$ suffices to find the maxima of $F(\tau)$. This is illustrated by two examples presented in Appendix E, in which we assume specific distributions for $\nu(\cdot)$, namely uniform and Beta distributions.
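As an illustration (our own sketch, not the content of Appendix E): when ν is known, the root τ* of h can be computed numerically from (8). Here ν is taken to be a Beta density and SciPy handles the integrals; since h(0) = π₁ > 0 and h(1) = −π₁ < 0, bisection applies.

import numpy as np
from scipy import integrate, optimize
from scipy.stats import beta

def h(tau, nu_pdf):
    """h(tau) from (8) for a posterior density nu on [0, 1]."""
    pi1, _ = integrate.quad(lambda p: p * nu_pdf(p), 0.0, 1.0)       # prior pi_1
    tail, _ = integrate.quad(lambda p: (p - tau) * nu_pdf(p), tau, 1.0)
    return tail - tau * pi1

nu_pdf = beta(2, 5).pdf                               # an example choice of nu
tau_star = optimize.brentq(lambda t: h(t, nu_pdf), 0.0, 1.0)
print(tau_star, 2 * tau_star)                         # F(tau*) = 2 tau* by [20]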
4 Algorithmic Solution
In this section, we provide an algorithmic solution to the online F-measure maximization problem. For this, we shall need in each round $t$ some classifier $g_t \in \mathcal{G}$ that provides us with an estimate $\hat{p}_t = g_t(x_t)$ of the probability $\eta(x_t)$. We would like to stress again that the focus of our analysis is on optimal thresholding instead of classifier learning. Thus, we assume the sequence of classifiers $g_1, g_2, \ldots$ to be produced by an external online learner, for example, logistic regression trained by stochastic gradient descent.
As an aside, we note that F-measure maximization is not directly comparable with the task that is most often considered and analyzed in online learning, namely regret minimization [4]. This is mainly because the F-measure is a non-decomposable performance metric. In fact, the cumulative regret is a summation of a per-round regret $r_t$, which only depends on the prediction $\hat{y}_t$ and the true outcome $y_t$ [11]. In the case of the F-measure, the score $F_{(t)}$, and therefore the optimal prediction $\hat{y}_t$, depends on the entire history, that is, all observations and decisions made by the learner till time $t$. This is discussed in more detail in Section 6.
The most naive way of forecasting labels is to implement online learning as repeated batch learning, that is, to apply a batch learner (such as 2S) to $\mathcal{D}_t = \{(x_i, y_i)\}_{i=1}^t$ in each time step $t$. Obviously, however, this strategy is prohibitively expensive, as it requires storage of all data points seen so far (at least in mini-batches), as well as optimization of the threshold $\tau_t$ and re-computation of the classifier $g_t$ on an ever growing number of examples.
In the following, we propose a more principled technique to maximize the online F-measure. Our approach is based on the observation that $h(\tau^*) = 0$ and $h(\tau)(\tau - \tau^*) < 0$ for any $\tau \in [0, 1]$ such that $\tau \ne \tau^*$ [20]. Moreover, $h$ is a monotone decreasing continuous function. Therefore, finding the optimal threshold $\tau^*$ can be viewed as a root finding problem. In practice, however, $h(\tau)$ is not known and can only be estimated. Let us define $h(\tau, y, \hat{y}) = y\hat{y} - \tau(y + \hat{y})$. For now, assume $\eta(x)$ to be known and write concisely $\hat{h}(\tau) = h(\tau, y, [\![\eta(x) \ge \tau]\!])$. We can compute the expectation of $\hat{h}(\tau)$ with respect to the data distribution for a fixed threshold $\tau$ as follows:
$$\mathbf{E}\big[\hat{h}(\tau)\big] = \mathbf{E}\big[h(\tau, y, [\![\eta(x) \ge \tau]\!])\big] = \mathbf{E}\big[y\, [\![\eta(x) \ge \tau]\!] - \tau\,(y + [\![\eta(x) \ge \tau]\!])\big]$$
$$= \int_0^1 p\, [\![p \ge \tau]\!]\, d\nu(p) - \tau \left( \int_0^1 p\, d\nu(p) + \int_0^1 [\![p \ge \tau]\!]\, d\nu(p) \right) = \int_\tau^1 p\, d\nu(p) - \tau \left( \int_0^1 p\, d\nu(p) + \int_\tau^1 1\, d\nu(p) \right) = h(\tau) \qquad (9)$$
Thus, an unbiased estimate of $h(\tau)$ can be obtained by evaluating $\hat{h}(\tau)$ for an instance $x$. This suggests designing a stochastic approximation algorithm that is able to find the root of $h(\cdot)$, similarly to the Robbins–Monro algorithm [12]. Exploiting the relationship between the optimal F-measure and the optimal threshold, $F(\tau^*) = 2\tau^*$, we define the threshold in time step $t$ as
$$\tau_t = \frac{1}{2} F_{(t)} = \frac{a_t}{b_t}, \quad \text{where} \quad a_t = \sum_{i=1}^t y_i \hat{y}_i, \qquad b_t = \sum_{i=1}^t y_i + \sum_{i=1}^t \hat{y}_i. \qquad (10)$$
With this threshold, the first differences between thresholds, i.e. $\tau_{t+1} - \tau_t$, can be written as follows.
Proposition 2. If thresholds $\tau_t$ are defined according to (10) and $\hat{y}_{t+1}$ as $[\![\eta(x_{t+1}) > \tau_t]\!]$, then
$$(\tau_{t+1} - \tau_t)\, b_{t+1} = h(\tau_t, y_{t+1}, \hat{y}_{t+1}). \qquad (11)$$
The proof of Prop. 2 is deferred to Appendix A. According to (11), the method we obtain "almost" coincides with the update rule of the Robbins–Monro algorithm. There are, however, some notable differences. In particular, the sequence of coefficients, namely the values $1/b_{t+1}$, does not consist of predefined real values converging to zero (as fast as $1/t$). Instead, it consists of random quantities that depend on the history, namely the observed labels $y_1, \ldots, y_t$ and the predicted labels $\hat{y}_1, \ldots, \hat{y}_t$. Moreover, these "coefficients" are not independent of $h(\tau_t, y_{t+1}, \hat{y}_{t+1})$ either. In spite of these additional difficulties, we shall present a convergence analysis of our algorithm in the next section.
The pseudo-code of our online F-measure optimization algorithm, called Online F-measure Optimizer (OFO), is shown in Algorithm 1.

Algorithm 1 OFO
1: Select $g_0$ from $\mathcal{B}$, and set $\tau_0 = 0$
2: for $t = 1 \to \infty$ do
3:    Observe the instance $x_t$
4:    $\hat{p}_t \leftarrow g_{t-1}(x_t)$  ▷ estimate posterior
5:    $\hat{y}_t \leftarrow [\![\hat{p}_t \ge \tau_{t-1}]\!]$  ▷ current prediction
6:    Observe label $y_t$
7:    Calculate $F_{(t)} = \frac{2 a_t}{b_t}$ and $\tau_t = \frac{a_t}{b_t}$
8:    $g_t \leftarrow A(g_{t-1}, x_t, y_t)$  ▷ update the classifier
9: return $\tau_T$

The forecast rule can be written in the form $\hat{y}_t = [\![p_t \ge \tau_{t-1}]\!]$ for $x_t$, where the threshold is defined in (10) and $p_t = \eta(x_t)$. In practice, we use $\hat{p}_t = g_{t-1}(x_t)$ as an estimate of the true posterior $p_t$. In line 8 of the code, an online learner $A : \mathcal{G} \times \mathcal{X} \times \mathcal{Y} \to \mathcal{G}$ is assumed, which produces classifiers $g_t$ by incrementally updating the current classifier with the newly observed example, i.e., $g_t = A(g_{t-1}, x_t, y_t)$. In our experimental study, we shall test and compare various state-of-the-art online learners as possible choices for $A$.
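To make the procedure concrete, here is a minimal Python sketch of Algorithm 1 (our own illustration). The online learner A is instantiated as logistic regression trained by stochastic gradient descent, which is one of the choices evaluated in Section 7; the learning rate is a hypothetical value.

import numpy as np

def ofo(stream, d, lr=0.1):
    """Online F-measure Optimizer (Algorithm 1). `stream` yields (x_t, y_t)
    pairs with x_t in R^d and y_t in {0, 1}; the learner A is logistic
    regression updated by one SGD step per round (lr is hypothetical)."""
    w = np.zeros(d)             # g_0: logistic model with all-zero weights
    a, b, tau = 0, 0, 0.0       # a_t and b_t from (10); tau_0 = 0
    for x, y in stream:
        z = np.clip(w @ x, -30.0, 30.0)         # clip logit for numerical safety
        p_hat = 1.0 / (1.0 + np.exp(-z))        # posterior estimate g_{t-1}(x_t)
        y_hat = int(p_hat >= tau)               # prediction with threshold tau_{t-1}
        a += y * y_hat                          # observe y_t, update statistics
        b += y + y_hat
        tau = a / b if b > 0 else 0.0           # tau_t = a_t / b_t = F_(t) / 2
        w += lr * (y - p_hat) * x               # learner update A(g_{t-1}, x_t, y_t)
    return tau, w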
5 Consistency
In this section, we provide an analysis of the online F-measure optimizer proposed in the previous section. More specifically, we show the statistical consistency of the OFO algorithm: The sequences of online thresholds and F-scores produced by this algorithm converge, respectively, to the optimal threshold $\tau^*$ and the optimal thresholded F-score $F(\tau^*)$ in probability. As a first step, we prove this result under the assumption of knowledge about the true posterior probabilities; then, in a second step, we consider the case of estimated posteriors.
Theorem 3. Assume the posterior probabilities $p_t = \eta(x_t)$ of the positive class to be known in each step of the online learning process. Then, the sequences of thresholds $\tau_t$ and online F-scores $F_{(t)}$ produced by OFO both converge in probability to their optimal values $\tau^*$ and $F(\tau^*)$, respectively: For any $\epsilon > 0$, we have $\lim_{t \to \infty} \mathbf{P}\big(|\tau_t - \tau^*| > \epsilon\big) = 0$ and $\lim_{t \to \infty} \mathbf{P}\big(|F_{(t)} - F(\tau^*)| > \epsilon\big) = 0$.
Here is a sketch of the proof of this theorem, the details of which can be found in the supplementary material (Appendix B):
• We focus on $\{\tau_t\}_{t=1}^\infty$, which is a stochastic process the filtration of which is defined as $\mathcal{F}_t = \sigma(\{y_1, \ldots, y_t, \hat{y}_1, \ldots, \hat{y}_t\})$. For this filtration, one can show that $\hat{h}(\tau_t)$ is $\mathcal{F}_t$-measurable and $\mathbf{E}\big[\hat{h}(\tau_t) \mid \mathcal{F}_t\big] = h(\tau_t)$ based on (9).
• As a first step, we can decompose the update rule given in (11) as follows: $\mathbf{E}\big[\frac{1}{b_{t+1}} \hat{h}(\tau_t) \mid \mathcal{F}_t\big] = \frac{1}{b_t + 2}\, h(\tau_t) + O\big(\frac{1}{b_t^2}\big)$ conditioned on the filtration $\mathcal{F}_t$ (see Lemma 7).
• Next, we show that the sequence $1/b_t$ behaves similarly to $1/t$, in the sense that $\sum_{t=1}^\infty \mathbf{E}\big[1/b_t^2\big] < \infty$ (see Lemma 8). Moreover, one can show that $\sum_{t=1}^\infty \mathbf{E}[1/b_t] \ge \sum_{t=1}^\infty \frac{1}{2t} = \infty$.
• Although $h(\tau)$ is not differentiable on $[0, 1]$ in general (it can be piecewise linear, for example), one can show that its finite difference is between $-1 - \pi_1$ and $-\pi_1$ (see Proposition 9 in the appendix). As a consequence of this result, our process defined in (11) does not get stuck even close to $\tau^*$.
• The "main part" of the proof is devoted to analyzing the properties of the sequence $\kappa_t = \mathbf{E}\big[(\tau_t - \tau^*)^2\big]$, for which we show that $\lim_{t \to \infty} \kappa_t = 0$, which is sufficient for the statement of the theorem. Our proof follows the convergence analysis of [12]. Nevertheless, our analysis essentially differs from theirs, since in our case, the coefficients cannot be chosen freely. Instead, as explained before, they depend on the labels observed and predicted so far. In addition, the noisy estimation of $h(\cdot)$ depends on the labels, too, but the decomposition step allows us to handle this undesired effect.
Remark 4. In principle, the Robbins–Monro algorithm can be applied for finding the root of $h(\cdot)$ as well. This yields an update rule similar to (11), with $1/b_{t+1}$ replaced by $C/t$ for a constant $C > 0$. In this case, however, the convergence of the online F-measure is difficult to analyze (if at all), because the empirical process cannot be written in a nice form. Moreover, as was found in the analysis, the coefficient $C$ should be set on the order of $1/\pi_1$ (see Proposition 9 and the choice of $\{k_t\}$ at the end of the proof of Theorem 3). Yet, since $\pi_1$ is not known beforehand, it needs to be estimated from the samples, which implies that the coefficients are not independent of the noisy evaluations of $h(\cdot)$, just like in the case of the OFO algorithm. Interestingly, OFO seems to properly adjust the values $1/b_{t+1}$ in an adaptive manner ($b_t$ is a sum of two terms, the first of which is $t\pi_1$ in expectation), which is a very nice property of the algorithm. Empirically, based on synthetic data, we found the performance of the original Robbins–Monro algorithm to be on par with OFO.
As already announced, we are now going to relax the assumption of known posterior probabilities $p_t = \eta(x_t)$. Instead, estimates $\hat{p}_t = g_t(x_t) \approx p_t$ of these probabilities are obtained by classifiers $g_t$ that are provided by the external online learner in Algorithm 1. More concretely, assume an online learner $A : \mathcal{G} \times \mathcal{X} \times \mathcal{Y} \to \mathcal{G}$, where $\mathcal{G}$ is the set of probabilistic classifiers. Given a current model $g_t$ and a new example $(x_t, y_t)$, this learner produces an updated classifier $g_{t+1} = A(g_t, x_t, y_t)$. Showing a consistency result for this scenario requires some assumptions on the online learner. With this formal definition of online learner, a statistical consistency result similar to Theorem 3 can be shown. The proof of the following theorem is again deferred to the supplementary material (Appendix C).
Theorem 5. Assume that the classifiers $(g_t)_{t=1}^\infty$ in the OFO framework are provided by an online learner for which the following holds: There is a $\lambda > 0$ such that $\mathbf{E}\big[\int_{x \in \mathcal{X}} |\eta(x) - g_t(x)|\, d\mu(x)\big] = O(t^{-\lambda})$. Then, $F_{(t)} \xrightarrow{P} F(\tau^*)$ and $\tau_t \xrightarrow{P} \tau^*$.
This theorem's requirement on the online learner is stronger than what is assumed by [11] and recalled in Footnote 2. First, the learner is trained online and not in a batch mode. Second, we also require that the $L_1$ error of the learner goes to 0 with a convergence rate of order $t^{-\lambda}$.
It might be interesting to note that a universal rate of convergence cannot be established without assuming regularity properties of the data distribution, such as smoothness via absolute continuity. Results of that kind are beyond the scope of this study. Instead, we refer the reader to [5, 6] for details on $L_1$ consistency and its connection to the rate of convergence.
6 Discussion
Regret optimization and stochastic approximation: Stochastic approximation algorithms can be applied for finding the optimum of (4) or, equivalently, for finding the unique root of (8) based on noisy evaluations; the latter formulation is better suited for the classic version of the Robbins–Monro root finding algorithm [12]. These algorithms are iterative methods whose analysis focuses on the difference of $F(\tau_t)$ from $F(\tau^*)$, where $\tau_t$ denotes the estimate of $\tau^*$ in iteration $t$, whereas our online setting is concerned with the distance of $F((y_1, \ldots, y_t), (\hat{y}_1, \ldots, \hat{y}_t))$ from $F(\tau^*)$, where $\hat{y}_i$ is the prediction for $y_i$ in round $i$. This difference is crucial because $F(\tau_t)$ only depends on $\tau_t$ and, in addition, if $\tau_t$ is close to $\tau^*$ then $F(\tau_t)$ is also close to $F(\tau^*)$ (see [19] for concentration properties), whereas in the online F-measure optimization setup, $F((y_1, \ldots, y_t), (\hat{y}_1, \ldots, \hat{y}_t))$ can be very different from $F(\tau^*)$ even if the current estimate $\tau_t$ is close to $\tau^*$, in case the number of previous incorrect predictions is large.
In online learning and online optimization it is common to work with the notion of (cumulative) regret. In our case, this notion could be interpreted either as $\sum_{i=1}^t |F((y_1, \ldots, y_i), (\hat{y}_1, \ldots, \hat{y}_i)) - F(\tau^*)|$ or as $\sum_{i=1}^t |y_i - \hat{y}_i|$. After division by $t$, the former becomes the average accuracy of the F-measure over time and the latter the accuracy of our predictions. The former is hard to interpret because $|F((y_1, \ldots, y_i), (\hat{y}_1, \ldots, \hat{y}_i)) - F(\tau^*)|$ itself is an aggregate measure of our performance over the first $t$ rounds, which thus makes no sense to aggregate again. The latter, on the other hand, differs qualitatively from our ultimate goal; in fact, $|F((y_1, \ldots, y_t), (\hat{y}_1, \ldots, \hat{y}_t)) - F(\tau^*)|$ is the alternative measure that we are aiming to optimize for instead of the accuracy.
Table 1: Main statistics of the benchmark datasets and one-pass F-scores obtained by the OFO and 2S methods on various datasets. Bold numbers (in the original typesetting) indicate a significant difference between the performance of the OFO and 2S methods; the significance level is set to one sigma, estimated based on the repetitions.

                                                            LogReg        Pegasos       Perceptron
Dataset        #instances      #pos       #neg  #features  OFO     2S    OFO     2S    OFO     2S
gisette              7000      3500       3500       5000  0.954  0.955  0.950  0.935  0.935  0.920
news20.bin          19996      9997       9999    1355191  0.879  0.876  0.879  0.883  0.908  0.930
Replab              45671     10797      34874     353754  0.924  0.923  0.926  0.928  0.914  0.914
WebspamUni         350000    212189     137811        254  0.912  0.918  0.914  0.910  0.927  0.912
epsilon            500000    249778     250222       2000  0.878  0.872  0.884  0.886  0.862  0.872
covtype            581012    297711     283301         54  0.761  0.762  0.754  0.760  0.732  0.719
url               2396130    792145    1603985    3231961  0.962  0.963  0.951  0.950  0.971  0.972
SUSY              5000000   2287827    2712173         18  0.762  0.762  0.754  0.745  0.710  0.720
kdda              8918054   7614730    1303324   20216830  0.927  0.926  0.921  0.926  0.913  0.927
kddb             20012498  17244034    2768464   29890095  0.934  0.934  0.930  0.929  0.923  0.928
Online optimization of non-decomposable measures: Online optimization of the F-measure can be
seen as a special case of optimizing non-decomposable loss functions as recently considered by [9].
Their framework essentially differs from ours in several points. First, regarding the data generation
process, the adversarial setup with oblivious adversary is assumed, unlike our current study where
a stochastic setup is assumed. From this point of view, their assumption is more general since
the oblivious adversary captures the stochastic setup. Second, the set of classifiers is restricted to
differentiable parametric functions, which may not include the F-measure maximizer. Therefore,
their proof of vanishing regret does in general not imply convergence to the optimal F-score. Seen
from this point of view, their result is weaker than our proof of consistency (i.e., convergence to
the optimal F-measure in probability if the posterior estimates originate from a consistent learner).
There are some other non-decomposable performance measures which are intensively used in many
practical applications. Their optimization had already been investigated in the online or one-pass
setup. The most notable such measure might be the area under the ROC curve (AUC) which had
been investigated in an online learning framework by [21, 7].
7 Experiments
In this section, the performance of the OFO algorithm is evaluated in a one-pass learning scenario
on benchmark datasets, and compared with the performance of the 2-stage F-measure maximization
approach (2S) described in Section 2. We also assess the rate of convergence of the OFO algorithm
in a pure online learning setup.3
The online learner $A$ in OFO was implemented in different ways, using logistic regression (LogReg), the classical Perceptron algorithm [13], and an online linear SVM called Pegasos [14]. In the case of LogReg, we applied the algorithm introduced in [15], which handles $L_1$ and $L_2$ regularization. The hyperparameters of the methods and the validation procedures are described below and in more detail in Appendix D. If necessary, the raw outputs of the learners were turned into probability estimates, i.e., they were rescaled to $[0, 1]$ using the logistic transform.
We used in the experiments nine datasets taken from the LibSVM repository of binary classification
tasks.4 Many of these datasets are commonly used as benchmarks in information retrieval where the
F-score is routinely applied for model selection. In addition, we also used the textual data released
in the Replab challenge of identifying relevant tweets [1]. We generated the features used by the
winner team [8]. The main statistics of the datasets are summarized in Table 1.
³ Additional results of experiments conducted on synthetic data are presented in Appendix F.
⁴ http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html
[Figure 1 shows four panels (WebspamUni, kdda, url, SUSY) plotting the (online) F-score against the number of samples on a logarithmic axis for Online+LogReg, Online+Pegasos, and Online+Perceptron, with dashed lines (One-pass+LogReg, One-pass+Pegasos, One-pass+Perceptron) marking the corresponding one-pass scores.]
Figure 1: Online F-scores obtained by the OFO algorithm on various datasets. The dashed lines represent the one-pass performance of the OFO algorithm from Table 1, which we consider as baseline.
One-pass learning. In one-pass learning, the learner is allowed to read the training data only once, whence online learners are commonly used in this setting. We ran OFO with each of the three classifiers trained on 80% of the data. The learner obtained by OFO is of the form $g_t^{\tau_t}$, where $t$ is the number of training samples. The remaining 20% of the data was used to evaluate $g_t^{\tau_t}$ in terms of the F-measure. We ran every method on 10 randomly shuffled versions of the data and averaged the results. The means of the F-scores computed on the test data are shown in Table 1. As a baseline, we applied the 2S approach. More concretely, we trained the same set of learners on 60% of the data and validated the threshold on 20% by optimizing (6). Since both approaches are consistent, the performance of OFO should be on par with the performance of 2S. This is confirmed by the results, in which significant differences are observed in only 7 of 30 cases. These differences in performance might be explained by the finiteness of the data. The advantage of our approach over 2S is that there is no need for validation and the data needs to be read only once; therefore it can be applied in a pure one-pass learning scenario. The hyperparameters of the learning methods are chosen based on the performance of 2S. We tuned the hyperparameters over a wide range of values, which we report in Appendix D.
Online learning. The OFO algorithm has also been evaluated in the online learning scenario in terms of the online F-measure (2). The goal of this experiment is to assess the convergence rate of OFO. Since the optimal F-measure is not known for the datasets, we considered the test F-scores reported in Table 1. The results are plotted in Figure 1 for four benchmark datasets (the plots for the remaining datasets can be found in Appendix G). As can be seen, the online F-score converges to the test F-score obtained in one-pass evaluation in almost every case. There are some exceptions in the case of Pegasos and the Perceptron. This might be explained by the fact that SVM-based methods as well as the Perceptron tend to produce poor probability estimates in general (which is a main motivation for calibration methods turning output scores into valid probabilities [3]).
8 Conclusion and Future Work
This paper studied the problem of online F-measure optimization. Compared to many conventional online learning tasks, this is a specifically challenging problem, mainly because of the non-decomposable nature of the F-measure. We presented a simple algorithm that converges to the optimal F-score when the posterior estimates are provided by a sequence of classifiers whose $L_1$ error converges to zero as fast as $t^{-\lambda}$ for some $\lambda > 0$. As a key feature of our algorithm, we note that it is a purely online approach; moreover, unlike approaches such as 2S, there is no need for a hold-out validation set in batch mode. Our promising results from extensive experiments validate the empirical efficacy of our algorithm.
For future work, we plan to extend our online optimization algorithm to a broader family of complex performance measures which can be expressed as ratios of linear combinations of true positive, false positive, false negative and true negative rates [10]; the F-measure also belongs to this family. Moreover, going beyond consistency, we plan to analyze the rate of convergence of our OFO algorithm. This might be doable thanks to several nice properties of the function $h(\tau)$. Finally, an intriguing question is what can be said about the case when some bias is introduced because the classifier $g_t$ does not converge to $\eta$.
under grant no. 2013/09/D/ST6/03917.
8
References
[1] E. Amig?o, J. C. de Albornoz, I. Chugur, A. Corujo, J. Gonzalo, T. Mart??n-Wanton, E. Meij,
M. de Rijke, and D. Spina. Overview of RepLab 2013: Evaluating online reputation monitoring
systems. In CLEF, volume 8138, pages 333?352, 2013.
[2] S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed
bandit problems. Foundations and Trends in Machine Learning, 5(1):1?122, 2012.
? o, and Gy. Szarvas. Tune and mix: Learning to rank using
[3] R. Busa-Fekete, B. K?egl, T. Eltet?
ensembles of calibrated multi-class classifiers. Machine Learning, 93(2?3):261?292, 2013.
[4] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University
Press, 2006.
[5] L. Devroye and L. Gy?orfi. Nonparametric Density Estimation: The L1 View. Wiley, NY, 1985.
[6] L. Devroye, L. Gy?orfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer,
NY, 1996.
[7] W. Gao, R. Jin, S. Zhu, and Z.-H. Zhou. One-pass AUC optimization. In ICML, volume 30:3,
pages 906?914, 2013.
[8] V. Hangya and R. Farkas. Filtering and polarity detection for reputation management on tweets.
In Working Notes of CLEF 2013 Evaluation Labs and Workshop, 2013.
[9] P. Kar, H. Narasimhan, and P. Jain. Online and stochastic gradient methods for nondecomposable loss functions. In NIPS, 2014.
[10] N. Nagarajan, S. Koyejo, R. Ravikumar, and I. Dhillon. Consistent binary classification with
generalized performance metrics. In NIPS, pages 2744?2752, 2014.
[11] H. Narasimhan, R. Vaish, and Agarwal S. On the statistical consistency of plug-in classifiers
for non-decomposable performance measures. In NIPS, 2014.
[12] H. Robbins and S. Monro. A stochastic approximation method. Ann. Math. Statist., 22(3):400?
407, 1951.
[13] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization
in the brain. Psychological Review, 65(6):386?408, 1958.
[14] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver
for SVM. In ICML, pages 807?814, 2007.
[15] Y. Tsuruoka, J. Tsujii, and S. Ananiadou. Stochastic gradient descent training for L1regularized log-linear models with cumulative penalty. In ACL, pages 477?485, 2009.
[16] C.J. van Rijsbergen. Foundation and evalaution. Journal of Documentation, 30(4):365?373,
1974.
[17] S. R. S. Varadhan. Probability Theory. New York University, 2000.
[18] W. Waegeman, K. Dembczy?nski, A. Jachnik, W. Cheng, and E. H?ullermeier. On the Bayesoptimality of F-measure maximizers. Journal of Machine Learning Research, 15(1):3333?
3388, 2014.
[19] N. Ye, K. M. A. Chai, W. S. Lee, and H. L. Chieu. Optimizing F-measure: A tale of two
approaches. In ICML, 2012.
[20] M. Zhao, N. Edakunni, A. Pocock, and G. Brown. Beyond Fano?s inequality: Bounds on the
optimal F-score, BER, and cost-sensitive risk and their implications. JMLR, pages 1033?1090,
2013.
[21] P. Zhao, S. C. H. Hoi, R. Jin, and T. Yang. Online AUC maximization. In ICML, pages
233?240, 2011.
A Market Framework for Eliciting Private Data
Bo Waggoner
Harvard SEAS
[email protected]
Rafael Frongillo
University of Colorado
[email protected]
Jacob Abernethy
University of Michigan
[email protected]
Abstract
We propose a mechanism for purchasing information from a sequence of participants. The participants may simply hold data points they wish to sell, or may have
more sophisticated information; either way, they are incentivized to participate as
long as they believe their data points are representative or their information will
improve the mechanism?s future prediction on a test set. The mechanism, which
draws on the principles of prediction markets, has a bounded budget and minimizes generalization error for Bregman divergence loss functions. We then show
how to modify this mechanism to preserve the privacy of participants? information: At any given time, the current prices and predictions of the mechanism reveal
almost no information about any one participant, yet in total over all participants,
information is accurately aggregated.
1
Introduction
A firm that relies on the ability to make difficult predictions can gain a lot from a large collection
of data. The goal is often to estimate values $y \in \mathcal{Y}$ given observations $x \in \mathcal{X}$ according to an appropriate class of hypotheses $\mathcal{F}$ describing the relationship between $x$ and $y$ (for example, $y = a \cdot x + b$ for linear regression). In classic statistical learning theory, the goal is formalized as attempting to approximately solve
$$\min_{f \in \mathcal{F}} \; \mathbb{E}_{x,y}\, \mathrm{Loss}(f; (x, y)) \tag{1}$$
where $\mathrm{Loss}(\cdot)$ is an appropriate inutility function and $(x, y)$ is drawn from an unknown distribution.
In the present paper we are concerned with the case in which the data are not drawn or held by a
central authority but are instead inherently distributed. By this we mean that the data is (disjointly)
partitioned across a set of agents, with agent $i$ privately possessing some portion of the dataset $S_i$, and agents have no obvious incentive to reveal this data to the firm seeking it. The vast swaths of data available in our personal email accounts could provide massive benefits to a range of companies, for example, but users are typically loath to provide account credentials, even when asked politely.
We will be concerned with the design of financial mechanisms that provide a community of agents, each holding a private set of data, an incentive to contribute to the solution of a large learning or prediction task. Here we use the term "mechanism" to mean an algorithmic interface that can receive and answer queries, as well as engage in monetary exchange (deposits and payouts). Our aim will be to design such a mechanism that satisfies the following three properties:
1. The mechanism is efficient in that it approaches a solution to (1) as the amount of data and
participation grows while spending a constant, fixed total budget.
2. The mechanism is incentive-compatible in the sense that agents are rewarded when their
contributions provide marginal value in terms of improved hypotheses, and are not rewarded for bad or misleading information.
3. The mechanism provides reasonable privacy guarantees, so that no agent $j$ (or outside observer) can manipulate the mechanism in order to infer the contributions of agent $i \neq j$.
Ultimately we would like our mechanism to approach the performance of a learning algorithm that
had direct access to all the data, while only spending a constant budget to acquire data and improve
predictions and while protecting participants' privacy.
Our construction relies on the recent surge in literature on prediction markets [13, 14, 19, 20],
popular for some time in the field of economics and recently studied in great detail in computer
science [8, 16, 6, 15, 18, 1]. A prediction market is a mechanism designed for the purpose of
information aggregation, particularly when there is some underlying future event about which many
members of the population may have private and useful information. For instance, it may elicit
predictions about which team will win an upcoming sporting event, or which candidate will win an
election. These predictions are eventually scored on the actual outcome of the event.
Applying these prediction market techniques allows participants to essentially "trade in a market" based on their data. (This approach is similar to prior work on crowdsourcing contests [3].) Members of the population have private information, just as with prediction markets (in this case, data points or beliefs), and the goal is to incentivize them to reveal and aggregate that information into a final
hypothesis or prediction. Their final profits are tied to the outcome of a test set of data, with each
participant being paid in accordance with how much their information improved the performance
on the test set. Our techniques depart from the framework of [3] in two significant aspects: (a) we
focus on the particular problem of data aggregation, and most of our results take advantage of kernel
methods; and (b) our mechanisms are the first to combine differential privacy guarantees with data
aggregation in a prediction-market framework.
This framework will provide efficiency and truthfulness. We will also show how to achieve privacy
in many scenarios. We will give mechanisms where the prices and predictions published satisfy $(\epsilon, \delta)$-differential privacy [10] with respect to each participant's data. The mechanism's output can still give reasonable predictions while no observer can infer much about any participant's input data.
2 Mechanisms for Eliciting and Aggregating Data
We now give a broad description of the mechanism we will study. In brief, we imagine a central authority (the mechanism, or market) maintaining a hypothesis $f^t$ representing the current aggregation of all the contributions made thus far. A new (or returning) participant may query $f^t$ at no cost, perhaps evaluating the quality of the predictions on a privately-held dataset, and can then propose an update $df^{t+1}$ to $f^t$ that possibly requires an investment (a "bet"). Bets are evaluated at the close of the market when a true data sample is generated (analogous to a test set), and payouts are distributed according to the quality of the updates.
After describing this initial framework as Mechanism 1, which is based loosely on the setting of [3], we turn our attention to the special case in which our hypotheses must lie in a Reproducing Kernel Hilbert Space (RKHS) [17] for a given kernel $k(\cdot, \cdot)$. This kernel-based "nonparametric mechanism" is particularly well-suited for the problem of data aggregation, as the betting space of the participants consists essentially of updates of the form $df^t = \beta_t\, k(z_t, \cdot)$, where $z_t$ is the data object offered by the participant and $\beta_t \in \mathbb{R}$ is the "magnitude" of the bet.
A drawback of Mechanism 1 is the lack of privacy guarantees associated with the betting protocol: utilizing one's data to make bets or investments in the mechanism can lead to a loss of privacy for the owner of that data. When a participant submits a bet of the form $df^t = \beta_t\, k(z_t, \cdot)$, where $z_t$ could contain sensitive personal information, another participant may be able to infer $z_t$ by querying the mechanism. One of the primary contributions of the present work, detailed in Section 3, is a technique to allow for productive participation in the mechanism while maintaining a guarantee on the privacy of the data submitted.
2.1 The General Template
There is a space of examples $\mathcal{X} \times \mathcal{Y}$, where $x \in \mathcal{X}$ are features and $y \in \mathcal{Y}$ are labels. The mechanism designer chooses a function space $\mathcal{F}$ consisting of $f : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$, assumed to have Hilbert space structure; one may view $\mathcal{F}$ as either the hypothesis class or the associated loss class, that is, where $f_h(x, y)$ measures the loss/performance of hypothesis $h$ on observation $x$ and label $y$. In each case we will refer to $f \in \mathcal{F}$ as a hypothesis, eliding the distinction between $f_h$ and $h$.
The pricing scheme of the mechanism relies on a convex cost function $C_x(\cdot) : \mathcal{F} \to \mathbb{R}$ which is parameterized by elements $x \in \mathcal{X}$ but whose domain is the set of hypotheses $\mathcal{F}$. The cost function is publicly available and determined in advance. The interaction with the mechanism is a sequential process of querying and betting. On round $t \geq 1$ the mechanism publishes a hypothesis $f^{t-1}$, the "state" of the market, which participants may query. Each participant arrives sequentially, and on round $t$ a participant may place a "bet" $df^t \in \mathcal{F}$, also called a "trade" or "update", modifying the hypothesis $f^{t-1} \to f^t = f^{t-1} + df^t$. Finally participation ends and the mechanism samples (or reveals) a test example$^1$ $(x, y)$ from the underlying distribution and pays (or charges) each participant according to the relative performance of their marginal contributions. Precisely, the total reward for participant $t$'s bet $df^t$ is the value $df^t(x, y)$ minus the cost $C_x(f^t) - C_x(f^{t-1})$.
Mechanism 1: The Market Template
MARKET announces $f^0 \in \mathcal{F}$
for $t = 1, 2, \ldots, T$ do
    PARTICIPANT may query functions $\nabla_f C_x(f^{t-1})$ and $f^{t-1}(x, y)$ for examples $(x, y)$
    PARTICIPANT $t$ may submit a bet $df^t \in \mathcal{F}$ to MARKET
    MARKET updates state $f^t = f^{t-1} + df^t$
MARKET observes a true sample $(x, y)$
for $t = 1, 2, \ldots, T$ do
    PARTICIPANT $t$ receives payment $df^t(x, y) + C_x(f^{t-1}) - C_x(f^t)$
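To make the template concrete, here is a minimal sketch of the betting and settlement phases over a finite outcome grid; the vector representation of hypotheses, the log-sum-exp cost, and all names are illustrative assumptions rather than part of the mechanism's specification.

    import numpy as np

    def run_market(f0, trades, cost, test_point):
        """Market template sketch: apply each trade, then settle payments.

        f0: initial hypothesis, here a vector of share counts per outcome
        trades: list of df vectors, one per participant
        cost: convex cost function C_x(f), a single fixed x for simplicity
        test_point: index of the realized outcome (x, y)
        """
        states = [np.asarray(f0, dtype=float)]
        for df in trades:                      # betting phase
            states.append(states[-1] + df)
        payments = []
        for t, df in enumerate(trades, 1):     # settlement phase
            pay = df[test_point] + cost(states[t - 1]) - cost(states[t])
            payments.append(pay)
        return states[-1], payments

    # Example with a log-sum-exp cost (the exponential-family choice below):
    cost = lambda f: np.log(np.exp(f).sum())
    final_f, pays = run_market(np.zeros(3), [np.array([0.5, 0.0, 0.0])], cost, 0)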
The design of cost-function prediction markets has been an area of active research over the past several years, starting with [8] and many further refinements and generalizations [1, 6, 15]. The general idea is that the mechanism can efficiently provide price quotes via a function $C(\cdot)$ which acts as a potential on the space of outstanding shares; see [1] for a thorough review. In the present work we have added an additional twist, which is that the function $C_x(\cdot)$ is given an additional parameterization of the observation $x$. We will not dive too deeply into the theoretical aspects of this generalization, but this is a straightforward extension of existing theory.
Key special case: exponential family mechanism. For those more familiar with statistics and machine learning, there is a natural and canonical family of problems that can be cast within the general framework of Mechanism 1, which we will call the exponential family prediction mechanism following [2]. Assume that $\mathcal{F}$ can be parameterized as $\mathcal{F} = \{f_\theta : \theta \in \mathbb{R}^d\}$, that we are given a sufficient statistics summary function $\phi : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}^d$, and that function evaluation is given by $f_\theta(x, y) = \langle \theta, \phi(x, y)\rangle$. We let $C_x(f) := \log \int_{\mathcal{Y}} \exp(f(x, y))\, dy$ so that $C_x(f_\theta) = \log \int_{\mathcal{Y}} \exp(\langle \theta, \phi(x, y)\rangle)\, dy$. In other words, we have chosen our mechanism to encode a particular exponential family model, with $C_x(\cdot)$ chosen as the conditional log partition function over the distribution on $y$ given $x$. If the market has settled on a function $f_\theta$, then one may interpret that as saying the aggregate market belief on the distribution over $\mathcal{X} \times \mathcal{Y}$ is
$$p_\theta(x, y) = \exp\big(\langle \theta, \phi(x, y)\rangle - A(\theta)\big), \quad\text{where}\quad A(\theta) = \log \int_{\mathcal{X} \times \mathcal{Y}} \exp(\langle \theta, \phi(x, y)\rangle)\, dx\, dy.$$
How may we view this as a "market aggregate" belief? Notice that if a trader observes the market state of $f_\theta$ and she is considering a bet of the form $df = f_{\theta'} - f_\theta$, the eventual profit will be
$$f_{\theta'}(x, y) - f_\theta(x, y) + C_x(f_\theta) - C_x(f_{\theta'}) = \log \frac{p_{\theta'}(y \mid x)}{p_\theta(y \mid x)}.$$
I.e., the profit is precisely the conditional log likelihood ratio of the update $\theta \to \theta'$.
Example: Logistic regression. Let $\mathcal{X} = \mathbb{R}^k$, $\mathcal{Y} = \{-1, 1\}$, and take $\mathcal{F}$ to be the set of functions $f_\theta(x, y) = y \cdot (\theta^\top x)$ for $\theta \in \mathbb{R}^k$. Then by our construction, $C_x(f) = \log(\exp(f(x, 1)) + \exp(f(x, -1))) = \log(\exp(\theta^\top x) + \exp(-\theta^\top x))$, and we let $f^0 = f_0 \equiv 0$. The payoff of a participant placing a bet which moves the market state to $f^1 = f_\theta$, upon outcome $(x, y)$, is:
$$f_\theta(x, y) + C_x(f_0) - C_x(f_\theta) = y\theta^\top x + \log(2) - \log(\exp(\theta^\top x) + \exp(-\theta^\top x)) = \log(2) - \log(1 + \exp(-2y\theta^\top x)),$$
which is simply negative logistic loss of the parameter choice $2\theta$. A participant wishing to maximize profit under a belief distribution $p(x, y)$ should therefore choose $\theta$ via logistic regression,
$$\theta^* = \arg\min_\theta\; \mathbb{E}_{(x,y)\sim p}\, \log(1 + \exp(-2y\theta^\top x)). \tag{2}$$
[Footnote 1: This can easily be extended to a test set by taking the average performance over the test set.]
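To sanity-check the payoff identity above, the following sketch evaluates a trader's profit in this logistic-regression market and verifies that it equals $\log 2$ minus the logistic loss of $2\theta$; the names are illustrative.

    import numpy as np

    def logistic_market_payoff(theta, x, y):
        """Profit for moving the market from f_0 = 0 to f_theta, on outcome (x, y)."""
        C = lambda f_pos, f_neg: np.logaddexp(f_pos, f_neg)  # C_x(f) = log(e^{f(x,1)} + e^{f(x,-1)})
        s = theta @ x
        return y * s + C(0.0, 0.0) - C(s, -s)

    theta, x, y = np.array([0.3, -0.1]), np.array([1.0, 2.0]), 1
    lhs = logistic_market_payoff(theta, x, y)
    rhs = np.log(2) - np.log1p(np.exp(-2 * y * (theta @ x)))  # log 2 minus logistic loss of 2*theta
    assert np.isclose(lhs, rhs)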
2.2 Properties of the Market
We next describe two nice properties of Mechanism 1: incentive-compatibility and bounded budget. Recall that, for the exponential family markets discussed above, a trader moving the market hypothesis from $f^{t-1}$ to $f^t$ was compensated according to the conditional log-likelihood ratio of $f^{t-1}$ and $f^t$ on the test data point. The implication is that traders are incentivized to minimize a KL divergence between the market's estimate of the distribution and the true underlying distribution. We refer to this property as incentive-compatibility because traders' interests are aligned with the mechanism designer's. This property indeed holds generally for Mechanism 1, where the KL divergence is replaced with a general Bregman divergence corresponding to the Fenchel conjugate of $C_x(\cdot)$; see Proposition 1 in the appendix for details.
Given that the mechanism must make a sequence of (possibly negative) payments to traders, a natural question is whether there is the potential for large downside for the mechanism in terms of total payment (budget). In the context of the exponential family mechanism, this question is easy to answer: after a sequence of bets moving the market state parameter $\theta_0 \to \theta_1 \to \ldots \to \theta_{\mathrm{final}}$, the total loss to the mechanism corresponds to the total payouts made to traders,
$$\sum_i f_{\theta_{i+1}}(x, y) - f_{\theta_i}(x, y) + C_x(f_{\theta_i}) - C_x(f_{\theta_{i+1}}) = \log \frac{p_{\theta_{\mathrm{final}}}(y \mid x)}{p_{\theta_0}(y \mid x)};$$
that is, the worst-case loss is exactly the worst-case conditional log-likelihood ratio. In the context of logistic regression this quantity can always be guaranteed to be no more than $\log 2$ as long as the initial parameter is set to $\theta = 0$. For Mechanism 1 more generally, one has tight bounds on the worst-case loss following from such results from prediction markets [1, 8], and we give a more detailed statement in Proposition 2 in the appendix.
Price sensitivity parameter $\lambda_C$. In choosing the cost function family $\mathcal{C} = \{C_x : x \in \mathcal{X}\}$, an important consideration is the "scale" of each $C_x$, or how quickly changes in the market hypothesis $f^t$ translate to changes in the "instantaneous prices" $\nabla C_x(f^t)$ (which give the marginal cost for an infinitesimal bet $df^{t+1}$). Formally, this is captured by the price sensitivity $\lambda_C$, defined as the upper bound on the operator norm (with respect to the $L_1$ norm) of the Hessian of the cost function $C_x$ (over all $x$). A choice of small $\lambda_C$ translates to a small worst-case budget required by the mechanism. However, it means that the market prices are sensitive in that the same update $df^t$ changes the prices much more quickly. When we consider protecting the privacy of trader updates in Section 3, we will see that privacy imposes restrictions on the price sensitivity.
2.3 A Nonparametric Mechanism via Kernel Methods
The framework we have discussed thus far has involved a general function space $\mathcal{F}$ as the "state" of the mechanism, and the contributions by participants are in the form of modifications to these functions. One of the downsides of this generic template is that participants may not be able to reason about $\mathcal{F}$, and they may have information about the optimal $f$ only through their own privately-held dataset $S \subset \mathcal{X} \times \mathcal{Y}$. A more specific class of functions would be those parameterized by actual data. This brings us to a well-studied type of non-parametric hypothesis class, namely the reproducing kernel Hilbert space (RKHS). We can design a market based on an RKHS, which we will refer to as a kernel market, that brings together a number of ideas including recent work of [21] as well as kernel exponential families [4].
We have a positive semidefinite kernel $k : \mathcal{Z} \times \mathcal{Z} \to \mathbb{R}$ and associated reproducing kernel Hilbert space $\mathcal{F}$, with basis $\{f_z(\cdot) = k(z, \cdot) : z \in \mathcal{Z}\}$. The reproducing property is that for all $f \in \mathcal{F}$, $\langle f, k(z, \cdot)\rangle = f(z)$. Now each hypothesis $f \in \mathcal{F}$ can be expressed as $f(\cdot) = \sum_s \beta_s k(z_s, \cdot)$ for some collection of points $\{(\beta_s, z_s)\}$.
The kernel approach has several nice properties. One is a natural extension of the exponential family mechanism using an RKHS as a building block of the class of exponential family distributions [4]. A key assumption in the exponential family mechanism is that evaluating $f$ can be viewed as an inner product in some feature space; this is precisely what one has given a kernel framework. Specifically, assume we have some PSD kernel $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$, where $\mathcal{Y} = \{-1, 1\}$. Then we can define the associated classification kernel $\bar{k} : (\mathcal{X} \times \mathcal{Y}) \times (\mathcal{X} \times \mathcal{Y}) \to \mathbb{R}$ according to $\bar{k}((x, y), (x', y')) := yy'\, k(x, x')$. Under certain conditions [4], we again can take $C_x(f) = \log \int_{\mathcal{Y}} \exp(f(x, y))\, dy$, and for any $f$ in the RKHS associated to $\bar{k}$, we have an associated distribution of the form $p_f(x, y) \propto \exp(f(x, y))$. And again, a participant updating the market from $f^{t-1}$ to $f^t$ is rewarded by the conditional log-likelihood ratio of $f^{t-1}$ and $f^t$ on the test data.
The second nice property mirrors one of standard kernel learning methods, namely that under certain conditions one need only search the subset of the RKHS spanned by the basis $\{k((x_i, y_i), \cdot) : (x_i, y_i) \in S\}$, where $S$ is the set of available data; this is a direct result of the Representer Theorem [17]. In the context of the kernel market, this suggests that participants need only interact with the mechanism by pushing updates that lie in the span of their own data. In other words, we only need to consider updates of the form $df = \beta\, k((x, y), \cdot)$. This naturally suggests the idea of directly purchasing data points from traders.
Buying Data Points. So far, we have supposed that a participant knows what trade $df^t$ she prefers to make. But what if she simply has a data point $(x, y)$ drawn from the underlying distribution? We would like to give this trader a "simple" trading interface in which she can sell her data to the mechanism without having to reason about the correct $df^t$ for this data point.
Our proposal is to mimic the behavior of natural learning algorithms, such as stochastic gradient descent, when presented with $(x, y)$. The market can offer the trader the purchase bundle corresponding to the update of the learning algorithm on this data point. In principle, this approach can be used with any online learning algorithm. In particular, stochastic gradient descent gives a clean update rule, which we now describe. The expected profit (which is the negative of expected loss) for trade $df^t$ is $-\mathbb{E}_x\big[C_x(f^{t-1} + df^t) - C_x(f^{t-1}) - \mathbb{E}_{y|x}[df^t(x, y)]\big]$. Given a draw $(x, y)$, the loss function on which to take a gradient step is $C_x(f^{t-1} + df^t) - C_x(f^{t-1}) - df^t(x, y)$, whose gradient is $\nabla_{f^{t-1}} C_x - \delta_{x,y}$ (where $\delta_{x,y}$ is the indicator on data point $(x, y)$). This suggests that the market offer the participant the trade $df^t = \eta\,(\delta_{x,y} - \nabla_{f^{t-1}} C_x)$, where $\eta$ can be chosen arbitrarily as a "learning rate". This can be interpreted as buying a unit of shares in the participant's data point $(x, y)$, then "hedging" by selling a small amount of all other shares in proportion to their current prices (recall that the current prices are given by $\nabla_{f^t} C_x$).
In the kernel setting, the choice of stochastic gradient descent may be somewhat problematic, because it can result in non-sparse share purchases. It may instead be desirable to use algorithms that guarantee sparse updates; a modern discussion of such approaches can be found in [22, 23].
Given this framework, participants with access to a private set of samples from the true underlying distribution can simply opt for this "standard bundle" corresponding to their data point, which is precisely a stochastic gradient descent update. With a small enough learning rate, and assuming that the data point is truly independent of the current hypothesis (i.e. $(x, y)$ has not been previously incorporated), the trade is guaranteed to make at least some positive profit in expectation. More sophisticated alternative strategies are also possible of course, but even the proposed simple bet type has earning potential.
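The "standard bundle" is, in effect, one stochastic-gradient step expressed as a trade. A minimal sketch over a finite outcome grid follows, assuming the log-sum-exp cost (whose gradient, the price vector, is a softmax); the names and the finite-grid simplification are illustrative.

    import numpy as np

    def standard_bundle(f, data_idx, eta):
        """SGD-style trade for a data point: buy its share, hedge at current prices.

        f: current market state, a vector of share counts over a finite outcome grid
        data_idx: index of the participant's data point (x, y) in that grid
        eta: learning rate controlling the size of the trade
        """
        prices = np.exp(f) / np.exp(f).sum()   # instantaneous prices for C = logsumexp
        delta = np.zeros_like(f)
        delta[data_idx] = 1.0                  # indicator on the data point
        return eta * (delta - prices)

    f = np.zeros(4)
    df = standard_bundle(f, data_idx=2, eta=0.1)   # buy outcome 2, sell the rest pro rata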
3 Protecting Participants' Privacy
We now extend the mechanism to protect privacy of the participants: An adversary observing the hypotheses and prices of the mechanism, and even controlling the trades of other participants, should not be able to infer too much about any one trader's update $df^t$. This is especially relevant when participants sell data to the mechanism and this data can be sensitive, e.g. medical data.
Here, privacy is formalized by $(\epsilon, \delta)$-differential privacy, to be defined shortly. One intuitive characterization is that, for any prior distribution some adversary has about a trader's data, the adversary's posterior belief after observing the mechanism would be approximately the same even if the trader did not participate at all. The idea is that, rather than posting the exact prices and trades made in the market, we will publish noisy versions, with the random noise giving the above guarantee.
A naive approach would be to add independent noise to each participant's trade. However, this would require a prohibitively-large amount of noise; the final market hypothesis would be determined by the random noise just as much as by the data and trades. The central challenge is to add carefully correlated noise that is large enough to hide the effects of any one participant's data point, but not so large that the prices (equivalently, hypothesis) become meaningless. We show this is possible by adjusting the "price sensitivity" $\lambda_C$ of the mechanism, a measure of how fast prices change in response to trades defined in Section 2.2. It will turn out to suffice to set the price sensitivity to be $O(1/\mathrm{polylog}\, T)$ when there are $T$ participants. This can roughly be interpreted as saying that any one participant does not move the market price noticeably (so their privacy is protected), but just $O(\mathrm{polylog}\, T)$ traders together can move the prices completely.
We now formally define differential privacy and discuss two useful tools at our disposal.
3.1 Differential Privacy and Tools
Differential privacy in our context is defined as follows. Consider a randomized function $M$ operating on inputs of the form $\vec{f} = (df^1, \ldots, df^T)$ and having outputs of the form $s$. Then $M$ is $(\epsilon, \delta)$-differentially private if, for any coordinate $t$ of the vector, any two distinct $df^t_1, df^t_2$, and any (measurable) set of outputs $S$, we have $\Pr[M(f^{-t}, df^t_1) \in S] \leq e^\epsilon \Pr[M(f^{-t}, df^t_2) \in S] + \delta$. The notation $f^{-t}$ means the vector $\vec{f}$ with the $t$th entry removed.
Intuitively, $M$ is private if modifying the $t$th entry in the vector to a different entry does not change the distribution on outputs too much. In our case, the data to be protected will be the trade $df^t$ of each participant $t$, and the space of outputs will be the entire sequence of prices/predictions published by the mechanism.
To preserve privacy, each trade must have a bounded size (e.g. consist only of one data point). To enforce this, we define the following parameter chosen by the mechanism designer:
$$\Delta = \max_{\text{allowed } df} \sqrt{\langle df, df\rangle}, \tag{3}$$
where the maximum is over all trades $df$ allowed by the mechanism. That is, $\Delta$ is a scalar capturing the maximum allowed size of any one trade. For instance, if all trades are restricted to be of the form $df = \beta\, k(z, \cdot)$, then we would have $\Delta = \max_{\beta, z} \beta\sqrt{k(z, z)}$.
We next describe the two tools we require.
Tool 1: Private functions via Gaussian processes. Given a current market state $f^t = f^0 + df^1 + \cdots + df^t$, where $f^t$ lies in a RKHS, we construct a "private" version $\hat{f}^t$ such that queries to $\hat{f}^t$ are "accurate" (close to the outputs of $f^t$) but also private with respect to each $df^j$. In fact, it will become convenient to privately output partial sums of trades, so we wish to output a $\hat{f}_{t_1:t_2}$ that is private and approximates $f_{t_1:t_2} = \sum_{j=t_1}^{t_2} df^j$. This is accomplished by the following construction due to [11].
Theorem 1 ([11], Corollary 9). Let $G$ be the sample path of a Gaussian process with mean zero and whose covariance is given by the kernel function $k$.$^2$ Then
$$\hat{f}_{t_1:t_2} = f_{t_1:t_2} + \frac{\Delta\sqrt{2\ln(2/\delta)}}{\epsilon}\, G \tag{4}$$
is $(\epsilon, \delta)$-differentially private with respect to each $df^j$ for $j \in \{t_1, \ldots, t_2\}$.
In general, $\hat{f}_{t_1:t_2}$ may be an infinite-dimensional object and thus impossible to finitely represent. In this case, the theorem implies that releasing the results of any number of queries $\hat{f}_{t_1:t_2}(z)$ is differentially private. (Of course, the more queries that are released, the larger the chance of high error on some query.) This is computationally feasible as each sample $G(z)$ is simply a sample from a Gaussian having known covariance with the previous samples drawn.
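As an illustration of Tool 1, the sketch below privately answers a finite batch of queries to a function by adding one Gaussian-process sample path with covariance $k$, scaled as in Equation 4. The RBF kernel, the jitter term, and all names are illustrative assumptions.

    import numpy as np

    def private_query(f_values, query_pts, kernel, sens, eps, delta, rng):
        """Answer queries to f privately by adding one GP sample path (Theorem 1).

        f_values: true values f(z) at the query points
        query_pts: array of query locations z, shape (m, d)
        kernel: covariance function k(z, z')
        sens: trade-size bound Delta from Equation (3)
        """
        m = len(query_pts)
        K = np.array([[kernel(a, b) for b in query_pts] for a in query_pts])
        # One multivariate-normal draw = the GP sample path G at the query points.
        G = rng.multivariate_normal(np.zeros(m), K + 1e-10 * np.eye(m))
        scale = sens * np.sqrt(2 * np.log(2 / delta)) / eps
        return f_values + scale * G

    rbf = lambda a, b: np.exp(-0.5 * np.sum((a - b) ** 2))
    pts = np.array([[0.0], [0.5], [1.0]])
    noisy = private_query(np.array([1.0, 0.8, 0.3]), pts, rbf, sens=1.0,
                          eps=1.0, delta=1e-6, rng=np.random.default_rng(0))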
Unfortunately, it would not be sufficient to independently release $\hat{f}^{1:t}$ at each time $t$, because the amount of noise required would be prohibitive. This leads us to our next tool.
[Footnote 2: Formally, each $G(z)$ is a random variable and, for any finite subset of $\mathcal{Z}$, the corresponding variables are distributed as a multivariate normal with covariance given by $k$.]
[Figure 1 diagram: a pattern of arrows drawn over the trades $df^0, df^1, \ldots, df^{16}$, indicating which partial sums are formed.]
Figure 1: Picturing the continual observation technique for preserving privacy. Each $df^t$ is a trade (e.g. a data point sold to the market). The goal is to release, at each time step $t$, a noisy version of $f^t = \sum_{j=1}^{t} df^j$. To do so, start at $t$ and follow the arrow back to $s(t)$. Take the partial sum of $df^j$ for $j$ from $s(t)$ to $t$ and add some random noise. Trace the next arrow from $s(t)$ to $s(s(t))$ to get another partial sum and add noise to that sum as well. Repeat until 0 is reached, then add together all the noisy partial sums to get the output at time $t$, which will equal $f^t$ plus noise. The key point is that we can re-use many of the noisy partial sums in many different time steps. For instance, the noisy partial sum from 0 to 8 can be re-used when releasing all of $f^9, \ldots, f^{15}$. Meanwhile, each $df^t$ participates in few noisy partial sums (the number of arrows passing above it).
Tool 2: Continual observation technique. The idea of this technique, pioneered by [9, 5], is to construct $\hat{f}^t = \sum_{j=0}^{t} df^j$ by adding together noisy partial sums of the form $\hat{f}_{t_1:t_2}$ as constructed in Equation 4. The idea for choosing these partial sums is pictured in Figure 1: For a function $s(t)$ that returns an integer smaller than $t$, we take $\hat{f}^t = \hat{f}_{s(t)+1:t} + \hat{f}_{s(s(t))+1:s(t)} + \cdots + \hat{f}_{0:0}$. Specifically, $s(t)$ is determined by writing $t$ in binary, then flipping the rightmost "one" bit to zero. This is pictured in Figure 1. The intuition behind why this technique helps is twofold. First, the total noise in $\hat{f}^t$ is the sum of noises of its partial sums, and it turns out that there are at most $\lceil \log T \rceil$ terms. Second, the total noise we need to add to protect privacy is governed by how many different partial sums each $df^j$ participates in, and it turns out that this number is also at most $\lceil \log T \rceil$. This allows for much better privacy and accuracy guarantees than naively treating each step independently.
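A minimal sketch of the binary decomposition behind this technique: $s(t)$ clears the lowest set bit of $t$, and the returned intervals are exactly the noisy partial sums that get added together. Names are illustrative.

    def s(t):
        """Flip the rightmost 'one' bit of t (written in binary) to zero."""
        return t & (t - 1)

    def partial_sum_intervals(t):
        """Intervals (s(t)+1, t), (s(s(t))+1, s(t)), ..., (0, 0) whose noisy
        sums are added together to release f^t under continual observation."""
        intervals = []
        while t > 0:
            intervals.append((s(t) + 1, t))
            t = s(t)
        intervals.append((0, 0))
        return intervals

    print(partial_sum_intervals(13))  # [(13, 13), (9, 12), (1, 8), (0, 0)]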
3.2 Mechanism and Results
Combining our market template in Mechanism 1 with the above privacy tools, we obtain Mechanism 2. There are some key differences. First, we have a bound $Q$ on the total number of queries. (Each query $x$ returns the instantaneous prices in the market for $x$.) This is because each query reveals information about the participants, so intuitively, allowing too many queries must sacrifice either privacy or accuracy. Fortunately, this bound $Q$ can be an arbitrarily large polynomial in the number of traders without affecting the quality of the results. Second, we have PAC-style guarantees on accuracy: with probability $1 - \gamma$, all price queries return values within $\alpha$ of their true prices. Third, it is no longer straightforward to compute and represent the market prices $\nabla C_x(\hat{f}^t)$ unless $\mathcal{Y}$ is finite. We leave the more general analysis of Mechanism 2 to future work.
Either exactly or approximately, Mechanism 2 inherits the desirable properties of Mechanism 1, such as bounded budget and incentive-compatibility (that is, participants are incentivized to minimize the risk of the market hypothesis). In addition, we show that it preserves privacy while maintaining accuracy, for an appropriate choice of the price sensitivity $\lambda_C$.
Theorem 2. Consider Mechanism 2, where $\Delta$ is the maximum trade size (Equation 3) and $d = |\mathcal{Y}|$. Then Mechanism 2 is $(\epsilon, \delta)$-differentially private and, with $T$ traders and $Q$ price queries, has the following accuracy guarantee: with probability $1 - \gamma$, for each query $x$ the returned prices satisfy $\|\nabla C_x(\hat{f}^t) - \nabla C_x(f^t)\|_\infty \leq \alpha$ by setting
$$\lambda_C = \frac{\alpha\,\epsilon}{2d\Delta^2 \sqrt{\ln\!\left(\frac{Qd}{\gamma}\right) \ln\!\left(\frac{2\log T}{\delta}\right)}\; \log(T)^3}.$$
If one for example takes $\alpha, \gamma = \exp[-\mathrm{polylog}(Q, T)]$, then except for a superpolynomially low failure probability, Mechanism 2 answers all queries to within accuracy $\alpha$ by setting the price sensitivity to be $\lambda_C = O(\alpha/\mathrm{polylog}(Q, T))$. We note, however, that this is a somewhat weaker guarantee than is usually desired in the differential privacy literature, where ideally $\delta$ is exponentially small.
Mechanism 2: Privacy Protected Market
Parameters: $\epsilon, \delta$ (privacy), $\alpha, \gamma$ (accuracy), $k$ (kernel), $\Delta$ (trade size, Equation 3), $Q$ (#queries), $T$ (#traders)
MARKET announces $\hat{f}^0 = f^0$, sets $r = 0$, sets $C$ with $\lambda_C = \lambda_C(\epsilon, \delta, \alpha, \gamma, \Delta, Q, T)$ (Theorem 2)
for $t = 1, 2, \ldots, T$ do
    PARTICIPANT $t$ proposes a bet $df^t$
    MARKET updates true position $f^t = f^{t-1} + df^t$
    MARKET instantiates $\hat{f}_{s(t)+1:t}$ as defined in Equation 4
    while $r \leq Q$ and some OBSERVER wishes to make a query do
        OBSERVER $r$ submits pricing query on $x$
        MARKET returns prices $\nabla C_x(\hat{f}^t)$, where $\hat{f}^t = \hat{f}_{s(t)+1:t} + \hat{f}_{s(s(t))+1:s(t)} + \cdots + \hat{f}_{0:0}$
        MARKET sets $r \leftarrow r + 1$
MARKET observes a true sample $(x, y)$
for $t = 1, 2, \ldots, T$ do
    PARTICIPANT $t$ receives payment $df^t(x, y) - C_x(\hat{f}^{t-1} + df^t) + C_x(\hat{f}^{t-1})$
Computing $\nabla C_x(\hat{f}^t)$. We have already discussed limiting to finite $|\mathcal{Y}|$ in order to efficiently compute the marginal prices $\nabla C_x(\hat{f}^t)$. However, it is still not immediately clear how to compute these prices, and hence how to implement Mechanism 2. Here, we show that the problem can be solved when $C$ comes from an exponential family, so that $C_x(f) = \log \int_{\mathcal{Y}} \exp[f(x, y)]\, dy$. In this case, the marginal prices given by the gradient of $C$ have a nice exponential-weights form, namely the price of shares in $(x, y)$ is
$$p^t_x(y) = \nabla_y C_x(\hat{f}^t) = \frac{e^{\hat{f}^t(x, y)}}{\sum_{y' \in \mathcal{Y}} e^{\hat{f}^t(x, y')}}.$$
Thus evaluating the prices can be done by evaluating $\hat{f}^t(x, y)$ for each $y \in \mathcal{Y}$.
We also note that the worst-case bound used here could be greatly improved by taking into account the structure of the kernel. For "smooth" cases such as the Gaussian kernel, querying a second point very close to the first one requires very little additional randomness and builds up very little additional error. We gave only a worst-case bound that holds for all kernels.
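The exponential-weights prices are just a softmax over the market state at $x$; a short sketch with illustrative names:

    import numpy as np

    def prices(f_hat_x):
        """Marginal prices p_x(y) for finite Y: softmax of f_hat(x, .) (exp-family C)."""
        z = f_hat_x - f_hat_x.max()          # stabilize the exponentials
        w = np.exp(z)
        return w / w.sum()

    print(prices(np.array([0.2, -0.1, 0.4])))  # nonnegative, sums to 1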
Adding a transaction fee. In the appendix, we discuss the potential need for transaction fees. Adding a small $\Theta(\alpha)$ fee suffices to deter arbitrage opportunities introduced by noisy pricing.
Discussion
The main contribution of this work was to bring together several tools to construct a mechanism for incentivized data aggregation with "contest-like" incentive properties, privacy guarantees, and limited downside for the mechanism.
Our proposed mechanisms are also extensions of the prediction market literature. Building upon the
work of Abernethy et al. [1] we introduce the following innovations:
- Conditional markets. Our framework of Mechanism 1 can be interpreted as a prediction market for conditional predictions $p(y \mid x)$ rather than a classic market which would elicit the joint distribution $p(x, y)$, or just the marginals. (This is similar to decision markets [12, 7], but without the associated incentive problems.) Naturally then, we couple conditional predictions with restricted hypothesis spaces, allowing $\mathcal{F}$ to capture, e.g., a linear relationship between $x$ and $y$.
- Nonparametric securities. We also extend to nonparametric hypothesis spaces using kernels, following the kernel-based scoring rules of [21].
- Privacy guarantees. We provide the first private prediction market (to our knowledge), showing that information about individual trades is not revealed. Our approach for preserving privacy also holds in the classic prediction market setting with similar privacy and accuracy guarantees.
Many directions remain for future work. These mechanisms could be made more practical and
perhaps even better privacy guarantees derived, especially in nonparametric settings. One could also
explore the connections to similar settings, such as when agents have costs for acquiring data.
Acknowledgments. J. Abernethy acknowledges the generous support of the US National Science Foundation under CAREER Grant IIS-1453304 and Grant IIS-1421391.
References
[1] Jacob Abernethy, Yiling Chen, and Jennifer Wortman Vaughan. Efficient market making via convex optimization, and a connection to online learning. ACM Transactions on Economics and Computation, 1(2), May 2013.
[2] Jacob Abernethy, Sindhu Kutty, Sébastien Lahaie, and Rahul Sami. Information aggregation in exponential family markets. In Proceedings of the fifteenth ACM conference on Economics and Computation, pages 395–412. ACM, 2014.
[3] Jacob D. Abernethy and Rafael M. Frongillo. A collaborative mechanism for crowdsourcing prediction problems. In Advances in Neural Information Processing Systems, pages 2600–2608, 2011.
[4] Stéphane Canu and Alex Smola. Kernel methods and the exponential family. Neurocomputing, 69(7):714–720, 2006.
[5] T.-H. Hubert Chan, Elaine Shi, and Dawn Song. Private and continual release of statistics. ACM Transactions on Information and System Security (TISSEC), 14(3):26, 2011.
[6] Y. Chen and J. W. Vaughan. A new understanding of prediction markets via no-regret learning. In Proceedings of the 11th ACM Conference on Electronic Commerce (EC), pages 189–198, 2010.
[7] Yiling Chen, Ian Kash, Mike Ruberry, and Victor Shnayder. Decision markets with good incentives. In Internet and Network Economics, pages 72–83. Springer, 2011.
[8] Yiling Chen and David M. Pennock. A utility framework for bounded-loss market makers. In Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence (UAI), pages 49–56, 2007.
[9] Cynthia Dwork, Moni Naor, Toniann Pitassi, and Guy N. Rothblum. Differential privacy under continual observation. In Proceedings of the forty-second ACM Symposium on Theory of Computing, pages 715–724. ACM, 2010.
[10] Cynthia Dwork and Aaron Roth. The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 2014.
[11] Rob Hall, Alessandro Rinaldo, and Larry Wasserman. Differential privacy for functions and functional data. The Journal of Machine Learning Research, 14(1):703–727, 2013.
[12] R. Hanson. Decision markets. Entrepreneurial Economics: Bright Ideas from the Dismal Science, pages 79–85, 2002.
[13] R. Hanson. Combinatorial information market design. Information Systems Frontiers, 5(1):105–119, 2003.
[14] R. Hanson. Logarithmic market scoring rules for modular combinatorial information aggregation. Journal of Prediction Markets, 1(1):3–15, 2007.
[15] Abraham Othman and Tuomas Sandholm. Automated market makers that enable new settings: extending constant-utility cost functions. In Proceedings of the Second Conference on Auctions, Market Mechanisms and their Applications (AMMA), pages 19–30, 2011.
[16] David M. Pennock and Rahul Sami. Computational aspects of prediction markets. In Noam Nisan, Tim Roughgarden, Eva Tardos, and Vijay V. Vazirani, editors, Algorithmic Game Theory, chapter 26. Cambridge University Press, 2007.
[17] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.
[18] Amos J. Storkey. Machine learning markets. In Proceedings of AI and Statistics (AISTATS), pages 716–724, 2011.
[19] J. Wolfers and E. Zitzewitz. Prediction markets. Journal of Economic Perspectives, 18(2):107–126, 2004.
[20] Justin Wolfers and Eric Zitzewitz. Interpreting prediction market prices as probabilities. Technical report, National Bureau of Economic Research, 2006.
[21] Erik Zawadzki and Sébastien Lahaie. Nonparametric scoring rules. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
[22] Lijun Zhang, Rong Jin, Chun Chen, Jiajun Bu, and Xiaofei He. Efficient online learning for large-scale sparse kernel logistic regression. In AAAI, 2012.
[23] Lijun Zhang, Jinfeng Yi, Rong Jin, Ming Lin, and Xiaofei He. Online kernel learning with a near optimal sparsity bound. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 621–629, 2013.
Christopher R. Genovese
Department of Statistics
Carnegie Mellon University
[email protected]
Yen-Chi Chen
Department of Statistics
Carnegie Mellon University
[email protected]
Larry Wasserman
Department of Statistics
Carnegie Mellon University
[email protected]
Shirley Ho
Department of Physics
Carnegie Mellon University
[email protected]
Abstract
We introduce the concept of coverage risk as an error measure for density ridge
estimation. The coverage risk generalizes the mean integrated square error to set
estimation. We propose two risk estimators for the coverage risk and we show that
we can select tuning parameters by minimizing the estimated risk. We study the
rate of convergence for coverage risk and prove consistency of the risk estimators.
We apply our method to three simulated datasets and to cosmology data. In all the examples, the proposed method successfully recovers the underlying density structure.
1 Introduction
Density ridges [10, 22, 15, 6] are one-dimensional curve-like structures that characterize high density regions. Density ridges have been applied to computer vision [2], remote sensing [21], biomedical imaging [1], and cosmology [5, 7]. Density ridges are similar to the principal curves [17, 18, 27].
Figure 1 provides an example for applying density ridges to learn the structure of our Universe.
To detect density ridges from data, [22] proposed the "Subspace Constrained Mean Shift (SCMS)" algorithm. SCMS is a modification of the usual mean shift algorithm [14, 8] that adapts to the local geometry. Unlike mean shift, which pushes every mesh point to a local mode, SCMS moves the meshes along a projected gradient until arriving at nearby ridges. Essentially, the SCMS algorithm detects the ridges of the kernel density estimator (KDE). Therefore, the SCMS algorithm requires a pre-selected parameter $h$, which acts as the smoothing bandwidth in the kernel density estimator. Despite the wide application of the SCMS algorithm, the choice of $h$ remains an unsolved problem. Similar to the density estimation problem, a poor choice of $h$ results in over-smoothing or under-smoothing for the density ridges. See the second row of Figure 1.
In this paper, we introduce the concept of coverage risk, which is a generalization of the mean integrated squared error from function estimation. We then show that one can consistently estimate
the coverage risk by using data splitting or the smoothed bootstrap. This leads us to a data-driven
selection rule for choosing the parameter h for the SCMS algorithm. We apply the proposed method
to several famous datasets including the spiral dataset, the three spirals dataset, and the NIPS dataset.
In all simulations, our selection rule allows the SCMS algorithm to detect the underlying structure
of the data.
Figure 1: The cosmic web. This is a slice of the observed Universe from the Sloan Digital Sky
Survey. We apply the density ridge method to detect filaments [7]. The top row shows one example of the detected filaments. The bottom row shows the effect of smoothing. Bottom-Left: optimal
smoothing. Bottom-Middle: under-smoothing. Bottom-Right: over-smoothing. Under optimal
smoothing, we detect an intricate filament network. If we under-smooth or over-smooth the dataset,
we cannot find the structure.
1.1 Density Ridges

Density ridges are defined as follows. Assume $X_1, \ldots, X_n$ are independently and identically distributed from a smooth probability density function $p$ with compact support $K$. The density ridges [10, 15, 6] are defined as
$$R = \{x \in K : V(x)V(x)^T \nabla p(x) = 0,\ \lambda_2(x) < 0\},$$
where $V(x) = [v_2(x), \ldots, v_d(x)]$, with $v_j(x)$ the eigenvector associated with the ordered eigenvalue $\lambda_j(x)$ ($\lambda_1(x) \ge \cdots \ge \lambda_d(x)$) of the Hessian matrix $H(x) = \nabla\nabla p(x)$. That is, $R$ is the collection of points whose projected gradient $V(x)V(x)^T \nabla p(x)$ vanishes. It can be shown that, under appropriate conditions, $R$ is a collection of one-dimensional smooth curves (1-dimensional manifolds) in $\mathbb{R}^d$.

The SCMS algorithm is a plug-in estimate for $R$ given by
$$\hat R_n = \{x \in K : \hat V_n(x)\hat V_n(x)^T \nabla \hat p_n(x) = 0,\ \hat\lambda_2(x) < 0\},$$
where $\hat p_n(x) = \frac{1}{nh^d}\sum_{i=1}^n K\!\left(\frac{x - X_i}{h}\right)$ is the KDE, and $\hat V_n$ and $\hat\lambda_2$ are the associated quantities defined by $\hat p_n$. Hence, one can clearly see that the parameter $h$ in the SCMS algorithm plays the same role as the smoothing bandwidth of the KDE.
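To make the plug-in estimator concrete, the following is a minimal NumPy sketch of a single SCMS update for one mesh point with a Gaussian kernel. This is an illustrative reading of the update rather than the authors' implementation; the kernel choice and the convention of working directly with the KDE (rather than, say, the log-density) are assumptions of the sketch.

import numpy as np

def scms_step(x, data, h):
    """One Subspace Constrained Mean Shift update of a mesh point x.

    x: (d,) array; data: (n, d) sample; h: bandwidth. Normalizing
    constants of the KDE are dropped since they affect neither the
    eigenvectors nor the direction of the shift.
    """
    diff = data - x                                    # (n, d)
    w = np.exp(-0.5 * np.sum(diff**2, axis=1) / h**2)  # Gaussian weights
    # Mean-shift vector: weighted sample mean minus the current point.
    m = (w[:, None] * data).sum(axis=0) / w.sum() - x
    # Hessian of the KDE at x (up to a positive constant).
    d = len(x)
    hess = (w[:, None, None]
            * (diff[:, :, None] * diff[:, None, :] / h**2
               - np.eye(d))).sum(axis=0) / h**2
    # V spans the eigenvectors of the d-1 smallest eigenvalues.
    _, vecs = np.linalg.eigh(hess)                     # ascending order
    V = vecs[:, :-1]
    return x + V @ V.T @ m                             # projected shift

Iterating scms_step over a mesh of starting points until the shifts become small yields the ridge estimate: the fixed points are exactly where the projected gradient vanishes.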
2 Coverage Risk

Before we introduce the coverage risk, we first define some geometric concepts. Let $\mu_\ell$ be the $\ell$-dimensional Hausdorff measure [13]. Namely, $\mu_1(A)$ is the length of a set $A$ and $\mu_2(A)$ is the area of $A$. Let $d(x, A)$ be the projection distance from a point $x$ to a set $A$. We define $U_R$ and $U_{\hat R_n}$ as random variables uniformly distributed over the true density ridges $R$ and the ridge estimator $\hat R_n$, respectively. Assuming $R$ and $\hat R_n$ are given, we define the following two random variables:
$$W_n = d(U_R, \hat R_n), \qquad \widetilde W_n = d(U_{\hat R_n}, R). \quad (1)$$
Note that $U_R, U_{\hat R_n}$ are random variables while $R, \hat R_n$ are sets. $W_n$ is the distance from a randomly selected point on $R$ to the estimator $\hat R_n$, and $\widetilde W_n$ is the distance from a random point on $\hat R_n$ to $R$.

Let $\mathrm{Haus}(A, B) = \inf\{r : A \subset B \oplus r,\ B \subset A \oplus r\}$ be the Hausdorff distance between $A$ and $B$, where $A \oplus r = \{x : d(x, A) \le r\}$. The following lemma gives some useful properties of $W_n$ and $\widetilde W_n$.

Lemma 1. Both random variables $W_n$ and $\widetilde W_n$ are bounded by $\mathrm{Haus}(\hat R_n, R)$. Namely,
$$0 \le W_n \le \mathrm{Haus}(\hat R_n, R), \qquad 0 \le \widetilde W_n \le \mathrm{Haus}(\hat R_n, R). \quad (2)$$
The cumulative distribution functions (CDFs) of $W_n$ and $\widetilde W_n$ are
$$P(W_n \le r \mid \hat R_n) = \frac{\mu_1\bigl(R \cap (\hat R_n \oplus r)\bigr)}{\mu_1(R)}, \qquad P(\widetilde W_n \le r \mid \hat R_n) = \frac{\mu_1\bigl(\hat R_n \cap (R \oplus r)\bigr)}{\mu_1(\hat R_n)}. \quad (3)$$
Thus, $P(W_n \le r \mid \hat R_n)$ is the fraction of $R$ covered by padding the regions around $\hat R_n$ at distance $r$.

This lemma follows directly from the definitions, so we omit its proof. Lemma 1 links the random variables $W_n$ and $\widetilde W_n$ to the Hausdorff distance and to the coverage of $R$ and $\hat R_n$; we therefore call them coverage random variables. Now we define the $L_1$ and $L_2$ coverage risks for estimating $R$ by $\hat R_n$ as
$$\mathrm{Risk}_{1,n} = \frac{\mathbb{E}(W_n + \widetilde W_n)}{2}, \qquad \mathrm{Risk}_{2,n} = \frac{\mathbb{E}(W_n^2 + \widetilde W_n^2)}{2}. \quad (4)$$
That is, $\mathrm{Risk}_{1,n}$ (and $\mathrm{Risk}_{2,n}$) is the expected (squared) projected distance between $R$ and $\hat R_n$. Note that the expectation in (4) applies to both $\hat R_n$ and $U_R$. One can view $\mathrm{Risk}_{2,n}$ as a generalized mean integrated squared error (MISE) for sets.

A nice property of $\mathrm{Risk}_{1,n}$ and $\mathrm{Risk}_{2,n}$ is that they are not sensitive to outliers of $R$, in the sense that a small perturbation of $R$ will not change the risk much. The Hausdorff distance, on the contrary, is very sensitive to outliers.
2.1 Selection of Tuning Parameters Based on Risk Minimization

In this section, we show how to choose $h$ by minimizing an estimate of the risk.

We propose two risk estimators. The first is based on the smoothed bootstrap [25]. We sample $X_1^*, \ldots, X_n^*$ from the KDE $\hat p_n$ and recompute the estimator $\hat R_n^*$. We then estimate the risk by
$$\widehat{\mathrm{Risk}}_{1,n} = \frac{\mathbb{E}(W_n^* + \widetilde W_n^* \mid X_1, \ldots, X_n)}{2}, \qquad \widehat{\mathrm{Risk}}_{2,n} = \frac{\mathbb{E}(W_n^{*2} + \widetilde W_n^{*2} \mid X_1, \ldots, X_n)}{2}, \quad (5)$$
where $W_n^* = d(U_{\hat R_n}, \hat R_n^*)$ and $\widetilde W_n^* = d(U_{\hat R_n^*}, \hat R_n)$.

The second approach is data splitting. We randomly split the data into $X_{11}^*, \ldots, X_{1m}^*$ and $X_{21}^*, \ldots, X_{2m}^*$, assuming $n$ is even and $2m = n$. We compute the estimated manifolds using each half of the data, denoted $\hat R_{1,n}^*$ and $\hat R_{2,n}^*$, and then compute
$$\widehat{\mathrm{Risk}}_{1,n} = \frac{\mathbb{E}(W_{1,n}^* + W_{2,n}^* \mid X_1, \ldots, X_n)}{2}, \qquad \widehat{\mathrm{Risk}}_{2,n} = \frac{\mathbb{E}(W_{1,n}^{*2} + W_{2,n}^{*2} \mid X_1, \ldots, X_n)}{2}, \quad (6)$$
where $W_{1,n}^* = d(U_{\hat R_{1,n}^*}, \hat R_{2,n}^*)$ and $W_{2,n}^* = d(U_{\hat R_{2,n}^*}, \hat R_{1,n}^*)$.

Having estimated the risk, we select $h$ by
$$h^* = \operatorname*{argmin}_{h \le \bar h_n} \widehat{\mathrm{Risk}}_{1,n}, \quad (7)$$
where $\bar h_n$ is an upper bound given by the normal reference rule [26] (which is known to oversmooth, so we only select $h$ below this rule). Moreover, one can choose $h$ by minimizing the $L_2$ risk as well.

In [11], the authors consider selecting the smoothing bandwidth for local principal curves by self-coverage. This criterion is different from ours: the self-coverage counts data points, it is a monotonically increasing function, and the bandwidth is selected where its derivative is highest. Our coverage risk instead yields a simple trade-off curve, and one can easily pick the optimal bandwidth by minimizing the estimated risk.
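As a concrete illustration of the data-splitting estimator (6) and the selection rule (7), here is a rough Python sketch. The routine fit_ridge is an assumption: it stands for any procedure that returns a point-cloud approximation of the estimated ridge at bandwidth h (for example, the converged SCMS mesh from the sketch above).

import numpy as np

def l1_coverage_risk(R1, R2):
    """Monte Carlo estimate of the L1 coverage risk between two ridge
    estimates, each an (m, d) array of points lying on the ridge."""
    D = np.linalg.norm(R1[:, None, :] - R2[None, :, :], axis=-1)
    return 0.5 * (D.min(axis=1).mean() + D.min(axis=0).mean())

def select_bandwidth(X, hs, fit_ridge, rng=np.random.default_rng(0)):
    """Data-splitting bandwidth selection in the spirit of (6)-(7)."""
    idx = rng.permutation(len(X))
    half = len(X) // 2
    X1, X2 = X[idx[:half]], X[idx[half:]]
    risks = [l1_coverage_risk(fit_ridge(X1, h), fit_ridge(X2, h))
             for h in hs]
    return hs[int(np.argmin(risks))], risks

In practice one evaluates the risk on a grid of bandwidths below the normal reference rule and keeps the minimizer, mirroring (7).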
3 Manifold Comparison by Coverage

The concept of coverage from the previous section can be generalized to investigate the difference between two manifolds. Let $M_1$ and $M_2$ be an $\ell_1$-dimensional and an $\ell_2$-dimensional manifold ($\ell_1$ and $\ell_2$ are not necessarily the same). We define the coverage random variables
$$W_{12} = d(U_{M_1}, M_2), \qquad W_{21} = d(U_{M_2}, M_1). \quad (8)$$
Then by Lemma 1, the CDFs of $W_{12}$ and $W_{21}$ contain information about how $M_1$ and $M_2$ differ from each other:
$$P(W_{12} \le r) = \frac{\mu_{\ell_1}\bigl(M_1 \cap (M_2 \oplus r)\bigr)}{\mu_{\ell_1}(M_1)}, \qquad P(W_{21} \le r) = \frac{\mu_{\ell_2}\bigl(M_2 \cap (M_1 \oplus r)\bigr)}{\mu_{\ell_2}(M_2)}. \quad (9)$$
$P(W_{12} \le r)$ is the coverage of $M_1$ obtained by padding regions of radius $r$ around $M_2$.

We call the plots of the CDFs of $W_{12}$ and $W_{21}$ coverage diagrams, since they are linked to the coverage over $M_1$ and $M_2$. The coverage diagram allows us to study how two manifolds differ from each other. When $\ell_1 = \ell_2$, the coverage diagram can be used as a similarity measure for two manifolds. When $\ell_1 \ne \ell_2$, the coverage diagram serves as a measure of the quality of representing high-dimensional objects by low-dimensional ones. A nice property of the coverage diagram is that we can approximate the CDFs of $W_{12}$ and $W_{21}$ by a mesh of points (or points uniformly distributed) over $M_1$ and $M_2$. In Figure 2 we consider a Helix dataset whose support has dimension $d = 3$, and we compare two curves, a spiral curve (green) and a straight line (orange), as representations of the Helix dataset. As can be seen from the coverage diagram (right panel), the green curve has better coverage at each distance (compared to the orange curve), so the spiral curve provides a better representation of the Helix dataset.

In addition to the coverage diagram, we can also use the following $L_1$ and $L_2$ losses as summaries of the difference:
$$\mathrm{Loss}_1(M_1, M_2) = \frac{\mathbb{E}(W_{12} + W_{21})}{2}, \qquad \mathrm{Loss}_2(M_1, M_2) = \frac{\mathbb{E}(W_{12}^2 + W_{21}^2)}{2}. \quad (10)$$
The expectation is taken over $U_{M_1}$ and $U_{M_2}$; both $M_1$ and $M_2$ here are fixed. The risks in (4) are the expected losses:
$$\mathrm{Risk}_{1,n} = \mathbb{E}\bigl(\mathrm{Loss}_1(\hat R_n, R)\bigr), \qquad \mathrm{Risk}_{2,n} = \mathbb{E}\bigl(\mathrm{Loss}_2(\hat R_n, R)\bigr). \quad (11)$$
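Since the coverage diagram is defined through the CDFs in (9), it can be approximated directly from two point meshes. A small sketch, using the same point-cloud convention as above:

import numpy as np

def coverage_cdf(M1, M2, rs):
    """Approximate P(W12 <= r) for each r in rs: the fraction of mesh
    points on M1 lying within distance r of M2."""
    d = np.linalg.norm(M1[:, None, :] - M2[None, :, :],
                       axis=-1).min(axis=1)   # projection distances
    return np.array([(d <= r).mean() for r in rs])

Plotting coverage_cdf(M1, M2, rs) and coverage_cdf(M2, M1, rs) against rs gives the two curves of the coverage diagram.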
[Figure 2 plot residue removed. Left panel: the Helix data (black dots) with the spiral (green) and straight-line (orange) candidate curves. Right panel: the coverage diagrams, plotting Coverage (0.0 to 1.0) against the radius r (0.0 to 1.0).]
Figure 2: The Helix dataset. The original support of the Helix dataset (black dots) is a 3-dimensional region. We can use the green spiral curve (d = 1) to represent the region; we also show a poor representation using a straight line (orange). The coverage plot reveals the quality of each representation. Left: the original data. Right: the coverage plot for the spiral curve (green) versus the straight line (orange); the dashed lines give the coverage of the data points (black dots) by the green/orange curves, and the solid lines give the coverage of the green/orange curves by the data points.
4 Theoretical Analysis

In this section, we analyze the asymptotic behavior of the coverage risk and prove the consistency of estimating the coverage risk by the proposed method. In particular, we derive the asymptotic properties of the density ridges. We focus only on the L2 risk since, by Jensen's inequality, the L1 risk can be bounded by the L2 risk.

Before we state our assumptions, we first define the orientation of density ridges. Recall that the density ridge $R$ is a collection of one-dimensional curves. Thus, to each point $x \in R$ we can associate a unit vector $e(x)$ that represents the orientation of $R$ at $x$. The explicit formula for $e(x)$ can be found in Lemma 1 of [6].
Assumptions.

(R) There exist $\beta_0, \beta_1, \beta_2, \delta_R > 0$ such that for all $x \in R \oplus \delta_R$,
$$\lambda_2(x) \le -\beta_1, \qquad \lambda_1(x) \ge \beta_0 - \beta_1, \qquad \|\nabla p(x)\| \cdot \|p^{(3)}(x)\|_{\max} \le \beta_0(\beta_1 - \beta_2), \quad (12)$$
where $\|p^{(3)}(x)\|_{\max}$ is the element-wise max norm of the third derivative. And for each $x \in R$,
$$|e(x)^T \nabla p(x)| \le \beta_2 \, \frac{\lambda_1(x)}{|\lambda_2(x)|}.$$

(K1) The kernel function $K$ is three times bounded differentiable, symmetric, non-negative, and satisfies
$$\int x^2 K^{(\alpha)}(x)\, dx < \infty, \qquad \int \bigl(K^{(\alpha)}(x)\bigr)^2 dx < \infty$$
for all $\alpha = 0, 1, 2, 3$.

(K2) The kernel function $K$ and its partial derivatives satisfy condition K1 in [16]. Specifically, let
$$\mathcal{K} = \left\{ y \mapsto K^{(\alpha)}\!\left(\frac{x - y}{h}\right) : x \in \mathbb{R}^d,\ h > 0,\ |\alpha| = 0, 1, 2 \right\}. \quad (13)$$
We require that $\mathcal{K}$ satisfies
$$\sup_P N\bigl(\mathcal{K}, L_2(P), \epsilon\|F\|_{L_2(P)}\bigr) \le \left(\frac{A}{\epsilon}\right)^{v} \quad (14)$$
for some positive numbers $A, v$, where $N(T, d, \epsilon)$ denotes the $\epsilon$-covering number of the metric space $(T, d)$, $F$ is the envelope function of $\mathcal{K}$, and the supremum is taken over the whole $\mathbb{R}^d$. The $A$ and $v$ are usually called the VC characteristics of $\mathcal{K}$. The norm is $\|F\|_{L_2(P)} = \bigl(\int |F(x)|^2\, dP(x)\bigr)^{1/2}$.

Assumption (R) appears in [6] and is very mild. The first two inequalities in (12) are just bounds on the eigenvalues. The last inequality requires the density around the ridges to be smooth. The latter part of (R) requires the direction of the ridges to be similar to the gradient direction. Assumption (K1) is a common condition for kernel density estimators; see e.g. [28] and [24]. Assumption (K2) regularizes the class of kernel functions and is widely assumed [12, 15, 4]; any bounded kernel function with compact support satisfies this condition. Both (K1) and (K2) hold for the Gaussian kernel.

Under the above conditions, we derive the rate of convergence for the L2 risk.
Theorem 2. Let $\mathrm{Risk}_{2,n}$ be the $L_2$ coverage risk for estimating the density ridges. Assume (K1-2) and (R), and that $p$ is at least four times bounded differentiable. Then as $n \to \infty$, $h \to 0$ and $\frac{\log n}{nh^{d+6}} \to 0$,
$$\mathrm{Risk}_{2,n} = B_R^2 h^4 + \frac{\sigma_R^2}{nh^{d+2}} + o(h^4) + o\!\left(\frac{1}{nh^{d+2}}\right),$$
for some $B_R$ and $\sigma_R^2$ that depend only on the density $p$ and the kernel function $K$.
The rate in Theorem 2 is a bias-variance decomposition: the first term, involving $h^4$, is the bias, while the second term is the variance. By Jensen's inequality, the rate of convergence for the $L_1$ risk is the square root of the rate in Theorem 2. Note that we require the smoothing parameter $h$ to decay slowly to 0, via $\frac{\log n}{nh^{d+6}} \to 0$. This constraint comes from the uniform bound for estimating the third derivatives of $p$; we need it because the smoothness of the estimated ridges must converge to the smoothness of the true ridges. A similar result for density level sets appears in [3, 20].

By Lemma 1, we can upper bound the $L_2$ risk by the expected squared Hausdorff distance, which gives the rate
$$\mathrm{Risk}_{2,n} \le \mathbb{E}\,\mathrm{Haus}^2(\hat R_n, R) = O(h^4) + O\!\left(\frac{\log n}{nh^{d+2}}\right). \quad (15)$$
The rate under the Hausdorff distance for density ridges can be found in [6], and the corresponding rate for density level sets appears in [9]. The rate in Theorem 2 agrees with the bound from the Hausdorff distance and has a slightly better variance term (without a log-n factor). This phenomenon is similar to the relation between the MISE and the $L_\infty$ error in nonparametric function estimation: the MISE converges slightly faster (by a log-n factor) than the square of the $L_\infty$ error.
Now we prove the consistency of the risk estimators. In particular, we prove consistency for the smoothed bootstrap; the case of data splitting can be proved in a similar way.

Theorem 3. Let $\mathrm{Risk}_{2,n}$ be the $L_2$ coverage risk for estimating the density ridges, and let $\widehat{\mathrm{Risk}}_{2,n}$ be the corresponding risk estimator obtained by the smoothed bootstrap. Assume (K1-2) and (R), and that $p$ is at least four times bounded differentiable. Then as $n \to \infty$, $h \to 0$ and $\frac{\log n}{nh^{d+6}} \to 0$,
$$\frac{\widehat{\mathrm{Risk}}_{2,n} - \mathrm{Risk}_{2,n}}{\mathrm{Risk}_{2,n}} \stackrel{P}{\longrightarrow} 0.$$

Theorem 3 establishes the consistency of risk estimation using the smoothed bootstrap. This also leads to consistency for data splitting.
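For completeness, here is a sketch of the smoothed-bootstrap risk estimate of (5), whose consistency Theorem 3 establishes. It reuses fit_ridge and l1_coverage_risk from the earlier sketches; drawing from a Gaussian KDE amounts to resampling the data and adding N(0, h^2 I) noise, and the number of bootstrap replicates is an arbitrary choice here.

import numpy as np

def smoothed_bootstrap_risk(X, h, fit_ridge, n_boot=20,
                            rng=np.random.default_rng(0)):
    """Smoothed-bootstrap estimate of the L1 coverage risk."""
    R_hat = fit_ridge(X, h)
    risks = []
    for _ in range(n_boot):
        Xs = X[rng.integers(len(X), size=len(X))]     # resample rows
        Xs = Xs + h * rng.standard_normal(Xs.shape)   # draw from KDE
        risks.append(l1_coverage_risk(R_hat, fit_ridge(Xs, h)))
    return float(np.mean(risks))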
[Figure 3 plots: for each simulation dataset, the estimated L1 coverage risk plotted against the smoothing parameter, together with the corresponding ridge estimates; each risk curve shows a clear minimum.]
Figure 3: Three simulation datasets. Top row: the spiral dataset. Middle row: the three spirals dataset. Bottom row: the NIPS character dataset. For each row, the leftmost panel shows the estimated L1 coverage risk using data splitting; the red straight line indicates the bandwidth selected by least squares cross-validation [19], which either undersmooths or oversmooths. The remaining three panels show, from left to right, the results for under-smoothing, optimal smoothing (using the coverage risk), and over-smoothing. Note that the second minimum in the coverage risk for the three spirals dataset (middle row) corresponds to a phase transition where the estimator becomes a big circle; this is also a locally stable structure.
5 Applications

5.1 Simulation Data

We now apply the data-splitting technique (7) to choose the smoothing bandwidth for density ridge estimation. Note that we use data splitting rather than the smoothed bootstrap since, in practice, data splitting works better. Density ridge estimation is done by the subspace constrained mean shift algorithm [22]. We consider three famous datasets: the spiral dataset, the three spirals dataset, and a "NIPS" dataset.

Figure 3 shows the results for the three simulation datasets. The top row is the spiral dataset, the middle row is the three spirals dataset, and the bottom row is the NIPS character dataset. For each row, from left to right, the first panel is the estimated L1 risk using data splitting. Note that there is no practical difference between the L1 and L2 risks. The second to fourth panels show under-smoothing, optimal smoothing, and over-smoothing. We also remove ridge points whose density is below $0.05 \times \max_x \hat p_n(x)$, since they behave like random noise. As can be seen easily, the optimal bandwidth allows the density ridges to capture the underlying structure in every dataset. On the contrary, under-smoothing and over-smoothing do not capture the structure and have a higher risk.
[Figure 4 plot: the estimated L1 coverage risk plotted against the smoothing parameter for the cosmic web slice.]
Figure 4: Another slice of the cosmic web data from the Sloan Digital Sky Survey. The leftmost panel shows the estimated L1 coverage risk for estimating density ridges under different smoothing parameters; we estimated the L1 coverage risk using data splitting. The remaining panels, from left to right, display the cases of under-smoothing, optimal smoothing, and over-smoothing. As can be seen easily, the optimal smoothing allows the SCMS algorithm to detect the intricate cosmic network structure.
5.2 Cosmic Web

Now we apply our technique to the Sloan Digital Sky Survey, a huge dataset that contains millions of galaxies. In our data, each point is an observed galaxy with three features:

- z: the redshift, which gives the distance from the galaxy to Earth.
- RA: the right ascension, the longitude of the Universe.
- dec: the declination, the latitude of the Universe.

These three features (z, RA, dec) uniquely determine the location of a given galaxy.

To demonstrate the effectiveness of our method, we select a 2-D slice of our Universe at redshift $z = 0.050$ to $0.055$ with $(\mathrm{RA}, \mathrm{dec}) \in [200, 240] \times [0, 40]$. Since the redshift differences are tiny, we ignore the redshift values of the galaxies within this region and treat them as 2-D data points. Thus, we only use RA and dec. We then apply the SCMS algorithm (the version of [7]) with the data-splitting method introduced in Section 2.1 to select the smoothing parameter h. The result is given in Figure 4. The left panel shows the estimated coverage risk at different smoothing bandwidths. The remaining panels give the results for under-smoothing (second panel), optimal smoothing (third panel), and over-smoothing (rightmost panel). In the third panel of Figure 4, we see that the SCMS algorithm detects the filament structure in the data.
6 Discussion

In this paper, we propose a method based on the coverage risk, a generalization of the mean integrated squared error, to select the smoothing parameter for the density ridge estimation problem. We show that the coverage risk can be estimated using data splitting or the smoothed bootstrap, and we derive the statistical consistency of the risk estimators. Both simulations and real data analysis show that the proposed bandwidth selector works very well in practice.

The concept of coverage risk is not limited to density ridges; it can easily be generalized to other manifold learning techniques. Thus, we can use data splitting to estimate the risk and use the risk estimator to select tuning parameters. This is related to so-called stability selection [23], which allows us to select tuning parameters even in unsupervised learning settings.
References
[1] E. Bas, N. Ghadarghadar, and D. Erdogmus. Automated extraction of blood vessel networks from 3d microscopy image stacks via multi-scale principal curve tracing. In Biomedical Imaging: From Nano to Macro, 2011 IEEE International Symposium on, pages 1358-1361. IEEE, 2011.
[2] E. Bas, D. Erdogmus, R. Draft, and J. W. Lichtman. Local tracing of curvilinear structures in volumetric color images: application to the brainbow analysis. Journal of Visual Communication and Image Representation, 23(8):1260-1271, 2012.
[3] B. Cadre. Kernel estimation of density level sets. Journal of Multivariate Analysis, 2006.
[4] Y.-C. Chen, C. R. Genovese, R. J. Tibshirani, and L. Wasserman. Nonparametric modal regression. arXiv preprint arXiv:1412.1716, 2014.
[5] Y.-C. Chen, C. R. Genovese, and L. Wasserman. Generalized mode and ridge estimation. arXiv:1406.1803, June 2014.
[6] Y.-C. Chen, C. R. Genovese, and L. Wasserman. Asymptotic theory for density ridges. arXiv preprint arXiv:1406.5663, 2014.
[7] Y.-C. Chen, S. Ho, P. E. Freeman, C. R. Genovese, and L. Wasserman. Cosmic web reconstruction through density ridges: Method and algorithm. arXiv preprint arXiv:1501.05303, 2015.
[8] Y. Cheng. Mean shift, mode seeking, and clustering. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 17(8):790-799, 1995.
[9] A. Cuevas, W. Gonzalez-Manteiga, and A. Rodriguez-Casal. Plug-in estimation of general level sets. Aust. N. Z. J. Stat., 2006.
[10] D. Eberly. Ridges in Image and Data Analysis. Springer, 1996.
[11] J. Einbeck. Bandwidth selection for mean-shift based unsupervised learning techniques: a unified approach via self-coverage. Journal of Pattern Recognition Research, 6(2):175-192, 2011.
[12] U. Einmahl and D. M. Mason. Uniform in bandwidth consistency for kernel-type function estimators. The Annals of Statistics, 2005.
[13] L. C. Evans and R. F. Gariepy. Measure Theory and Fine Properties of Functions, volume 5. CRC Press, 1991.
[14] K. Fukunaga and L. Hostetler. The estimation of the gradient of a density function, with applications in pattern recognition. Information Theory, IEEE Transactions on, 21(1):32-40, 1975.
[15] C. R. Genovese, M. Perone-Pacifico, I. Verdinelli, and L. Wasserman. Nonparametric ridge estimation. The Annals of Statistics, 42(4):1511-1545, 2014.
[16] E. Gine and A. Guillou. Rates of strong uniform consistency for multivariate kernel density estimators. In Annales de l'Institut Henri Poincare (B) Probability and Statistics, 2002.
[17] T. Hastie. Principal curves and surfaces. Technical report, DTIC Document, 1984.
[18] T. Hastie and W. Stuetzle. Principal curves. Journal of the American Statistical Association, 84(406):502-516, 1989.
[19] M. C. Jones, J. S. Marron, and S. J. Sheather. A brief survey of bandwidth selection for density estimation. Journal of the American Statistical Association, 91(433):401-407, 1996.
[20] D. M. Mason, W. Polonik, et al. Asymptotic normality of plug-in level set estimates. The Annals of Applied Probability, 19(3):1108-1142, 2009.
[21] Z. Miao, B. Wang, W. Shi, and H. Wu. A method for accurate road centerline extraction from a classified image. 2014.
[22] U. Ozertem and D. Erdogmus. Locally defined principal curves and surfaces. Journal of Machine Learning Research, 2011.
[23] A. Rinaldo and L. Wasserman. Generalized density clustering. The Annals of Statistics, 2010.
[24] D. W. Scott. Multivariate Density Estimation: Theory, Practice, and Visualization, volume 383. John Wiley & Sons, 2009.
[25] B. Silverman and G. Young. The bootstrap: To smooth or not to smooth? Biometrika, 74(3):469-479, 1987.
[26] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman and Hall, 1986.
[27] R. Tibshirani. Principal curves revisited. Statistics and Computing, 2(4):183-190, 1992.
[28] L. Wasserman. All of Nonparametric Statistics. Springer-Verlag New York, Inc., 2006.
5,520 | 5,997 | Fast Distributed k-Center Clustering with Outliers on Massive Data
Gustavo Malkomes, Matt J. Kusner, Wenlin Chen
Department of Computer Science and Engineering
Washington University in St. Louis
St. Louis, MO 63130
{luizgustavo,mkusner,wenlinchen}@wustl.edu
Kilian Q. Weinberger
Department of Computer Science
Cornell University
Ithaca, NY 14850
[email protected]
Benjamin Moseley
Department of Computer Science and Engineering
Washington University in St. Louis
St. Louis, MO 63130
[email protected]
Abstract
Clustering large data is a fundamental problem with a vast number of applications.
Due to the increasing size of data, practitioners interested in clustering have turned
to distributed computation methods. In this work, we consider the widely used k-center clustering problem and its variant used to handle noisy data, k-center with
outliers. In the noise-free setting we demonstrate how a previously-proposed distributed method is actually an O(1)-approximation algorithm, which accurately
explains its strong empirical performance. Additionally, in the noisy setting, we
develop a novel distributed algorithm that is also an O(1)-approximation. These
algorithms are highly parallel and lend themselves to virtually any distributed
computing framework. We compare each empirically against the best known sequential clustering methods and show that both distributed algorithms are consistently close to their sequential versions. The algorithms are all one can hope
for in distributed settings: they are fast, memory efficient and they match their
sequential counterparts.
1 Introduction
Clustering is a fundamental machine learning problem with widespread applications. Example applications include grouping documents or webpages by their similarity for search engines [30] or
grouping web users by their demographics for targeted advertising [2]. In a clustering problem one
is given as input a set U of n data points, characterized by a set of features, and is asked to cluster
(partition) points so that points in a cluster are similar by some measure. Clustering is a well understood task on modestly sized data sets; however, today practitioners seek to cluster datasets of
massive size. Once data becomes too voluminous, sequential algorithms become ineffective due to
their running time and insufficient memory to store the data. Practitioners have turned to distributed
methods, in particular MapReduce [13], to efficiently process massive data sets.
One of the most fundamental clustering problems is the k-center problem. Here, it is assumed
that for any two input points a pair-wise distance can be computed that reflects their dissimilarity
(typically these arise from a metric space). The objective is to choose a subset of k points (called
centers) that give rise to a clustering of the input set into k clusters. Each input point is assigned to
the cluster defined by its closest center (out of the k center points). The k-center objective selects
these centers to minimize the farthest distance of any point to its cluster center.
The k-center problem has been studied for over three decades and is a fundamental task used for
exemplar-based clustering [22]. It is known to be NP-Hard and, further, no algorithm can achieve a (2 − ε)-approximation for any ε > 0 unless P=NP [16, 20]. In the sequential setting, there are
algorithms which match this bound achieving a 2-approximation [16, 20].
The k-center problem is popular for clustering datasets which are not subject to noise since the
objective is sensitive to error in the data because the worst case (maximum) distance of a point to
the centers is used for the objective. In the case where data can be noisy [1, 18, 19], previous work
has considered the k-centers with outliers problem [10]. In this problem, the objective is the same,
but additionally one may discard a set of z points from the input. These z points are the outliers and
are ignored in the objective. Here, the best known algorithm is a 3-approximation [10].
Once datasets become large, the known algorithms for these two problems become ineffective. Due
to this, previous work on clustering has resorted to alternative algorithmic approaches. There have been several
works on streaming algorithms [3, 17, 24, 26]. Others have focused on distributed computing [6,
7, 14, 25]. The work in the distributed setting has focused on algorithms which are implementable
in MapReduce, but are also inherently parallel and work in virtually any distributed computing
framework. The work of [14] was the first to consider k-center clustering in the distributed setting.
Their work gave an O(1)-round O(1)-approximate MapReduce algorithm. Their algorithm is a
sampling based MapReduce algorithm which can be used for a variety of clustering objectives.
Unfortunately, as the authors point out in their paper, the algorithm does not always perform well
empirically for the k-center objective since the objective function is very sensitive to missing data
points and the sampling can cause large errors in the solution.
The work of Kumar et al. [23] gave a (1 − 1/e)-approximation algorithm for submodular function
maximization subject to a cardinality constraint in the MapReduce setting, however, their algorithm
requires a non-constant number of MapReduce rounds. In contrast, Mirzasoleiman et al. [25] (recently extended in [8]) gave a two-round MapReduce algorithm, but their approximation ratio is not
constant. It is known that an exact algorithm for submodular maximization subject to a cardinality
constraint gives an exact algorithm for the k-center problem. Unfortunately, both problems are NPHard and the reduction is not approximation preserving. Therefore, their theoretical results do not
imply a nontrivial approximation for the k-center problem.
For these problems, the following questions loom: What can be achieved for k-center clustering
with or without outliers in the large-scale distributed setting? What underlying algorithmic ideas are
needed for the k-center with outliers problem to be solved in the distributed setting? The k-center
with outliers problem has not been studied in the distributed setting. Given the complexity of the
sequential algorithm, it is not clear what such an algorithm would look like.
Contributions. In this work, we consider the k-center and k-center with outliers problems in the
distributed computing setting. Although the algorithms are highly parallel and work in virtually
any distributed computing framework, they are particularly well suited for the MapReduce [13]
as they require only small amounts of inter-machine communication and very little memory on
each machine. We therefore state our results for the MapReduce framework [13]. We will assume
throughout the paper that our algorithm is given some number of machines, m, to process the data.
We first begin by considering a natural interpretation of the algorithm of Mirzasoleiman et al. [25]
on submodular optimization for the k-center problem. The algorithm we introduce runs in two
MapReduce rounds and achieves a small constant approximation.
Theorem 1.1. There is a two round MapReduce algorithm which achieves a 4-approximation for
the k-center problem which communicates O(km) amount of data assuming the data is already
partitioned across the machines. The algorithm uses O(max{n/m, mk}) memory on each machine.
Next we consider the k-center with outliers problem. This problem is far more challenging and previous distributed techniques do not lend themselves to this problem. Here we combine the algorithm
developed for the problem without outliers with the sequential algorithm for k-center with outliers.
We show a two round MapReduce algorithm that achieves an O(1)-approximation.
Theorem 1.2. There is a two round MapReduce algorithm which achieves a 13-approximation for
the k-center with outliers problem which communicates O(km log n) amount of data assuming the
data is already partitioned across the machines. The algorithm uses O(max{n/m, m(k+z) log n})
memory on each machine.
Finally, we perform experiments with both algorithms on real world datasets. For k-center we
observe that the quality of the solutions is effectively the same as that of the sequential algorithm for
all values of k, which is the best one could hope for. For the k-center problem with outliers, our algorithm
matches the sequential algorithm as the values of k and z vary and it significantly outperforms the
algorithm which does not explicitly consider outliers. Somewhat surprisingly our algorithm achieves
an order of magnitude speed-up over the sequential algorithm even if it is run sequentially.
2 Preliminaries
Map-Reduce. We will consider algorithms in the distributed setting where our algorithms are given
m machines. We define our algorithms in a general distributed manner, but they particularly suited
to the MapReduce model [21]. This model has become widely used both in theory and in applied
machine learning [4, 5, 9, 12, 15, 21, 25, 27, 31]. In the MapReduce setting, algorithms run in
rounds. In each round the machines are allowed to run a sequential computation without machine
communication. Between rounds, data is distributed amongst the machines in preparation for new
computation. The goal is to design an algorithm which runs in a small number of rounds since
the main running time bottleneck is distributing the data amongst the machines between each round.
Generally it is assumed that each of the machines uses sublinear memory [21]. The motivation here
is that since MapReduce is used to process large data sets, the memory on the machines should be
much smaller than the input size to the problem. It is additionally assumed that there is enough
memory to store the entire dataset across all machines. Our algorithms fall into this category and
the memory required on each machine scales inversely with m.
k-center (with outliers) problem. In the problems considered, there is a universe U of n points. Between each pair of points u, v ∈ U there is a distance d(u, v) specifying their dissimilarity. The points are assumed to lie in a metric space, which implies that for all u, v, u' ∈ U we have: 1. d(u, u) = 0; 2. d(u, v) = d(v, u); and 3. d(u, v) ≤ d(u, u') + d(u', v) (the triangle inequality). For a set X of points, we let d_X(u) := min_{v∈X} d(u, v) denote the minimum distance of a point u ∈ U to any point in X. In the k-center problem, the goal is to choose a set X of k centers such that max_{v∈U} d_X(v) is minimized (i.e., d_X(v) is the distance between v and its cluster center, and we would like to minimize the largest such distance across all points). In the k-center with outliers problem, the goal is to choose a set X of k points and a set Z of z points such that max_{v∈U\Z} d_X(v) is minimized. Note that in this problem the algorithm simply needs to choose the set X, as the optimal set Z is well defined: it is the set of the z points in U farthest from the centers X.
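To fix the notation, here is a tiny sketch computing d_X(u) and the k-center (with outliers) objective from a pairwise distance matrix; the distance-matrix form is purely for exposition and is an assumption of the sketch, not how a large-scale implementation would store the data.

import numpy as np

def kcenter_objective(D, centers, z=0):
    """Largest distance of any point to its nearest center, after
    discarding the z farthest points (z = 0 gives plain k-center)."""
    d_to_X = D[:, centers].min(axis=1)   # d_X(u) for every u
    return np.sort(d_to_X)[::-1][z]      # the (z+1)-th largest distance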
Sequential algorithms. The most widely used k-center (without outliers) algorithm is the following simple greedy procedure, summarized in pseudo-code in Algorithm 1. The algorithm sets X = ∅ and then iteratively adds points from U to X until |X| = k. At each step, the algorithm greedily selects the farthest point in U from X, and then adds this point to the updated set X.

Algorithm 1 Sequential k-center
GREEDY(U, k)
1: X = ∅
2: Add any point u ∈ U to X
3: while |X| < k do
4:   u = argmax_{v∈U} d_X(v)
5:   X = X ∪ {u}
6: end while

This algorithm is natural and efficient, and it is known to give a 2-approximation for the k-center problem [20]. However, it is also inherently sequential and does not lend itself to the distributed setting (except for very small k). A naive MapReduce implementation can be obtained by finding the element v ∈ U that maximizes d_X(v) in a distributed fashion (line 4 in Algorithm 1). This, however, requires k rounds of MapReduce, each of which must distribute the entire dataset, so it is unsuitably inefficient for many real-world problems. The sequential algorithm for k-center with outliers is more complicated due to the increased difficulty of the problem (for reference, see [10]); that algorithm is even more fundamentally sequential than Algorithm 1.
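A minimal sketch of Algorithm 1 on a precomputed distance matrix follows; a real implementation would compute distances on the fly to save memory.

import numpy as np

def greedy_kcenter(D, k, first=0):
    """Algorithm 1 (GREEDY): 2-approximate k-center."""
    X = [first]
    d_to_X = D[first].copy()               # d_X(u) for every u
    while len(X) < k:
        u = int(d_to_X.argmax())           # farthest point from X
        X.append(u)
        d_to_X = np.minimum(d_to_X, D[u])  # update after adding u
    return X

Each iteration costs O(n) distance updates, so a full run costs O(nk).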
3 k-Center in MapReduce

In this section we consider the k-center problem where no outliers are allowed. As mentioned before, a similar variant of this problem has been previously studied by Mirzasoleiman et al. [25] in the distributed setting. The work of Mirzasoleiman et al. considers submodular maximization and showed a min{1/k, 1/m}-approximation, where m is the number of machines. Their algorithm was shown to perform extremely well in practice (in a slightly modified clustering setup). The k-center problem can be mapped to submodular maximization, but the standard reduction is not approximation preserving, so their result does not imply a non-trivial approximation for k-center. In this section, we give a natural interpretation of their algorithm without submodular maximization.

Algorithm 2 summarizes a distributed approach for solving the k-center problem. First the data points of U are partitioned across all m machines. Then each machine i runs the GREEDY algorithm on the partition it is given to compute a set Ci of k points. These points are assigned to a single machine, which runs GREEDY again to compute the final solution. The algorithm runs in two MapReduce rounds and, if the data is already assigned to machines, the only information communicated is Ci for each i. Thus, we have the following proposition.

Proposition 3.1. The algorithm GREEDY-MR runs in two MapReduce rounds and communicates O(km) amount of data assuming the data is originally partitioned across the machines. The algorithm uses O(max{n/m, mk}) memory on each machine.
We aim to bound the approximation ratio of GREEDY-MR. Let OPT denote the optimal solution value for the k-center problem. The previous proposition and the following lemma give Theorem 1.1.

Algorithm 2 Distributed k-center
GREEDY-MR(U, k)
1: Partition U into m equal-sized sets U1, ..., Um, where machine i receives Ui
2: Machine i assigns Ci = GREEDY(Ui, k)
3: All sets Ci are assigned to machine 1
4: Machine 1 sets X = GREEDY(∪_{i=1}^{m} Ci, k)
5: Output X

Lemma 3.2. The algorithm GREEDY-MR is a 4-approximation algorithm.

Proof. We first show, for any i, that d_{Ci}(u) ≤ 2OPT for any u ∈ Ui. Indeed, suppose for the sake of contradiction that this is not the case for some i. Then for some u ∈ Ui, d_{Ci}(u) > 2OPT, which implies that u is at distance greater than 2OPT from all points in Ci. By definition of GREEDY, for any pair of points v, v' ∈ Ci it must be the case that d(v, v') ≥ d_{Ci}(u) > 2OPT (otherwise u would have been included in Ci). Thus, in the set {u} ∪ Ci there are k + 1 points, all at distance greater than 2OPT from each other. But then two of these points v, v' ∈ ({u} ∪ Ci) must be assigned to the same center v* in the optimal solution. Using the triangle inequality and the definition of OPT, it must be the case that d(v, v') ≤ d(v*, v) + d(v*, v') ≤ 2OPT, a contradiction. Thus, for all points u ∈ Ui, it must be that d_{Ci}(u) ≤ 2OPT.

Let X denote the solution output by GREEDY-MR. We can show a similar result for the points in ∪_{i=1}^{m} Ci when compared to X. That is, we show that d_X(u) ≤ 2OPT for any u ∈ ∪_{i=1}^{m} Ci. Indeed, suppose for the sake of contradiction that this is not the case. Then for some u ∈ ∪_{i=1}^{m} Ci, d_X(u) > 2OPT, which implies that u is at distance greater than 2OPT from all points in X. By definition of GREEDY, for any pair of points v, v' ∈ X it must be that d(v, v') ≥ d_X(u) > 2OPT. Thus, in the set {u} ∪ X there are k + 1 points, all at distance greater than 2OPT from each other. But then two of these points v, v' ∈ ({u} ∪ X) must be assigned to the same center v* in the optimal solution. Using the triangle inequality and the definition of OPT, it must be the case that d(v, v') ≤ d(v*, v) + d(v*, v') ≤ 2OPT, a contradiction. Thus, for all points u ∈ ∪_{i=1}^{m} Ci, it must be that d_X(u) ≤ 2OPT.

Now we put these together to get a 4-approximation. Consider any point u ∈ U. If u is in Ci for some i, it must be the case that d_X(u) ≤ 2OPT by the above argument. Otherwise, u is not in Ci for any i. Let Uj be the partition to which u belongs. We know that u is within distance 2OPT of some point v ∈ Cj, and further we know that v is within distance 2OPT of X by the above arguments. Thus, using the triangle inequality, d_X(u) ≤ d(u, v) + d_X(v) ≤ 2OPT + 2OPT ≤ 4OPT.
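The two rounds of GREEDY-MR are easy to simulate; below is a sketch in which the "machines" are slices of the data and every round-1 call could run in parallel. The random partition and the Euclidean metric are assumptions made for the illustration.

import numpy as np

def greedy_points(P, k):
    """GREEDY on raw points; returns the chosen centers as rows."""
    X = [0]
    d = np.linalg.norm(P - P[0], axis=1)
    while len(X) < k:
        u = int(d.argmax())
        X.append(u)
        d = np.minimum(d, np.linalg.norm(P - P[u], axis=1))
    return P[X]

def greedy_mr(points, k, m, rng=np.random.default_rng(0)):
    """Algorithm 2 (GREEDY-MR), simulated sequentially."""
    parts = np.array_split(points[rng.permutation(len(points))], m)
    reps = np.vstack([greedy_points(P, k) for P in parts])  # round 1
    return greedy_points(reps, k)                           # round 2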
4 k-Center with Outliers

In this section, we consider the k-center with outliers problem and give the first MapReduce algorithm for the problem. The problem is more challenging than the version without outliers because one must also determine which points to discard, which can drastically change which centers should be chosen. Intuitively, the right algorithmic strategy is to choose centers that have many points around them: a point surrounded by many others is unlikely to be an outlier. This idea was formalized in the algorithm of Charikar et al. [10], a well-known and influential algorithm for this problem in the single machine setting.

Algorithm 3 summarizes the approach of Charikar et al. [10]. It takes as input the set of points U, the desired number of centers k, and a parameter G. The parameter G is a "guess" of the optimal solution's value. The algorithm's performance is best when G = OPT, where OPT denotes the optimal k-center objective after discarding z points. The number of outliers to be discarded, z, is not a parameter of the algorithm and is communicated implicitly through G. The value of G can be determined by a binary search over possible values of G, between the minimum and maximum distances of any two points.

Algorithm 3 Sequential k-center with outliers [10]
OUTLIERS(U, k, G)
1: U' = U, X = ∅
2: while |X| < k do
3:   ∀u ∈ U' let B_u = {v : v ∈ U', d(u, v) ≤ G}
4:   Let v' = argmax_{u∈U'} |B_u|
5:   Set X = X ∪ {v'}
6:   Compute B'_{v'} = {v : v ∈ U', d(v', v) ≤ 3G}
7:   U' = U' \ B'_{v'}
8: end while

For each point u ∈ U', the set B_u contains the points within distance G of u. The algorithm adds to the solution set the point v' whose ball B_{v'} covers the largest number of points. The idea is to add points which have many points nearby (and thus a large B_{v'}). The algorithm then removes from the universe all points within distance 3G of v', and continues until k points have been chosen for the set X. Recall that in the outliers problem, choosing the centers fully determines the solution: the outliers are simply the farthest z points from the centers. Further, it can be shown that when G = OPT, after selecting the k centers there are at most z points remaining in U'. It is known that this algorithm gives a 3-approximation [10]; however, it is not efficient on large or even medium-sized datasets due to the computation of the sets B_u within each iteration. For instance, it can take roughly 100 hours on a data set with 45,000 points.
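A compact sketch of Algorithm 3 on a distance matrix is given below; the returned survivor count supports the binary search over G described above (keep the smallest guess for which at most z points survive all k removals).

import numpy as np

def outliers(D, k, G):
    """Algorithm 3 (OUTLIERS) of [10] for a fixed guess G."""
    alive = np.ones(D.shape[0], dtype=bool)
    X = []
    for _ in range(k):
        B = (D <= G) & alive[None, :]            # balls over survivors
        cover = np.where(alive, B.sum(axis=1), -1)
        v = int(cover.argmax())                  # heaviest ball B_v
        X.append(v)
        alive &= D[v] > 3 * G                    # delete the 3G-ball
    return X, int(alive.sum())                   # centers, points left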
We now give a distributed approach (Algorithm 4) for clustering with outliers. This algorithm is naturally parallel, yet it is significantly faster even if run sequentially on a single machine. It uses a sub-procedure (Algorithm 5) which is a generalization of OUTLIERS.

The algorithm first partitions the points across the m machines (e.g., set Ui goes to machine i). Each machine i runs the GREEDY algorithm on Ui, but selects k + z points rather than k. This results in a set Ci. For each c ∈ Ci, we assign a weight wc that is the number of points in Ui that have c as their closest point in Ci (i.e., if Ci defines an intermediate clustering of Ui, then wc is the number of points in the c-cluster). The algorithm then runs a variation of OUTLIERS called CLUSTER, described in Algorithm 5, on only the points in ∪_{i=1}^{m} Ci. The main differences are that CLUSTER represents each point c by the number of points wc closest to it, and that it uses 5G and 11G for the radii in B_u and B'_u.

Algorithm 4 Distributed k-center with outliers
OUTLIERS-MR(U, k, z, G)
1: Partition U into m equal-sized sets U1, ..., Um, where machine i receives Ui
2: Machine i sets Ci = GREEDY(Ui, k + z)
3: For each point c ∈ Ci, machine i sets wc = |{v : v ∈ Ui, d(v, c) = d_{Ci}(v)}| + 1
4: All sets Ci are assigned to machine 1 with the weights of the points in Ci
5: Machine 1 sets X = CLUSTER(∪_{i=1}^{m} Ci, k, G)
6: Output X

Algorithm 5 Clustering subroutine
CLUSTER(U, k, G)
1: U' = U, X = ∅
2: while |X| < k do
3:   ∀u ∈ U' compute B_u = {v : v ∈ U', d(u, v) ≤ 5G}
4:   Let v' = argmax_{u∈U'} Σ_{u'∈B_u} w_{u'}
5:   Set X = X ∪ {v'}
6:   Compute B'_{v'} = {v : v ∈ U', d(v', v) ≤ 11G}
7:   U' = U' \ B'_{v'}
8: end while
9: Output X

The total machine-wise communication required for OUTLIERS-MR is that needed to send each of the sets Ci to machine 1 along with their weights. Each weight is at most n, so it requires only O(log n) space to encode. This gives the following proposition.

Proposition 4.1. OUTLIERS-MR runs in two MapReduce rounds and communicates O((k + z)m log n) amount of data assuming the data is originally partitioned across the machines. The algorithm uses O(max{n/m, m(k + z) log n}) memory on each machine.
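Here is a sketch of the weighted subroutine CLUSTER together with a sequential simulation of the two rounds of OUTLIERS-MR; it reuses greedy_points from the GREEDY-MR sketch and is an illustration of the pseudo-code above, not the authors' implementation.

import numpy as np

def cluster(D, w, k, G):
    """Algorithm 5 (CLUSTER): weighted OUTLIERS with radii 5G / 11G."""
    alive = np.ones(D.shape[0], dtype=bool)
    X = []
    for _ in range(k):
        B = (D <= 5 * G) & alive[None, :]
        weight = np.where(alive, B @ w, -np.inf)  # total weight in B_u
        v = int(weight.argmax())
        X.append(v)
        alive &= D[v] > 11 * G
    return X

def outliers_mr(points, k, z, G, m, rng=np.random.default_rng(0)):
    """Algorithm 4 (OUTLIERS-MR), simulated sequentially."""
    parts = np.array_split(points[rng.permutation(len(points))], m)
    reps, ws = [], []
    for P in parts:
        C = greedy_points(P, k + z)               # round 1 per machine
        lab = np.linalg.norm(P[:, None] - C[None],
                             axis=-1).argmin(axis=1)
        reps.append(C)
        ws.append(np.bincount(lab, minlength=len(C)) + 1.0)
    R, w = np.vstack(reps), np.concatenate(ws)
    D = np.linalg.norm(R[:, None] - R[None], axis=-1)
    return R[cluster(D, w, k, G)]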
Our goal is to show that OUTLIERS-MR is an O(1)-approximation algorithm (Theorem 1.2). We first present intermediate lemmas and give proof sketches, leaving intermediate proofs to the supplementary material. We overload notation and let OPT denote a fixed optimal solution as well as the optimal objective of the problem. We will assume throughout the proof that G = OPT, as we can perform a binary search to find Ĝ = OPT(1 + ε) for arbitrarily small ε > 0 when running CLUSTER on a single machine. We first claim that any point in U_i is not too far from its closest point in C_i.
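A sketch of that binary search is given below; it is our illustration, not the paper's code. It assumes feasibility, i.e., leaving at most z points uncovered within radius 3G, is monotone in G for the routine being tuned. Since OPT is one of the O(n^2) pairwise distances, searching that sorted set recovers G = OPT exactly, and a geometric grid of radii gives the (1 + ε) variant mentioned above.

```python
import numpy as np

def binary_search_radius(points, k, z, run):
    """`run(points, k, G)` returns center indices; find the smallest feasible G."""
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    cands = np.unique(dist)                    # OPT is a pairwise distance
    lo, hi, best = 0, len(cands) - 1, cands[-1]
    while lo <= hi:
        mid = (lo + hi) // 2
        centers = run(points, k, cands[mid])
        uncovered = int((dist[centers].min(axis=0) > 3 * cands[mid]).sum())
        if uncovered <= z:
            best, hi = cands[mid], mid - 1     # feasible: try a smaller radius
        else:
            lo = mid + 1
    return best
```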
Lemma 4.2. For every point u ∈ U_i it is the case that d_{C_i}(u) ≤ 2·OPT, for all 1 ≤ i ≤ m.
Given the above lemma, let O_1, ..., O_k denote the clusters in the optimal solution. A cluster in OPT is defined as a subset of the points in U, not including outliers identified by OPT, that are closest to some fixed center chosen by OPT. The high level idea of our proof is similar to that used in [10]. Our goal is to show that when our algorithm chooses each center, the set of points discarded from U' in CLUSTER can be mapped to some cluster in the optimal solution. At the end of CLUSTER there should be at most z points in U', which are the outliers in the optimal solution. Knowing that we only discard points from U' close to centers we choose, this will imply the approximation bound.
For every point u ∈ U, which must fall into some U_i, we let c(u) denote the closest point in C_i to u (i.e., c(u) is the closest intermediate cluster center found by GREEDY to u). Consider the output of CLUSTER, X = {x_1, x_2, ..., x_k}, ordered by how elements were added to X. We will say that an optimal cluster O_i is marked at CLUSTER iteration j if there is a point u ∈ O_i such that c(u) ∉ U' just before x_j is added to X. Essentially, if a cluster is marked, we can make no guarantee about covering it within some radius of x_j (which will then be discarded). Figure 1 shows examples where O_i is (and is not) marked. We begin by noting that when x_j is added to X, the weight of the points removed from U' is at least as large as the maximum number of points in an unmarked cluster in the optimal solution.

Lemma 4.3. When x_j is added, then Σ_{u' ∈ B_{x_j}} w_{u'} ≥ |O_i| for any unmarked cluster O_i.
Given this result, the following lemma considers a point v that is in some cluster O_i. If c(v) is within the ball B_{x_j} for x_j added to X, then intuitively, this means that we cover all of the points in O_i with B'_{x_j}. Another way to say this is that after we remove the ball B'_{x_j}, no points in O_i contribute weight to any point in U'.

Lemma 4.4. Consider that x_j is to be added to X. Say that c(v) ∈ B_{x_j} for some point v ∈ O_i, for some i. Then, for every point u ∈ O_i, either c(u) ∈ B'_{x_j} or c(u) has already been removed from U'.
See the supplementary material for the proof. The final lemma below states that the weight of the points in ∪_{1≤i≤k} B'_{x_i} is at least as large as the number of points in ∪_{1≤i≤k} O_i. Further, we know that |∪_{1≤i≤k} O_i| = n − z, since OPT has z outliers. Viewing the points in B'_{x_i} as being assigned to x_i in the algorithm's solution, this shows that the number of points covered is at least as large as the number of points that the optimal solution covers. Hence, there cannot be more than z points uncovered by our algorithm.

Lemma 4.5. Σ_{i=1}^{k} Σ_{u ∈ B'_{x_i}} w_u ≥ n − z.
Finally, we are ready to complete the proof of Theorem 1.2.
Figure 1: Examples in which O_i is/is not marked.
Proof of [Theorem 1.2] Lemma 4.5 implies that the sum of the weights of the points which are in ∪_{1≤i≤k} B'_{x_i} is at least n − z. We know that every point u contributes to the weight of some point c(u) which is in C_i for 1 ≤ i ≤ m, and by Lemma 4.2, d(u, c(u)) ≤ 2·OPT. We map every point u ∈ U to x_i such that c(u) ∈ B'_{x_i}. By the definition of B'_{x_i} and Lemma 4.2, it is the case that d(u, x_i) ≤ 13·OPT by the triangle inequality. Thus, we have mapped n − z points to some point in X within distance 13·OPT. Hence, our algorithm discards at most z points and achieves a 13-approximation. With Proposition 4.1 we have shown Theorem 1.2. □

5 Experiments
We evaluate the real-world performance of the above clustering algorithms on seven clustering
datasets, described in Table 1. We compare all methods using the k-center with outliers objective, in
which z outliers may be discarded. We begin with a brief description of the clustering methods we compare.
Table 1: The clustering datasets (and their descriptions) used for evaluation.

name             n           dim.  description
Parkinsons [28]  5,875       22    patients with early-stage Parkinson's disease
Census^1         45,222      12    census household information
Skin^1           245,057     3     RGB-pixel samples from face images
Yahoo [11]       473,134     500   web-search ranking dataset (features are GBRT outputs [29])
Covertype^1      522,911     13    a forest cover dataset with cartographic features
Power^1          2,049,280   7     household electric power readings
Higgs^1          11,000,000  7     particle detector measurements (the seven "high-level" features)
We then show how the distributed algorithms compare with their sequential counterparts
on datasets small enough to run the sequential methods, for a variety of settings. Finally, in the
large-scale setting, we compare all distributed methods for different settings of k.
Methods. We implemented the sequential GREEDY and OUTLIERS, and the distributed GREEDY-MR [25] and OUTLIERS-MR. We also implemented two baseline methods. RANDOM|RANDOM: m machines randomly select k+z points each, then a single machine randomly selects k points out of the previously selected m(k+z) points. RANDOM|OUTLIERS: m machines randomly select k+z points each, then OUTLIERS is run over the m(k+z) points previously selected. All methods were implemented in MATLAB and conducted on an 8-core Intel Xeon 2 GHz machine.
Figure 2: The performance of sequential and distributed methods (Greedy, Outliers, Random|Random, Random|Outliers, Greedy-MR, Outliers-MR). We plot the objective value of four small datasets (10k Covertype, 10k Power, Census, Parkinson) for varying number of clusters k, number of outliers log(z), and number of machines m.
Sequential vs. Distributed. Our first set of experiments evaluates how close the proposed distributed
methods are to their sequential counterparts. To this end, we vary all parameters: number of centers
k, number of outliers z, and the number of machines m. We consider datasets for which computing
the sequential methods is practical: Parkinsons, Census and two random subsamples (10, 000 inputs
each) of Covertype and Power. We show the results in Figure 2. Each column contains the results for
a single dataset and each row for a single varying parameter (k, z, or m), along with standard errors
over 5 runs. When a parameter is not varied we fix k = 50, z = 256, and m = 10. As expected, the
objective value for all methods generally decreases as k increases (as the distance of any point to its
cluster center must shrink with more clusters). RANDOM|RANDOM and RANDOM|OUTLIERS usually perform worse than GREEDY-MR for small k (save 10k Covertype), and RANDOM|OUTLIERS sometimes matches it for large k.
^1 https://archive.ics.uci.edu/ml/datasets/
Figure 3: The objective value of five large-scale datasets (Skin, Yahoo, Covertype, Power, Higgs) for varying number of clusters k, with m = 10 and z = 256 (methods: Random|Random, Random|Outliers, Greedy-MR, Outliers-MR).
For all values of k tested, OUTLIERS-MR outperforms all other distributed methods. Furthermore, it matches or slightly outperforms (which we attribute to randomness) the sequential OUTLIERS method in all settings. As z increases, the two random methods improve, beyond GREEDY-MR in some cases. Similar to the first plot, OUTLIERS-MR outperforms all other distributed methods while matching the sequential clustering method. For very small settings of m (i.e., m = 2, 6), OUTLIERS-MR and GREEDY-MR perform slightly worse than sequential OUTLIERS and GREEDY. However, for practical settings of m (i.e., m ≥ 10), OUTLIERS-MR matches OUTLIERS and GREEDY-MR matches GREEDY. In terms of speed, on the largest of these datasets (Census), OUTLIERS-MR run sequentially is more than 677× faster than OUTLIERS; see Table 2. This large speedup is due to the fact that we cannot store the full distance matrix for Census, thus all distances need to be computed on demand.
Large-scale. Our second set of experiments focuses on the performance of the distributed methods on five large-scale datasets, shown in Figure 3. We vary k between 0 and 100, and fix m = 10 and z = 256. Note that for certain datasets, clustering while taking into account outliers produces a noticeable reduction in objective value. On Yahoo, the GREEDY-MR method is even outperformed by RANDOM|OUTLIERS, which considers outliers. Similar to the small dataset results, OUTLIERS-MR outperforms nearly all distributed methods (save for small k on Covertype). Even on datasets where there appear to be few outliers, OUTLIERS-MR has excellent performance. Finally, OUTLIERS-MR is extremely fast: clustering on Higgs took less than 15 minutes.

Table 2: The speedup of the distributed algorithms, run sequentially, over their sequential counterparts on the small datasets.

dataset        k-center  outliers
10k Covertype  3.6       6.2
10k Power      4.8       9.4
Parkinson      4.9       4.4
Census         12.4      677.7
6 Conclusion
In this work we described algorithms for the k-center and k-center with outliers problems in the distributed setting. For both problems we studied two-round MapReduce algorithms which achieve an O(1)-approximation, and demonstrated that they perform almost identically to their sequential counterparts on real data. Further, a number of our experiments validate that ignoring outliers when running k-center clustering on noisy data degrades the quality of the solution. We hope these techniques lead to the discovery of fast and efficient distributed algorithms for other clustering problems. In particular, what can be shown for the k-median and k-means with outliers problems is an exciting open question.
Acknowledgments GM was supported by CAPES/BR; MJK and KQW were supported by the NSF
grants IIA-1355406, IIS-1149882, EFRI-1137211; and BM was supported by the Google and Yahoo
Research Awards.
References
[1] P. K. Agarwal and J. M. Phillips. An efficient algorithm for 2d euclidean 2-center with outliers. In ESA, pages 64-75, 2008.
[2] C. C. Aggarwal, J. L. Wolf, and P. S. Yu. Method for targeted advertising on the web based on accumulated self-learning data, clustering users and semantic node graph techniques, March 30 2004. US Patent 6,714,975.
[3] N. Ailon, R. Jaiswal, and C. Monteleoni. Streaming k-means approximation. In NIPS, pages 10-18, 2009.
[4] A. Andoni, A. Nikolov, K. Onak, and G. Yaroslavtsev. Parallel algorithms for geometric graph problems. In STOC, pages 574-583, 2014.
[5] B. Bahmani, R. Kumar, and S. Vassilvitskii. Densest subgraph in streaming and mapreduce. PVLDB, 5(5):454-465, 2012.
[6] Bahman Bahmani, Benjamin Moseley, Andrea Vattani, Ravi Kumar, and Sergei Vassilvitskii. Scalable k-means++. PVLDB, 5(7):622-633, 2012.
[7] M. Balcan, S. Ehrlich, and Y. Liang. Distributed k-means and k-median clustering on general communication topologies. In NIPS, pages 1995-2003, 2013.
[8] Rafael Barbosa, Alina Ene, Huy Nguyen, and Justin Ward. The power of randomization: Distributed submodular maximization on massive datasets. In ICML, pages 1236-1244, 2015.
[9] A. Z. Broder, L. G. Pueyo, V. Josifovski, S. Vassilvitskii, and S. Venkatesan. Scalable k-means by ranked retrieval. In WSDM, pages 233-242, 2014.
[10] M. Charikar, S. Khuller, D. M. Mount, and G. Narasimhan. Algorithms for facility location problems with outliers. In SODA, pages 642-651, 2001.
[11] M. Chen, K. Q. Weinberger, O. Chapelle, D. Kedem, and Z. Xu. Classifier cascade for minimizing feature evaluation cost. In AISTATS, pages 218-226, 2012.
[12] F. Chierichetti, R. Kumar, and A. Tomkins. Max-cover in map-reduce. In WWW, pages 231-240, 2010.
[13] J. Dean and S. Ghemawat. MapReduce: Simplified data processing on large clusters. In OSDI, pages 137-150, 2004.
[14] A. Ene, S. Im, and B. Moseley. Fast clustering using MapReduce. In KDD, pages 681-689, 2011.
[15] J. Feldman, S. Muthukrishnan, A. Sidiropoulos, C. Stein, and Z. Svitkina. On distributing symmetric streaming computations. In SODA, pages 710-719, 2008.
[16] T. F. Gonzalez. Clustering to minimize the maximum intercluster distance. Theoretical Computer Science, 38(0):293-306, 1985. ISSN 0304-3975.
[17] S. Guha, A. Meyerson, N. Mishra, R. Motwani, and L. O'Callaghan. Clustering data streams: Theory and practice. IEEE Trans. Knowl. Data Eng., 15(3):515-528, 2003.
[18] Sudipto Guha, Rajeev Rastogi, and Kyuseok Shim. Techniques for clustering massive data sets. In Clustering and Information Retrieval, volume 11 of Network Theory and Applications, pages 35-82. Springer US, 2004. ISBN 978-1-4613-7949-2.
[19] M. Hassani, E. Müller, and T. Seidl. EDISKCO: energy efficient distributed in-sensor-network k-center clustering with outliers. In SensorKDD-Workshop, pages 39-48, 2009.
[20] D. S. Hochbaum and D. B. Shmoys. A best possible heuristic for the k-center problem. Mathematics of Operations Research, 10(2):180-184, 1985.
[21] H. J. Karloff, S. Suri, and S. Vassilvitskii. A model of computation for MapReduce. In SODA, pages 938-948, 2010.
[22] L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: An Introduction to Cluster Analysis. Wiley-Interscience, 9th edition, March 1990. ISBN 0471878766.
[23] Ravi Kumar, Benjamin Moseley, Sergei Vassilvitskii, and Andrea Vattani. Fast greedy algorithms in mapreduce and streaming. In SPAA, pages 1-10, 2013.
[24] R. M. McCutchen and S. Khuller. Streaming algorithms for k-center clustering with outliers and with anonymity. In APPROX-RANDOM, pages 165-178, 2008.
[25] B. Mirzasoleiman, A. Karbasi, R. Sarkar, and A. Krause. Distributed submodular maximization: Identifying representative elements in massive data. In NIPS, pages 2049-2057, 2013.
[26] M. Shindler, A. Wong, and A. W. Meyerson. Fast and accurate k-means for large datasets. In NIPS, pages 2375-2383, 2011.
[27] S. Suri and S. Vassilvitskii. Counting triangles and the curse of the last reducer. In WWW, pages 607-614, 2011.
[28] A. Tsanas, M. A. Little, P. E. McSharry, and L. O. Ramig. Enhanced classical dysphonia measures and sparse regression for telemonitoring of Parkinson's disease progression. In ICASSP, pages 594-597. IEEE, 2010.
[29] S. Tyree, K. Q. Weinberger, K. Agrawal, and J. Paykin. Parallel boosted regression trees for web search ranking. In WWW, pages 387-396. ACM, 2011.
[30] O. Zamir, O. Etzioni, O. Madani, and R. M. Karp. Fast and intuitive clustering of web documents. In KDD, volume 97, pages 287-290, 1997.
[31] Z. Zhao, G. Wang, A. R. Butt, M. Khan, V. S. A. Kumar, and M. V. Marathe. Sahad: Subgraph analysis in massive networks using hadoop. In IPDPS, pages 390-401, May 2012.
5,521 | 5,998 | Orthogonal NMF through Subspace Exploration
Megasthenis Asteris
The University of Texas at Austin
[email protected]
Dimitris Papailiopoulos
University of California, Berkeley
[email protected]
Alexandros G. Dimakis
The University of Texas at Austin
[email protected]
Abstract
Orthogonal Nonnegative Matrix Factorization (ONMF) aims to approximate a
nonnegative matrix as the product of two k-dimensional nonnegative factors, one
of which has orthonormal columns. It yields potentially useful data representations as superposition of disjoint parts, while it has been shown to work well
for clustering tasks where traditional methods underperform. Existing algorithms
rely mostly on heuristics, which despite their good empirical performance, lack
provable performance guarantees.
We present a new ONMF algorithm with provable approximation guarantees. For
any constant dimension k, we obtain an additive EPTAS without any assumptions
on the input. Our algorithm relies on a novel approximation to the related Nonnegative Principal Component Analysis (NNPCA) problem; given an arbitrary
data matrix, NNPCA seeks k nonnegative components that jointly capture most
of the variance. Our NNPCA algorithm is of independent interest and generalizes
previous work that could only obtain guarantees for a single component.
We evaluate our algorithms on several real and synthetic datasets and show that
their performance matches or outperforms the state of the art.
1 Introduction
Orthogonal NMF The success of Nonnegative Matrix Factorization (NMF) in a range of disciplines spanning data mining, chemometrics, signal processing and more, has driven an extensive
practical and theoretical study [1, 2, 3, 4, 5, 6, 7, 8]. Its power lies in its potential to generate
meaningful decompositions of data into non-subtractive combinations of a few nonnegative parts.
Orthogonal NMF (ONMF) [9] is a variant of NMF with an additional orthogonality constraint: given
a real nonnegative m × n matrix M and a target dimension k, typically much smaller than m and n, we seek to approximate M by the product of an m × k nonnegative matrix W with orthogonal (w.l.o.g., orthonormal) columns, and an n × k nonnegative matrix H. In the form of an optimization,

(ONMF)    E* ≜ min_{W≥0, H≥0, WᵀW=I_k} ‖M − WHᵀ‖²_F.    (1)
Since W is nonnegative, its columns are orthogonal if and only if they have disjoint supports. In
turn, each row of M is approximated by a scaled version of a single (transposed) column of H.
Despite the admittedly limited representational power compared to NMF, ONMF yields sparser part-based representations that are potentially easier to interpret, while it naturally lends itself to certain
applications. In a clustering setting, for example, W serves as a cluster membership matrix and the
columns of H correspond to k cluster centroids [9, 10, 11]. Empirical evidence shows that ONMF
performs remarkably well in certain clustering tasks, such as document classification [6, 11, 12, 13,
14, 15]. In the analysis of textual data where M is a words by documents matrix, the orthogonal
columns of W can be interpreted as topics defined by disjoint subsets of words. In the case of an
image dataset, with each column of M corresponding to an image evaluated on multiple pixels, each
of the orthogonal base vectors highlights a disjoint segment of the image area.
Nonnegative PCA For any given factor W ≥ 0 with orthonormal columns, the second ONMF factor H is readily determined: H = MᵀW ≥ 0. This follows from the fact that M is by assumption nonnegative. Based on the above, it can be shown that the ONMF problem (1) is equivalent to

(NNPCA)    V* ≜ max_{W ∈ W_k} ‖MᵀW‖²_F,    (2)

where W_k ≜ { W ∈ R^{m×k} : W ≥ 0, WᵀW = I_k }.

For arbitrary (i.e., not necessarily nonnegative) matrices M, the non-convex maximization (2) coincides with the Nonnegative Principal Component Analysis (NNPCA) problem [16]. Similarly to vanilla PCA, NNPCA seeks k orthogonal components that jointly capture most of the variance of the (centered) data in M. The nonzero entries of the extracted components, however, must be positive, which renders the problem NP-hard even in the case of a single component (k = 1) [17].
Our Contributions We present a novel algorithm for NNPCA. Our algorithm approximates the
solution to (2) for any real input matrix and is accompanied with global approximation guarantees.
Using the above as a building block, we develop an algorithm to approximately solve the ONMF
problem (1) on any nonnegative matrix. Our algorithm outputs a solution that strictly satisfies both
the nonnegativity and the orthogonality constraints. Our main results are as follows:
Theorem 1. (NNPCA) For any m × n matrix M, desired number of components k, and accuracy parameter ε ∈ (0, 1), our NNPCA algorithm computes W ∈ W_k such that

‖MᵀW‖²_F ≥ (1 − ε)·V* − k·σ²_{r+1}(M),

where σ_{r+1}(M) is the (r + 1)th singular value of M, in time T_SVD(r) + O((1/ε)^{r·k} · k·m).
Here, T_SVD(r) denotes the time required to compute a rank-r approximation M̄ of the input M using the truncated singular value decomposition (SVD). Our NNPCA algorithm operates on the low-rank matrix M̄. The parameter r controls a natural trade-off; higher values of r lead to tighter guarantees,
but impact the running time of our algorithm. Finally, note that despite the exponential dependence
in r and k, the complexity scales polynomially in the ambient dimension of the input.
If the input matrix M is nonnegative, as in any instance of the ONMF problem, we can compute an
approximate orthogonal nonnegative factorization in two steps: first obtain an orthogonal factor W
by (approximately) solving the NNPCA problem on M, and subsequently set H = MᵀW.

Theorem 2. (ONMF) For any m × n nonnegative matrix M, target dimension k, and desired accuracy ε ∈ (0, 1), our ONMF algorithm computes an ONMF pair W, H, such that

‖M − WHᵀ‖²_F ≤ E* + ε·‖M‖²_F,

in time T_SVD(k/ε) + O((1/ε)^{k²/ε} · k·m).
For any constant dimension k, Theorem 2 implies an additive EPTAS for the relative ONMF approximation error. This is, to the best of our knowledge, the first general ONMF approximation guarantee, since we impose no assumptions on M beyond nonnegativity.
We evaluate our NNPCA and ONMF algorithms on synthetic and real datasets. As we discuss in
Section 4, for several cases we show improvements compared to the previous state of the art.
Related Work ONMF as a variant of NMF first appeared implicitly in [18]. The formulation
in (1) was introduced in [9]. Several algorithms in a subsequent line of work [12, 13, 19, 20, 21, 22]
approximately solve variants of that optimization problem. Most rely on modifying approaches for
NMF to accommodate the orthogonality constraint; either exploiting the additional structural properties in the objective [13], introducing a penalization term [9], or updating the current estimate
in suitable directions [12], they typically reduce to a multiplicative update rule which attains orthogonality only in a limit sense. In [11], the authors suggest two alternative approaches: an EM
algorithm motivated by connections to spherical k-means, and an augmented Lagrangian formulation that explicitly enforces orthogonality, but only achieves nonnegativity in the limit. Despite their
good performance in practice, existing methods only guarantee local convergence.
A significant body of work [23, 24, 25, 26] has focused on Separable NMF, a variant of NMF partially related to ONMF. Sep. NMF seeks to decompose M into the product of two nonnegative matrices W and Hᵀ, where W contains a permutation of the k × k identity matrix. Intuitively, the geometric picture of Sep. NMF should be quite different from that of ONMF: in the former, the rows of Hᵀ are the extreme rays of a convex cone enclosing all rows of M, while in the latter they should be scattered in the interior of that cone so that each row of M has one representative in small angular distance. Algebraically, ONMF factors approximately satisfy the structural requirement of Sep. NMF, but the converse is not true: a Sep. NMF solution is not a valid ONMF solution (Fig. 1).

Figure 1: ONMF and Separable NMF, upon appropriate permutation of the rows of M. In the first case, each row of M is approximated by a single row of Hᵀ, while in the second, by a nonnegative combination of all k rows of Hᵀ.
In the NNPCA front, nonnegativity as a constraint on PCA first appeared in [16], which proposed
a coordinate-descent scheme on a penalized version of (2) to compute a set of nonnegative components. In [27], the authors developed a framework stemming from Expectation-Maximization
(EM) on a generative model of PCA to compute a nonnegative (and optionally sparse) component.
In [17], the authors proposed an algorithm based on sampling points from a low-dimensional subspace of the data covariance and projecting them on the nonnegative orthant. [27] and [17] focus
on the single-component problem; multiple components can be computed sequentially employing
a heuristic deflation step. Our main theoretical result is a generalization of the analysis of [17] for
multiple components. Finally, note that despite the connection between the two problems, existing
algorithms for ONMF are not suitable for NNPCA as they only operate on nonnegative matrices.
2 Algorithms and Guarantees

2.1 Overview
We first develop an algorithm to approximately solve the NNPCA problem (2) on any arbitrary (i.e., not necessarily nonnegative) m × n matrix M. The core idea is to solve the NNPCA problem not directly on M, but on a rank-r approximation M̄ instead. Our main technical contribution is a procedure that approximates the solution to the constrained maximization (2) on a rank-r matrix within a multiplicative factor arbitrarily close to 1, in time exponential in r, but polynomial in the dimensions of the input. Our Low Rank NNPCA algorithm relies on generating a large number of candidate solutions, one of which provably achieves objective value close to optimal.

The k nonnegative components W ∈ W_k returned by our Low Rank NNPCA algorithm on the sketch M̄ are used as a surrogate for the desired components of the original input M. Intuitively, the performance of the extracted nonnegative components depends on how well M is approximated by the low-rank sketch M̄; a higher rank approximation leads to better results. However, the complexity of our low rank solver depends exponentially on the rank of its input. A natural trade-off arises between the quality of the extracted components and the running time of our NNPCA algorithm.
Using our NNPCA algorithm as a building block, we propose a novel algorithm for the ONMF
problem (1). In an ONMF instance, we are given an m × n nonnegative matrix M and a target dimension k < m, n, and seek to approximate M with a product WHᵀ of two nonnegative matrices, where W additionally has orthonormal columns. Computing such a factorization is equivalent to solving the NNPCA problem on the nonnegative matrix M. (See Appendix A.1 for a formal argument.) Once a nonnegative orthogonal factor W is obtained, the second ONMF factor is readily determined: H = MᵀW minimizes the Frobenius approximation error in (1) for a given W. Under an appropriate configuration of the accuracy parameters, for any nonnegative m × n input M and
constant target dimension k, our algorithm yields an additive EPTAS for the relative approximation
error, without any additional assumptions on the input data.
2.2 Main Results

Low Rank NNPCA We develop an algorithm to approximately solve the NNPCA problem on an m × n real rank-r matrix M̄:

W* ≜ arg max_{W ∈ W_k} ‖M̄ᵀW‖²_F.    (3)

Algorithm 1 LowRankNNPCA
input: real m × n rank-r matrix M̄, k, ε ∈ (0, 1)
output: W ∈ W_k ⊆ R^{m×k}
1: C ← {}    {Candidate solutions}
2: U, Σ, V ← SVD(M̄, r)    {Truncated SVD}
3: for each C ∈ [N_{ε/2}(S^{r−1}_2)]^{⊗k} do    {See Lemma 1}
4:    A ← UΣC    {A ∈ R^{m×k}}
5:    Ŵ ← LocalOptW(A)    {Alg. 3}
6:    C ← C ∪ {Ŵ}
7: end for
8: W ← arg max_{W ∈ C} ‖M̄ᵀW‖²_F

The procedure, which lies at the core of our subsequent developments, is encoded in Alg. 1. We describe it in detail in Section 3. The key observation is that, irrespectively of the dimensions of the input, the maximization in (3) can be reduced to k·r unknowns. The algorithm generates a large number of k-tuples of r-dimensional points; the collection of tuples is denoted by [N_{ε/2}(S^{r−1}_2)]^{⊗k}, the kth Cartesian power of an ε/2-net of the r-dimensional ℓ2-unit sphere. Using these points, we effectively sample the column-space of the input M̄. Each tuple yields a feasible solution W ∈ W_k through a computationally efficient subroutine (Alg. 3). The best among those candidate solutions is provably close to the optimal W* with respect to the objective in (2). The approximation guarantees are formally established in the following lemma.
Lemma 1. For any real m × n matrix M̄ with rank r, desired number of components k, and accuracy parameter ε ∈ (0, 1), Algorithm 1 outputs W ∈ W_k such that

‖M̄ᵀW‖²_F ≥ (1 − ε)·‖M̄ᵀW*‖²_F,

where W* is the optimal solution defined in (3), in time T_SVD(r) + O((2/ε)^{r·k} · k·m).
Proof. (See Appendix A.2.)
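A NumPy sketch of Algorithm 1 follows, with our naming throughout and two deliberate simplifications: (i) instead of enumerating the full ε/2-net, which the guarantee of Lemma 1 requires but is exponential in r·k, it draws a fixed number of random points from the sphere, trading the worst-case bound for speed; and (ii) local_opt_w is the simplified O(k·m) form of the LocalOptW subroutine (Algorithm 3, described in Section 3).

```python
import numpy as np

def local_opt_w(A):
    """Feasible W for fixed A: disjoint column supports from row-wise argmax."""
    m, k = A.shape
    W = np.zeros((m, k))
    j_star = A.argmax(axis=1)                 # each row supports one column
    keep = A[np.arange(m), j_star] > 0
    for j in range(k):
        idx = np.where(keep & (j_star == j))[0]
        if idx.size:                          # normalize the surviving entries
            W[idx, j] = A[idx, j] / np.linalg.norm(A[idx, j])
    return W                                  # nonnegative, disjoint supports

def low_rank_nnpca(M_bar, k, r, n_samples=2000, seed=0):
    """Approximately maximize ||M_bar^T W||_F^2 over W in W_k (rank-r input)."""
    rng = np.random.default_rng(seed)
    U, s, _ = np.linalg.svd(M_bar, full_matrices=False)
    U, s = U[:, :r], s[:r]
    best_W, best_val = np.zeros((M_bar.shape[0], k)), -np.inf
    for _ in range(n_samples):
        C = rng.standard_normal((r, k))
        C /= np.linalg.norm(C, axis=0)        # k points on the unit r-sphere
        W = local_opt_w(U @ (s[:, None] * C)) # A = U Sigma C
        val = np.linalg.norm(M_bar.T @ W) ** 2
        if val > best_val:
            best_W, best_val = W, val
    return best_W
```

A column whose support comes out empty stays zero, a degenerate case the exhaustive net enumeration avoids; the sketch is meant to convey the structure, not to reproduce the guarantee.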
Nonnegative PCA Given an arbitrary real m × n matrix M, we can generate a rank-r sketch M̄ and solve the low rank NNPCA problem on M̄ using Algorithm 1. The output W ∈ W_k of the low rank problem can be used as a surrogate for the desired components of the original input M. For simplicity, here we consider the case where M̄ is the rank-r approximation of M obtained by the truncated SVD. Intuitively, the performance of the extracted components on the original data matrix M will depend on how well the latter is approximated by M̄, and in turn on the spectral decay of the input data. For example, if M exhibits a sharp spectral decay, which is frequently the case in real data, a moderate value of r suffices to obtain a good approximation. This leads to our first main theorem, which formally establishes the guarantees of our NNPCA algorithm.
Theorem 1. For any real m × n matrix M, let M̄ be its best rank-r approximation. Algorithm 1 with input M̄, and parameters k and ε ∈ (0, 1), outputs W ∈ W_k such that

‖MᵀW‖²_F ≥ (1 − ε)·‖MᵀW*‖²_F − k·‖M − M̄‖²₂,

where W* ≜ arg max_{W ∈ W_k} ‖MᵀW‖²_F, in time T_SVD(r) + O((1/ε)^{r·k} · k·m).
Proof. The proof follows from Lemma 1. It is formally provided in Appendix A.3.
Theorem 1 establishes a trade-off between the computational complexity of the proposed NNPCA approach and the tightness of the approximation guarantees; higher values of r imply smaller ‖M − M̄‖²₂ and in turn a tighter bound (assuming that the singular values of M decay), but have an exponential impact on the running time. Despite the exponential dependence on r and k, our approach is polynomial in the dimensions of the input M, dominated by the truncated SVD.
In practice, Algorithm 1 can be terminated early, returning the best computed result at the time of termination, sacrificing the theoretical approximation guarantees. In Section 4 we empirically evaluate our algorithm on real datasets and demonstrate that even for small values of r, our NNPCA algorithm significantly outperforms existing approaches.
Orthogonal NMF The NNPCA algorithm straightforwardly yields an algorithm for the ONMF
problem (1). In an ONMF instance, the input matrix M is by assumption nonnegative. Given any
m × k orthogonal nonnegative factor W, the optimal choice for the second factor is H = MᵀW. Hence, it suffices to determine W, which can be obtained by solving the NNPCA problem on M.
The proposed ONMF algorithm is outlined in Alg. 2. Given a nonnegative m × n matrix M, we first obtain a rank-r approximation M̄ via the truncated SVD, where r is an accuracy parameter. Using Alg. 1 on M̄, we compute an orthogonal nonnegative factor W ∈ W_k that approximately maximizes (3) within a desired accuracy. The second ONMF factor H is readily determined as described earlier.

Algorithm 2 ONMFS
input: m × n real M ≥ 0, r, k, ε ∈ (0, 1)
1: M̄ ← SVD(M, r)
2: W ← LowRankNNPCA(M̄, k, ε)    {Alg. 1}
3: H ← MᵀW
output: W, H
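The two-step structure is short enough to state in code; a minimal sketch reusing the low_rank_nnpca routine from the previous sketch is given below (our naming, not the authors' released code).

```python
import numpy as np

def onmfs(M, k, r, nnpca):
    """Two-step ONMF: orthogonal nonnegative W from NNPCA, then H = M^T W."""
    W = nnpca(M, k, r)     # e.g., nnpca=low_rank_nnpca (rank-r sketch inside)
    H = M.T @ W            # optimal H for fixed W, since M >= 0
    return W, H

# Relative approximation error, as reported in Section 4:
#   err = np.linalg.norm(M - W @ H.T)**2 / np.linalg.norm(M)**2
```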
The accuracy parameter r once again controls a trade-off between the quality of the ONMF factors and the complexity of the algorithm. We note, however, that for any target dimension k and desired accuracy parameter ε, setting r = ⌈k/ε⌉ suffices to achieve an additive ε error on the relative approximation error of the ONMF problem. More formally,
Theorem 2. For any m × n real nonnegative matrix M, target dimension k, and desired accuracy ε ∈ (0, 1), Algorithm 2 with parameter r = ⌈k/ε⌉ outputs an ONMF pair W, H, such that

‖M − WHᵀ‖²_F ≤ E* + ε·‖M‖²_F,

in time T_SVD(k/ε) + O((1/ε)^{k²/ε} · k·m).
Proof. (See Appendix A.4.)
Theorem 2 implies an additive EPTAS¹ for the relative approximation error in the ONMF problem for any constant target dimension k; Algorithm 2 runs in time polynomial in the dimensions of the input M. Finally, note that it requires no assumption on M beyond nonnegativity.
3 The Low Rank NNPCA Algorithm
In this section, we re-visit Alg. 1, which plays a central role in our developments, as it is the key
piece of our NNPCA and in turn our ONMF algorithm. Alg. 1 approximately solves the NNPCA
problem (3) on a rank-r, m × n matrix M̄. It operates by producing a large, but tractable, number of candidate solutions W ∈ W_k, and returns the one that maximizes the objective value in (2). In the
sequel, we provide a brief description of the ideas behind the algorithm.
We are interested in approximately solving the low rank NNPCA problem (3). Let M̄ = UΣVᵀ denote the truncated SVD of M̄. For any W ∈ R^{m×k},

‖M̄ᵀW‖²_F = ‖ΣUᵀW‖²_F = Σ_{j=1}^{k} ‖ΣUᵀw_j‖²₂ = Σ_{j=1}^{k} max_{c_j ∈ S^{r−1}_2} ⟨w_j, UΣc_j⟩²,    (4)
where S^{r−1}_2 denotes the r-dimensional ℓ2-unit sphere. Let C denote the r × k variable formed by stacking the unit-norm vectors c_j, j = 1, ..., k. The key observation is that for a given C, we can efficiently compute a W ∈ W_k that maximizes the right-hand side of (4). The procedure for that task is outlined in Alg. 3. Hence, the NNPCA problem (3) is reduced to determining the optimal value of the low-dimensional variable C. But first, let us provide a brief description of Alg. 3.
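The chain of equalities in (4) can be sanity-checked numerically; the last step is Cauchy-Schwarz, with the inner maximum attained at c_j = ΣUᵀw_j / ‖ΣUᵀw_j‖. The toy check below is our own example under that reading.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, r, k = 30, 20, 4, 3
M_bar = (rng.standard_normal((m, r)) * [3.0, 2.0, 1.0, 0.5]) \
        @ rng.standard_normal((r, n))                 # a rank-r matrix
U, s, _ = np.linalg.svd(M_bar, full_matrices=False)
U, s = U[:, :r], s[:r]
W, _ = np.linalg.qr(rng.standard_normal((m, k)))      # any W with W^T W = I_k

lhs = np.linalg.norm(M_bar.T @ W) ** 2
rhs = sum(np.linalg.norm(s * (U.T @ W[:, j])) ** 2 for j in range(k))
assert np.isclose(lhs, rhs)   # the problem reduces to the k*r unknowns in C
```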
¹ Additive EPTAS (Efficient Polynomial Time Approximation Scheme [28, 29]) refers to an algorithm that can approximate the solution of an optimization problem within an arbitrarily small additive error ε and has complexity that scales polynomially in the input size n, but possibly exponentially in 1/ε. EPTAS is more efficient than a PTAS because it enforces a polynomial dependency on n for any ε, i.e., a running time f(1/ε)·p(n), where p(n) is polynomial. For example, a running time of O(n^{1/ε}) is considered PTAS, but not EPTAS.
For a fixed r × k matrix C, Algorithm 3 computes

Ŵ ≜ arg max_{W ∈ W_k} Σ_{j=1}^{k} ⟨w_j, a_j⟩²,    (5)

where A ≜ UΣC.

Algorithm 3 LocalOptW
input: real m × k matrix A
output: Ŵ = arg max_{W ∈ W_k} Σ_{j=1}^{k} ⟨w_j, a_j⟩²
1: C_W ← {}
2: for each s ∈ {±1}^k do
3:    A' ← A·diag(s)
4:    I_j ← {}, j = 1, ..., k
5:    for i = 1, ..., m do
6:       j* ← arg max_j A'_{ij}
7:       if A'_{ij*} ≥ 0 then
8:          I_{j*} ← I_{j*} ∪ {i}
9:       end if
10:   end for
11:   W ← 0_{m×k}
12:   for j = 1, ..., k do
13:      [w_j]_{I_j} ← [a'_j]_{I_j} / ‖[a'_j]_{I_j}‖
14:   end for
15:   C_W ← C_W ∪ {W}
16: end for
17: Ŵ ← arg max_{W ∈ C_W} Σ_{j=1}^{k} ⟨w_j, a_j⟩²

The challenge is to determine the support of the optimal solution Ŵ; if an oracle revealed the optimal supports I_j, j = 1, ..., k of its columns, then the exact value of the nonzero entries would be determined by the Cauchy-Schwarz inequality, and the contribution of the jth summand in (5) would be equal to Σ_{i ∈ I_j} A²_{ij}. Due to the nonnegativity constraints in W_k, the optimal support I_j of the jth column must contain indices corresponding to only nonnegative or nonpositive entries of a_j, but not a combination of both. Algorithm 3 considers all 2^k possible sign combinations for the support sets implicitly by solving (5) on all 2^k matrices A' = A·diag(s), s ∈ {±1}^k. Hence, we may assume without loss of generality that all support sets correspond to nonnegative entries of A.
Moreover, if index i ∈ [m] is assigned to I_j, then the contribution of the entire ith row of A to the objective is equal to A²_{ij}. Based on the above, Algorithm 3 constructs the collection of the support sets by assigning index i to I_j if and only if A_{ij} is nonnegative and the largest among the entries of the ith row of A. The algorithm runs in time² O(2^k·k·m) and guarantees that the output is the optimal solution to (5). A more formal analysis of Alg. 3 is provided in Section A.5.
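A direct NumPy transcription of Algorithm 3 follows (our naming, not the authors' code). The outer loop over sign patterns s ∈ {−1, +1}^k is what makes it O(2^k·k·m), as stated above.

```python
import itertools
import numpy as np

def local_opt_w_full(A):
    """Maximize sum_j <w_j, a_j>^2 over nonnegative W with orthonormal columns."""
    m, k = A.shape
    best_W, best_val = np.zeros((m, k)), -np.inf
    for s in itertools.product((-1.0, 1.0), repeat=k):
        Ap = A * np.asarray(s)                 # A' = A diag(s)
        W = np.zeros((m, k))
        j_star = Ap.argmax(axis=1)             # row i can only support column j*
        keep = Ap[np.arange(m), j_star] >= 0
        for j in range(k):
            idx = np.where(keep & (j_star == j))[0]   # support set I_j
            nrm = np.linalg.norm(Ap[idx, j]) if idx.size else 0.0
            if nrm > 0:
                W[idx, j] = Ap[idx, j] / nrm   # Cauchy-Schwarz-optimal entries
        val = np.sum((W * A).sum(axis=0) ** 2) # sum_j <w_j, a_j>^2
        if val > best_val:
            best_W, best_val = W, val
    return best_W
```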
Thus far, we have seen that any given value of C can be associated with a feasible solution W ∈ W_k via the maximization (5) and Alg. 3. If we could efficiently consider all possible values in the (continuous) domain of C, we would be able to recover the pair that maximizes (4) and, in turn, the optimal solution of (3). However, that is not possible. Instead, we consider a fine discretization of the domain of C and settle for an approximate solution. In particular, let N_ε(S^{r−1}_2) denote a finite ε-net of the r-dimensional ℓ2-unit sphere; for any point in S^{r−1}_2, the net contains a point within distance ε from the former (see Appendix C for the construction of such a net). Further, let [N_ε(S^{r−1}_2)]^{⊗k} denote the kth Cartesian power of the previous net; the latter is a collection of r × k matrices C. Alg. 1 operates on this collection: for each C, it identifies a candidate solution W ∈ W_k via the maximization (5) using Algorithm 3. By the properties of the ε-nets, it can be shown that at least one of the computed candidate solutions must attain an objective value close to the optimal of (3). The guarantees of Alg. 1 are formally established in Lemma 1. A detailed analysis of the algorithm is provided in the corresponding proof in Appendix A.2. This completes the description of our algorithmic developments.
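One concrete way to materialize such an ε-net is sketched below: grid the cube [−1, 1]^r with spacing ε/√r and radially project the grid points that fall near the sphere. This is a standard textbook construction and our own choice of method; it is not necessarily the construction in the paper's Appendix C.

```python
import itertools
import numpy as np

def sphere_eps_net(r, eps):
    step = eps / np.sqrt(r)              # nearest grid point is within eps/2
    ticks = np.arange(-1.0, 1.0 + step, step)
    pts = []
    for p in itertools.product(ticks, repeat=r):
        v = np.asarray(p)
        nrm = np.linalg.norm(v)
        if nrm > 0 and abs(nrm - 1.0) <= eps:        # keep a thin shell
            pts.append(v / nrm)                      # project onto the sphere
    return np.unique(np.round(pts, 12), axis=0)      # deduplicate

# The candidate matrices C of Algorithm 1 are the k-fold Cartesian power:
#   for cols in itertools.product(sphere_eps_net(r, eps / 2), repeat=k): ...
# The net size grows like (c/eps)^r, matching the exponential term in Lemma 1.
```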
4 Experimental Evaluation
NNPCA We compare our NNPCA algorithm against three existing approaches: NSPCA [16], EM [27] and NNSPAN [17], on real datasets. NSPCA computes multiple nonnegative, but not necessarily orthogonal, components; a parameter α penalizes the overlap among their supports. We set a high penalty (α = 1e10) to promote orthogonality. EM and NNSPAN compute only a single nonnegative component. Multiple components are computed consecutively, interleaving an appropriate deflation step. To ensure orthogonality, the deflation step effectively zeroes out the variables used in previously extracted components. Finally, note that both the EM and NSPCA algorithms are randomly initialized. All depicted values are the best results over multiple random restarts. For our algorithm, we use a sketch of rank r = 4 of the (centered) input data. Further, we apply an early termination criterion; execution is terminated if no improvement is observed in a number of consecutive iterations (samples). This can only hurt the performance of our algorithm.
² When used as a subroutine in Alg. 1, Alg. 3 can be simplified into an O(k·m) procedure (lines 4-14).
Figure 2: Cumul. variance captured by k nonnegative components (NNSPCA, EM, NNSPAN, ONMFS); CBCL dataset [30]. In Fig. 2(a), we set k = 8 and plot the cumul. variance versus the number of components. EM and NNSPAN extract components greedily; first components achieve high value, but subsequent ones contribute less to the objective. Our algorithm jointly optimizes the k = 8 components, achieving a 59.95% improvement over the second best method. Fig. 2(b) depicts the cumul. variance for various values of k. We note the percentage improvement of our algorithm over the second best method.
CBCL Dataset. The CBCL dataset [30] contains 2429, 19 × 19 pixel, gray scale face images. It has
been used in the evaluation of all three methods [16, 17, 27]. We extract k orthogonal nonnegative
components using all methods and compare the total explained variance, i.e., the objective in (2).
We note that the input data has been centered and is hence not nonnegative.
Fig. 2(a) depicts the cumulative explained variance versus the number of components for k = 8.
EM and NNSPAN extract components greedily with a deflation step; the first component achieves
high value, but subsequent ones contribute less to the total variance. On the contrary, our algorithm
jointly optimizes the k = 8 components, achieving an approximately 60% increase in the total variance compared to the second best method. We repeat the experiment for k = 2, . . . , 8. Fig. 2(b)
depicts the total variance captured by each method for each value of k. Our algorithm significantly
outperforms the existing approaches.
Additional Datasets. We solve the NNPCA problem on various datasets obtained from [31]. We
arbitrarily set the target number of components to k = 5 and configure our algorithm to use a rank-4
sketch of the input. Table 1 lists the total variance captured by the extracted components for each
method. Our algorithm consistently outperforms the other approaches.
ONMF We compare our algorithm with several state-of-the-art ONMF algorithms: i) the O-PNMF algorithm of [13] (for 1000 iterations), and ii)-iii) the more recent ONP-MF and EM-ONMF algorithms of [11, 32] (for 1000 iterations). We also compare to clustering methods (namely, vanilla and spherical k-means), since such algorithms also yield an approximate ONMF.
Table 1: Total variance captured by k = 5 nonnegative components on various datasets [31]. For each dataset, we list (#samples × #variables) and the variance captured by each method; higher values are better. Our algorithm (labeled ONMFS) operates on a rank-4 sketch in all cases, and consistently achieves the best results. We note the percentage improvement over the second best method.

dataset                            NSPCA     EM        NNSPAN    ONMFS
AMZN COM. REV   (1500 x 10000)     5.44e+01  7.32e+03  7.32e+03  7.86e+03 (+7.37%)
ARCENE TRAIN    (100 x 10000)      4.96e+04  3.01e+07  3.00e+07  3.80e+07 (+26.7%)
ISOLET-5        (1559 x 617)       5.83e-01  3.54e+01  3.55e+01  4.55e+01 (+28.03%)
LEUKEMIA        (72 x 12582)       3.02e+07  7.94e+09  8.02e+09  1.04e+10 (+29.57%)
MFEAT PIX       (2000 x 240)       2.00e+01  3.20e+02  3.25e+02  5.24e+02 (+61.17%)
LOW RES. SPEC.  (531 x 100)        3.98e+06  2.29e+08  2.29e+08  2.41e+08 (+5.34%)
BOW:KOS         (3430 x 6906)      4.96e-02  2.96e+01  3.00e+01  4.59e+01 (+52.95%)
Synthetic data. We generate a synthetic dataset as follows. We select five base vectors c_j, j = 1, ..., 5, randomly and independently from the unit hypercube in 100 dimensions. Then, we generate data points x_i = a_i·c_j + p·n_i, for some j ∈ {1, ..., 5}, where a_i ~ U([0.1, 1]), n_i ~ N(0, I), and p is a parameter controlling the noise variance. Any negative entries of x_i are set to zero.

We vary p in [10⁻², 1]. For each p value, we compute an approximate ONMF on 10 randomly generated datasets and measure the relative Frobenius approximation error. For the methods that involve random initialization, we run 10 averaging iterations per Monte Carlo trial. Our algorithm is configured to operate on a rank-5 sketch. Figure 3 depicts the relative error achieved by each method (averaged over the random trials) versus the noise variance p. Our algorithm, labeled ONMFS, achieves competitive or higher accuracy for most values in the range of p.

Figure 3: Relative Frobenius approximation error ‖M − WHᵀ‖²_F/‖M‖²_F versus noise power p, for K-means, Sp. K-means, O-PNMF, ONP-MF, EM-ONMF, and ONMFS. Data points (samples) are generated by randomly scaling and adding noise to one of five base points that have been randomly selected from the unit hypercube in 100 dimensions. We run ONMF methods with target dimension k = 5. Our algorithm is labeled as ONMFS.
Real Datasets. We apply the ONMF algorithms on various nonnegative datasets obtained from [31].
We arbitrarily set the target number of components to k = 6. Table 2 lists the relative Frobenius
approximation error achieved by each algorithm. We note that on the text datasets (e.g., Bag of
Words [31]) we run the algorithms on the uncentered word-by-document matrix. Our algorithm
performs competitively compared to other methods.
5 Conclusions
We presented a novel algorithm for approximately solving the ONMF problem on a nonnegative
matrix. Our algorithm relied on a new method for solving the NNPCA problem. The latter jointly
optimizes multiple orthogonal nonnegative components and provably achieves an objective value
close to optimal. Our ONMF algorithm is the first one to be equipped with theoretical approximation guarantees; for a constant target dimension k, it yields an additive EPTAS for the relative
approximation error. Empirical evaluation on synthetic and real datasets demonstrates that our algorithms outperform or match existing approaches in both problems.
Acknowledgments DP is generously supported by NSF awards CCF-1217058 and CCF-1116404
and MURI AFOSR grant 556016. This research has been supported by NSF Grants CCF 1344179,
1344364, 1407278, 1422549 and ARO YIP W911NF-14-1-0258.
Table 2: ONMF approximation error on nonnegative datasets [31]. For each dataset, we list the size (#samples × #variables) and the relative Frobenius approximation error achieved by each method; lower values are better. We arbitrarily set the target dimension k = 6. Dashes (-) denote an invalid solution/non-convergence. For our method, we note in parentheses the approximation rank r used.

dataset                          K-MEANS  O-PNMF  ONP-MF  EM-ONMF  ONMFS
AMZN COM. REV  (10000 x 1500)    0.0547   0.1153  0.1153  0.0467   0.0462 (5)
ARCENE TRAIN   (100 x 10000)     0.0837   -       0.1250  0.0856   0.0788 (4)
MFEAT PIX      (2000 x 240)      0.2489   0.2974  0.3074  0.2447   0.2615 (4)
PEMS TRAIN     (267 x 138672)    0.1441   0.1439  0.1380  0.1278   0.1283 (5)
BOW:KOS        (3430 x 6906)     0.8193   0.7692  0.7671  0.7671   0.7609 (4)
BOW:ENRON      (28102 x 39861)   0.9946   -       0.6728  0.7148   0.6540 (4)
BOW:NIPS       (1500 x 12419)    0.8137   0.7277  0.7277  0.7375   0.7252 (5)
BOW:NYTIMES    (102660 x 3e5)    -        -       0.9199  0.9238   0.9199 (5)
References
[1] Daniel D Lee and H Sebastian Seung. Algorithms for non-negative matrix factorization. In Advances in neural information processing
systems, pages 556?562, 2000.
[2] Gershon Buchsbaum and Orin Bloch. Color categories revealed by non-negative matrix factorization of munsell color spectra. Vision
research, 42(5):559?563, 2002.
5,522 | 5,999 | Fast Classification Rates for High-dimensional
Gaussian Generative Models
Tianyang Li, Adarsh Prasad, Pradeep Ravikumar
Department of Computer Science, UT Austin
{lty,adarsh,pradeepr}@cs.utexas.edu
Abstract
We consider the problem of binary classification when the covariates conditioned on each of the response values follow multivariate Gaussian distributions. We focus on the setting where the covariance matrices for the two conditional distributions are the same. The corresponding generative model classifier, derived via the Bayes rule, also called Linear Discriminant Analysis, has been shown to behave poorly in high-dimensional settings. We present a novel analysis of the classification error of any linear discriminant approach given conditional Gaussian models. This allows us to compare the generative model classifier, other recently proposed discriminative approaches that directly learn the discriminant function, and then finally logistic regression, which is another classical discriminative model classifier. As we show, under a natural sparsity assumption, and letting s denote the sparsity of the Bayes classifier, p the number of covariates, and n the number of samples, the simple $\ell_1$-regularized logistic regression classifier achieves the fast misclassification error rate of $O\big(\frac{s \log p}{n}\big)$, which is much better than the other approaches, which are either inconsistent under high-dimensional settings or achieve a slower rate of $O\big(\sqrt{\frac{s \log p}{n}}\big)$.
1 Introduction
We consider the problem of classification of a binary response given p covariates. A popular class of
approaches are statistical decision-theoretic: given a classification evaluation metric, they then optimize a surrogate evaluation metric that is computationally tractable, and yet have strong guarantees
on sample complexity, namely, number of observations required for some bound on the expected
classification evaluation metric. These guarantees and methods have been developed largely for the
zero-one evaluation metric, and extending these to general evaluation metrics is an area of active
research. Another class of classification methods are relatively evaluation metric agnostic, which
is an important desideratum in modern settings, where the evaluation metric for an application is
typically less clear: these are based on learning statistical models over the response and covariates,
and can be categorized into two classes. The first are so-called generative models, where we specify
conditional distributions of the covariates conditioned on the response, and then use the Bayes rule
to derive the conditional distribution of the response given the covariates. The second are the socalled discriminative models, where we directly specify the conditional distribution of the response
given the covariates.
In the classical fixed p setting, we now have a good understanding of the performance of
the classification approaches above. For generative and discriminative modeling based approaches,
consider the specific case of Naive Bayes generative models and logistic regression discriminative
models (which form a so-called generative-discriminative pair1), Ng and Jordan [27] provided qualitative consistency analyses, and showed that under small sample settings, the generative model classifiers converge at a faster rate to their population error rate compared to the discriminative model classifiers, though the population error rate of the discriminative model classifiers could be potentially lower than that of the generative model classifiers due to weaker model assumptions. But if the generative model assumption holds, then generative model classifiers seem preferable to discriminative model classifiers.
1 In such a so-called generative-discriminative pair, the discriminative model has the same form as that of the conditional distribution of the response given the covariates specified by the Bayes rule given the generative model.
In this paper, we investigate whether this conventional wisdom holds even under high-dimensional
settings. We focus on the simple generative model where the response is binary, and the covariates conditioned on each of the response values follow a conditional multivariate Gaussian distribution.
We also assume that the two covariance matrices of the two conditional Gaussian distributions are
the same. The corresponding generative model classifier, derived via the Bayes rule, is known in
the statistics literature as the Linear Discriminant Analysis (LDA) classifier [21]. Under classical
settings where p ≪ n, the misclassification error rate of this classifier has been shown to converge to
that of the Bayes classifier. However, in a high-dimensional setting, where the number of covariates
p could scale with the number of samples n, this performance of the LDA classifier breaks down. In
particular, Bickel and Levina [3] show that when p/n → ∞, then the LDA classifier could converge
to an error rate of 0.5, that of random chance. What should one then do, when we are even allowed
this generative model assumption, and when p > n?
Bickel and Levina [3] suggest the use of a Naive Bayes or conditional independence assumption,
which in the conditional Gaussian context, assumes the covariance matrices to be diagonal. As they
showed, the corresponding Naive Bayes LDA classifier does have misclassification error rate that is
better than chance, but it is asymptotically biased: it converges to an error rate that is strictly larger
than that of the Bayes classifier when the Naive Bayes conditional independence assumption does
not hold. Bickel and Levina [3] also considered a weakening of the Naive Bayes rule: assuming that the covariance matrix is weakly sparse, together with an ellipsoidal constraint on the means, they showed that an estimator that leverages these structural constraints converges to the Bayes risk at a rate of $O(\log(n)/n^{\gamma})$, where $0 < \gamma < 1$ depends on the mean and covariance structural assumptions. A
caveat is that these covariance sparsity assumptions might not hold in practice. Similar caveats apply
to the related works on feature annealed independence rules [14], nearest shrunken centroids [29, 30], and related approaches. Moreover, even when the assumptions hold, they do not yield the "fast" rates of O(1/n).
An alternative approach is to directly impose sparsity on the linear discriminant [28, 7], which is
weaker than the covariance sparsity assumptions (though [28] impose these in addition). [28, 7]
then proposed new estimators that leveraged these assumptions, but while they were able to show convergence to the Bayes risk, they were only able to show a slower rate of $O\big(\sqrt{\frac{s \log p}{n}}\big)$.
It is instructive at this juncture to look at recent results on classification error rates from the machine
learning community. A key notion of importance here is whether the two classes are separable, which can be understood as requiring that the classification error of the Bayes classifier is 0. Classical learning theory gives a rate of $O(1/\sqrt{n})$ for any classifier when the two classes are non-separable, and it is shown that this is also minimax [12], with the note that this is relatively distribution agnostic, since it assumes very little on the underlying distributions. When the two classes are non-separable, only rates slower than $\Theta(1/n)$ are known. Another key notion is a "low-noise condition" [25], under which certain classifiers can be shown to attain a rate faster than $o(1/\sqrt{n})$, albeit not at the $O(1/n)$ rate unless the two classes are separable. Specifically, let $\alpha$ denote a constant such that
$$P\big(|P(Y = 1|X) - 1/2| \le t\big) \le O(t^{\alpha}), \quad (1)$$
holds when $t \to 0$. This is said to be a low-noise assumption, since as $\alpha \to +\infty$, the two classes start becoming separable, that is, the Bayes risk approaches zero. Under this low-noise assumption, known rates for the excess 0-1 risk are $O\big((1/n)^{\frac{1+\alpha}{2+\alpha}}\big)$ [23]. Note that this is always slower than $O(1/n)$ when $\alpha < +\infty$.
There has been a surge of recent results on high-dimensional statistical analyses of M-estimators [26, 9, 1]. These however are largely focused on parameter error bounds, empirical and
population log-likelihood, and sparsistency. In this paper however, we are interested in analyzing
the zero-one classification error under high-dimensional sampling regimes. One could stitch these
recent results to obtain some error bounds: use bounds on the excess log-likelihood, and use transforms from [2] to convert excess log-likelihood bounds to get bounds on 0-1 classification error,
however, the resulting bounds are very loose, and in particular, do not yield the fast rates that we
seek.
In this paper, we leverage the closed form expression for the zero-one classification error for our
generative model, and directly analyse it to give faster rates for any linear discriminant method.
Our analyses show that, assuming a sparse linear discriminant in addition, the simple $\ell_1$-regularized logistic regression classifier achieves near-optimal fast rates of $O\big(\frac{s \log p}{n}\big)$, even without requiring that the two classes be separable.
2 Problem Setup
We consider the problem of high dimensional binary classification under the following generative
model. Let $Y \in \{0, 1\}$ denote a binary response variable, and let $X = (X_1, \ldots, X_p) \in \mathbb{R}^p$ denote a set of p covariates. For technical simplicity, we assume $\Pr[Y = 1] = \Pr[Y = 0] = 1/2$; however, our analysis easily extends to the more general case when $\Pr[Y = 1], \Pr[Y = 0] \in [\pi_0, 1 - \pi_0]$, for some constant $0 < \pi_0 < 1/2$. We assume that $X|Y \sim N(\mu_Y, \Sigma_Y)$, i.e. conditioned on a response, the
covariate follows a multivariate Gaussian distribution. We assume we are given n training samples
{(X (1) , Y (1) ), (X (2) , Y (2) ), . . . , (X (n) , Y (n) )} drawn i.i.d. from the conditional Gaussian model
above.
For any classifier $C : \mathbb{R}^p \to \{1, 0\}$, the 0-1 risk, or simply the classification error, is given by $R_{0\text{-}1}(C) = E_{X,Y}[\ell_{0\text{-}1}(C(X), Y)]$, where $\ell_{0\text{-}1}(C(x), y) = 1(C(x) \ne y)$ is the 0-1 loss. It can also be written simply as $R(C) = \Pr[C(X) \ne Y]$. The classifier attaining the lowest classification error is known as the Bayes classifier, which we will denote by $C^*$. Under the generative model assumption above, the Bayes classifier can be derived simply as $C^*(X) = 1\big(\log\frac{\Pr[Y=1|X]}{\Pr[Y=0|X]} > 0\big)$, so that a given sample X is classified as 1 if $\frac{\Pr[Y=1|X]}{\Pr[Y=0|X]} > 1$, and as 0 otherwise. We denote the error of the Bayes classifier $R^* = R(C^*)$.
When $\Sigma_1 = \Sigma_0 = \Sigma$,
$$\log\frac{\Pr[Y=1|X]}{\Pr[Y=0|X]} = (\mu_1 - \mu_0)^T \Sigma^{-1} X + \frac{1}{2}\big(-\mu_1^T \Sigma^{-1} \mu_1 + \mu_0^T \Sigma^{-1} \mu_0\big) \quad (2)$$
and we denote this quantity as $w^{*T} X + b^*$, where
$$w^* = \Sigma^{-1}(\mu_1 - \mu_0), \qquad b^* = \frac{-\mu_1^T \Sigma^{-1} \mu_1 + \mu_0^T \Sigma^{-1} \mu_0}{2},$$
so that the Bayes classifier can be written as $C^*(x) = 1(w^{*T} x + b^* > 0)$.
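For concreteness, here is a minimal NumPy sketch of the plug-in computation above; the dimensions and parameter values below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 5
mu1, mu0 = rng.normal(size=p), rng.normal(size=p)
A = rng.normal(size=(p, p))
Sigma = A @ A.T + np.eye(p)          # a generic positive-definite covariance

# Bayes discriminant from Equation 2: w* = Sigma^{-1}(mu1 - mu0),
# b* = (-mu1' Sigma^{-1} mu1 + mu0' Sigma^{-1} mu0) / 2
w_star = np.linalg.solve(Sigma, mu1 - mu0)
b_star = 0.5 * (-mu1 @ np.linalg.solve(Sigma, mu1)
                + mu0 @ np.linalg.solve(Sigma, mu0))

def bayes_classify(X):
    """Label 1 iff w*' x + b* > 0 (the LDA rule)."""
    return (X @ w_star + b_star > 0).astype(int)
```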
For any trained classifier $\hat{C}$ we are interested in bounding the excess risk, defined as $R(\hat{C}) - R^*$. The generative approach to training a classifier is to estimate $\Sigma^{-1}$ and $\mu$ from data, and then plug the estimates into Equation 2 to construct the classifier. This classifier is known as the linear discriminant analysis (LDA) classifier, whose theoretical properties have been well studied in the classical fixed-p setting. The discriminative approach to training is to estimate $\frac{\Pr[Y=1|X]}{\Pr[Y=0|X]}$ directly from samples.
2.1 Assumptions.
We assume that the means are bounded, i.e. $\mu_1, \mu_0 \in \{\mu \in \mathbb{R}^p : \|\mu\|_2 \le B_\mu\}$, where $B_\mu$ is a constant which doesn't scale with p. We assume that the covariance matrix $\Sigma$ is non-degenerate, i.e. all eigenvalues of $\Sigma$ are in $[B_{\Sigma\min}, B_{\Sigma\max}]$. Additionally we assume $(\mu_1 - \mu_0)^T \Sigma^{-1} (\mu_1 - \mu_0) \le B_s$, which gives a lower bound on the Bayes classifier's classification error: $R^* \ge 1 - \Phi\big(\frac{1}{2}\sqrt{B_s}\big) > 0$. Note that this assumption is different from the definition of separable classes in [11] and the low-noise condition in [25]; the two classes are still not separable because $R^* > 0$.
2.1.1 Sparsity Assumption.
Motivated by [7], we assume that $\Sigma^{-1}(\mu_1 - \mu_0)$ is sparse, with at most s non-zero entries. Cai and Liu [7] extensively discuss and show that such a sparsity assumption is much weaker than assuming $\Sigma^{-1}$ and $(\mu_1 - \mu_0)$ to be individually sparse. We refer the reader to [7] for an elaborate discussion.
2.2 Generative Classifiers
Generative techniques work by estimating $\Sigma^{-1}$ and $(\mu_1 - \mu_0)$ from data and plugging them into Equation 2. In high dimensions, simple estimation techniques do not perform well: when p ≫ n, the sample estimate $\hat{\Sigma}$ for the covariance matrix is singular, and using the generalized inverse of the sample covariance matrix makes the estimator highly biased and unstable.
approaches have been proposed by imposing structural conditions on $\Sigma$, or on $\Sigma^{-1}$ and $\mu$, to ensure that they can be estimated consistently. Some early work based on nearest shrunken centroids [29, 30], feature annealed independence rules [14], and naive Bayes [4] imposed independence assumptions on $\Sigma$, which are often violated in real-world applications. [4, 13] impose more complex structural assumptions on the covariance matrix and suggest more complicated thresholding techniques. Most commonly, $\Sigma^{-1}$ and $\mu$ are assumed to be sparse, and then some thresholding techniques are used to estimate them consistently [17, 28].
2.3 Discriminative Classifiers.
Recently, more direct techniques have been proposed to solve the sparse LDA problem. Let $\hat{\Sigma}$ and $\hat{\mu}_d$ be consistent estimators of $\Sigma$ and $\mu_d = \frac{\mu_1 + \mu_0}{2}$. Fan et al. [15] proposed the Regularized Optimal Affine Discriminant (ROAD) approach, which minimizes $w^T \Sigma w$ with $w^T \mu$ restricted to be a constant value and an $\ell_1$-penalty on w:
$$w_{ROAD} = \arg\min_{w^T \hat{\mu} = 1,\ \|w\|_1 \le c}\ w^T \hat{\Sigma} w \quad (3)$$
Kolar and Liu [22] provided theoretical insights into the ROAD estimator by analysing its consistency for variable selection. Cai and Liu [7] proposed another variant called the linear programming discriminant (LPD), which tries to make w close to the Bayes rule's linear term $\Sigma^{-1}(\mu_1 - \mu_0)$ in the $\ell_\infty$ norm. This can be cast as a linear programming problem related to the Dantzig selector [8]:
$$w_{LPD} = \arg\min_w \|w\|_1 \quad \text{s.t.} \quad \|\hat{\Sigma} w - (\hat{\mu}_1 - \hat{\mu}_0)\|_\infty \le \lambda_n \quad (4)$$
Mai et al. [24] proposed another version of sparse linear discriminant analysis based on an equivalent least-squares formulation of the LDA, where they solve an $\ell_1$-regularized least squares problem to produce a consistent classifier.
All the techniques above either do not have finite sample convergence rates, or their 0-1 risk converges at a slow rate of $O\big(\sqrt{\frac{s \log p}{n}}\big)$.
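As an illustration of how the LPD program (4) can be solved in practice, the sketch below casts it as a linear program via SciPy's linprog; the variable splitting w = w⁺ − w⁻ is a standard LP device, not from the paper, and λ_n would be chosen as in Lemma 3 of Section 5:

```python
import numpy as np
from scipy.optimize import linprog

def lpd(Sigma_hat, delta_hat, lam):
    """Sketch of Equation 4: minimize ||w||_1 subject to
    ||Sigma_hat @ w - delta_hat||_inf <= lam, via the split
    w = w_pos - w_neg with w_pos, w_neg >= 0."""
    p = len(delta_hat)
    c = np.ones(2 * p)                      # objective: sum(w_pos) + sum(w_neg)
    S = np.hstack([Sigma_hat, -Sigma_hat])  # Sigma_hat @ w in split variables
    A_ub = np.vstack([S, -S])               # the two-sided l_inf constraint
    b_ub = np.concatenate([delta_hat + lam, lam - delta_hat])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    u = res.x
    return u[:p] - u[p:]
```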
In this paper, we first provide an analysis of classification error rates for any classifier with a linear
discriminant function, and then follow this analysis by investigating the performance of generative
and discriminative classifiers for the conditional Gaussian model.
3 Classifiers with Sparse Linear Discriminants
We first analyze any classifier with a linear discriminant function, of the form $C(x) = 1(w^T x + b > 0)$. We first note that the 0-1 classification error of any such classifier is available in closed form as
$$R(w, b) = 1 - \frac{1}{2}\Phi\left(\frac{w^T \mu_1 + b}{\sqrt{w^T \Sigma w}}\right) - \frac{1}{2}\Phi\left(\frac{-(w^T \mu_0 + b)}{\sqrt{w^T \Sigma w}}\right), \quad (5)$$
which can be shown by noting that $w^T X + b$ is a univariate normal random variable when conditioned on the label Y.
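The closed form (5) is easy to check numerically. The sketch below (assuming NumPy/SciPy; sample sizes are illustrative) implements Equation 5 and compares it against a Monte Carlo estimate of the 0-1 risk:

```python
import numpy as np
from scipy.stats import norm

def risk_closed_form(w, b, mu1, mu0, Sigma):
    """0-1 risk of x -> 1(w'x + b > 0) under the equal-covariance
    Gaussian model with Pr[Y=1] = Pr[Y=0] = 1/2 (Equation 5)."""
    s = np.sqrt(w @ Sigma @ w)
    return (1 - 0.5 * norm.cdf((w @ mu1 + b) / s)
              - 0.5 * norm.cdf(-(w @ mu0 + b) / s))

def risk_monte_carlo(w, b, mu1, mu0, Sigma, n=200_000, seed=0):
    """Empirical error rate on n fresh samples; should match Eq. 5."""
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, size=n)
    X = np.where(y[:, None] == 1,
                 rng.multivariate_normal(mu1, Sigma, size=n),
                 rng.multivariate_normal(mu0, Sigma, size=n))
    return np.mean((X @ w + b > 0).astype(int) != y)
```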
Next, we relate the 0-1 classification error above to that of the Bayes classifier. Recall the earlier notation of the Bayes classifier as $C^*(x) = 1(x^T w^* + b^* > 0)$. The following theorem is a key result of the paper: it shows that for any linear discriminant classifier whose linear discriminant parameters are close to those of the Bayes classifier, the excess 0-1 risk is bounded only by second-order terms of the difference. Note that this theorem will enable fast classification rates if we obtain fast rates for the parameter error.
Theorem 1. Let $w = w^* + \Delta$, $b = b^* + \delta$, with $\Delta \to 0$, $\delta \to 0$; then we have
$$R(w^* + \Delta, b^* + \delta) - R(w^*, b^*) = O\big(\|\Delta\|_2^2 + \delta^2\big). \quad (6)$$
Proof. Denote the quantity $S^* = \sqrt{(\mu_1 - \mu_0)^T \Sigma^{-1} (\mu_1 - \mu_0)}$; then we have
$$\frac{w^{*T}\mu_1 + b^*}{\sqrt{w^{*T}\Sigma w^*}} = \frac{-w^{*T}\mu_0 - b^*}{\sqrt{w^{*T}\Sigma w^*}} = \frac{1}{2}S^*.$$
Using (5) and the Taylor series expansion of $\Phi(\cdot)$ around $\frac{1}{2}S^*$, we have
$$|R(w, b) - R(w^*, b^*)| = \frac{1}{2}\left|\left(\Phi\Big(\frac{\mu_1^T w + b}{\sqrt{w^T \Sigma w}}\Big) - \Phi\Big(\frac{1}{2}S^*\Big)\right) + \left(\Phi\Big(\frac{-\mu_0^T w - b}{\sqrt{w^T \Sigma w}}\Big) - \Phi\Big(\frac{1}{2}S^*\Big)\right)\right| \quad (7)$$
$$\le K_1\left|\frac{(\mu_1 - \mu_0)^T w}{\sqrt{w^T \Sigma w}} - S^*\right| + K_2\left(\frac{\mu_1^T w + b}{\sqrt{w^T \Sigma w}} - \frac{1}{2}S^*\right)^2 + K_3\left(\frac{-\mu_0^T w - b}{\sqrt{w^T \Sigma w}} - \frac{1}{2}S^*\right)^2,$$
where $K_1, K_2, K_3 > 0$ are constants because the first and second order derivatives of $\Phi(\cdot)$ are bounded.
First note that $|\sqrt{w^T \Sigma w} - \sqrt{w^{*T}\Sigma w^*}| = O(\|\Delta\|_2)$, because $\|w^*\|_2$ is bounded.
Denote $w'' = \Sigma^{1/2} w$, $\Delta'' = \Sigma^{1/2}\Delta$, $w^{*\prime\prime} = \Sigma^{1/2} w^*$, and $a'' = \Sigma^{-1/2}(\mu_1 - \mu_0)$, so that $w^{*\prime\prime} = a''$ and $w'' = a'' + \Delta''$. By the binomial Taylor series expansion,
$$\frac{(\mu_1 - \mu_0)^T w}{\sqrt{w^T \Sigma w}} - S^* = \frac{a''^T w''}{\sqrt{w''^T w''}} - \sqrt{a''^T a''} = \sqrt{a''^T a''}\left(\frac{1 + \frac{a''^T \Delta''}{a''^T a''}}{\sqrt{1 + \frac{2\, a''^T \Delta''}{a''^T a''} + \frac{\Delta''^T \Delta''}{a''^T a''}}} - 1\right) = O\left(\frac{\|\Delta''\|_2^2}{\sqrt{a''^T a''}}\right). \quad (8)$$
Since $\|\Delta\|_2 = \Theta(\|\Delta''\|_2)$ and $S^* = \sqrt{a''^T a''}$ is lower bounded, we have $\big|\frac{(\mu_1 - \mu_0)^T w}{\sqrt{w^T \Sigma w}} - S^*\big| = O(\|\Delta\|_2^2)$.
Next we bound $\big|\frac{\mu_1^T w + b}{\sqrt{w^T \Sigma w}} - \frac{1}{2}S^*\big|$:
$$\left|\frac{\mu_1^T w + b}{\sqrt{w^T \Sigma w}} - \frac{1}{2}S^*\right| = \left|\frac{(\mu_1^T w^* + b^*)\big(\sqrt{w^{*T}\Sigma w^*} - \sqrt{w^T \Sigma w}\big) + \sqrt{w^{*T}\Sigma w^*}\,(\mu_1^T \Delta + \delta)}{\sqrt{w^T \Sigma w}\,\sqrt{w^{*T}\Sigma w^*}}\right| = O\big(\sqrt{\|\Delta\|_2^2 + \delta^2}\big), \quad (9)$$
where we use the fact that $|\mu_1^T w^* + b^*|$ and $S^*$ are bounded. Similarly, $\big|\frac{-\mu_0^T w - b}{\sqrt{w^T \Sigma w}} - \frac{1}{2}S^*\big| = O\big(\sqrt{\|\Delta\|_2^2 + \delta^2}\big)$.
Combining the above bounds, the first-order term is $O(\|\Delta\|_2^2)$ and the squared terms are $O(\|\Delta\|_2^2 + \delta^2)$, which gives the desired result.
4 Logistic Regression Classifier
In this section, we show that the simple $\ell_1$-regularized logistic regression classifier attains fast classification error rates.
Specifically, we are interested in the M-estimator [21] below:
$$(\hat{w}, \hat{b}) = \arg\min_{w, b}\ \frac{1}{n}\sum_{i=1}^{n}\Big(-Y^{(i)}\big(w^T X^{(i)} + b\big) + \log\big(1 + \exp(w^T X^{(i)} + b)\big)\Big) + \lambda\big(\|w\|_1 + |b|\big), \quad (10)$$
which minimizes the penalized negative log-likelihood of the logistic regression model, which also corresponds to the conditional probability of the response given the covariates, P(Y|X), for the conditional Gaussian model.
Note that here we penalize the intercept term b as well. Although the intercept term usually is not penalized (e.g. [19]), some packages (e.g. [16]) penalize the intercept term. Our analysis shows that penalizing the intercept term does not degrade the performance of the classifier.
In [2, 31] it is shown that minimizing the expected risk of the logistic loss also minimizes the classification error for the corresponding linear classifier. $\ell_1$-regularized logistic regression is a popular classification method in many settings [18, 5]. Several commonly used packages ([19, 16]) have been developed for $\ell_1$-regularized logistic regression, and recent works ([20, 10]) have focused on scaling regularized logistic regression to ultra-high dimensions and large numbers of samples.
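As a concrete reference point for the estimator in (10), here is a minimal proximal-gradient (ISTA) sketch that penalizes the intercept along with w, as in the paper's estimator; the step-size rule and iteration count are standard heuristics, not prescribed by the paper:

```python
import numpy as np
from scipy.special import expit

def l1_logreg(X, y, lam, n_iter=500, step=None):
    """Proximal gradient sketch for Equation 10:
    (1/n) sum_i [-y_i (w'x_i + b) + log(1 + exp(w'x_i + b))]
                                        + lam * (||w||_1 + |b|)."""
    n, p = X.shape
    Z = np.hstack([X, np.ones((n, 1))])   # fold b in as a last coordinate
    theta = np.zeros(p + 1)
    if step is None:
        # 1/L, with L an upper bound on the logistic loss's smoothness
        step = 4.0 * n / (np.linalg.norm(Z, 2) ** 2)
    for _ in range(n_iter):
        grad = Z.T @ (expit(Z @ theta) - y) / n
        theta = theta - step * grad
        # soft-thresholding applies the l1 penalty to w AND b
        theta = np.sign(theta) * np.maximum(np.abs(theta) - step * lam, 0.0)
    return theta[:p], theta[p]
```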
4.1 Analysis
We first show that the $\ell_1$-regularized logistic regression estimator above converges to the Bayes classifier parameters. Next we use the theorem from the previous section to argue that since the estimated parameter $(\hat{w}, \hat{b})$ is close to the Bayes classifier's parameter $(w^*, b^*)$, the excess risk of the classifier using the estimated parameter is tightly bounded as well.
For the first step, we first show a restricted eigenvalue condition for $X' = (X, 1)$, where X are our covariates, which come from the mixture of two Gaussian distributions $\frac{1}{2}N(\mu_1, \Sigma) + \frac{1}{2}N(\mu_0, \Sigma)$. Note that $X'$ is not zero-centered, which is different from existing scenarios ([26], [6], etc.) that assume the covariates are zero-centered. We denote $w' = (w, b)$, $S' = \{i : w'_i \ne 0\}$, and $s' = |S'| \le s + 1$.
Lemma 1. With probability $1 - \delta$, for all $v' \in A' \subseteq \{v' \in \mathbb{R}^{p+1} : \|v'\|_2 = 1\}$ and some constants $\kappa_1, \kappa_2, \kappa_3 > 0$, we have
$$\|X' v'\|_2 \ge \kappa_1 \sqrt{n} - \kappa_2\, w(A') - \kappa_3 \sqrt{\log\frac{1}{\delta}} \quad (11)$$
where $w(A') = E_{g' \sim N(0, I_{p+1})}\big[\sup_{a' \in A'} g'^T a'\big]$ is the Gaussian width of $A'$.
In the special case when $A' = \{v' : \|v'_{S'^c}\|_1 \le 3\|v'_{S'}\|_1,\ \|v'\|_2 = 1\}$, we have $w(A') = O(\sqrt{s \log p})$.
Proof. First note that $X'$ is sub-Gaussian with bounded parameter, and
$$\Sigma' = E\Big[\frac{1}{n}X'^T X'\Big] = \begin{pmatrix} \Sigma + \frac{1}{2}(\mu_1 \mu_1^T + \mu_0 \mu_0^T) & \frac{1}{2}(\mu_1 + \mu_0) \\ \frac{1}{2}(\mu_1 + \mu_0)^T & 1 \end{pmatrix}. \quad (12)$$
Let $A = \begin{pmatrix} I_p & -\frac{1}{2}(\mu_1 + \mu_0) \\ 0 & 1 \end{pmatrix}$. Note that $A \Sigma' A^T = \begin{pmatrix} \Sigma + \frac{1}{4}(\mu_1 - \mu_0)(\mu_1 - \mu_0)^T & 0 \\ 0 & 1 \end{pmatrix}$ and $A^{-1} = \begin{pmatrix} I_p & \frac{1}{2}(\mu_1 + \mu_0) \\ 0 & 1 \end{pmatrix}$. Notice that $AA^T = \begin{pmatrix} I_p + \frac{1}{4}(\mu_1 + \mu_0)(\mu_1 + \mu_0)^T & -\frac{1}{2}(\mu_1 + \mu_0) \\ -\frac{1}{2}(\mu_1 + \mu_0)^T & 1 \end{pmatrix}$ and $A^{-1} A^{-T} = \begin{pmatrix} I_p + \frac{1}{4}(\mu_1 + \mu_0)(\mu_1 + \mu_0)^T & \frac{1}{2}(\mu_1 + \mu_0) \\ \frac{1}{2}(\mu_1 + \mu_0)^T & 1 \end{pmatrix}$, and we can see that the singular values of $A$ and $A^{-1}$ are lower bounded by $\frac{1}{\sqrt{2 + B_\mu^2}}$ and upper bounded by $\sqrt{2 + B_\mu^2}$. Let $\lambda_1$ be the minimum eigenvalue of $\Sigma'$, and $u'_1$ ($\|u'_1\|_2 = 1$) the corresponding eigenvector. From the expression $A \Sigma' A^T A^{-T} u'_1 = \lambda_1 A u'_1$, we know that the minimum eigenvalue of $\Sigma'$ is lower bounded. Similarly, the largest eigenvalue of $\Sigma'$ is upper bounded. Then the desired result follows the proof of Theorem 10 in [1]. Although the proof of Theorem 10 in [1] is for zero-centered random variables, the proof remains valid for non-zero-centered random variables.
When $A' = \{v' : \|v'_{S'^c}\|_1 \le 3\|v'_{S'}\|_1,\ \|v'\|_2 = 1\}$, [9] gives $w(A') = O(\sqrt{s \log p})$.
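The restricted eigenvalue behavior in Lemma 1 can be probed empirically. The sketch below is a crude Monte Carlo stand-in: it draws the augmented design X' from the two-component mixture and evaluates ‖X'v'‖₂/√n over random sparse unit directions, a finite sample of sparse vectors rather than the full cone A':

```python
import numpy as np

def min_restricted_sv(mu1, mu0, Sigma, n, s, trials=2000, seed=0):
    """Smallest observed ||X'v'||_2 / sqrt(n) over random unit vectors
    supported on s+1 coordinates (a crude probe of Lemma 1)."""
    rng = np.random.default_rng(seed)
    p = len(mu1)
    y = rng.integers(0, 2, size=n)
    X = np.where(y[:, None] == 1,
                 rng.multivariate_normal(mu1, Sigma, size=n),
                 rng.multivariate_normal(mu0, Sigma, size=n))
    Xp = np.hstack([X, np.ones((n, 1))])      # X' = (X, 1)
    best = np.inf
    for _ in range(trials):
        supp = rng.choice(p + 1, size=s + 1, replace=False)
        v = np.zeros(p + 1)
        v[supp] = rng.normal(size=s + 1)
        v /= np.linalg.norm(v)
        best = min(best, np.linalg.norm(Xp @ v) / np.sqrt(n))
    return best
```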
Having established a restricted eigenvalue result in Lemma 1, we next use the result in [26] for parameter recovery in generalized linear models (GLMs) to show that $\ell_1$-regularized logistic regression can recover the Bayes classifier parameters.
Lemma 2. When the number of samples $n \gtrsim s' \log p$, and we choose $\lambda = c_0 \sqrt{\frac{\log p}{n}}$ for some constant $c_0$, then we have
$$\|w^* - \hat{w}\|_2^2 + (b^* - \hat{b})^2 = O\Big(\frac{s' \log p}{n}\Big) \quad (13)$$
with probability at least $1 - O(p^{-c_1} + n^{-c_2})$, where $c_1, c_2 > 0$ are constants.
Proof. Following the proof of Lemma 1, we see that the conditions (GLM1) and (GLM2) in [26]
are satisfied. Following the proof of Proposition 2 and Corollary 5 in [26], we have the desired
result. Although the proof of Proposition 2 and Corollary 5 in [26] is for zero-centered random variables, the proof remains valid for non-zero-centered random variables.
Combining Lemma 2 and Theorem 1, we have the following theorem, which gives a fast rate for the excess 0-1 risk of a classifier trained using $\ell_1$-regularized logistic regression.
Theorem 2. With probability at least $1 - O(p^{-c_1} + n^{-c_2})$, where $c_1, c_2 > 0$ are constants, when we set $\lambda = c_0 \sqrt{\frac{\log p}{n}}$ for some constant $c_0$, the Lasso estimate $(\hat{w}, \hat{b})$ in (10) satisfies
$$R(\hat{w}, \hat{b}) - R(w^*, b^*) = O\Big(\frac{s \log p}{n}\Big). \quad (14)$$
Proof. This follows from Lemma 2 and Theorem 1.
5 Other Linear Discriminant Classifiers
In this section, we provide convergence results for the 0-1 risk for other linear discriminant classifiers
discussed in Section 2.3.
Naive Bayes. We compare the discriminative approach using $\ell_1$-regularized logistic regression to the generative approach using naive Bayes. For illustration purposes we consider the case where $\Sigma = I_p$, $\mu_1 = \frac{M_1}{\sqrt{s}}\binom{1_s}{0_{p-s}}$ and $\mu_0 = -\frac{M_0}{\sqrt{s}}\binom{1_s}{0_{p-s}}$, where $0 < B_1 \le M_1, M_0 \le B_2$ are unknown but bounded constants. In this case $w^* = \frac{M_1 + M_0}{\sqrt{s}}\binom{1_s}{0_{p-s}}$ and $b^* = \frac{1}{2}(-M_1^2 + M_0^2)$. Using naive Bayes we estimate $\hat{w} = \hat{\mu}_1 - \hat{\mu}_0$, where $\hat{\mu}_1 = \frac{1}{\sum_i 1(Y^{(i)}=1)}\sum_{Y^{(i)}=1} X^{(i)}$ and $\hat{\mu}_0 = \frac{1}{\sum_i 1(Y^{(i)}=0)}\sum_{Y^{(i)}=0} X^{(i)}$. Thus with high probability we have $\|\hat{w} - w^*\|_2^2 = O(\frac{p}{n})$; using Theorem 1 we get a slower rate than the bound given in Theorem 2 for discriminative classification using $\ell_1$-regularized logistic regression.
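A small simulation makes the p/n versus s log p/n contrast above tangible. The sketch below (dimensions illustrative) measures the squared parameter error of the mean-difference plug-in on the stylized example:

```python
import numpy as np

rng = np.random.default_rng(0)
p, s, n, M1, M0 = 400, 10, 200, 1.0, 1.0
mu = np.zeros(p); mu[:s] = 1.0 / np.sqrt(s)
mu1, mu0 = M1 * mu, -M0 * mu
w_star = mu1 - mu0                     # Sigma = I, so w* = mu1 - mu0

y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, p)) + np.where(y[:, None] == 1, mu1, mu0)

# Naive Bayes / mean-difference plug-in: w_hat = mu1_hat - mu0_hat
w_nb = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
print("plug-in ||w_hat - w*||^2:", np.sum((w_nb - w_star) ** 2))  # ~ p/n
# An l1-penalized fit (e.g. l1_logreg above) instead concentrates the
# estimation error on the s relevant coordinates, ~ s log(p) / n.
```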
LPD [7]. LPD uses a linear program similar to the Dantzig selector.
Lemma 3 (Cai and Liu [7], Theorem 4). Let $\lambda_n = C\sqrt{\frac{s \log p}{n}}$ with C being a sufficiently large constant. Let $n > \log p$, let $\Delta_p = (\mu_1 - \mu_0)^T \Sigma^{-1} (\mu_1 - \mu_0) > c_1$ for some constant $c_1 > 0$, and let $w_{LPD}$ be obtained as in Equation 4; then with probability greater than $1 - O(p^{-1})$, we have $\frac{R(w_{LPD})}{R^*} - 1 = O\big(\sqrt{\frac{s \log p}{n}}\big)$.
SLDA [28]. SLDA uses thresholded estimates for $\Sigma$ and $\mu_1 - \mu_0$. We state a simpler version.
Lemma 4 ([28], Theorem 3). Assume that $\Sigma$ and $\mu_1 - \mu_0$ are sparse; then we have $\frac{R(w_{SLDA})}{R^*} - 1 = O\big(\max\big((\frac{s \log p}{n})^{\gamma_1}, (\frac{S \log p}{n})^{\gamma_2}\big)\big)$ with high probability, where $s = \|\mu_1 - \mu_0\|_0$, S is the number of non-zero entries in $\Sigma$, and $\gamma_1, \gamma_2 \in (0, \frac{1}{2})$ are constants.
ROAD [15]. ROAD minimizes $w^T \Sigma w$ with $w^T \mu$ restricted to be a constant value and an $\ell_1$-penalty on w.
Lemma 5 (Fan et al. [15], Theorem 1). Assume that with high probability $\|\hat{\Sigma} - \Sigma\|_\infty = O\big(\sqrt{\frac{\log p}{n}}\big)$ and $\|\hat{\mu} - \mu\|_\infty = O\big(\sqrt{\frac{\log p}{n}}\big)$, and let $w_{ROAD}$ be obtained as in Equation 3; then with high probability we have $R(w_{ROAD}) - R^* = O\big(\sqrt{\frac{s \log p}{n}}\big)$.
6 Experiments
In this section we describe experiments which illustrate the rates for excess 0-1 risk given in Theorem
2. In our experiments we use Glmnet [19] where we set the option to penalize the intercept term
along with all other parameters. Glmnet is a popular package for $\ell_1$-regularized logistic regression
using coordinate descent methods.
For illustration purposes, in all simulations we use $\Sigma = I_p$, $\mu_1 = 1_p + \frac{1}{\sqrt{s}}\binom{1_s}{0_{p-s}}$, and $\mu_0 = 1_p - \frac{1}{\sqrt{s}}\binom{1_s}{0_{p-s}}$.
To illustrate our bound in Theorem 2, we consider three different scenarios.
[Figure 1: three panels of classification error curves. (a) Only varying p: curves for p = 100, 400, 1600, plotted against n/log(p). (b) Only varying s: curves for s = 5, 10, 15, 20, plotted against n/s. (c) Excess 0-1 risk plotted against 1/n.]
Figure 1: Simulations for different Gaussian classification problems showing the dependence of classification error on different quantities. All experiments plotted the average of 20 trials. In all experiments we set the regularization parameter $\lambda = \sqrt{\frac{\log p}{n}}$.
In Figure 1a we vary p while keeping s and $(\mu_1 - \mu_0)^T \Sigma^{-1} (\mu_1 - \mu_0)$ constant; it shows, for different p, how the classification error changes with increasing n, plotted against the quantity $\frac{n}{\log p}$. This agrees with our result on the excess 0-1 risk's dependence on p. In Figure 1b we vary s while keeping p and $(\mu_1 - \mu_0)^T \Sigma^{-1} (\mu_1 - \mu_0)$ constant; it shows, for different s, how the classification error changes with increasing n, plotted against the quantity $\frac{n}{s}$. This agrees with our result on the excess 0-1 risk's dependence on s. In Figure 1c we show how $R(\hat{w}, \hat{b}) - R(w^*, b^*)$ changes with respect to $\frac{1}{n}$ in one instance of Gaussian classification. We can see that the excess 0-1 risk achieves the fast rate and agrees with our bound.
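A sketch of the Figure 1(a)-style experiment, reusing l1_logreg from the Section 4 sketch (all sizes illustrative; the closed-form risk assumes Σ = I_p as in the simulations):

```python
import numpy as np
from scipy.stats import norm

def risk(w, b, mu1, mu0):                 # Equation 5 with Sigma = I_p
    s = np.linalg.norm(w)
    return (1 - 0.5 * norm.cdf((w @ mu1 + b) / s)
              - 0.5 * norm.cdf(-(w @ mu0 + b) / s))

rng = np.random.default_rng(1)
s, p = 10, 400
shift = np.zeros(p); shift[:s] = 1.0 / np.sqrt(s)
mu1, mu0 = 1.0 + shift, 1.0 - shift
w_star, b_star = mu1 - mu0, 0.5 * (-mu1 @ mu1 + mu0 @ mu0)
bayes = risk(w_star, b_star, mu1, mu0)

for n in [200, 400, 800, 1600]:
    y = rng.integers(0, 2, size=n)
    X = rng.normal(size=(n, p)) + np.where(y[:, None] == 1, mu1, mu0)
    w_hat, b_hat = l1_logreg(X, y, lam=np.sqrt(np.log(p) / n))
    print(n, risk(w_hat, b_hat, mu1, mu0) - bayes)  # ~ s log(p)/n decay
```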
Acknowledgements
We acknowledge the support of ARO via W911NF-12-1-0390 and NSF via IIS-1149803, IIS1320894, IIS-1447574, and DMS-1264033, and NIH via R01 GM117594-01 as part of the Joint
DMS/NIGMS Initiative to Support Research at the Interface of the Biological and Mathematical
Sciences.
References
[1] Arindam Banerjee, Sheng Chen, Farideh Fazayeli, and Vidyashankar Sivakumar. Estimation with norm regularization. In Advances in Neural Information Processing Systems, pages 1556–1564, 2014.
[2] Peter L. Bartlett, Michael I. Jordan, and Jon D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
[3] Peter J. Bickel and Elizaveta Levina. Some theory for Fisher's linear discriminant function, "naive Bayes", and some alternatives when there are many more variables than observations. Bernoulli, pages 989–1010, 2004.
[4] Peter J. Bickel and Elizaveta Levina. Covariance regularization by thresholding. The Annals of Statistics, pages 2577–2604, 2008.
[5] C. M. Bishop. Pattern Recognition and Machine Learning. Information Science and Statistics. Springer, 2006. ISBN 9780387310732.
[6] Peter Bühlmann and Sara van de Geer. Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer Science & Business Media, 2011.
[7] Tony Cai and Weidong Liu. A direct estimation approach to sparse linear discriminant analysis. Journal of the American Statistical Association, 106(496), 2011.
[8] Emmanuel Candes and Terence Tao. The Dantzig selector: statistical estimation when p is much larger than n. The Annals of Statistics, pages 2313–2351, 2007.
[9] Venkat Chandrasekaran, Benjamin Recht, Pablo A. Parrilo, and Alan S. Willsky. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 12(6):805–849, 2012.
[10] Weizhu Chen, Zhenghao Wang, and Jingren Zhou. Large-scale L-BFGS using MapReduce. In Advances in Neural Information Processing Systems, pages 1332–1340, 2014.
[11] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer New York, 1996.
[12] Luc Devroye. A Probabilistic Theory of Pattern Recognition, volume 31. Springer Science & Business Media, 1996.
[13] David Donoho and Jiashun Jin. Higher criticism thresholding: Optimal feature selection when useful features are rare and weak. Proceedings of the National Academy of Sciences, 105(39):14790–14795, 2008.
[14] Jianqing Fan and Yingying Fan. High dimensional classification using features annealed independence rules. Annals of Statistics, 36(6):2605, 2008.
[15] Jianqing Fan, Yang Feng, and Xin Tong. A road to classification in high dimensional space: the regularized optimal affine discriminant. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74(4):745–771, 2012.
[16] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research, 9:1871–1874, 2008.
[17] Yingying Fan, Jiashun Jin, Zhigang Yao, et al. Optimal classification in sparse Gaussian graphic model. The Annals of Statistics, 41(5):2537–2571, 2013.
[18] Manuel Fernández-Delgado, Eva Cernadas, Senén Barro, and Dinani Amorim. Do we need hundreds of classifiers to solve real world classification problems? The Journal of Machine Learning Research, 15(1):3133–3181, 2014.
[19] Jerome Friedman, Trevor Hastie, and Rob Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1, 2010.
[20] Siddharth Gopal and Yiming Yang. Distributed training of large-scale logistic models. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 289–297, 2013.
[21] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2009.
[22] Mladen Kolar and Han Liu. Feature selection in high-dimensional classification. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 329–337, 2013.
[23] Vladimir Koltchinskii. Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems: École d'Été de Probabilités de Saint-Flour XXXVIII-2008, volume 2033. Springer Science & Business Media, 2011.
[24] Qing Mai, Hui Zou, and Ming Yuan. A direct approach to sparse discriminant analysis in ultra-high dimensions. Biometrika, page asr066, 2012.
[25] Enno Mammen, Alexandre B. Tsybakov, et al. Smooth discrimination analysis. The Annals of Statistics, 27(6):1808–1829, 1999.
[26] Sahand Negahban, Bin Yu, Martin J. Wainwright, and Pradeep K. Ravikumar. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In Advances in Neural Information Processing Systems, pages 1348–1356, 2009.
[27] Andrew Y. Ng and Michael I. Jordan. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. In Advances in Neural Information Processing Systems 14 (NIPS 2001), 2001.
[28] Jun Shao, Yazhen Wang, Xinwei Deng, Sijian Wang, et al. Sparse linear discriminant analysis by thresholding for high dimensional data. The Annals of Statistics, 39(2):1241–1265, 2011.
[29] Robert Tibshirani, Trevor Hastie, Balasubramanian Narasimhan, and Gilbert Chu. Diagnosis of multiple cancer types by shrunken centroids of gene expression. Proceedings of the National Academy of Sciences, 99(10):6567–6572, 2002.
[30] Sijian Wang and Ji Zhu. Improved centroids estimation for the nearest shrunken centroid classifier. Bioinformatics, 23(8):972–979, 2007.
[31] Tong Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics, pages 56–85, 2004.
5,523 | 6 | 775
A NEURAL-NETWORK SOLUTION TO THE CONCENTRATOR
ASSIGNMENT PROBLEM
Gene A. Tagliarini
Edward W. Page
Department of Computer Science, Clemson University, Clemson, SC
29634-1906
ABSTRACT
Networks of simple analog processors having neuron-like properties have
been employed to compute good solutions to a variety of optimization problems. This paper presents a neural-net solution to a resource allocation problem that arises in providing local access to the backbone of a wide-area communication network. The problem is described in terms of an energy function
that can be mapped onto an analog computational network. Simulation results
characterizing the performance of the neural computation are also presented.
INTRODUCTION
This paper presents a neural-network solution to a resource allocation
problem that arises in providing access to the backbone of a communication
network.1 In the field of operations research, this problem was first known as the warehouse location problem, and heuristics for finding feasible, suboptimal solutions have been developed previously.2,3 More recently it has been known as the multifacility location problem4 and as the concentrator assignment problem.1
THE HOPFIELD NEURAL NETWORK MODEL
The general structure of the Hopfield neural network model5,6,7 is illustrated in Fig. 1. Neurons are modeled as amplifiers that have a sigmoid input/
output curve as shown in Fig. 2. Synapses are modeled by permitting the output of any neuron to be connected to the input of any other neuron. The
strength of the synapse is modeled by a resistive connection between the output
of a neuron and the input to another. The amplifiers provide integrative analog
summation of the currents that result from the connections to other neurons as
well as connection to external inputs. To model both excitatory and inhibitory
synaptic links, each amplifier provides both a normal output V and an inverted
output V̄. The normal outputs range between 0 and 1 while the inverting amplifier produces corresponding values between 0 and -1. The synaptic link between the output of one amplifier and the input of another is defined by a
conductance Tij which connects one of the outputs of amplifier j to the input of
amplifier i. In the Hopfield model, the connection between neurons i and j is
made with a resistor having a value Rij = 1/|Tij|. To provide an excitatory synaptic connection (positive Tij), the resistor is connected to the normal output of
This research was supported by the U.S. Army Strategic Defense Command.
© American Institute of Physics 1988
[Figure: a four-neuron network schematic with external inputs and outputs V1 through V4.]
Fig. 1. Schematic for a simplified Hopfield network with four neurons.
[Figure: sigmoid curve of output V against input u.]
Fig. 2. Amplifier input/output relationship
amplifier j. To provide an inhibitory connection (negative Tij), the resistor is
connected to the inverted output of amplifier j. The connections among the
neurons are defined by a matrix T consisting of the conductances T ij . Hopfield has shown that a symmetric T matrix (Tij = Tji ) whose diagonal entries
are all zeros, causes convergence to a stable state in which the output of each
amplifier is either 0 or 1. Additionally, when the amplifiers are operated in the
high-gain mode, the stable states of a network of n neurons correspond to the
local minima of the quantity
$$E = -\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} T_{ij} V_i V_j - \sum_{i=1}^{n} V_i I_i \quad (1)$$
where Vi is the output of the ith neuron and Ii is the externally supplied input to the ith neuron. Hopfield refers to E as the computational energy of the system.
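A minimal sketch of the energy in Eq. (1) together with a discrete asynchronous update rule; this binary-threshold dynamics is a simplification of the analog amplifiers described above, and the step count is illustrative:

```python
import numpy as np

def energy(T, I, V):
    """Computational energy of Eq. (1): E = -1/2 V'TV - I'V."""
    return -0.5 * V @ T @ V - I @ V

def run_hopfield(T, I, steps=1000, seed=0):
    """Asynchronous updates: a neuron switches to 1 iff its net input
    is positive. With a symmetric, zero-diagonal T each update never
    increases the energy, so the network settles into a local minimum."""
    rng = np.random.default_rng(seed)
    n = len(I)
    V = rng.integers(0, 2, size=n).astype(float)
    for _ in range(steps):
        i = rng.integers(n)
        V[i] = 1.0 if (T[i] @ V + I[i]) > 0 else 0.0
    return V
```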
THE CONCENTRATOR ASSIGNMENT PROBLEM
Consider a collection of n sites that are to be connected to m concentrators as illustrated in Fig. 3(a). The sites are indicated by the shaded circles
and the concentrators are indicated by squares. The problem is to find an
assignment of sites to concentrators that minimizes the total cost of the assignment and does not exceed the capacity of any concentrator. The constraints
that must be met can be summarized as follows:
a) Each site i (i = 1, 2, ..., n) is connected to exactly one concentrator; and
b) Each concentrator j (j = 1, 2, ... , m ) is connected to no more than kj
sites (where kj is the capacity of concentrator j).
Figure 3(b) illustrates a possible solution to the problem represented in Fig.
3(a).
[Figure: a map on which shaded circles mark sites and open squares mark concentrators.]
(a). Site/concentrator map
(b). Possible assignment
Fig. 3. Example concentrator assignment problem
If the cost of assigning site i to concentrator j is cij , then the total cost of
a particular assignment is
$$\text{total cost} = \sum_{i=1}^{n}\sum_{j=1}^{m} x_{ij}\, c_{ij} \quad (2)$$
where x_ij = 1 only if we actually decide to assign site i to concentrator j and is 0
otherwise. There are m^n possible assignments of sites to concentrators that
satisfy constraint a). Exhaustive search techniques are therefore impractical
except for relatively small values of m and n.
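For such tiny instances, the m^n assignments can indeed be enumerated directly, which is useful as a ground-truth check against the network's output; a sketch:

```python
import itertools
import numpy as np

def best_assignment(cost, cap):
    """Exhaustive search over all m**n assignments (feasible only for
    tiny instances, as noted above). cost is an n x m matrix; cap[j] is
    concentrator j's capacity. Returns (best cost, assignment)."""
    n, m = cost.shape
    best, best_a = np.inf, None
    for a in itertools.product(range(m), repeat=n):
        counts = np.bincount(a, minlength=m)
        if np.any(counts > cap):
            continue                      # violates constraint b)
        c = cost[np.arange(n), a].sum()   # constraint a) holds by construction
        if c < best:
            best, best_a = c, a
    return best, best_a
```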
THE NEURAL NETWORK SOLUTION
This problem is amenable to solution using the Hopfield neural network
model. The Hopfield model is used to represent a matrix of possible assignments of sites to concentrators as illustrated in Fig. 4. Each square corresponds
[Figure: an array of neurons with m columns (concentrators 1..m); the top n rows correspond to sites 1..n, and the rows n+1 through n+kj below them are labeled SLACK. The darkly shaded neuron corresponds to the hypothesis that site i should be assigned to concentrator j.]
Fig. 4. Concentrator assignment array
to a neuron and a neuron in row i and column j of the upper n rows of the
array represents the hypothesis that site i should be connected to concentrator
j. If the neuron in row i and column j is on, then site i should be assigned to
concentrator j; if it is off, site i should not be assigned to concentrator j.
The neurons in the lower sub-array, indicated as "SLACK", are used to
implement individual concentrator capacity constraints. The number of slack
neurons in a column should equal the capacity (expressed as the number of sites
which can be accommodated) of the corresponding concentrator. While it is
not necessary to assume that the concentrators have equal capacities, it was
assumed here that they did and that their cumulative capacity is greater than or
equal to the number of sites.
To enable the neurons in the network illustrated above to compute solutions to the concentrator problem, the network must realize an energy function
in which the lowest energy states correspond to the least cost assignments. The
energy function must therefore favor states which satisfy constraints a) and b)
above as well as states that correspond to a minimum cost assignment. The
energy function is implemented in terms of connection strengths between neurons. The following section details the construction of an appropriate energy
function.
THE ENERGY FUNCTION
Consider the following energy equation:
$$E = A\sum_{i=1}^{n}\Big(\sum_{j=1}^{m} Y_{ij} - 1\Big)^2 + B\sum_{j=1}^{m}\Big(\sum_{i=1}^{n+k_j} Y_{ij} - k_j\Big)^2 + C\sum_{j=1}^{m}\sum_{i=1}^{n+k_j} Y_{ij}\,(1 - Y_{ij}) \quad (3)$$
where Y ij is the output of the amplifier in row i and column j of the neuron
matrix, m and n are the number of concentrators and the number of sites
respectively, and kj is the capacity of concentrator j.
The first term will be minimum when the sum of the outputs in each row
of neurons associated with a site equals one. Notice that this term influences
only those rows of neurons which correspond to sites; no term is used to coerce
the rows of slack neurons into a particular state.
The second term of the equation will be minimum when the sum of the
outputs in each column equals the capacity kj of the corresponding concentrator. The presence of the kj slack neurons in each column allows this term to
enforce the concentrator capacity restrictions. The effect of this term upon the
upper sub-array of neurons (those which correspond to site assignments) is
that no more than kj sites will be assigned to concentrator j. The number of
neurons to be turned on in column j is kj ; consequently, the number of neurons turned on in column j of the assignment sub-array will be less than or
equal to kj .
The third term causes the energy function to favor the "zero" and "one"
states of the individual neurons by being minimum when all neurons are in one
or the other of these states. This term influences all neurons in the network.
In summary, the first term enforces constraint a) and the second term
enforces constraint b) above. The third term guarantees that a choice is actually made; it assures that each neuron in the matrix will assume a final state
near zero or one corresponding to the x_ij term of the cost equation (Eq. 2).
After some algebraic re-arrangement, Eq. 3 can be written in the form of
Eq. 1 where
T_{ij,kl} = \begin{cases} A\,\delta(i,k)\,(1-\delta(j,l)) + B\,\delta(j,l)\,(1-\delta(i,k)), & \text{if } i \le n \text{ and } k \le n \\ C\,\delta(j,l)\,(1-\delta(i,k)), & \text{if } i > n \text{ or } k > n. \end{cases}    (4)
Here quadruple subscripts are used for the entries in the matrix T. Each entry
indicates the strength of the connection between the neuron in row i and column j and the neuron in row k and column l of the neuron matrix. The function delta is given by
\delta(i, j) = \begin{cases} 1, & \text{if } i = j \\ 0, & \text{otherwise.} \end{cases}    (5)

The A and B terms specify inhibitions within a row or a column of the upper sub-array and the C term provides the column inhibitions required for the neurons in the sub-array of slack neurons.
Equation 3 specifies the form of a solution but it does not include a term
that will cause the network to favor minimum cost assignments. To complete
the formulation, the following term is added to each Tij,kl:
\frac{D \cdot \delta(j,l) \cdot (1 - \delta(i,k))}{\mathrm{cost}[i,j] + \mathrm{cost}[k,l]}
where cost[ i , j ] is the cost of assigning site i to concentrator j. The effect of
this term is to reduce the inhibitions among the neurons that correspond to low
cost assignments. The sum of the costs of assigning both site i to concentrator j
and site k to concentrator l was used in order to maintain the symmetry of T.
The external input currents were derived from the energy equation (Eq. 3) and are given by

I_{ij} = \begin{cases} 2 k_j, & \text{if } i \le n \\ 2 k_j - 1, & \text{otherwise.} \end{cases}    (6)
This exemplifies a technique for combining external input currents which arise
from combinations of certain basic types of constraints.
AN EXAMPLE
The neural network solution for a concentrator assignment problem consisting of twelve sites and five concentrators was simulated. All sites and concentrators were located within the unit square on a randomly generated map.
For this problem, it was assumed that no more than three sites could be
assigned to a concentrator. The assignment cost matrix and a typical assignment resulting from the simulation are shown in Fig. 5. It is interesting to
notice that the network proposed an assignment which made no use of concentrator 2.
Because the capacity of each concentrator kj was assumed to be three
sites, the external input current for each neuron in the upper sub-array was
I_ij = 6, while in the sub-array of slack neurons it was I_ij = 5. The other parameter values used in the simulation were A = B = C = -2 and D = 0.1.
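Putting Eqs. 4-6 together with these parameter values, the following Python sketch (not from the paper) builds T and I and runs asynchronous binary updates. The cost matrix is random, since the values in Fig. 5 are not recoverable here, and the D term is assumed to act only between site rows, where costs are defined:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 12, 5                       # sites, concentrators
k = np.full(m, 3)                  # equal capacities k_j = 3, as in the example
rows = n + k.max()                 # site rows plus slack rows
cost = rng.uniform(0.1, 1.0, size=(n, m))   # hypothetical stand-in for Fig. 5

A = B = C = -2.0
D = 0.1

def delta(a, b):
    return 1.0 if a == b else 0.0

# Connection strengths T[(i, j), (kk, l)] per Eq. 4; the cost-dependent term
# is assumed to apply only between site rows, where cost[i, j] is defined.
T = np.zeros((rows, m, rows, m))
for i in range(rows):
    for j in range(m):
        for kk in range(rows):
            for l in range(m):
                if i < n and kk < n:
                    T[i, j, kk, l] = (A * delta(i, kk) * (1 - delta(j, l))
                                      + B * delta(j, l) * (1 - delta(i, kk))
                                      + D * delta(j, l) * (1 - delta(i, kk))
                                        / (cost[i, j] + cost[kk, l]))
                else:
                    T[i, j, kk, l] = C * delta(j, l) * (1 - delta(i, kk))

# External input currents per Eq. 6 (2*k_j = 6 for site rows, 5 for slack rows).
I = np.where(np.arange(rows)[:, None] < n, 2 * k, 2 * k - 1).astype(float)

# Asynchronous binary updates until no neuron changes state.
y = rng.integers(0, 2, size=(rows, m)).astype(float)
for sweep in range(200):
    changed = False
    for i in rng.permutation(rows):
        for j in rng.permutation(m):
            u = np.tensordot(T[i, j], y, axes=2) + I[i, j]   # net input
            new = 1.0 if u > 0 else 0.0
            changed |= (new != y[i, j])
            y[i, j] = new
    if not changed:
        break

print("assignment matrix (site rows):")
print(y[:n].astype(int))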
[Fig. 5. The concentrator assignment cost matrix with choices circled: twelve sites (A-L) by five concentrators, with the circled entries marking the assignment chosen by the network.]
Since this choice of parameters results in a T matrix that is symmetric
and whose diagonal entries are all zeros, the network will converge to the
minima of Eq. 3. Furthermore, inclusion of the term which is weighted by the
parameter D causes the network to favor minimum cost assignments.
To evaluate the performance of the simulated network, an exhaustive
search of all solutions to the problem was conducted using a backtracking algorithm. A frequency distribution of the solution costs associated with the assignments generated by the exhaustive search is shown in Fig. 6. For comparison,
a histogram of the results of one hundred consecutive runs of the neural-net
simulation is shown in Fig. 7. Although the neural-net simulation did not find
a global minimum, ninety-two of the one hundred assignments which it did
find were among the best 0.01 % of all solutions and the remaining eight were
among the best 0.3%.
[Fig. 6. Distribution of assignment costs resulting from an exhaustive search of all possible solutions.]

[Fig. 7. Distribution of assignment costs resulting from 100 consecutive executions of the neural-net simulation.]

CONCLUSION

Neural networks can be used to find good, though not necessarily optimal, solutions to combinatorial optimization problems like the concentrator
assignment problem. In order to use a neural network to solve such problems,
it is necessary to be able to represent a solution to the problem as a state of the
network. Here the concentrator assignment problem was successfully mapped
onto a Hopfield network by associating each neuron with the hypothesis that a
given site should be assigned to a particular concentrator. An energy function
was constructed to determine the connections that were needed and the resulting neural network was simulated.
While the neural network solution to the concentrator assignment problem did not find a globally minimum cost assignment, it very effectively rejected poor solutions. The network was even able to suggest assignments which
would allow concentrators to be removed from the communication network.
REFERENCES
1. A. S. Tanenbaum, Computer Networks (Prentice-Hall: Englewood Cliffs,
New Jersey, 1981), p. 83.
2. E. Feldman, F. A. Lehner and T. L. Ray, Manag. Sci. V12, 670 (1966).
3. A. Kuehn and M. Hamburger, Manag. Sci. V9, 643 (1966).
4. T. Aykin and A. I. G. Babu, J. of the Oper. Res. Soc. V38, N3, 241 (1987).
5. J. J. Hopfield, Proc. Natl. Acad. Sci. U. S. A., V79, 2554 (1982).
6. J. J. Hopfield and D. W. Tank, Bio. Cyber. V52, 141 (1985).
7. D. W. Tank and J. J. Hopfield, IEEE Trans. on Cir. and Sys. CAS-33, N5, 533 (1986).
5,524 | 60 |
THE HOPFIELD MODEL WITH MULTI-LEVEL NEURONS
Michael Fleisher
Department of Electrical Engineering
Technion - Israel Institute of Technology
Haifa 32000, Israel
ABSTRACT
The Hopfield neural network model for associative memory is generalized. The generalization replaces two-state neurons by neurons taking a richer set of values. Two classes of neuron input-output relations are developed guaranteeing convergence to stable states. The first is a class of "continuous" relations and the second is a class of allowed quantization rules for the neurons. The information capacity for networks from the second class is found to be of order N^3 bits for a network with N neurons.
A generalization of the sum of outer products learning rule is developed and investigated as well.
© American Institute of Physics 1988
I. INTRODUCTION
The ability to perfonn collective computation in a distributed system of flexible structure without
global synchronization is an important engineering objective. Hopfield's neural network [1] is such a
model of associative content addressable memory.
An important property of the Hopfield neural network is its guaranteed convergence to stable states
(interpreted as the stored memories). In this work we introduce a generalization of the Hopfield model by
allowing the outputs of the neurons to take a richer set of values than Hopfield's original binary neurons.
Sufficient conditions for preserving the convergence property are developed for the neuron input output
relations. Two classes of relations are obtained. The first introduces neurons which simulate multi threshold functions, networks with such neurons will be called quantized neural networks (Q.N.N.). The second
class introduces continuous neuron input output relations and networks with such neurons will be called
continuous neural networks (C.N.N.).
In Section II, we introduce Hopfield's neural network and show its convergence property. C.N.N.
are introduced in Section III and a sufficient condition for the neuron input-output continuous relations is developed for preserving convergence. In Section IV, Q.N.N. are introduced and their input-output relations are analyzed in the same manner as in III. In Section V we look further at Q.N.N. by using the
definition of information capacity for neural networks of [2] to obtain a tight asymptotic estimate of the
capacity for a Q.N.N. with N neurons. Section VI is a generalized sum of outer products learning for the
Q.N.N. and section VII is the discussion.
II. THE HOPFIELD NEURAL NETWORK
A neural network consists of N pairwise connected neurons. The i'th neuron can be in one of two states: X_i = -1 or X_i = +1. The connections are fixed real numbers denoted by W_ij (the connection from neuron i to neuron j). Define the state vector X to be a binary vector whose i'th component corresponds to the state of the i'th neuron. Randomly and asynchronously, each neuron examines its input and decides its next output in the following manner. Let t_i be the threshold voltage of the i'th neuron. If the weighted sum of the present other N - 1 neuron outputs (which compose the i'th neuron input) is greater than or equal to t_i, the next X_i (denoted X_i^+) is +1; if not, X_i^+ is -1. This action is given in (1).
X_i^{+} = \mathrm{sgn}\Big( \sum_{j=1}^{N} W_{ij} X_j - t_i \Big)    (1)
We give the following theorem
Theorem 1 (of [1])
The network described with symmetric (W_ij = W_ji), zero diagonal (W_ii = 0) connection matrix W has the convergence property.
Define the quantity

E(X) = -\frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} W_{ij} X_i X_j + \sum_{i=1}^{N} t_i X_i    (2)
We show that E(X) can only decrease as a result of the action of the network. Suppose that X_k changed to X_k^+ = X_k + ΔX_k; the resulting change in E is given by
\Delta E = -\Delta X_k \Big( \sum_{j=1}^{N} W_{kj} X_j - t_k \Big)    (3)
(Eq. (3) is correct because of the restrictions on W.) The term in brackets is exactly the argument of the sgn function in (1), and therefore the signs of ΔX_k and the term in brackets are the same (or ΔX_k = 0), and we get ΔE ≤ 0. Combining this with the fact that E(X) is bounded shows that eventually the network will remain in a local minimum of E(X). This completes the proof.
The technique used in the proof of Theorem 1 is an important tool in analyzing neural networks. A
network with a particular underlying E(X) function can be used to solve optimization problems with E(X) as the object of optimization.
Thus we see another use of neural networks.
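A quick numerical check of Theorem 1 (a sketch, not part of the paper): with a symmetric, zero-diagonal W, the energy of Eq. 2 never increases under the asynchronous update of Eq. 1.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20
W = rng.normal(size=(N, N))
W = (W + W.T) / 2               # symmetric: W_ij = W_ji
np.fill_diagonal(W, 0.0)        # zero diagonal: W_ii = 0
t = rng.normal(size=N)          # thresholds t_i
X = rng.choice([-1.0, 1.0], size=N)

def energy(X):
    # Eq. 2
    return -0.5 * X @ W @ X + t @ X

E_prev = energy(X)
for _ in range(50):                                   # random asynchronous sweeps
    for i in rng.permutation(N):
        X[i] = 1.0 if W[i] @ X - t[i] >= 0 else -1.0  # Eq. 1
        assert energy(X) <= E_prev + 1e-9             # E(X) never increases
        E_prev = energy(X)
print("final energy:", energy(X))
```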
III. THE C.N.N.

We ask ourselves the following question: How can we change the sgn function in (1) without affecting the convergence property? The new action rule for the i'th neuron is

X_i^{+} = f_i\Big( \sum_{j=1}^{N} W_{ij} X_j \Big)    (4)
Our attention is focused on possible choices for f_i(·). The following theorem gives a part of the answer.
Theorem 2
The network described by (4) (with symmetric, zero diagonal W) has the convergence property if the f_i(·) are strictly increasing and bounded.
Define

E(X) = -\frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} W_{ij} X_i X_j + \sum_{i=1}^{N} g_i(X_i)    (5)

We show as before that E(X) can only decrease, and since E is bounded (because of the boundedness of the f_i's) the theorem is proved. Using g_i(X_i) = \int_{0}^{X_i} f_i^{-1}(u)\,du we have

\Delta E = -\Delta X_k \sum_{j=1}^{N} W_{kj} X_j + g_k(X_k + \Delta X_k) - g_k(X_k)    (6)
Using the intermediate value theorem we get

\Delta E = -\Delta X_k \Big( \sum_{j=1}^{N} W_{kj} X_j - f_k^{-1}(C) \Big)

where C is a point between X_k and X_k + ΔX_k. Now, if ΔX_k > 0 we have f_k^{-1}(C) ≤ f_k^{-1}(X_k + ΔX_k) and the term in brackets is greater than or equal to zero, so ΔE ≤ 0. A similar argument holds for ΔX_k < 0 (of course ΔX_k = 0 ⇒ ΔE = 0). This completes the proof.
Some remarks:
(a) Strictly increasing bounded neuron relations are not the whole class of relations conserving the convergence property. This is seen immediately from the fact that Hopfield's original model (1) is not in this
class.
(b) The
E (X) in the C.N.N. coincides with Hopfield's continuous neural network [3]. The difference
between the two networks lies in the updating scheme. In our C.N.N. the neurons update their outputs at
the moments they examine their inputs while in [3] the updating is in the form of a set of differential equations featuring the time evolution of the network outputs.
(c) The boundedness requirement of the neuron relations results from the boundedness of
E(X).
It is
possible to impose further restrictions on W resulting in unbounded neuron relations but keeping E (X)
bounded (from below). This was done in [4] where the neurons exhibit linear relations.
IV. THE Q.N.N.
We develop the class of quantization rules for the neurons, keeping the convergence property.
Denote the set of possible neuron outputs by y_0 < y_1 < \dots < y_n and the set of threshold values by t_1 < t_2 < \dots < t_n; the action of the neurons is given by
X_i^{+} = y_l \quad \text{if} \quad t_l < \sum_{j=1}^{N} W_{ij} X_j \le t_{l+1}, \qquad l = 0, \dots, n    (8)

(with t_0 = -\infty and t_{n+1} = +\infty understood).
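A direct sketch of this update rule (not from the paper); for the two-level case it reduces to the sgn rule of Eq. 1, as the corollary below notes:

```python
import numpy as np

def quantize(u, y, t):
    """Eq. 8: return y[l] for the unique l with t[l] < u <= t[l+1],
    taking t_0 = -inf and t_{n+1} = +inf implicitly."""
    return y[np.searchsorted(t, u, side="left")]

levels = np.array([-1.0, 1.0])          # y_0 < y_1
thresholds = np.array([0.0])            # t_1
assert quantize(0.7, levels, thresholds) == 1.0    # sgn-like behaviour
assert quantize(-0.7, levels, thresholds) == -1.0  # (Hopfield's original model)

levels = np.array([0.0, 1.0, 2.0, 3.0]) # a four-level quantizer
thresholds = np.array([0.5, 1.5, 2.5])
assert quantize(1.2, levels, thresholds) == 1.0
```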
The following theorem gives a class of quantization rules with the convergence property.
Theorem 3
Any quantization rule for the neurons which is an increasing step function, that is,

y_0 < y_1 < \dots < y_n, \qquad t_1 < \dots < t_n,    (9)

yields a network with the convergence property (with W symmetric and zero diagonal).
We proceed to prove.
Define

E(X) = -\frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} W_{ij} X_i X_j + \sum_{i=1}^{N} G(X_i)    (10)

where G(X) is a piecewise linear convex ∪ function defined by the relation

G'(x) = t_l \quad \text{for} \quad y_{l-1} < x < y_l.    (11)
As before we show ΔE ≤ 0. Suppose a change occurred in X_k such that X_k = y_{i-1}, X_k^{+} = y_i. We then have

\Delta E = -(y_i - y_{i-1}) \Big( \sum_{j=1}^{N} W_{kj} X_j - t_i \Big) \le 0    (12)

since X_k^{+} = y_i requires \sum_{j} W_{kj} X_j > t_i.
A similar argument follows when X_k = y_i, X_k^{+} = y_{i-1} with X_k^{+} < X_k. Any bigger change in X_k (from y_i to y_j, |i - j| > 1) yields the same result, since it can be viewed as a sequence of |i - j| changes each resulting in ΔE ≤ 0. The proof is completed by noting that ΔX_k = 0 ⇒ ΔE = 0 and that E(X) is bounded.
Corollary
Hopfield's original model is a special case of (9).
V. INFORMATION CAPACITY OF THE Q.N.N.
We use the definition of [2] for the information capacity of the Q.N.N.
Definition 1
The information capacity of the Q.N.N. (bits) is the
log (Base 2) of the number of distinguishable
networks of N neurons. Two networks are distinguishable if observing the state transitions of the neurons
yields different observations. For Hopfield's original model it was shown in [2] that the capacity
network of N neurons is bounded by
C ~ Q(N 3)b
C ~ log (2(N-l)2f = O(N 3)b.
C
of a
It was also shown that
and thus is exactly of the order N 3b. It is obvious that in our case (which contains the
original model) we must have
C ~ Q(N 3)b
as well (since the lower bound cannot decrease in this
richer case). It is shown in the Appendix that the number of multi threshold functions of N -1 variables
with
n+l
oUlput levels is at most (n+lf 2+N +1 since we have
( (n+lf2+N +1f
N
neurons there will be
distinguishablenetworlcs and thus
(14)
01
as before,
C
is exactly of O(N 3)b. In fact, the rise in
C
is probably a faclOr of O(log2n) as can be
seen from the upper bound.
VI. "OUTER PRODUCT" LEARNING RULE
For Hopfield's original network with two-state neurons (taking the values ±1) a natural and extensively investigated learning rule is the so-called sum of outer products construction:

W_{ij} = \frac{1}{N} \sum_{l=1}^{K} X_i^l X_j^l    (15)

where X^1, ..., X^K are the desired stable states of the network. A well-known result for (15) is that the asymptotic capacity K of the network is
K = \frac{N-1}{4 \log N} + 1    (16)
In this section we introduce a natural generalization of (15) and prove a similar result for the asymp-
totic capacity. We first limit the possible quantization rules to:

(a) y_0 < \dots < y_n, with t_j = (y_j + y_{j-1})/2, j = 1, \dots, n;
(b) n + 1 is even, and y_i ≠ 0 for all i;
(c) y_i = -y_{n-i}, i = 0, \dots, n.    (17)
Next we state that the desired stable vectors X^1, ..., X^K are such that each component is picked independently at random from {y_0, ..., y_n} with equal probability. Thus, the K·N components of the X's are zero mean i.i.d. random variables. Our modified learning rule is
W_{ij} = \frac{1}{N} \sum_{l=1}^{K} X_i^l \left( \frac{1}{X_j^l} \right)    (18)

Note that for X_i ∈ {+1, -1}, (18) is identical to (15).
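A small-scale numerical check of rule (18) (a sketch under assumed sizes, not from the paper): for K well below the capacity bound, the stored multi-level patterns come out stable.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 500, 2
y = np.array([-2.0, -1.0, 1.0, 2.0])   # symmetric, nonzero levels; n+1 = 4 is even, per (17)
t = (y[1:] + y[:-1]) / 2               # t_j = (y_j + y_{j-1}) / 2
X = rng.choice(y, size=(K, N))         # desired stable states X^1, ..., X^K

W = (X.T @ (1.0 / X)) / N              # Eq. 18: W_ij = (1/N) sum_l X_i^l / X_j^l
np.fill_diagonal(W, 0.0)               # zero diagonal, as in the proof

def quantize(u):
    return y[np.searchsorted(t, u, side="left")]

stable = all((quantize(W @ x) == x).all() for x in X)
print("all stored patterns stable:", stable)
```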
Define

\Delta y = \min_{i \neq j} |y_i - y_j|, \qquad A = \max_{i,j} \frac{|y_i|^2}{|y_j|}.

We state that:

PROPOSITION: The asymptotic capacity of the above network is given by

K = \frac{(\Delta y)^2 N}{16 A^2 \log N}.    (19)
PROOF:

Define

P(K, N) = \Pr\{ \text{the } K \text{ vectors chosen randomly as described are stable states with the } W \text{ of (18)} \}    (20)

where A_{ij} is the event that the i'th component of the j'th vector is in error. We concentrate on the event A_{11}, W.L.O.G. The input u_1 when X^1 is presented is given by

u_1 = X_1^1 + \frac{K-1}{N} X_1^1 + \frac{1}{N} \sum_{j=2}^{N} \sum_{l=2}^{K} X_1^l \frac{X_j^1}{X_j^l}    (21)

The first term is mapped by (17) into itself and corresponds to the desired signal. The last term is a sum of (K-1)(N-1) i.i.d. zero mean random variables and corresponds to noise.
The middle term \frac{K-1}{N} X_1^1 is disposed of by assuming \frac{K-1}{N} \to 0 as N \to \infty. (With a zero-diagonal choice of W, i.e., using (18) only for i ≠ j, this term does not appear.)
Denoting the noise by I, we have

\Pr(A_{11}) = \Pr\{ \text{the noise gets us out of range} \} \le \Pr\{ |I| \ge \Delta y / 2 \} \le 2 \exp\!\left( - \frac{(\Delta y)^2 N^2}{2 (K-1)(N-1) 4 A^2} \right)    (22)

where the first inequality is from the definition of Δy and the second uses the lemma of [6], p. 58. We thus get

P(K, N) \ge 1 - K \cdot N \cdot 2 \exp\!\left( - \frac{(\Delta y)^2 N^2}{8 (K-1)(N-1) A^2} \right)    (23)

Substituting (19) and taking N → ∞ we get P(K, N) → 1, and this completes the proof.
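As a consistency check (not in the original), specializing (19) to Hopfield's two-state neurons recovers (16) up to the additive constant:

y_i \in \{-1, +1\} \;\Rightarrow\; \Delta y = \min_{i \neq j} |y_i - y_j| = 2, \qquad A = \max_{i,j} \frac{|y_i|^2}{|y_j|} = 1,

so that

K = \frac{(\Delta y)^2 N}{16 A^2 \log N} = \frac{4N}{16 \log N} = \frac{N}{4 \log N}.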
VII. DISCUSSION

Two classes of generalization of the Hopfield neural network model were presented. We give some remarks:

(a) Any combination of neurons from the two classes will have the convergence property as well.

(b) Our definition of the information capacity for the C.N.N. is useless since a full observation of the possible state transitions of the network is impossible.
APPENDIX
We prove the following theorem.
Theorem

An upper bound on the number of multi-threshold functions with N inputs and M points in the domain (out of (n+1)^N possible points) is C_N^M, where C_N^M is the solution of the recurrence relation

C_N^M = C_N^{M-1} + n \cdot C_{N-1}^{M-1}    (A.1)
Let us look at the N-dimensional weight space W. Each input point X divides the weight space into n + 1 regions by n parallel hyperplanes \sum_{i=1}^{N} W_i X_i = t_k, k = 1, ..., n. We keep adding points in such a way that the n new hyperplanes corresponding to each added point partition the W space into as many regions as possible. Assume M - 1 points have made C_N^{M-1} regions and we add the M'th point. Each hyperplane (out of n) is divided into at most C_{N-1}^{M-1} regions (being itself an N - 1 dimensional space divided by (M - 1)n hyperlines). We thus have after passing the n hyperplanes:

C_N^M = C_N^{M-1} + n \cdot C_{N-1}^{M-1}

whose solution is

C_N^M = (n+1) \sum_{i=0}^{N-1} \binom{M-1}{i} n^i

and the theorem is proved. Evaluating the solution of the recurrence in the case M = (n+1)^N (all possible points), we have a bound on the number of multi-threshold functions of N variables equal to (n+1)^{N^2+N+1}, and the result used is established.
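A quick numerical check (not part of the original) that the stated closed form indeed satisfies recurrence (A.1):

```python
from math import comb

n = 3  # n + 1 output levels

def C(N, M):
    # closed form: C_N^M = (n + 1) * sum_{i=0}^{N-1} binom(M-1, i) * n**i
    return (n + 1) * sum(comb(M - 1, i) * n**i for i in range(N))

# recurrence (A.1): C_N^M = C_N^{M-1} + n * C_{N-1}^{M-1}
for N in range(2, 6):
    for M in range(2, 9):
        assert C(N, M) == C(N, M - 1) + n * C(N - 1, M - 1)
print("closed form satisfies (A.1) on the tested range")
```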
LIST OF REFERENCES
[1] Hopfield, J. J., "Neural networks and physical systems with emergent collective computational abilities", Proc. Nat. Acad. Sci. USA, Vol. 79 (1982), pp. 2554-2558.
[2] Abu-Mostafa, Y. S. and St. Jacques, J., "Information capacity of the Hopfield model", IEEE Trans. on Info. Theory, Vol. IT-31 (1985), pp. 461-464.
[3] Hopfield, J. J., "Neurons with graded response have collective computational properties like those of two state neurons", Proc. Nat. Acad. Sci. USA, Vol. 81 (1984).
[4] Fleisher, M., "Fast processing of autoregressive signals by a neural network", to be presented at IEEE Conference, Israel 1987.
[5] Levin, E., private communication.
[6] Petrov, "Sums of Independent Random Variables".
5,525 | 600 | A Neural Network that Learns to Interpret
Myocardial Planar Thallium Scintigrams
Charles Rosenberg, Ph.D.*
Jacob Erel, M.D.
Department of Computer Science
Hebrew University
Jerusalem, Israel
Department of Cardiology
Sapir Medical Center
Meir General Hospital
Kfar Saba, Israel
Henri Atlan, M.D., PhD.
Department of Biophysics and Nuclear Medicine
Hadassah Medical Center
Jerusalem, Israel
Abstract
The planar thallium-201 myocardial perfusion scintigram is a widely used
diagnostic technique for detecting and estimating the risk of coronary
artery disease. Neural networks learned to interpret 100 thallium scintigrams as determined by individual expert ratings. Standard error backpropagation was compared to standard LMS, and LMS combined with
one layer of RBF units. Using the "leave-one-out" method, generalization was tested on all 100 cases. Training time was determined automatically from cross-validation performance. Best performance was attained
by the RBF/LMS network with three hidden units per view and compares
favorably with human experts.
1 Introduction
Coronary artery disease (CAD) is one of the leading causes of death in the Western World.
The planar thallium-201 is considered to be a reliable diagnostic tool in the detection of
* Current address: Geriatrics, Research, Educational and Clinical Center, VA Medical Center, Salt
Lake City, Utah.
CAD. Thallium is a radioactive isotope that distributes in mammalian tissues after intravenous administration and is imaged by a gamma camera. The resulting scintigram is visually
interpreted by the physician for the presence or absence of defects - areas with relatively
lower perfusion levels. In myocardial applications, thallium is used to measure myocardial
ischemia and to differentiate between viable and non-viable (infarcted) heart muscle (pohost and Henzlova, 1990).
Diagnosis of CAD is based on the comparison of two sets of images, one set acquired
immediately after a standard effort test (BRUCE protocol), and the second following a
delay period of four hours. During this delay, the thallium redistributes in the heart muscle
and spontaneously decays. Defects caused by scar tissue are relatively unchanged over
the delay period (fixed defect), while those caused by ischemia are partially or completely
filled-in (reversible defect) (Beller, 1991; Datz et al., 1992).
Image interpretation is difficult for a number of reasons: the inherent variability in biological systems which makes each case essentially unique, the vast amount of irrelevant and
noisy information in an image, and the "context-dependency" of the interpretation on data
from many other tests and clinical history. Interpretation can also be significantly affected
by attentional shifts, perceptual abilities, and mental state (Franken Jr. and Berbaum, 1991;
Cuar6n et al., 1980).
While networks have found considerable application in ECG processing (e.g. (Artis et al.,
1991)) and clinical decision-making (Baxt, 1991b; Baxt, 1991a), they have thus far found
limited application in the field of nuclear medicine. Non-cardiac imaging applications include the grading of breast carcinomas (Dawson et al., 1991) and the discrimination of normal vs. Alzheimer's PET scans (Kippenhan et al., 1990). Of the studies dealing specifically
with cardiac imaging, neural networks have been applied to several problems in cardiology
including the identification of stenosis (Porenta et al., 1990; Cios et al., 1989; Cios et al.,
1991; Cianflone et al., 1990; Fujita et al., 1992). These studies encouraged us to explore
the use of neural networks in the interpretation of cardiac scintigraphy.
2 Methods
We trained one network consisting of a layer of gaussian RBF units in an unsupervised fashion to discover features in circumferential profiles in planar thallium scintigraphy. Then a
second network was trained in a supervised way to map these features to physician's visual
interpretations of those images using the delta rule (Widrow and Hoff, 1960). This architecture was previously found to compare favorably to other network learning algorithms
(2-layer backpropagation and single-layer networks) on this task (Rosenberg et al., 1993;
Erel et al., 1993).
In our experiments, all of the input vectors representing single views f were first normalized to unit length: V = f/||f||.
to unit length V = IIfll . The activation value of a gaussian unit, OJ, is then given by:
(1)
netj
O1? = exp(--)
w
(2)
where j is an index to a gaussian unit and i is an input unit index. The width of the gaussian,
[Figure 1 appears here: from bottom to top, the Input layer (ANT, LAO 45, and LAT views), the RBF layer, and the regional output scores (Normal, Mild, Moderate, Severe).]
Figure 1: The network architecture. The first layer (Input) encoded the three circumferential profiles representing the three views, anterior (ANT), left lateral oblique (LAO). and
left lateral (LAT). The second layer consisted of radial basis function (RBF) units, the third
layer, semi-linear units trained in a supervised fashion. The outputs of the network corresponded to the visual scores as given by the expert observer. An additional unit per view
encoded the scaling factor of the input patterns lost as a result of input normalization.
The width of the gaussian, given by w, was fixed at 0.25 for all units.1
The gaussian units were trained using a competitive learning rule which moves the center
of the unit closest to the current input pattern (O_max, i.e., the "winner") closer to the input pattern2:
\Delta w_{i,\mathrm{winner}} = \eta (v_i - w_{i,\mathrm{winner}})    (3)

2.1 Data Acquisition and Selection
Scintigraphic images were acquired for each of three views: anterior (ANT), left lateral
oblique (LAO 45), and left lateral (LAT) for each patient case. Acquisition was performed
twice, once immediately following a standard effort test and once following a delay period
of four hours. Each image was pre-processed to produce a circumferential profile (Garcia
et al., 1981; Francisco et al., 1982), in which maximum pixel counts within each of 60, 6° contiguous segmental regions are plotted as a function of angle (Garcia, 1991). Preprocessing involved positioning of the region of interest (ROI), interpolative background
subtraction, smoothing and rotational alignment to the heart's apex (Garcia, 1991).
1We have considered applying the learning rule to the unit widths (w) as well as the RBF weights,
however we have not as yet pursued this possibility.
2Following Rumelhart and Zipser (Rumelhart and Zipser, 1986), the other units were also pulled
towards the input vector, although to a much smaller extent than the winner. We used a ratio of 1 to
100.
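For illustration, a compact Python sketch of this unsupervised stage on hypothetical data (Eq. 1 is taken to be the squared distance, and the 1:100 winner/loser ratio follows footnote 2):

```python
import numpy as np

rng = np.random.default_rng(0)
profiles = rng.random((99, 60))                                  # hypothetical circumferential profiles
V = profiles / np.linalg.norm(profiles, axis=1, keepdims=True)   # unit length: V = f / ||f||

n_rbf, width, eta = 3, 0.25, 0.1
W = rng.random((n_rbf, 60))
W /= np.linalg.norm(W, axis=1, keepdims=True)                    # unit-length initial centers

for epoch in range(20):
    for v in V:
        net = ((v - W) ** 2).sum(axis=1)    # Eq. 1 (assumed squared distance)
        act = np.exp(-net / width)          # Eq. 2, later fed to the supervised layer
        winner = int(net.argmin())          # largest activation = smallest distance
        step = np.full(n_rbf, eta / 100.0)  # the other units move at 1/100th the rate
        step[winner] = eta
        W += step[:, None] * (v - W)        # Eq. 3, applied to all units
```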
3The profiles were generated using the Elscint CTL software package for planar quantitative thallium-201, based on the Cedars-Sinai technique (Garcia et al., 1981; Maddahi et al., 1981; Areeda et al., 1982).
Lesion      single   multiple   Total
mild          12        16        28
moderate       5        16        21
severe         0        11        11
Total         17        43        60
Table 1: Distribution of Abnormal Cases as Scored by the Expert Observer. Defects occurring in any combination of two or more regions (even the proximal and distal subregions
of a single area) were treated as one multiple defect. The severity level of multiple lesions
was based on the most severe lesion present.
Cases were pre-selected based on the following criteria (Beller, 1991):
• Insufficient exercise. Cases in which the heart rate was less than 130 b.p.m. were
eliminated, as this level of stress is generally deemed insufficient to accurately
distinguish normal from abnormal conditions.
• Positional abnormalities. In a few cases, the "region of interest" was not positioned or aligned correctly by the technician.
• Increased lung uptake. Typically in cases of multi-vessel disease, a significant
proportion of the perfusion occurs in the lungs as well as in the heart, making it
more difficult to determine the condition of the heart due to the partially overlapping positions of the heart and lungs.
• Breast artifacts.
Cases were selected at random between August, 1989 and March, 1992. Approximately a
third of the cases were eliminated due to insufficient heart rate, 4-5% due to breast artifacts,
4% due to lung uptake, and 1-2% due to positional abnormalities. A set of one hundred
usable cases remained.
2.2 Visual Interpretation
Each case was visually scored by a single expert observer for each of nine anatomical regions generally accepted as those that best relate to the coronary circulation: Septal: proximal and distal, Anterior: proximal and distal, Apex, Inferior: proximal and distal, and
Posterior-Lateral: proximal and distal. Scoring for each region was from normal (1) to
severe (4), indicating the level of the observed perfusion deficit.
Intra-observer variability was examined by having the observer re-interpret 17 of the cases
a second time. The observer was unable to remember the cases from the first reading and
could not refer to the previous scores.
Exact matches were obtained on 91.5% of the regions; only 8 of the 153 total regions (5%)
were labeled as a defect (mild, moderate or severe) on one occasion and not on the other.
All differences, when they occurred, were of a single rating level.4
4In contrast, measured inter-observer variability was much higher. A set of 13 cases was individ-
2.3 The Network Model
The input units of the network were divided into 3 groups of 60 units each, each group
representing the circumferential profile for a single view. A set of 3 RBF units were assigned
to each input group. Then a second layer of weights was trained using the delta rule to
reproduce the target visual scores assigned by the expert observer. The categorical visual
scores were translated to numerical values to make the data suitable for network learning:
normal =0.0, mild defect =0.3, moderate defect =0.7, and severe defect = 1.0.
In order to make efficient use of the available data, we actually trained 100 identical networks; each network was trained on a subset of 99 of the 100 cases and tested on the remaining one. This procedure, sometimes referred to as the "leave-one-out" or "jack-knife"
method, enabled us to determine the generalization performance for each case. This procedure was followed for both the RBF and the delta rule training.5 Training of a single
network took only a few minutes of Sun 4 computer time.
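The protocol itself is simple to write down; in this sketch (not from the paper) an ordinary least-squares fit stands in for the full RBF/LMS pipeline, and the feature and target shapes are only illustrative:

```python
import numpy as np

def loo_predictions(X, Y):
    """Train 100 identical models, each on 99 cases; test on the held-out case."""
    preds = np.zeros_like(Y)
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        W, *_ = np.linalg.lstsq(X[mask], Y[mask], rcond=None)
        preds[i] = X[i] @ W
    return preds

rng = np.random.default_rng(0)
X = rng.random((100, 12))                            # e.g., 3 RBF units x 3 views + scale units
Y = rng.choice([0.0, 0.3, 0.7, 1.0], size=(100, 9))  # visual scores for the 9 regions
P = loo_predictions(X, Y)
print(P.shape)
```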
3 Results
Because of the larger numbers of confusions between normal and mild regions in both the
inter- and intra-observer scores, disease was defined as moderate or severe defects. The
threshold value dividing the output values of the network into these two sets was varied
from 0 to 1 in 0.01 step increments. The number of agreements between the expert observer
and the network were computed for each threshold value. The resulting scores, accumulated
over all threshold values, were plotted as a Receiver Operating Characteristic (ROC) curve.
Best performance (percent correct) was achieved with a threshold value of 0.28, which
yielded an overall accuracy of 88.7% (798/900 regions) on the stress data. However, this
value of the threshold heavily favored specificity over sensitivity due to the preponderance
of normal regions in the data. Using the decision threshold which maximized the sum
of sensitivity and specificity, 0.10, accuracy dropped to 84.9% (764/900) but sensitivity
improved to 0.771 (121/157), and specificity was 0.865 (643/743).
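The threshold sweep can be written directly; a sketch with synthetic scores (not the study's data):

```python
import numpy as np

def sweep_thresholds(scores, disease):
    """Return the 0.01-step threshold maximizing sensitivity + specificity."""
    best = None
    for thr in np.arange(0.0, 1.01, 0.01):
        pred = scores >= thr
        tp = ( pred &  disease).sum(); fn = (~pred &  disease).sum()
        tn = (~pred & ~disease).sum(); fp = ( pred & ~disease).sum()
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        if best is None or sens + spec > best[1] + best[2]:
            best = (float(thr), float(sens), float(spec))
    return best

rng = np.random.default_rng(0)
disease = rng.random(900) < 0.17                               # ~157/900 diseased regions
scores = np.clip(disease * 0.5 + rng.normal(0.2, 0.2, 900), 0, 1)
print(sweep_thresholds(scores, disease))
```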
3.1 Distinguishing Fixed vs. Reversible Defects
In order to take into account the delayed distribution as well as the stress set of images, the
network was essentially duplicated: one network processed the stress data, and the other,
ually interpreted by 3 expert observers in a previous experiment (Rosenberg et al., 1993). Percent
agreement (exact matches) between the observers was 82% (288/351). Of the 63 mis-matches, 5 or
about 8% of the regions were of 2 levels of severity. There were no differences of 3 levels of severity.
Approximately two-thirds of the disagreements were between normal and mild regions. These results
indicate that the single observer data employed in the present study are more reliable than the mixed
consensus and individual scores used previously.
5Details of network learning were as follows: Each of the 100 networks was initialized and trained
in the same way. RBF-to-output unit weights were initialized to small random values between 0.5 and
-0.5. Input-to-RBF unit weights were first randomized and then normalized so that the weight vectors
to each RBF unit were of unit length. Unsupervised, competitive training of the RBF units continued
for 100 "epochs" or complete sweeps through the set of 99 cases: 20 epochs with a learning rate (11)
of 0.1 followed by 80 epochs at 0.01 without momentum (0'). Supervised training using a learning
rate of 0.05 and momentum 0.9, was terminated based on cross-validation testing after 200 epochs.
Further training led to over-training and poorer generalization.
the redistribution data. (For details, see (Erel et al., 1993).)
The combined network exhibited only a limited ability to distinguish between scar and
ischemia. Performance on scar detection was good (sens. 0.728 (75/103), spec. 0.878
(700/797)), but the sensitivity of the network on ischemia detection was only 0.185 (10/54).
This result may be explained, at least in part, by the much smaller number of ischemic regions included in the data set as compared with scars (54 versus 103).
4 Conclusions and Future Directions
We suspect that our major limitation is in defect sampling. In order that a statistical system
(networks or otherwise) generalize well to new cases, the data used in training must be
representative of the full population of data likely to be sampled. This is unlikely to happen
when the number of positive cases is on the order of 50, as was the case with ischemia,
since each possible defect location, plus all the possible combinations of locations must be
included.
A variant of backpropagation, called competitive backpropagation, has recently been developed which is claimed to generalize appropriately in the presence of multiple defects (Cho
and Reggia, 1993). Weights in this network are constrained to take on positive values,
so that diagnoses made by the system add constructively. In a standard backpropagation
network, multiple diseases can cancel each other out, due to complex interactions of both
positive and negative connection strengths. We are currently planning to investigate the
application of this learning algorithm to the problem of ischemia detection.
Other improvements and extensions include:
• Elicit confidence ratings. Expert visual interpretations could be augmented by
degree of confidence ratings. Highly ambiguous cases could be reduced in importance or eliminated. The ratings could also be used as additional targets for
the network6: cases indicated by the network with low levels of confidence would
require closer inspection by a physician. Initial results are promising in this regard.
• Provide additional information. We have not yet incorporated clinical history,
gender, and examination EKG. Clinical history has been found to have a profound
impact on interpretation of radiographs (Doubilet and Herman, 1981). The inclusion of these variables should allow the network to approximate more closely a
complete diagnosis, and boost the utility of the network in the clinical setting.
• Add constraints. Currently we do not utilize the angles that relate the three views.
It may be possible to build these angles in as constraints and thereby cut down on
the number of free network parameters.
• Expand application. Besides planar thallium, our approach may also be applied
to non-planar 3-D imaging technologies such as SPECT and other nuclear agents or
stress-inducing modalities such as dipyridamole. Preliminary results are promising in this regard.
6See (Tesauro and Sejnowski, 1988) for a related idea.
Acknowledgements
The authors wish to thank Mr. Haim Karger for technical assistance, and the Departments
of Computer Science and Psychology at the Hebrew University for computational support.
We would also like to thank Drs. David Shechter, Moshe Bocher, Roland Chisin and the
staff of the Department of Medical Biophysics and Nuclear Medicine for their help, both
large and small, and two anonymous reviewers. Terry Sejnowski suggested our use of RBF
units.
References
Areeda, J., Van Train, K., Garcia, E. V., Maddahi, J., Rozanski, A., Waxman, A., and Berman,
D. (1982). Improved analysis of segmental thallium-201 myocardial scintigrams:
Quantitation of distribution, washout, and redistribution. In Esser, P. D., editor, Digital
Imaging. Society of Nuclear Medicine, New York.
Artis, S., Mark, R., and Moody, G. (1991). Detection of atrial fibrillation using artificial
neural networks. In Computers in Cardiology, pages 173-176, Venice, Italy. IEEE,
IEEE Computer Society Press.
Baxt, W. (1991a). Use of an artificial neural network for data analysis in clinical decisionmaking: The diagnosis of acute coronary occlusion. Neural Computation, 2:480-489.
Baxt, W. (1991b). Use of an artificial neural network for the diagnosis of myocardial infarction. Annals of Internal Medicine, 115:843-848.
Beller, G. A. (1991). Myocardial perfusion imaging with thallium-201. In Marcus, M. L.,
Schelbert, H. R., Skorton, D. J., and Wolf, G. L., editors, Cardiac Imaging. W. B.
Sanders.
Cho, S. and Reggia, J. (1993). Multiple disorder diagnosis with adaptive competitive neural
networks. Artificial Intelligence in Medicine. To appear.
Cianflone, D., Carandente, O., Fragasso, G., Margonato, A., Meloni, C., Rossetti, E.,
Gerundini, P., and Chierchia, S. L. (1990). A neural network based model of predicting
the probability of coronary lesion from myocardial perfusion SPECT data. In Proceedings of the 37th Annual Meeting of the Society of Nuclear Medicine, page 797.
Cios, K. J., Goodenday, L. S., Merhi, M., and Langenderfer, R. (1989). Neural networks in
detection of coronary artery disease. In Computers in Cardiology Conference, pages
33-37, Jerusalem, Israel. IEEE, IEEE Computer Society Press.
Cios, K. J., Shin, I., and Goodenday, L. S. (1991). Using fuzzy sets to diagnose coronary
artery stenosis. Computer, pages 57-63.
Cuar6n, A., Acero, A., Cardena, M., Huerta, D., Rodriguez, A., and de Garay, R. (1980). Interobserver variability in the interpretation of myocardial images with Tc-99m-Iabeled
diphosponate and pyrophosphate. Journal of Nuclear Medicine, 21(1):1-9.
Datz, F., Gabor, F., Christian, P., Gullberg, G., Menzel, C., and Morton, K. (1992). The use of
computer-assisted diagnosis in cardiac-perfusion nuclear medicine studies: A review.
Journal of Digital Imaging, 5(4):1-14.
Dawson, A., Austin, R, and Weinberg, D. (1991). Nuclear grading of breast carcinoma by
image analysis. American Journal of Clinical Pathology, 95(4):S29-S37.
Doubilet, P. and Herman, P. (1981). Interpretation of radiographs: Effect of clinical history.
American Journal of Roentgenology, 137: 1055-1058.
Erel, J., Rosenberg, C., and Atlan, H. (1993). Neural network for automatic interpretation
of thallium scintigrams. In preparation.
Francisco, D. A., Collins, S. M., Go, R. T., et al. (1982). Tomographic thallium-201 myocardial perfusion scintigrams after maximal coronary artery vasodilation with intravenous dipyridamole: Comparison of qualitative and quantitative approaches. Circulation, 66(2).
Franken Jr., E. A. and Berbaum, K. S. (1991). Perceptual aspects of cardiac imaging. In
Marcus, M. L., Schelbert, H. R., Skorton, D. J., and Wolf, G. L., editors, Cardiac
Imaging. W. B. Sanders.
Fujita, H., Katafuchi, T., Uehara, T., and Nishimura, T. (1992). Application of artificial
neural network to computer-aided diagnosis of coronary artery disease in myocardial
SPECT bull's-eye images. The Journal of Nuclear Medicine, 33(2):272-276.
Garcia, E. V. (1991). Physics and instrumentation of radionuclide imaging. In Marcus,
M. L., Schelbert, H. R., Skorton, D. J., and Wolf, G. L., editors, Cardiac Imaging. W.
B. Sanders.
Garcia, E. V., Maddahi, J., Berman, D. S., and Waxman, A. (1981). Space-time quantitation
of thallium-201 myocardial scintigraphy. Journal of Nuclear Medicine, 22:309-317.
Kippenhan, J., Barker, W., Pascal, S., and Duara, R. (1990). A neural-network classifier
applied to PET scans of normal and Alzheimer's disease (AD) patients. In The Proceedings of the 37th Annual Meeting of the Society of Nuclear Medicine, volume 31,
Washington, D.C.
Maddahi, J., Garcia, E. V., Berman, D. S., Waxman, A., Swan, H. J. C., and Forrester,
J. (1981). Improved noninvasive assessment of coronary artery disease by quantitative analysis of regional stress myocardial distribution and washout of thallium-201.
Circulation, 64:924-935.
Pohost, G. M. and Henzlova, M. J. (1990). The value of thallium-201 imaging. New
England Journal of Medicine, 323(3):190-192.
Porenta, G., Kundrat, S., Dorffner, G., Petta, P., Duit, J., and Sochor, H. (1990). Computer based
image interpretations of thallium- 201 scintigrams: Assessment of coronary artery
disease using the parallel distributed processing approach. In Proceedings of the 37th
Annual Meeting of the Society of Nuclear Medicine, page 825.
Rosenberg, C., Erel, J., and Atlan, H. (1993). A neural network that learns to interpret
myocardial planar thallium scintigrams. Neural Computation. To appear.
Rumelhart, D. and Zipser, D. (1986). Feature discovery by competitive learning. In Rumelhart, D. and McClelland, J., editors, Parallel Distributed Processing, volume 1, chapter 5, pages 151-193. MIT Press, Cambridge, Mass.
Tesauro, G. and Sejnowski, T. J. (1988). A parallel network that learns to play backgammon.
Technical Report CCSR-88-2, University of Illinois at Urbana-Champaign Center for
Complex Systems Research.
Widrow, B. and Hoff, M. (1960). Adaptive switching circuits. In 1960 IRE WESCON
Convention Record, volume 4, pages 96-104. IRE, New York.
PART X
IMPLEMENTATIONS
5,526 | 6,000 | Predtron: A Family of Online Algorithms for General
Prediction Problems
Prateek Jain
Microsoft Research, INDIA
[email protected]
Nagarajan Natarajan
University of Texas at Austin, USA
[email protected]
Ambuj Tewari
University of Michigan, Ann Arbor, USA
[email protected]
Abstract
Modern prediction problems arising in multilabel learning and learning to rank
pose unique challenges to the classical theory of supervised learning. These problems have large prediction and label spaces of a combinatorial nature and involve
sophisticated loss functions. We offer a general framework to derive mistake
driven online algorithms and associated loss bounds. The key ingredients in our
framework are a general loss function, a general vector space representation of
predictions, and a notion of margin with respect to a general norm. Our general
algorithm, Predtron, yields the perceptron algorithm and its variants when instantiated on classic problems such as binary classification, multiclass classification,
ordinal regression, and multilabel classification. For multilabel ranking and subset ranking, we derive novel algorithms, notions of margins, and loss bounds. A
simulation study confirms the behavior predicted by our bounds and demonstrates
the flexibility of the design choices in our framework.
1
Introduction
Classical supervised learning problems, such as binary and multiclass classification, share a number
of characteristics. The prediction space (the space in which the learner makes predictions) is often
the same as the label space (the space from which the learner receives supervision). Because directly learning discrete valued prediction functions is hard, one learns real-valued or vector-valued
functions. These functions generate continuous predictions that are converted into discrete ones
via simple mappings, e.g., via the ?sign? function (binary classification) or the ?argmax? function
(multiclass classification). Also, the most commonly used loss function is simple, viz. the 0-1 loss.
In contrast, modern prediction problems, such as multilabel learning, multilabel ranking, and subset
ranking do not share these characteristics. In order to handle these problems, we need a more general
framework that offers more flexibility. First, it should allow for the possibility of having different
label space and prediction space. Second, it should allow practitioners to use creative, new ways
to map continuous, vector-valued predictions to discrete ones. Third, it should permit the use of
general loss functions.
Extensions of the theory of classical supervised learning to modern predictions problems have begun. For example, the work on calibration dimension [1] can be viewed as extending one aspect of
the theory, viz. that of calibrated surrogates and consistent algorithms based on convex optimization. This paper deals with the extension of another interesting part of classical supervised learning:
mistake driven algorithms such as perceptron (resp. winnow) and their analyses in terms of $\ell_2$ (resp. $\ell_1$) margins [2, Section 7.3].
We make a number of contributions. First, we provide a general framework (Section 2) whose
ingredients include an arbitrary loss function and an arbitrary representation of discrete predictions in a continuous space. The framework is abstract enough to be of general applicability but
it offers enough mathematical structure so that we can derive a general online algorithm, Predtron
(Algorithm 1), along with an associated loss bound (Theorem 1) under an abstract margin condition (Section 2.2). Second, we show that our framework unifies several perceptron-like algorithms
for classical problems such as binary classification, multiclass classification, ordinal regression, and
multilabel classification (Section 3). Even for these classical problems, we get some new results, for
example, when the loss function treats labels asymmetrically or when there exists a ?reject? option
in classification. Third, we apply our framework to two modern prediction problems: subset ranking (Section 4) and multilabel ranking (Section 5). In both of these problems, the prediction space
(rankings) is different from the supervision space (set of labels or vector of relevance scores). For
these two problems, we propose interesting, novel notions of correct prediction with a margin and
derive mistake bounds under a loss derived from NDCG, a ranking measure that pays more attention
to the performance at the top of a ranked list. Fourth, our techniques based on online convex optimization (OCO) can effortlessly incorporate notions of margins w.r.t. non-Euclidean norms, such as
$\ell_1$ norm, group norm, and trace norm. Such flexibility is important in modern prediction problems
where the learned parameter can be a high dimensional vector or a large matrix with low group or
trace norm. Finally, we test our theory in a simulation study (Section 6) dealing with the subset
ranking problem showing how our framework can be adapted to a specific prediction problem. We
investigate different margin notions as we vary two key design choices in our abstract framework:
the map used to convert continuous predictions into discrete ones, and the choice of the norm used
in the definition of margin.
Related Work. Our general algorithm is related to the perceptron and online gradient descent algorithms used in structured prediction [3, 4]. But, to the best of our knowledge, our emphasis on keeping
label and prediction spaces possibly distinct, our use of a general representation of predictions, and
our investigation of generalized notions of margins are all novel. The use of simplex coding in multiclass problems [5] inspired the use of maximum similarity/minimum distance decoding to obtain
discrete predictions from continuous ones. Our proofs use results about Online Gradient Descent
and Online Mirror Descent from the Online Convex Optimization literature [6].
2
Framework and Main Result
The key ingredients in classic supervised learning are an input space, an output space and a loss
function. In this paper, the input space $\mathcal{X} \subseteq \mathbb{R}^p$ will always be some subset of a finite dimensional
Euclidean space. Our algorithms maintain prediction functions as a linear combination of the seen
inputs. As a result, they easily kernelize and the theory extends, in a straightforward way to the case
when the input space is a, possibly infinite dimensional, reproducing kernel Hilbert space (RKHS).
2.1
Labels, Prediction, and Scores
We will distinguish between the label space and the prediction space. The former is the space where
the training labels come from whereas the latter is the space where the learning algorithm has to
make predictions in. Both spaces will be assumed to be finite. Therefore, without any loss of
generality, we can identify the label space with $[\ell] = \{1, \dots, \ell\}$ and the prediction space with $[k]$, where $\ell, k$ are positive, but perhaps very large, integers. A given loss function $L : [k] \times [\ell] \to \mathbb{R}_+$ maps a prediction $\sigma \in [k]$ and a label $y \in [\ell]$ to a non-negative loss $L(\sigma, y)$. The loss $L$ can equivalently be thought of as a $k \times \ell$ matrix with loss values as entries. Define the set of correct predictions for a label $y$ as $\Omega_y = \{\sigma \in [k] : L(\sigma, y) = 0\}$. We assume that, for every label $y$, the set $\Omega_y$ is non-empty. That is, every column of the loss matrix has a zero entry. Also, let $c_L = \min_{L(\sigma,y)>0} L(\sigma, y)$ and $C_L = \max_{\sigma,y} L(\sigma, y)$ be the minimum (non-zero) and maximum entries in the loss matrix.
In an online setting, the learner will see a stream of examples $(X_\tau, Y_\tau) \in \mathcal{X} \times [\ell]$. The learner will predict scores using a linear predictor $W \in \mathbb{R}^{d \times p}$. However, the predicted scores $W X_\tau$ will be in $\mathbb{R}^d$, not in the prediction space $[k]$. So, we need a function $\mathrm{pred} : \mathbb{R}^d \to [k]$ to convert scores into actual predictions. We will assume that there is a unique representation $\mathrm{rep}(\sigma) \in \mathbb{R}^d$ of each prediction $\sigma$ such that $\|\mathrm{rep}(\sigma)\|_2 = 1$ for all $\sigma$. Given this, a natural transformation of scores into predictions is given by the following maximum similarity decoding:
$$\mathrm{pred}(t) \in \operatorname*{argmax}_{\sigma \in [k]} \langle \mathrm{rep}(\sigma), t \rangle, \qquad (1)$$
where ties in the "argmax" can be broken arbitrarily. There are some nice consequences of the definition of pred above. First, because $\|\mathrm{rep}(\sigma)\|_2 = 1$, maximum similarity decoding is equivalent to nearest neighbor decoding: $\mathrm{pred}(t) \in \operatorname{argmin}_\sigma \|\mathrm{rep}(\sigma) - t\|_2$. Second, we have a homogeneity property: $\mathrm{pred}(ct) = \mathrm{pred}(t)$ if $c > 0$. Third, rep serves as an "inverse" of pred in the following sense. We have $\mathrm{pred}(\mathrm{rep}(\sigma)) = \sigma$ for all $\sigma$. Moreover, $\mathrm{rep}(\mathrm{pred}(t))$ is more similar to $t$ than the representation of any other prediction $\sigma$:
$$\forall t \in \mathbb{R}^d,\ \sigma \in [k]: \quad \langle \mathrm{rep}(\mathrm{pred}(t)), t \rangle \ge \langle \mathrm{rep}(\sigma), t \rangle.$$
In view of these facts, we will use $\mathrm{pred}^{-1}(\sigma)$ and $\mathrm{rep}(\sigma)$ interchangeably. Using pred, the loss function $L$ can be extended to a function defined on $\mathbb{R}^d \times [\ell]$ as $L(t, y) = L(\mathrm{pred}(t), y)$. With a little abuse of notation, we will continue to denote this new function also by $L$.
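To make the decoding concrete, here is a minimal NumPy sketch of pred as maximum similarity (equivalently, nearest-representation) decoding; the representation matrix REP, with one unit-norm row per prediction, is a hypothetical stand-in for whatever rep a given application defines, not code from the paper.

import numpy as np

def pred(t, REP):
    # REP: (k, d) array whose rows are the unit-norm vectors rep(sigma).
    # Maximum similarity decoding: argmax_sigma <rep(sigma), t>. Because
    # each row has unit norm, this coincides with nearest-neighbor
    # decoding argmin_sigma ||rep(sigma) - t||_2.
    return int(np.argmax(REP @ t))

# Example: binary classification with rep(+) = [1], rep(-) = [-1],
# so pred reduces to the sign function.
REP = np.array([[1.0], [-1.0]])
assert pred(np.array([0.3]), REP) == 0   # positive class
assert pred(np.array([-2.0]), REP) == 1  # negative class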
2.2
Margins
We say that a score $t$ is compatible with a label $y$ if the set of $\sigma$'s that achieve the maximum in the definition (1) of pred is exactly $\Omega_y$. That is, $\operatorname{argmax}_{\sigma \in [k]} \langle \mathrm{pred}^{-1}(\sigma), t \rangle = \Omega_y$. Hence, for any $\sigma_y \in \Omega_y$ and $\sigma \notin \Omega_y$, we have $\langle \mathrm{pred}^{-1}(\sigma_y), t \rangle > \langle \mathrm{pred}^{-1}(\sigma), t \rangle$. The notion of margin makes this requirement stronger. We say that a score $t$ has a margin $\gamma > 0$ on label $y$ iff $t$ is compatible with $y$ and
$$\forall \sigma_y \in \Omega_y,\ \sigma \notin \Omega_y: \quad \langle \mathrm{pred}^{-1}(\sigma_y), t \rangle \ge \langle \mathrm{pred}^{-1}(\sigma), t \rangle + \gamma.$$
Note that margin scales with $t$: if $t$ has margin $\gamma$ on $y$ then $ct$ has margin $c\gamma$ on $y$ for any positive $c$. If we are using linear predictions $t = WX$, we say that $W$ has margin $\gamma$ on $(X, y)$ iff $t = WX$ has margin $\gamma$ on $y$. We say that $W$ has margin $\gamma$ on a dataset $(X_1, y_1), \dots, (X_n, y_n)$ iff $W$ has margin $\gamma$ on $(X_\tau, y_\tau)$ for all $\tau \in [n]$. Finally, a dataset $(X_1, y_1), \dots, (X_n, y_n)$ is said to be linearly separable with margin $\gamma$ if there is a unit norm¹ $W^\star$ such that $W^\star$ has margin $\gamma$ on $(X_1, y_1), \dots, (X_n, y_n)$.
2.3
Algorithm
Just like the classic perceptron algorithm, our generalized perceptron algorithm (Algorithm 1) is mistake driven. That is, it only updates on rounds when a mistake, i.e., a non-zero loss, is incurred. On a mistake round, it makes a rank-one update of the form $W_{\tau+1} = W_\tau - \eta\, g_\tau X_\tau^\top$ where $g_\tau \in \mathbb{R}^d$, $X_\tau \in \mathbb{R}^p$. Therefore, $W_\tau$ always has a representation of the form $\sum_i g_i X_i^\top$. The prediction on a fresh input $X$ is given by $\sum_i g_i \langle X_i, X \rangle$, which means the algorithm, just like the original perceptron, can be kernelized.

We will give a loss bound for the algorithm using tools from Online Convex Optimization (OCO). Define the function $\phi : \mathbb{R}^d \times [\ell] \to \mathbb{R}$ as
$$\phi(t, y) = \max_{\sigma \in [k]} \Big[ L(\sigma, y) - \big\langle \mathrm{pred}^{-1}(\sigma_y) - \mathrm{pred}^{-1}(\sigma),\ t \big\rangle \Big] \qquad (2)$$
where $\sigma_y \in \Omega_y$ is an arbitrary member of $\Omega_y$. For any $y$, $\phi(\cdot, y)$ is a point-wise maximum of linear functions and hence convex. Also, $\phi$ is non-negative: choose $\sigma = \sigma_y$ to lower bound the maximum. The inner product part vanishes and the loss $L(\sigma_y, y)$ vanishes too because $\sigma_y \in \Omega_y$. Given the definition of $\phi$, Algorithm 1 can be described succinctly as follows. At round $\tau$, if $L(W_\tau X_\tau, Y_\tau) > 0$, then $W_{\tau+1} = W_\tau - \eta \nabla_W \phi(W X_\tau, Y_\tau)$; otherwise $W_{\tau+1} = W_\tau$.

¹Here, we mean that the Frobenius norm $\|W^\star\|_F$ equals 1. Of course, the notion of margin can be generalized to any norm, including the entry-based $\ell_1$ norm $\|W\|_1$ and the spectrum-based $\ell_1$ norm $\|W\|_{S(1)}$ (also called the nuclear or trace norm). See Appendix B.2.
Algorithm 1 Predtron: Extension of the Perceptron Algorithm to General Prediction Problems
1: $W_1 \leftarrow 0$
2: for $\tau = 1, 2, \dots$ do
3:   Receive $X_\tau \in \mathbb{R}^p$
4:   Predict $\sigma_\tau = \mathrm{pred}(W_\tau X_\tau) \in [k]$
5:   Receive label $y_\tau \in [\ell]$
6:   if $L(\sigma_\tau, y_\tau) > 0$ then
7:     $(t, y) = (W_\tau X_\tau, y_\tau)$
8:     $\hat{\sigma}_\tau = \operatorname{argmax}_{\sigma \in [k]} \big[ L(\sigma, y) - \langle \mathrm{pred}^{-1}(\sigma_y) - \mathrm{pred}^{-1}(\sigma), t \rangle \big] \in [k]$
9:     $r_\tau = (\mathrm{pred}^{-1}(\hat{\sigma}_\tau) - \mathrm{pred}^{-1}(\sigma_y)) \cdot X_\tau^\top \in \mathbb{R}^{d \times p}$
10:    $W_{\tau+1} = W_\tau - \eta\, r_\tau$
11:  else
12:    $W_{\tau+1} = W_\tau$
13:  end if
14: end for
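As a concrete but deliberately naive rendering, the Python sketch below mirrors Algorithm 1 for a small prediction space; the loss matrix L, representation matrix REP, correct-prediction map sigma_y, and step size eta are all user-supplied assumptions, and the brute-force argmax over [k] in line 8 is only viable when k is small (Sections 4 and 5 replace it with problem-specific solvers).

import numpy as np

def predtron(stream, L, REP, sigma_y, eta, d, p):
    # stream: iterable of (X, y) with X in R^p and label y in {0, ..., l-1}.
    # L: (k, l) loss matrix; REP: (k, d) rows rep(sigma) = pred^{-1}(sigma).
    # sigma_y[y]: index of one correct prediction in Omega_y.
    W = np.zeros((d, p))
    for X, y in stream:
        t = W @ X
        sigma = int(np.argmax(REP @ t))          # line 4: pred(W X)
        if L[sigma, y] > 0:                      # line 6: mistake round
            # line 8: argmax of L(sigma, y) - <rep(sigma_y) - rep(sigma), t>
            vals = L[:, y] - (REP[sigma_y[y]] - REP) @ t
            sigma_hat = int(np.argmax(vals))
            # lines 9-10: rank-one update
            W = W - eta * np.outer(REP[sigma_hat] - REP[sigma_y[y]], X)
    return W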
Theorem 1. Suppose the dataset $(X_1, y_1), \dots, (X_n, y_n)$ is linearly separable with margin $\gamma$. Then the sequence $W_\tau$ generated by Algorithm 1 with $\eta = c_L\gamma/(4R^2)$ satisfies the loss bound
$$\sum_{\tau=1}^{n} L(W_\tau X_\tau, y_\tau) \le \frac{4R^2 C_L^2}{c_L\, \gamma^2}$$
where $\|X_\tau\|_2 \le R$ for all $\tau$.

Note that the bound above assumes perfect linear separability. However, just like the classic perceptron, the bound will degrade gracefully when the best linear predictor does not have enough margin on the data set.
The Predtron algorithm has some interesting variants, two of which we consider in the appendix. A
loss driven version, Predtron.LD, enjoys a loss bound that gets rid of the $C_L/c_L$ factor in the bound
above. A version, Predtron.Link, that uses link functions to deal with margins defined with respect
to non-Euclidean norms is also considered.
3
Relationship to Existing Results
It is useful to discuss a few concrete applications of the abstract framework introduced in the last
section. Several existing loss bounds can be readily derived by applying our bound for the generalized perceptron algorithm in Theorem 1. In some cases, our framework yields a different algorithm
than existing counterparts, yet admitting identical loss bounds, up to constants.
Binary Classification. We begin with the classical perceptron algorithm for binary classification (i.e., $\ell = 2$) [7]: $L_{0\text{-}1}(\sigma, y) = 1$ if $\sigma \neq y$ and $0$ otherwise. Letting $\mathrm{rep}(\sigma)$ be $+1$ for the positive class and $-1$ for the negative class, with predictor vector $W_\tau \in \mathbb{R}^{1 \times p}$, and thus $\mathrm{pred}(t) = \mathrm{sign}(t)$, Algorithm 1 reduces to the original perceptron algorithm; Theorem 1 yields an identical mistake bound on a linearly separable dataset with margin (if the classical margin is $\gamma$, ours works out to be $2\gamma$), i.e., $\sum_{\tau=1}^{n} L_{0\text{-}1}(W_\tau X_\tau, y_\tau) \le \frac{R^2}{\gamma^2}$. We can also easily incorporate asymmetric losses. Let $L_\alpha(\sigma, y) = \alpha_y$ if $\sigma \neq y$ and $0$ otherwise. We then have the following result.

Corollary 2. Consider the perceptron with weighted loss $L_\alpha$. Assume $\alpha_1 \ge \alpha_2$ without loss of generality. Then the sequence $W_\tau$ generated by Algorithm 1 satisfies the weighted mistake bound
$$\sum_{\tau=1}^{n} L_\alpha(W_\tau X_\tau, y_\tau) \le \frac{4R^2 \alpha_1^2}{\alpha_2^2\, \gamma^2}.$$

We are not aware of such results for weighted loss. Previous work [8] studies perceptrons with uneven margins, and the loss bound there only implies a bound on the unweighted loss $\sum_{\tau=1}^{n} L_{0\text{-}1}(t_\tau, y_\tau)$. In a technical note, Rätsch and Kivinen [9] provide a mistake bound of the form (without proof): $\sum_{\tau=1}^{n} L_\alpha(W_\tau X_\tau, y_\tau) \le \frac{R^2}{4\gamma^2}$, but for the specific choice of weights $\alpha_1 = a^2$ and $\alpha_2 = (1-a)^2$ for any $a \in [0, 1]$.
Another interesting extension is obtained by allowing the predictions to have a REJECT option. Define $L_{\mathrm{REJ}}(\textsc{Reject}, y) = \delta_y$ and $L_{\mathrm{REJ}}(\sigma, y) = L_{0\text{-}1}(\sigma, y)$ otherwise. Assume $1 \ge \delta_1 \ge \delta_2 > 0$ without loss of generality. Choosing the standard basis vectors in $\mathbb{R}^2$ to be $\mathrm{rep}(\sigma)$ for the positive and negative classes, and $\mathrm{rep}(\textsc{Reject}) = \frac{1}{\sqrt{2}} \sum_{\sigma \in \{1,2\}} \mathrm{rep}(\sigma)$, we obtain $\sum_{\tau=1}^{n} L_{\mathrm{REJ}}(W_\tau X_\tau, y_\tau) \le \frac{4R^2 \delta_1^2}{\delta_2^2\, \gamma^2}$ (see Appendix C.1).
Multiclass Classification. Each instance is assigned exactly one of $m$ classes (i.e., $\ell = m$). Extending binary classification, we choose the standard basis vectors in $\mathbb{R}^m$ to be $\mathrm{rep}(\sigma)$ for the $m$ classes. The learner predicts a score $t \in \mathbb{R}^m$ using the predictor $W \in \mathbb{R}^{m \times p}$. So, $\mathrm{pred}(t) = \operatorname{argmax}_i t_i$. Let $w_j$ denote the $j$th row of $W$ (corresponding to label $j$). The definition of margin becomes
$$\langle w_y, X \rangle \ge \max_{j \neq y} \langle w_j, X \rangle + \gamma,$$
which is identical to the multiclass margin studied earlier [10]. For the multiclass 0-1 loss $L_{0\text{-}1}$, we recover their bound, up to constants². Moreover, our surrogate for $L_{0\text{-}1}$,
$$\phi(t, y) = \max\Big(0,\ 1 + \max_{\sigma \neq y} t_\sigma - t_y\Big),$$
matches the multiclass extension of the Hinge loss studied by [11]. Finally, note that it is straightforward to obtain loss bounds for the multiclass perceptron with a REJECT option by naturally extending the definitions of rep and $L_{\mathrm{REJ}}$ from the binary case.
Ordinal Regression. The goal is to assign ordinal classes (such as ratings) to a set of objects $\{X_1, X_2, \dots\}$ described by their features $X_i \in \mathbb{R}^p$. In many cases, precise rating information may not be available, but only relative ranks; i.e., the observations consist of object-rank pairs $(X_\tau, y_\tau)$ where $y_\tau \in [\ell]$. The label set is totally ordered with the ">" relation, which in turn induces a partial ordering on the objects ($X_j$ is preferred to $X_{j'}$ if $y_j > y_{j'}$; $X_j$ and $X_{j'}$ are not comparable if $y_j = y_{j'}$). For the ranking loss $L(\sigma, y) = |\sigma - y|$, the PRank perceptron algorithm [12] enjoys the bound $\sum_{\tau=1}^{n} L(\sigma_\tau, y_\tau) \le (\ell-1)(R^2+1)/\theta^2$, where $\theta$ is a certain rank margin. By a reduction to multi-class classification with $\ell$ classes, Algorithm 1 achieves the loss bound $4(\ell-1)^2 R^2/\gamma^2$ (albeit, for a different margin $\gamma$).
Multilabel Classification. This setting generalizes multiclass classification in that instances are assigned subsets of $m$ classes rather than unique classes, i.e., $\ell = 2^m$. The loss function $L$ of interest may dictate the choice of rep and in turn pred. For example, consider the following subset losses that treat labels as well as predictions as subsets: (i) Subset 0-1 loss: $L_{\mathrm{IsErr}}(\sigma, y) = 1$ if $\sigma \neq y$ and $0$ otherwise; (ii) Hamming loss: $L_{\mathrm{Ham}}(\sigma, y) = |\sigma \cup y| - |\sigma \cap y|$; and (iii) Error set size: $L_{\mathrm{ErrSetSize}}(\sigma, y) = \big|\{(r, s) \in y \times ([m] \setminus y) : r \notin \sigma,\ s \in \sigma\}\big|$. A natural choice of rep then is the subset indicator vector expressed in $\{+1, -1\}^d$, where $d = m = \log_2 \ell$, which can be written as $\mathrm{rep}(\sigma) = \frac{1}{\sqrt{m}}\big(\sum_{j \in \sigma} e_j - \sum_{j \notin \sigma} e_j\big)$ (where the $e_j$'s are the standard basis vectors in $\mathbb{R}^m$). The learner predicts a score $t \in \mathbb{R}^m$ using a matrix $W \in \mathbb{R}^{m \times p}$. Note that $\mathrm{pred}(t) = \mathrm{sign}(t)$, where sign is applied component-wise. The number of predictions is $2^m$, but we show in Appendix C.2 that the surrogate (2) and its gradient can be efficiently computed for all of the above losses.
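As an illustration of the multilabel instantiation, the sketch below builds the scaled ±1 indicator representation and the component-wise sign decoding described above; it is a minimal rendering of the construction, not the appendix's efficient surrogate computation.

import numpy as np

def rep_subset(subset, m):
    # Scaled +/-1 indicator of a label subset, so ||rep||_2 = 1.
    v = -np.ones(m)
    v[list(subset)] = 1.0
    return v / np.sqrt(m)

def pred_subset(t):
    # Component-wise sign decoding: label j is predicted iff t_j > 0.
    return {j for j in range(len(t)) if t[j] > 0}

def hamming_loss(sigma, y):
    # |sigma union y| - |sigma intersect y| = size of symmetric difference.
    return len(sigma ^ y)

t = np.array([0.7, -0.2, 1.3])
assert pred_subset(t) == {0, 2}
assert hamming_loss({0, 2}, {0, 1}) == 2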
4
Subset Ranking
In subset ranking [13], the task is to learn to rank a number of documents in order of their relevance to a query. We will assume, for simplicity, that the number of documents per query is a constant that we denote by $m$. The input space is a subset of $\mathbb{R}^{m \times p_0}$ that we can identify with $\mathbb{R}^p$ for $p = m p_0$. Each row of an input matrix corresponds to a $p_0$-dimensional feature vector derived jointly using the query and one of the documents associated with it. The predictions are all $m!$ permutations of degree $m$. The most natural (but by no means the only) representation of permutations is to set $\mathrm{rep}(\pi) = -\pi/Z$, where $\pi(i)$ is the position of document $i$ in the predicted ranking and the normalization $Z$ ensures that $\mathrm{rep}(\pi)$ is a unit vector. Note that the dimension $d$ of this representation is equal to $m$. The minus sign in this representation ensures that $\mathrm{pred}(t)$ outputs a permutation that corresponds to sorting the entries of $t$ in decreasing order, a common convention in existing work. A more general representation is obtained by setting $\mathrm{rep}(\pi) = f(\pi)/Z$, where $f : \mathbb{R} \to \mathbb{R}$ is a strictly decreasing real valued function that is applied entry-wise to $\pi$. The normalization $Z = \sqrt{\sum_{i=1}^m f^2(i)}$ ensures that $\|\mathrm{rep}(\pi)\|_2 = 1$. To convert an input matrix $X \in \mathbb{R}^p$ ($p = m p_0$) into a score vector $t \in \mathbb{R}^m$, it seems that we need to learn a matrix $W \in \mathbb{R}^{m \times m p_0}$. However, a natural permutation invariance requirement (if the associated documents are presented in a permuted fashion, the output scores should also get permuted in the same way) reduces the dimensionality of $W$ to $p_0$ (see, e.g., [14] for more details). Thus, given a vector $w \in \mathbb{R}^{p_0}$ we get the score vector as $t = Xw$. The label space consists of relevance score vectors $y \in \{0, 1, \dots, Y_{\max}\}^m$, where $Y_{\max}$ is typically between 1 and 4 (yielding 2 to 5 grades of relevance). Note that the prediction space (of size $k = m!$) is different from the label space (of size $\ell = (Y_{\max}+1)^m$).

²The perceptron algorithm in [10] is based on a slightly different loss defined as $L_{\mathrm{ErrSet}}(t, y) = 1$ if $|\{r \neq y : t_r \ge t_y\}| > 0$ and $0$ otherwise (where $t = WX$). This loss upper bounds $L_{0\text{-}1}$ (because of the way ties are handled, there can be rounds when $L_{0\text{-}1}$ is 0 but $L_{\mathrm{ErrSet}}$ is 1).
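A small sketch of this permutation representation, assuming $f(i) = -i^{1.1}$ (one of the choices used in Section 6): with such an entry-wise decreasing $f$, maximum similarity decoding amounts to sorting the scores in decreasing order, which the code below checks.

import numpy as np

def rep_perm(pi, f=lambda i: -i**1.1):
    # pi[i] is the 1-based position of document i in the ranking.
    v = f(pi.astype(float))
    return v / np.linalg.norm(v)

def pred_perm(t):
    # Decoding: the document with the largest score gets position 1, etc.
    order = np.argsort(-t)               # indices by decreasing score
    pi = np.empty_like(order)
    pi[order] = np.arange(1, len(t) + 1)
    return pi

t = np.array([0.2, 1.5, -0.3])
assert (pred_perm(t) == np.array([2, 1, 3])).all()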
A variety of loss functions have been used in subset ranking. For multigraded relevance judgments, a very popular choice is NDCG, which is defined as
$$\mathrm{NDCG}(\pi, y) = \sum_{i=1}^{m} \frac{2^{y(i)} - 1}{\log_2(1 + \pi(i))} \Big/ Z(y)$$
where $Z(y)$ is a normalization constant ensuring NDCG stays bounded by 1. To convert it into a loss we define $L_{\mathrm{NDCG}} = 1 - \mathrm{NDCG}$. Note that any permutation that sorts $y$ in decreasing order gets zero $L_{\mathrm{NDCG}}$. One might worry that the computation of the surrogate defined in (2) and its gradient might require an enumeration of $m!$ permutations. The next lemma allays such a concern.
Lemma 3. When $L = L_{\mathrm{NDCG}}$ and $\mathrm{rep}(\pi)$ is chosen as above, the computation of the surrogate (2), as well as its gradient, can be reduced to solving a linear assignment problem and hence can be done in $O(m^3)$ time.
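To make Lemma 3 concrete, here is a sketch computing $L_{\mathrm{NDCG}}$ directly and solving the surrogate's argmax as a linear assignment, assuming SciPy's Hungarian-method solver and taking $Z(y)$ as the ideal DCG: both the NDCG term and the inner product $\langle \mathrm{rep}(\pi), t \rangle$ decompose into per-(document, position) weights, which is what makes the assignment reduction work.

import numpy as np
from scipy.optimize import linear_sum_assignment

def lndcg(pi, y):
    # pi[i]: 1-based position of document i; y[i]: integer relevance grade.
    gains = 2.0 ** y - 1.0
    pos = np.arange(1, len(y) + 1, dtype=float)
    ideal = np.sum(np.sort(gains)[::-1] / np.log2(1.0 + pos))  # Z(y)
    return 1.0 - np.sum(gains / np.log2(1.0 + pi)) / ideal

def surrogate_argmax(t, y, f=lambda j: -j**1.1):
    # Maximize L_NDCG(pi, y) + <rep(pi), t> over permutations pi.
    # w[i, j] = weight of placing document i at position j+1; the best
    # assignment is found in O(m^3) time, as claimed in Lemma 3.
    m = len(t)
    pos = np.arange(1, m + 1, dtype=float)
    gains = 2.0 ** y - 1.0
    z_y = np.sum(np.sort(gains)[::-1] / np.log2(1.0 + pos))   # ideal DCG
    z_f = np.linalg.norm(f(pos))                              # ||f(pi)||_2
    w = (-np.outer(gains, 1.0 / np.log2(1.0 + pos)) / z_y     # NDCG part
         + np.outer(t, f(pos)) / z_f)                         # <rep(pi), t>
    rows, cols = linear_sum_assignment(-w)                    # maximize
    pi = np.empty(m, dtype=int)
    pi[rows] = cols + 1
    return pi

y = np.array([2, 0, 1])
assert np.isclose(lndcg(np.array([1, 3, 2]), y), 0.0)  # ideal order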
We now give a result explaining what it means for a score vector $t$ to have a margin $\gamma$ on $y$ when we use a representation of the form described above. Without loss of generality, we may assume that $y$ is sorted in decreasing order of relevance judgements.

Lemma 4. Suppose $\mathrm{rep}(\pi) = f(\pi)/Z$ for a strictly decreasing function $f : \mathbb{R} \to \mathbb{R}$ and $Z = \sqrt{\sum_{i=1}^{m} f^2(i)}$. Let $y$ be a non-constant relevance judgement vector sorted in decreasing order. Suppose $i_1 < i_2 < \dots < i_N$, $N \ge 1$, are the positions where the relevance drops by a grade or more (i.e., $y(i_j) < y(i_j - 1)$). Then $t$ has a margin $\gamma$ on $y$ iff $t$ is compatible with $y$ and, for $j \in [N]$,
$$t_{i_j - 1} \ge t_{i_j} + \frac{\gamma Z}{f(i_j - 1) - f(i_j)}$$
where we define $i_0 = 1$, $i_{N+1} = m + 1$ to handle boundary cases.

Note that if we choose $f(i) = -i^\alpha$, $\alpha > 1$, then $f(i_j - 1) - f(i_j) = O(i_j^{\alpha - 1})$ for large $i_j$. In that case, the margin condition above requires less separation between documents with different relevance scores down the list (when viewed in decreasing order of relevance scores) than at the top of the list. We end this section with a loss bound for $L_{\mathrm{NDCG}}$ under a margin condition.
Corollary 5. Suppose $L = L_{\mathrm{NDCG}}$ and $\mathrm{rep}(\pi)$ is as in Lemma 4. Then, assuming the dataset is linearly separable with margin $\gamma$, the sequence generated by Algorithm 1 with line 9 replaced by
$$r_\tau = X_\tau^\top\big(\mathrm{pred}^{-1}(\hat{\pi}_\tau) - \mathrm{pred}^{-1}(\pi_y)\big) \in \mathbb{R}^{p_0 \times 1}$$
satisfies
$$\sum_{\tau=1}^{n} L_{\mathrm{NDCG}}(X_\tau w_\tau, y_\tau) \le \frac{2^{Y_{\max}+3} \cdot m^2 \log_2^2(2m) \cdot R^2}{\gamma^2}$$
where $\|X_\tau\|_{\mathrm{op}} \le R$.
Note that the result above uses the standard $\ell_2$-norm based notion of margin. Imagine a subset ranking problem where only a small number of features are relevant. It is therefore natural to consider a notion of margin where the weight vector that ranks everything perfectly has low $\ell_1$ norm, instead of low $\ell_2$ norm. The $\ell_1$ margin also appears in the analysis of AdaBoost [2, Definition 6.2]. We can use a special case of a more general algorithm given in the appendix (Appendix B.2, Algorithm 3). Specifically, we replace line 10 with the step $w_{\tau+1} = (\nabla\psi)^{-1}(\nabla\psi(w_\tau) - \eta\, r_\tau)$ where $\psi(w) = \frac{1}{2}\|w\|_r^2$. We set $r = \log(p_0)/(\log(p_0) - 1)$. The mapping $\nabla\psi$ and its inverse can both be easily computed (see, e.g., [6, p. 145]).
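A sketch of this p-norm link step, under the standard closed forms: for $\psi(w) = \frac{1}{2}\|w\|_r^2$ the gradient is $\nabla\psi(w)_j = \mathrm{sign}(w_j)|w_j|^{r-1}/\|w\|_r^{r-2}$, and its inverse is the same map with the dual exponent $r/(r-1)$. This is our rendering of the textbook mapping the paper cites, not code from the paper.

import numpy as np

def pnorm_link(w, r):
    # Gradient of psi(w) = 0.5 * ||w||_r^2 (the "link" function).
    norm = np.linalg.norm(w, ord=r)
    if norm == 0.0:
        return np.zeros_like(w)
    return np.sign(w) * np.abs(w) ** (r - 1) / norm ** (r - 2)

def pnorm_link_inv(theta, r):
    # The inverse link is the same map with the dual exponent r/(r-1).
    return pnorm_link(theta, r / (r - 1.0))

def mirror_step(w, grad, eta, p0):
    # Line 10 of Algorithm 3 specialized to the l1-style margin:
    # w_{tau+1} = (grad psi)^{-1}(grad psi(w_tau) - eta * r_tau).
    r = np.log(p0) / (np.log(p0) - 1.0)
    return pnorm_link_inv(pnorm_link(w, r) - eta * grad, r)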
Corollary 6. Suppose $L = L_{\mathrm{NDCG}}$ and $\mathrm{rep}(\pi)$ is as in Lemma 4. Then, assuming the dataset is linearly separable with margin $\gamma$ by a unit $\ell_1$ norm $w^\star$ ($\|w^\star\|_1 = 1$), the sequence generated by Algorithm 3 with $\psi$ chosen as above (and line 9 modified as in Corollary 5) satisfies
$$\sum_{\tau=1}^{n} L_{\mathrm{NDCG}}(X_\tau w_\tau, y_\tau) \le \frac{9 \cdot 2^{Y_{\max}+3} \cdot m^2 \log_2^2(2m) \cdot R^2 \cdot \log p_0}{\gamma^2}$$
where $\max_{j=1,\dots,p_0} \|X_{\tau,j}\|_2 \le R$ and $X_{\tau,j}$ denotes the $j$th column of $X_\tau$.
5
Multilabel Ranking
As discussed in Section 3, in multilabel classification both the prediction space and the label space are $\{0,1\}^m$, with sizes $k = \ell = 2^m$. In multilabel ranking, however, the learner has to output rankings as predictions. So, as in the previous section, we have $k = m!$ since the prediction can be any one of the $m!$ permutations of the labels. As before, we choose $\mathrm{rep}(\pi) = f(\pi)/Z$ and hence $d = m$. However, unlike the previous section, the input is no longer a matrix but a vector $X \in \mathbb{R}^p$. A prediction $t \in \mathbb{R}^d$ is obtained as $WX$ where $W \in \mathbb{R}^{m \times p}$. Note the contrast with the last section: there, inputs are matrices and a weight vector is learned; here, inputs are vectors and a weight matrix is learned. Since we output rankings, it is reasonable to use a loss that takes positions of labels into account. We can use $L = L_{\mathrm{NDCG}}$. Algorithm 1 now immediately applies. Lemma 3 already showed that $\phi$ is efficiently implementable. We have the following straightforward corollary.

Corollary 7. Suppose $L = L_{\mathrm{NDCG}}$ and $\mathrm{rep}(\pi)$ is as in Lemma 4. Then, assuming the dataset is linearly separable with margin $\gamma$, the sequence generated by Algorithm 1 satisfies
$$\sum_{\tau=1}^{n} L_{\mathrm{NDCG}}(W_\tau X_\tau, y_\tau) \le \frac{2^{Y_{\max}+3} \cdot m^2 \log_2^2(2m) \cdot R^2}{\gamma^2}$$
where $\|X_\tau\|_2 \le R$.

The bound above matches the corresponding bound, up to loss specific constants, for the multiclass multilabel perceptron (MMP) algorithm studied by [15]. The definition of margin by [15] for MMP is different from ours since their algorithms are designed specifically for multilabel ranking. Just like them, we can also consider other losses, e.g., precision at the top $K$ positions. Another perceptron style algorithm for multilabel ranking adopts a pairwise approach of comparing two labels at a time [16]. However, no loss bounds are derived there.
The result above uses the standard Frobenius norm based margin. Imagine a multilabel problem where only a small number of features are relevant across all labels. Then, it is natural to consider a notion of margin where the matrix that ranks everything perfectly has low group $(2,1)$ norm, instead of low Frobenius norm, where $\|W\|_{2,1} = \sum_{j=1}^{p} \|W_j\|_2$ ($W_j$ denotes a column of $W$). We again use a special case of Algorithm 3 (Appendix B.2). Specifically, we replace line 10 with the step $W_{\tau+1} = (\nabla\psi)^{-1}(\nabla\psi(W_\tau) - \eta\, r_\tau)$ where $\psi(W) = \frac{1}{2}\|W\|_{2,r}^2$. Recall that the group $(2, r)$-norm is the $\ell_r$ norm of the $\ell_2$ norms of the columns of $W$. We set $r = \log(p)/(\log(p) - 1)$. The mapping $\nabla\psi$ and its inverse can both be easily computed (see, e.g., [17, Eq. (2)]).

Corollary 8. Suppose $L = L_{\mathrm{NDCG}}$ and $\mathrm{rep}(\pi)$ is as in Lemma 4. Then, assuming the dataset is linearly separable with margin $\gamma$ by a unit group norm $W^\star$ ($\|W^\star\|_{2,1} = 1$), the sequence generated by Algorithm 3 with $\psi$ chosen as above satisfies
$$\sum_{\tau=1}^{n} L_{\mathrm{NDCG}}(W_\tau X_\tau, y_\tau) \le \frac{9 \cdot 2^{Y_{\max}+3} \cdot m^2 \log_2^2(2m) \cdot R^2 \cdot \log p}{\gamma^2}$$
where $\|X_\tau\|_\infty \le R$.
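The group-norm case can reuse the scalar link componentwise on column norms. Below is a sketch under our reading of the closed form the paper attributes to [17, Eq. (2)]: each column of $W$ is rescaled through the p-norm link applied to the vector of column $\ell_2$ norms, and the inverse is again the same map with the dual exponent.

import numpy as np

def group_link(W, r):
    # Gradient of psi(W) = 0.5 * ||W||_{2,r}^2: column j is scaled by
    # ||W_j||_2^{r-2} / ||W||_{2,r}^{r-2}; directions are unchanged.
    col = np.linalg.norm(W, axis=0)
    total = np.linalg.norm(col, ord=r)
    if total == 0.0:
        return np.zeros_like(W)
    scale = np.zeros_like(col)
    mask = col > 0
    scale[mask] = col[mask] ** (r - 2) / total ** (r - 2)
    return W * scale

def group_link_inv(Theta, r):
    return group_link(Theta, r / (r - 1.0))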
[Figure 1 appears here: three panels of test loss $(1 - \mathrm{NDCG})$ curves. (a) Subset Ranking ($m = 20$, $p_0 = 30$): loss vs. number of training points $n$, for $\mathrm{pred}^{-1}(\pi(i)) \in \{1/i,\ -i^{1.1},\ -i^2\}$. (b) Subset Ranking ($n = 30$, $p_0 = 30$): loss vs. number of documents $m$ in each instance, for the same three choices. (c) L1 vs L2 ($s = 50$, $n = 50$, $m = 20$): loss vs. data dimensionality $p_0$ for Predtron-L2 and Predtron-L1.]

Figure 1: Subset Ranking: NDCG loss for different pred choices with varying $n$ (Plot (a)) and $m$ (Plot (b)). As predicted by Lemmas 4 and 5, $\mathrm{pred}^{-1}(\pi(i)) = -i^{1.1}$ is more accurate than $1/i$. (c): L1 vs L2 margin. $L_{\mathrm{NDCG}}$ for two different Predtron algorithms based on L1 and L2 margins. Data is generated using the L1 margin notion but with varying sparsity of the optimal scoring function $w^\star$.
6
Experiments
We now present simulation results to demonstrate the application of our proposed Predtron framework to subset ranking. We also demonstrate that the empirical results match the trend predicted by our error bounds, hence hinting at the tightness of our (upper) bounds. Due to lack of space, we focus only on the subset ranking problem. Also, we would like to stress that we do not claim that the basic version of Predtron itself (with $\eta = 1$) provides a state-of-the-art ranker. Instead, we wish to demonstrate the applicability and flexibility of our framework in a controlled setting.

We generated $n$ data points $X_\tau \in \mathbb{R}^{m \times p_0}$ using a Gaussian distribution with independent rows. The $i$th row of $X_\tau$ represents a document and is sampled from a spherical Gaussian centered at $\mu_i$. We selected a $w^\star \in \mathbb{R}^{p_0}$ and also a set of thresholds $[\theta_1, \dots, \theta_{m+1}]$ to generate relevance scores; we set $\theta_j = \frac{1}{j}$ for all $2 \le j \le m$, and $\theta_1 = +\infty$ and $\theta_{m+1} = -\infty$. We set the relevance score $y_\tau(i)$ of the $i$th document in the $\tau$th document-set as $y_\tau(i) = m - j$ iff $\theta_{j+1} \le \langle X_\tau(i), w^\star \rangle \le \theta_j$. That is, $y_\tau(i) \in \{0, 1, \dots, m-1\}$.
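A sketch of this data generator under our reading of the setup; the Gaussian means mu_i and the noise scale are not pinned down in the text, so the standard-normal choices below are placeholders.

import numpy as np

def make_subset_ranking_data(n, m, p0, seed=0):
    rng = np.random.default_rng(seed)
    w_star = rng.standard_normal(p0)
    mu = rng.standard_normal((m, p0))   # document means mu_i (placeholder)
    asc = 1.0 / np.arange(m, 1, -1)     # thresholds 1/m < ... < 1/2
    X, Y = [], []
    for _ in range(n):
        Xi = mu + rng.standard_normal((m, p0))  # row i ~ N(mu_i, I)
        s = Xi @ w_star
        y = np.searchsorted(asc, s)     # grade m - j, in {0, ..., m-1}
        X.append(Xi)
        Y.append(y)
    return np.stack(X), np.stack(Y), w_star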
We measure the performance of a given method using the NDCG loss $L_{\mathrm{NDCG}}$ defined in Section 4. Note that $L_{\mathrm{NDCG}}$ is less sensitive to errors in predictions for the less relevant documents in the list. On the other hand, our selection of thresholds $\theta_i$ implies that the gap between the scores of lower-ranked documents is very small compared to the higher-ranked ones, and hence the chance of making mistakes lower down the list is higher.

Figure 1 (a) shows $L_{\mathrm{NDCG}}$ (on a test set) for our Predtron algorithm (see Section 4) but with different $\mathrm{pred}^{-1}$ functions. For $\mathrm{pred}^{-1}(\pi(i)) = f_2(i) = -i^{1.1}$, $f_2(i-1) - f_2(i)$ is monotonically increasing with $i$. On the other hand, for $\mathrm{pred}^{-1}(\pi(i)) = f_1(i) = 1/i$, $f_1(i-1) - f_1(i)$ is monotonically decreasing with $i$. Lemma 4 shows that the mistake bound (in terms of $L_{\mathrm{NDCG}}$) of Predtron is better when the $\mathrm{pred}^{-1}$ function is selected to be $f_2(i) = -i^{1.1}$ (as well as for $f_3(i) = -i^2$) instead of $f_1(i) = 1/i$. Clearly, Figure 1 (a) empirically validates this mistake bound, with $L_{\mathrm{NDCG}}$ going to almost 0 for $f_2$ and $f_3$ with just 60 training points, while $f_1$-based Predtron has large loss even with $n = 100$ training points.

Next, we fix the number of training instances to be $n = 30$ and vary the number of documents $m$. As the gap between the $\theta_i$'s decreases for larger $i$, increasing $m$ implies reducing the margin. Naturally, Predtron with the above mentioned inverse functions has monotonically increasing loss (see Figure 1 (b)). However, $f_2$ and $f_3$ provide zero-loss solutions for larger $m$ when compared to $f_1$.

Finally, we conduct an experiment to show that by selecting an appropriate notion of margin, Predtron can obtain more accurate solutions. To this end, we generate data from $[-1, 1]^{p_0}$ and select a sparse $w^\star$. Now, Predtron with the $\ell_2$-margin notion, i.e., standard gradient descent, has a $\sqrt{p_0}$ dependency in the error bounds, while the $\ell_1$-margin (see Corollary 6) has only an $s \log(p_0)$ dependence. This error dependency is also revealed by Figure 1 (c), where increasing $p_0$ with fixed $s$ leads to a minor increase in the loss for $\ell_1$-based Predtron but leads to significantly higher loss for $\ell_2$-based Predtron.
Acknowledgments
A. Tewari acknowledges the support of NSF under grant IIS-1319810.
References
[1] Harish G. Ramaswamy and Shivani Agarwal. Classification calibration dimension for general multiclass losses. In Advances in Neural Information Processing Systems, pages 2078-2086, 2012.
[2] Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of Machine Learning. MIT Press, 2012.
[3] Michael Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing - Volume 10, pages 1-8, 2002.
[4] Nathan D. Ratliff, J. Andrew Bagnell, and Martin Zinkevich. (Approximate) subgradient methods for structured prediction. In International Conference on Artificial Intelligence and Statistics, pages 380-387, 2007.
[5] Youssef Mroueh, Tomaso Poggio, Lorenzo Rosasco, and Jean-Jacques Slotine. Multiclass learning with simplex coding. In Advances in Neural Information Processing Systems, pages 2789-2797, 2012.
[6] Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107-194, 2011.
[7] Albert B.J. Novikoff. On convergence proofs on perceptrons. In Proceedings of the Symposium on the Mathematical Theory of Automata, volume 12, pages 615-622, 1962.
[8] Yaoyong Li, Hugo Zaragoza, Ralf Herbrich, John Shawe-Taylor, and Jaz S. Kandola. The perceptron algorithm with uneven margins. In Proceedings of the Nineteenth International Conference on Machine Learning, pages 379-386, 2002.
[9] Gunnar Rätsch and Jyrki Kivinen. Extended classification with modified perceptron, 2002. Presented at the NIPS 2002 Workshop: Beyond Classification and Regression: Learning Rankings, Preferences, Equality Predicates, and Other Structures; abstract available at http://www.cs.cornell.edu/people/tj/ranklearn/raetsch_kivinen.pdf.
[10] Koby Crammer and Yoram Singer. Ultraconservative online algorithms for multiclass problems. The Journal of Machine Learning Research, 3:951-991, 2003.
[11] Koby Crammer and Yoram Singer. On the algorithmic implementation of multiclass kernel-based vector machines. The Journal of Machine Learning Research, 2:265-292, 2002.
[12] Koby Crammer and Yoram Singer. Pranking with ranking. Advances in Neural Information Processing Systems, 14:641-647, 2002.
[13] David Cossock and Tong Zhang. Statistical analysis of Bayes optimal subset ranking. IEEE Transactions on Information Theory, 54(11):5140-5154, 2008.
[14] Ambuj Tewari and Sougata Chaudhuri. Generalization error bounds for learning to rank: Does the length of document lists matter? In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of JMLR Workshop and Conference Proceedings, 2015.
[15] Koby Crammer and Yoram Singer. A family of additive online algorithms for category ranking. The Journal of Machine Learning Research, 3:1025-1058, 2003.
[16] Eneldo Loza Mencía and Johannes Fürnkranz. Pairwise learning of multilabel classifications with perceptrons. In IEEE International Joint Conference on Neural Networks, pages 2899-2906, 2008.
[17] Sham M. Kakade, Shai Shalev-Shwartz, and Ambuj Tewari. Regularization techniques for learning with matrices. Journal of Machine Learning Research, 13:1865-1890, 2012.
On the Optimality of Classifier Chain for
Multi-label Classification
Weiwei Liu
Ivor W. Tsang∗
Centre for Quantum Computation and Intelligent Systems
University of Technology, Sydney
[email protected], [email protected]
Abstract
To capture the interdependencies between labels in multi-label classification problems, classifier chain (CC) tries to take the multiple labels of each instance into
account under a deterministic high-order Markov Chain model. Since its performance is sensitive to the choice of label order, the key issue is how to determine
the optimal label order for CC. In this work, we first generalize the CC model over
a random label order. Then, we present a theoretical analysis of the generalization error for the proposed generalized model. Based on our results, we propose
a dynamic programming based classifier chain (CC-DP) algorithm to search the
globally optimal label order for CC and a greedy classifier chain (CC-Greedy)
algorithm to find a locally optimal CC. Comprehensive experiments on a number of real-world multi-label data sets from various domains demonstrate that our
proposed CC-DP algorithm outperforms state-of-the-art approaches and the CC-Greedy algorithm achieves comparable prediction performance with CC-DP.
1
Introduction
Multi-label classification, where each instance can belong to multiple labels simultaneously, has
significantly attracted the attention of researchers as a result of its various applications, ranging from
document classification and gene function prediction, to automatic image annotation. For example,
a document can be associated with a range of topics, such as Sports, Finance and Education [1]; a
gene belongs to the functions of protein synthesis, metabolism and transcription [2]; an image may
have both beach and tree tags [3].
One popular strategy for multi-label classification is to reduce the original problem into many binary classification problems. Many works have followed this strategy. For example, binary relevance
(BR) [4] is a simple approach for multi-label learning which independently trains a binary classifier
for each label. Recently, Dembczynski et al. [5] have shown that methods of multi-label learning which explicitly capture label dependency will usually achieve better prediction performance.
Therefore, modeling the label dependency is one of the major challenges in multi-label classification problems. Many multi-label learning models [5, 6, 7, 8, 9, 10, 11, 12] have been developed to
capture label dependency. Amongst them, the classifier chain (CC) model is one of the most popular
methods due to its simplicity and promising experimental results [6].
CC works as follows: One classifier is trained for each label. For the (i + 1)th label, each instance
is augmented with the 1st, 2nd, $\dots$, $i$th labels as the input to train the $(i+1)$th classifier. Given a
new instance to be classified, CC firstly predicts the value of the first label, then takes this instance
together with the predicted value as the input to predict the value of the next label. CC proceeds
in this way until the last label is predicted. However, here is the question: Does the label order
affect the performance of CC? Apparently yes, because different classifier chains involve different
∗Corresponding author
classifiers trained on different training sets. Thus, to reduce the influence of the label order, Read et
al. [6] proposed the ensembled classifier chain (ECC) to average the multi-label predictions of CC
over a set of random chain orderings. Since the performance of CC is sensitive to the choice of label
order, there is another important question: Is there any globally optimal classifier chain which can
achieve the optimal prediction performance for CC? If yes, how can the globally optimal classifier
chain be found?
To answer the last two questions, we first generalize the CC model over a random label order. We
then present a theoretical analysis of the generalization error for the proposed generalized model.
Our results show that the upper bound of the generalization error depends on the sum of the reciprocals
of the squared margins over the labels. Thus, we can answer the second question: the globally
optimal CC exists only when the minimization of the upper bound is achieved over this CC. To
find the globally optimal CC, we can search over q! different label orders1 , where q denotes the
number of labels, which is computationally infeasible for a large q. In this paper, we propose the
dynamic programming based classifier chain (CC-DP) algorithm to simplify the search algorithm,
which requires $O(q^3 nd)$ time complexity. Furthermore, to speed up the training process, a greedy
classifier chain (CC-Greedy) algorithm is proposed to find a locally optimal CC, where the time
complexity of the CC-Greedy algorithm is $O(q^2 nd)$.
Notations: Assume $x_t \in \mathbb{R}^d$ is a real vector representing an input or instance (feature) for $t \in \{1, \dots, n\}$. $n$ denotes the number of training samples. $Y_t \subseteq \{\lambda_1, \lambda_2, \dots, \lambda_q\}$ is the corresponding output (label set). $y_t \in \{0, 1\}^q$ is used to represent the label set $Y_t$, where $y_t(j) = 1$ if and only if $\lambda_j \in Y_t$.
2
Related work and preliminaries
To capture label dependency, Hsu et al. [13] first use compressed sensing technique to handle the
multi-label classification problem. They project the original label space into a low dimensional label
space. A regression model is trained on each transformed label. Recovering multi-labels from the
regression output usually involves solving a quadratic programming problem [13], and many works
have been developed in this way [7, 14, 15]. Such methods mainly aim to use different projection
methods to transform the original label space into another effective label space.
Another important approach attempts to exploit the different orders (first-order, second-order and
high-order) of label correlations [16]. Following this way, some works also try to provide a probabilistic interpretation for label correlations. For example, Guo and Gu [8] model the label correlations using a conditional dependency network; PCC [5] exploits a high-order Markov Chain model
to capture the correlations between the labels and provide an accurate probabilistic interpretation of
CC. Other works [6, 9, 10] focus on modeling the label correlations in a deterministic way, and CC
is one of the most popular methods among them. This work will mainly focus on the deterministic
high-order classifier chain.
2.1
Classifier chain
Similar to BR, the classifier chain (CC) model [6] trains $q$ binary classifiers $h_j$ ($j \in \{1, \dots, q\}$). Classifiers are linked along a chain where each classifier $h_j$ deals with the binary classification problem for label $\lambda_j$. The augmented vector $\{x_t, y_t(1), \dots, y_t(j)\}_{t=1}^{n}$ is used as the input for training classifier $h_{j+1}$. Given a new testing instance $x$, classifier $h_1$ in the chain is responsible for predicting the value of $y(1)$ using input $x$. Then, $h_2$ predicts the value of $y(2)$ taking $x$ plus the predicted value of $y(1)$ as input. Following in this way, $h_{j+1}$ predicts $y(j+1)$ using the predicted values of $y(1), \dots, y(j)$ as additional input information. CC passes label information between classifiers, allowing CC to exploit the label dependence and thus overcome the label independence problem of BR. Essentially, it builds a deterministic high-order Markov Chain model to capture the label correlations.
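A minimal scikit-learn-style sketch of CC training and prediction, assuming binary base learners with fit/predict (logistic regression here is a stand-in for illustration; the paper's analysis uses SVMs):

import numpy as np
from sklearn.linear_model import LogisticRegression

def train_cc(X, Y, order):
    # X: (n, d) features; Y: (n, q) binary label matrix; order: label order.
    chain, aug = [], X
    for j in order:
        clf = LogisticRegression().fit(aug, Y[:, j])
        chain.append(clf)
        aug = np.hstack([aug, Y[:, j:j + 1]])   # append true label as input
    return chain

def predict_cc(chain, order, x):
    q, aug = len(order), x.reshape(1, -1)
    y = np.zeros(q)
    for clf, j in zip(chain, order):
        y[j] = clf.predict(aug)[0]
        aug = np.hstack([aug, [[y[j]]]])        # append predicted label
    return y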
¹! represents the factorial notation.
2.2
Ensembled classifier chain
Different classifier chains involve different classifiers learned on different training sets and thus the
order of the chain itself clearly affects the prediction performance. To solve the issue of selecting a
chain order for CC, Read et al. [6] proposed the extension of CC, called ensembled classifier chain
(ECC), to average the multi-label predictions of CC over a set of random chain orderings. ECC first randomly reorders the labels $\{\lambda_1, \lambda_2, \dots, \lambda_q\}$ many times. Then, CC is applied to the reordered labels each time, and the performance of CC is averaged over those runs to obtain the final prediction performance.
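A sketch of the ECC averaging step, reusing the train_cc/predict_cc helpers from the earlier sketch (hypothetical code, not the authors'): per-label votes are averaged over random orders and thresholded at 0.5.

import numpy as np

def predict_ecc(X, Y, x, n_chains=10, seed=0):
    rng = np.random.default_rng(seed)
    q = Y.shape[1]
    votes = np.zeros(q)
    for _ in range(n_chains):
        order = list(rng.permutation(q))
        chain = train_cc(X, Y, order)       # retrains per order; a sketch
        votes += predict_cc(chain, order, x)
    return (votes / n_chains >= 0.5).astype(int)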
3
Proposed model and generalization error analysis
3.1
Generalized classifier chain
We generalize the CC model over a random label order, giving the generalized classifier chain (GCC) model. Assume the labels $\{\lambda_1, \lambda_2, \dots, \lambda_q\}$ are randomly reordered as $\{\zeta_1, \zeta_2, \dots, \zeta_q\}$, where $\zeta_j = \lambda_k$ means label $\lambda_k$ moves to position $j$ from $k$. In the GCC model, classifiers are also linked along a chain where each classifier $h_j$ deals with the binary classification problem for label $\zeta_j$ ($\lambda_k$). GCC follows the same training and testing procedures as CC, while the only difference is the label order. In the GCC model, for input $x_t$, $y_t(j) = 1$ if and only if $\zeta_j \in Y_t$.
3.2
Generalization error analysis
In this section, we analyze the generalization error bound of the multi-label classification problem
using GCC based on the techniques developed for the generalization performance of classifiers with
a large margin [17] and perceptron decision tree [18].
Let $\mathcal{X}$ represent the input space. Both $s$ and $\hat{s}$ are $m$ samples drawn independently according to an unknown distribution $\mathcal{D}$. We denote logarithms to base 2 by $\log$. If $S$ is a set, $|S|$ denotes its cardinality. $\|\cdot\|$ means the $\ell_2$ norm. We train a support vector machine (SVM) for each label $\zeta_j$. With $\{x_t\}_{t=1}^{n}$ as the features and $\{y_t(\zeta_j)\}_{t=1}^{n}$ as the labels, the output parameters of the SVM are defined as $[w_j, b_j] = \mathrm{SVM}(\{x_t, y_t(\zeta_1), \dots, y_t(\zeta_{j-1})\}_{t=1}^{n},\ \{y_t(\zeta_j)\}_{t=1}^{n})$. The margin for label $\zeta_j$ is defined as
$$\gamma_j = \frac{1}{\|w_j\|_2}. \qquad (1)$$
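Concretely, the margins entering the analysis can be read off trained linear SVMs. Below is a sketch assuming scikit-learn's LinearSVC, whose coef_ plays the role of w_j; the exact SVM solver is not pinned down by the text.

import numpy as np
from sklearn.svm import LinearSVC

def chain_margins(X, Y, order, C=1.0):
    # Returns gamma_j = 1 / ||w_j||_2 for each classifier along the chain.
    margins, aug = [], X
    for j in order:
        svm = LinearSVC(C=C).fit(aug, Y[:, j])
        margins.append(1.0 / np.linalg.norm(svm.coef_))
        aug = np.hstack([aug, Y[:, j:j + 1]])
    return margins

# The quantity Q' appearing in Theorem 1 for this chain:
# q_prime = sum(1.0 / g**2 for g in chain_margins(X, Y, order))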
We begin with the definition of the fat shattering dimension.
Definition 1 ([19]). Let H be a set of real valued functions. We say that a set of points P is ?shattered by H relative to r = (rp )p?P if there are real numbers rp indexed by p ? P such that for
all binary vectors b indexed by P , there is a function fb ? H satisfying
{
? rp + ? if bp = 1
fb (p) =
? rp ? ? otherwise
The fat shattering dimension f at(?) of the set H is a function from the positive real numbers to the
integers which maps a value ? to the size of the largest ?-shattered set, if this is finite, or infinity
otherwise.
Assume $\mathcal{H}$ is a real valued function class and $h \in \mathcal{H}$. $l(y, h(x))$ denotes the loss function. The expected error of $h$ is defined as $er_{\mathcal{D}}[h] = \mathbb{E}_{(x,y)\sim\mathcal{D}}[l(y, h(x))]$, where $(x, y)$ is drawn from the unknown distribution $\mathcal{D}$. Here we select the 0-1 loss function, so $er_{\mathcal{D}}[h] = P_{(x,y)\sim\mathcal{D}}(h(x) \neq y)$. The empirical error is defined as $er_s[h] = \frac{1}{n}\sum_{t=1}^{n} [\![y_t \neq h(x_t)]\!]$.²

Suppose $\mathcal{N}(\gamma, \mathcal{H}, s)$ is the $\gamma$-covering number of $\mathcal{H}$ with respect to the $\ell_\infty$ pseudo-metric measuring the maximum discrepancy on the sample $s$. The notion of the covering number can be found in the Supplementary Materials. We introduce the following general corollary regarding the bound of the covering number:

²The expression $[\![y_t \neq h(x_t)]\!]$ evaluates to 1 if $y_t \neq h(x_t)$ is true and to 0 otherwise.
Corollary 1 ([17]). Let $\mathcal{H}$ be a class of functions $\mathcal{X} \to [a, b]$ and $\mathcal{D}$ a distribution over $\mathcal{X}$. Choose $0 < \gamma < 1$ and let $d = \mathrm{fat}(\gamma/4) \le em$. Then
$$\mathbb{E}\big(\mathcal{N}(\gamma, \mathcal{H}, s)\big) \le 2\left(\frac{4m(b-a)^2}{\gamma^2}\right)^{d \log(2em(b-a)/(d\gamma))} \qquad (2)$$
where the expectation $\mathbb{E}$ is over samples $s \in \mathcal{X}^m$ drawn according to $\mathcal{D}^m$.
We study the generalization error bound of a specified GCC with a specified number of labels and margins. Let $G$ be the set of classifiers of GCC, $G = \{h_1, h_2, \dots, h_q\}$. $er_s[G]$ denotes the fraction of the number of errors that GCC makes on $s$. Define $\tilde{x} \in \mathcal{X} \times \{0, 1\}$ and $\tilde{h}_j(\tilde{x}) = h_j(x)(1 - y(j)) - h_j(x)y(j)$. If an instance $x \in \mathcal{X}$ is correctly classified by $h_j$, then $\tilde{h}_j(\tilde{x}) < 0$. Moreover, we introduce the following proposition:

Proposition 1. If an instance $x \in \mathcal{X}$ is misclassified by a GCC model, then $\exists h_j \in G$ such that $\tilde{h}_j(\tilde{x}) \ge 0$.
Lemma 1. Given a specified GCC model with $q$ labels and with margins $\gamma^1, \gamma^2, \dots, \gamma^q$ for each label satisfying $k_i = \mathrm{fat}(\gamma^i/8)$, where fat is continuous from the right: if GCC has correctly classified $m$ multi-labeled examples $s$ generated independently according to the unknown (but fixed) distribution $\mathcal{D}$, and $\hat{s}$ is a set of another $m$ multi-labeled examples, then we can bound the following probability to be less than $\delta$:
$$P^{2m}\big\{s\hat{s} : \exists \text{ a GCC model that correctly classifies } s,\ \text{fraction of } \hat{s} \text{ misclassified} > \epsilon(m, q, \delta)\big\} < \delta,$$
where $\epsilon(m, q, \delta) = \frac{1}{m}\big(Q\log(32m) + \log\frac{2^q}{\delta}\big)$ and $Q = \sum_{i=1}^{q} k_i \log\big(\frac{8em}{k_i}\big)$.
Proof. (of Lemma 1). Suppose $G$ is a GCC model with $q$ labels and with margins $\gamma^1, \gamma^2, \dots, \gamma^q$. The probability event in Lemma 1 can be described as
$$A = \{s\hat{s} : \exists G,\ k_i = \mathrm{fat}(\gamma^i/8),\ er_s[G] = 0,\ er_{\hat{s}}[G] > \epsilon\}.$$
Let $\tilde{s}$ and $\tilde{s}'$ denote two different sets of $m$ examples, which are drawn i.i.d. from the distribution $\mathcal{D} \times \{0, 1\}$. Applying the definitions of $\tilde{x}$, $\tilde{h}$ and Proposition 1, the event can also be written as
$$A = \big\{\tilde{s}\tilde{s}' : \exists G,\ \tilde{\gamma}^i = \gamma^i/2,\ k_i = \mathrm{fat}(\tilde{\gamma}^i/4),\ er_{\tilde{s}}[G] = 0,\ r_i = \max_t \tilde{h}_i(\tilde{x}_t),\ 2\tilde{\gamma}^i = -r_i,\ \big|\{\tilde{y} \in \tilde{s}' : \exists \tilde{h}_i \in G,\ \tilde{h}_i(\tilde{y}) \ge 2\tilde{\gamma}^i + r_i\}\big| > m\epsilon\big\}.$$
Here, $-\max_t \tilde{h}_i(\tilde{x}_t)$ means the minimal value of $|h_i(x)|$, which represents the margin for label $\zeta_i$, so $2\tilde{\gamma}^i = -r_i$. Let $\gamma_{k_i} = \min\{\gamma' : \mathrm{fat}(\gamma'/4) \le k_i\}$, so $\gamma_{k_i} \le \tilde{\gamma}^i$. We define the following function:
$$\pi(\tilde{h}) = \begin{cases} 0 & \text{if } \tilde{h} \ge 0 \\ -2\gamma_{k_i} & \text{if } \tilde{h} \le -2\gamma_{k_i} \\ \tilde{h} & \text{otherwise,} \end{cases}$$
so $\pi(\tilde{h}) \in [-2\gamma_{k_i}, 0]$. Let $\pi(\tilde{G}) = \{\pi(\tilde{h}) : h \in G\}$.

Let $B^{k_i}_{\tilde{s}\tilde{s}'}$ represent the minimal $\gamma_{k_i}$-cover set of $\pi(\tilde{G})$ in the pseudo-metric $d_{\tilde{s}\tilde{s}'}$. We have that for any $h_i \in G$, there exists $\tilde{f} \in B^{k_i}_{\tilde{s}\tilde{s}'}$ with $|\pi(\tilde{h}_i(\tilde{z})) - \pi(\tilde{f}(\tilde{z}))| < \gamma_{k_i}$ for all $\tilde{z} \in \tilde{s}\tilde{s}'$. For all $\tilde{x} \in \tilde{s}$, by the definition of $r_i$, $\tilde{h}_i(\tilde{x}) \le r_i = -2\tilde{\gamma}^i$, and since $\gamma_{k_i} \le \tilde{\gamma}^i$ we have $\tilde{h}_i(\tilde{x}) \le -2\gamma_{k_i}$ and $\pi(\tilde{h}_i(\tilde{x})) = -2\gamma_{k_i}$, so $\pi(\tilde{f}(\tilde{x})) < -2\gamma_{k_i} + \gamma_{k_i} = -\gamma_{k_i}$. However, there are at least $m\epsilon$ points $\tilde{y} \in \tilde{s}'$ such that $\tilde{h}_i(\tilde{y}) \ge 0$, so $\pi(\tilde{f}(\tilde{y})) > -\gamma_{k_i} \ge \max_t \pi(\tilde{f}(\tilde{x}_t))$. Since $\pi$ only reduces separation between output values, we conclude that the inequality $\tilde{f}(\tilde{y}) > \max_t \tilde{f}(\tilde{x}_t)$ holds. Moreover, the $m\epsilon$ points in $\tilde{s}'$ with the largest $\tilde{f}$ values must remain for the inequality to hold. By the permutation argument, at most $2^{-m\epsilon}$ of the sequences obtained by swapping corresponding points satisfy the conditions for fixed $\tilde{f}$.

As for any $h_i \in G$ there exists $\tilde{f} \in B^{k_i}_{\tilde{s}\tilde{s}'}$, there are $|B^{k_i}_{\tilde{s}\tilde{s}'}|$ possibilities of $\tilde{f}$ that satisfy the inequality for $k_i$. Note that $|B^{k_i}_{\tilde{s}\tilde{s}'}|$ is a positive integer which is usually bigger than 1, and by the union bound, we get the following inequality:
$$P(A) \le \big(\mathbb{E}(|B^{k_1}_{\tilde{s}\tilde{s}'}|) + \dots + \mathbb{E}(|B^{k_q}_{\tilde{s}\tilde{s}'}|)\big)\, 2^{-m\epsilon} \le \big(\mathbb{E}(|B^{k_1}_{\tilde{s}\tilde{s}'}|) \times \dots \times \mathbb{E}(|B^{k_q}_{\tilde{s}\tilde{s}'}|)\big)\, 2^{-m\epsilon}.$$
Since every set of points $\gamma$-shattered by $\pi(\tilde{G})$ can be $\gamma$-shattered by $\tilde{G}$, we have $\mathrm{fat}_{\pi(\tilde{G})}(\gamma) \le \mathrm{fat}_{\tilde{G}}(\gamma)$, where $\tilde{G} = \{\tilde{h} : h \in G\}$. Hence, by Corollary 1 (setting $[a, b]$ to $[-2\gamma_{k_i}, 0]$, $\gamma$ to $\gamma_{k_i}$, and $m$ to $2m$),
$$\mathbb{E}(|B^{k_i}_{\tilde{s}\tilde{s}'}|) = \mathbb{E}\big(\mathcal{N}(\gamma_{k_i}, \pi(\tilde{G}), \tilde{s}\tilde{s}')\big) \le 2(32m)^{d \log(8em/d)}$$
where $d = \mathrm{fat}_{\pi(\tilde{G})}(\gamma_{k_i}/4) \le \mathrm{fat}_{\tilde{G}}(\gamma_{k_i}/4) \le k_i$. Thus $\mathbb{E}(|B^{k_i}_{\tilde{s}\tilde{s}'}|) \le 2(32m)^{k_i \log(8em/k_i)}$, and we obtain
$$P(A) \le \big(\mathbb{E}(|B^{k_1}_{\tilde{s}\tilde{s}'}|) \times \dots \times \mathbb{E}(|B^{k_q}_{\tilde{s}\tilde{s}'}|)\big)\, 2^{-m\epsilon} \le \Big(\prod_{i=1}^{q} 2(32m)^{k_i \log(8em/k_i)}\Big) 2^{-m\epsilon} = 2^q (32m)^Q\, 2^{-m\epsilon}$$
where $Q = \sum_{i=1}^{q} k_i \log\big(\frac{8em}{k_i}\big)$. And so $P(A) < \delta$ provided
$$\epsilon(m, q, \delta) \ge \frac{1}{m}\Big(Q\log(32m) + \log\frac{2^q}{\delta}\Big)$$
as required.
Lemma 1 applies to a particular GCC model with a specified number of labels and a specified margin for each label. In practice, we will observe the margins after running the GCC model. Thus, we must bound the probabilities uniformly over all of the possible margins that can arise to obtain a practical bound. The generalization error bound of the multi-label classification problem using GCC is shown as follows:

Theorem 1. Suppose a random $m$ multi-labeled sample can be correctly classified using a GCC model, and suppose this GCC model contains $q$ classifiers with margins $\gamma^1, \gamma^2, \dots, \gamma^q$ for each label. Then we can bound the generalization error with probability greater than $1 - \delta$ to be less than
$$\frac{130R^2}{m}\Big(Q'\log(8em)\log(32m) + \log\frac{2(2m)^q}{\delta}\Big)$$
where $Q' = \sum_{i=1}^{q} \frac{1}{(\gamma^i)^2}$ and $R$ is the radius of a ball containing the support of the distribution.
Before proving Theorem 1, we state one key symmetrization lemma and Theorem 2.

Lemma 2 (Symmetrization). Let $\mathcal{H}$ be a real valued function class. $s$ and $\hat{s}$ are $m$ samples both drawn independently according to the unknown distribution $\mathcal{D}$. If $m\epsilon^2 \ge 2$, then
$$P_s\Big(\sup_{h \in \mathcal{H}} |er_{\mathcal{D}}[h] - er_s[h]| \ge \epsilon\Big) \le 2\, P_{s\hat{s}}\Big(\sup_{h \in \mathcal{H}} |er_{\hat{s}}[h] - er_s[h]| \ge \epsilon/2\Big). \qquad (3)$$
The proof details of this lemma can be found in the Supplementary Material.

Theorem 2 ([20]). Let $\mathcal{H}$ be restricted to points in a ball of $M$ dimensions of radius $R$ about the origin. Then
$$\mathrm{fat}_{\mathcal{H}}(\gamma) \le \min\Big\{\frac{R^2}{\gamma^2},\ M + 1\Big\}. \qquad (4)$$
Proof. (of Theorem 1). We must bound the probabilities over different margins. We first use Lemma 2 to bound the probability of error in terms of the probability of the discrepancy between the performance on two halves of a double sample. Then we combine this result with Lemma 1. We must consider all possible patterns of the $k_i$'s for labels $\zeta_i$. The largest value of $k_i$ is $m$. Thus, for fixed $q$, we can bound the number of possibilities by $m^q$. Hence, there are $m^q$ applications of Lemma 1. Let $c_i = \{\gamma^1, \gamma^2, \dots, \gamma^q\}$ denote the $i$-th combination of margins varied in $\{1, \dots, m\}^q$. $\mathcal{G}$ denotes a set of GCC models. The generalization error of $G$ can be represented as $er_{\mathcal{D}}[G]$, and $er_s[G]$ is 0, where $G \in \mathcal{G}$. The uniform convergence bound of the generalization error is
$$P_s\Big(\sup_{G \in \mathcal{G}} |er_{\mathcal{D}}[G] - er_s[G]| \ge \epsilon\Big).$$
Applying Lemma 2,
$$P_s\Big(\sup_{G \in \mathcal{G}} |er_{\mathcal{D}}[G] - er_s[G]| \ge \epsilon\Big) \le 2\, P_{s\hat{s}}\Big(\sup_{G \in \mathcal{G}} |er_{\hat{s}}[G] - er_s[G]| \ge \epsilon/2\Big).$$
Let $J_{c_i} = \{s\hat{s} : \exists$ a GCC model $G$ with $q$ labels and with margins $c_i$: $k_i = \mathrm{fat}(\gamma^i/8),\ er_s[G] = 0,\ er_{\hat{s}}[G] \ge \epsilon/2\}$. Clearly,
$$P_{s\hat{s}}\Big(\sup_{G \in \mathcal{G}} |er_{\hat{s}}[G] - er_s[G]| \ge \epsilon/2\Big) \le P^{2m}\Big(\bigcup_{i=1}^{m^q} J_{c_i}\Big).$$
As $k_i$ still satisfies $k_i = \mathrm{fat}(\gamma^i/8)$, Lemma 1 can still be applied to each case of $P^{2m}(J_{c_i})$. Let $\delta_k = \delta/m^q$. Applying Lemma 1 (replacing $\delta$ by $\delta_k/2$), we get
$$P^{2m}(J_{c_i}) < \delta_k/2$$
where $\epsilon(m, k, \delta_k/2) \ge \frac{2}{m}\big(Q\log(32m) + \log\frac{2 \cdot 2^q}{\delta_k}\big)$ and $Q = \sum_{i=1}^{q} k_i \log\big(\frac{4em}{k_i}\big)$. By the union bound, it suffices to show that $P^{2m}\big(\bigcup_{i=1}^{m^q} J_{c_i}\big) \le \sum_{i=1}^{m^q} P^{2m}(J_{c_i}) < \delta_k/2 \cdot m^q = \delta/2$. Applying Lemma 2,
$$P_s\Big(\sup_{G \in \mathcal{G}} |er_{\mathcal{D}}[G] - er_s[G]| \ge \epsilon\Big) \le 2\, P^{2m}\Big(\bigcup_{i=1}^{m^q} J_{c_i}\Big) < \delta.$$
Thus, with probability at least $1 - \delta$, $\sup_{G \in \mathcal{G}} |er_{\mathcal{D}}[G] - er_s[G]| < \epsilon$. Let $R$ be the radius of a ball containing the support of the distribution. Applying Theorem 2, we get $k_i = \mathrm{fat}(\gamma^i/8) \le 65R^2/(\gamma^i)^2$. Note that we have replaced the constant $8^2 = 64$ by 65 in order to ensure the continuity from the right required for the application of Lemma 1. We have upper bounded $\log(8em/k_i)$ by $\log(8em)$. Thus,
$$er_{\mathcal{D}}[G] \le \frac{2}{m}\Big(Q\log(32m) + \log\frac{2(2m)^q}{\delta}\Big) \le \frac{130R^2}{m}\Big(Q'\log(8em)\log(32m) + \log\frac{2(2m)^q}{\delta}\Big)$$
where $Q' = \sum_{i=1}^{q} \frac{1}{(\gamma^i)^2}$.
Given the training data size and the number of labels, Theorem 1 reveals one important factor in reducing the generalization error bound for the GCC model: the minimization of the sum of reciprocals of the squared margins over the labels, i.e., of Q′. Thus, we obtain the following corollary:
Corollary 2 (Globally Optimal Classifier Chain). Suppose a random m multi-labeled sample with q labels can be correctly classified using a GCC model. This GCC model is the globally optimal classifier chain if and only if the minimization of Q′ in Theorem 1 is achieved over this classifier chain.
Given the number of labels q, there are q! different label orders. It is very expensive to find the globally optimal CC, which minimizes Q′, by searching over all of the label orders. Next, we discuss two simple algorithms.
4 Optimal classifier chain algorithm
In this section, we propose two simple algorithms for finding the optimal CC based on our result in Section 3. To clearly state the algorithms, we redefine the margins with label order information. Given label set M = {λ_1, λ_2, ..., λ_q}, suppose a GCC model contains q classifiers. Let o_i (1 ≤ o_i ≤ q) denote the order of λ_i in the GCC model; γ_i^{o_i} represents the margin for label λ_i, with the previous o_i − 1 labels as the augmented input. If o_i = 1, then γ_i^1 represents the margin for label λ_i without augmented input. Then Q′ is redefined as $Q' = \sum_{i=1}^{q} \frac{1}{(\gamma_i^{o_i})^2}$.
4.1 Dynamic programming algorithm
To simplify the search mentioned above, we propose the CC-DP algorithm to find the globally optimal CC. Note that
$$Q' = \sum_{i=1}^{q} \frac{1}{(\gamma_i^{o_i})^2} = \frac{1}{(\gamma_q^{o_q})^2} + \cdots + \Big[\frac{1}{(\gamma_{k+1}^{o_{k+1}})^2} + \sum_{j=1}^{k} \frac{1}{(\gamma_j^{o_j})^2}\Big];$$
we explore the idea of DP to iteratively optimize Q′ over subsets of M of length 1, 2, ..., q. Finally, we obtain the optimal Q′ over M. Assume i ∈ {1, ..., q}. Let V(i, η) be the optimal Q′ over a subset of M of length η (1 ≤ η ≤ q) whose label order ends with label λ_i, and let M_i^η denote the corresponding label set for V(i, η). When η = q, V(i, q) is the optimal Q′ over M whose label order ends with label λ_i. The DP equation is written as:
$$V(i, \eta+1) = \min_{j \ne i,\ \lambda_i \notin M_j^{\eta}} \Big\{ \frac{1}{(\gamma_i^{\eta+1})^2} + V(j, \eta) \Big\} \quad (5)$$
where γ_i^{η+1} is the margin for label λ_i, with M_j^η as the augmented input. The initial condition of the DP is V(i, 1) = 1/(γ_i^1)² and M_i^1 = {λ_i}. Then, the optimal Q′ over M is obtained by solving min_{i∈{1,...,q}} V(i, q). Assume the training of a linear SVM takes O(nd). The CC-DP algorithm proceeds bottom-up: we first compute V(i, 1) = 1/(γ_i^1)², which takes O(nd). Then we compute V(i, 2) = min_{j≠i, λ_i∉M_j^1} { 1/(γ_i^2)² + V(j, 1) }, which requires at most O(qnd), and set M_i^2 = M_j^1 ∪ {λ_i}. Similarly, it takes at most O(q²nd) time to calculate V(i, q). Last, we iteratively solve this DP equation and use min_{i∈{1,...,q}} V(i, q) to obtain the optimal solution, which requires at most O(q³nd) time in total.
Theorem 3 (Correctness of CC-DP). Q′ can be minimized by CC-DP, which means this algorithm can find the globally optimal CC.
The proof can be referred to in the Supplementary Materials.
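To make the recursion in Eq. (5) concrete, the following is a minimal Python sketch of CC-DP. It assumes a user-supplied routine margin(i, prefix) that trains a linear SVM for label i with the labels in prefix appended as augmented input features and returns the achieved margin; this helper and all other names are illustrative assumptions, not the authors' released code.

```python
def cc_dp(q, margin):
    # V[i] holds the best Q' over chains of the current length ending in label i;
    # M[i] holds the corresponding chain (a tuple of label indices, in order).
    V = [1.0 / margin(i, ()) ** 2 for i in range(q)]        # V(i, 1)
    M = [(i,) for i in range(q)]                            # M_i^1 = {lambda_i}
    for _ in range(1, q):                                   # lengths 2, ..., q
        V_next, M_next = [None] * q, [None] * q
        for i in range(q):
            for j in range(q):
                if j == i or i in M[j] or V[j] is None:
                    continue                                # lambda_i must be new
                cand = 1.0 / margin(i, M[j]) ** 2 + V[j]    # Eq. (5)
                if V_next[i] is None or cand < V_next[i]:
                    V_next[i], M_next[i] = cand, M[j] + (i,)
        V, M = V_next, M_next
    i_star = min((i for i in range(q) if V[i] is not None),
                 key=lambda i: V[i])                        # min_i V(i, q)
    return V[i_star], M[i_star]
```

The O(q³nd) cost follows directly: q outer lengths, q² candidate (i, j) pairs per length, and one O(nd) SVM training per margin evaluation.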
4.2 Greedy algorithm
We propose a CC-Greedy algorithm that finds a locally optimal CC in order to speed up the CC-DP algorithm. To save time, we construct only one classifier chain, with a locally optimal label order. Based on the training instances, we select as the first label the label from {λ_1, λ_2, ..., λ_q} over which the maximum margin is achieved, without augmented input; the first label is denoted by ζ_1. Then we select as the second label the label from the remainder over which the maximum margin is achieved with ζ_1 as the augmented input. We continue in this way until the last label is selected. Finally, this algorithm converges to a locally optimal CC. We present the details of the CC-Greedy algorithm in the Supplementary Materials; the time complexity of this algorithm is O(q²nd).
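Under the same illustrative margin(i, prefix) assumption as in the CC-DP sketch above, the greedy selection reduces to a few lines:

```python
def cc_greedy(q, margin):
    # At each step, append the unused label whose classifier attains the
    # largest margin given the chain built so far.
    order, remaining = (), set(range(q))
    while remaining:
        best = max(remaining, key=lambda i: margin(i, order))
        order += (best,)
        remaining.remove(best)
    return order  # a locally optimal label order for the chain
```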
5 Experiment
In this section, we perform experimental studies on a number of benchmark data sets from different domains to evaluate the performance of our proposed algorithms for multi-label classification. All the methods are implemented in Matlab and all experiments are conducted on a workstation with a 3.2GHz Intel CPU and 4GB main memory running a 64-bit Windows platform.
5.1 Data sets and baselines
We conduct experiments on eight real-world data sets with various domains from three websites.3,4,5
Following the experimental settings in [5] and [7], we preprocess the LLog, yahoo art, eurlex sm
and eurlex ed data sets. Their statistics are presented in the Supplementary Materials. We compare
our algorithms with some baseline methods: BR, CC, ECC, CCA [14] and MMOC [7]. To perform
a fair comparison, we use the same linear classification/regression package LIBLINEAR [21] with
L2-regularized square hinge loss (primal) to train the classifiers for all the methods. ECC is averaged
over several CC predictions with random order and the ensemble size in ECC is set to 10 according
to [5, 6]. In our experiment, the running time of PCC and EPCC [5] on most data sets, like slashdot
and yahoo art, takes more than one week. From the results in [5], ECC is comparable with EPCC
and outperforms PCC, so we do not consider PCC and EPCC here. CCA and MMOC are two
state-of-the-art encoding-decoding [13] methods. We cannot get the results of CCA and MMOC on
yahoo art 10, eurlex sm 10 and eurlex ed 10 data sets in one week. Following [22], we consider
the Example-F1, Macro-F1 and Micro-F1 measures to evaluate the prediction performance of all
methods. We perform 5-fold cross-validation on each data set and report the mean and standard
error of each evaluation measurement. The running time complexity comparison is reported in the
Supplementary Materials.
3 http://mulan.sourceforge.net
4 http://meka.sourceforge.net/#datasets
5 http://cse.seu.edu.cn/people/zhangml/Resources.htm#data
Table 1: Results of Example-F1 on the various data sets (mean ± standard deviation). The best results are in bold. Numbers in square brackets indicate the rank.

Data set     | BR                | CC                | ECC               | CCA               | MMOC              | CC-Greedy         | CC-DP
yeast        | 0.6076±0.019 [6]  | 0.5850±0.033 [7]  | 0.6096±0.018 [5]  | 0.6109±0.024 [4]  | 0.6132±0.021 [3]  | 0.6144±0.021 [1]  | 0.6135±0.015 [2]
image        | 0.5247±0.025 [7]  | 0.5991±0.021 [1]  | 0.5947±0.015 [4]  | 0.5947±0.009 [4]  | 0.5960±0.012 [3]  | 0.5939±0.021 [6]  | 0.5976±0.015 [2]
slashdot     | 0.4898±0.024 [6]  | 0.5246±0.028 [4]  | 0.5123±0.027 [5]  | 0.5260±0.021 [3]  | 0.4895±0.022 [7]  | 0.5266±0.022 [2]  | 0.5268±0.022 [1]
enron        | 0.4792±0.017 [7]  | 0.4799±0.011 [6]  | 0.4848±0.014 [4]  | 0.4812±0.024 [5]  | 0.4940±0.016 [1]  | 0.4894±0.016 [2]  | 0.4880±0.015 [3]
LLog 10      | 0.3138±0.022 [6]  | 0.3219±0.028 [4]  | 0.3223±0.030 [3]  | 0.2978±0.026 [7]  | 0.3153±0.026 [5]  | 0.3269±0.023 [2]  | 0.3298±0.025 [1]
yahoo art 10 | 0.4840±0.023 [5]  | 0.5013±0.022 [4]  | 0.5070±0.020 [3]  | -                 | -                 | 0.5131±0.015 [2]  | 0.5135±0.020 [1]
eurlex sm 10 | 0.8594±0.003 [5]  | 0.8609±0.004 [1]  | 0.8606±0.003 [3]  | -                 | -                 | 0.8600±0.004 [4]  | 0.8609±0.004 [1]
eurlex ed 10 | 0.7170±0.012 [5]  | 0.7176±0.012 [4]  | 0.7183±0.013 [2]  | -                 | -                 | 0.7183±0.013 [2]  | 0.7190±0.013 [1]
Average Rank | 5.88              | 3.88              | 3.63              | 4.60              | 3.80              | 2.63              | 1.50
5.2 Prediction performance
Example-F1 results for our methods and the baseline approaches on the different data sets are reported in Table 1. Results for the other measures are reported in the Supplementary Materials. From the results, we can see that: 1) BR is much inferior to the other methods in terms of Example-F1. Our experiment provides empirical evidence that label correlations exist in many real-world data sets, and because BR ignores the information about the correlations between the labels, BR achieves poor performance on most data sets. 2) CC improves the performance of BR; however, it underperforms ECC. This result verifies the answer to our first question stated in Section 1: the label order does affect the performance of CC, and ECC, which averages over several CC predictions with random order, improves the performance of CC. 3) CC-DP and CC-Greedy outperform CCA and MMOC. This verifies that the optimal CC achieves competitive results compared with state-of-the-art encoding-decoding approaches. 4) Our proposed CC-DP and CC-Greedy algorithms are successful on most data sets. This empirical result also verifies the answers to the last two questions stated in Section 1: the globally optimal CC exists, and CC-DP can find the globally optimal CC, which achieves the best prediction performance; the CC-Greedy algorithm achieves prediction performance comparable with CC-DP, while requiring lower time complexity. In the experiment, our proposed algorithms are much faster than CCA and MMOC in terms of both training and testing time, and achieve the same testing time as CC. Though the training time of our algorithms is slower than that of BR, CC and ECC, our extensive empirical studies show that our algorithms achieve superior performance over those baselines.
6 Conclusion
To improve the performance of multi-label classification, a plethora of models have been developed to capture label correlations. Amongst them, the classifier chain is one of the most popular approaches due to its simplicity and good prediction performance. Instead of proposing a new learning model, in this work we discuss three important questions regarding the optimal classifier chain stated in Section 1. To answer these questions, we first propose a generalized CC model. We then provide a theoretical analysis of the generalization error for the proposed generalized model. Based on our results, we obtain the answer to the second question: the globally optimal CC exists only if the minimization of the upper bound is achieved over this CC. It is very expensive to search over the q! different label orders to find the globally optimal CC. Thus, we propose the CC-DP algorithm to simplify the search, which requires O(q³nd) complexity. To speed up the CC-DP algorithm, we propose a CC-Greedy algorithm to find a locally optimal CC, where the time complexity of the CC-Greedy algorithm is O(q²nd). Comprehensive experiments on eight real-world multi-label data sets from different domains verify our theoretical studies and the effectiveness of the proposed algorithms.
Acknowledgments
This research was supported by the Australian Research Council Future Fellowship FT130100746.
References
[1] Robert E. Schapire and Yoram Singer. BoosTexter: A Boosting-based System for Text Categorization. Machine Learning, 39(2-3):135-168, 2000.
[2] Zafer Barutçuoglu and Robert E. Schapire and Olga G. Troyanskaya. Hierarchical multi-label prediction of gene function. Bioinformatics, 22(7):830-836, 2006.
[3] Matthew R. Boutell and Jiebo Luo and Xipeng Shen and Christopher M. Brown. Learning Multi-Label Scene Classification. Pattern Recognition, 37(9):1757-1771, 2004.
[4] Grigorios Tsoumakas and Ioannis Katakis and Ioannis P. Vlahavas. Mining Multi-label Data. In Data Mining and Knowledge Discovery Handbook, pages 667-685, 2010. Springer US.
[5] Krzysztof Dembczynski and Weiwei Cheng and Eyke Hüllermeier. Bayes Optimal Multilabel Classification via Probabilistic Classifier Chains. In Proceedings of the 27th International Conference on Machine Learning, pages 279-286, Haifa, Israel, 2010. Omnipress.
[6] Jesse Read and Bernhard Pfahringer and Geoffrey Holmes and Eibe Frank. Classifier Chains for Multi-label Classification. In Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases: Part II, pages 254-269, Berlin, Heidelberg, 2009. Springer-Verlag.
[7] Yi Zhang and Jeff G. Schneider. Maximum Margin Output Coding. In Proceedings of the 29th International Conference on Machine Learning, pages 1575-1582, New York, NY, 2012. Omnipress.
[8] Yuhong Guo and Suicheng Gu. Multi-Label Classification Using Conditional Dependency Networks. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, pages 1300-1305, Barcelona, Catalonia, Spain, 2011. AAAI Press.
[9] Sheng-Jun Huang and Zhi-Hua Zhou. Multi-Label Learning by Exploiting Label Correlations Locally. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, Toronto, Ontario, Canada, 2012. AAAI Press.
[10] Feng Kang and Rong Jin and Rahul Sukthankar. Correlated Label Propagation with Application to Multilabel Learning. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 1719-1726, New York, NY, 2006. IEEE Computer Society.
[11] Weiwei Liu and Ivor W. Tsang. Large Margin Metric Learning for Multi-Label Prediction. In Proceedings of the Twenty-Ninth Conference on Artificial Intelligence, pages 2800-2806, Texas, USA, 2015. AAAI Press.
[12] Mingkui Tan and Qinfeng Shi and Anton van den Hengel and Chunhua Shen and Junbin Gao and Fuyuan Hu and Zhen Zhang. Learning Graph Structure for Multi-Label Image Classification via Clique Generation. In The IEEE Conference on Computer Vision and Pattern Recognition, 2015.
[13] Daniel Hsu and Sham Kakade and John Langford and Tong Zhang. Multi-Label Prediction via Compressed Sensing. In Advances in Neural Information Processing Systems, pages 772-780, 2009. Curran Associates, Inc.
[14] Yi Zhang and Jeff G. Schneider. Multi-Label Output Codes using Canonical Correlation Analysis. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 873-882, Fort Lauderdale, USA, 2011. JMLR.org.
[15] Farbound Tai and Hsuan-Tien Lin. Multilabel Classification with Principal Label Space Transformation. Neural Computation, 24(9):2508-2542, 2012.
[16] Min-Ling Zhang and Kun Zhang. Multi-label learning by exploiting label dependency. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 999-1008, Washington, DC, USA, 2010. ACM.
[17] John Shawe-Taylor and Peter L. Bartlett and Robert C. Williamson and Martin Anthony. Structural Risk Minimization Over Data-Dependent Hierarchies. IEEE Transactions on Information Theory, 44(5):1926-1940, 1998.
[18] Kristin P. Bennett and Nello Cristianini and John Shawe-Taylor and Donghui Wu. Enlarging the Margins in Perceptron Decision Trees. Machine Learning, 41(3):295-313, 2000.
[19] Michael J. Kearns and Robert E. Schapire. Efficient Distribution-free Learning of Probabilistic Concepts. In Proceedings of the 31st Symposium on the Foundations of Computer Science, pages 382-391, Los Alamitos, CA, 1990. IEEE Computer Society Press.
[20] Peter L. Bartlett and John Shawe-Taylor. Generalization Performance of Support Vector Machines and Other Pattern Classifiers. In Advances in Kernel Methods - Support Vector Learning, pages 43-54, Cambridge, MA, USA, 1998. MIT Press.
[21] Rong-En Fan and Kai-Wei Chang and Cho-Jui Hsieh and Xiang-Rui Wang and Chih-Jen Lin. LIBLINEAR: A Library for Large Linear Classification. Journal of Machine Learning Research, 9:1871-1874, 2008.
[22] Qi Mao and Ivor Wai-Hung Tsang and Shenghua Gao. Objective-Guided Image Annotation. IEEE Transactions on Image Processing, 22(4):1585-1597, 2013.
5,528 | 6,002 | Smooth Interactive Submodular Set Cover
Yisong Yue
California Institute of Technology
[email protected]
Bryan He
Stanford University
[email protected]
Abstract
Interactive submodular set cover is an interactive variant of submodular set cover
over a hypothesis class of submodular functions, where the goal is to satisfy
all sufficiently plausible submodular functions to a target threshold using as few
(cost-weighted) actions as possible. It models settings where there is uncertainty
regarding which submodular function to optimize. In this paper, we propose a new
extension, which we call smooth interactive submodular set cover, that allows the
target threshold to vary depending on the plausibility of each hypothesis. We
present the first algorithm for this more general setting with theoretical guarantees
on optimality. We further show how to extend our approach to deal with real-valued functions, which yields new theoretical results for real-valued submodular
set cover for both the interactive and non-interactive settings.
1 Introduction
In interactive submodular set cover (ISSC) [10, 11, 9], the goal is to interactively satisfy all plausible
submodular functions in as few actions as possible. ISSC is a wide-encompassing framework that
generalizes both submodular set cover [24] by virtue of being interactive, as well as some instances
of active learning by virtue of many active learning criteria being submodular [12, 9].
A key characteristic of ISSC is the a priori uncertainty regarding the correct submodular function to
optimize. For example, in personalized recommender systems, the system does not know the user's
preferences a priori, but can learn them interactively via user feedback. Thus, any algorithm must
choose actions in order to disambiguate between competing hypotheses as well as optimize for the
most plausible ones; this issue is also known as the exploration-exploitation tradeoff.
In this paper, we propose the smooth interactive submodular set cover problem, which addresses
two important limitations of previous work. The first limitation is that conventional ISSC [10, 11, 9]
only allows for a single threshold to satisfy, and this "all or nothing" nature can be inflexible for
settings where the covering goal should vary smoothly (e.g., based on plausibility). In smooth ISSC,
one can smoothly vary the target threshold of the candidate submodular functions according to their
plausibility. In other words, the less plausible a hypothesis is, the less we emphasize maximizing
its associated utility function. We present a simple greedy algorithm for smooth ISSC with provable guarantees on optimality. We also show that our smooth ISSC framework and algorithm fully
generalize previous instances of and algorithms for ISSC by reducing back to just one threshold.
One consequence of smooth ISSC is the need to optimize for real-valued functions, which leads to the second limitation of previous work. Many natural classes of submodular functions are real-valued (cf. [25, 5, 17, 21]). However, submodular set cover (both interactive and non-interactive) has only been rigorously studied for integral or rational functions with fixed denominator, which highlights a significant gap between theory and practice. We propose a relaxed version of smooth ISSC using an approximation tolerance ε, such that one needs only to satisfy the set cover criterion to within ε. We extend our greedy algorithm to provably optimize for real-valued submodular functions
within this tolerance. To the best of our knowledge, this yields the first theoretically rigorous
algorithm for real-valued submodular set cover (both interactive and non-interactive).
Problem 1 Smooth Interactive Submodular Set Cover
1: Given:
   1. Hypothesis class H (does not necessarily contain h*)
   2. Query set Q and response set R with known q(h) ⊆ R for q ∈ Q, h ∈ H
   3. Modular query cost function c defined over Q
   4. Monotone submodular objective functions F_h : 2^{Q×R} → R≥0 for h ∈ H
   5. Monotone submodular distance functions G_h : 2^{Q×R} → R≥0 for h ∈ H, with G_h(S ⊕ (q, r)) − G_h(S) = 0 for any S if r ∈ q(h)
   6. Threshold function Λ : R≥0 → R≥0 mapping a distance to a required objective function value
2: Protocol: For i = 1, ..., ∞: ask a question q̂_i ∈ Q and receive a response r̂_i ∈ q̂_i(h*).
3: Goal: Using minimal cost Σ_i c(q̂_i), terminate when F_h(Ŝ) ≥ Λ(G_h(S*)) for all h ∈ H, where Ŝ = {(q̂_i, r̂_i)}_i and S* ≜ ∪_{q∈Q, r∈q(h*)} {(q, r)}.
2 Background
Submodular Set Cover. In the basic submodular set cover problem [24], we are given an action set Q and a monotone submodular set function F : 2^Q → R≥0 that maps subsets A ⊆ Q to non-negative scalar values. A set function F is monotone and submodular if and only if, ∀A ⊆ B ⊆ Q, q ∈ Q:
$$F(A \oplus q) \ge F(A) \qquad \text{and} \qquad F(A \oplus q) - F(A) \ge F(B \oplus q) - F(B),$$
respectively, where ⊕ denotes set addition (i.e., A ⊕ q ≡ A ∪ {q}). In other words, monotonicity implies that adding a set always yields non-negative gain, and submodularity implies that adding to a smaller set A results in a larger gain than adding to a larger set B. We also assume that F(∅) = 0. Each q ∈ Q is associated with a modular or additive cost c(q). Given a target threshold α, the goal is to select a set A that satisfies F(A) ≥ α with minimal cost c(A) = Σ_{q∈A} c(q). This problem is NP-hard; but for integer-valued F, simple greedy forward selection provably achieves near-optimal cost of at most (1 + ln(max_{a∈Q} F({a})))OPT [24], and is typically very effective in practice.
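As a concrete reference point, here is a minimal sketch of that greedy forward selection in Python, under the assumption that the caller supplies the submodular function F, the cost function c, the action set Q, and an attainable target α; all names are illustrative.

```python
def greedy_set_cover(F, c, Q, alpha):
    # Repeatedly add the action with the largest marginal gain per unit cost
    # until the covering threshold alpha is reached.
    A = frozenset()
    while F(A) < alpha:
        q_best = max((q for q in Q if q not in A),
                     key=lambda q: (F(A | {q}) - F(A)) / c(q))
        A = A | {q_best}
    return A
```

In practice one typically truncates F at α (i.e., optimizes min(F, α)) so that the greedy rule does not overpay for gains beyond the threshold.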
One motivating application is content recommendation [5, 4, 25, 11, 21], where Q are items to recommend, F(A) captures the utility of A ⊆ Q, and α is the satisfaction goal. Monotonicity of F captures the property that total utility never decreases as one recommends more items, and submodularity captures the diminishing returns property when recommending redundant items.
Interactive Submodular Set Cover. In the basic interactive setting [10], the decision maker must optimize over a hypothesis class H of submodular functions F_h. The setting is interactive, whereby the decision maker chooses an action (or query) q ∈ Q, and the environment provides a response r ∈ R. Each query q is now a function mapping hypotheses H to responses R (i.e., q(h) ⊆ R), and the environment provides responses according to an unknown true hypothesis h* ∈ H (i.e., r ∈ q(h*)). This process iterates until F_{h*}(S) ≥ α, where S denotes the set of observed question/response pairs: S = {(q, r)} ⊆ Q×R. The goal is to satisfy F_{h*}(S) ≥ α with minimal cost c(S) = Σ_{(q,r)∈S} c(q).
For example, when recommending movies to a new user with unknown interests (cf. [10, 11]), H can be a set of user types or movie genres (e.g., H = {Action, Drama, Horror, ...}). Then Q would contain individual movies that can be recommended, and R would be a "yes" or "no" response or an integer rating representing how interested the user (modeled as h*) is in a given movie.
The interactive setting is both a learning and a covering problem, as opposed to just a covering problem. The decision maker must balance between disambiguating between hypotheses in H (i.e., identifying which is the true h*) and satisfying the covering goal F_{h*}(S) ≥ α; this issue is also known as the exploration-exploitation tradeoff. Noisy ISSC [11] extends basic ISSC by no longer assuming the true h* is in H, and uses a distance function G_h and tolerance β such that the goal is to satisfy F_h(S) ≥ α for all sufficiently plausible h, where plausibility is defined as G_h(S) ≤ β.
3 Problem Statement
We now present the smooth interactive submodular set cover problem, which generalizes basic and noisy ISSC [10, 11] (described in Section 2). Like basic ISSC, each hypothesis h ∈ H is associated with a utility function F_h : 2^{Q×R} → R≥0 that maps sets of query/response pairs to non-negative scalars. Like noisy ISSC, the hypothesis class H does not necessarily contain the true h* (i.e., the agnostic setting). Each h ∈ H is associated with a distance or disagreement function G_h : 2^{Q×R} → R≥0 which maps sets of question/response pairs to a disagreement score (i.e., the larger G_h(S) is, the more h disagrees with S). We further require that F_h(∅) = 0 and G_h(∅) = 0.

[Figure 1: four panels plotting the required utility threshold on F_h (vertical axis, levels α_1 > α_2 > α_3) against the disagreement G_h (horizontal axis, levels β_1 < β_2 < β_3).] Figure 1: Examples of (a) multiple thresholds, (b) approximate multiple thresholds, (c) a continuous convex threshold, and (d) an approximate continuous convex threshold. For the approximate setting, we essentially allow for satisfying any threshold function that resides in the yellow region.
Problem 1 describes the general problem setting. Let S* ≜ ∪_{q∈Q, r∈q(h*)} {(q, r)} denote the set of all possible question/response pairs given by h*. The goal is to construct a question/response set Ŝ with minimal cost such that, for every h ∈ H, we have F_h(Ŝ) ≥ Λ(G_h(S*)), where Λ(·) maps disagreement values to desired utilities. In general, Λ(·) is a non-increasing function, since the goal is to optimize more for the most plausible hypotheses in H. We describe two versions of Λ(·) below.
Version 1: Step Function (Multiple Thresholds). The first version uses a decreasing step function (see Figure 1(a)). Given a pair of sequences α_1 > ... > α_N > 0 and 0 < β_1 < ... < β_N, the threshold function is Λ(v) = α_{n*(v)}, where n*(v) = min{n ∈ {0, ..., N+1} | v < β_n}, and α_0 ≜ ∞, α_{N+1} ≜ 0, β_0 ≜ 0, β_{N+1} ≜ ∞. The goal in Problem 1 is equivalently: "∀h ∈ H and n = 1, ..., N: satisfy F_h(Ŝ) ≥ α_n whenever G_h(S*) < β_n." This version is a strict generalization of noisy ISSC, which uses only a single α and β.
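For concreteness, the step threshold can be evaluated as follows; alpha and beta hold the sorted threshold sequences, and the function name is an illustrative assumption:

```python
def step_threshold(v, alpha, beta):
    # Lambda(v) = alpha_{n*(v)}, where n*(v) is the first n with v < beta_n;
    # alpha = [a_1 > ... > a_N], beta = [b_1 < ... < b_N].
    for a_n, b_n in zip(alpha, beta):
        if v < b_n:
            return a_n
    return 0.0  # v >= beta_N, so alpha_{N+1} = 0 applies
```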
Version 2: Convex Threshold Curve. The second version uses a convex Λ(·) that decreases continuously as G_h(S*) increases (see Figure 1(c)), and is not a strict generalization of noisy ISSC.
Approximate Thresholds. Finally, we also consider a relaxed version of smooth ISSC, whereby we only require that the objectives F_h be satisfied to within some tolerance ε ≥ 0. More formally, we say that we approximately solve Problem 1 with tolerance ε if its goal is redefined as: "using minimal cost Σ_i c(q̂_i), guarantee F_h(Ŝ) ≥ Λ(G_h(S*)) − ε for all h ∈ H." See Figures 1(b) and 1(d) for the approximate versions of the multiple thresholds and convex settings, respectively.
ISSC has only been rigorously studied when the utility functions F_h are rational-valued with a fixed denominator. We show in Section 4.3 how to efficiently solve the approximate version of smooth ISSC when the F_h are real-valued, which also yields a new approach for approximately solving the classical non-interactive submodular set cover problem with real-valued objective functions.
4 Algorithm & Main Results
A key question in the study of interactive optimization is how to balance the exploration-exploitation
tradeoff. On the one hand, one should exploit current knowledge to efficiently satisfy the plausible
submodular functions. However, hypotheses that seem plausible might actually not be, due to imperfections in the algorithm's knowledge. One should thus explore by playing actions that disambiguate
the plausibility of competing hypotheses. Our setting is further complicated due to also solving a
combinatorial optimization problem (submodular set cover), which is in general intractable.
4.1 Approach Outline
We present a general greedy algorithm, described in Algorithm 1 below, for solving smooth ISSC with provably near-optimal cost. Algorithm 1 requires as input a submodular meta-objective F̄
Algorithm 1 Worst Case Greedy Algorithm for Smooth Interactive Submodular Set Cover
1: input: F̄          // Submodular Meta-Objective
2: input: F̄_max      // Termination Threshold for F̄
3: input: Q           // Query or Action Set
4: input: R           // Response Set
5: S ← ∅
6: while F̄(S) < F̄_max do
7:   q̂ ← argmax_{q∈Q} min_{r∈R} ( F̄(S ⊕ (q, r)) − F̄(S) ) / c(q)
8:   Play q̂, observe r̂
9:   S ← S ⊕ (q̂, r̂)
10: end while

Variable | Definition
H        | Set of hypotheses
Q        | Set of actions or queries
R        | Set of responses
F_h      | Monotone non-decreasing submodular utility function
G_h      | Monotone non-decreasing submodular distance function
F̄        | Monotone non-decreasing submodular function unifying F_h, G_h and the thresholds
F̄_max    | Maximum value held by F̄
D_F      | Denominator for F_h (when rational)
D_G      | Denominator for G_h (when rational)
Λ(·)     | Continuous convex threshold
α_i      | Thresholds for F (α_1 is largest)
β_i      | Thresholds for G (β_1 is smallest)
N        | Number of thresholds
ε        | Approximation tolerance for the real-valued case
F'_h     | Surrogate utility function for the approximate version
α'_n     | Surrogate thresholds for the approximate version

Figure 2: Summary of notation used. The top portion is used in all settings. The middle portion is used for the multiple thresholds setting. The bottom portion is used for real-valued functions.
that quantifies the exploration-exploitation trade-off, and the specific instantiation of F̄ depends on which version of smooth ISSC is being solved. Algorithm 1 greedily optimizes for the worst case outcome at each iteration (Line 7) until the termination condition F̄ ≥ F̄_max has been met (Line 6). The construction of F̄ is essentially a reduction of smooth ISSC to a simpler submodular set cover problem, and generalizes the reduction approach in [11]. In particular, we first lift the analysis of [11] to deal with multiple thresholds (Section 4.2). We then show how to deal with approximate thresholds in the real-valued setting (Section 4.3), which finally allows us to address the continuous threshold setting (Section 4.4). Our cost guarantees are stated relative to the general cover cost (GCC), which lower bounds the optimal cost, as stated in Definition 4.1 and Lemma 4.2 below. Via this reduction, we can show that our approach achieves cost bounded by (1 + ln F̄_max)GCC ≤ (1 + ln F̄_max)OPT. For clarity of exposition, all proofs are deferred to the supplementary material.
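The following is a minimal Python sketch of Algorithm 1, assuming the caller supplies the meta-objective F_bar, its termination threshold F_bar_max, the query and response sets, a cost function c, and a play(q) callback that issues the query and returns the environment's response; all names are illustrative.

```python
def worst_case_greedy(F_bar, F_bar_max, Q, R, c, play):
    S = frozenset()
    while F_bar(S) < F_bar_max:
        # Line 7: pick the query with the best worst-case gain per unit cost.
        q_hat = max(Q, key=lambda q: min(F_bar(S | {(q, r)}) - F_bar(S)
                                         for r in R) / c(q))
        r_hat = play(q_hat)            # Line 8: play q_hat, observe response
        S = S | {(q_hat, r_hat)}       # Line 9: record the new pair
    return S
```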
Definition 4.1 (General Cover Cost (GCC)). Define oracles T ∈ R^Q to be functions mapping questions to responses, and T(Q̂) ≜ ∪_{q̂_i∈Q̂} {(q̂_i, T(q̂_i))}; T(Q̂) is the set of question-response pairs given by T for the set of questions Q̂. Define the General Cover Cost as:
$$GCC = \max_{T\in R^Q}\ \min_{\hat{Q}:\ \bar{F}(T(\hat{Q}))\,\ge\,\bar{F}_{max}} c(\hat{Q}).$$
Lemma 4.2 (Lemma 3 from [11]). If there is a question asking strategy for satisfying F̄(S) ≥ F̄_max with worst case cost C*, then GCC ≤ C*. Thus GCC ≤ OPT.
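For very small instances, GCC can be computed by brute force, which also clarifies the max-min structure of Definition 4.1; the following sketch assumes finite Q and R and a cost function c, with all names illustrative.

```python
from itertools import product, chain, combinations

def general_cover_cost(F_bar, F_bar_max, Q, R, c):
    # Exponential brute force: max over response oracles T of the cheapest
    # question set whose T-answered pairs reach F_bar_max. Tiny instances only.
    def powerset(xs):
        return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))
    gcc = 0.0
    for responses in product(R, repeat=len(Q)):          # one oracle T per tuple
        T = dict(zip(Q, responses))
        cheapest = min((sum(c(q) for q in Qs)
                        for Qs in powerset(Q)
                        if F_bar(frozenset((q, T[q]) for q in Qs)) >= F_bar_max),
                       default=float("inf"))
        gcc = max(gcc, cheapest)
    return gcc
```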
4.2 Multiple Thresholds Version
We begin with the multiple thresholds version. In this section, we assume that each F_h and G_h is rational-valued with fixed denominator D_F and D_G, respectively.¹
¹ When F_h and/or G_h are integer-valued, then D_F = 1 and/or D_G = 1, respectively.
We first define a doubly truncated version of each hypothesis utility and distance submodular function:
$$\bar{F}_{h,\alpha_n,\alpha_j}(S) \triangleq \max(\min(F_h(S), \alpha_n), \alpha_j) - \alpha_j, \quad (1)$$
$$\bar{G}_{h,\beta_n,\beta_j}(S) \triangleq \max(\min(G_h(S), \beta_n), \beta_j) - \beta_j. \quad (2)$$
In other words, F̄_{h,α_n,α_j} is truncated from below at α_j and from above at α_n (it is assumed that α_n > α_j), and is offset by −α_j so that F̄_{h,α_n,α_j}(∅) = 0. Ḡ_{h,β_n,β_j} is constructed analogously.

Using (1) and (2), we define the general forms of F̄ and F̄_max used in Sections 4.2, 4.3, and 4.4. Each of these sections will apply this definition to different choices of F_h, G_h, N, α_1, ..., α_N, and β_1, ..., β_N to solve their variants of the problem. In this definition, C_F̄ is a constant that makes F̄_h integer-valued, C_F is the contribution to the maximum value from F_h and the α_n, and C_G is the contribution to the maximum value from G_h and the β_n.

Definition 4.3 (General F̄ and F̄_max).
$$\bar{F}_{h,n}(S) \triangleq \bar{F}_{h,\alpha_n,\alpha_{n+1}}(S)\,(\beta_n - \beta_{n-1}) + \bar{G}_{h,\beta_n,\beta_{n-1}}(S)\,(\alpha_n - \alpha_{n+1}),$$
$$\bar{F}_h(S) \triangleq C_{\bar{F}} \sum_{n=1}^{N}\Big[\prod_{j\ne n}(\beta_j - \beta_{j-1})\Big]\,\bar{F}_{h,n}(S),$$
$$\bar{F}(S) \triangleq \sum_{h\in H}\bar{F}_h(S), \qquad \bar{F}_{max} \triangleq |H|\,C_F\,C_G.$$

Definition 4.4 (Multiple Thresholds). To solve the multiple thresholds version of the problem, F_h, G_h, N, α_1, ..., α_N, and β_1, ..., β_N are used without modification. The constants are set as follows:
$$C_{\bar{F}} = D_F D_G^N, \qquad C_F = D_F\,\alpha_1, \qquad C_G = D_G^N \prod_{n=1}^{N}(\beta_n - \beta_{n-1}).$$

This definition of F̄ trades off between exploitation (maximizing the most plausible F_h) and exploration (distinguishing between more and less plausible F_h) by allowing each F̄_h to reach its maximum value either by having F_h reach α_i or by having G_h reach β_i. In other words, each of the thresholds can be satisfied with either a sufficiently large utility F_h or a sufficiently large distance G_h. Figure 3 shows the logical relationships between these components.

[Figure 3: schematic linking the per-threshold terms F̄_{h_i,n}, the per-hypothesis terms F̄_{h_i}, and the overall F̄ to their respective maximum values.] Figure 3: The relationship between the terms defined in Definition 4.3. (A) If F̄_{h_i,n} ≥ F̄_{h_i,n}^{max} = (α_n − α_{n+1})(β_n − β_{n−1}), then either F_{h_i} ≥ α_n or G_{h_i} ≥ β_n; this generates the tradeoff between satisfying either of the two thresholds. (B) If F̄_{h_i} ≥ F̄_{h_i}^{max}, then F̄_{h_i,n} ≥ F̄_{h_i,n}^{max} for all n ∈ {1, ..., N}; this enforces that, for all n, at least one of the thresholds α_n or β_n must be satisfied. (C) If F̄ ≥ F̄_max, then F̄_h ≥ F̄_h^{max} for all h ∈ H; this enforces that all hypotheses must be satisfied.

We prove in Appendix A that F̄ is monotone submodular, and that finding an S such that F̄(S) ≥ F̄_max is equivalent to solving Problem 1. For F̄ to be submodular, we also require that the α_n and β_n thresholds satisfy Condition 4.5, which is essentially a discrete analogue of the condition that a continuous Λ(·) should be convex.

Condition 4.5. The sequence $\Big\langle \frac{\alpha_n - \alpha_{n+1}}{\beta_n - \beta_{n-1}} \Big\rangle_{n=1}^{N}$ is non-increasing.

Theorem 4.6. Let F_h and G_h be monotone submodular and rational-valued with fixed denominators D_F and D_G, respectively. Then, if Condition 4.5 holds, applying Algorithm 1 using F̄ and F̄_max from Definition 4.4 solves the multiple thresholds version of Problem 1 with cost at most
$$\Big(1 + \ln\Big(|H|\,D_F D_G^N\,\alpha_1 \prod_{n=1}^{N}(\beta_n - \beta_{n-1})\Big)\Big)\,GCC.$$
If each G_h is integral and β_n = β_{n−1} + 1, then the bound simplifies to (1 + ln(|H| D_F α_1)) GCC. We present an alternative formulation in Appendix D.2 that has better bounds when D_G is large, but it is less flexible and cannot be easily extended to the real-valued and convex threshold curve settings.
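To make Definitions 4.3 and 4.4 concrete, here is a small Python sketch of the meta-objective for the integer-valued case (D_F = D_G = 1, so C_F̄ = 1). F and G map each hypothesis to a set-function evaluator; the indexing conventions and all names are illustrative assumptions, not the authors' code.

```python
import math

def make_meta_objective(F, G, alpha, beta, H):
    # alpha[1..N] are the utility thresholds (alpha[0] unused, alpha_{N+1} = 0),
    # beta[1..N] are the distance thresholds (beta[0] treated as 0).
    N = len(alpha) - 1
    a = list(alpha) + [0.0]          # a[N+1] = 0
    b = [0.0] + list(beta[1:])       # b[0] = 0

    def trunc(v, hi, lo):            # Eqs. (1)/(2): clip to [lo, hi], offset by lo
        return max(min(v, hi), lo) - lo

    def F_bar_h(h, S):
        total = 0.0
        for n in range(1, N + 1):
            term = (trunc(F[h](S), a[n], a[n + 1]) * (b[n] - b[n - 1]) +
                    trunc(G[h](S), b[n], b[n - 1]) * (a[n] - a[n + 1]))
            others = math.prod(b[j] - b[j - 1]
                               for j in range(1, N + 1) if j != n)
            total += others * term
        return total

    F_bar = lambda S: sum(F_bar_h(h, S) for h in H)
    # F_bar_max = |H| * C_F * C_G with C_F = alpha_1 and C_G = prod of beta gaps.
    F_bar_max = len(H) * a[1] * math.prod(b[n] - b[n - 1]
                                          for n in range(1, N + 1))
    return F_bar, F_bar_max
```

The returned pair plugs directly into the worst-case greedy sketch of Algorithm 1 above.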
4.3 Approximate Thresholds for Real-Valued Functions
Solving even non-interactive submodular set cover is extremely challenging when the utility functions F_h are real-valued. For example, Appendix B.1 describes a setting where the greedy algorithm performs arbitrarily poorly. We now extend the results from Section 4.2 to real-valued F_h and α_1, ..., α_N.
Rather than trying to solve the problem exactly, we instead solve a relaxed or approximate version, which will be useful for the convex threshold curve setting. Let ε > 0 denote a pre-specified approximation tolerance for F_h, let ⌈·⌉_D denote rounding up to the nearest multiple of D, and let ⌊·⌋_D denote rounding down to the nearest multiple of D. We define a surrogate problem:
Definition 4.7 (Approximate Thresholds for Real-Valued Functions). Define the following approximations to F_h and α_n:
$$F'_h(S) \triangleq \lceil F_h(S)\rceil_D + D\sum_{i=1}^{|S|}(|Q| + 1 - i),$$
$$\alpha'_n \triangleq \Big\lfloor \alpha_n - \sum_{i=1}^{n}(2N - 2i)\,D_G^{N-i+1}\prod_{j=i}^{N}(\beta_j - \beta_{j-1}) \Big\rfloor_D,$$
$$D \triangleq \epsilon \Big/ \Big(\sum_{i=1}^{|Q|}(|Q| + 1 - i) + \sum_{i=1}^{N}(2N - 2i)\,D_G^{N-i+1}\prod_{j=i}^{N}(\beta_j - \beta_{j-1}) + 2\Big).$$
Instantiate F̄ and F̄_max in Definition 4.3 using F'_h and α'_n above, G_h, β_n, and:
$$C_{\bar{F}} = D_G^N, \qquad C_F = \alpha'_1, \qquad C_G = D_G^N\prod_{n=1}^{N}(\beta_n - \beta_{n-1}).$$
We prove in Appendix B that Definition 4.7 is an instance of a smooth ISSC problem, and that solving Definition 4.7 approximately solves the original real-valued smooth ISSC problem.
Theorem 4.8. Given Condition 4.5, Algorithm 1 using Definition 4.7 approximately solves the multiple thresholds version of real-valued Problem 1 with tolerance ε using cost at most
$$\Big(1 + \ln\Big(|H|\,\alpha'_1\,D_G^N\prod_{n=1}^{N}(\beta_n - \beta_{n-1})\Big)\Big)\,GCC.$$
We show in Appendix B.2 how to apply this result to approximately solve the basic submodular set cover problem with real-valued objectives. Note that if ε is selected as the smallest distinct difference between values of F_h, then the approximation is exact.
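As an illustration of the rounding in Definition 4.7, the following sketch snaps a real-valued F_h onto the grid of width D and adds the decreasing per-element bonus; the helper name, calling convention, and the exact form of the bonus are assumptions for illustration only.

```python
import math

def surrogate_utility(F_h, S, D, Q_size):
    # Snap F_h to the grid of width D (rounding up), then add a strictly
    # decreasing per-element bonus so marginal gains remain strictly positive
    # after rounding; |S| is the number of query/response pairs selected.
    bonus = D * sum(Q_size + 1 - i for i in range(1, len(S) + 1))
    return D * math.ceil(F_h(S) / D) + bonus
```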
4.4 Convex Threshold Curve Version
We now address the setting where the threshold curve Λ(·) is continuous and convex. We again solve the approximate version, since the threshold curve Λ(·) is necessarily real-valued. Let ε > 0 be the pre-specified tolerance for F'_h. Let N be defined so that N D_G is the maximal value of G_h. We convert the continuous version Λ(·) to a multiple thresholds version (with N thresholds) that is within an ε-approximation of the former, as shown below.
Definition 4.9 (Equivalent Multiple Thresholds for Continuous Convex Curve). Instantiate F̄ and F̄_max in Definition 4.3 using G_h without modification, and a sequence of thresholds:
$$F'_h(S) \triangleq \lceil F_h(S)\rceil_D + D\sum_{i=1}^{|S|}(|Q| + 1 - i),$$
$$\alpha'_n \triangleq \Big\lfloor \Lambda(n) - \sum_{i=1}^{n}(2N - 2i)\,D_G^{N-i+1}\prod_{j=i}^{N}(\beta_j - \beta_{j-1}) \Big\rfloor_D, \qquad \beta_n \triangleq D_G\,n,$$
(with D as in Definition 4.7), and with constants set as:
$$C_{\bar{F}} = 1, \qquad C_F = \alpha'_1, \qquad C_G = \prod_{n=1}^{N}(\beta_n - \beta_{n-1}) = D_G^N.$$
Note that the F'_h are not too expensive to compute. We prove in Appendix C that satisfying this set of thresholds is equivalent to satisfying the original curve Λ(·) within ε-error. Note also that Definition 4.9 uses the same form as Definition 4.7 to handle the approximation of real-valued functions.
Theorem 4.10. Applying Algorithm 1 using Definition 4.9 approximately solves the convex threshold version of Problem 1 with tolerance ε using cost at most $\big(1 + \ln(|H|\,\alpha'_1\,D_G^N)\big)\,GCC$.
Note that if ε is sufficiently large, then N could in principle be smaller, which can lead to less conservative approximations. There may also be more precise approximations obtained by reducing to other formulations of the multi-threshold setting (e.g., Appendix D.2).
5 Simulation Experiments
Comparison of Methods to Solve Multiple Thresholds. We compared our multiple threshold
method against multiple baselines (see Appendix D for more details) in a range of simulation settings
(see Appendix E.1). Figure 4 shows the results. We see that our approach is consistently amongst the
best performing methods. The primary competitor is the circuit of constraints approach from [11]
(see Appendix D.3 for a comparison of the theoretical guarantees). We also note that all approaches
dramatically outperform their worst-case guarantees.
[Figure 4: three panels (Cost for Setting A, B, and C) plotting cost against percentile for the methods Multiple Threshold (Def 4.4), Alternative (Def D.1), Circuit (Def D.6), Forward (Sec D.1), and Backward (Sec D.1).] Figure 4: Comparison against baselines in three simulation settings.
Validating Approximation Tolerances. We also validated the efficacy of our approximate thresholds relaxation (see Appendix E.2 for details of the setup). Figure 5 shows the results. We see that the actual deviation from the original smooth ISSC problem is much smaller than the specified ε, which suggests that our guarantees are rather conservative. For instance, at ε = 15, the algorithm is allowed to terminate immediately. We also see that the cost to completion steadily decreases as ε increases, which agrees with our theoretical results.
[Figure 5: two panels plotting Cost vs. ε and Deviation vs. ε.] Figure 5: Comparing cost and deviation from the exact function for varying ε.
6 Summary of Results & Discussion
Figure 6 summarizes the size of F̄_max (or F̄'_max for real-valued functions) for the various settings. Recall that our cost guarantees take the form (1 + ln F̄_max)OPT. When the F_h are real-valued, we instead solve the smooth ISSC problem approximately, with cost guarantee (1 + ln F̄'_max)OPT. Our results are well developed for many different versions of the utility functions F_h, but are less flexible for the distance functions G_h. For example, even for rational-valued G_h, F̄_max scales as D_G^N, which is not desirable. The restriction of G_h to be rational (or integral) leads to a relatively straightforward reduction of the continuous convex version of Λ(·) to a multiple thresholds version.
In fact, our formulation can be extended to deal with real-valued G_h and β_n in the multiple thresholds version; however, the resulting F̄ is no longer guaranteed to be submodular. It is possible that a different assumption than the one imposed in Condition 4.5 is required to prove more general results.

F        | G        | Multiple Thresholds                           | Convex Threshold Curve
Rational | Rational | |H| α_1 D_F D_G^N Π_{i=1}^{N} (β_i − β_{i−1}) | |H| α_1 D_F D_G^N
Real     | Rational | |H| α'_1 D_G^N Π_{i=1}^{N} (β_i − β_{i−1})    | |H| α'_1 D_G^N

Figure 6: Summarizing F̄_max. When the F_h are real-valued, we show F̄'_max instead.
Our analysis appears to be overly conservative for many settings. For instance, all the approaches we
evaluated empirically achieved much better performance than their worst-case guarantees. It would
be interesting to identify ways to constrain the problem and develop tighter theoretical guarantees.
7 Other Related Work
Submodular optimization is an important problem that arises across many settings, including sensor
placements [16, 15], summarization [26, 17, 23], inferring latent influence networks [8], diversified
recommender systems [5, 4, 25, 21], and multiple solution prediction [1, 3, 22, 19]. However, the
majority of previous work has focused on offline submodular optimization whereby the submodular
function to be optimized is fixed a priori (i.e., does not vary depending on feedback).
There are two typical ways that a submodular optimization problem can be made interactive. The
first is in online submodular optimization, where an unknown submodular function must be reoptimized repeatedly over many sessions in an online or repeated-games fashion [20, 25, 21]. In
this setting, feedback is typically provided only at the conclusion of a session, and so adapting from
feedback is performed between sessions. In other words, each session consists of a non-interactive
submodular optimization problem, and the technical challenge stems from the fact that the submodular function is unknown a priori and must be learned from feedback provided post optimization in
each session; this setting is often referred to as inter-session interactive optimization.
The other way to make submodular optimization interactive, which we consider in this paper, is to
make feedback available immediately after each action taken. In this way, one can simultaneously
learn about and optimize for the unknown submodular function within a single optimization session; this setting is often referred to as intra-session interactive optimization. One can also consider
settings that allow for both intra-session and inter-session interactive optimization.
Perhaps the most well-studied application of intra-session interactive submodular optimization is
active learning [10, 7, 11, 9, 2, 14, 13], where the goal is to quickly reduce the hypothesis class
to some target residual uncertainty for planning or decision making. Many instances of noisy and
approximate active learning can be formulated as an interactive submodular set cover problem [9].
A related setting is adaptive submodularity [7, 2, 6, 13], which is a probabilistic setting that essentially requires that the conditional expectation over the hypothesis set of submodular functions is
itself a submodular function. In contrast, we require that the hypothesis class be pointwise submodular (i.e., each hypothesis corresponds to a different submodular utility function). Although neither
adaptive submodularity nor pointwise submodularity is a strict generalization of the other (cf. [7, 9]),
in practice it can often be easier to model application settings using pointwise submodularity.
The "flipped" problem is to maximize utility with a bounded budget, which is commonly known as
the budgeted submodular maximization problem [18]. Interactive budgeted maximization has been
analyzed rigorously for adaptive submodular problems [7], but it remains a challenge to develop
provably near-optimal interactive algorithms for pointwise submodular utility functions.
8 Conclusions
We introduced smooth interactive submodular set cover, a smoothed generalization of previous ISSC
frameworks. Smooth ISSC allows for the target threshold to vary based on the plausibility of the
hypothesis. Smooth ISSC also introduces an approximate threshold solution concept that can be
applied to real-valued functions, which also applies to basic submodular set cover with real-valued
objectives. We developed the first provably near-optimal algorithm for this setting.
References
[1] Dhruv Batra, Payman Yadollahpour, Abner Guzman-Rivera, and Gregory Shakhnarovich. Diverse m-best solutions in markov random fields. In European Conference on Computer Vision (ECCV), 2012.
[2] Yuxin Chen and Andreas Krause. Near-optimal batch mode active learning and adaptive submodular optimization. In International Conference on Machine Learning (ICML), 2013.
[3] Debadeepta Dey, Tommy Liu, Martial Hebert, and J. Andrew Bagnell. Contextual sequence prediction via submodular function optimization. In Robotics: Science and Systems Conference (RSS), 2012.
[4] Khalid El-Arini and Carlos Guestrin. Beyond keyword search: discovering relevant scientific literature. In ACM Conference on Knowledge Discovery and Data Mining (KDD), 2011.
[5] Khalid El-Arini, Gaurav Veda, Dafna Shahaf, and Carlos Guestrin. Turning down the noise in the blogosphere. In ACM Conference on Knowledge Discovery and Data Mining (KDD), 2009.
[6] Victor Gabillon, Branislav Kveton, Zheng Wen, Brian Eriksson, and S. Muthukrishnan. Adaptive submodular maximization in bandit setting. In Neural Information Processing Systems (NIPS), 2013.
[7] Daniel Golovin and Andreas Krause. Adaptive submodularity: A new approach to active learning and stochastic optimization. In Conference on Learning Theory (COLT), 2010.
[8] Manuel Gomez Rodriguez, Jure Leskovec, and Andreas Krause. Inferring networks of diffusion and influence. In ACM Conference on Knowledge Discovery and Data Mining (KDD), 2010.
[9] Andrew Guillory. Active Learning and Submodular Functions. PhD thesis, University of Washington, 2012.
[10] Andrew Guillory and Jeff Bilmes. Interactive submodular set cover. In International Conference on Machine Learning (ICML), 2010.
[11] Andrew Guillory and Jeff Bilmes. Simultaneous learning and covering with adversarial noise. In International Conference on Machine Learning (ICML), 2011.
[12] Steve Hanneke. The complexity of interactive machine learning. Master's thesis, Carnegie Mellon University, 2007.
[13] Shervin Javdani, Yuxin Chen, Amin Karbasi, Andreas Krause, J. Andrew Bagnell, and Siddhartha Srinivasa. Near optimal bayesian active learning for decision making. In Conference on Artificial Intelligence and Statistics (AISTATS), 2014.
[14] Shervin Javdani, Matthew Klingensmith, J. Andrew Bagnell, Nancy Pollard, and Siddhartha Srinivasa. Efficient touch based localization through submodularity. In IEEE International Conference on Robotics and Automation (ICRA), 2013.
[15] Andreas Krause, Ajit Singh, and Carlos Guestrin. Near-optimal sensor placements in gaussian processes. In International Conference on Machine Learning (ICML), 2005.
[16] Jure Leskovec, Andreas Krause, Carlos Guestrin, Christos Faloutsos, Jeanne VanBriesen, and Natalie Glance. Cost-effective outbreak detection in networks. In ACM Conference on Knowledge Discovery and Data Mining (KDD), 2007.
[17] Hui Lin and Jeff Bilmes. Learning mixtures of submodular shells with application to document summarization. In Conference on Uncertainty in Artificial Intelligence (UAI), 2012.
[18] George Nemhauser, Laurence Wolsey, and Marshall Fisher. An analysis of approximations for maximizing submodular set functions. Mathematical Programming, 14(1):265-294, 1978.
[19] Adarsh Prasad, Stefanie Jegelka, and Dhruv Batra. Submodular meets structured: Finding diverse subsets in exponentially-large structured item sets. In Neural Information Processing Systems (NIPS), 2014.
[20] Filip Radlinski, Robert Kleinberg, and Thorsten Joachims. Learning diverse rankings with multi-armed bandits. In International Conference on Machine Learning (ICML), 2008.
[21] Karthik Raman, Pannaga Shivaswamy, and Thorsten Joachims. Online learning to diversify from implicit feedback. In ACM Conference on Knowledge Discovery and Data Mining (KDD), 2012.
[22] Stephane Ross, Jiaji Zhou, Yisong Yue, Debadeepta Dey, and J. Andrew Bagnell. Learning policies for contextual submodular prediction. In International Conference on Machine Learning (ICML), 2013.
[23] Sebastian Tschiatschek, Rishabh Iyer, Haochen Wei, and Jeff Bilmes. Learning mixtures of submodular functions for image collection summarization. In Neural Information Processing Systems (NIPS), 2014.
[24] Laurence A. Wolsey. An analysis of the greedy algorithm for the submodular set covering problem. Combinatorica, 2(4):385-393, 1982.
[25] Yisong Yue and Carlos Guestrin. Linear submodular bandits and their application to diversified retrieval. In Neural Information Processing Systems (NIPS), 2011.
[26] Yisong Yue and Thorsten Joachims. Predicting diverse subsets using structured svms. In International Conference on Machine Learning (ICML), 2008.
| 6002 |@word exploitation:4 middle:1 version:33 laurence:2 termination:2 simulation:3 r:1 prasad:1 rivera:1 reduction:4 liu:1 score:1 efficacy:1 daniel:1 document:1 current:1 comparing:1 contextual:2 manuel:1 must:9 additive:1 kdd:5 n0:1 v:2 greedy:7 instantiate:2 selected:1 item:4 discovering:1 intelligence:2 payman:1 yuxin:2 provides:2 iterates:1 preference:1 simpler:1 mathematical:1 constructed:1 natalie:1 prove:5 doubly:1 consists:1 tommy:1 theoretically:1 inter:2 planning:1 nor:1 multi:2 decreasing:4 actual:1 armed:1 increasing:3 begin:1 provided:2 notation:1 bounded:2 circuit:2 agnostic:1 maxa:1 developed:2 finding:3 guarantee:11 every:1 nf:1 interactive:36 exactly:1 consequence:1 meet:1 approximately:7 might:1 studied:3 suggests:1 challenging:1 tschiatschek:1 range:1 enforces:2 drama:1 reoptimized:1 practice:3 kveton:1 adapting:1 word:6 pre:2 nmax:2 cannot:1 eriksson:1 selection:1 applying:2 influence:2 optimize:8 conventional:1 map:4 equivalent:4 restriction:1 maximizing:4 imposed:1 straightforward:1 branislav:1 convex:15 focused:1 identifying:1 immediately:2 handle:1 target:6 play:1 construction:1 user:5 exact:2 programming:1 us:5 distinguishing:1 hypothesis:23 pa:1 satisfying:7 expensive:1 observed:1 bottom:1 solved:1 capture:3 worst:5 region:1 keyword:1 decrease:3 trade:3 rq:2 environment:2 complexity:1 haochen:1 bryanhe:1 rigorously:3 singh:1 solving:7 shakhnarovich:1 creates:2 localization:1 easily:1 various:1 genre:1 muthukrishnan:1 instantiated:1 distinct:1 effective:2 describe:1 query:7 artificial:2 lift:1 outcome:1 modular:2 stanford:2 plausible:11 valued:33 larger:3 say:1 solve:12 supplementary:1 statistic:1 noisy:6 itself:1 online:3 sequence:5 propose:3 maximal:1 relevant:1 fmax:2 horror:1 poorly:1 achieve:1 amin:1 requirement:2 depending:2 develop:2 this4:1 completion:1 andrew:7 jiaji:1 nearest:2 op:5 solves:3 implies:2 met:1 submodularity:8 correct:1 stephane:1 stochastic:1 exploration:5 material:1 require:5 generalization:4 tighter:1 brian:1 extension:1 hold:1 sufficiently:6 dhruv:2 mapping:3 matthew:1 vary:5 achieves:1 smallest:2 fh:52 combinatorial:1 maker:3 ross:1 largest:1 agrees:1 weighted:1 gaurav:1 imperfection:1 always:1 sensor:2 gaussian:1 reaching:1 rather:2 zhou:1 varying:1 validated:1 joachim:3 consistently:1 contrast:1 rigorous:1 greedily:1 cg:4 baseline:2 summarizing:1 adversarial:1 jeanne:1 shivaswamy:1 el:2 nn:2 typically:2 diminishing:1 bandit:3 interested:1 provably:5 issue:2 flexible:2 colt:1 priori:4 field:1 construct:1 never:1 having:2 washington:1 flipped:1 icml:7 guzman:1 recommend:1 few:2 wen:1 javdani:2 dg:25 simultaneously:1 individual:1 karthik:1 argmaxq:1 detection:1 interest:1 mining:5 intra:3 khalid:2 zheng:1 deferred:1 introduces:1 analyzed:1 mixture:2 rishabh:1 held:1 integral:3 old:1 desired:1 theoretical:5 minimal:5 leskovec:2 instance:6 asking:1 marshall:1 cover:29 maximization:3 cost:34 minr:1 deviation:4 subset:3 rounding:2 too:1 motivating:1 guillory:3 gregory:1 chooses:1 international:8 probabilistic:1 off:3 h2h:1 analogously:1 continuously:1 quickly:1 shervin:2 gabillon:1 again:1 thesis:2 satisfied:6 yisong:4 interactively:2 arini:2 hn:1 choose:1 opposed:1 return:1 sec:2 automation:1 coefficient:1 satisfy:9 ranking:1 depends:1 performed:1 h1:1 portion:3 carlos:5 complicated:1 contribution:3 characteristic:1 efficiently:2 yield:4 identify:1 yes:1 yellow:1 generalize:1 bayesian:1 bilmes:4 hanneke:1 j6:1 simultaneous:1 reach:4 whenever:1 sebastian:1 definition:26 against:2 competitor:1 steadily:1 associated:4 proof:1 ploration:1 
rational:11 gain:2 ask:1 logical:2 recall:1 knowledge:8 nancy:1 actually:1 back:1 appears:1 steve:1 response:15 wei:1 formulation:3 evaluated:1 dey:2 just:2 implicit:1 until:2 hand:1 shahaf:1 touch:1 rodriguez:1 glance:1 mode:1 perhaps:1 scientific:1 contain:3 true:4 concept:1 former:1 deal:4 game:1 covering:6 whereby:3 percentile:3 criterion:2 trying:1 gg:1 outline:1 performs:1 gh:42 image:1 fi:1 srinivasa:2 empirically:1 exponentially:1 extend:3 he:1 significant:1 mellon:1 diversify:1 dafna:1 session:11 submodular:81 longer:2 optimizes:1 meta:2 arbitrarily:1 caltech:1 victor:1 guestrin:5 george:1 relaxed:3 maximize:1 redundant:1 recommended:1 ii:1 multiple:26 desirable:1 stem:1 smooth:24 ing:1 technical:1 plausibility:6 lin:1 issc:32 retrieval:1 post:1 qi:5 prediction:3 variant:2 basic:7 denominator:6 essentially:4 expectation:1 df:11 vision:1 iteration:1 achieved:1 robotics:2 receive:1 background:1 addition:1 krause:6 adarsh:1 strict:3 yue:4 validating:1 seem:1 call:1 integer:5 structural:1 near:7 recommends:1 pannaga:1 competing:2 reduce:1 regarding:2 simplifies:1 andreas:6 tradeoff:4 veda:1 utility:15 pollard:1 action:10 repeatedly:1 dramatically:1 useful:1 svms:1 outperform:1 overly:1 bryan:1 diverse:4 discrete:1 carnegie:1 siddhartha:2 key:2 threshold:64 clarity:1 budgeted:2 neither:1 yadollahpour:1 diffusion:1 backward:1 relaxation:1 monotone:10 convert:2 uncertainty:4 master:1 extends:1 raman:1 decision:5 appendix:12 summarizes:1 bound:3 hi:6 def:3 guaranteed:1 gomez:1 hhi:1 oracle:1 placement:2 constraint:1 constrain:1 personalized:1 generates:1 kleinberg:1 optimality:2 min:4 extremely:1 performing:1 relatively:1 structured:2 according:2 inflexible:1 smaller:3 describes:2 across:1 vanbriesen:1 modification:2 making:2 outbreak:1 karbasi:1 thorsten:3 taken:1 ln:10 remains:1 know:1 end:1 generalizes:3 available:1 apply:2 observe:1 disagreement:3 alternative:2 batch:1 faloutsos:1 jn:1 original:3 denotes:2 top:1 cf:12 unifying:1 exploit:1 classical:1 icra:1 objective:8 question:10 strategy:1 primary:1 bagnell:4 surrogate:3 amongst:1 nemhauser:1 distance:7 majority:1 provable:1 assuming:1 modeled:1 relationship:4 pointwise:4 balance:2 equivalently:1 setup:1 robert:1 statement:1 negative:3 stated:2 gcc:11 redefined:1 unknown:5 summarization:3 allowing:2 recommender:2 policy:1 markov:1 truncated:2 extended:2 precise:1 smoothed:1 ajit:1 rating:1 introduced:1 pair:6 required:2 specified:3 optimized:1 california:1 learned:1 nip:4 address:4 beyond:1 jure:2 below:5 challenge:2 max:35 including:1 analogue:1 satisfaction:1 natural:1 predicting:1 turning:1 residual:1 representing:1 movie:4 technology:1 realvalued:2 martial:1 stefanie:1 literature:1 disagrees:1 discovery:5 relative:1 encompassing:1 fully:1 highlight:1 interesting:1 limitation:3 wolsey:2 jegelka:1 principle:1 playing:1 eccv:1 summary:2 hebert:1 offline:1 allow:2 jh:1 institute:1 wide:1 tolerance:11 feedback:7 curve:9 resides:1 qn:4 forward:2 made:1 adaptive:6 commonly:1 collection:1 approximate:16 emphasize:1 monotonicity:2 active:8 instantiation:1 uai:1 filip:1 assumed:1 recommending:2 continuous:9 latent:1 search:1 quantifies:1 disambiguate:2 learn:2 nature:1 terminate:2 golovin:1 necessarily:3 european:1 protocol:1 aistats:1 main:1 noise:2 nothing:1 allowed:1 repeated:1 referred:2 nphard:1 fashion:1 christos:1 inferring:2 candidate:1 hmax:5 theorem:4 down:2 specific:1 offset:1 virtue:2 intractable:1 adding:3 debadeepta:2 hui:1 phd:1 iyer:1 budget:1 gap:1 easier:1 chen:2 smoothly:2 explore:1 blogosphere:1 diversified:2 scalar:2 
recommendation:1 applies:1 corresponds:1 satisfies:1 acm:5 shell:1 conditional:1 goal:14 formulated:1 exposition:1 disambiguating:2 jeff:4 fisher:1 content:1 typical:1 reducing:2 lemma:3 conservative:3 total:1 batra:2 select:1 formally:1 combinatorica:1 radlinski:1 arises:1 |
5,529 | 6,003 | Tractable Bayesian Network Structure Learning with
Bounded Vertex Cover Number
Janne H. Korhonen
Helsinki Institute for Information Technology HIIT
Department of Computer Science
University of Helsinki
[email protected]
Pekka Parviainen
Helsinki Institute for Information Technology HIIT
Department of Computer Science
Aalto University
[email protected]
Abstract
Both learning and inference tasks on Bayesian networks are NP-hard in general.
Bounded tree-width Bayesian networks have recently received a lot of attention as
a way to circumvent this complexity issue; however, while inference on bounded
tree-width networks is tractable, the learning problem remains NP-hard even for
tree-width 2. In this paper, we propose bounded vertex cover number Bayesian
networks as an alternative to bounded tree-width networks. In particular, we show
that both inference and learning can be done in polynomial time for any fixed
vertex cover number bound k, in contrast to the general and bounded tree-width
cases; on the other hand, we also show that learning problem is W[1]-hard in
parameter k. Furthermore, we give an alternative way to learn bounded vertex
cover number Bayesian networks using integer linear programming (ILP), and
show this is feasible in practice.
1 Introduction
Bayesian networks are probabilistic graphical models representing joint probability distributions
of random variables. They can be used as a model in a variety of prediction tasks, as they enable
computing the conditional probabilities of a set of random variables given another set of random
variables; this is called the inference task. However, to use a Bayesian network as a model for
inference, one must first obtain the network. Typically, this is done by estimating the network based
on observed data; this is called the learning task.
Both the inference and learning tasks are NP-hard in general [3, 4, 6]. One approach to deal with
this issue has been to investigate special cases where these problems would be tractable. That is,
the basic idea is to select models from a restricted class of Bayesian networks that have structural
properties enabling fast learning or inference; this way, the computational complexity will not be
an issue, though possibly at the cost of accuracy if the true distribution is far from the model family.
Most notably, it is known that the inference task can be solved in polynomial time if the network
has bounded tree-width, or more precisely, the inference task is fixed-parameter tractable in the
tree-width of the network. Moreover, this is in a sense optimal, as bounded tree-width is necessary
for polynomial-time inference unless the exponential time hypothesis (ETH) fails [17].
The possibility of tractable inference has motivated several recent studies also on learning bounded
tree-width Bayesian networks [2, 12, 16, 19, 22]. However, unlike in the case of inference, learning a
Bayesian network of bounded tree-width is NP-hard for any fixed tree-width bound at least 2 [16].
Furthermore, it is known that learning many relatively simple classes such as paths [18] and polytrees
[9] is also NP-hard. Indeed, so far the only class of Bayesian networks for which a polynomial-time learning algorithm is known is trees, i.e., graphs with tree-width 1 [5]; it appears that our knowledge about structure classes allowing tractable learning is quite limited.
1.1 Structure Learning with Bounded Vertex Cover Number
In this work, we propose bounded vertex cover number Bayesian networks as an alternative to
the tree-width paradigm. Roughly speaking, we consider Bayesian networks where all pairwise
dependencies (i.e., edges in the moralised graph) are covered by having at least one node from the
vertex cover incident to each of them; see Section 2 for technical details. Like bounded tree-width
Bayesian networks, this is a parameterised class, allowing a trade-off between the complexity of
models and the size of the space of possible models by varying the parameter k.
Results: complexity of learning bounded vertex cover networks. Crucially, we show that learning an optimal Bayesian network structure with vertex cover number at most k can be done in
polynomial time for any fixed k. Moreover, vertex cover number provides an upper bound for
tree-width, implying that inference is also tractable; thus, we identify a rare example of a class of
Bayesian networks where both learning and inference are tractable.
Specifically, our main theoretical result shows that an optimal Bayesian network structure with
vertex cover number at most k can be found in time $4^k n^{2k+O(1)}$ (Theorem 5). However, while the
running time of our algorithm is polynomial with respect to the number of nodes, the degree of the
polynomial depends on k. We show that this is in a sense the best we can hope for; that is, we show that there is no fixed-parameter algorithm with running time f(k) poly(n) for any function f even when the maximum allowed parent set size is restricted to 2, unless the commonly accepted complexity assumption FPT ≠ W[1] fails (Theorem 6).
Results: ILP formulation and learning in practice. While we prove that learning bounded vertex cover Bayesian network structures can be done in polynomial time, the unavoidable dependence on k in the degree of the polynomial makes the algorithm of our main theorem infeasible for practical
usage when the vertex cover number k increases. Therefore, we investigate using an integer linear
programming (ILP) formulation as an alternative way to find optimal bounded vertex cover Bayesian
networks in practice (Section 4). Although the running time of an ILP is exponential in the worst
case, the actual running time in many practical scenarios is significantly lower; indeed, most of the
state-of-the-art algorithms for exact learning of Bayesian networks in general [1, 8] and with bounded
tree-width [19, 22] are based on ILPs. Our experiments show that bounded vertex cover number
Bayesian networks can, indeed, be learned fast in practice using ILP (Section 5).
2 Preliminaries
Directed graphs. A directed graph D = (N, A) consists of a node set N and an arc set A ⊆ N × N; for a fixed node set, we usually identify a directed graph with its arc set A. A directed graph is called a directed acyclic graph or DAG if it contains no directed cycles. We write n = |N| and uv for the arc (u, v) ∈ A. For u, v ∈ N with uv ∈ A, we say that u is a parent of v and v is a child of u. We write $A_v$ for the parent set of v, that is, $A_v = \{u \in N : uv \in A\}$.
Bayesian network structure learning. We consider Bayesian network structure learning using the score-based approach [7, 14], where the input consists of the node set N and the local scores $f_v(S)$ for each node v ∈ N and S ⊆ N \ {v}. The task is to find a DAG A, the network structure, that maximises the score $f(A) = \sum_{v \in N} f_v(A_v)$.
We assume that the scores $f_v$ are computed beforehand, and that we can access each entry $f_v(S)$ in constant time. We generally consider a setting where only parent sets belonging to specified sets $\mathcal{F}_v \subseteq 2^N$ are permitted. Typically, $\mathcal{F}_v$ consists of parent sets of size at most k, in which case we assume that the scores $f_v(S)$ are given only for $|S| \le k$; that is, the size of the input is $O(n \binom{n}{k})$.
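To make the input convention concrete, here is a minimal brute-force sketch of score-based structure learning; the names best_dag and local_score are ours, not the paper's, and the search over topological orders is exponential, so it only serves to illustrate the problem for very small n.

    from itertools import combinations, permutations

    def best_dag(nodes, local_score, max_parents):
        # For each topological order, every node independently picks its
        # best-scoring parent set among its predecessors in that order.
        best_score, best_parents = float("-inf"), None
        for order in permutations(nodes):
            total, parents = 0.0, {}
            for i, v in enumerate(order):
                preds = order[:i]
                candidates = [frozenset(c)
                              for r in range(min(max_parents, len(preds)) + 1)
                              for c in combinations(preds, r)]
                S = max(candidates, key=lambda c: local_score(v, c))
                total += local_score(v, S)
                parents[v] = S
            if total > best_score:
                best_score, best_parents = total, parents
        return best_score, best_parents

Restricting the candidates to the families $\mathcal{F}_v$ recovers the input convention above.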
Moralised graphs. For a DAG A, the moralised graph of A is an undirected graph $M_A = (N, E_A)$, where $E_A$ is obtained by (1) adding an undirected edge {u, v} to $E_A$ for each arc uv ∈ A, and (2) adding an undirected edge {u, v} to $E_A$ if u and v have a common child, that is, if {uw, vw} ⊆ A for some w ∈ N. The edges added to $E_A$ due to rule (2) are called moral edges.
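The two moralisation rules translate directly into code; this is a hedged sketch, with a DAG represented as a dict from each node to its parent set (our choice of representation).

    from itertools import combinations

    def moralise(parents):
        # parents: dict mapping each node to the set of its parents (a DAG).
        edges = set()
        for v, pv in parents.items():
            for u in pv:                          # rule (1): each arc uv gives {u, v}
                edges.add(frozenset((u, v)))
            for u, w in combinations(pv, 2):      # rule (2): marry co-parents of v
                edges.add(frozenset((u, w)))
        return edges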
Tree-width and vertex cover number. A tree-decomposition of a graph G = (V, E) is a pair (X, T), where T is a tree with node set {1, 2, . . . , m} and $X = \{X_1, X_2, \ldots, X_m\}$ is a collection of subsets of V with $\bigcup_{i=1}^{m} X_i = V$ such that
(a) for each {u, v} ∈ E there is i with u, v ∈ $X_i$, and
(b) for each v ∈ V the graph $T[\{i : v \in X_i\}]$ is connected.
The width of a tree-decomposition (T, X) is $\max_i |X_i| - 1$. The tree-width tw(G) of a graph G is the minimum width of a tree-decomposition of G. For a DAG A, we define the tree-width tw(A) as the tree-width of the moralised graph $M_A$ [12].
For a graph G = (V, E), a set C ⊆ V is a vertex cover if each edge is incident to at least one vertex in C. The vertex cover number τ(G) of a graph is the size of the smallest vertex cover in G. As with tree-width, we define the vertex cover number τ(A) of a DAG A as $\tau(M_A)$.
Lemma 1. For a DAG A, we have $tw(A) \le \tau(A)$.
Proof. By definition, the moralised graph $M_A$ has a vertex cover C of size τ(A). We can construct a star-shaped tree-decomposition for $M_A$ with a central node i with $X_i = C$ and a leaf j with $X_j = C \cup \{v\}$ for every v ∈ N \ C. Clearly, this tree-decomposition has width τ(A); thus, we have $tw(A) = tw(M_A) \le \tau(A)$.
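The star-shaped decomposition used in this proof is easy to write down explicitly; a small sketch follows, assuming the given cover is indeed a vertex cover of the moralised graph.

    def star_decomposition(nodes, cover):
        # Bag 0 is the cover C; each node v outside C gets a leaf bag C | {v}.
        # The largest bag has |C| + 1 elements, so the width equals |C|.
        cover = set(cover)
        bags = {0: frozenset(cover)}
        tree_edges = []
        for i, v in enumerate(set(nodes) - cover, start=1):
            bags[i] = frozenset(cover | {v})
            tree_edges.append((0, i))             # every leaf hangs off the centre
        return bags, tree_edges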
Structure learning with parameters. Finally, we give a formal definition for the bounded tree-width and bounded vertex cover number Bayesian network structure learning problems. That is, let p ∈ {τ, tw}; in the bounded-p Bayesian network structure learning problem, we are given a node set N, local scores $f_v(S)$ and an integer k, and the task is to find a DAG A maximising the score $\sum_{v \in N} f_v(A_v)$ subject to $p(A) \le k$. For both tree-width and vertex cover number, the parameter k also bounds the maximum parent set size, so we will assume that the local scores $f_v(S)$ are given only if $|S| \le k$.
3 Complexity Results
3.1 Polynomial-time Algorithm
We start by making a few simple observations about the structure of bounded vertex cover number Bayesian networks. In the following, we slightly abuse the terminology and say that N1 ⊆ N is a vertex cover for a DAG A if N1 is a vertex cover of $M_A$.
Lemma 2. Let N1 ⊆ N be a set of size k, and let A be a DAG on N. Set N1 is a vertex cover for A if and only if
(a) for each node v ∉ N1, we have $A_v \subseteq N_1$, and
(b) each node v ∈ N1 has at most one parent outside N1.
Proof. (⇒) For (a), we have that if there were nodes u, v ∉ N1 such that u is the child of v, the moralised graph $M_A$ would have the edge {u, v}, which is not covered by N1. Likewise for (b), we have that if a node u ∈ N1 had parents v, w ∉ N1, then $M_A$ would have the edge {v, w} not covered by N1. Thus, both (a) and (b) have to hold if A has vertex cover N1.
(⇐) Since (a) holds, all directed edges in A have one endpoint in N1, and thus the corresponding undirected edges in $M_A$ are covered by N1. Moreover, by (a) and (b), no node has two parents outside N1, so all moral edges in $M_A$ also have at least one endpoint in N1.
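Conditions (a) and (b) give a direct test for whether a candidate set is a vertex cover of a DAG without building the moralised graph; the sketch below uses the same parent-set-map representation as before, and the function name is ours.

    def is_dag_vertex_cover(parents, N1):
        # Checks the two conditions of Lemma 2 for a candidate cover N1.
        N1 = set(N1)
        for v, pv in parents.items():
            pv = set(pv)
            if v not in N1 and not pv <= N1:      # (a): outside nodes take parents in N1
                return False
            if v in N1 and len(pv - N1) > 1:      # (b): at most one parent outside N1
                return False
        return True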
Lemma 2 allows us to partition a DAG with vertex cover number k into a core that covers at most 2k
nodes that are either in a fixed vertex cover or are parents of those nodes (core nodes), and a periphery
containing arcs going into nodes that have no children and all parents in the vertex cover (peripheral nodes). This is illustrated in Figure 1(a), and the following lemma formalises the observation.
Figure 1: (a) Example of a DAG with vertex cover number 4, with sets N1 and N2 as in Lemma 3. (b) Reduction used in Theorem 6; each edge in the original graph is replaced by a possible v-structure.
Lemma 3. Let A be a DAG on N with vertex cover N1 of size k. Then there is a set N2 ⊆ N \ N1 of size at most k and arc sets B and C such that A = B ∪ C and
(a) B is a DAG on N1 ∪ N2 with vertex cover N1, and
(b) C contains only arcs uv with u ∈ N1 and v ∉ N1 ∪ N2.
Proof. First, let $N_2 = \bigcup_{v \in N_1} A_v \setminus N_1$. By Lemma 2, each v ∈ N1 can have at most one parent outside N1, so we have |N2| ≤ |N1| ≤ k.
Now let B = {uv ∈ A : u, v ∈ N1 ∪ N2} and C = A \ B. To see that (a) holds for this choice of B, we observe that the edge set of the moralised graph $M_B$ is a subset of the edges in $M_A$, and thus N1 covers all edges of $M_B$. For (b), the choice of N2 and Lemma 2 ensure that nodes in N \ (N1 ∪ N2) have no children, and again by Lemma 2 their parents are all in N1.
Dually, if we fix the core and peripheral node sets, we can construct a DAG with bounded vertex cover number by selecting the core independently from the parents of the peripheral nodes. Formally:
Lemma 4. Let N1, N2 ⊆ N be disjoint. Let B be a DAG on N1 ∪ N2 with vertex cover N1, and let C be a DAG on N such that C only contains arcs uv with u ∈ N1 and v ∉ N1 ∪ N2. Then
(a) A = B ∪ C is a DAG on N with vertex cover N1, and
(b) the score of A is $f(A) = \sum_{v \in N_1 \cup N_2} f_v(B_v) + \sum_{v \notin N_1 \cup N_2} f_v(C_v)$.
Proof. To see that (a) holds, we observe that B is acyclic by assumption, and the addition of arcs from C cannot create cycles as there are no outgoing arcs from nodes in N \ (N1 ∪ N2). Moreover, for v ∈ N1 ∪ N2, there are no arcs ending at v in C, and likewise for v ∉ N1 ∪ N2, there are no arcs ending at v in B. Thus, we have $A_v = B_v$ if v ∈ N1 ∪ N2 and $A_v = C_v$ otherwise. This implies that since the conditions of Lemma 2 hold for both B and C, they also hold for A, and thus N1 is a vertex cover for A. Finally, the preceding observation implies also that $f_v(A_v) = f_v(B_v)$ for v ∈ N1 ∪ N2 and $f_v(A_v) = f_v(C_v)$ otherwise, which implies (b).
Lemmas 3 and 4 give the basis of our strategy for finding an optimal Bayesian network structure with vertex cover number at most k. That is, we iterate over all possible $\binom{n}{k}\binom{n-k}{k} = O(n^{2k})$ choices for the sets N1 and N2; for each choice, we construct the optimal core and periphery as follows, keeping track of the best found DAG A* (a runnable sketch of the whole loop is given after the step list):
Step 1. To find the optimal core B, we construct a Bayesian network structure learning instance on N1 ∪ N2 by removing nodes outside N1 ∪ N2 and restricting the possible choices of parent sets so that $\mathcal{F}_v = 2^{N_1}$ for all v ∈ N2, and $\mathcal{F}_v = \{S \subseteq N_1 \cup N_2 : |S \cap N_2| \le 1\}$ for v ∈ N1. By Lemma 2, any solution for this instance is a DAG with vertex cover N1. Moreover, this instance has 2k nodes, so it can be solved in time $O(k^2 2^{2k})$ using the dynamic programming algorithm of Silander and Myllymäki [23].
Step 2. To construct the periphery C, we compute the value $\hat f_v(N_1) = \max_{S \subseteq N_1} f_v(S)$ and select a corresponding best parent set choice $C_v$ for each v ∉ N1 ∪ N2; this can be done in time $O(nk\,2^k)$ using the dynamic programming algorithm of Ott and Miyano [21].
Step 3. We check if f(B ∪ C) > f(A*), and replace A* with B ∪ C if this holds.
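A runnable sketch of this loop follows. Here solve_core stands in for the Silander-Myllymäki dynamic program of Step 1 by brute-forcing orders of the at most 2k core nodes, so it is exact but only practical for tiny cores; all names are ours, not the paper's.

    from itertools import combinations, permutations

    def subsets(s):
        s = list(s)
        for r in range(len(s) + 1):
            for c in combinations(s, r):
                yield frozenset(c)

    def solve_core(N1, N2, score):
        # Step 1 stand-in: nodes in N2 take parents from N1 only; nodes in N1
        # take parents in N1 | N2 with at most one parent from N2 (Lemma 2).
        best, best_par = float("-inf"), None
        for order in permutations(list(N1) + list(N2)):
            total, par = 0.0, {}
            for i, v in enumerate(order):
                preds = set(order[:i])
                if v in N2:
                    fam = list(subsets(preds & N1))
                else:
                    fam = [S for S in subsets(preds & (N1 | N2))
                           if len(S & N2) <= 1]
                S = max(fam, key=lambda S: score(v, S))
                total += score(v, S)
                par[v] = S
            if total > best:
                best, best_par = total, par
        return best, best_par

    def bounded_vc_learn(nodes, score, k):
        best, best_A = float("-inf"), None
        for N1 in map(set, combinations(nodes, k)):
            rest = [v for v in nodes if v not in N1]
            for N2 in map(set, combinations(rest, k)):
                core_score, B = solve_core(N1, N2, score)            # Step 1
                periphery = {v: max(subsets(N1), key=lambda S: score(v, S))
                             for v in nodes if v not in N1 | N2}     # Step 2
                total = core_score + sum(score(v, S)
                                         for v, S in periphery.items())
                if total > best:                                     # Step 3
                    best, best_A = total, {**B, **periphery}
        return best, best_A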
By Lemma 4(a), all DAGs considered by the algorithm are valid solutions for Bayesian network
structure learning with bounded vertex cover number, and by Lemma 4(b), we can find the optimal
solution for fixed N1 and N2 by optimising the choice of the core and the periphery separately.
Moreover, by Lemma 3 each bounded vertex cover DAG is included in the search space, so we are
guaranteed to find the optimal one. Thus, we have proven our main theorem:
Theorem 5. Bounded vertex cover number Bayesian network structure learning can be solved in time $4^k n^{2k+O(1)}$.
3.2 Lower Bound
Although the algorithm presented in the previous section runs in polynomial time in n, the degree of
the polynomial depends on the size of vertex cover k, which poses a serious barrier to practical use
when k grows.
Moreover, the algorithm is essentially optimal in the general case, as the input has size $\Omega(n \binom{n}{k})$ when parent sets of size at most k are allowed. However, in practice one often assumes
that a node can have at most, say, 2 or 3 parents. Thus, it makes sense to consider settings where
the input is restricted, by e.g. considering instances where parent set size is bounded from above by
some constant w while allowing vertex cover number k to be higher. In this case, we might hope to
do better, as the input size is not a restricting factor.
Unfortunately, we show that it is not possible to obtain an algorithm where the degree of the polynomial
does not depend on k even when the maximum parent set size is limited to 2, that is, there is no
algorithm with running time g(k) poly(n) for any function g, unless the widely believed complexity
assumption FPT ≠ W[1] fails. Specifically, we show that Bayesian network structure learning
with bounded vertex cover number is W[1]-hard when restricted to instances with parent set size 2,
implying the above claim. For full technical details on complexity classes FPT and W[1] and the
related theory, we refer the reader to standard texts on the topic [11, 13, 20]; for our result, it suffices
to note that the assumption FPT ≠ W[1] implies that finding a k-clique from a graph cannot be done
in time g(k) poly(n) for any function g.
Theorem 6. Bayesian network structure learning with bounded vertex cover number is W[1]-hard in
parameter k, even when restricted to instances with maximum parent set size 2.
Proof. We prove the result by a parameter-preserving reduction from clique, which is known to be W[1]-hard [10]. We use the same reduction strategy as Korhonen and Parviainen [16] use in proving that the bounded tree-width version of the problem is NP-hard. That is, given an instance (G = (V, E), k) of clique, we construct a new instance of bounded vertex cover number Bayesian network structure learning as follows. The node set of the instance is N = V ∪ E. The parent scores are defined by setting $f_e(\{u, v\}) = 1$ for each e = {u, v} ∈ E, and $f_v(S) = 0$ for all other v and S; see Figure 1(b). Finally, the vertex cover size is required to be at most k. Clearly, the new instance can be constructed in polynomial time.
It now suffices to show that the original graph G has a clique of size k if and only if the optimal DAG on N with vertex cover number at most k has score $\binom{k}{2}$:
(⇒) Assume G has a k-clique C ⊆ V. Let A be a DAG on N obtained by setting $A_e = \{u, v\}$ for each e = {u, v} ⊆ C, and $A_v = \emptyset$ for all other nodes v ∈ N. All edges in the moralised graph $M_A$ are now clearly covered by C. Furthermore, since C is a clique in G, there are $\binom{k}{2}$ nodes with a non-empty parent set, giving $f(A) = \binom{k}{2}$.
(⇐) Assume now that there is a DAG A on N with vertex cover number k and a score $f(A) \ge \binom{k}{2}$. There must be at least $\binom{k}{2}$ nodes e = {u, v} ∈ E such that $A_e = \{u, v\}$, as these are the only nodes that can contribute to a positive score. Each of these triangles $T_e = \{e, u, v\}$ for e = {u, v} must contain at least two nodes from a minimum vertex cover C; without loss of generality, we may assume that these nodes are u and v, as e cannot cover any other edges. However, this means that C ⊆ V and there are at least $\binom{k}{2}$ edges between nodes of C, implying that C must be a k-clique in G.
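The reduction is mechanical to implement. The sketch below assumes the clique instance is given as a vertex set plus a set of two-element edges; the names are illustrative.

    def clique_reduction(vertices, edges):
        # Builds the instance of Theorem 6: the node set is V | E, and the only
        # positively scored choice is the parent set {u, v} for the node e = {u, v}.
        edges = {frozenset(e) for e in edges}
        nodes = set(vertices) | edges
        def local_score(v, S):
            return 1.0 if v in edges and frozenset(S) == v else 0.0
        return nodes, local_score

Feeding this instance with cover bound k to any exact bounded vertex cover learner and checking whether the optimal score reaches $\binom{k}{2}$ then decides whether G has a k-clique.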
4 Integer Linear Programming
To complement the combinatorial algorithm of Section 3.1, we will formulate the bounded vertex
cover number Bayesian network structure learning problem as an integer linear program (ILP).
Without loss of generality, we may assume that nodes are labeled with integers [n].
As a basis for the formulation, let $z_{Sv}$ be a binary variable that takes value 1 when S is the parent set of v and 0 otherwise. The objective function for the ILP is

$\max \sum_{v \in N} \sum_{S \in \mathcal{F}_v} f_v(S)\, z_{Sv}.$
To ensure that the variables $z_{Sv}$ encode a valid DAG, we use the standard constraints introduced by Jaakkola et al. [15] and Cussens [8]:

$\sum_{S \in \mathcal{F}_v} z_{Sv} = 1 \quad \forall v \in N$    (1)

$\sum_{v \in W} \sum_{S \in \mathcal{F}_v,\, S \cap W = \emptyset} z_{Sv} \ge 1 \quad \forall W \subseteq N : |W| \ge 1$    (2)

$z_{Sv} \in \{0, 1\} \quad \forall v \in N, S \in \mathcal{F}_v.$    (3)
Now it remains to bound the vertex cover number of the moralised graph. We introduce two sets of binary variables. The variable $y_{uv}$ takes value 1 if there is an edge between nodes u and v in the moralised graph and 0 otherwise. The variable $c_u$ takes value 1 if the node u is a part of the vertex cover and 0 otherwise. By combining a construction of the moralised graph and a well-known formulation for vertex cover, we get the following:

$\sum_{S \in \mathcal{F}_v : u \in S} z_{Sv} + \sum_{T \in \mathcal{F}_u : v \in T} z_{Tu} - y_{uv} \le 0 \quad \forall u, v \in N : u < v$    (4)

$z_{Sv} - y_{uw} \le 0 \quad \forall v \in N, S \in \mathcal{F}_v : u, w \in S, u < w$    (5)

$y_{uv} - c_u - c_v \le 0 \quad \forall u, v \in N : u < v$    (6)

$\sum_{u \in N} c_u \le k$    (7)

$y_{uv}, c_u \in \{0, 1\} \quad \forall u, v \in N.$    (8)
The constraints (4) and (5) guarantee that y-variables encode the moral graph. The constraint (6)
guarantees that if there is an edge between u and v in the moral graph then either u or v is included
in the vertex cover. Finally, the constraint (7) bounds the size of the vertex cover.
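As a sanity check of the formulation, the following sketch builds this program with the PuLP modelling library; constraint (2) is enumerated over all nonempty subsets W, which is exponential, so this is only meant for very small instances, and the variable-naming scheme is our own.

    from itertools import combinations
    import pulp

    def build_ilp(nodes, families, score, k):
        # families: dict v -> list of frozenset parent sets (the sets F_v).
        nodes = sorted(nodes)
        prob = pulp.LpProblem("bounded_vc_bn", pulp.LpMaximize)
        z = {(v, S): pulp.LpVariable("z_%s_%s" % (v, "_".join(map(str, sorted(S)))),
                                     cat="Binary")
             for v in nodes for S in families[v]}
        y = {(u, v): pulp.LpVariable("y_%s_%s" % (u, v), cat="Binary")
             for u, v in combinations(nodes, 2)}
        c = {u: pulp.LpVariable("c_%s" % u, cat="Binary") for u in nodes}
        prob += pulp.lpSum(score(v, S) * z[v, S] for (v, S) in z)        # objective
        for v in nodes:                                                  # (1)
            prob += pulp.lpSum(z[v, S] for S in families[v]) == 1
        for r in range(1, len(nodes) + 1):                               # (2)
            for W in map(set, combinations(nodes, r)):
                prob += pulp.lpSum(z[v, S] for v in W for S in families[v]
                                   if not (set(S) & W)) >= 1
        for (u, v) in y:                                                 # (4)
            prob += (pulp.lpSum(z[v, S] for S in families[v] if u in S)
                     + pulp.lpSum(z[u, T] for T in families[u] if v in T)
                     - y[u, v] <= 0)
        for v in nodes:                                                  # (5)
            for S in families[v]:
                for u, w in combinations(sorted(S), 2):
                    prob += z[v, S] - y[u, w] <= 0
        for (u, v) in y:                                                 # (6)
            prob += y[u, v] - c[u] - c[v] <= 0
        prob += pulp.lpSum(c.values()) <= k                              # (7)
        return prob

Calling prob.solve() with PuLP's default backend then returns an optimal z, from which the parent sets can be read off; solvers such as GOBNILP instead add the cluster constraints (2) lazily as cutting planes [8].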
5 Experiments
We implemented both the combinatorial algorithm of Section 3.1 and the ILP formulation of Section 4
to benchmark the practical performance of the algorithms and test how good approximations bounded
vertex cover DAGs provide. The combinatorial algorithm was implemented in Matlab and is available
online1 . The ILPs were implemented using CPLEX Python API and solved using CPLEX 12. The
implementation is available as a part of TWILP software2 .
Combinatorial algorithm. As the worst- and best-case running time of the combinatorial algorithm
are the same, we tested it with synthetic data sets varying the number of nodes n and the vertex cover
bound k, limiting each run to at most 24 hours. The results are shown in Figure 2. With reasonable
vertex cover number bounds the polynomial-time algorithm scales only up to about 15 nodes; this is
mainly due to the fact that, while the running time is polynomial in n, the degree of the polynomial
depends on k and when k grows, the algorithm becomes quickly infeasible.
1 http://research.cs.aalto.fi/pml/software/VCDP/
2 http://bitbucket.org/twilp/twilp
[Figure 2 plot: running time (s), log scale, as a function of the vertex cover bound k, with one curve for each of n = 13, 14, 15, 16.]
Figure 2: Running times of the polynomial time algorithm. Number of nodes varies from 13 to 16
and the vertex cover number from 1 to 5. For n = 15 and n = 16 with k = 5, the algorithm did not
finish in 24 hours.
Integer linear program. We ran our experiments using a union of the data sets used by Berg et
al. [2] and those provided at GOBNILP homepage3 . We benchmarked the results against other
ILP-based algorithms, namely GOBNILP [8] for learning Bayesian networks without any restrictions
to the structure and TWILP [22] for learning bounded tree-width Bayesian networks. In our tests,
each algorithm was given 4 hours of CPU time. Figure 3 shows results for selected data sets. Due to
space reasons, full results are reported in the supplement.
The results show that optimal DAGs with moderate vertex cover number (7 for flag, 6 for carpo10000)
tend to have higher scores than optimal trees. This suggests that often one can trade speed for
accuracy by moving from trees to bounded vertex cover number DAGs. We also note that bounded
vertex cover number DAGs are usually learned quickly, typically at least two orders-of-magnitude
faster than bounded tree-width DAGs. However, bounded tree-width DAGs are a less constrained
class, and thus in multiple cases the best found bounded tree-width DAG has better score than the
corresponding bounded vertex cover number DAG even when the bounded tree-width DAG is not
proven to be optimal. This seems to be the case also if we have mismatching bounds, say, 5 for tree-width and 10 for vertex cover number.
Finally, we notice that ILP solves easily problem instances with, say, 60 nodes and vertex cover bound
8; see the results for carpo10000 data set. Thus, in practice ILP scales up to significantly larger data
sets and vertex cover number bounds than the combinatorial algorithm of Section 3.1. Presumably,
this is due to the fact that ILP solvers tend to use heuristics that can quickly prune out provably
non-optimal parts of choices for the vertex cover, while the combinatorial algorithm considers them
all.
6 Discussion
We have shown that bounded vertex cover number Bayesian networks both allow tractable inference
and can be learned in polynomial time. The obvious point of comparison is the class of trees, which
has the same properties. Structurally these two classes are quite different. In particular, neither is a
subclass of the other: DAGs with vertex cover number k > 1 can contain dense substructures, while a path of n nodes (which is also a tree) has vertex cover number $\lfloor n/2 \rfloor = \Theta(n)$.
In contrast with trees, bounded vertex cover number Bayesian networks have a densely connected
"core", and each node outside the core is either connected to the core or it has no connections. Thus, we would expect them to perform better than trees when the "real" network has a few dense areas
and only few connections between nodes outside these areas. On the other hand, bounding the vertex
cover number bounds the total size of the core area, which can be problematic especially in large
networks when some parts of the network are not represented in the minimum vertex cover.
3 http://www.cs.york.ac.uk/aig/sw/gobnilp/
[Figure 3 plots: for each data set (abalone, n = 9; flag, n = 29; carpo10000, n = 60), the score and the running time (s, log scale) as functions of the bound k = 1, . . . , 10, with curves for no structure constraints, bounded tree-width, and bounded vertex cover.]
Figure 3: Results for selected data sets. We report the score for the optimal DAG without structure
constraints, and for the optimal DAGs with bounded tree-width and bounded vertex cover when the
bound k changes, as well as the running time required for finding the optimal DAG in each case. If
the computations were not finished at the time limit of 4 hours, we show the score of the best DAG
found so far; the shaded area represents the unexplored part of the search space, that is, the upper
bound of the shaded area is the best score upper bound proven by the ILP solver.
We also note that bounded vertex cover Bayesian networks have a close connection to naive Bayes
classifiers. That is, variables outside a vertex cover are conditionally independent of each other
given the vertex cover. Thus, we can replace the vertex cover by a single variable whose states are a
Cartesian product of the states of the vertex cover variables; this star-shaped network can then be
viewed as a naive Bayes classifier.
Finally, we note some open questions related to our current work. From a theoretical perspective,
we would like to classify different graph parameters in terms of complexity of learning. Ideally, we
would want to have a graph parameter that has a fixed-parameter learning algorithm when we bound
the maximum parent set size, circumventing the barrier of Theorem 6. From a practical perspective,
there is clearly room for improvement in efficiency of our ILP-based learning algorithm; for instance,
GOBNILP uses various optimisations beyond the basic ILP encoding to speed up the search.
Acknowledgments
We thank James Cussens for fruitful discussions. This research was partially funded by the Academy
of Finland (Finnish Centre of Excellence in Computational Inference Research COIN, 251170).
The experiments were performed using computing resources within the Aalto University School of
Science ?Science-IT? project.
References
[1] Mark Bartlett and James Cussens. Advances in Bayesian network learning using integer programming. In 29th Conference on Uncertainty in Artificial Intelligence (UAI), 2013.
[2] Jeremias Berg, Matti Järvisalo, and Brandon Malone. Learning optimal bounded treewidth Bayesian networks via maximum satisfiability. In 17th International Conference on Artificial Intelligence and Statistics (AISTATS), 2014.
[3] David M. Chickering. Learning Bayesian networks is NP-complete. In Learning from Data: Artificial Intelligence and Statistics V, pages 121-130. Springer-Verlag, 1996.
[4] David M. Chickering, David Heckerman, and Chris Meek. Large-sample learning of Bayesian networks is NP-hard. Journal of Machine Learning Research, 5:1287-1330, 2004.
[5] C. K. Chow and C. N. Liu. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14(3):462-467, 1968.
[6] Gregory F. Cooper. The computational complexity of probabilistic inference using Bayesian belief networks. Artificial Intelligence, 42:393-405, 1990.
[7] Gregory F. Cooper and Edward Herskovits. A Bayesian method for the induction of probabilistic networks from data. Machine Learning, 9:309-347, 1992.
[8] James Cussens. Bayesian network learning with cutting planes. In 27th Conference on Uncertainty in Artificial Intelligence (UAI), 2011.
[9] Sanjoy Dasgupta. Learning polytrees. In 15th Conference on Uncertainty in Artificial Intelligence (UAI), 1999.
[10] Rodney G. Downey and Michael R. Fellows. Parameterized computational feasibility. In Feasible Mathematics II, pages 219-244. Birkhäuser, 1994.
[11] Rodney G. Downey and Michael R. Fellows. Parameterized Complexity. Springer-Verlag, 1999.
[12] Gal Elidan and Stephen Gould. Learning bounded treewidth Bayesian networks. Journal of Machine Learning Research, 9:2699-2731, 2008.
[13] Jörg Flum and Martin Grohe. Parameterized Complexity Theory. Springer-Verlag, 2006.
[14] David Heckerman, Dan Geiger, and David M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20(3):197-243, 1995.
[15] Tommi Jaakkola, David Sontag, Amir Globerson, and Marina Meila. Learning Bayesian network structure using LP relaxations. In 13th International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
[16] Janne H. Korhonen and Pekka Parviainen. Learning bounded tree-width Bayesian networks. In 16th International Conference on Artificial Intelligence and Statistics (AISTATS), 2013.
[17] Johan H. P. Kwisthout, Hans L. Bodlaender, and L. C. van der Gaag. The necessity of bounded treewidth for efficient inference in Bayesian networks. In 19th European Conference on Artificial Intelligence (ECAI), 2010.
[18] Chris Meek. Finding a path is harder than finding a tree. Journal of Artificial Intelligence Research, 15:383-389, 2001.
[19] Siqi Nie, Denis Deratani Mauá, Cassio Polpo de Campos, and Qiang Ji. Advances in learning Bayesian networks of bounded treewidth. In Advances in Neural Information Processing Systems 27 (NIPS), 2014.
[20] Rolf Niedermeier. Invitation to Fixed-Parameter Algorithms. Oxford University Press, 2006.
[21] Sascha Ott and Satoru Miyano. Finding optimal gene networks using biological constraints. Genome Informatics, 14:124-133, 2003.
[22] Pekka Parviainen, Hossein Shahrabi Farahani, and Jens Lagergren. Learning bounded tree-width Bayesian networks using integer linear programming. In 17th International Conference on Artificial Intelligence and Statistics (AISTATS), 2014.
[23] Tomi Silander and Petri Myllymäki. A simple approach for finding the globally optimal Bayesian network structure. In 22nd Conference on Uncertainty in Artificial Intelligence (UAI), 2006.
5,530 | 6,004 | Secure Multi-party Differential Privacy
Peter Kairouz¹, Sewoong Oh², Pramod Viswanath¹
¹Department of Electrical & Computer Engineering
²Department of Industrial & Enterprise Systems Engineering
University of Illinois Urbana-Champaign
Urbana, IL 61801, USA
{kairouz2,swoh,pramodv}@illinois.edu
Abstract
We study the problem of interactive function computation by multiple parties,
each possessing a bit, in a differential privacy setting (i.e., there remains an uncertainty in any party's bit even when given the transcript of interactions and all the other parties' bits). Each party wants to compute a function, which could differ
from party to party, and there could be a central observer interested in computing
a separate function. Performance at each party is measured via the accuracy of
the function to be computed. We allow for an arbitrary cost metric to measure
the distortion between the true and the computed function values. Our main result is the optimality of a simple non-interactive protocol: each party randomizes
its bit (sufficiently) and shares the privatized version with the other parties. This
optimality result is very general: it holds for all types of functions, heterogeneous
privacy conditions on the parties, all types of cost metrics, and both average and
worst-case (over the inputs) measures of accuracy.
1 Introduction
Multi-party computation (MPC) is a generic framework where multiple parties share their information in an interactive fashion towards the goal of computing some functions, potentially different at
each of the parties. In many situations of common interest, the key challenge is in computing the
functions as privately as possible, i.e., without revealing much about one?s information to the other
(potentially colluding) parties. For instance, an interactive voting system aims to compute the majority of (say, binary) opinions of each of the parties, with each party being averse to declaring their
opinion publicly. Another example involves banks sharing financial risk exposures: the banks need to agree on quantities such as the overnight lending rate, which depends on each bank's exposure,
which is a quantity the banks are naturally loath to truthfully disclose [1]. A central learning theory
question involves characterizing the fundamental limits of interactive information exchange such
that a strong (and suitably defined) adversary only learns as little as possible while still ensuring that
the desired functions can be computed as accurately as possible.
One way to formulate the privacy requirement is to ensure that each party learns nothing more
about the others' information than can be learned from the output of the function computed. This
topic is studied under the rubric of secure function evaluation (SFE); the SFE formulation has
been extensively studied with the goal of characterizing which functions can be securely evaluated [39, 3, 21, 11]. One drawback of SFE is that depending on what auxiliary information the
adversary might have, disclosing the exact function output might reveal each party?s data. For example, consider computing the average of the data owned by all the parties. Even if we use SFE, a
party?s data can be recovered if all the other parties collaborate. To ensure protection of the private
data under such a strong adversary, we want to impose a stronger privacy guarantee of differential
privacy. Recent breaches of sensitive information about individuals due to linkage attacks prove
the vulnerability of existing ad-hoc privatization schemes, such as anonymization of the records. In
linkage attacks, an adversary matches up anonymized records containing sensitive information with
public records in a different dataset. Such attacks have revealed the medical record of a former governor of Massachusetts [37], the purchase history of Amazon users [7], genomic information [25],
and movie viewing history of Netflix users [33].
An alternative formulation is differential privacy, a relatively recent formulation that has received
considerable attention as a formal mathematical notion of privacy that provides protection against
such strong adversaries (a recent survey is available at [16]). The basic idea is to introduce enough
randomness in the communication so that an adversary possessing arbitrary side information and
access to the entire transcript of the communication will still have some residual uncertainty in
identifying any of the bits of the parties. This privacy requirement is strong enough that non-trivial
functions will be computed only with some error. Thus, there is a great need for understanding
the fundamental tradeoff between privacy and accuracy, and for designing privatization mechanisms
and communication protocols that achieve the optimal tradeoffs. The formulation and study of an
optimal framework addressing this tradeoff is the focus of this paper.
We study the following problem of multi-party computation under differential privacy: each party
possesses a single bit of information and the information bits are statistically independent. Each
party is interested in computing a function, which could differ from party to party, and there could
be a central observer (observing the entire transcript of the interactive communication protocol)
interested in computing a separate function. Performance at each party and the central observer is
measured via the accuracy of the function to be computed. We allow an arbitrary cost metric to
measure the distortion between the true and the computed function values. Each party imposes a
differential privacy constraint on its information bit (the privacy level could be different from party
to party); i.e., there remains an uncertainty in any specific party's bit even to an adversary that has access to the transcript of interactions and all the other parties' bits. The interactive communication is achieved via a broadcast channel that all parties and the central observer can hear (this modeling is without loss of generality: since the differential privacy constraint protects against an adversary
that can listen to the entire transcript, the communication between any two parties might as well be
revealed to all the others). It is useful to distinguish between two types of communication protocols:
interactive and non-interactive. We say a communication protocol is non-interactive if a message
broadcasted by one party does not depend on the messages broadcasted by other parties. In contrast,
interactive protocols allow the messages at any stage of the communication to depend on all the
previous messages.
Our main result is the exact optimality of a simple non-interactive protocol in terms of maximizing
accuracy for any given privacy level, when each party possesses one bit: each party randomizes
(sufficiently) and publishes its own bit. In other words:
non-interactive randomized response is exactly optimal.
Each party and the central observer then separately compute their respective decision functions to
maximize the appropriate notion of their accuracy measure. This optimality result is very general:
it holds for all types of functions, heterogeneous privacy conditions on the parties, all types of
cost metrics, and both average and worst-case (over the inputs) measures of accuracy. Finally, the
optimality result is simultaneous, in terms of maximizing accuracy at each of the parties and the
central observer. Each party only needs to know its own desired level of privacy, its own function
to be computed, and its measure of accuracy. Optimal data release and optimal decision making is
naturally separated.
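As a concrete illustration, here is a minimal sketch of such a protocol, assuming the canonical binary randomized response in which party i keeps its bit with probability $e^{\epsilon_i}/(1+e^{\epsilon_i})$; whether this particular flipping probability is the optimal one is exactly what the analysis establishes, so treat the constants as illustrative.

    import math, random

    def randomized_response(bit, eps):
        # Keep the true bit with probability e^eps / (1 + e^eps), else flip it.
        keep = math.exp(eps) / (1.0 + math.exp(eps))
        return bit if random.random() < keep else 1 - bit

    def run_protocol(bits, eps_list):
        # Each party broadcasts one privatized bit; no message depends on the
        # other parties' messages, so the protocol is non-interactive.
        return [randomized_response(b, e) for b, e in zip(bits, eps_list)]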
The key technical result is a geometric understanding of the space of conditional probabilities of a
given transcript: the interactive nature of the communication constrains the space to be a rank-1 tensor (a special case of Equation (6) in [35] and perhaps implicitly used in [30]; the two-party analog
of this result is in [29]), while differential privacy imposes linear constraints on the singular vectors
of this tensor. We characterize the convex hull of such manifolds of rank-1 tensors and show that
their corner-points exactly correspond to the transcripts that arise from a non-interactive randomized response protocol. This universal (for all functionalities) characterization is then used to argue
that both average-case and worst-case accuracies are maximized by non-interactive randomized responses.
Technically, we prove that non-interactive randomized response is the optimal solution of the rank-constrained and non-linear optimization of (11). The rank constraints on higher order tensors arise from the necessary condition of (possibly interactive) multi-party protocols, known as protocol compatibility (see Section 2 for details). To solve this non-standard optimization, we transform (11) into a novel linear program of (17) and (20). The price we pay is the increased dimension: the resulting LP is now infinite dimensional. The idea is that we introduce a new variable for each possible
rank-one tensor, and optimize over all of them.
Formulating utility maximization under differential privacy as linear programs has been previously
studied in [32, 20, 6, 23], under the standard client-server model where there is a single data publisher and a single data analyst. These approaches exploit the fact that both the differential privacy
constraints and the utilities are linear in the matrix representing a privatization mechanism. A similar technique of transforming a non-linear optimization problem into an infinite dimensional LP has
been successfully applied in [26], where optimal privatization mechanisms under local differential
privacy has been studied. We generalize these techniques to rank-constrained optimizations.
Further, perhaps surprisingly, we prove that this infinite dimensional linear program has a simple
optimal solution, which we call randomized response. Upon receiving the randomized responses,
each party can compute the best approximation of its respective function. The main technical innovation is in proving that (a) the optimal solution of this LP corresponds to corner points of a convex
hull of a particular manifold defined by a rank-one tensor (see Lemma 6.2 in the supplementary material for details); and (b) the respective manifold has a simple structure such that the corner points
correspond to particular protocols that we call randomized responses.
When the accuracy is measured via average accuracy, both the objective and the constraints are
linear and it is natural to expect the optimal solution to be at the corner points (see Equation (17)).
A surprising aspect of our main result is that the optimal solution is still at the corner points even
though the worst-case accuracy is a concave function over the protocol P (see Equation (19)).
This work focuses on the scenario where each party possesses a single bit of information. With
multiple bits of information at each of the parties, the existence of a differentially private protocol
with a fixed accuracy for any non-trivial functionality implies the existence of a protocol with the
same level of privacy and same level of accuracy for a specific functionality that only depends on
one bit of each of the parties (as in [22]). Thus, if we can obtain lower bounds on accuracy for
functionalities involving only a single bit at each of the parties, we obtain lower bounds on accuracy
for all non-trivial general functionalities. However, non-interactive communication is unlikely to be
exactly optimal in this general case where each party possesses multiple bits of information, and we
provide a further discussion in Section 4. We move a detailed discussion of related work (Section 5)
to the supplementary material, focusing on the problem formulation next.
2 Problem formulation
Consider the setting where we have k parties, each with its own private binary data $x_i \in \{0, 1\}$
generated independently. The independence assumption here is necessary because without it each
party can learn something about others, which violates differential privacy, even without revealing
any information. We discuss possible extensions to correlated sources in Section 4. Differential
privacy implicitly imposes independence in a multi-party setting. The goal of the private multi-party
computation is for each party i ? [k] to compute an arbitrary function fi : {0, 1}k ? Y of interest
by interactively broadcasting messages, while preserving the privacy of each party. There might be
a central observer who listens to all the messages being broadcasted, and wants to compute another
arbitrary function f0 : {0, 1} ? Y. The k parties are honest in the sense that once they agree on
what protocol to follow, every party follows the rules. At the same time, they can be curious, and
each party needs to ensure other parties cannot learn his bit with sufficient confidence. The privacy
constraints here are similar to the local differential privacy setting studied in [13] in the sense that
there are multiple privacy barriers, each one separating each individual party and the rest of the
world. However, the main difference is that we consider multi-party computation, where there are
multiple functions to be computed, and each node might possess a different function to be computed.
Let x = [x_1, …, x_k] ∈ {0,1}^k denote the vector of k bits, and let x_{−i} = [x_1, …, x_{i−1}, x_{i+1}, …, x_k] ∈ {0,1}^{k−1} be the vector of bits except for the i-th bit. The parties agree on an interactive protocol to achieve the goal of multi-party computation. A “transcript” is the output of the protocol, and is a random instance of all broadcasted messages until all the communication terminates. The probability that a transcript τ is broadcasted (via a series of interactive communications) when the data is x is denoted by P_{x,τ} = P(τ | x) for x ∈ {0,1}^k and τ ∈ T. Then, a protocol can be represented as a matrix describing the probability distribution over a set of transcripts T conditioned on x: P = [P_{x,τ}] ∈ [0,1]^{2^k × |T|}.
In the end, each party makes a decision on what the value of function f_i is, based on its own bit x_i and the transcript τ that was broadcasted. A decision rule is a mapping from a transcript τ ∈ T and a private bit x_i ∈ {0,1} to a decision y ∈ Y, represented by a function f̂_i(τ, x_i). We allow randomized decision rules, in which case f̂_i(τ, x_i) can be a random variable. For the central observer, a decision rule is a function of just the transcript, denoted by f̂_0(τ).
We consider two notions of accuracy: the average accuracy and the worst-case accuracy. For the i-th party, consider an accuracy measure w_i : Y × Y → ℝ (or equivalently a negative cost function) such that w_i(f_i(x), f̂_i(τ, x_i)) measures the accuracy when the function to be computed is f_i(x) and the approximation is f̂_i(τ, x_i). Then the average accuracy for this i-th party is defined as

    ACC_ave(P, w_i, f_i, f̂_i) ≜ (1/2^k) Σ_{x∈{0,1}^k} E_{f̂_i, P_{x,τ}} [ w_i(f_i(x), f̂_i(τ, x_i)) ],    (1)

where the expectation is taken over the random transcript τ distributed as P and also over any randomness in the decision function f̂_i. We want to emphasize that the input is deterministic, i.e. we impose no distribution on the input data, and the expectation is not over the data sets x. Compared to assuming a distribution over the data, this is a weaker assumption on the data, and hence makes our main result stronger. For example, if the accuracy measure is an indicator such that w_i(y, y′) = I(y = y′), then ACC_ave measures the average probability of getting the correct function output. For a given
protocol P, it takes O(2^k |T|) operations to compute the optimal decision rule:

    f*_{i,ave}(τ, x_i) = arg max_{y∈Y} Σ_{x_{−i}∈{0,1}^{k−1}} P_{x,τ} w_i(f_i(x), y),    (2)
for each i ∈ [k]. The computational cost of O(2^k |T|) for computing the optimal decision rule is unavoidable in general, since that is the inherent complexity of the problem: describing the distribution of the transcript requires the same cost. We will show that the optimal protocol requires a set of transcripts of size |T| = 2^k, and the computational complexity of the decision rule for a general function is 2^{2k}. However, for a fixed protocol, this decision rule needs to be computed only once before any message is transmitted. Further, it is also possible to find a closed-form solution for the decision rule when f has a simple structure. One example is the XOR function studied in detail in Section 3, where the optimal decision rule is as simple as evaluating the XOR of all the received bits, which requires O(k) operations. When there are multiple maximizers y, we can choose arbitrarily, and it follows that there is no gain in randomizing the decision rule for average accuracy. Similarly,
the worst-case accuracy is defined as

    ACC_wc(P, w_i, f_i, f̂_i) ≜ min_{x∈{0,1}^k} E_{f̂_i, P_{x,τ}} [ w_i(f_i(x), f̂_i(τ, x_i)) ].    (3)
For worst-case accuracy, given a protocol P, the optimal decision rule of the i-th party with a bit x_i can be computed by solving the following convex program:

    Q^{(x_i)} = arg max_{Q ∈ ℝ^{|T|×|Y|}}  min_{x_{−i}∈{0,1}^{k−1}}  Σ_{τ∈T} Σ_{y∈Y} P_{x,τ} w_i(f_i(x), y) Q_{τ,y}    (4)
    subject to  Σ_{y∈Y} Q_{τ,y} = 1 for all τ ∈ T, and Q ≥ 0.

The optimal (random) decision rule f*_{i,wc}(τ, x_i) is to output y given transcript τ according to P(y | τ, x_i) = Q^{(x_i)}_{τ,y}. This can be formulated as a linear program with |T| |Y| variables and (2^k + |T|) constraints. Again, it is possible to find a closed-form solution for the decision rule when f has a simple structure: for the XOR function, the optimal decision rule is again evaluating the XOR of all the received bits, requiring O(k) operations. For a central observer, the accuracy measures are defined similarly, and the optimal decision rule is now

    f*_{0,ave}(τ) = arg max_{y∈Y} Σ_{x∈{0,1}^k} P_{x,τ} w_0(f_0(x), y),    (5)

and for worst-case accuracy the optimal (random) decision rule f*_{0,wc}(τ) is to output y given transcript τ according to P(y | τ) = Q^{(0)}_{τ,y}, where, subject to Σ_{y∈Y} Q_{τ,y} = 1 for all τ ∈ T and Q ≥ 0,

    Q^{(0)} = arg max_{Q ∈ ℝ^{|T|×|Y|}}  min_{x∈{0,1}^k}  Σ_{τ∈T} Σ_{y∈Y} P_{x,τ} w_0(f_0(x), y) Q_{τ,y},    (6)

where w_0 : Y × Y → ℝ is the measure of accuracy for the central observer.
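As a concrete illustration, the following sketch (ours, not the authors' code) computes the optimal average-case decision rules (2) and (5) by brute force for small k; inputs x are encoded as integers whose binary digits are the bit vectors, and all function and variable names are placeholders:

    def opt_rule_central(P, f0, w0, Y, k):
        # Eq. (5): for each transcript t, pick the y maximizing sum_x P[x][t] * w0(f0(x), y).
        rules = []
        for t in range(len(P[0])):
            score = {y: sum(P[x][t] * w0(f0(x), y) for x in range(2 ** k)) for y in Y}
            rules.append(max(Y, key=score.get))
        return rules  # rules[t] is the decision on transcript t

    def opt_rule_party(P, fi, wi, Y, k, i):
        # Eq. (2): party i additionally conditions on its own bit x_i.
        rules = {}
        for xi in (0, 1):
            xs = [x for x in range(2 ** k) if (x >> i) & 1 == xi]  # fix the i-th bit
            for t in range(len(P[0])):
                score = {y: sum(P[x][t] * wi(fi(x), y) for x in xs) for y in Y}
                rules[(t, xi)] = max(Y, key=score.get)
        return rules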
Privacy is measured by differential privacy [14, 15]. Since we allow heterogeneous privacy constraints, we use ε_i to denote the desired privacy level of the i-th party. We say a protocol P is ε_i-differentially private for the i-th party if for i ∈ [k], all x_i, x′_i ∈ {0,1}, x_{−i} ∈ {0,1}^{k−1}, and all τ ∈ T,

    P(τ | x_i, x_{−i}) ≤ e^{ε_i} P(τ | x′_i, x_{−i}).    (7)

This condition ensures no adversary can infer the private data x_i with high enough confidence, no matter what auxiliary information he might have and independent of his computational power. To lighten notation, we let λ_i = e^{ε_i} and say a protocol is λ_i-differentially private for the i-th party. If the protocol is λ_i-differentially private for all i ∈ [k], then we say that the protocol is {λ_i}-differentially private for all parties.
A necessary condition on the multi-party protocols P, when the bits are generated independently of each other, is protocol compatibility [22]: conditioned on the transcript of the protocol, the input bits stay independent of each other. In our setting, input bits are deterministic, hence independent. Mathematically, a protocol P is protocol compatible if each column P^{(τ)} is a rank-one tensor, when reshaped into a k-th order tensor P^{(τ)} ∈ [0,1]^{2×2×…×2}, where

    P^{(τ)}_{x_1,…,x_k} = P_{x,τ}.    (8)

Precisely, there exist vectors u^{(1)}, …, u^{(k)} such that P^{(τ)} = u^{(1)} ⊗ ⋯ ⊗ u^{(k)}, where ⊗ denotes the standard outer product, i.e. P^{(τ)}_{i_1,…,i_k} = u^{(1)}_{i_1} ⋯ u^{(k)}_{i_k}. This is crucial in deriving the main results, and it is a well-known fact in the secure multi-party computation literature. It follows from the fact that when the bits are generated independently, all the bits are still independent conditioned on the transcript, i.e. P(x | τ) = Π_i P(x_i | τ), which follows implicitly from [30] and directly from Equation (6) of [35]. Notice that using the rank-one tensor representation of each column of the protocol P^{(τ)}, we have P(τ | x_i = 0, x_{−i}) / P(τ | x_i = 1, x_{−i}) = u^{(i)}_1 / u^{(i)}_2. It follows that P is λ_i-differentially private if and only if λ_i^{−1} u^{(i)}_2 ≤ u^{(i)}_1 ≤ λ_i u^{(i)}_2.
Randomized response. Consider the following simple protocol known as the randomized response, a term first coined by [38] and commonly used in many private communications, including the multi-party setting [31]. We will show in Section 3 that this is the optimal protocol for simultaneously maximizing the accuracy of all the parties. Each party broadcasts a randomized version of its bit, denoted by x̃_i, such that

    x̃_i = x_i  with probability λ_i/(1+λ_i),
    x̃_i = x̄_i  with probability 1/(1+λ_i),    (9)

where x̄_i is the logical complement of x_i. Each transcript can be represented by the output of the protocol, which in this case is x̃ = [x̃_1, …, x̃_k] ∈ T, where T = {0,1}^k is now the set of all broadcasted bits.
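A minimal sketch (ours) of this mechanism, assuming per-party privacy parameters eps_list with λ_i = e^{ε_i}:

    import math, random

    def randomized_response(bits, eps_list):
        # Keep each bit w.p. lam/(1+lam), flip it w.p. 1/(1+lam), lam = e^eps.
        out = []
        for x, eps in zip(bits, eps_list):
            lam = math.exp(eps)
            out.append(x if random.random() < lam / (1.0 + lam) else 1 - x)
        return out

The worst-case likelihood ratio of a released bit is (λ_i/(1+λ_i)) / (1/(1+λ_i)) = λ_i, so each broadcast bit is exactly ε_i-differentially private, matching (7).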
Accuracy maximization. Consider the problem of maximizing the average accuracy for a centralized observer with function f. Up to the scaling of 1/2^k in (1), the accuracy can be written as

    E_P[ w(f_0(x), f̂_0(τ)) ] = Σ_{x∈{0,1}^k} Σ_{y∈Y} w(f_0(x), y) Σ_{τ∈T} P_{x,τ} P(f̂_0(τ) = y),    (10)

where f̂_0(τ) denotes the randomized decision upon receiving the transcript τ. In the following we define W_x^{(y)} ≜ w(f_0(x), y) to represent the accuracy measure and Q_{τ,y} ≜ P(f̂_0(τ) = y) to represent the decision rule.
Focusing on this single central observer for the purpose of illustration, we want to design protocols P_{x,τ} and decision rules Q_{τ,y} that maximize the above accuracy. Further, this protocol has to be compatible with interactive communication, satisfying the rank-one condition discussed above, and satisfy the differential privacy condition in (7). Hence, the accuracy maximization can be formulated as follows. Given the W_x^{(y)}'s in terms of the function f_0(·) to be computed, an accuracy measure w_0(·,·), and the required privacy levels λ_i, we solve

    maximize_{P ∈ ℝ^{2^k×|T|}, Q ∈ ℝ^{|T|×|Y|}}  Σ_{x∈{0,1}^k, y∈Y} W_x^{(y)} Σ_{τ∈T} P_{x,τ} Q_{τ,y},
    subject to  P and Q are row-stochastic matrices,
                rank(P^{(τ)}) = 1 for all τ ∈ T,
                P_{(x_i,x_{−i}),τ} ≤ λ_i P_{(x′_i,x_{−i}),τ} for all i ∈ [k], x_i, x′_i ∈ {0,1}, x_{−i} ∈ {0,1}^{k−1},    (11)

where P^{(τ)} is the k-th order tensor defined from the τ-th column of the matrix P as in Equation (8). Notice that the rank constraint is only a necessary condition for a protocol to be compatible with interactive communication schemes, i.e. a valid interactive communication protocol implies the rank-one condition, but not all rank-one protocols are valid interactive communication schemes. Therefore, the above is a relaxation with a larger feasible set of protocols, but it turns out that the optimal solution of the above optimization problem is the randomized response, which is a valid (non-interactive) communication protocol. Hence, there is no loss in solving the above relaxation.
The main challenge in solving this optimization is that it is a rank-constrained tensor optimization, which is notoriously difficult. Since the rank constraint is over a k-th order tensor (a k-dimensional array) with possibly k > 2, common approaches of convex relaxation from [36] for matrices (which are 2nd-order tensors) do not apply. Further, we want to simultaneously apply similar optimizations to all the parties with different functions to be computed.

We introduce a novel transformation of the above rank-constrained optimization into a linear program in (17) and (20). The price we pay is the increased dimensionality: the LP has an infinite-dimensional decision variable. However, combined with the geometric understanding of the manifold of rank-one tensors, we can identify the exact optimal solution. We show in the next section that given a desired level of privacy {λ_i}_{i∈[k]}, there is a single universal protocol that simultaneously maximizes the accuracy for (a) all parties; (b) any functions of interest; (c) any accuracy measures; and (d) both worst-case and average-case accuracy. Together with optimal decision rules performed at each of the receiving ends, this gives the exact optimal multi-party computation scheme.
3 Main Result
We show, perhaps surprisingly, that the simple randomized response presented in (9) is the optimal protocol in a very general sense. For any desired privacy level λ_i, an arbitrary function f_i, any accuracy measure w_i, and any notion of accuracy (either average or worst case), we show that the randomized response is universally optimal. The proof of the following theorem can be found in the supplementary material.

Theorem 3.1 Let the optimal decision rule be defined as in (2) for the average accuracy and (4) for the worst-case accuracy. Then, for any λ_i ≥ 1, any function f_i : {0,1}^k → Y, and any accuracy measure w_i : Y × Y → ℝ for i ∈ [k], the randomized response for given λ_i with the optimal decision function achieves the maximum accuracy for the i-th party among all {λ_i}-differentially private interactive protocols and all decision rules. For the central observer, the randomized response with the optimal decision rule defined in (5) and (6) achieves the maximum accuracy among all {λ_i}-differentially private interactive protocols and all decision rules, for any arbitrary function f_0 and any measure of accuracy w_0.
This is a strong universal optimality result. Every party and the central observer can simultaneously
achieve the optimal accuracy using a universal randomized response. Each party only needs to know
its own desired level of privacy, its own function to be computed, and its measure of accuracy. Optimal data release and optimal decision making are naturally separated. However, it is not immediate
at all that a non-interactive scheme such as the randomized response would achieve the maximum
accuracy. The fact that interaction does not help is counter-intuitive, and might as well be true only
for the binary data scenario we consider in this paper. The key technical innovation is the convex
geometry in the proof, which does not generalize to larger alphabet case.
Once we know interaction does not help, we can make an educated guess that the randomized response should dominate over other non-interactive schemes. This intuition follows from the dominance of randomized response in the single-party setting, that was proved using a powerful operational interpretation of differential privacy first introduced in [34]. This intuition can in fact be made
rigorous, as we show in Section 7 (of our supplemental material) with a simple two-party example.
However, we want to emphasize that our main result for multi-party computation does not follow
from any existing analysis of randomized responses, in particular those seemingly similar analyses
in [26]. The challenge is in proving that interaction does not help, which requires the technological
innovations presented in this paper.
Multi-party XOR computation. For a given function and a given accuracy measure, analyzing the performance of the optimal protocol provides the exact nature of the privacy-accuracy tradeoff. Consider a scenario where a central observer wants to compute the XOR of all k bits, each of which is λ-differentially private. In this special case, we can apply our main theorem to analyze the accuracy exactly in a combinatorial form, and we provide a proof in Section A.1.
Corollary 3.1 Consider k-party computation for f_0(x) = x_1 ⊕ ⋯ ⊕ x_k, and the accuracy measure is one if correct and zero if not, i.e. w_0(0,0) = w_0(1,1) = 1 and w_0(0,1) = w_0(1,0) = 0. For any {λ}-differentially private protocol P and any decision rule f̂, the average and worst-case accuracies are bounded by

    ACC_ave(P, w_0, f_0, f̂_0) ≤ ( Σ_{i=0}^{⌊k/2⌋} C(k, 2i) λ^{k−2i} ) / (1+λ)^k ,
    ACC_wc(P, w_0, f_0, f̂_0) ≤ ( Σ_{i=0}^{⌊k/2⌋} C(k, 2i) λ^{k−2i} ) / (1+λ)^k ,

where C(k, 2i) denotes the binomial coefficient, and the equality is achieved by the randomized response and the optimal decision rules in (5) and (6). The optimal decision for both accuracies is simply to output the XOR of the received privatized bits.
This is a strict generalization of a similar result in [22], where XOR computation was studied but only for a two-party setting. In the high-privacy regime, where ε ≈ 0 (equivalently λ = e^ε ≈ 1), this implies that ACC_ave = 0.5 + 2^{−(k+1)} ε^k + O(ε^{k+1}). The leading term is due to the fact that we are considering an accuracy measure of a Boolean function. The second term of 2^{−(k+1)} ε^k captures the effect that we are essentially observing the XOR through k consecutive binary symmetric channels with flipping probability 1/(1+λ). Hence, the accuracy gets exponentially worse in k. On the other hand, if those k parties are allowed to collaborate, then they can compute the XOR in advance and only transmit the privatized version of the XOR, achieving accuracy of λ/(1+λ) = 0.5 + (1/4)ε + O(ε^3). This is always better than not collaborating, which is the bound in Corollary 3.1.
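For intuition, a small sketch (ours) that evaluates the bound of Corollary 3.1 numerically:

    from math import comb, exp

    def xor_accuracy(k, eps):
        # Optimal accuracy for k-party XOR under eps-DP randomized response.
        lam = exp(eps)
        num = sum(comb(k, 2 * i) * lam ** (k - 2 * i) for i in range(k // 2 + 1))
        return num / (1 + lam) ** k

For k = 1 this recovers λ/(1+λ), and the value decays toward the trivial 0.5 as k grows, matching the discussion above.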
4 Discussion
In this section, we discuss a few topics, each of which is interesting but non-trivial to solve in any
obvious way. Our main result is general and sharp, but we want to ask how to push it further.
Generalization to multiple bits. When each party owns multiple bits, it is possible that interactive protocols improve over the randomized response protocol. This is discussed with examples in
Section 8 (in the supplementary material).
Approximate differential privacy. A common generalization of differential privacy, known as approximate differential privacy, is to allow a small slack of δ ≥ 0 in the privacy condition [14, 15]. In the multi-party context, a protocol P is (ε_i, δ_i)-differentially private for the i-th party if for all i ∈ [k], all x_i, x′_i ∈ {0,1}, x_{−i} ∈ {0,1}^{k−1}, and all subsets T′ ⊆ T,

    P(τ ∈ T′ | x_i, x_{−i}) ≤ e^{ε_i} P(τ ∈ T′ | x′_i, x_{−i}) + δ_i.    (12)
It is natural to ask if the linear programming (LP) approach presented in this paper can be extended to identify the optimal multi-party protocol under {(ε_i, δ_i)}-differential privacy. The LP formulations of (17) and (20) heavily rely on the fact that any differentially private protocol P can be decomposed into the matrix S of scale-free patterns and the corresponding scaling factors: since the differential privacy constraints are invariant under scaling of P^{(τ)}, one can represent the scale-free pattern of the distribution with S and the scaling separately. This is no longer true for {(ε_i, δ_i)}-differential privacy, and the analysis technique does not generalize.
Correlated sources. When the data x_i's are correlated (e.g. each party observes a noisy version of the state of the world), knowing x_i reveals some information about other parties' bits. In general, revealing correlated data requires careful coordination between the multiple parties. The analysis techniques developed in this paper do not generalize to correlated data, since the crucial rank-one tensor structure of S is no longer present.
Extensions to general utility functions. A surprising aspect of the main result is that even though
the worst-case accuracy is a concave function over the protocol P , the maximum is achieved at an
extremal point of the manifold of rank-1 tensors. This suggests that there is a deeper geometric
structure of the problem, leading to possible universal optimality of the randomized response for a
broader class of utility functions. It is an interesting task to understand the geometric structure of the
problem, and to ask what class of utility functions lead to optimality of the randomized response.
Acknowledgement
This research is supported in part by NSF CISE award CCF-1422278, NSF SaTC award CNS-1527754, NSF CMMI award MES-1450848 and NSF ENG award ECCS-1232257.
References
[1] Emmanuel Abbe, Amir Khandani, and Andrew W. Lo. Privacy-preserving methods for sharing financial risk exposures. The American Economic Review, 102:65–70, 2012.
[2] Amos Beimel, Kobbi Nissim, and Eran Omri. Distributed private data analysis: Simultaneously solving how and what. In Advances in Cryptology–CRYPTO 2008, pages 451–468. Springer, 2008.
[3] Michael Ben-Or, Shafi Goldwasser, and Avi Wigderson. Completeness theorems for non-cryptographic fault-tolerant distributed computation. In Proceedings of the twentieth annual ACM symposium on Theory of computing, pages 1–10. ACM, 1988.
[4] D. Blackwell. Equivalent comparisons of experiments. The Annals of Mathematical Statistics, 24(2):265–272, 1953.
[5] A. Blum, C. Dwork, F. McSherry, and K. Nissim. Practical privacy: the SuLQ framework. In Proceedings of the twenty-fourth symposium on Principles of database systems, pages 128–138. ACM, 2005.
[6] H. Brenner and K. Nissim. Impossibility of differentially private universally optimal mechanisms. In Foundations of Computer Science, 2010 51st Annual IEEE Symposium on, pages 71–80. IEEE, 2010.
[7] J. A. Calandrino, A. Kilzer, A. Narayanan, E. W. Felten, and V. Shmatikov. "You might also like:" Privacy risks of collaborative filtering. In Security and Privacy (SP), 2011 IEEE Symposium on, pages 231–246. IEEE, 2011.
[8] K. Chaudhuri, A. Sarwate, and K. Sinha. Near-optimal differentially private principal components. In Advances in Neural Information Processing Systems, pages 989–997, 2012.
[9] K. Chaudhuri, A. D. Sarwate, and K. Sinha. A near-optimal algorithm for differentially-private principal components. Journal of Machine Learning Research, 14:2905–2943, 2013.
[10] Kamalika Chaudhuri, Claire Monteleoni, and Anand D. Sarwate. Differentially private empirical risk minimization. The Journal of Machine Learning Research, 12:1069–1109, 2011.
[11] David Chaum, Claude Crépeau, and Ivan Damgård. Multiparty unconditionally secure protocols. In Proceedings of the twentieth annual ACM symposium on Theory of computing, pages 11–19. ACM, 1988.
[12] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, 2012.
[13] J. C. Duchi, M. I. Jordan, and M. J. Wainwright. Local privacy and statistical minimax rates. In Foundations of Computer Science (FOCS), 2013 IEEE 54th Annual Symposium on, pages 429–438. IEEE, 2013.
[14] C. Dwork. Differential privacy. In Automata, Languages and Programming, pages 1–12. Springer, 2006.
[15] C. Dwork, F. McSherry, K. Nissim, and A. Smith. Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography, pages 265–284. Springer, 2006.
[16] Cynthia Dwork. Differential privacy: A survey of results. In Theory and Applications of Models of Computation, pages 1–19. Springer, 2008.
[17] Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor. Our data, ourselves: Privacy via distributed noise generation. In Advances in Cryptology–EUROCRYPT 2006, pages 486–503. Springer, 2006.
[18] Quan Geng and Pramod Viswanath. The optimal mechanism in differential privacy. arXiv preprint arXiv:1212.1186, 2012.
[19] Quan Geng and Pramod Viswanath. The optimal mechanism in differential privacy: Multidimensional setting. arXiv preprint arXiv:1312.0655, 2013.
[20] A. Ghosh, T. Roughgarden, and M. Sundararajan. Universally utility-maximizing privacy mechanisms. SIAM Journal on Computing, 41(6):1673–1693, 2012.
[21] O. Goldreich, S. Micali, and A. Wigderson. How to play any mental game. In Proceedings of the Nineteenth Annual ACM Symposium on Theory of Computing, STOC '87, pages 218–229, New York, NY, USA, 1987. ACM.
[22] V. Goyal, I. Mironov, O. Pandey, and A. Sahai. Accuracy-privacy tradeoffs for two-party differentially private protocols. In Advances in Cryptology–CRYPTO 2013, pages 298–315. Springer, 2013.
[23] Mangesh Gupte and Mukund Sundararajan. Universally optimal privacy mechanisms for minimax agents. In Proceedings of the twenty-ninth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, pages 135–146. ACM, 2010.
[24] M. Hardt and A. Roth. Beating randomized response on incoherent matrices. In Proceedings of the forty-fourth annual ACM symposium on Theory of computing, pages 1255–1268. ACM, 2012.
[25] N. Homer, S. Szelinger, M. Redman, D. Duggan, W. Tembe, J. Muehling, J. V. Pearson, D. A. Stephan, S. F. Nelson, and D. W. Craig. Resolving individuals contributing trace amounts of DNA to highly complex mixtures using high-density SNP genotyping microarrays. PLoS Genetics, 4(8):e1000167, 2008.
[26] P. Kairouz, S. Oh, and P. Viswanath. Extremal mechanisms for local differential privacy. In Advances in Neural Information Processing Systems, 2014.
[27] M. Kapralov and K. Talwar. On differentially private low rank approximation. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1395–1414. SIAM, 2013.
[28] Shiva Prasad Kasiviswanathan, Homin K. Lee, Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. What can we learn privately? SIAM Journal on Computing, 40(3):793–826, 2011.
[29] Joe Kilian. More general completeness theorems for secure two-party computation. In Proceedings of the thirty-second annual ACM symposium on Theory of computing, pages 316–324. ACM, 2000.
[30] R. Künzler, J. Müller-Quade, and D. Raub. Secure computability of functions in the IT setting with dishonest majority and applications to long-term security. In Theory of Cryptography, pages 238–255. Springer, 2009.
[31] Andrew McGregor, Ilya Mironov, Toniann Pitassi, Omer Reingold, Kunal Talwar, and Salil Vadhan. The limits of two-party differential privacy. In Foundations of Computer Science (FOCS), 2010 51st Annual IEEE Symposium on, pages 81–90. IEEE, 2010.
[32] F. McSherry and K. Talwar. Mechanism design via differential privacy. In Foundations of Computer Science, 2007. FOCS '07. 48th Annual IEEE Symposium on, pages 94–103. IEEE, 2007.
[33] A. Narayanan and V. Shmatikov. Robust de-anonymization of large sparse datasets. In Security and Privacy, 2008. SP 2008. IEEE Symposium on, pages 111–125. IEEE, 2008.
[34] Sewoong Oh and Pramod Viswanath. The composition theorem for differential privacy. arXiv preprint arXiv:1311.0776, 2013.
[35] Manoj M. Prabhakaran and Vinod M. Prabhakaran. On secure multiparty sampling for more than two parties. In Information Theory Workshop (ITW), 2012 IEEE, pages 99–103. IEEE, 2012.
[36] Benjamin Recht, Maryam Fazel, and Pablo A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.
[37] L. Sweeney. Simple demographics often identify people uniquely. Health, 671:1–34, 2000.
[38] S. L. Warner. Randomized response: A survey technique for eliminating evasive answer bias. Journal of the American Statistical Association, 60(309):63–69, 1965.
[39] Andrew C. Yao. Protocols for secure computations. In 23rd Annual Symposium on Foundations of Computer Science, pages 160–164. IEEE, 1982.
5,531 | 6,005 | Adaptive Stochastic Optimization: From Sets to Paths
Zhan Wei Lim
David Hsu
Wee Sun Lee
Department of Computer Science, National University of Singapore
{limzhanw,dyhsu,leews}@comp.nus.edu.sg
Abstract
Adaptive stochastic optimization (ASO) optimizes an objective function adaptively under uncertainty. It plays a crucial role in planning and learning under
uncertainty, but is, unfortunately, computationally intractable in general. This paper introduces two conditions on the objective function, the marginal likelihood
rate bound and the marginal likelihood bound, which, together with pointwise
submodularity, enable efficient approximate solution of ASO. Several interesting
classes of functions satisfy these conditions naturally, e.g., the version space reduction function for hypothesis learning. We describe Recursive Adaptive Coverage, a new ASO algorithm that exploits these conditions, and apply the algorithm
to two robot planning tasks under uncertainty. In contrast to the earlier submodular optimization approach, our algorithm applies to ASO over both sets and paths.
1 Introduction
A hallmark of an intelligent agent is to learn new information as the world unfolds and to improvise
by fusing the new information with prior knowledge. Consider an autonomous unmanned aerial
vehicle (UAV) searching for a victim lost in a jungle. The UAV acquires new information on the
victim's location by scanning the environment with noisy onboard sensors. How can the UAV plan
and adapt its search strategy in order to find the victim as fast as possible? This is an example of
stochastic optimization, in which an agent chooses a sequence of actions under uncertainty in order
to optimize an objective function. In adaptive stochastic optimization (ASO), the agent?s action
choices are conditioned on the outcomes of earlier choices. ASO plays a crucial role in planning
and learning under uncertainty, but it is, unfortunately, computationally intractable in general [5].
Adaptive submodular optimization provides a powerful tool for approximate solution of ASO and
has several important applications, such as sensor placement, active learning, etc. [5]. However, it
has been so far restricted to optimization over a set domain: the agent chooses a subset out of a
finite set of items. This is inadequate for the UAV search, as the agent?s consecutive choices are
constrained to form a path. Our work applies to ASO over both sets and paths.
Our work aims to identify subclasses of ASO and provide conditions that enable efficient nearoptimal solution. We introduce two conditions on the objective function, the marginal likelihood
rate bound (MLRB) and the marginal likelihood bound (MLB). They enable efficient approximation
of ASO with pointwise submodular objective functions, functions that satisfy a "diminishing return"
property. MLRB is different from adaptive submodularity; we prove that adaptive submodularity
does not imply MLRB and vice versa. While there exist functions that do not satisfy either the
adaptive submodular or the MLRB condition, all pointwise submodular functions satisfy the MLB
condition, albeit with different constants.
We propose Recursive Adaptive Coverage (RAC), a polynomial-time approximation algorithm that
guarantees near-optimal solution of ASO over either a set or a path domain, if the objective function
satisfies the MLRB or the MLB condition and is pointwise monotone submodular. Since MLRB
differs from adaptive submodularity, the new algorithm expands the set of problems that admit efficient approximate solutions, even for ASO over a set domain. We have evaluated RAC in simulation
on two robot planning tasks under uncertainty and show that RAC performs well against several
commonly used heuristic algorithms, including greedy algorithms that optimize information gain.
2 Related Work
Submodular set function optimization encompasses many hard combinatorial optimization problems in operations research and decision making. Submodularity implies a diminishing-return effect, where adding an item to a smaller set is more beneficial than adding the same item to a bigger set. For example, adding a new temperature sensor when there are few sensors helps more in mapping temperature in a building than when there are already many sensors. Submodular function optimization can be efficiently approximated using a greedy heuristic [11]. Recent works have incorporated stochasticity into submodular optimization [1, 5] and generalized the problem from set optimization to path optimization [2].

Our work builds on progress in submodular optimization on paths to solve the adaptive stochastic optimization problem on paths. Our RAC algorithm shares a similar structure and analysis with the RAId algorithm in [10], which is used to solve adaptive informative path planning (IPP) problems without noise. In fact, noiseless adaptive IPP is a special case of adaptive stochastic optimization problems on paths that satisfy the marginal likelihood rate bound condition. We can derive the
same approximation bound using the results in Section 6. Both works are inspired by the algorithm
in [8] used to solve the Adaptive Traveling Salesperson (ATSP) problem. In the ATSP problem, a
salesperson has to service a subset of locations with demand that is not known in advance. However,
the salesperson knows the prior probabilities of the demand at each location (possibly correlated)
and the goal is to find an adaptive policy to service all locations with demand.
Adaptive submodularity [5] generalizes submodularity to stochastic settings and gives logarithmic
approximation bounds using a greedy heuristic. It was also shown that no polynomial time algorithm
can compute approximateP
solution of adaptive stochastic optimization problems within a factor of
p
O(|X|1 ? ) unless P H = 2 , that is the polynomial-time hierarchy collapses to its second level [5].
Many Bayesian active learning problems can be modeled by suitable adaptive submodular objective
functions [6, 4, 3]. However, [3] recently proposed a new stochastic set function for active learning
with a general loss function that is not adaptive monotone submodular. This new objective function
satisfies the marginal likelihood bound with nontrivial constant G.
Adaptive stochastic optimization is a special case of the Partially Observable Markov Decision Process (POMDP), a mathematically principled framework for reasoning under uncertainty [9]. Despite
recent tremendous progress in offline [12] and online solvers [14, 13], most partially observable
planning problems remain hard to solve.
3 Preliminaries
We now describe the adaptive stochastic optimization problem and use the UAV search-and-rescue task to illustrate our definitions. Let X be the set of actions and let O be the set of observations. The agent operates in a world whose events are determined by a static state called the scenario, denoted as φ : X → O. When the agent takes an action x ∈ X, it receives an observation o = φ(x) ∈ O that is determined by an initially unknown scenario φ. We denote a random scenario as Φ and use a prior distribution p(φ) := P[Φ = φ] over the scenarios to represent our prior knowledge of the world.

For example, in the UAV task, the actions are flying to various locations, the observations are the possible sensor readings, and a scenario is a victim's position. When the UAV flies to a particular location x, it observes its sensor readings o, which depend on the actual victim's position φ. Prior knowledge about the victim's position can be encoded as a probability distribution over the possible victim positions.
After taking actions x_1, x_2, … and receiving observations o_1, o_2, … after each action, the agent has a history ψ = {(x_1, o_1), (x_2, o_2), …}. We say that a scenario φ is consistent with a history ψ when the actions and corresponding observations of the history never contradict φ, i.e. φ(x) = o for all (x, o) ∈ ψ. We denote this by φ ∼ ψ. We can also say that a history ψ′ is consistent with another history ψ if dom(ψ′) ⊇ dom(ψ) and ψ′(x) = ψ(x) for all x ∈ dom(ψ), where dom(ψ) is the set of actions taken in ψ. For example, a victim's position has not been ruled out given the sensor readings at various locations when φ ∼ ψ.
An agent's goal can be characterized by a stochastic set function f : 2^X × O^X → ℝ, which measures progress toward the goal given the actions taken and the true scenario. In this paper, we assume that f is pointwise monotone on a finite domain, i.e., f(A, φ) ≤ f(B, φ) for any φ and for all A ⊆ B ⊆ X. An agent achieves its goal and covers f when f has maximum value after taking actions S ⊆ X given it is in scenario φ, i.e., f(S, φ) = f(X, φ). For example, the objective function can be the sum of prior probabilities of impossible victim positions given a history. The UAV finds the victim when all positions except the true victim's position are impossible.

An agent's strategy for adaptively taking actions is a policy π that maps a history to its next action. A policy terminates when there is no next action to take for a given history. We say that a policy π covers the function f when the agent executing π always achieves its goal upon termination. That is, f(dom(ψ), φ) = f(X, φ) for all scenarios φ ∼ ψ, where ψ is the history when the agent executes π. For example, a policy π tells the UAV where to fly next given the locations visited and whether it obtained a positive sensor reading at those locations, and it covers the objective function when the UAV executing it always finds the victim.
Formally, an adaptive stochastic optimization problem on paths consists of a tuple (X, d, p, O, r, f): the set of actions X is the set of locations the agent can visit, r is the starting location of the agent, and d is a metric that gives the distance between any pair of locations x, x′ ∈ X. The cost of a policy π, C(π, φ), is the length of the path starting from location r traversed by the agent until the policy terminates, when presented with scenario φ, e.g., the distance traveled by the UAV executing policy π for a particular true victim position. We want to find a policy π that minimizes the expected cost of traveling to cover the function. We formally state the problem:

Problem 1. Given an adaptive stochastic optimization problem on paths I = (X, d, p, O, r, f), compute an adaptive policy that minimizes the expected cost

    C(π) = E[C(π, Φ)] = Σ_φ C(π, φ) p(φ),    (1)

subject to f(dom(ψ), φ′) = f(X, φ′) for all φ′, where ψ is the history encountered when executing π on φ′.
Adaptive stochastic optimization problems on sets can be formally defined by a tuple (X, c, p, O, f). The set of actions X is a set of items that an agent may select. Instead of a distance metric, the cost of selecting an item is defined by a cost function c : X → ℝ, and the cost of a policy is C(π, φ) = Σ_{x∈S} c(x), where S is the subset of items selected by π when presented with scenario φ.
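To make the notation concrete, here is a small sketch (ours, not from the paper) of scenarios, histories, and the consistency relation φ ∼ ψ; scenarios are dictionaries mapping actions to observations, and the prior is a list of (scenario, probability) pairs:

    def consistent(scenario, history):
        # phi ~ psi: the scenario never contradicts any (action, observation) pair.
        return all(scenario[x] == o for (x, o) in history)

    def marginal_likelihood(history, prior):
        # p(psi) = P[Phi ~ psi]: total prior mass of consistent scenarios.
        return sum(p for scenario, p in prior if consistent(scenario, history))

    prior = [({'x1': 1, 'x2': 0}, 0.7), ({'x1': 0, 'x2': 0}, 0.3)]
    assert marginal_likelihood([('x2', 0)], prior) == 1.0
    assert marginal_likelihood([('x1', 1)], prior) == 0.7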
4 Classes of Functions
This section introduces the classes of objective functions for adaptive stochastic optimization problems and gives the relationship between them.
Given a finite set X and a function on subsets of X, f : 2^X → ℝ, the function f is submodular if f(A) + f(B) ≥ f(A∪B) + f(A∩B) for all A, B ⊆ X. Let f(S, φ) be a stochastic set function. If f(S, φ) is submodular for each fixed scenario φ ∈ O^X, then f is pointwise submodular.

Adaptive submodularity and monotonicity generalize submodularity and monotonicity to stochastic settings where we receive random observations after selecting each item [6]. We define the expected marginal value of an item x given a history ψ, Δ(x|ψ), as Δ(x|ψ) = E[ f(dom(ψ) ∪ {x}, Φ) − f(dom(ψ), Φ) | Φ ∼ ψ ]. A function f : 2^X × O^X → ℝ is adaptive monotone with respect to a prior distribution p(φ) if, for all ψ such that P[Φ ∼ ψ] > 0 and all x ∈ X, it holds that Δ(x|ψ) ≥ 0, i.e. the expected marginal value of any fixed item is nonnegative. Function f is adaptive submodular with respect to a prior distribution p(φ) if, for all ψ and ψ′ such that ψ′ ∼ ψ and for all x ∈ X \ dom(ψ′), it holds that Δ(x|ψ) ≥ Δ(x|ψ′), i.e. the expected marginal value of any fixed item does not increase as more items are selected. A function can be adaptive submodular with respect to a certain distribution p but not be pointwise submodular. However, it must be pointwise submodular if it is adaptive submodular with respect to all distributions.
We denote by f̄(S, ψ) = min_{φ∼ψ} f(S, φ) the worst-case value of f given a history ψ, and by p(ψ) := P[Φ ∼ ψ] the marginal likelihood of a history. The marginal likelihood rate bound (MLRB) condition requires a function f such that: for all ψ′ ∼ ψ, if p(ψ′) ≤ 0.5 p(ψ), then

    Q − f̄(dom(ψ′), ψ′) ≤ (1/K) ( Q − f̄(dom(ψ), ψ) ),    (2)

except for scenarios already covered, where K > 1 and Q ≥ max_φ f(X, φ) is a constant upper bound for the maximum value of f over all scenarios.
Intuitively, this condition means that the worst case remaining objective value decreases by a constant fraction whenever the marginal likelihood of history decreases by at least half.
Example: The version space reduction function V with an arbitrary prior is adaptive submodular and monotone [5]. Furthermore, it satisfies the MLRB. The version space reduction function V is defined as

    V(S, φ) = 1 − Σ_{φ′ ∼ φ(S)} p(φ′),    (3)

for all scenarios φ and S ⊆ X, where φ(S) gives the history of visiting the locations x in S when the scenario is φ. The version space reduction function is often used for active learning, where the true hypothesis is identified once all the scenarios are covered. We present the proof that the version space reduction function satisfies the MLRB condition (and all other proofs) in the supplementary material.
Proposition 1. The version space function V satisfies the MLRB with constants Q = 1 and K = 2.
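A direct sketch (ours) of (3), reusing the conventions above (prior as a list of (scenario, probability) pairs, scenarios as dictionaries from locations to observations):

    def version_space_reduction(S, phi, prior):
        # V(S, phi) = 1 - prior mass of scenarios still consistent with phi(S).
        history = [(x, phi[x]) for x in S]
        surviving = sum(p for phi2, p in prior
                        if all(phi2[x] == o for (x, o) in history))
        return 1.0 - surviving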
The following proposition teases apart the relationship between the MLRB condition and adaptive
submodularity.
Proposition 2. Adaptive submodularity does not imply the MLRB condition, and vice versa.
The marginal likelihood bound (MLB) condition requires that there exists some constant G such that for all ψ and all scenarios φ ∼ ψ,

    f(X, φ) − f̄(dom(ψ), ψ) ≤ G · p(ψ).    (4)

In other words, the worst-case remaining objective value must be less than the marginal likelihood of the history multiplied by some constant G. The quality of our solution depends on the constant G: the smaller the constant G, the better the approximation bound.

We can make any adaptive stochastic optimization problem satisfy the MLB with a large enough constant G. To trivially ensure the bound of MLB, we set G = Q · 1/η, where η = min_φ p(φ). Hence, Q ≤ G · p(ψ) unless we have visited all locations and covered the function by definition.
Example: The version space reduction function V can be interpreted as the expected 0–1 loss of a random scenario Φ′ drawn from the prior differing from the true scenario φ. The loss is counted as one whenever φ′ ≠ φ. For example, a pair of scenarios that differ in the observation at one location has the same loss of 1 as another pair that differs in all observations. Thus, it can be useful to assign different losses to different pairs of scenarios with a general loss function. The generalized version space reduction function is defined as f_L(S, φ) = E_{Φ′}[ L(φ, Φ′) 1(φ(S) ≠ Φ′(S)) ], where 1(·) is an indicator function and L : O^X × O^X → ℝ≥0 is a general loss function that satisfies L(φ′, φ) = L(φ, φ′) and L(φ, φ′) = 0 if φ = φ′. The generalized version space reduction function is not adaptive submodular [3] and does not satisfy the MLRB condition. However, it satisfies the MLB condition with a non-trivial constant G.

Proposition 3. The generalized version space reduction function f_L satisfies MLB with G = max_{φ,φ′} L(φ, φ′).
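A matching sketch (ours) of f_L, with the same conventions; `loss` is any symmetric loss with loss(phi, phi) = 0:

    def gen_version_space_reduction(S, phi, prior, loss):
        # f_L(S, phi) = E[ L(phi, Phi') * 1{ phi and Phi' disagree somewhere on S } ].
        history = [(x, phi[x]) for x in S]
        return sum(p * loss(phi, phi2) for phi2, p in prior
                   if any(phi2[x] != o for (x, o) in history))

With the 0–1 loss, loss(a, b) = float(a != b), this reduces to the version space reduction function above.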
5 Algorithm
Adaptive planning is computationally hard due to the need to consider every possible observation after each action. RAC assumes that it always receives the most likely observation, in order to simplify adaptive planning. RAC is a recursive algorithm that partially covers the function in each step and repeats on the residual function until the entire function is covered.

In each recursive step, RAC uses the most-likely-observation assumption to transform the adaptive stochastic optimization problem into a submodular orienteering problem, generates a tour, and traverses it. If the assumption is true throughout the tour, then RAC achieves the required partial coverage. Otherwise, RAC receives some observation that has probability less than half (since only the most likely observation has probability at least half), the marginal likelihood of the history decreases by at least half, and the MLRB and MLB conditions ensure that substantial progress is made towards covering the function.
Submodular orienteering takes a submodular function g : 2^X → ℝ and a metric on X, and gives the minimum-cost path τ that covers the function g, i.e. such that g(τ) = g(X). We now describe the submodular orienteering problem used in each recursive step. Given the current history ψ, we construct a restricted set of location-observation pairs, Z = {(x, o) : (x, o) ∉ ψ, o is the most likely observation at x given ψ}. Using ideas from [7], we construct a submodular function ḡ_η : 2^Z → ℝ to be used in the submodular orienteering problem. Upon completion of the recursive step, we would like the function to be either covered or have value at least η for all scenarios consistent with ψ ∪ Z′, where Z′ is the selected subset of Z. We first restrict to the subset of scenarios that are consistent with ψ. To simplify, we transform the function so that its maximum value for every φ is at least η, by defining f_η(S, φ) = f(S, φ) + (η − f(X, φ)) whenever f(X, φ) < η, and f_η(S, φ) = f(S, φ) otherwise. For Z′ ⊆ Z, we now define g_η(Z′, φ) = f_η(dom(ψ ∪ Z′), φ) if Z′ is consistent with φ, and g_η(Z′, φ) = f_η(X, φ) otherwise. Finally, we construct the submodular function ḡ_η(Z′) = (1/|Φ_ψ|) Σ_{φ∈Φ_ψ} min(η, g_η(Z′, φ)), where Φ_ψ denotes the set of scenarios consistent with ψ. These constructions have the following properties, which guarantee the effectiveness of the recursive steps of RAC.
Proposition 4. Let f be a pointwise monotone submodular function. Then g_η is pointwise monotone submodular and ḡ_η is monotone submodular. In addition, ḡ_η(Z′) ≥ η if and only if f is either covered or has value at least η for all scenarios consistent with ψ ∪ Z′.

We can replace ḡ_η by a simpler function if f satisfies a minimal dependency property, where the value of the function f depends only on the history, i.e. f(dom(ψ), φ′) = f(dom(ψ), φ) for all φ, φ′ ∼ ψ. We define a new submodular set function g_η^m(Z′) = g_η(Z′, Z ∪ ψ).

Proposition 5. When f satisfies minimal dependency, g_η^m(Z′) ≥ η implies ḡ_η(Z′) ≥ η.
assumption which is bound to be wrong eventually. RAC uses two different mechanisms for hedging.
For MLRB, instead of requiring complete coverage, we solve partial coverage using a submodular
?
path optimization problem g(1
(1 1/K)Q for all consistent scenarios under
1/K)Q so that f (S)
the most likely observation assumption in each recursive step. For MLB, we solve submodular
?
orienteering for complete coverage of gQ
but also solve for the version space reduction function
?
with 0.5 as the target, V0.5 , as a hedge against over-commitment by the first tour when the function
is not well aligned with the probability of observations. The cheaper tour is then traversed by RAC
in each recursive step.
We define the informative observation set ?x for every location x 2 X: ?x = { o | p(o|x) ? 0.5}.
RAC traverses the tour and adaptively terminates when it encounters an informative observation.
Subsequent recursive calls work on the residual function f 0 and normalized prior p0 . Let be
the history encountered so far just before the recursive call, for any set S
dom( ) f 0 (S, ) =
f (S, ) f (dom( ), ). We assume that function f is integer-valued. The recursive step is repeated
until the residual value Q0 = 0. We give the pseudocode of RAC in Algorithm 1. We give details of
S UBMODULAR PATH procedure and prove its approximation bound in supplementary material.
Algorithm 1 RAC
procedure RECURSE-RAC(p, f, Q)
    if max_{φ : p(φ)>0} f(X, φ) = 0 then
        return
    τ ← GENTOUR(p, f, Q)
    ψ ← EXECUTEPLAN(τ)
    p′(φ) ← p(ψ | φ) p(φ) / p(ψ)
    f′(Y, φ) ← f(Y, φ) − f(dom(ψ), φ)
    Q′ ← Q − min_{φ∼ψ} f(dom(ψ), φ)
    RECURSE-RAC(p′, f′, Q′)

procedure EXECUTEPLAN(τ)
    repeat
        Visit next location x in τ and observe o.
    until o ∈ Ω_x or end of tour.
    Move to location x_t = r.
    return history encountered ψ.

procedure GENTOUR(p, f, Q)
    if f satisfies MLB then
        τ_f ← SUBMODULARPATH(ḡ_Q)
        if max_φ p(φ) ≤ 0.5 then
            τ_vs ← SUBMODULARPATH(V̄_{0.5})
            τ ← arg min_{τ′ ∈ {τ_f, τ_vs}} W(τ′)
        else
            τ ← τ_f
    else
        τ ← SUBMODULARPATH(ḡ_{(1−1/K)Q})
    return τ, where τ = (x_0, x_1, …, x_t) and x_0 = x_t = r
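For readers who prefer code, a minimal skeleton (ours, not the authors' implementation) of the recursion; every problem-specific piece (orienteering solver, tour cost, plan execution, Bayes update, residual function) is passed in as a callable and is assumed rather than specified here:

    def recurse_rac(prior, f, Q, K, mlb, solve_path, cost, execute_plan,
                    condition, residual, covered, all_covered):
        if all_covered(prior, f):          # every surviving scenario is covered
            return
        if mlb:                            # MLB: full coverage of g_bar_Q ...
            tour = solve_path('g_bar', Q)
            if max(p for _, p in prior) <= 0.5:
                alt = solve_path('V_bar', 0.5)   # ... hedged with V_bar_0.5
                tour = min(tour, alt, key=cost)
        else:                              # MLRB: partial coverage target
            tour = solve_path('g_bar', (1.0 - 1.0 / K) * Q)
        psi = execute_plan(tour)           # stops early at an informative observation
        recurse_rac(condition(prior, psi), residual(f, psi),
                    Q - covered(f, psi), K, mlb, solve_path, cost,
                    execute_plan, condition, residual, covered, all_covered)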
6 Analysis
We give the performance guarantees for applying RAC to adaptive stochastic optimization problem
on paths that satisfy MLRB and MLB.
Theorem 1. Assume that f is an integer-valued pointwise submodular monotone function. If f satisfies the MLRB condition, then for any constant ε > 0 and any instance of the adaptive stochastic optimization problem on paths optimizing f, RAC computes a policy π in polynomial time such that

    C(π) = O( (log|X|)^{2+ε} (log Q)(log_K Q) ) C(π*),

where Q and K > 1 are constants that satisfy Equation (2).
Theorem 2. Assume that the prior probability distribution p is represented as non-negative integers with Σ_φ p(φ) = P, and that f is an integer-valued pointwise submodular monotone function. If f satisfies MLB, then for any constant ε > 0 and any instance of the adaptive stochastic optimization problem on paths optimizing f, RAC computes a policy π in polynomial time such that

    C(π) = O( (log|X|)^{2+ε} (log P + log Q)(log G) ) C(π*),

where Q = max_φ f(X, φ).
For adaptive stochastic optimization problems on subsets, we achieve tighter approximation bounds
by replacing the bound of submodular orienteering with greedy submodular set cover.
Theorem 3. Assume f is an integer-valued pointwise submodular and monotone function. If f satisfies the MLRB condition, then for an instance of the adaptive stochastic optimization problem on subsets optimizing f, RAC computes a policy π in polynomial time such that
C(π) = 4(ln Q + 1)(log_K Q + 1) · C(π*),
where Q and K > 1 are constants that satisfy Equation (2).
Theorem 4. Assume f is an integer-valued pointwise submodular and monotone function and let δ = min_ψ p(ψ). If f satisfies the MLB condition, then for an instance of the adaptive stochastic optimization problem on subsets optimizing f, RAC computes a policy π in polynomial time such that
C(π) = 4(ln(1/δ) + ln Q + 2)(log G + 1) · C(π*),
where Q = max_ψ f(X, ψ).
7
Application: Noisy Informative Path Planning
In this section, we apply RAC to solve adaptive informative path planning (IPP) problems with noisy
observations. We reduce an adaptive noisy IPP problem to an Equivalence Class Determination
(ECD) problem [6] and apply RAC to solve it near-optimally using an objective function that satisfies the MLRB condition. We evaluate this approach on two IPP tasks with noisy observations.
In an informative path planning (IPP) problem, an agent seeks a path to sense and gather information from its environment. An IPP problem is specified as a tuple I = (X, d, H, p_H, O, Z_H, r); the definitions of X, d, O, r are the same as in the adaptive stochastic optimization problem on paths. In addition, there is a finite set of hypotheses H and a prior probability over them, p(h). We also have a set of probabilistic observation functions Z_H = { Z_x | x ∈ X }, with one observation function Z_x(h, o) = p(o | x, h) for each location x. The goal of the IPP problem is to identify the true hypothesis.
7.1
Equivalence Class Determination Problem
An Equivalence Class Determination (ECD) problem consists of a set of hypotheses H and a set of
equivalence classes {H_1, H_2, ..., H_m} that partition H. Its goal is to identify which equivalence class the true hypothesis lies in by moving to locations and making observations with the minimum expected movement cost. The ECD problem has been applied to noisy Bayesian active learning to achieve near-optimal performance. A noisy adaptive IPP problem can also be reduced to an ECD instance when it is always possible to identify the true hypothesis in the IPP problem.
To differentiate between the equivalence classes, we use the Gibbs error objective function (called the edge-cutting function in [6]). The idea is to consider the ambiguities between pairs of hypotheses in different equivalence classes, and to visit locations and make observations to disambiguate between them. The set of pairs of hypotheses in different classes is E = ∪_{1≤i<j≤m} { {h′, h″} : h′ ∈ H_i, h″ ∈ H_j }. We disambiguate a pair {h′, h″} when we make an observation o at a location x and either h′ or h″ is inconsistent with the observation, i.e., Z′_x(h′, o) = 0 or Z′_x(h″, o) = 0. The set of pairs disambiguated by visiting a location x when hypothesis h ∈ H′ is true is given by
[Figure 1: UAV Search and Rescue: grid map showing the starting location, safe zone, and true target location; a long-range sensor detects the survivor anywhere in a 3 × 3 area and a short-range sensor detects the survivor only in the current grid cell (sensing costs c = 1 and c = 4).]
[Figure 2: Grasp the cup with a handle on top; the side view (left) and the top view (right).]
E_x(h) = { {h′, h″} : Z′_x(h, o) = 1, Z′_x(h′, o) = 0 or Z′_x(h″, o) = 0 }. We define a weight function w : E → R≥0 as w({h′, h″}) = p′(h′) · p′(h″). We can now define the Gibbs error objective function: f_GE(Y, h) = W(∪_{x∈Y} E_x(h)), where W(E′) = Σ_{e∈E′} w(e), Y is the set of locations visited, and h ∈ H′.
Proposition 6. The Gibbs error function f_GE is pointwise submodular and monotone. In addition, it satisfies condition MLRB with constants Q = W(E) = 1 − Σ_{i=1}^m (p(H_i))², the total weight of ambiguous pairs of hypotheses, and K = 2.
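A small numerical sketch of the constant Q in Proposition 6: assuming the prior mass of each equivalence class has already been aggregated (an assumption of this sketch), the quantity 1 − Σ_i p(H_i)² is a few lines.

import numpy as np

# Gibbs error of a partition {H_1, ..., H_m} under prior p, as in
# Proposition 6: Q = W(E) = 1 - sum_i p(H_i)^2. `class_priors` holds
# the total prior mass of each equivalence class.
def gibbs_error(class_priors):
    class_priors = np.asarray(class_priors, dtype=float)
    assert np.isclose(class_priors.sum(), 1.0)
    return 1.0 - np.sum(class_priors ** 2)

# Example: three classes with masses 0.5, 0.3, 0.2
print(gibbs_error([0.5, 0.3, 0.2]))   # 0.62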
The first step in reducing an adaptive noisy IPP instance I = (X, d, H, p, O, Z, r) to an ECD instance E is to create a noiseless IPP problem I′ = (X, d, H′, p′, O, Z′, r) by creating a hypothesis for every possible observation vector. Each hypothesis h′ ∈ H′ is an observation vector h′ = (o_1, o_2, ..., o_|X|), and the new hypothesis space is H′ = O^|X|. Next, for each hypothesis h_i ∈ H, we create an equivalence class H_i = { (o_1, o_2, ..., o_|X|) : Π_{j=1}^{|X|} Z_{x_j}(h_i, o_j) > 0 } that consists of all observation vectors h′ = (o_1, o_2, ..., o_|X|) ∈ H′ that are possible under hypothesis h_i. When we can always identify the true underlying hypothesis h ∈ H, the equivalence classes form a partition of the set H′.
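The following is a minimal sketch of this reduction (our own illustration, feasible only for tiny instances since |H′| = |O|^|X| grows exponentially); Z[h][x][o] is assumed to hold p(o | x, h), and a vector may appear in several classes when identification is not always possible.

import itertools

# Enumerate all observation vectors H' = O^|X| and assign each vector to
# the equivalence class of every hypothesis it is compatible with.
def equivalence_classes(Z, locations, observations, hypotheses):
    classes = {h: [] for h in hypotheses}
    for vec in itertools.product(observations, repeat=len(locations)):
        for h in hypotheses:
            # vec is possible under h if every coordinate has positive probability
            if all(Z[h][x][o] > 0 for x, o in zip(locations, vec)):
                classes[h].append(vec)
    return classes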
7.2
Experiments
We evaluate RAC in simulation on two noisy IPP tasks modified from [10]. We highlight the modifications and give the full description in the supplementary material. In a variant of the UAV search and rescue task (see Figure 1), there is a safe zone (marked grey in Figure 1) in which the victim is deemed to be safe if we know that he is in it; otherwise we need to know the exact location of the victim. The equivalence classes for this task are the safe zone and every location outside of it. Furthermore, the long range sensor may report a wrong reading with probability 0.03.
In a noisy variant of the grasping task, the laser range finder has a 0.85 chance of detecting the correct discretized value x, a 0.05 chance of ±1 errors each, and a 0.025 chance of ±2 errors each. The robot gripper is fairly robust to estimation error of the cup handle's orientation. For each cup, we partition the cup handle orientation into regions of 20 degrees each. We only need to know the region that contains the cup handle. The equivalence classes here are the regions. However, it is not always possible to identify the true region due to observation noise. We can still reduce to an ECD problem by associating each observation vector with its most likely equivalence class.
We now describe our baseline algorithms. Defining information gain as the reduction in Shannon entropy of the equivalence classes, the information gain (IG) algorithm greedily picks the location that maximizes the expected information gain, where the expectation is taken over all possible observations at the location. To account for movement cost, the information-gain-per-cost (IG-Cost) algorithm greedily picks the location that maximizes the expected information gain per unit movement cost. Both IG and IG-Cost do not reason over the long term but achieve limited adaptivity by replanning in each step. The Sampled-RAId algorithm is as described in [10].
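To make the IG-Cost baseline concrete, here is a minimal sketch of one greedy selection step; posterior_after, prob_obs, and cost are hypothetical caller-supplied functions of the surrounding model, not part of the referenced implementations.

import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# One step of IG-Cost: pick the location maximizing expected reduction in
# entropy of the equivalence classes per unit movement cost from the
# current position.
def ig_cost_step(locations, class_prior, prob_obs, posterior_after, cost, observations):
    h0 = shannon_entropy(class_prior)
    def gain_per_cost(x):
        expected_h = sum(prob_obs(o, x) * shannon_entropy(posterior_after(x, o))
                         for o in observations)
        return (h0 - expected_h) / cost(x)
    return max(locations, key=gain_per_cost)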
We evaluate IG, IG-Cost, Sampled-RAId, and RAC with the version space reduction (RAC-V) and Gibbs error (RAC-GE) objectives. RAC-GE has theoretical performance guarantees for the noisy adaptive IPP problem. Under the MLRB condition, RAC-V can also be shown to have a similar performance bound. However, RAC-GE optimizes the target function directly, and we expect that optimizing the target function directly would usually give better performance in practice. Even though the version space reduction function and the Gibbs error function are adaptive submodular, the greedy policy in [5] is not applicable, as the movement cost per step depends on the paths and is not fixed. If we ignore movement cost, a greedy policy on the version space reduction function is equivalent to generalized binary search, which is equivalent to IG [15] for the UAV task, where the prior is uniform and there are two observations.
We set all algorithms to terminate when the Gibbs error of the equivalence classes is less than ε = 10^{-5}. The Gibbs error corresponds to the exponentiated Rényi entropy (order 2) and also to the prediction error of a Gibbs classifier that predicts by sampling a hypothesis from the prior. We run 1000 trials with the true hypothesis sampled randomly from the prior for the UAV search task and 3000 trials for the grasping task, as its variance is higher. For Sampled-RAId, we set the number of samples to be three times the number of hypotheses.
For performance comparison, we pick 15 different thresholds (starting from 1 × 10^{-5} and doubling at each step) for the Gibbs error of the equivalence classes and compute the average cost incurred by each algorithm to reduce the Gibbs error to below each threshold level ε. We plot the average cost with 95% confidence intervals for the two IPP tasks in Figures 3 and 4. For the grasping task, we omit trials where the minimum achievable Gibbs error is greater than ε when we compute the average cost for that specific ε value. For readability, we omit results for IG from the plots when it is worse than the other algorithms by a large margin, which is the case for all of IG in the grasping task. From our experiments, RAC-GE has the lowest average cost for both tasks at almost every ε. RAC-V has very close results, while the other algorithms, Sampled-RAId, IG-Cost and IG, do not perform as well on both the UAV search and grasping tasks.
[Figure 3: UAV search and rescue: average cost (with 95% confidence intervals) vs. Gibbs error for RAC-GE, RAC-V, Sampled-RAId, IG-Cost, and IG.]
[Figure 4: Grasping: average cost (with 95% confidence intervals) vs. Gibbs error for RAC-GE, RAC-V, Sampled-RAId, and IG-Cost.]
Conclusion
We study approximation algorithms for adaptive stochastic optimization over both sets and paths.
We give two conditions on pointwise monotone submodular functions that are useful for understanding the performance of approximation algorithms on these problems: the MLB and the MLRB. Our
algorithm, RAC, runs in polynomial time with an approximation ratio that depends on the constants
characterizing these two conditions. The results extend known results for adaptive stochastic optimization problems on sets to paths, and enlarge the class of functions known to be efficiently approximable for both problems. We apply the algorithm to two adaptive informative path planning
applications with promising results.
Acknowledgement. This work is supported in part by NUS AcRF grant R-252-000-587-112, the National Research Foundation Singapore through the SMART Phase 2 Pilot Program (Subaward Agreement No. 09), and the US Air Force Research Laboratory under agreement number FA2386-15-1-4010.
References
[1] Arash Asadpour, Hamid Nazerzadeh, and Amin Saberi. Stochastic submodular maximization. In Internet and Network Economics, pages 477–489. 2008.
[2] Gruia Calinescu and Alexander Zelikovsky. The polymatroid Steiner problems. Journal of Combinatorial Optimization, 9(3):281–294, 2005.
[3] Nguyen Viet Cuong, Wee Sun Lee, and Nan Ye. Near-optimal adaptive pool-based active learning with general loss. In Proc. Uncertainty in Artificial Intelligence, 2014.
[4] Nguyen Viet Cuong, Wee Sun Lee, Nan Ye, Kian Ming A. Chai, and Hai Leong Chieu. Active learning for probabilistic hypotheses using the maximum Gibbs error criterion. In Advances in Neural Information Processing Systems (NIPS), 2013.
[5] Daniel Golovin and Andreas Krause. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. J. Artificial Intelligence Research, 42(1):427–486, 2011.
[6] Daniel Golovin, Andreas Krause, and Debajyoti Ray. Near-optimal Bayesian active learning with noisy observations. In Advances in Neural Information Processing Systems (NIPS), pages 766–774, 2010.
[7] Andrew Guillory and Jeff Bilmes. Interactive submodular set cover. In International Conference on Machine Learning (ICML), Haifa, Israel, 2010.
[8] Anupam Gupta, Viswanath Nagarajan, and R. Ravi. Approximation algorithms for optimal decision trees and adaptive TSP problems. In Samson Abramsky, Cyril Gavoille, Claude Kirchner, Friedhelm Meyer auf der Heide, and Paul G. Spirakis, editors, Automata, Languages and Programming, number 6198 in Lecture Notes in Computer Science, pages 690–701. Springer Berlin Heidelberg, January 2010.
[9] Leslie Pack Kaelbling, Michael L. Littman, and Anthony R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101:99–134, January 1998.
[10] Zhan Wei Lim, David Hsu, and Wee Sun Lee. Adaptive informative path planning in metric spaces. In Workshop on the Algorithmic Foundations of Robotics, 2014.
[11] George L. Nemhauser, Laurence A. Wolsey, and Marshall L. Fisher. An analysis of approximations for maximizing submodular set functions–I. Mathematical Programming, 14(1):265–294, 1978.
[12] Sylvie C. W. Ong, Shao Wei Png, David Hsu, and Wee Sun Lee. Planning under uncertainty for robotic tasks with mixed observability. Int. J. Robotics Research, 29(8):1053–1068, 2010.
[13] David Silver and Joel Veness. Monte-Carlo planning in large POMDPs. Advances in Neural Information Processing Systems (NIPS), 2010.
[14] Adhiraj Somani, Nan Ye, David Hsu, and Wee Sun Lee. DESPOT: Online POMDP planning with regularization. In Advances in Neural Information Processing Systems (NIPS), pages 1772–1780, 2013.
[15] Alice X. Zheng, Irina Rish, and Alina Beygelzimer. Efficient test selection in active diagnosis via entropy approximation. Proc. Uncertainty in Artificial Intelligence, 2005.
5,532 | 6,006 | Learning Structured Densities via Infinite
Dimensional Exponential Families
Mladen Kolar
University of Chicago
[email protected]
Siqi Sun
TTI Chicago
[email protected]
Jinbo Xu
TTI Chicago
[email protected]
Abstract
Learning the structure of a probabilistic graphical model is a well-studied problem in the machine learning community due to its importance in many applications. Current approaches are mainly focused on learning the structure under restrictive parametric assumptions, which limits the applicability of these methods.
In this paper, we study the problem of estimating the structure of a probabilistic
graphical model without assuming a particular parametric model. We consider
probabilities that are members of an infinite dimensional exponential family [4],
which is parametrized by a reproducing kernel Hilbert space (RKHS) H and its
kernel k. One difficulty in learning nonparametric densities is the evaluation of
the normalizing constant. In order to avoid this issue, our procedure minimizes
the penalized score matching objective [10, 11]. We show how to efficiently minimize the proposed objective using existing group lasso solvers. Furthermore, we
prove that our procedure recovers the graph structure with high probability under
mild conditions. Simulation studies illustrate the ability of our procedure to recover
the true graph structure without the knowledge of the data generating process.
1
Introduction
Undirected graphical models, or Markov random fields [13], have been extensively studied and applied in fields ranging from computational biology [15, 28] to natural language processing [16, 20]
and computer vision [9, 17]. In an undirected graphical model, conditional independence assumptions underlying a probability distribution are encoded in the graph structure. Furthermore, the joint
probability density function can be factorized according to the cliques of the graph [14]. One of the
fundamental problems in the literature is learning the structure of a graphical model given an i.i.d.
sample from an unknown distribution. A lot of work has been done under specific parametric assumptions on the unknown distribution. For example, in Gaussian Graphical Models the structure of
the graph is encoded by the sparsity pattern of the precision matrix [6, 30]. Similarly, in the context
of exponential family graphical models, where the node conditional distribution given all the other
nodes is a member of an exponential family, the structure is described by the non-zero coefficients
[29]. Most existing approaches to learn the structure of a high-dimensional undirected graphical
model are based on minimizing a penalized loss objective, where the loss is usually a log-likelihood
or a composite likelihood and the penalty induces sparsity on the resulting parameter vector [see,
for example, 6, 12, 18, 22, 24, 29, 30]. In addition to sparsity inducing penalties, methods that
use other structural constraints have been proposed. For example, since many real-world networks
are scale-free [1], several algorithms are designed specifically to learn structure of such networks
1
[5, 19]. Graphs tend to have cluster structure and learning simultaneously the structure and cluster
assignment has been investigated [2, 27].
In this paper, we focus on learning the structure of a pairwise graphical models without assuming
a parametric class of models. The main challenge in estimating nonparametric graphical models
is computation of the log normalizing constant. To get around this problem, we propose to use
score matching [10, 11] as a divergence, instead of the usual KL divergence, as it does not require
evaluation of the log partition function. The probability density function is estimated by minimizing
the expected distance between the model score function and the data score function, where the score
function is defined as gradient of the corresponding probability density functions. The advantage
of this measure is that the normalization constant is canceled out when computing the distance. In
order to learn the underlying graph structure, we assume that the logarithm of the density is additive
in node-wise and edge-wise potentials and use a sparsity inducing penalty to select non-zero edge
potentials. As we will prove later, our procedure will allow us to consistently estimate the underlying
graph structure.
The rest of the paper is organized as follows. We first introduce the notation, background and related work. Then we formulate our model, establish a representer theorem, and present a group lasso algorithm to optimize the objective. Next we prove that our estimator is consistent by showing that it can recover the true graph with high probability given a sufficient number of samples. Finally, results for simulated data are presented to demonstrate the correctness of our algorithm empirically.
1.1
Notations
Let [n] denote the set {1, 2, ..., n}. For a vector θ = (θ_1, ..., θ_d)^T ∈ R^d, let ‖θ‖_p = (Σ_{i∈[d]} |θ_i|^p)^{1/p} denote its ℓ_p norm. Let the column vector vec(D) denote the vectorization of a matrix D, cat(a, b) the concatenation of two vectors a and b, and mat(a_1^T, ..., a_d^T) the matrix with rows given by a_1^T, ..., a_d^T. For Ω ⊆ R^d, let L^p(Ω, p_0) denote the space of functions for which the p-th power of the absolute value is p_0-integrable; for f ∈ L^p(Ω, p_0), let ‖f‖_{L^p(Ω,p_0)} = ‖f‖_p = (∫_Ω |f|^p dx)^{1/p} denote its L^p norm. Throughout the paper, we denote by H (or H_i, H_ij) a Hilbert space, and by ⟨·,·⟩_H and ‖·‖_H the corresponding inner product and norm.
For any operator C : H_1 → H_2, we use ‖C‖ to denote the usual operator norm, defined as
‖C‖ = inf{ a ≥ 0 : ‖Cf‖_{H_2} ≤ a ‖f‖_{H_1} for all f ∈ H_1 },
and ‖C‖_HS to denote its Hilbert-Schmidt norm, defined as
‖C‖_HS^2 = Σ_{i∈I} ‖C e_i‖_{H_2}^2,
where (e_i) is an orthonormal basis of H for an index set I. Also, we use R(C) to denote the range space of the operator C. For any f ∈ H_1 and g ∈ H_2, let f ⊗ g denote their tensor product.
2
Background & Related Work
2.1
Learning graphical models in exponential families
Let x = (x_1, x_2, ..., x_d) be a d-dimensional random vector from a multivariate Gaussian distribution. It is well known that the conditional independence of two variables given all the others is encoded in the zero pattern of its precision matrix Ω; that is, x_i and x_j are conditionally independent given x_{-ij} if and only if Ω_{ij} = 0, where x_{-ij} is the vector x without x_i and x_j. A sparse estimate of Ω can be obtained by maximum-likelihood (joint selection) or pseudo-likelihood (neighborhood selection) optimization with an added ℓ_1 penalty [6, 22, 30]. Given n independent realizations of x (rows of X ∈ R^{n×d}), the penalized maximum-likelihood estimate of the precision matrix can be obtained as
Ω̂ = argmin_{Ω ≻ 0} tr(ŜΩ) − log det Ω + λ ‖Ω‖_1,   (1)
where Ŝ = n^{-1} X^T X and λ controls the sparsity level of the estimated graph.
The pseudo-likelihood method estimates the neighborhood of a node s by the non-zeros of the solution to a regularized linear model
θ̂_s = argmin_θ (1/n) ‖X_s − X_{-s} θ‖_2^2 + λ ‖θ‖_1.   (2)
The estimated neighborhood is then N̂(s) = { a : θ̂_{sa} ≠ 0 }.
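Both parametric baselines, (1) and (2), take only a few lines with standard tooling; a minimal sketch, assuming scikit-learn is available (the data X and the regularization values below are placeholders):

import numpy as np
from sklearn.covariance import GraphicalLasso
from sklearn.linear_model import Lasso

X = np.random.randn(200, 25)          # placeholder data, n x d

# Joint selection (1): sparse precision matrix via the graphical lasso;
# the estimated edges are the non-zero off-diagonal entries.
glasso = GraphicalLasso(alpha=0.1).fit(X)
edges_joint = np.abs(glasso.precision_) > 1e-8

# Neighborhood selection (2): lasso regression of each node on the rest;
# the estimated neighborhood of s is the support of the coefficients.
def neighborhood(X, s, lam):
    mask = np.arange(X.shape[1]) != s
    beta = Lasso(alpha=lam).fit(X[:, mask], X[:, s]).coef_
    return np.flatnonzero(mask)[np.abs(beta) > 1e-8]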
Another way to specify a parametric graphical model is by assuming that each node-conditional distribution is a member of an exponential family [29]. Specifically, the conditional distribution of x_s given x_{-s} is assumed to be
P(x_s | x_{-s}) = exp( Σ_{t∈N(s)} θ_{st} x_s x_t + C(x_s) − D(x_{-s}, θ) ),   (3)
where C is the base measure, D is the log-normalization constant, and N(s) is the neighborhood of the node s. Similar to (2), the neighborhood of each node can be estimated by minimizing the negative log-likelihood with an ℓ_1 penalty on θ. The optimization is tractable when the normalization constant D can be easily computed based on the model assumption. For example, under Poisson graphical model assumptions for count data, the normalization constant is exp(Σ_{t∈N(s)} θ_{st} x_t). When using neighborhood estimation, the graph can be estimated as the union of the neighborhoods of the nodes, which leads to consistent graph estimation [22, 29].
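For the Poisson case, the ℓ_1-penalized node-conditional regression can be solved with a simple proximal gradient (ISTA) loop; the following is a minimal sketch under stated assumptions, not the reference implementation of [29]:

import numpy as np

# l1-penalized node-conditional Poisson regression by proximal gradient.
# For node s with counts y = X[:, s] and design A = X[:, -s], the negative
# log pseudo-likelihood is (1/n) * sum_a [ exp(A_a @ theta) - y_a * (A_a @ theta) ],
# matching the exponential-family node conditional with a log link.

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def poisson_neighborhood(A, y, lam, step=1e-3, iters=5000):
    n, p = A.shape
    theta = np.zeros(p)
    for _ in range(iters):
        eta = A @ theta
        grad = A.T @ (np.exp(eta) - y) / n   # gradient of the smooth part
        theta = soft_threshold(theta - step * grad, step * lam)
    return theta

The support of the returned theta is the estimated neighborhood of node s; a small fixed step size is used here for simplicity, while a production solver would use line search or coordinate descent.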
2.2
Generalized Exponential Family and RKHS
We say that H is an RKHS associated with a kernel k : Ω × Ω → R_+ if and only if for each x ∈ Ω the following two conditions are satisfied: (1) k(·, x) ∈ H, and (2) the reproducing property holds: f(x) = ⟨f, k(·, x)⟩_H for all f(·) ∈ H, where k is a symmetric and positive semidefinite function. We denote the RKHS H with kernel k as H(k).
For any f ∈ H(k), there exists a set of points x_i and coefficients α_i such that f(·) = Σ_{i=1}^∞ α_i k(·, x_i). Similarly, for any g ∈ H(k) with g(·) = Σ_{j=1}^∞ β_j k(·, y_j), the inner product of f and g is defined as ⟨f, g⟩_H = Σ_{i,j=1}^∞ α_i β_j k(x_i, y_j). Therefore the norm of f is simply ‖f‖_H = √(Σ_{i,j} α_i α_j k(x_i, x_j)). The summation is guaranteed to be larger than or equal to zero because the kernel k is positive semidefinite.
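In matrix form, for f(·) = Σ_i α_i k(·, x_i) the norm is √(αᵀKα) with Gram matrix K_ij = k(x_i, x_j); a small numerical check (our own example):

import numpy as np

def rkhs_norm(alpha, K):
    # sqrt of the quadratic form alpha' K alpha; nonnegative since K is PSD
    return np.sqrt(alpha @ K @ alpha)

rng = np.random.default_rng(0)
x = rng.standard_normal(5)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)   # Gaussian kernel Gram matrix
alpha = rng.standard_normal(5)
print(rkhs_norm(alpha, K))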
We consider the exponential family in infinite dimensions [4], where
P = { p_f(x) = e^{f(x) − A(f)} q_0(x), x ∈ Ω; f ∈ F },
and the function space F is defined as
F = { f ∈ H(k) : A(f) = log ∫_Ω e^{f(x)} q_0(x) dx < ∞ },
where q_0(x) is the base measure, A(f) is a generalized normalization constant such that p_f(x) is a valid probability density function, and H is an RKHS [3] associated with kernel k. To see it as a generalization of the exponential family, we show some examples of kernels that generate useful finite-dimensional exponential families:
• Normal: Ω = R, k(x, y) = xy + x²y²
• Poisson: Ω = N ∪ {0}, k(x, y) = xy
• Exponential: Ω = R_+, k(x, y) = xy.
When learning structure of a graphical model, we will further impose structural conditions on H(k)
in order ensure that F consists of additive functions.
2.3
Score Matching
Score matching is a convenient procedure that allows for estimating a probability density without
computing the normalizing constant [10, 11]. It is based on minimizing Fisher divergence
Z
? log p(x) ? log p0 (x)
2
1
dx,
J(pkp0 ) =
p(x)
?
(4)
2
?x
?x
2
3
where ∂ log p(x)/∂x = (∂ log p(x)/∂x_1, ..., ∂ log p(x)/∂x_d) is the score function. Observe that for p(x, θ) = q(x, θ)/Z(θ) the normalization constant Z(θ) cancels out in the gradient computation, which makes the divergence independent of Z(θ). Since the score matching objective involves the unknown oracle probability density function p_0, it is typically not computable. However, under some mild conditions, which we will discuss in the METHODS section, (4) can be rewritten as
J(p ‖ p_0) = ∫ p_0(x) Σ_{i∈[d]} [ (1/2) (∂ log p(x)/∂x_i)² + ∂² log p(x)/∂x_i² ] dx.   (5)
After substituting the expectation with an empirical average, we get
Ĵ(p ‖ p_0) = (1/n) Σ_{a∈[n]} Σ_{i∈[d]} [ (1/2) (∂ log p(X_a)/∂x_i)² + ∂² log p(X_a)/∂x_i² ].   (6)
Compared to maximum likelihood estimation, minimizing Ĵ(p ‖ p_0) is computationally tractable. While we will be able to estimate p_0 only up to a scale factor, this will be sufficient for the purpose of graph structure estimation.
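A toy illustration of objective (6): for the one-dimensional model log p(x; μ) = −(x − μ)²/2 + const, the score is −(x − μ) and its derivative is −1, so the empirical objective is a few lines and its minimizer recovers the sample mean. This is our own example, not from the paper; the normalizer never appears.

import numpy as np

def score_matching_objective(mu, x):
    score = -(x - mu)       # d/dx log p(x; mu)
    d_score = -1.0          # d^2/dx^2 log p(x; mu)
    return np.mean(0.5 * score ** 2 + d_score)

x = np.random.default_rng(1).normal(loc=2.0, scale=1.0, size=1000)
grid = np.linspace(0, 4, 401)
vals = [score_matching_objective(m, x) for m in grid]
print(grid[int(np.argmin(vals))])   # close to 2.0, the sample mean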
3
Methods
3.1
Model Formulation and Assumptions
We assume that the true probability density function p_0 is in P. Furthermore, for simplicity we assume that
log p_0(x) = f(x) = Σ_{(i,j)∈S, i≤j} f_{0,ij}(x_i, x_j),
where f_{0,ii}(x_i, x_i) is a node potential and f_{0,ij}(x_i, x_j) is an edge potential. The set S denotes the edge set of the graph. Extensions to models where potentials are defined over larger cliques are possible. We further assume that f_{0,ij} ∈ H_ij(k_ij), where H_ij is an RKHS with kernel k_ij. To simplify the notation, we use f_{0,ij}(x) or k_ij(·, x) to denote f_{0,ij}(x_i, x_j) and k_ij(·, (x_i, x_j)). If the context is clear, we drop the subscript on norms and inner products. Define
H(S) = { f = Σ_{(i,j)∈S} f_ij | f_ij ∈ H_ij }   (7)
as a set of functions that decompose as sums of bivariate functions on the edge set S. Note that H(S) is also (a subset of) an RKHS with the norm ‖f‖_{H(S)}^2 = Σ_{(i,j)∈S} ‖f_ij‖_{H_ij}^2 and kernel k = Σ_{(i,j)∈S} k_ij.
Let Ω(f) = ‖f‖_{H,1} = Σ_{i≤j} ‖f_ij‖_{H_ij}. For any edge set S (not necessarily the true edge set), we denote by Ω_S(f_S) = Σ_{s∈S} ‖f_s‖_{H_s} the norm Ω restricted to S. Similarly, denote its dual norm by Ω*_S[f_S] = max_{Ω_S(g_S)≤1} ⟨f_S, g_S⟩ [25].
Under the assumption that the unknown f_0 is additive, the loss function becomes
J(f) = (1/2) ∫ p_0(x) Σ_{i∈[d]} ( ∂f(x)/∂x_i − ∂f_0(x)/∂x_i )² dx
     = (1/2) Σ_{i∈[d]} Σ_{j,j′∈[d]} ⟨ f_ij − f_{0,ij}, [ ∫ p_0(x) ∂k_ij(·,(x_i,x_j))/∂x_i ⊗ ∂k_ij′(·,(x_i,x_j′))/∂x_i dx ] (f_ij′ − f_{0,ij′}) ⟩
     = (1/2) Σ_{i∈[d]} Σ_{j,j′∈[d]} ⟨ f_ij − f_{0,ij}, C_{ij,ij′} (f_ij′ − f_{0,ij′}) ⟩.
Intuitively, C can be viewed as a d² matrix of operators, with the operator at position (ij, ij′) being C_{ij,ij′}. For a general pair (ij, i′j′) with i ≠ i′, the corresponding operator is simply 0. Define C_{SS′} as
∫ p_0(x) Σ_{(i,j)∈S, (i′,j′)∈S′} ∂k_ij(·,(x_i,x_j))/∂x_i ⊗ ∂k_{i′j′}(·,(x_{i′},x_{j′}))/∂x_{i′} dx,
which intuitively can be treated as a submatrix of C with rows S and columns S′. We will use this notation intensively in the main theorem and its proof.
Following [26], we make the following assumptions.
A1. Each k_ij is twice differentiable on Ω × Ω.
A2. For any i and x̄_j ∈ Ω_j = [a_j, b_j], we assume that
lim_{x_i → a_i^+ or b_i^-} ∂²k_ij(x, y)/(∂x_i ∂y_i) |_{y=x} · p_0²(x) = 0,
where x = (x_i, x̄_j) and a_i, b_i may be −∞ or ∞.
A3. This condition ensures that J(p ‖ p_0) < ∞ for any p ∈ P [for more details see 26]:
‖ ∂k_ij(·, x)/∂x_i ‖_{H_ij} ∈ L²(Ω, p_0),   ‖ ∂²k_ij(·, x)/∂x_i² ‖_{H_ij} ∈ L²(Ω, p_0).
A4. The operator C_SS is compact and its smallest eigenvalue λ_min = λ_min(C_SS) > 0.
A5. Ω*_{S^c}[ C_{S^c S} C_SS^{-1} ] ≤ 1 − η, where η > 0.
A6. f_0 ∈ R(C), which means there exists γ ∈ H such that f_0 = Cγ; f_0 is the oracle function.
We will discuss the definition of the operator C and the dual norm Ω* in Section 4. Compared with [29], A4 can be interpreted as the dependency condition and A5 as the incoherence condition, which are standard conditions for structure learning with high-dimensional statistical estimators.
3.2
Estimation Procedure
We estimate f by minimizing the following penalized score matching objective:
min_f L̂_λ(f) = Ĵ(f) + (λ/2) ‖f‖_{H,1}   s.t. f_ij ∈ H_ij,   (8)
where Ĵ(f) is given in (6). The norm ‖f‖_{H,1} = Σ_{i≤j} ‖f_ij‖_{H_ij} is used as a sparsity-inducing penalty. A simplified form of Ĵ(f), given below, leads to an efficient algorithm for solving (8).
The following theorem states that the score matching objective can be written as a penalized quadratic function of f.
Theorem 3.1 (i) The score matching objective can be represented as
L_λ(f) = (1/2) ⟨ f − f_0, C(f − f_0) ⟩ + (λ/2) ‖f‖_{H,1},   (9)
where C = ∫ p_0(x) Σ_{i∈[d]} ∂k(·,x)/∂x_i ⊗ ∂k(·,x)/∂x_i dx is a trace operator.
(ii) Given observed data X ∈ R^{n×d}, the empirical estimate of L_λ is
L̂_λ(f) = (1/2) ⟨ f, Ĉ f ⟩ + Σ_{i≤j} ⟨ f_ij, ξ̂_ij ⟩ + (λ/2) ‖f‖_{H,1} + const,   (10)
where Ĉ = (1/n) Σ_{a∈[n]} Σ_{i∈[d]} ∂k(·,X_a)/∂x_i ⊗ ∂k(·,X_a)/∂x_i, and ξ̂_ij = (1/n) Σ_{a∈[n]} [ ∂²k_ij(·,(X_ai,X_aj))/∂x_i² + ∂²k_ij(·,(X_ai,X_aj))/∂x_j² ] if i ≠ j, or ξ̂_ij = (1/n) Σ_{a∈[n]} ∂²k_ij(·,(X_ai,X_aj))/∂x_i² otherwise.
Please refer to our supplementary material for a detailed proof.¹
The above theorem still requires us to minimize over F. Our next result shows that the solution is
finite dimensional. That is, we establish a representer theorem for our problem.
¹ Please visit ttic.uchicago.edu/~siqi for supplementary material and code.
Theorem 3.2 (i) The solution to (10) can be represented as
f̂_ij(·) = Σ_{b∈[n]} [ α_bij ∂k_ij(·,(X_bi,X_bj))/∂x_i + α_bji ∂k_ij(·,(X_bi,X_bj))/∂x_j ] + β_ij ξ̂_ij,   (11)
where i ≤ j.
(ii) Minimizing (10) is equivalent to minimizing the following quadratic function:
(1/2n) Σ_{a,i} ( Σ_j Σ_b (α_bij G^{ab}_{ij11} + α_bji G^{ab}_{ij12}) + Σ_j β_ij h^{1a}_{ij} )²
+ Σ_{i≤j} Σ_b (α_bij h^{1b}_{ij} + α_bji h^{2b}_{ij}) + Σ_{i≤j} β_ij ‖ξ̂_ij‖² + (λ/2) ‖f‖_{H,1}
= (1/2n) Σ_{a,i} (D_ai · θ)² + E^T θ + (λ/2) Σ_{i≤j} √( θ_ij^T F_ij θ_ij ),   (12)
where G^{ab}_{ijrs} = ∂²k_ij(X_a, X_b)/(∂x_r ∂y_s) and h^{rb}_{ij} = ⟨ ∂k_ij(·,X_b)/∂x_r, ξ̂_ij ⟩ are constants that depend only on X, θ = cat(vec(α), vec(β)) is the parameter vector, and θ_ij = cat(α_ij, vec(β_ij)) is a group of parameters. D_ai, E and F are the corresponding constant vectors and matrices built from G, h and the ordering of the parameters. The above problem can then be solved by group lasso [7, 21].
The first part of the theorem states our representer theorem, and the second part is obtained by plugging (11) into (10). See the supplementary material for a detailed proof. Theorem 3.2 provides us with an efficient way to minimize (8), as it reduces the optimization to a group lasso problem, for which many efficient solvers exist.
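Since the penalty in (12) is a sum of Euclidean group norms, any proximal group lasso solver reduces to block soft-thresholding; a minimal sketch of the proximal operator, with identity group metrics in place of the matrices F_ij (an assumption of this sketch):

import numpy as np

# Block soft-thresholding: shrink each parameter group theta_ij toward
# zero by t in Euclidean norm; groups with norm <= t are set exactly to 0,
# which is what produces the sparsity pattern used for edge selection.
def group_soft_threshold(theta, groups, t):
    out = theta.copy()
    for idx in groups:                 # idx: index array of one group
        nrm = np.linalg.norm(theta[idx])
        out[idx] = 0.0 if nrm == 0 else max(0.0, 1.0 - t / nrm) * theta[idx]
    return out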
Let f̂_λ = argmin_{f∈H} L̂_λ(f) denote the solution to (12). We can then estimate the graph as
Ŝ_λ = { (i, j) : ‖f̂_{λ,ij}‖ ≠ 0 };   (13)
that is, the graph is encoded in the sparsity pattern of f̂_λ.
4
Statistical Guarantees
In this section we study the statistical properties of the proposed estimator (13). Let S denote the true edge set and S^c its complement. We prove that Ŝ_λ recovers S with high probability when the sample size n is sufficiently large.
Denote D = mat(D_11^T, ..., D_ai^T, ..., D_nd^T). We will need the following result on the estimated operator Ĉ.
Proposition 4.1 (Lemma 5 in [8] or Theorem 5 in [26]) (Properties of Ĉ)
1. ‖Ĉ − C‖_HS = O_{p_0}(n^{-1/2}).
2. ‖(C + λL)^{-1}‖ ≤ 1/(λ min diag(L)) and ‖C(C + λL)^{-1}‖ ≤ 1, where λ > 0 and L is diagonal with positive constants.
The following result gives first-order optimality conditions for the optimization problem (8).
Proposition 4.2 (Optimality Condition)
Ĵ(f) + (λ/2) Ω(f)² achieves optimality when the following two conditions are satisfied:
(1) ∇_{f_s} Ĵ(f) + λ Ω(f) f_s / ‖f_s‖_{H_s} = 0 for all s ∈ S;
(2) Ω*_{S^c}[ ∇_{f_{S^c}} Ĵ(f) ] ≤ λ Ω(f).
With these preliminary results, we have the following main result.
Theorem 4.3 Assume that conditions A1-A7 are satisfied. The regularization parameter λ is selected at the order of n^{-1/4} and satisfies λ ≤ η λ_min ρ_min / ( 4(1−η) ρ_max √(|S| + 5) ), where ρ_min = min_{s∈S} ‖f_s*‖ > 0 and ρ_max = max_{s∈S} ‖f_s*‖ > 0. Then P(Ŝ_λ = S) → 1.
Proof Idea: The theorem above is the main theoretical guarantee for our score matching estimator. We use the "witness" proof framework inspired by [23, 29]. Let f* denote the true function and p* the corresponding probability density function. We first construct a solution f̃_S on the true edge set S as
f̃_S = argmin_{f_{S^c}=0} Ĵ(f) + (λ/2) ( Σ_{(i,j)∈S} ‖f_ij‖ )²   (14)
and set f̃_{S^c} to zero. Using Proposition 4.1, we prove that ‖f̃_S − f_S*‖ = O_p(n^{-1/4}). Then we compute the subgradient on S^c and prove that its dual norm is upper bounded by λ Ω(f) using assumptions A4, A5 and A6. We have therefore constructed a solution that satisfies the optimality conditions and converges in probability to the true graph. Refer to the supplementary material for a detailed proof.
5
Experiments
We illustrate the performance of our method on two simulations. In all experiments we use the same kernel, defined as
k(x, y) = exp( −‖x − y‖_2² / (2σ²) ) + r (x^T y + c)²,   (15)
that is, the sum of a Gaussian kernel and a polynomial kernel. We set σ² = 1.5, r = 0.1 and c = 0.5 for all the simulations.
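The kernel (15) is straightforward to implement; a minimal sketch with the stated constants:

import numpy as np

# Gaussian kernel plus shifted quadratic polynomial kernel, eq. (15),
# with sigma^2 = 1.5, r = 0.1, c = 0.5 as in the simulations.
def kernel(x, y, sigma2=1.5, r=0.1, c=0.5):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma2)) + r * (x @ y + c) ** 2

print(kernel([0.3, -0.1], [0.2, 0.4]))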
We report the true positive rate vs. false positive rate (ROC) curve to measure the performance of the different procedures. Let S be the true edge set, and let Ŝ_λ be the estimated graph. The true positive rate is defined as TPR_λ = |{ Ŝ_λ = 1 and S = 1 }| / |{ S = 1 }|, and the false positive rate as FPR_λ = |{ Ŝ_λ = 1 and S = 0 }| / |{ S = 0 }|, where |·| is the cardinality of the set. The curve is then plotted based on 100 uniformly-sampled regularization parameters and on 20 independent runs.
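The TPR/FPR computation for one value of λ is a few lines; a minimal sketch, assuming both graphs are given as boolean adjacency matrices (our own illustration):

import numpy as np

def tpr_fpr(S_hat, S):
    # Compare edge indicators over the upper triangle only.
    iu = np.triu_indices_from(S, k=1)
    est, true = S_hat[iu].astype(bool), S[iu].astype(bool)
    tpr = np.logical_and(est, true).sum() / max(true.sum(), 1)
    fpr = np.logical_and(est, ~true).sum() / max((~true).sum(), 1)
    return tpr, fpr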
In the first simulation, we apply our algorithm to data sampled from a simple chain-graph Gaussian model (see Figure 1 for details) and compare its performance with glasso [6]. We use the same sampling method as in [31] to generate the data: we set Ω_s = 0.4 for s ∈ S and set the diagonal to a constant such that Ω is positive definite. We set the dimension d to 25 and vary the sample size over n ∈ {20, 40, 60, 80, 100} data points.
Except for the low sample size case (n = 20), the performance of our method is comparable with glasso, without utilizing the fact that the underlying distribution is of a particular parametric form. Intuitively, to capture the graph structure, the proposed nonparametric method requires more data because of its much weaker assumptions.
To further show the strength of our algorithm, we test it on a nonparanormal (NPN) distribution [18]. A random vector x = (x_1, ..., x_p) has a nonparanormal distribution if there exist functions (f_1, ..., f_p) such that (f_1(x_1), ..., f_p(x_p)) ∼ N(μ, Σ). When f is monotone and differentiable, the probability density function is given by
P(x) = (2π)^{-p/2} |Σ|^{-1/2} exp{ −(1/2) (f(x) − μ)^T Σ^{-1} (f(x) − μ) } Π_j |f_j′|.
Here the graph structure is still encoded in the sparsity pattern of Ω = Σ^{-1}; that is, x_i ⊥ x_j | x_{-i,-j} if and only if Ω_ij = 0 [18].
In our experiments we use the "Symmetric Power Transformation" [18], that is,
f_j(z_j) = σ_j · g_0(z_j − μ_j) / √( ∫ g_0²(t − μ_j) φ( (t − μ_j)/σ_j ) dt ) + μ_j,
[Figure 1 panels, left to right: Adjacent Matrix, Glasso, SME; the latter two are ROC curves (TruePositiveRate vs. FalsePositiveRate) for sample sizes n = 20, 40, 60, 80, 100.]
Figure 1: The estimation results for Gaussian graphical models. Left: the adjacent matrix of the true graph. Center: the ROC curve of glasso. Right: the ROC curve of the score matching estimator (SME).
[Figure 2 panels, left to right: Glasso, NonParaNormal, SME; ROC curves (TruePositiveRate vs. FalsePositiveRate) for sample sizes n = 20, 40, 60, 80, 100.]
Figure 2: The estimated ROC curves of nonparanormal graphical models for glasso (left), NPN
(center) and SME (right).
where g_0(t) = sign(t)|t|^α, to transform the data. For comparison with the graphical lasso, we first use a truncation method to Gaussianize the data and then apply the graphical lasso to the transformed data; see [18, 31] for details. From Figure 2 we see that, without knowing the underlying data distribution, the score matching estimator outperforms glasso and shows results similar to the nonparanormal method when the sample size is large.
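Sampling from a nonparanormal distribution with a power transform is also a few lines; a minimal sketch that omits the normalization constants of the symmetric power transformation (they only rescale each margin), so the exact marginal maps here are assumptions of the sketch:

import numpy as np

# Draw z ~ N(0, Sigma), then push each coordinate through the inverse of
# f_j: with x_j = sign(z_j)|z_j|^(1/alpha), the monotone power map
# g0(t) = sign(t)|t|^alpha brings x back to a Gaussian, so x is NPN with
# the graph structure of inv(Sigma).
def npn_sample(Sigma, n, alpha=3.0, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(np.zeros(Sigma.shape[0]), Sigma, size=n)
    return np.sign(z) * np.abs(z) ** (1.0 / alpha)

Sigma = np.array([[1.0, 0.4], [0.4, 1.0]])
X = npn_sample(Sigma, 500)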
6
Discussion
In this paper, we have proposed a new procedure for learning the structure of a nonparametric graphical model. Our procedure is based on minimizing a penalized score matching objective, which can be done efficiently using existing group lasso solvers. A particularly appealing aspect of our approach is that it does not require computing the normalization constant; therefore, our procedure can be applied to a very broad family of infinite dimensional exponential families. We have established that the procedure provably recovers the true underlying graphical structure with high probability under mild conditions. In the future, we plan to investigate more efficient algorithms for solving (10), since it is often the case that Ĉ is well structured and can be efficiently approximated.
Acknowledgments
The authors are grateful for the financial support from National Institutes of Health grant R01GM0897532, National Science Foundation CAREER award CCF-1149811, and an IBM Corporation Faculty Research Fund at the University of Chicago Booth School of Business. This work was completed in part with resources provided by the University of Chicago Research Computing Center.
References
[1] R. Albert. Scale-free networks in cell biology. Journal of Cell Science, 118(21):4947–4957, 2005.
[2] C. Ambroise, J. Chiquet, C. Matias, et al. Inferring sparse Gaussian graphical models with latent structure. Electronic Journal of Statistics, 3:205–238, 2009.
[3] N. Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society, pages 337–404, 1950.
[4] S. Canu and A. Smola. Kernel methods and the exponential family. Neurocomputing, 69(7):714–720, 2006.
[5] A. Defazio and T. S. Caetano. A convex formulation for learning scale-free networks via submodular relaxation. In Advances in Neural Information Processing Systems, pages 1250–1258, 2012.
[6] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9(3):432–441, 2008.
[7] J. Friedman, T. Hastie, and R. Tibshirani. A note on the group lasso and a sparse group lasso. arXiv preprint arXiv:1001.0736, 2010.
[8] K. Fukumizu, F. R. Bach, and A. Gretton. Statistical consistency of kernel canonical correlation analysis. The Journal of Machine Learning Research, 8:361–383, 2007.
[9] S. Geman and C. Graffigne. Markov random field image models and their applications to computer vision. In Proceedings of the International Congress of Mathematicians, volume 1, page 2, 1986.
[10] A. Hyvärinen. Estimation of non-normalized statistical models by score matching. In Journal of Machine Learning Research, pages 695–709, 2005.
[11] A. Hyvärinen. Some extensions of score matching. Computational Statistics & Data Analysis, 51(5):2499–2512, 2007.
[12] Y. Jeon and Y. Lin. An effective method for high-dimensional log-density ANOVA estimation, with application to nonparametric graphical model building. Statistica Sinica, 16(2):353, 2006.
[13] R. Kindermann, J. L. Snell, et al. Markov Random Fields and Their Applications, volume 1. American Mathematical Society, Providence, RI, 1980.
[14] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[15] Y. A. Kourmpetis, A. D. Van Dijk, M. C. Bink, R. C. van Ham, and C. J. ter Braak. Bayesian Markov random field analysis for protein function prediction based on network data. PLoS One, 5(2):e9293, 2010.
[16] J. Lafferty, A. McCallum, and F. C. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. 2001.
[17] S. Z. Li. Markov Random Field Modeling in Image Analysis. 2011.
[18] H. Liu, J. Lafferty, and L. Wasserman. The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. The Journal of Machine Learning Research, 10:2295–2328, 2009.
[19] Q. Liu and A. T. Ihler. Learning scale free networks by reweighted l1 regularization. In International Conference on Artificial Intelligence and Statistics, pages 40–48, 2011.
[20] C. D. Manning and H. Schütze. Foundations of Statistical Natural Language Processing. MIT Press, 1999.
[21] L. Meier, S. Van De Geer, and P. Bühlmann. The group lasso for logistic regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(1):53–71, 2008.
[22] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the lasso. The Annals of Statistics, pages 1436–1462, 2006.
[23] P. Ravikumar, M. J. Wainwright, and J. Lafferty. High-dimensional graphical model selection using l1-regularized logistic regression. 2008.
[24] P. Ravikumar, M. J. Wainwright, J. D. Lafferty, et al. High-dimensional Ising model selection using l1-regularized logistic regression. The Annals of Statistics, 38(3):1287–1319, 2010.
[25] R. T. Rockafellar. Convex Analysis. Number 28. Princeton University Press, 1970.
[26] B. Sriperumbudur, K. Fukumizu, R. Kumar, A. Gretton, and A. Hyvärinen. Density estimation in infinite dimensional exponential families. arXiv preprint arXiv:1312.3516, 2013.
[27] S. Sun, H. Wang, and J. Xu. Inferring block structure of graphical models in exponential families. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, pages 939–947, 2015.
[28] Z. Wei and H. Li. A Markov random field model for network-based analysis of genomic data. Bioinformatics, 23(12):1537–1544, 2007.
[29] E. Yang, G. Allen, Z. Liu, and P. K. Ravikumar. Graphical models via generalized linear models. In Advances in Neural Information Processing Systems, pages 1358–1366, 2012.
[30] M. Yuan and Y. Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19–35, 2007.
[31] T. Zhao, H. Liu, K. Roeder, J. Lafferty, and L. Wasserman. The huge package for high-dimensional undirected graph estimation in R. The Journal of Machine Learning Research, 13(1):1059–1062, 2012.
5,533 | 6,007 | Lifelong Learning with Non-i.i.d. Tasks
Christoph H. Lampert
IST Austria
Klosterneuburg, Austria
[email protected]
Anastasia Pentina
IST Austria
Klosterneuburg, Austria
[email protected]
Abstract
In this work we aim at extending the theoretical foundations of lifelong learning.
Previous work analyzing this scenario is based on the assumption that learning
tasks are sampled i.i.d. from a task environment or limited to strongly constrained
data distributions. Instead, we study two scenarios in which lifelong learning is possible, even though the observed tasks do not form an i.i.d. sample: first, when they
are sampled from the same environment, but possibly with dependencies, and second, when the task environment is allowed to change over time in a consistent
way. In the first case we prove a PAC-Bayesian theorem that can be seen as a
direct generalization of the analogous previous result for the i.i.d. case. For the
second scenario we propose to learn an inductive bias in form of a transfer procedure. We present a generalization bound and show on a toy example how it can be
used to identify a beneficial transfer algorithm.
1 Introduction
Despite the tremendous growth of available data over the past decade, the lack of fully annotated
data, which is an essential part of success of any traditional supervised learning algorithm, demands
methods that allow good generalization from limited amounts of training data. One way to approach
this is provided by the lifelong learning (or learning to learn [1]) paradigm, which is based on the
idea of accumulating knowledge over the course of learning multiple tasks in order to improve the
performance on future tasks.
In order for this scenario to make sense one has to define what kind of relations connect the observed
tasks with the future ones. The first formal model of lifelong learning was proposed by Baxter
in [2]. He introduced the notion of a task environment: a set of all tasks that may need to be solved,
together with a probability distribution over them. In Baxter's model the lifelong learning system
observes tasks that are sampled i.i.d. from the task environment. This allows proving bounds in
the PAC framework [3, 4] that guarantee that a hypothesis set or inductive bias that works well on
the observed tasks will also work well on future tasks from the same environment. Baxter?s results
were later extended using algorithmic stability [5], task similarity measures [6], and PAC-Bayesian
analysis [7]. Specific cases that were studied include feature learning [8] and sparse coding [9].
All these works, however, assume that the observed tasks are independently and identically distributed, as the original work by Baxter did. This assumption allows making predictions about the
future of the learning process, but it limits the applicability of the results in practice. To our knowledge, only the recent [10] has studied lifelong learning without an i.i.d. assumption. However, the
considered framework is limited to binary classification with linearly separable classes and isotropic
log-concave data distributions.
In this work we use the PAC-Bayesian framework to study two possible relaxations of the i.i.d. assumption without restricting the class of possible data distributions. First, we study the case in which
tasks can have dependencies between them, but are still sampled from a fixed task environment. An
illustrative example would be when tasks are to predict the outcomes of chess games. Whenever a
player plays multiple games, the corresponding tasks are not independent. In this setting we retain many concepts of [7] and learn an inductive bias in the form of a probability distribution. We
prove a bound relating the expected error when relying on the learned bias for future tasks to its
empirical error over the observed tasks. It has the same form as for the i.i.d. situation, except for a
slowdown of convergence proportional to a parameter capturing the amount of dependence between
tasks.
Second, we introduce a new and more flexible lifelong learning setting, in which the learner observes
a sequence of tasks from different task environments. This could be, e.g., classification tasks of increasing difficulty. In this setting one cannot expect that transferring an inductive bias from observed
tasks to future tasks will be beneficial, because the task environment is not stationary. Instead, we
aim at learning an effective transfer algorithm: a procedure that solves a task taking information
from a previous task into account. We bound the expected performance of such algorithms when
applied to future tasks based on their performance on the observed tasks.
2 Preliminaries
Following Baxter?s model [2] we assume that all tasks that may need to be solved share the same
input space X and output space Y. The lifelong learning system observes n tasks t_1, . . . , t_n in the form
of training sets S_1, . . . , S_n, where each $S_i = \{(x^i_1, y^i_1), \dots, (x^i_m, y^i_m)\}$ is a set of m points sampled
i.i.d. from the corresponding unknown data distribution D_i over X × Y. In contrast to previous works
on lifelong learning [2, 5, 8] we omit the assumption that the observed tasks are independently and
identically distributed.
In order to theoretically analyze lifelong learning in the case of non-i.i.d. tasks we use techniques
from PAC-Bayesian theory [11]. We assume that the learner uses the same hypothesis set H = {h : X → Y} and the same loss function ℓ : Y × Y → [0, 1] for solving all tasks. PAC-Bayesian
theory studies the performance of randomized predictors, so-called Gibbs predictors. Formally, for any probability
distribution Q over the hypothesis set, the corresponding Gibbs predictor for every point x ∈ X
randomly samples h ∼ Q and returns h(x). The expected loss of such a Gibbs predictor on a task
corresponding to a data distribution D is given by:
er(Q) = \mathbb{E}_{h\sim Q}\,\mathbb{E}_{(x,y)\sim D}\,\ell(h(x), y) \qquad (1)

and its empirical counterpart based on a training set S sampled from D^m is given by:

\hat{er}(Q) = \mathbb{E}_{h\sim Q}\,\frac{1}{m}\sum_{i=1}^{m}\ell(h(x_i), y_i). \qquad (2)
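Since the inner expectations in (1) and (2) are rarely available in closed form, they are typically estimated by sampling hypotheses from Q. The following Python sketch (our illustration, not from the paper; the Gaussian posterior over linear classifiers and all names are assumptions) estimates the empirical Gibbs error (2):

import numpy as np

def gibbs_empirical_error(mu_q, X, y, n_draws=1000, seed=0):
    """Monte Carlo estimate of er_hat(Q) for Q = N(mu_q, I) and 0/1-loss."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_draws):
        h = rng.normal(mu_q, 1.0)            # draw a hypothesis h ~ Q
        preds = np.sign(X @ h)               # linear classifier sign<h, x>
        errs.append(np.mean(preds != y))     # empirical 0/1-loss of this h
    return float(np.mean(errs))              # average over draws from Q

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 2))                 # toy sample of m = 20 points
y = np.sign(X @ np.array([1.0, 0.5]))
print(gibbs_empirical_error(np.array([1.0, 0.5]), X, y))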
PAC-Bayesian theory allows us to obtain upper bounds on the difference between these two quantities of the following form:
Theorem 1. Let P be any distribution over H, fixed before observing the sample S. Then for any
δ > 0 the following holds uniformly for all distributions Q over H with probability at least 1 − δ:

er(Q) \le \hat{er}(Q) + \frac{1}{\sqrt{m}}\,\mathrm{KL}(Q\|P) + \frac{1 + 8\log(1/\delta)}{8\sqrt{m}}, \qquad (3)
where KL denotes the Kullback-Leibler divergence.
The distribution P should be chosen before observing any data and is therefore usually referred
to as the prior distribution. In contrast, the bound holds uniformly with respect to the distributions Q.
Whenever it consists only of computable quantities, it can be used to choose a data-dependent Q that
minimizes the right-hand side of the inequality (3) and thus provides a Gibbs predictor whose expected
error is bounded by a hopefully low value. Such a Q is usually referred to as a posterior distribution.
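To make this concrete, the right-hand side of (3) is directly computable once KL(Q‖P) is known; a minimal sketch (ours), assuming unit-variance Gaussian P and Q over weight vectors so that KL(Q‖P) = ‖μ_Q − μ_P‖²/2:

import numpy as np

def pac_bayes_rhs(er_hat, mu_q, mu_p, m, delta):
    """Right-hand side of (3) for Gaussian P, Q with identity covariance."""
    kl = 0.5 * float(np.sum((mu_q - mu_p) ** 2))
    return er_hat + kl / np.sqrt(m) + (1 + 8 * np.log(1 / delta)) / (8 * np.sqrt(m))

# e.g. a posterior centred at a learned weight vector vs. a zero-mean prior:
print(pac_bayes_rhs(0.15, np.array([1.0, 0.5]), np.zeros(2), m=100, delta=0.05))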
Note that besides explicit bounds, such as (3), in the case of 0/1-loss one can also derive implicit
bounds that can be tighter in some regimes [12]. Instead of the error difference, er − êr, these bound
their KL-divergence, kl(êr‖er), where kl(q‖p) denotes the KL-divergence between two Bernoulli
random variables with success probabilities p and q. In this work, we prefer explicit bounds as they
are more intuitive and allow for more freedom in the choice of different loss functions. They also
allow us to combine several inequalities in an additive way, which we make use of in Sections 3
and 4.
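For reference, the Bernoulli KL and the numerical inversion that turns an implicit bound kl(êr‖er) ≤ ε into an explicit upper bound on er can be sketched as follows (our code; standard bisection, not part of the paper):

import math

def bernoulli_kl(q, p):
    eps = 1e-12
    q, p = min(max(q, eps), 1 - eps), min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def kl_inverse_upper(q_hat, budget):
    """Largest p >= q_hat with kl(q_hat || p) <= budget, by bisection."""
    lo, hi = q_hat, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if bernoulli_kl(q_hat, mid) <= budget else (lo, mid)
    return lo

print(kl_inverse_upper(0.1, 0.05))           # upper bound on the true error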
3 Dependent tasks
The first extension of Baxter?s model that we study is the case, when the observed tasks are sampled
from the same task environment, but with some interdependencies. In other words, in this case the
tasks are identically, but not independently, distributed.
Since the task environment is assumed to be constant we can build on ideas from the situation of i.i.d.
tasks in [7]. We assume that for all tasks the learner uses the same deterministic learning algorithm
that produces a posterior distribution Q based on a prior distribution P and a sample set S. We also
assume that there is a set of possible prior distributions and some hyper-prior distribution P over it.
The goal of the learner is to find a hyper-posterior distribution Q over this set such that, when the
prior is sampled according to Q, the expected loss on the next, yet unobserved task is minimized:
er(\mathcal{Q}) = \mathbb{E}_{P\sim\mathcal{Q}}\,\mathbb{E}_{(t,S_t)}\,\mathbb{E}_{h\sim Q(P,S_t)}\,\mathbb{E}_{(x,y)\sim D_t}\,\ell(h(x), y). \qquad (4)
The empirical counterpart of the above quantity is given by:
\hat{er}(\mathcal{Q}) = \mathbb{E}_{P\sim\mathcal{Q}}\,\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{h\sim Q_i(P,S_i)}\,\frac{1}{m}\sum_{j=1}^{m}\ell(h(x^i_j), y^i_j). \qquad (5)
In order to bound the difference between these two quantities we adopt the two-stage procedure
used in [7]. First, we bound the difference between the empirical error êr(Q) and the corresponding
expected multi-task risk given by:

\tilde{er}(\mathcal{Q}) = \mathbb{E}_{P\sim\mathcal{Q}}\,\frac{1}{n}\sum_{i=1}^{n}\mathbb{E}_{h\sim Q_i(P,S_i)}\,\mathbb{E}_{(x,y)\sim D_i}\,\ell(h(x), y). \qquad (6)
Then we continue with bounding the difference between er(Q) and ẽr(Q).
Since conditioned on the observed tasks the corresponding training samples are independent, we can
reuse the following results from [7] in order to perform the first step of the proof.
Theorem 2. With probability at least 1 − δ, uniformly for all Q:

\tilde{er}(\mathcal{Q}) \le \hat{er}(\mathcal{Q}) + \frac{1}{n\sqrt{m}}\Big(\mathrm{KL}(\mathcal{Q}\|\mathcal{P}) + \sum_{i=1}^{n}\mathbb{E}_{P\sim\mathcal{Q}}\,\mathrm{KL}(Q_i(P,S_i)\|P)\Big) + \frac{n + 8\log(1/\delta)}{8n\sqrt{m}}. \qquad (7)
To bound the difference between er(Q) and ẽr(Q), however, the results from [7] cannot be used,
because they rely on the assumption that the observed tasks are independent. Instead we adopt ideas
from chromatic PAC-Bayesian bounds [13] that rely on the properties of a dependency graph built
with respect to the dependencies within the observed tasks.
Definition 1 (Dependency graph). The dependency graph Γ(t) = (V, E) of a set of random variables t = (t_1, . . . , t_n) is such that:
• the set of vertices V equals {1, . . . , n},
• there is no edge between i and j if and only if t_i and t_j are independent.
Definition 2 (Exact fractional cover [14]). Let Γ = (V, E) be an undirected graph with V =
{1, . . . , n}. A set C = {(C_j, w_j)}_{j=1}^{k}, where C_j ⊆ V and w_j ∈ [0, 1] for all j, is a proper exact
fractional cover if:
• for every j all vertices in C_j are independent,
• ∪_j C_j = V,
• for every i ∈ V: \sum_{j=1}^{k} w_j \mathbb{1}_{i\in C_j} = 1.
The sum of the weights w(C) = \sum_{j=1}^{k} w_j is the chromatic weight of C and k is the size of C.
Then the following holds:
Theorem 3. For any fixed hyper-prior distribution P, any proper exact fractional cover C of the
dependency graph Γ(t_1, . . . , t_n) of size k, and any δ > 0 the following holds with probability at least
1 − δ uniformly for all hyper-posterior distributions Q:

er(\mathcal{Q}) \le \tilde{er}(\mathcal{Q}) + \sqrt{\frac{w(C)}{n}}\,\mathrm{KL}(\mathcal{Q}\|\mathcal{P}) + \frac{\sqrt{w(C)}\,(1 - 8\log\delta + 8\log k)}{8\sqrt{n}}. \qquad (8)
Proof. By Donsker-Varadhan's variational formula [15]:

er(\mathcal{Q}) - \tilde{er}(\mathcal{Q}) = \sum_{j=1}^{k}\frac{w_j}{w(C)}\,\mathbb{E}_{P\sim\mathcal{Q}}\,\frac{w(C)}{n}\sum_{i\in C_j}\big(\mathbb{E}_{(t,S_t)}er_t(Q_t) - er_i(Q_i)\big) \qquad (9)
\le \sum_{j=1}^{k}\frac{w_j}{w(C)}\,\frac{1}{\lambda_j}\Big(\mathrm{KL}(\mathcal{Q}\|\mathcal{P}) + \log\mathbb{E}_{P\sim\mathcal{P}}\exp\Big(\frac{\lambda_j w(C)}{n}\sum_{i\in C_j}\big(\mathbb{E}_{(t,S_t)}er_t(Q_t) - er_i(Q_i)\big)\Big)\Big).

Since the tasks within every C_j are independent, for every fixed prior P the random variables {E_{(t,S_t)} er_t(Q_t) − er_i(Q_i)}_{i∈C_j} are i.i.d. and take values in [b − 1, b], where b = E_{(t,S_t)} er_t(Q_t). Therefore, by Hoeffding's lemma [16]:

\mathbb{E}_{(t_i,S_i),\,i\in C_j}\exp\Big(\frac{\lambda_j w(C)}{n}\sum_{i\in C_j}\big(\mathbb{E}_{(t,S_t)}er_t(Q_t) - er_i(Q_i)\big)\Big) \le \exp\Big(\frac{\lambda_j^2\,w(C)^2\,|C_j|}{8n^2}\Big). \qquad (10)

Therefore, by Markov's inequality, with probability at least 1 − δ_j it holds that:

\log\mathbb{E}_{P\sim\mathcal{P}}\exp\Big(\frac{\lambda_j w(C)}{n}\sum_{i\in C_j}\big(\mathbb{E}_{(t,S_t)}er_t(Q_t) - er_i(Q_i)\big)\Big) \le \frac{\lambda_j^2\,w(C)^2\,|C_j|}{8n^2} - \log\delta_j. \qquad (11)

Consequently, we obtain with probability at least 1 - \sum_{j=1}^{k}\delta_j:

er(\mathcal{Q}) - \tilde{er}(\mathcal{Q}) \le \sum_{j=1}^{k}\frac{w_j}{w(C)\lambda_j}\,\mathrm{KL}(\mathcal{Q}\|\mathcal{P}) + \sum_{j=1}^{k}\frac{\lambda_j w_j w(C)|C_j|}{8n^2} - \sum_{j=1}^{k}\frac{w_j}{w(C)\lambda_j}\log\delta_j. \qquad (12)

By setting \lambda_1 = \cdots = \lambda_k = \sqrt{n/w(C)} and \delta_j = \delta/k we obtain the statement of the theorem
(note that \sum_{j=1}^{k} w_j|C_j| = \sum_{i\in V}\sum_{j=1}^{k} w_j\mathbb{1}_{i\in C_j} = n by the definition of an exact cover).
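For readers checking the constants, the following computation (ours; implicit in the proof above) evaluates the three sums in (12) at \lambda_1 = \cdots = \lambda_k = \sqrt{n/w(C)} and \delta_j = \delta/k, using \sum_j w_j = w(C) and \sum_j w_j|C_j| = n:

\sum_{j=1}^{k}\frac{w_j}{w(C)\lambda_j}\,\mathrm{KL}(\mathcal{Q}\|\mathcal{P}) = \sqrt{\frac{w(C)}{n}}\,\mathrm{KL}(\mathcal{Q}\|\mathcal{P}), \qquad
\sum_{j=1}^{k}\frac{\lambda_j w_j w(C)|C_j|}{8n^2} = \frac{\sqrt{w(C)}}{8\sqrt{n}},

-\sum_{j=1}^{k}\frac{w_j}{w(C)\lambda_j}\log\frac{\delta}{k} = \sqrt{\frac{w(C)}{n}}\Big(\log k + \log\frac{1}{\delta}\Big),

and their sum is exactly the last two terms of (8).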
By combining Theorems 2 and 3 we obtain the main result of this section:
Theorem 4. For any fixed hyper-prior distribution P, any proper exact fractional cover C of the
dependency graph Γ(t_1, . . . , t_n) of size k, and any δ > 0 the following holds with probability at least
1 − δ uniformly for all hyper-posterior distributions Q:

er(\mathcal{Q}) \le \hat{er}(\mathcal{Q}) + \frac{1 + \sqrt{w(C)mn}}{n\sqrt{m}}\,\mathrm{KL}(\mathcal{Q}\|\mathcal{P}) + \frac{1}{n\sqrt{m}}\sum_{i=1}^{n}\mathbb{E}_{P\sim\mathcal{Q}}\,\mathrm{KL}(Q_i(P,S_i)\|P)
+ \frac{n + 8\log(2/\delta)}{8n\sqrt{m}} + \frac{\sqrt{w(C)}\,(1 + 8\log(2/\delta) + 8\log k)}{8\sqrt{n}}. \qquad (13)
Theorem 4 shows that even in the case of non-independent tasks a bound very similar to that in [7]
can be obtained. In particular, it contains two types of complexity terms: KL(Q||P) corresponds to
the level of the task environment and KL(Qi ||P ) corresponds specifically to the i-th task. Similarly
to the i.i.d. case, when the learner has access to an unlimited amount of data, but for finitely many
observed tasks (m → ∞, n < ∞), the complexity terms of the second type converge to 0 as
1/√m, while the first one does not, as there is still uncertainty on the task environment level. In the
opposite situation, when the learner has access to infinitely many tasks, but with only finitely many
samples per task (m < ∞, n → ∞), the first complexity term converges to 0 as √(w(C)/n), up to
logarithmic terms. Thus there is a worsening compared to the i.i.d. case proportional to √w(C),
which represents the amount of dependence among the tasks. If the tasks are actually i.i.d., the
dependency graph contains no edges, so we can form a cover of size 1 with chromatic weight 1.
Thus we recover the result from [7] as a special case of Theorem 4.
For a general dependency graph, the fastest convergence is obtained by using a cover with minimal chromatic weight. It is known that the minimal chromatic weight, χ*(Γ), satisfies the following inequality [14]: 1 ≤ c(Γ) ≤ χ*(Γ) ≤ Δ(Γ) + 1, where c(Γ) is the order of the largest clique in Γ and
Δ(Γ) is the maximum degree of a vertex in Γ.
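Such a cover with w(C) ≤ Δ(Γ) + 1 can be built by greedy coloring, since every color class is an independent set. A small Python sketch (ours; the input mimics the chess example, where games of the same player form a clique):

def greedy_cover(n, edges):
    """Color vertices 0..n-1 greedily; each color class is independent."""
    adj = {i: set() for i in range(n)}
    for a, b in edges:
        adj[a].add(b); adj[b].add(a)
    color = {}
    for v in range(n):                       # first color unused by neighbors
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(n) if c not in used)
    classes = {}
    for v, c in color.items():
        classes.setdefault(c, set()).add(v)
    # unit weights: every vertex lies in exactly one class, so the cover is
    # proper and exact with chromatic weight w(C) = number of classes
    return [(members, 1.0) for members in classes.values()]

# tasks 0,1 from one player and 2,3 from another -> two dependency edges
print(greedy_cover(4, [(0, 1), (2, 3)]))     # two classes, w(C) = 2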
In some situations, even the bound obtainable from Theorem 4 by plugging in a cover with the
minimal chromatic weight can be improved: Theorem 4 also holds for any subset t_s, |t_s| = s, of the
observed tasks with the induced dependency subgraph Γ_s. Therefore it might provide a tighter bound
if χ*(Γ_s)/s is smaller than χ*(Γ)/n. However, this is not guaranteed, since the empirical error êr
computed on t_s might become larger, as well as the second part of the bound, which decreases with
n and does not depend on the chromatic weight of the cover. Note also that such a subset needs to
be chosen before observing the data, since the bound of Theorem 4 holds with probability 1 − δ only
for a fixed set of tasks and a fixed cover.
Another important aspect of Theorem 4 as a PAC-Bayesian bound is that the right hand side of
inequality (13) consists only of computable quantities. Therefore it can be seen as a quality measure
of a hyper-posterior Q and by minimizing it one could obtain a distribution that is adjusted to a
particular task environment. The resulting minimizer can be expected to work well even on new,
yet unobserved tasks, because the guarantees of Theorem 4 still hold due to the uniformity of the
bound. To do so, one can use the same techniques as in [7], because Theorem 4 differs from the
bound provided there only by constant factors.
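As a concrete illustration of this criterion, the computable right-hand side of (13) can be coded directly; the sketch below (ours) takes the KL terms as externally supplied numbers, e.g., closed-form Gaussian KLs:

import numpy as np

def rhs_theorem4(emp_err, kl_hyper, kl_tasks, n, m, w_c, k, delta):
    """Computable right-hand side of (13); kl_tasks holds the n task KLs."""
    t1 = (1 + np.sqrt(w_c * m * n)) / (n * np.sqrt(m)) * kl_hyper
    t2 = sum(kl_tasks) / (n * np.sqrt(m))
    t3 = (n + 8 * np.log(2 / delta)) / (8 * n * np.sqrt(m))
    t4 = np.sqrt(w_c) * (1 + 8 * np.log(2 / delta) + 8 * np.log(k)) / (8 * np.sqrt(n))
    return emp_err + t1 + t2 + t3 + t4

print(rhs_theorem4(0.2, 1.5, [0.8] * 10, n=10, m=50, w_c=2, k=2, delta=0.05))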
4 Changing Task Environments
In this section we study a situation in which the task environment is gradually changing: every next
task t_{i+1} is sampled from a distribution T_{i+1} over the tasks that can depend on the history of the
process. Due to the change of the task environment, the previous idea of learning one prior for all
tasks does not seem reasonable anymore. In contrast, we propose to learn a transfer algorithm that
produces a solution for the current task based on the corresponding sample set and the sample set
from the previous task. Formally, we assume that there is a set A of learning algorithms that produce
a posterior distribution Q_{i+1} for task t_{i+1} based on the training samples S_i and S_{i+1}. The goal of
the learner is to identify an algorithm A in this set that leads to good performance when applied to a
new, yet unobserved task, while using the last observed training sample S_n.¹
For each task t_i and each algorithm A ∈ A we define the expected and empirical error of applying
this algorithm as follows:

er_i(A) = \mathbb{E}_{h\sim Q_i}\,\mathbb{E}_{(x,y)\sim D_i}\,\ell(h(x), y), \qquad \hat{er}_i(A) = \mathbb{E}_{h\sim Q_i}\,\frac{1}{m}\sum_{j=1}^{m}\ell(h(x^i_j), y^i_j), \qquad (14)

where Q_i = A(S_i, S_{i−1}). The goal of the learner is to find A that minimizes er_{n+1} given the history
of the observed tasks. However, if the task environment changed arbitrarily from step to step,
the observed tasks would not contain any relevant information for a new task. To overcome this
difficulty, we make the assumption that the expected performance of the algorithms in A does not
change over time. Formally, we assume that for each A ∈ A there exists a value, er(A), such that for
every i = 2, . . . , n + 1, with E_i = (T_i, t_i, S_i):

\mathbb{E}_{\{E_{i-1},E_i\}}\big[\,er_i(A) \mid E_1, \dots, E_{i-2}\,\big] = er(A). \qquad (15)
In words, the quality of a transfer algorithm does not depend on when during the task sequence
it is applied, provided that it is always applied to the subsequent sample sets. Note that this is a
natural assumption for lifelong learning: without it, the quality of transfer algorithms could change
over time, so an algorithm that works well for all observed tasks might not work anymore for future
tasks.
The goal of the learner can be reformulated as identifying A ∈ A with minimal er(A), which can be
seen as the expected value of the expected risk of applying algorithm A on the next, yet unobserved
task. Since er(A) is unknown, we derive an upper bound based on the observed data that holds
uniformly for all algorithms A and therefore can be used to guide the learner. To do so, we again use
¹ Note that this setup includes the possibility of model selection, such as predictors using different feature representations or (hyper)parameter values.
the construction with hyper-priors and hyper-posteriors from the previous section. Formally, let P
be a prior distribution over the set of possible algorithms that is fixed before any data arrives and let
Q be a possibly data-dependent hyper-posterior. The quality of the hyper-posterior and its empirical
counterpart are given by the following quantities:
er(\mathcal{Q}) = \mathbb{E}_{A\sim\mathcal{Q}}\,er(A), \qquad \hat{er}(\mathcal{Q}) = \mathbb{E}_{A\sim\mathcal{Q}}\,\frac{1}{n-1}\sum_{i=2}^{n}\hat{er}_i(A). \qquad (16)
Similarly to the previous section, we first bound the difference between êr(Q) and the multi-task expected error given by:

\tilde{er}(\mathcal{Q}) = \mathbb{E}_{A\sim\mathcal{Q}}\,\frac{1}{n-1}\sum_{i=2}^{n}er_i(A). \qquad (17)
Even though Theorem 2 is not directly applicable here, a more careful modification of it allows us to
obtain the following result (see the supplementary material for a detailed proof):
Theorem 5. For any fixed hyper-prior distribution P, with probability at least 1 − δ the following
holds uniformly for all hyper-posterior distributions Q:

\tilde{er}(\mathcal{Q}) \le \hat{er}(\mathcal{Q}) + \frac{1}{(n-1)\sqrt{m}}\,\mathrm{KL}(\mathcal{Q}\otimes Q_2\otimes\cdots\otimes Q_n \,\|\, \mathcal{P}\otimes P_2\otimes\cdots\otimes P_n) + \frac{(n-1) + 8\log(1/\delta)}{8(n-1)\sqrt{m}},
where P_2, . . . , P_n are some reference prior distributions that do not depend on the training sets of
subsequent tasks. Possible choices include using just one prior distribution P fixed before observing
any data, or using the posterior distributions obtained from the previous task, i.e., P_i = Q_{i−1}.
To complete the proof we need to bound the difference between er(Q) and ẽr(Q). We use techniques
from [17] in combination with those from [13], resulting in the following lemma:
Lemma 1. For any fixed algorithm A and any λ the following holds:

\mathbb{E}_{E_1,\dots,E_n}\exp\Big(\lambda\Big(er(A) - \frac{1}{n-1}\sum_{i=2}^{n}er_i(A)\Big)\Big) \le \exp\Big(\frac{\lambda^2}{2(n-1)}\Big). \qquad (18)
Proof. First, define X_i = (E_{i−1}, E_i) for i = 2, . . . , n, g : X_i ↦ er_i(A), and b = er(A). Then:

\exp\Big(\lambda\Big(er(A) - \frac{1}{n-1}\sum_{i=2}^{n}er_i(A)\Big)\Big) = \exp\Big(\frac{\lambda}{n-1}\sum_{\text{even } i}(b - g(X_i)) + \frac{\lambda}{n-1}\sum_{\text{odd } i}(b - g(X_i))\Big)
\le \frac{1}{2}\exp\Big(\frac{2\lambda}{n-1}\sum_{\text{even } i}(b - g(X_i))\Big) + \frac{1}{2}\exp\Big(\frac{2\lambda}{n-1}\sum_{\text{odd } i}(b - g(X_i))\Big), \qquad (19)

where the inequality uses the convexity of the exponential, \exp(\frac{u+v}{2}) \le \frac{1}{2}(\exp(u) + \exp(v)).
Note that both the set of X_i's corresponding to even i and the set of X_i's corresponding to odd
i form a martingale difference sequence. Therefore, by using Lemma 2 from the supplementary
material (or, similarly, Lemma 2 in [17]) and Hoeffding's lemma [16] we obtain:

\mathbb{E}_{E_1,\dots,E_n}\exp\Big(\frac{2\lambda}{n-1}\sum_{\text{even } i}(b - g(X_i))\Big) \le \exp\Big(\frac{4\lambda^2}{8(n-1)}\Big) \qquad (20)
and the same for the odd i. Together with inequality (19) it gives the statement of the lemma.
Now we can prove the following statement:
Theorem 6. For any hyper-prior distribution P and any δ > 0, with probability at least 1 − δ the
following inequality holds uniformly for all Q:

er(\mathcal{Q}) \le \tilde{er}(\mathcal{Q}) + \frac{1}{\sqrt{n-1}}\,\mathrm{KL}(\mathcal{Q}\|\mathcal{P}) + \frac{1 + 2\log(1/\delta)}{2\sqrt{n-1}}. \qquad (21)
Proof. By applying Donsker-Varadhan's variational formula [15] one obtains that:

er(\mathcal{Q}) - \tilde{er}(\mathcal{Q}) \le \frac{1}{\lambda}\Big(\mathrm{KL}(\mathcal{Q}\|\mathcal{P}) + \log\mathbb{E}_{A\sim\mathcal{P}}\exp\Big(\lambda\Big(er(A) - \frac{1}{n-1}\sum_{i=2}^{n}er_i(A)\Big)\Big)\Big). \qquad (22)
Figure 1: Illustration of three learning tasks sampled from a non-stationary environment. Shaded
areas illustrate the data distribution, + and − indicate positive and negative training examples. Between subsequent tasks, the data distribution changes by a rotation. A transfer algorithm with access
to two subsequent tasks can compensate for this by rotating the previous data into the new position,
thereby obtaining more data samples to train on.
For a fixed algorithm A we obtain from Lemma 1:

\mathbb{E}_{E_1,\dots,E_n}\exp\Big(\lambda\Big(er(A) - \frac{1}{n-1}\sum_{i=2}^{n}er_i(A)\Big)\Big) \le \exp\Big(\frac{\lambda^2}{2(n-1)}\Big). \qquad (23)

Since P does not depend on the process, by Markov's inequality, with probability at least 1 − δ, we
obtain

\mathbb{E}_{A\sim\mathcal{P}}\exp\Big(\lambda\Big(er(A) - \frac{1}{n-1}\sum_{i=2}^{n}er_i(A)\Big)\Big) \le \frac{1}{\delta}\exp\Big(\frac{\lambda^2}{2(n-1)}\Big). \qquad (24)

The statement of the theorem follows by setting \lambda = \sqrt{n-1}.
By combining Theorems 5 and 6 we obtain the main result of this section:
Theorem 7. For any hyper-prior distribution P and any δ > 0, with probability at least 1 − δ the
following holds uniformly for all Q:

er(\mathcal{Q}) \le \hat{er}(\mathcal{Q}) + \frac{\sqrt{(n-1)m}+1}{(n-1)\sqrt{m}}\,\mathrm{KL}(\mathcal{Q}\|\mathcal{P}) + \frac{1}{(n-1)\sqrt{m}}\sum_{i=2}^{n}\mathbb{E}_{A\sim\mathcal{Q}}\,\mathrm{KL}(Q_i\|P_i)
+ \frac{(n-1) + 8\log(2/\delta)}{8(n-1)\sqrt{m}} + \frac{1 + 2\log(2/\delta)}{2\sqrt{n-1}}, \qquad (25)
where P2 , . . . , Pn are some reference prior distributions that should not depend on the data of
subsequent tasks.
Similarly to Theorem 4, the above bound contains two types of complexity terms: one corresponding
to the level of the changes in the task environment, and task-specific terms. The first complexity term
converges to 0 like 1/√(n−1) when the number of observed tasks increases, indicating that
more observed tasks allow for better estimation of the behavior of the transfer algorithms. The task-specific complexity terms vanish only when the amount of observed data m per task grows. In
addition, since the right hand side of the inequality (25) consists only of computable quantities and
at the same time holds uniformly for all Q, one can obtain a posterior distribution by minimizing it
over the transfer algorithms that is adjusted to particularly changing task environments.
We illustrate this process by discussing a toy example (Figure 1). Suppose that X = ℝ², Y =
{−1, 1} and that the learner uses linear classifiers, h(x) = sign⟨w, x⟩, and 0/1-loss, ℓ(y_1, y_2) =
⟦y_1 ≠ y_2⟧, for solving every task. For simplicity we assume that every task environment contains
only one task or, equivalently, every T_i is a delta peak, and that the change in the environment
between two steps is due to a constant rotation by φ_0 = π/6 of the feature space. For the set A we use
a one-parameter family of transfer algorithms, A_φ for φ ∈ ℝ. Given sample sets S_prev and S_cur, any
algorithm A_φ first rotates S_prev by the angle φ, and then trains a linear support vector machine on the
union of both sets. Clearly, the quality of each transfer algorithm depends on the chosen angle, and
an elementary calculation shows that condition (15) is fulfilled. We can therefore use the bound (25)
as a criterion to determine a beneficial angle.² For that we set Q_i = N(w_i, I_2), i.e., unit-variance
Gaussian distributions with means w_i. Similarly, we choose all reference prior distributions as unit-variance Gaussians with zero mean, P_i = N(0, I_2). Analogously, we set the hyper-prior P to be
N(0, 10), a zero-mean normal distribution with enlarged variance in order to make all reasonable
rotations φ lie within one standard deviation from the mean. As hyper-posteriors Q we choose
N(φ, 1), and the goal of the learning is to identify the best φ. In order to obtain the objective function
from equation (25) we first compute the complexity terms (and approximate all expectations with
respect to Q by the values at its mean φ):

\mathrm{KL}(\mathcal{Q}\|\mathcal{P}) = \frac{\varphi^2}{20}, \qquad \mathbb{E}_{A\sim\mathcal{Q}}\,\mathrm{KL}(Q_i\|P_i) \approx \frac{\|w_i\|^2}{2}.
The empirical error of the Gibbs classifiers in the case of 0/1-loss and Gaussian distributions is
given by the following expression (we again approximate the expectation by the value at φ) [20, 21]:

\hat{er}(\mathcal{Q}) \approx \frac{1}{n-1}\sum_{i=2}^{n}\frac{1}{m}\sum_{j=1}^{m}\Phi\Big(\frac{y^i_j\,\langle w_i, x^i_j\rangle}{\|x^i_j\|}\Big), \qquad (26)

where \Phi(z) = \frac{1}{2}\big(1 - \mathrm{erf}(\frac{z}{\sqrt{2}})\big) and erf(z) is the Gauss error function. The resulting objective
function that we obtain for identifying a beneficial angle φ is the following:

J(\varphi) = \frac{\sqrt{(n-1)m}+1}{(n-1)\sqrt{m}}\cdot\frac{\varphi^2}{20} + \frac{1}{n-1}\sum_{i=2}^{n}\bigg(\frac{\|w_i\|^2}{2\sqrt{m}} + \frac{1}{m}\sum_{j=1}^{m}\Phi\Big(\frac{y^i_j\,\langle w_i, x^i_j\rangle}{\|x^i_j\|}\Big)\bigg). \qquad (27)
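A rough re-creation of this optimization is sketched below (our code on synthetic data; it assumes scikit-learn's LinearSVC as the SVM solver, and the data-generating constants are illustrative, not the paper's):

import numpy as np
from math import pi
from scipy.special import erf
from sklearn.svm import LinearSVC

def rot(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def Phi(z):                                   # Phi(z) = (1 - erf(z/sqrt(2)))/2
    return 0.5 * (1.0 - erf(z / np.sqrt(2.0)))

def J(phi, samples, m):                       # samples = [(X_1,y_1),...,(X_n,y_n)]
    n, total = len(samples), 0.0
    for (Xp, yp), (Xc, yc) in zip(samples, samples[1:]):
        Xr = Xp @ rot(phi).T                  # A_phi: rotate the previous task
        w = LinearSVC().fit(np.vstack([Xr, Xc]),
                            np.hstack([yp, yc])).coef_[0]
        margins = yc * (Xc @ w) / np.linalg.norm(Xc, axis=1)
        total += Phi(margins).mean() + w @ w / (2 * np.sqrt(m))
    coef = (np.sqrt((n - 1) * m) + 1) / ((n - 1) * np.sqrt(m))
    return coef * phi ** 2 / 20 + total / (n - 1)

rng = np.random.default_rng(0)
m, w0, samples = 10, np.array([1.0, 0.0]), []
for i in range(6):                            # environment rotates by pi/6
    X = rng.normal(size=(m, 2))
    samples.append((X, np.where(X @ (rot(i * pi / 6) @ w0) > 0, 1.0, -1.0)))
angles = np.linspace(-pi / 2, pi / 2, 31)
print(min(angles, key=lambda a: J(a, samples, m)))   # should land near pi/6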
Numeric experiments confirm that by optimizing J(φ) with respect to φ one can obtain an advantageous angle: using n = 2, . . . , 11 tasks, each with m = 10 samples, we obtain an average test
error of 14.2% for the (n + 1)-th task. As can be expected, this lies in between the error for the
same setting without transfer, which was 15.0%, and the error when always rotating by π/6, which
was 13.5%.
5 Conclusion
In this work we present a PAC-Bayesian analysis of lifelong learning under two types of relaxations
of the i.i.d. assumption on the tasks. Our results show that accumulating knowledge over the course
of learning multiple tasks can be beneficial for the future even if these tasks are not i.i.d. In particular,
for the situation when the observed tasks are sampled from the same task environment but with
possible dependencies we prove a theorem that generalizes the existing bound for the i.i.d. case.
As a second setting we further relax the i.i.d. assumption and allow the task environment to change
over time. Our bound shows that it is possible to estimate the performance of applying a transfer
algorithm on future tasks based on its performance on the observed ones. Furthermore, our result
can be used to identify a beneficial algorithm based on the given data and we illustrate this process
on a toy example. For future work, we plan to expand on this aspect. Essentially, any existing
domain adaptation algorithm can be used as a transfer method in our setting. However, the success
of domain adaptation techniques is often caused by an asymmetry between the source and the target: such algorithms usually rely on the availability of extensive amounts of data from the source and only
limited amounts from the target. In contrast, in the lifelong learning setting all tasks are assumed to
be equipped with limited training data. Therefore we are particularly interested in identifying how
far the constant quality assumption can be carried over to existing domain adaptation techniques and
real-world lifelong learning situations.
Acknowledgments. This work was in part funded by the European Research Council under
the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC grant agreement no
308036.
² Note that Theorem 7 provides an upper bound for the expected error of stochastic Gibbs classifiers, and
not deterministic ones that are preferable in practice. However for 0/1-loss the error of a Gibbs classifier is
bounded from below by half the error of the corresponding majority vote predictor [18, 19] and therefore twice
the right hand side of (25) provides a bound for deterministic classifiers.
References
[1] Sebastian Thrun and Tom M. Mitchell. Lifelong robot learning. Technical report, Robotics and Autonomous Systems, 1993.
[2] Jonathan Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research, 12:149–198, 2000.
[3] Leslie G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.
[4] Vladimir N. Vapnik. Estimation of Dependencies Based on Empirical Data. Springer, 1982.
[5] Andreas Maurer. Algorithmic stability and meta-learning. Journal of Machine Learning Research (JMLR), 6:967–994, 2005.
[6] Gilles Blanchard, Gyemin Lee, and Clayton Scott. Generalizing from several related classification tasks to a new unlabeled sample. In Conference on Neural Information Processing Systems (NIPS), 2011.
[7] Anastasia Pentina and Christoph H. Lampert. A PAC-Bayesian bound for lifelong learning. In International Conference on Machine Learning (ICML), 2014.
[8] Andreas Maurer. Transfer bounds for linear feature learning. Machine Learning, 75:327–350, 2009.
[9] Andreas Maurer, Massimiliano Pontil, and Bernardino Romera-Paredes. Sparse coding for multitask and transfer learning. In International Conference on Machine Learning (ICML), 2013.
[10] Maria-Florina Balcan, Avrim Blum, and Santosh Vempala. Efficient representations for lifelong learning and autoencoding. In Workshop on Computational Learning Theory (COLT), 2015.
[11] David A. McAllester. Some PAC-Bayesian theorems. Machine Learning, 37(3):355–363, 1999.
[12] Matthias Seeger. PAC-Bayesian generalisation error bounds for Gaussian process classification. Journal of Machine Learning Research (JMLR), 3:233–269, 2003.
[13] Liva Ralaivola, Marie Szafranski, and Guillaume Stempfel. Chromatic PAC-Bayes bounds for non-IID data: Applications to ranking and stationary β-mixing processes. Journal of Machine Learning Research (JMLR), 2010.
[14] Daniel Ullman and Edward Scheinerman. Fractional Graph Theory: A Rational Approach to the Theory of Graphs. Wiley Interscience Series in Discrete Mathematics, 1997.
[15] Monroe D. Donsker and S. R. Srinivasa Varadhan. Asymptotic evaluation of certain Markov process expectations for large time. I. Communications on Pure and Applied Mathematics, 28:1–47, 1975.
[16] Wassily Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58:13–30, 1963.
[17] Yevgeny Seldin, François Laviolette, Nicolò Cesa-Bianchi, John Shawe-Taylor, and Peter Auer. PAC-Bayesian inequalities for martingales. IEEE Transactions on Information Theory, 58:7086–7093, 2012.
[18] David A. McAllester. Simplified PAC-Bayesian margin bounds. In Workshop on Computational Learning Theory (COLT), 2003.
[19] François Laviolette and Mario Marchand. PAC-Bayes risk bounds for stochastic averages and majority votes of sample-compressed classifiers. Journal of Machine Learning Research (JMLR), 8:1461–1487, 2007.
[20] John Langford and John Shawe-Taylor. PAC-Bayes and margins. In Conference on Neural Information Processing Systems (NIPS), 2002.
[21] Pascal Germain, Alexandre Lacasse, François Laviolette, and Mario Marchand. PAC-Bayesian learning of linear classifiers. In International Conference on Machine Learning (ICML), 2009.
5,534 | 6,008 | Algorithms with Logarithmic or Sublinear Regret for
Constrained Contextual Bandits
Huasen Wu
University of California at Davis
[email protected]
R. Srikant
University of Illinois at Urbana-Champaign
[email protected]
Xin Liu
University of California at Davis
[email protected]
Chong Jiang
University of Illinois at Urbana-Champaign
[email protected]
Abstract
We study contextual bandits with budget and time constraints, referred to as constrained contextual bandits. The time and budget constraints significantly complicate the exploration and exploitation tradeoff because they introduce complex
coupling among contexts over time. To gain insight, we first study unit-cost systems with known context distribution. When the expected rewards are known, we
develop an approximation of the oracle, referred to as Adaptive-Linear-Programming
(ALP), which achieves near-optimality and only requires the ordering of expected
rewards. With these highly desirable features, we then combine ALP with the
upper-confidence-bound (UCB) method in the general case where the expected
rewards are unknown a priori. We show that the proposed UCB-ALP algorithm
achieves logarithmic regret except for certain boundary cases. Further, we design algorithms and obtain similar regret bounds for more general systems with
unknown context distribution and heterogeneous costs. To the best of our knowledge, this is the first work that shows how to achieve logarithmic regret in constrained contextual bandits. Moreover, this work also sheds light on the study of
computationally efficient algorithms for general constrained contextual bandits.
1 Introduction
The contextual bandit problem [1, 2, 3] is an important extension of the classic multi-armed bandit
(MAB) problem [4], where the agent can observe a set of features, referred to as context, before
making a decision. After the random arrival of a context, the agent chooses an action and receives
a random reward with expectation depending on both the context and action. To maximize the
total reward, the agent needs to make a careful tradeoff between taking the best action based on the
historical performance (exploitation) and discovering the potentially better alternative actions under
a given context (exploration). This model has attracted much attention as it fits the personalized
service requirement in many applications such as clinical trials, online recommendation, and online
hiring in crowdsourcing. Existing works try to reduce the regret of contextual bandits by leveraging
the structure of the context-reward models such as linearity [5] or similarity [6], and more recent
work [7] focuses on computationally efficient algorithms with minimum regret. For Markovian
context arrivals, algorithms such as UCRL [8] for more general reinforcement learning problem can
be used to achieve logarithmic regret.
However, traditional contextual bandit models do not capture an important characteristic of real
systems: in addition to time, there is usually a cost associated with the resource consumed by each
action and the total cost is limited by a budget in many applications. Taking crowdsourcing [9] as
an example, the budget constraint for a given set of tasks will limit the number of workers that an
employer can hire. Another example is the clinical trials [10], where each treatment is usually costly
and the budget of a trial is limited. Although budget constraints have been studied in non-contextual
bandits where logarithmic or sublinear regret is achieved [11, 12, 13, 14, 15, 16], as we will see
later, these results are inapplicable in the case with observable contexts.
In this paper, we study contextual bandit problems with budget and time constraints, referred to
as constrained contextual bandits, where the agent is given a budget B and a time-horizon T . In
addition to a reward, a cost is incurred whenever an action is taken under a context. The bandit
process ends when the agent runs out of either budget or time. The objective of the agent is to
maximize the expected total reward subject to the budget and time constraints. We are interested in
the regime where B and T grow towards infinity proportionally.
The above constrained contextual bandit problem can be viewed as a special case of Resourceful
Contextual Bandits (RCB) [17]. In [17], RCB is studied under more general settings with possibly
infinite contexts, random costs, and multiple budget constraints. A Mixture Elimination algorithm is
proposed and shown to achieve O(√T) regret. However, the benchmark for the definition of regret
in [17] is restricted to a finite policy set. Moreover, the Mixture Elimination algorithm suffers from
high complexity, and the design of computationally efficient algorithms for such general settings is
still an open problem.
To tackle this problem, motivated by certain applications, we restrict the set of parameters in our
model as follows: we assume finite discrete contexts, fixed costs, and a single budget constraint. This
simplified model is justified in many scenarios such as clinical trials [10] and rate selection in wireless networks [18]. More importantly, these simplifications allow us to design easily-implementable
algorithms that achieve O(log T ) regret (except for a set of parameters of zero Lebesgue measure,
which we refer to as boundary cases), where the regret is defined more naturally as the performance
gap between the proposed algorithm and the oracle, i.e., the optimal algorithm with known statistics.
Even with simplified assumptions considered in this paper, the exploration-exploitation tradeoff is
still challenging due to the budget and time constraints. The key challenge comes from the complexity of the oracle algorithm. With budget and time constraints, the oracle algorithm cannot simply
take the action that maximizes the instantaneous reward. In contrast, it needs to balance between
the instantaneous and long-term rewards based on the current context and the remaining budget. In
principle, dynamic programming (DP) can be used to obtain this balance. However, using DP in
our scenario incurs difficulties in both algorithm design and analysis: first, the implementation of
DP is computationally complex due to the curse of dimensionality; second, it is difficult to obtain
a benchmark for regret analysis, since the DP algorithm is implemented in a recursive manner and
its expected total reward is hard to be expressed in a closed form; third, it is difficult to extend the
DP algorithm to the case with unknown statistics, due to the difficulty of evaluating the impact of
estimation errors on the performance of DP-type algorithms.
To address these difficulties, we first study approximations of the oracle algorithm when the system
statistics are known. Our key idea is to approximate the oracle algorithm with linear programming
(LP) that relaxes the hard budget constraint to an average budget constraint. When fixing the average
budget constraint at B/T , this LP approximation provides an upper bound on the expected total
reward, which serves as a good benchmark in regret analysis. Further, we propose an Adaptive
Linear Programming (ALP) algorithm that adjusts the budget constraint to the average remaining
budget b_τ/τ, where τ is the remaining time and b_τ is the remaining budget. Note that although the
idea of approximating a DP problem with an LP problem has been widely studied in literature (e.g.,
[17, 19]), the design and analysis of ALP here is quite different. In particular, we show that ALP
achieves O(1) regret, i.e., its expected total reward is within a constant independent of T from the
optimum, except for certain boundaries. This ALP approximation and its regret analysis make an
important step towards achieving logarithmic regret for constrained contextual bandits.
Using the insights from the case with known statistics, we study algorithms for constrained contextual bandits with unknown expected rewards. Complicated interactions between information acquisition and decision making arise in this case. Fortunately, the ALP algorithm has a highly desirable
property that it only requires the ordering of the expected rewards and can tolerate certain estimation
errors of system parameters. This property allows us to combine ALP with estimation methods that
can efficiently provide a correct rank of the expected rewards. In this paper, we propose a UCB-ALP
algorithm by combining ALP with the upper-confidence-bound (UCB) method [4]. We show that
UCB-ALP achieves O(log T) regret except for certain boundary cases, where its regret is O(√T).
We note that UCB-type algorithms are proposed in [20] for non-contextual bandits with concave
rewards and convex constraints, and further extended to linear contextual bandits. However, [20]
focuses on static contexts¹ and achieves O(√T) regret in our setting since it uses a fixed budget
constraint in each round. In comparison, we consider random context arrivals and use an adaptive
¹ After the online publication of our preliminary version, two recent papers [21, 22] extend their previous work [20] to the dynamic context case, where they focus on possibly infinite contexts and achieve O(√T) regret, and [21] restricts to a finite policy set as [17].
budget constraint to achieve logarithmic regret. To the best of our knowledge, this is the first work
that shows how to achieve logarithmic regret in constrained contextual bandits. Moreover, the proposed UCB-ALP algorithm is quite computationally efficient and we believe these results shed light
on addressing the open problem of general constrained contextual bandits.
Although the intuition behind ALP and UCB-ALP is natural, the rigorous analysis of their regret is
non-trivial since we need to consider many interacting factors such as action/context ranking errors,
remaining budget fluctuation, and randomness of context arrival. We evaluate the impact of these
factors using a series of novel techniques, e.g., the method of showing concentration properties under
adaptive algorithms and the method of bounding estimation errors under random contexts. For the
ease of exposition, we study the ALP and UCB-ALP algorithms in unit-cost systems with known
context distribution in Sections 3 and 4, respectively. Then we discuss the generalization to systems
with unknown context distribution in Section 5 and with heterogeneous costs in Section 6, which
are much more challenging and the details can be found in the supplementary material.
2 System Model
We consider a contextual bandit problem with a context set X = {1, 2, . . . , J} and an action set
A = {1, 2, . . . , K}. At each round t, a context Xt arrives independently with identical distribution
P{X_t = j} = π_j, j ∈ X, and each action k ∈ A generates a non-negative reward Y_{k,t}. Under a
given context X_t = j, the rewards Y_{k,t} are independent random variables in [0, 1]. The conditional
expectation E[Y_{k,t} | X_t = j] = u_{j,k} is unknown to the agent. Moreover, a cost is incurred if action k
is taken under context j. To gain insight into constrained contextual bandits, we consider fixed and
known costs in this paper, where the cost is cj,k > 0 when action k is taken under context j. Similar
to traditional contextual bandits, the context Xt is observable at the beginning of round t, while only
the reward of the action taken by the agent is revealed at the end of round t.
At the beginning of round t, the agent observes the context X_t and takes an action A_t from {0} ∪ A,
where "0" represents a dummy action with which the agent skips the current context. Let Y_t and Z_t be the
reward and cost for the agent in round t, respectively. If the agent takes an action At = k > 0,
then the reward is Y_t = Y_{k,t} and the cost is Z_t = c_{X_t,k}. Otherwise, if the agent takes the dummy
action At = 0, neither reward nor cost is incurred, i.e., Yt = 0 and Zt = 0. In this paper, we focus
on contextual bandits with a known time-horizon T and limited budget B. The bandit process ends
when the agent runs out of the budget or at the end of time T .
A contextual bandit algorithm Γ is a function that maps the historical observations H_{t−1} =
(X_1, A_1, Y_1; X_2, A_2, Y_2; . . . ; X_{t−1}, A_{t−1}, Y_{t−1}) and the current context X_t to an action A_t ∈
{0} ∪ A. The objective of the algorithm is to maximize the expected total reward U_Γ(T, B) for
a given time-horizon T and a budget B, i.e.,

\text{maximize}_{\Gamma} \quad U_\Gamma(T, B) = \mathbb{E}_\Gamma\Big[\sum_{t=1}^{T} Y_t\Big]
\text{subject to} \quad \sum_{t=1}^{T} Z_t \le B,
where the expectation is taken over the distributions of contexts and rewards. Note that we consider
a "hard" budget constraint, i.e., the total cost should not be greater than B under any realization.
We measure the performance of the algorithm Γ by comparing it with the oracle, which is the optimal
algorithm with known statistics, including the knowledge of the π_j's, u_{j,k}'s, and c_{j,k}'s. Let U*(T, B)
be the expected total reward obtained by the oracle algorithm. Then, the regret of the algorithm Γ is
defined as

R_\Gamma(T, B) = U^*(T, B) - U_\Gamma(T, B).
The objective of the algorithm is then to minimize the regret. We are interested in the asymptotic
regime where the time-horizon T and the budget B grow to infinity proportionally, i.e., with a fixed
ratio ρ = B/T.

3 Approximations of the Oracle
In this section, we study approximations of the oracle, where the statistics of bandits are known
to the agent. This will provide a benchmark for the regret analysis and insights into the design of
constrained contextual bandit algorithms.
As a starting point, we focus on unit-cost systems, i.e., c_{j,k} = 1 for each j and k, from Section 3 to
Section 5, which will be relaxed in Section 6. In unit-cost systems, the quality of action k under context j is fully captured by its expected reward u_{j,k}. Let u*_j be the highest expected reward under context j, and k*_j be the best action for context j, i.e., u*_j = max_{k∈A} u_{j,k} and k*_j = arg max_{k∈A} u_{j,k}.
For ease of exposition, we assume that the best action under each context is unique, i.e., u_{j,k} < u*_j
for all j and k ≠ k*_j. Similarly, we also assume u*_1 > u*_2 > . . . > u*_J for simplicity.
With the knowledge of the u_{j,k}'s, the agent knows the best action k*_j and its expected reward u*_j under
any context j. In each round t, the task of the oracle is deciding whether or not to take action k*_{X_t},
depending on the remaining time τ = T − t + 1 and the remaining budget b_τ.
The special case of two-context systems (J = 2) is trivial, where the agent just needs to procrastinate
for the better context (see Appendix D of the supplementary material). When considering more
general cases with J > 2, however, it is computationally intractable to exactly characterize the
oracle solution. Therefore, we resort to approximations based on linear programming (LP).
3.1 Upper Bound: Static Linear Programming
We propose an upper bound for the expected total reward U ? (T, B) of the oracle by relaxing the
hard constraint to an average constraint and solving the corresponding constrained LP problem.
Specifically, let p_j ∈ [0, 1] be the probability that the agent takes action k*_j for context j, and 1 − p_j
be the probability that the agent skips context j (i.e., taking action A_t = 0). Denote the probability
vector as p = (p_1, p_2, . . . , p_J). For a time-horizon T and budget B, consider the following LP
problem:

(\mathrm{LP}_{T,B}) \quad \text{maximize}_{p} \quad \sum_{j=1}^{J} p_j \pi_j u^*_j, \qquad (1)
\text{subject to} \quad \sum_{j=1}^{J} p_j \pi_j \le B/T, \qquad (2)
\quad p \in [0, 1]^J.
Define the following threshold as a function of the average budget ρ = B/T:

\tilde{j}(\rho) = \max\Big\{j : \sum_{j'=1}^{j} \pi_{j'} \le \rho\Big\} \qquad (3)
with the convention that \tilde{j}(\rho) = 0 if π_1 > ρ. We can verify that the following solution is optimal for
LP_{T,B}:

p_j(\rho) = \begin{cases} 1, & \text{if } 1 \le j \le \tilde{j}(\rho), \\ \dfrac{\rho - \sum_{j'=1}^{\tilde{j}(\rho)} \pi_{j'}}{\pi_{\tilde{j}(\rho)+1}}, & \text{if } j = \tilde{j}(\rho) + 1, \\ 0, & \text{if } j > \tilde{j}(\rho) + 1. \end{cases} \qquad (4)
Correspondingly, the optimal value of LP_{T,B} is

v(\rho) = \sum_{j=1}^{\tilde{j}(\rho)} \pi_j u^*_j + p_{\tilde{j}(\rho)+1}(\rho)\,\pi_{\tilde{j}(\rho)+1}\,u^*_{\tilde{j}(\rho)+1}. \qquad (5)
This optimal value v(ρ) can be viewed as the maximum expected reward in a single round with
average budget ρ. Summing over the entire horizon, the total expected reward becomes Û(T, B) =
T v(ρ), which is an upper bound of U*(T, B).
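The closed form (3)-(5) is straightforward to implement; a minimal sketch (ours), with contexts pre-sorted so that u*_1 > u*_2 > · · · and 0-based indexing:

import numpy as np

def alp_probabilities(pi, rho):
    """Closed-form solution (3)-(4): pi sorted by decreasing u*, rho = B/T."""
    csum, p = np.cumsum(pi), np.zeros(len(pi))
    j_t = int(np.searchsorted(csum, rho, side='right'))  # j_tilde(rho)
    p[:j_t] = 1.0                             # always serve the top contexts
    if j_t < len(pi):                         # randomize on the boundary one
        p[j_t] = (rho - (csum[j_t - 1] if j_t > 0 else 0.0)) / pi[j_t]
    return p

def v(pi, u_star, rho):                       # single-round value, Eq. (5)
    return float(np.dot(alp_probabilities(pi, rho), pi * u_star))

pi = np.array([0.2, 0.3, 0.5]); u = np.array([0.9, 0.6, 0.4])
print(alp_probabilities(pi, 0.4), v(pi, u, 0.4))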
Lemma 1. For a unit-cost system with known statistics, if the time-horizon is T and the budget is
B, then Û(T, B) ≥ U*(T, B).
The proof of Lemma 1 is available in Appendix A of the supplementary material. With Lemma 1, we
can bound the regret of any algorithm by comparing its performance with the upper bound Û(T, B)
instead of U*(T, B). Since Û(T, B) has a simple expression, as we will see later, this significantly
reduces the complexity of regret analysis.
3.2 Adaptive Linear Programming
Although the solution (4) provides an upper bound on the expected reward, using such a fixed
algorithm will not achieve good performance as the ratio b_τ/τ, referred to as the average remaining
budget, fluctuates over time. We propose an Adaptive Linear Programming (ALP) algorithm that
adjusts the threshold and randomization probability according to the instantaneous value of b_τ/τ.
Specifically, when the remaining time is τ and the remaining budget is b_τ = b, we consider an LP
problem LP_{τ,b} which is the same as LP_{T,B} except that B/T in Eq. (2) is replaced with b/τ. Then,
the optimal solution for LP_{τ,b} can be obtained by replacing ρ in Eqs. (3), (4), and (5) with b/τ. The
ALP algorithm then makes decisions based on this optimal solution.
ALP Algorithm: At each round t with remaining budget b_τ = b, obtain the p_j(b/τ)'s by solving LP_{τ,b};
take action A_t = k*_{X_t} with probability p_{X_t}(b/τ), and A_t = 0 with probability 1 − p_{X_t}(b/τ).
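In code, one ALP round then amounts to re-evaluating the closed form with ρ replaced by b/τ; a short sketch (ours), reusing alp_probabilities from the snippet after Eq. (5):

def alp_action(j, b, tau, pi, rng):
    """Return 1 to play k*_j for observed context j, or 0 to skip it."""
    if b <= 0:
        return 0                              # budget exhausted: dummy action
    p = alp_probabilities(pi, b / tau)        # optimal solution of LP_{tau,b}
    return 1 if rng.random() < p[j] else 0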
The above ALP algorithm only requires the ordering of the expected rewards instead of their accurate
values. This highly desirable feature allows us to combine ALP with classic MAB algorithms such as
UCB [4] for the case without knowledge of expected rewards. Moreover, this simple ALP algorithm
achieves very good performance within a constant distance from the optimum, i.e., O(1) regret,
except for certain boundary cases. Specifically, for 1 ≤ j ≤ J, let q_j be the cumulative probability
defined as q_j = \sum_{j'=1}^{j} \pi_{j'}, with the convention that q_0 = 0. The following theorem states the near-optimality of ALP.
Theorem 1. Given any fixed ρ ∈ (0, 1), the regret of ALP satisfies:
1) (Non-boundary cases) if ρ ≠ q_j for any j ∈ {1, 2, . . . , J − 1}, then

R_{\mathrm{ALP}}(T, B) \le \frac{u^*_1 - u^*_J}{1 - e^{-2\delta^2}},

where δ = min{ρ − q_{\tilde{j}(\rho)}, q_{\tilde{j}(\rho)+1} − ρ}.
2) (Boundary cases) if ρ = q_j for some j ∈ {1, 2, . . . , J − 1}, then

R_{\mathrm{ALP}}(T, B) \le \Theta^{(o)}\sqrt{T} + \frac{u^*_1 - u^*_J}{1 - e^{-2(\delta')^2}},

where \Theta^{(o)} = 2(u^*_1 - u^*_J)\sqrt{\rho(1-\rho)} and δ′ = min{ρ − q_{\tilde{j}(\rho)-1}, q_{\tilde{j}(\rho)+1} − ρ}.
Theorem 1 shows that ALP achieves O(1) regret except for certain boundary cases, where it still
achieves O(√T) regret. This implies that the regret due to the linear relaxation is negligible in most
cases. Thus, when the expected rewards are unknown, we can achieve low regret, e.g., logarithmic
regret, by combining ALP with appropriate information-acquisition mechanisms.
Sketch of Proof: Although the ALP algorithm seems fairly intuitive, its regret analysis is non-trivial. The key to the proof is to analyze the evolution of the remaining budget b_τ by mapping
ALP to "sampling without replacement". Specifically, from Eq. (4), we can verify that when the
remaining time is τ and the remaining budget is b_τ = b, the system consumes one unit of budget with
probability b/τ, and consumes nothing with probability 1 − b/τ. When considering the remaining
budget, the ALP algorithm can thus be viewed as "sampling without replacement". Hence, we can show
that b_τ follows the hypergeometric distribution [23] and has the following properties:
Lemma 2. Under the ALP algorithm, the remaining budget b_τ satisfies:
1) The expectation and variance of b_τ are \mathbb{E}[b_\tau] = \rho\tau and \mathrm{Var}(b_\tau) = \frac{T-\tau}{T-1}\,\tau\rho(1-\rho), respectively.
2) For any positive number δ satisfying 0 < δ < min{ρ, 1 − ρ}, the tail distribution of b_τ satisfies

\mathbb{P}\{b_\tau < (\rho-\delta)\tau\} \le e^{-2\delta^2\tau} \quad \text{and} \quad \mathbb{P}\{b_\tau > (\rho+\delta)\tau\} \le e^{-2\delta^2\tau}.
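These moments are easy to verify numerically; the sketch below (ours) simulates the "sampling without replacement" view of the budget and compares against Lemma 2:

import numpy as np

T, B, tau = 1000, 300, 400
rho = B / T
rng = np.random.default_rng(0)
# spend pattern: B unit costs among T rounds, drawn without replacement
spent = [rng.permutation(np.r_[np.ones(B), np.zeros(T - B)])[:T - tau].sum()
         for _ in range(5000)]
b_tau = B - np.asarray(spent)                 # remaining budget at time tau
print(b_tau.mean(), rho * tau)                # matches E[b_tau] = rho * tau
print(b_tau.var(), (T - tau) / (T - 1) * tau * rho * (1 - rho))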
Then, we prove Theorem 1 based on Lemma 2. Note that the expected total reward under ALP
is U_{\mathrm{ALP}}(T, B) = \mathbb{E}\big[\sum_{\tau=1}^{T} v(b_\tau/\tau)\big], where v(·) is defined in (5) and the expectation is taken
over the distribution of b_τ. For the non-boundary cases, the single-round expected reward satisfies
E[v(b_τ/τ)] = v(ρ) if the threshold j̃(b_τ/τ) = j̃(ρ) for all possible b_τ's. The regret then is bounded
by a constant because the probability of the event j̃(b_τ/τ) ≠ j̃(ρ) decays exponentially due to the
concentration property of b_τ. For the boundary cases, we show the conclusion by relating the regret
to the variance of b_τ. Please refer to Appendix B of the supplementary material for details.
4 UCB-ALP Algorithm for Constrained Contextual Bandits
Now we get back to the constrained contextual bandits, where the expected rewards are unknown
to the agent. We assume the agent knows the context distribution, as in [17]; this will be relaxed in
Section 5. Thanks to the desirable properties of ALP, the maxim of "optimism under uncertainty"
[8] is still applicable and ALP can be extended to the bandit settings when combined with estimation
policies that can quickly provide correct ranking with high probability. Here, combining ALP with
the UCB method [4], we propose a UCB-ALP algorithm for constrained contextual bandits.
4.1 UCB: Notations and Property

Let $C_{j,k}(t)$ be the number of times that action $k \in \mathcal{A}$ has been taken under context $j$ up to round $t$.
If $C_{j,k}(t-1) > 0$, let $\bar{u}_{j,k}(t)$ be the empirical reward of action $k$ under context $j$, i.e.,
$$\bar{u}_{j,k}(t) = \frac{1}{C_{j,k}(t-1)}\sum_{t'=1}^{t-1} Y_{t'}\,\mathbb{1}(X_{t'} = j, A_{t'} = k),$$
where $\mathbb{1}(\cdot)$ is the indicator function. We define the UCB of $u_{j,k}$ at $t$ as
$\hat{u}_{j,k}(t) = \bar{u}_{j,k}(t) + \sqrt{\frac{\log t}{2C_{j,k}(t-1)}}$ for $C_{j,k}(t-1) > 0$, and $\hat{u}_{j,k}(t) = 1$ for $C_{j,k}(t-1) = 0$. Furthermore, we define the UCB of the maximum expected reward under context $j$ as
$\hat{u}^*_j(t) = \max_{k \in \mathcal{A}} \hat{u}_{j,k}(t)$. As suggested in [24], we use a smaller coefficient in the exploration term
$\sqrt{\frac{\log t}{2C_{j,k}(t-1)}}$ than the traditional UCB algorithm [4] to achieve better performance.
We present the following property of UCB that is important in the regret analysis.

Lemma 3. For two context-action pairs, $(j,k)$ and $(j',k')$, if $u_{j,k} < u_{j',k'}$, then for any $t \leq T$,
$$\mathbb{P}\{\hat{u}_{j,k}(t) \geq \hat{u}_{j',k'}(t) \mid C_{j,k}(t-1) \geq \ell_{j,k}\} \leq 2t^{-1}, \qquad (6)$$
where $\ell_{j,k} = \frac{2\log T}{(u_{j',k'} - u_{j,k})^2}$.

Lemma 3 states that for two context-action pairs, the ordering of their expected rewards can be identified correctly with high probability, as long as the suboptimal pair has been executed sufficiently
often (on the order of $O(\log T)$ times). This property has been widely applied in the analysis of UCB-based algorithms [4, 13], and its proof can be found in [13, 25] with a minor modification of the
coefficients.
4.2 UCB-ALP Algorithm

We propose a UCB-based adaptive linear programming (UCB-ALP) algorithm, as shown in Algorithm 1. As indicated by the name, the UCB-ALP algorithm maintains UCB estimates of the expected
rewards for all context-action pairs and then implements the ALP algorithm based on these estimates. Note that the UCB estimates $\hat{u}^*_j(t)$ may be non-decreasing in $j$. Thus, the solution of
$\mathcal{LP}_{\tau,b}$ based on $\hat{u}^*_j(t)$ depends on the actual ordering of the $\hat{u}^*_j(t)$'s and may differ from Eq. (4).
We use $\hat{p}_j(\cdot)$ rather than $p_j(\cdot)$ to indicate this difference.
Algorithm 1 UCB-ALP
Input: Time-horizon $T$, budget $B$, and context distribution $\pi_j$'s;
Init: $\tau = T$, $b = B$; $C_{j,k}(0) = 0$, $\bar{u}_{j,k}(0) = 0$, $\hat{u}_{j,k}(0) = 1$, $\forall j \in \mathcal{X}$ and $\forall k \in \mathcal{A}$; $\hat{u}^*_j(0) = 1$, $\forall j \in \mathcal{X}$;
for $t = 1$ to $T$ do
    $k^*_j(t) \leftarrow \arg\max_k \hat{u}_{j,k}(t)$, $\forall j$;
    $\hat{u}^*_j(t) \leftarrow \hat{u}_{j,k^*_j(t)}(t)$;
    if $b > 0$ then
        Obtain the probabilities $\hat{p}_j(b/\tau)$'s by solving $\mathcal{LP}_{\tau,b}$ with $u^*_j$ replaced by $\hat{u}^*_j(t)$;
        Take action $k^*_{X_t}(t)$ with probability $\hat{p}_{X_t}(b/\tau)$;
    end if
    Update $\tau$, $b$, $C_{j,k}(t)$, $\bar{u}_{j,k}(t)$, and $\hat{u}_{j,k}(t)$.
end for
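For concreteness, a self-contained Python sketch (ours) of the UCB-ALP loop is given below, reusing `alp_probabilities` from the ALP sketch above. The environment interface (`env_context`, `env_reward`) is hypothetical; it stands in for the i.i.d. context arrivals and Bernoulli rewards assumed by the model.

```python
import numpy as np

def ucb_alp(T, B, pi, env_context, env_reward, K, rng):
    """Sketch of Algorithm 1 (UCB-ALP). `env_context()` samples a context X_t
    and `env_reward(j, k)` returns a Bernoulli reward; both are hypothetical
    stand-ins for the environment, not part of the original paper."""
    pi = np.asarray(pi, dtype=float)
    J = len(pi)
    counts = np.zeros((J, K))   # C_{j,k}
    means = np.zeros((J, K))    # empirical rewards
    tau, b, total = T, B, 0.0
    for t in range(1, T + 1):
        # UCB indices; never-tried pairs get the optimistic value 1.
        with np.errstate(divide="ignore", invalid="ignore"):
            bonus = np.sqrt(np.log(t) / (2 * counts))
        ucb = np.where(counts > 0, means + bonus, 1.0)
        ustar, kstar = ucb.max(axis=1), ucb.argmax(axis=1)
        j = env_context()
        if b > 0:
            order = np.argsort(-ustar)                  # rank contexts by UCB
            p = alp_probabilities(pi[order], b / tau)   # threshold LP solution
            if rng.random() < p[np.argsort(order)][j]:  # act w.p. p_hat_{X_t}(b/tau)
                k = kstar[j]
                y = env_reward(j, k)
                counts[j, k] += 1
                means[j, k] += (y - means[j, k]) / counts[j, k]
                b -= 1
                total += y
        tau -= 1
    return total
```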
4.3 Regret of UCB-ALP

We study the regret of UCB-ALP in this section. Due to space limitations, we only present a sketch
of the analysis. Specific representations of the regret bounds and proof details can be found in the
supplementary material.

Recall that $q_j = \sum_{j'=1}^{j} \pi_{j'}$ ($1 \leq j \leq J$) are the boundaries defined in Section 3. We show that
as the budget $B$ and the time-horizon $T$ grow to infinity in proportion, the proposed UCB-ALP
algorithm achieves logarithmic regret except for the boundary cases.

Theorem 2. Given the $\pi_j$'s, the $u_{j,k}$'s, and a fixed $\rho \in (0,1)$, the regret of UCB-ALP satisfies:
1) (Non-boundary cases) If $\rho \neq q_j$ for any $j \in \{1, 2, \ldots, J-1\}$, then the regret of UCB-ALP is
$R_{\rm UCB\text{-}ALP}(T, B) = O(JK\log T)$.
2) (Boundary cases) If $\rho = q_j$ for some $j \in \{1, 2, \ldots, J-1\}$, then the regret of UCB-ALP is
$R_{\rm UCB\text{-}ALP}(T, B) = O(\sqrt{T} + JK\log T)$.
Theorem 2 differs from Theorem 1 by an additional term $O(JK\log T)$. This term results from using
UCB to learn the ordering of the expected rewards. Under UCB, each of the $JK$ context-action pairs
should be executed roughly $O(\log T)$ times to obtain the correct ordering. For the non-boundary
cases, UCB-ALP is order-optimal because obtaining the correct action ranking under each context
will result in $O(\log T)$ regret [26]. Note that our results do not contradict the lower bound in [17]
because we consider discrete contexts and actions, and focus on instance-dependent regret. For
the boundary cases, we keep both the $\sqrt{T}$ and $\log T$ terms because the constant in the $\log T$ term
is typically much larger than that in the $\sqrt{T}$ term. Therefore, the $\log T$ term may dominate the
regret, particularly when the number of context-action pairs is large for medium $T$. It remains an open
problem whether one can achieve regret lower than $O(\sqrt{T})$ in these cases.
Sketch of Proof: We bound the regret of UCB-ALP by comparing its performance with the benchmark $\hat{U}(T, B)$. The analysis of this bound is challenging due to the close interactions among different sources of regret and the randomness of the context arrivals. We first partition the regret according
to the sources and then bound each part of the regret, respectively.

Step 1: Partition the regret. By analyzing the implementation of UCB-ALP, we show that its
regret is bounded as
$$R_{\rm UCB\text{-}ALP}(T, B) \leq R^{(a)}_{\rm UCB\text{-}ALP}(T, B) + R^{(c)}_{\rm UCB\text{-}ALP}(T, B),$$
where the first part $R^{(a)}_{\rm UCB\text{-}ALP}(T, B) = \sum_{j=1}^{J}\sum_{k \neq k^*_j}(u^*_j - u_{j,k})\,\mathbb{E}[C_{j,k}(T)]$ is the regret from
action ranking errors within a context, and the second part $R^{(c)}_{\rm UCB\text{-}ALP}(T, B) = \sum_{\tau=1}^{T}\mathbb{E}\big[v(\rho) - \sum_{j=1}^{J}\hat{p}_j(b_\tau/\tau)\,\pi_j u^*_j\big]$ is the regret from the fluctuations of $b_\tau$ and context ranking errors.

Step 2: Bound each part of the regret. For the first part, we can show that $R^{(a)}_{\rm UCB\text{-}ALP}(T, B) = O(\log T)$ using techniques similar to those for traditional UCB methods [25]. The major challenge of the regret
analysis for UCB-ALP then lies in the evaluation of the second part $R^{(c)}_{\rm UCB\text{-}ALP}(T, B)$.

We first verify that the evolution of $b_\tau$ under UCB-ALP is similar to that under ALP and that Lemma 2
still holds under UCB-ALP. With respect to context ranking errors, we note that unlike classic UCB
methods, not all context ranking errors contribute to the regret, due to the threshold structure of
ALP. Therefore, we carefully categorize the context ranking results based on their contributions. We
briefly discuss the analysis for the non-boundary cases here. Recall that $\tilde{j}(\rho)$ is the threshold for the
static LP problem $\mathcal{LP}_{T,B}$. We define the following events that capture all possible ranking results
based on the UCBs:
$$\mathcal{E}_{{\rm rank},0}(t) = \{\forall j \leq \tilde{j}(\rho),\ \hat{u}^*_j(t) > \hat{u}^*_{\tilde{j}(\rho)+1}(t);\ \forall j > \tilde{j}(\rho)+1,\ \hat{u}^*_j(t) < \hat{u}^*_{\tilde{j}(\rho)+1}(t)\},$$
$$\mathcal{E}_{{\rm rank},1}(t) = \{\exists j \leq \tilde{j}(\rho),\ \hat{u}^*_j(t) \leq \hat{u}^*_{\tilde{j}(\rho)+1}(t);\ \forall j > \tilde{j}(\rho)+1,\ \hat{u}^*_j(t) < \hat{u}^*_{\tilde{j}(\rho)+1}(t)\},$$
$$\mathcal{E}_{{\rm rank},2}(t) = \{\exists j > \tilde{j}(\rho)+1,\ \hat{u}^*_j(t) \geq \hat{u}^*_{\tilde{j}(\rho)+1}(t)\}.$$
The first event $\mathcal{E}_{{\rm rank},0}(t)$ indicates a roughly correct context ranking, because under $\mathcal{E}_{{\rm rank},0}(t)$ UCB-ALP obtains a correct solution for $\mathcal{LP}_{\tau,b_\tau}$ if $b_\tau/\tau \in [q_{\tilde{j}(\rho)}, q_{\tilde{j}(\rho)+1}]$. The last two events $\mathcal{E}_{{\rm rank},s}(t)$,
$s = 1, 2$, represent two types of context ranking errors: $\mathcal{E}_{{\rm rank},1}(t)$ corresponds to "certain contexts
with above-threshold reward having lower UCB", while $\mathcal{E}_{{\rm rank},2}(t)$ corresponds to "certain contexts
with below-threshold reward having higher UCB". Let $T^{(s)} = \sum_{t=1}^{T}\mathbb{1}(\mathcal{E}_{{\rm rank},s}(t))$ for $0 \leq s \leq 2$.
We can show that the expected number of context ranking errors satisfies $\mathbb{E}[T^{(s)}] = O(JK\log T)$,
$s = 1, 2$, implying that $R^{(c)}_{\rm UCB\text{-}ALP}(T, B) = O(JK\log T)$. Summarizing the two parts, we have
$R_{\rm UCB\text{-}ALP}(T, B) = O(JK\log T)$ for the non-boundary cases. The regret for the boundary cases
can be bounded using similar arguments.
Key Insights from UCB-ALP: Constrained contextual bandits involve complicated interactions
between information acquisition and decision making. UCB-ALP alleviates these interactions by
approximating the oracle with ALP for decision making. This approximation achieves near-optimal
performance while tolerating certain estimation errors of the system statistics, and thus enables the
combination with estimation methods such as UCB in cases with unknown statistics. Moreover, the
adaptation property of UCB-ALP guarantees the concentration property of the system status, e.g.,
$b_\tau/\tau$. This allows us to separately study the impact of action and context ranking errors and conduct
a rigorous analysis of the regret. These insights can be applied in algorithm design and analysis for
constrained contextual bandits under more general settings.
5 Bandits with Unknown Context Distribution

When the context distribution is unknown, a reasonable heuristic is to replace the probability $\pi_j$ in
ALP with its empirical estimate, i.e., $\hat{\pi}_j(t) = \frac{1}{t}\sum_{t'=1}^{t}\mathbb{1}(X_{t'} = j)$. We refer to this modified ALP
algorithm as Empirical ALP (EALP), and to its combination with UCB as UCB-EALP.

The empirical distribution provides a maximum likelihood estimate of the context distribution, and
the EALP and UCB-EALP algorithms achieve performance similar to ALP and UCB-ALP, respectively, as observed in numerical simulations. However, a rigorous analysis for EALP and UCB-EALP is much more challenging due to the dependency introduced by the empirical distribution. To
tackle this issue, our rigorous analysis focuses on a truncated version of EALP where we stop updating the empirical distribution after a given round. Using the method of bounded averaged differences
based on a coupling argument, we obtain the concentration property of the average remaining budget
$b_\tau/\tau$, and show that this truncated EALP algorithm achieves $O(1)$ regret except for the boundary
cases. The regret of the corresponding UCB-based version can be bounded similarly to UCB-ALP.
6 Bandits with Heterogeneous Costs

The insights obtained from unit-cost systems can also be used to design algorithms for heterogeneous-cost systems where the cost $c_{j,k}$ depends on $j$ and $k$. We generalize the ALP algorithm to
approximate the oracle, and adjust it to the case with unknown expected rewards. For simplicity, we
assume the context distribution is known here; the empirical estimate can be used in place of
the actual context distribution if it is unknown, as discussed in the previous section.

With heterogeneous costs, the quality of an action $k$ under a context $j$ is roughly captured by its
normalized expected reward, defined as $\eta_{j,k} = u_{j,k}/c_{j,k}$. However, the agent cannot only focus
on the "best" action, i.e., $k^*_j = \arg\max_{k \in \mathcal{A}} \eta_{j,k}$, for context $j$. This is because there may exist
another action $k'$ such that $\eta_{j,k'} < \eta_{j,k^*_j}$ but $u_{j,k'} > u_{j,k^*_j}$ (and of course, $c_{j,k'} > c_{j,k^*_j}$). If
the budget allocated to context $j$ is sufficient, then the agent may take action $k'$ to maximize the
expected reward. Therefore, to approximate the oracle, the ALP algorithm in this case needs to
solve an LP problem accounting for all context-action pairs, with an additional constraint that only
one action can be taken under each context. By investigating the structure of ALP in this case and
the concentration of the remaining budget, we show that ALP achieves $O(1)$ regret in the non-boundary
cases, and $O(\sqrt{T})$ regret in the boundary cases. Then, an $\epsilon$-First ALP algorithm is proposed for the
unknown-statistics case, where an exploration stage is implemented first and an exploitation
stage is implemented afterwards according to ALP.
7 Conclusion

In this paper, we study computationally efficient algorithms that achieve logarithmic or sublinear
regret for constrained contextual bandits. Under simplified yet practical assumptions, we show
that the close interactions between the information acquisition and decision making in constrained
contextual bandits can be decoupled by adaptive linear relaxation. When the system statistics are
known, the ALP approximation achieves near-optimal performance while tolerating certain estimation errors of the system parameters. When the expected rewards are unknown, the proposed UCB-ALP
algorithm leverages the advantages of ALP and UCB, and achieves $O(\log T)$ regret except for certain boundary cases, where it achieves $O(\sqrt{T})$ regret. Our study provides an efficient approach to
dealing with the challenges introduced by budget constraints and could potentially be extended to
more general constrained contextual bandits.

Acknowledgements: This research was supported in part by NSF Grants CCF-1423542, CNS-1457060, and CNS-1547461, and AFOSR MURI Grant FA 9550-10-1-0573.
References
[1] J. Langford and T. Zhang. The epoch-greedy algorithm for contextual multi-armed bandits. In Advances in Neural Information Processing Systems (NIPS), pages 817–824, 2007.
[2] T. Lu, D. Pál, and M. Pál. Contextual multi-armed bandits. In International Conference on Artificial Intelligence and Statistics, pages 485–492, 2010.
[3] L. Zhou. A survey on contextual multi-armed bandits. arXiv preprint arXiv:1508.03326, 2015.
[4] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002.
[5] L. Li, W. Chu, J. Langford, and R. E. Schapire. A contextual-bandit approach to personalized news article recommendation. In ACM International Conference on World Wide Web (WWW), pages 661–670, 2010.
[6] A. Slivkins. Contextual bandits with similarity information. The Journal of Machine Learning Research, 15(1):2533–2568, 2014.
[7] A. Agarwal, D. Hsu, S. Kale, J. Langford, L. Li, and R. E. Schapire. Taming the monster: A fast and simple algorithm for contextual bandits. In International Conference on Machine Learning (ICML), 2014.
[8] P. Auer and R. Ortner. Logarithmic online regret bounds for undiscounted reinforcement learning. In Advances in Neural Information Processing Systems (NIPS), pages 49–56, 2007.
[9] A. Badanidiyuru, R. Kleinberg, and Y. Singer. Learning on a budget: posted price mechanisms for online procurement. In ACM Conference on Electronic Commerce, pages 128–145, 2012.
[10] T. L. Lai and O. Y.-W. Liao. Efficient adaptive randomization and stopping rules in multi-arm clinical trials for testing a new treatment. Sequential Analysis, 31(4):441–457, 2012.
[11] L. Tran-Thanh, A. C. Chapman, A. Rogers, and N. R. Jennings. Knapsack based optimal policies for budget-limited multi-armed bandits. In AAAI Conference on Artificial Intelligence, 2012.
[12] A. Badanidiyuru, R. Kleinberg, and A. Slivkins. Bandits with knapsacks. In IEEE 54th Annual Symposium on Foundations of Computer Science (FOCS), pages 207–216, 2013.
[13] C. Jiang and R. Srikant. Bandits with budgets. In IEEE 52nd Annual Conference on Decision and Control (CDC), pages 5345–5350, 2013.
[14] A. Slivkins. Dynamic ad allocation: Bandits with budgets. arXiv preprint arXiv:1306.0155, 2013.
[15] Y. Xia, H. Li, T. Qin, N. Yu, and T.-Y. Liu. Thompson sampling for budgeted multi-armed bandits. In International Joint Conference on Artificial Intelligence, 2015.
[16] R. Combes, C. Jiang, and R. Srikant. Bandits with budgets: Regret lower bounds and optimal algorithms. In ACM Sigmetrics, 2015.
[17] A. Badanidiyuru, J. Langford, and A. Slivkins. Resourceful contextual bandits. In Conference on Learning Theory (COLT), 2014.
[18] R. Combes, A. Proutiere, D. Yun, J. Ok, and Y. Yi. Optimal rate sampling in 802.11 systems. In IEEE INFOCOM, pages 2760–2767, 2014.
[19] M. H. Veatch. Approximate linear programming for average cost MDPs. Mathematics of Operations Research, 38(3):535–544, 2013.
[20] S. Agrawal and N. R. Devanur. Bandits with concave rewards and convex knapsacks. In ACM Conference on Economics and Computation, pages 989–1006. ACM, 2014.
[21] S. Agrawal, N. R. Devanur, and L. Li. Contextual bandits with global constraints and objective. arXiv preprint arXiv:1506.03374, 2015.
[22] S. Agrawal and N. R. Devanur. Linear contextual bandits with global constraints and objective. arXiv preprint arXiv:1507.06738, 2015.
[23] D. P. Dubhashi and A. Panconesi. Concentration of Measure for the Analysis of Randomized Algorithms. Cambridge University Press, 2009.
[24] A. Garivier and O. Cappé. The KL-UCB algorithm for bounded stochastic bandits and beyond. In Conference on Learning Theory (COLT), pages 359–376, 2011.
[25] D. Golovin and A. Krause. Dealing with partial feedback, 2009.
[26] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
5,535 | 6,009 | From random walks to distances on unweighted graphs

Tatsunori B. Hashimoto, MIT EECS, [email protected]
Yi Sun, MIT Mathematics, [email protected]
Tommi S. Jaakkola, MIT EECS, [email protected]
Abstract
Large unweighted directed graphs are commonly used to capture relations between entities. A fundamental problem in the analysis of such networks is to
properly define the similarity or dissimilarity between any two vertices. Despite
the significance of this problem, statistical characterization of the proposed metrics has been limited.
We introduce and develop a class of techniques for analyzing random walks on
graphs using stochastic calculus. Using these techniques we generalize results on
the degeneracy of hitting times and analyze a metric based on the Laplace transformed hitting time (LTHT). The metric serves as a natural, provably well-behaved
alternative to the expected hitting time. We establish a general correspondence
between hitting times of the Brownian motion and analogous hitting times on the
graph. We show that the LTHT is consistent with respect to the underlying metric
of a geometric graph, preserves clustering tendency, and remains robust against
random addition of non-geometric edges. Tests on simulated and real-world data
show that the LTHT matches theoretical predictions and outperforms alternatives.
1 Introduction
Many network metrics have been introduced to measure the similarity between any two vertices.
Such metrics can be used for a variety of purposes, including uncovering missing edges or pruning
spurious ones. Since the metrics tacitly assume that vertices lie in a latent (metric) space, one could
expect that they also recover the underlying metric in some well-defined limit. Surprisingly, there
are nearly no known results on this type of consistency. Indeed, it was recently shown [19] that the
expected hitting time degenerates and does not measure any notion of distance.
We analyze an improved hitting-time metric, the Laplace transformed hitting time (LTHT), and rigorously evaluate its consistency, cluster preservation, and robustness under a general network model
which encapsulates the latent space assumption. This network model, specified in Section 2, posits
that vertices lie in a latent metric space, and edges are drawn between nearby vertices in that space.
To analyze the LTHT, we develop two key technical tools. We establish a correspondence between
functionals of hitting time for random walks on graphs, on the one hand, and limiting Itô processes
(Corollary 4.4) on the other. Moreover, we construct a weighted random walk on the graph whose
limit is a Brownian motion (Corollary 4.1). We apply these tools to obtain three main results.
First, our Theorem 3.5 recapitulates and generalizes the result of [19] pertaining to degeneration
of expected hitting time in the limit. Our proof is direct and demonstrates the broader applicability of the techniques to general random walk based algorithms. Second, we analyze the Laplace
transformed hitting time as a one-parameter family of improved distance estimators based on random walks on the graph. We prove that there exists a scaling limit for the parameter $\lambda_n$ such that
the LTHT can become the shortest path distance (Theorem S5.2) or a consistent metric estimator
averaging over many paths (Theorem 4.5). Finally, we prove that the LTHT captures the advantages
of random-walk based metrics by respecting the cluster structure (Theorem 4.6) and robustly recovering similarity queries when the majority of edges carry no geometric information (Theorem 4.9).
We now discuss the relation of our work to prior work on similarity estimation.
Quasi-walk metrics: There is a growing literature on graph metrics that attempts to correct the
degeneracy of expected hitting time [19] by interpolating between expected hitting time and shortest
path distance. The work closest to ours is the analysis of the phase transition of the p-resistance
metric in [1] which proves that p-resistances are nondegenerate for some p; however, their work did
not address consistency or bias of p-resistances. Other approaches to quasi-walk metrics such as
logarithmic-forest [3], distributed routing distances [16], truncated hitting times [12], and randomized shortest paths [8, 21] exist but their statistical properties are unknown. Our paper is the first to
prove consistency properties of a quasi-walk metric.
Nonparametric statistics: In the nonparametric statistics literature, the behavior of k-nearest neighbor and $\epsilon$-ball graphs has been the focus of extensive study. For undirected graphs, Laplacian-based
techniques have yielded consistency for clusters [18] and shortest paths [2] as well as the degeneracy of expected hitting time [19]. Algorithms for exactly embedding k-nearest neighbor graphs are
similar and generate metric estimates, but require knowledge of the graph construction method, and
their consistency properties are unknown [13]. Stochastic differential equation techniques similar
to ours were applied to prove Laplacian convergence results in [17], while the process-level convergence was exploited in [6]. Our work advances the techniques of [6] by extracting more robust
estimators from process-level information.
Network analysis: The task of predicting missing links in a graph, known as link prediction, is one
of the most popular uses of similarity estimation. The survey [9] compares several common link
prediction methods on synthetic benchmarks. The consistency of some local similarity metrics such
as the number of shared neighbors was analyzed under a single generative model for graphs in [11].
Our results extend this analysis to a global, walk-based metric under weaker model assumptions.
2 Continuum limits of random walks on networks

2.1 Definition of a spatial graph

We take a generative approach to defining similarity between vertices. We suppose that each vertex
$i$ of a graph is associated with a latent coordinate $x_i \in \mathbb{R}^d$ and that the probability of finding an edge
between two vertices depends solely on their latent coordinates. In this model, given only the unweighted edge connectivity of a graph, we define natural distances between vertices as the distances
between the latent coordinates $x_i$. Formally, let $X = \{x_1, x_2, \ldots\} \subset \mathbb{R}^d$ be an infinite sequence
of points drawn i.i.d. from a differentiable density $p(x)$ with bounded log gradient and compact
support $D$. A spatial graph is defined by the following:

Definition 2.1 (Spatial graph). Let $\sigma_n : X_n \to \mathbb{R}_{>0}$ be a local scale function and $h : \mathbb{R}_{\geq 0} \to [0,1]$
a piecewise continuous function with $h(x) = 0$ for $x > 1$, $h(1) > 0$, and $h$ left-continuous at 1. The
spatial graph $G_n$ corresponding to $\sigma_n$ and $h$ is the random graph with vertex set $X_n$ and a directed
edge from $x_i$ to $x_j$ with probability $p_{ij} = h(|x_i - x_j|\,\sigma_n(x_i)^{-1})$.

This graph was proposed in [6] as the generalization of k-nearest neighbors to isotropic kernels. To
make inference tractable, we focus on the large-graph, small-neighborhood limit as $n \to \infty$ and
$\sigma_n(x) \to 0$. In particular, we suppose that there exist scaling constants $g_n$ and a deterministic
continuous function $\sigma : D \to \mathbb{R}_{>0}$ so that
$$g_n \to 0, \qquad g_n\, n^{\frac{1}{d+2}} \log(n)^{-\frac{1}{d+2}} \to \infty, \qquad \sigma_n(x)\, g_n^{-1} \to \sigma(x) \text{ for } x \in X_n,$$
where the final convergence is uniform in $x$ and a.s. in the draw of $X$. The scaling constant $g_n$
represents a bound on the asymptotic sparsity of the graph.

We give a few concrete examples to make the quantities $h$, $g_n$, and $\sigma_n$ clear; a code sketch follows the list.

1. The directed k-nearest neighbor graph is defined by setting $h(x) = \mathbb{1}_{x \in [0,1]}$, the indicator
function of the unit interval, $\sigma_n(x)$ the distance to the $k$-th nearest neighbor, and $g_n = (k/n)^{1/d}$ the rate at which $\sigma_n(x)$ approaches zero.
2. A Gaussian kernel graph is approximated by setting $h(x) = \exp(-x^2/\sigma^2)\,\mathbb{1}_{x \in [0,1]}$. The
truncation of the Gaussian tails at $\sigma$ is an analytic convenience rather than a fundamental
limitation, and the bandwidth can be varied by rescaling $\sigma_n(x)$.
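To make the construction concrete, here is a minimal sketch (ours, not from the paper) that samples a directed k-nearest neighbor spatial graph; the uniform density on a box is an arbitrary choice for illustration.

```python
import numpy as np

def knn_spatial_graph(n=500, d=2, k=10, rng=None):
    """Directed kNN spatial graph: h(x) = 1_{x in [0,1]} and
    sigma_n(x_i) = distance from x_i to its k-th nearest neighbor."""
    rng = rng or np.random.default_rng(0)
    X = rng.uniform(size=(n, d))                  # latent coordinates
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    np.fill_diagonal(D, np.inf)                   # no self-loops
    sigma = np.sort(D, axis=1)[:, k - 1]          # local scale sigma_n(x_i)
    A = (D <= sigma[:, None]).astype(float)       # edge iff |x_i - x_j| <= sigma_n(x_i)
    return X, A, sigma
```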
2.2 Continuum limit of the random walk

Our techniques rely on analysis of the limiting behavior of the simple random walk $X^n_t$ on a spatial
graph $G_n$, viewed as a discrete-time Markov process with domain $D$. The increment at step $t$ of
$X^n_t$ is a jump to a random point in $X_n$ which lies within the ball of radius $\sigma_n(X^n_t)$ around $X^n_t$.
We observe three effects: (A) the random walk jumps more frequently towards regions of high
density; (B) the random walk moves more quickly whenever $\sigma_n(X^n_t)$ is large; (C) for $\sigma_n$ small and
a large step count $t$, the random variable $X^n_t - X^n_0$ is the sum of many small independent (but not
necessarily identically distributed) increments. In the $n \to \infty$ limit, we may identify $X^n_t$ with a
continuous-time stochastic process satisfying (A), (B), and (C) via the following result, which is a
slight strengthening of [6, Theorem 3.4] obtained by applying [15, Theorem 11.2.3] in place of the
original result of Stroock-Varadhan.

Theorem 2.2. The simple random walk $X^n_t$ converges uniformly in Skorokhod space $\mathbb{D}([0,\infty), D)$,
after the time scaling $\hat{t} = t g_n^2$, to the Itô process $\hat{Y}_{\hat{t}}$ valued in the space of continuous functions
$C([0,\infty), D)$ with reflecting boundary conditions on $D$, defined by
$$d\hat{Y}_{\hat{t}} = \nabla \log(p(\hat{Y}_{\hat{t}}))\,\sigma(\hat{Y}_{\hat{t}})^2/3\; d\hat{t} + \sigma(\hat{Y}_{\hat{t}})/\sqrt{3}\; dW_{\hat{t}}. \qquad (1)$$

Effects (A), (B), and (C) may be seen in the stochastic differential equation (1) as follows. The
direction of the drift is controlled by $\nabla \log(p(\hat{Y}_{\hat{t}}))$, the rate of drift is controlled by $\sigma(\hat{Y}_{\hat{t}})^2$, and the
noise is driven by a Brownian motion $W_{\hat{t}}$ with location-dependent scaling $\sigma(\hat{Y}_{\hat{t}})/\sqrt{3}$.¹

We view Theorem 2.2 as a method to understand the simple random walk $X^n_t$ through the continuous walk $\hat{Y}_{\hat{t}}$. Attributes of stochastic processes such as the stationary distribution or hitting time may be
defined for both $\hat{Y}_{\hat{t}}$ and $X^n_t$, and in many cases Theorem 2.2 implies that an appropriately-rescaled
version of the discrete attribute will converge to the continuous one. Because attributes of the continuous process $\hat{Y}_{\hat{t}}$ can reveal information about proximity between points, this provides a general
framework for inference in spatial graphs. We use hitting times of the continuous process to a domain $E \subset D$ to prove properties of the hitting time of a simple random walk on a graph via the limit
arguments of Theorem 2.2.
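As an illustration of working with the limit directly, the following is a small Euler-Maruyama sketch (ours) of the Itô process in Eq. (1) for user-supplied density and scale functions; the clipping step is only a crude stand-in for the reflecting boundary on $D = [0,1]^d$, and the example density is arbitrary.

```python
import numpy as np

def simulate_limit_process(grad_log_p, sigma, y0, dt=1e-3, steps=5000, rng=None):
    """Euler-Maruyama discretization of Eq. (1):
    dY = grad_log_p(Y) * sigma(Y)^2 / 3 dt + sigma(Y)/sqrt(3) dW."""
    rng = rng or np.random.default_rng(0)
    y = np.array(y0, dtype=float)
    path = [y.copy()]
    for _ in range(steps):
        drift = grad_log_p(y) * sigma(y) ** 2 / 3.0
        noise = sigma(y) / np.sqrt(3.0) * rng.normal(size=y.shape) * np.sqrt(dt)
        y = np.clip(y + drift * dt + noise, 0.0, 1.0)  # crude reflection on [0,1]^d
        path.append(y.copy())
    return np.array(path)

# Example: Gaussian-like density restricted to the box, constant local scale.
path = simulate_limit_process(grad_log_p=lambda y: -y, sigma=lambda y: 0.1, y0=[0.5, 0.5])
```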
3 Degeneracy of expected hitting times in networks

The hitting time, commute time, and resistance distance are popular measures of distance based upon
the random walk which are believed to be robust and to capture the cluster structure of the network.
However, it was shown in a surprising result in [19] that on undirected geometric graphs the scaled
expected hitting time from $x_i$ to $x_j$ converges to the inverse of the degree of $x_j$.

In Theorem 3.5, we give an intuitive explanation and generalization of this result by showing that
if the random walk on a graph converges to any limiting Itô process in dimension $d \geq 2$, the
scaled expected hitting time to any point converges to the inverse of the stationary distribution. This
answers the open problem in [19] on the degeneracy of hitting times for directed graphs and graphs
with general degree distributions, such as directed k-nearest neighbor graphs, lattices, and power-law
graphs with convergent random walks. Our proof can be understood as first extending the transience
or neighborhood recurrence of Brownian motion for $d \geq 2$ to more general Itô processes and then
connecting hitting times on graphs to their Itô process equivalents.
3.1 Typical hitting times are large

We will prove the following lemma showing that hitting a given vertex quickly is unlikely. Let $T^{x_i}_{x_j,n}$ be the
hitting time to $x_j$ of $X^n_t$ started at $x_i$, and let $T^{x_i}_E$ be the continuous equivalent for $\hat{Y}_{\hat{t}}$ to hit $E \subset D$.

¹ Both the variance $\Theta(\sigma_n(x)^2)$ and expected value $\Theta(\nabla\log(p(x))\,\sigma_n(x)^2)$ of a single step in the simple
random walk are $\Theta(g_n^2)$. The time scaling $\hat{t} = t g_n^2$ in Theorem 2.2 was chosen so that as $n \to \infty$ there are $g_n^{-2}$
discrete steps taken per unit time, meaning the total drift and variance per unit time tend to a non-trivial limit.

Lemma 3.1 (Typical hitting times are large). For any $d \geq 2$, $c > 0$, and $\delta > 0$, for large enough $n$
we have $\mathbb{P}(T^{x_i}_{x_j,n} > c g_n^{-2}) > 1 - \delta$.
To prove Lemma 3.1, we require the following tail bound, which follows from the Feynman-Kac theorem.

Theorem 3.2 ([10, Exercise 9.12], Feynman-Kac for the Laplace transform). The Laplace transform
of the hitting time (LTHT) $u(x) = \mathbb{E}[\exp(-\beta T^x_E)]$ is the solution to the boundary value problem
with boundary condition $u|_{\partial E} = 1$:
$$\frac{1}{2}\mathrm{Tr}[\sigma^\top H(u)\,\sigma] + \mu(x) \cdot \nabla u - \beta u = 0.$$
This will allow us to bound the hitting time to the ball $B(x_j, s)$ of radius $s$ centered at $x_j$.

Lemma 3.3. For $x, y \in D$, $d \geq 2$, and any $\epsilon > 0$, there exists $s > 0$ such that $\mathbb{E}[e^{-T^x_{B(y,s)}}] < \epsilon$.

Proof. We compare the Laplace transformed hitting time of the general Itô process to that of Brownian motion via Feynman-Kac and handle the latter case directly. Details are in Section S2.1.
We now use Lemma 3.3 to prove Lemma 3.1.

Proof of Lemma 3.1. Our proof proceeds in two steps. First, we have $T^{x_i}_{x_j,n} \geq T^{x_i}_{B(x_j,s),n}$ a.s. for
any $s > 0$ because $x_j \in B(x_j, s)$, so by Theorem 2.2, we have
$$\lim_{n\to\infty} \mathbb{E}\big[e^{-T^{x_i}_{x_j,n} g_n^2}\big] \leq \lim_{n\to\infty} \mathbb{E}\big[e^{-T^{x_i}_{B(x_j,s),n} g_n^2}\big] = \mathbb{E}\big[e^{-T^{x_i}_{B(x_j,s)}}\big]. \qquad (2)$$
Applying Lemma 3.3, we have $\mathbb{E}[e^{-T^{x_i}_{B(x_j,s)}}] < \frac{1}{2}\delta e^{-c}$ for some $s > 0$. For large enough $n$, this
combined with (2) implies $\mathbb{P}(T^{x_i}_{x_j,n} \leq c g_n^{-2})\,e^{-c} < \delta e^{-c}$, and hence $\mathbb{P}(T^{x_i}_{x_j,n} \leq c g_n^{-2}) < \delta$.
3.2 Expected hitting times degenerate to the stationary distribution

To translate results from Itô processes to directed graphs, we require a regularity condition. Let
$q_t(x_j, x_i)$ denote the probability that $X^n_t = x_j$ conditioned on $X^n_0 = x_i$. We make the following
technical conjecture, which we assume holds for all spatial graphs:

(⋆) For $t = \Theta(g_n^{-2})$, the rescaled marginal $n\,q_t(x, x_i)$ is a.s. eventually uniformly equicontinuous.²

Let $\pi_{X^n}(x)$ denote the stationary distribution of $X^n_t$. The following was shown in [6, Theorem 2.1]
under conditions implied by our condition (⋆) (Corollary S2.6).

Theorem 3.4. Assuming (⋆), for $a^{-1} = \int p(x)^2 \sigma(x)^{-2}\,dx$, we have the a.s. limit
$$\hat{\pi}(x) := \lim_{n\to\infty} n\,\pi_{X^n}(x) = a\,\frac{p(x)}{\sigma(x)^2}.$$
We may now express the limit of the expected hitting time in terms of this result.

Theorem 3.5. For $d \geq 2$ and any $i, j$, we have
$$\frac{\mathbb{E}[T^{x_i}_{x_j,n}]}{n} \xrightarrow{a.s.} \frac{1}{\hat{\pi}(x_j)}.$$
Proof. We give a sketch. By Lemma 3.1, the random walk started at $x_i$ does not hit $x_j$ within $c g_n^{-2}$
steps with high probability. By Theorem S2.5, the simple random walk $X^n_t$ mixes at exponential
rate, implying in Lemma S2.8 that the probability of first hitting at step $t > c g_n^{-2}$ is approximately
the stationary distribution at $x_j$. The expected hitting time is then shown to approximate the expectation
of a geometric random variable. See Section S2 for a full proof.

Theorem 3.5 is illustrated in Figures 1A and 1B, which show that with only 3000 points, expected hitting
times on a k-nearest neighbor graph degenerate to the stationary distribution.³

² Assumption (⋆) is related to smoothing properties of the graph Laplacian and is known to hold for undirected graphs [4]. No directed analogue is known, and [6] conjectured a weaker property for all spatial graphs.
See Section S1 for further details.

³ Surprisingly, [19] proved that 1-D hitting times diverge despite convergence of the continuous equivalent.
This occurs because the discrete walk can jump past the target point. In Section S2.4, we consider 1-D hitting
times to small out-neighborhoods, which corrects this problem, and derive closed-form solutions (Theorem S2.12).
This hitting time is non-degenerate but highly biased due to boundary terms (Corollary S2.14).
4
Figure 1: Estimated distance from orange starting point on a k-nearest neighbor graph constructed
on two clusters. A and B show degeneracy of hitting times (Theorem 3.5). C, D, and E show that
log-LTHT interpolate between hitting time and shortest path.
4
The Laplace transformed hitting time (LTHT)
In Theorem 3.5 we showed that expected hitting time is degenerate because a simple random walk
mixes before hitting its target. To correct this we penalize longer paths. More precisely, consider for
x
b x
b 2 the Laplace transforms E[e??T
E ] and E[e??n TE,n ] of T x and T x
?b > 0 and ?n = ?g
n
E,n .
E
These Laplace transformed hitting times (LTHT?s) have three advantages. First, while the expected
hitting time of a Brownian motion to a domain is dominated by long paths, the LTHT is dominated
by direct paths. Second, the LTHT for the It? process can be derived in closed form via the FeynmanKac theorem, allowing us to make use of techniques from continuous stochastic processes to control
the continuum LTHT. Lastly, the LTHT can be computed both by sampling and in closed form as a
matrix inversion (Section S3). Now define the scaled log-LTHT as
p
xi
? log(E[e??n Txj ,n ])/ 2?n gn .
Taking different scalings for ?n with n interpolates between expected hitting time (?n ? 0 on a
fixed graph) and shortest path distance (?n ? ?) (Figures 1C, D, and E). In Theorem 4.5, we show
b 2 ) yields a consistent distance measure retaining the unique
that the intermediate scaling ?n = ?(?g
n
properties of hitting times. Most of our results on the LTHT are novel for any quasi-walk metric.
While considering the Laplace transform of the hitting time is novel to our work, this metric has been
used in the literature in an ad-hoc manner in various forms as a similarity metric for collaboration
networks [20], hidden subgraph detection [14], and robust shortest path distance [21]. However,
these papers only considered the elementary properties of the limits ?n ? 0 and ?n ? ?. Our
consistency proof demonstrates the advantage of the stochastic process approach.
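To make the matrix-inversion computation concrete, here is a short sketch (ours): for a finite chain with transition matrix $P$ and a single target vertex, $u(i) = \mathbb{E}[e^{-\lambda T}]$ satisfies $u = e^{-\lambda} P u$ off the target and $u = 1$ at the target, which reduces to one linear solve. The function name and interface are ours.

```python
import numpy as np

def ltht(P, target, lam):
    """Laplace transformed hitting time u(i) = E[exp(-lam * T_target)] for a
    discrete-time Markov chain with transition matrix P. Solves
    (I - e^{-lam} P_SS) u_S = e^{-lam} P_{S,target} on S = non-target states."""
    n = P.shape[0]
    S = np.array([i for i in range(n) if i != target])
    A = np.eye(len(S)) - np.exp(-lam) * P[np.ix_(S, S)]
    b = np.exp(-lam) * P[S, target]
    u = np.ones(n)
    u[S] = np.linalg.solve(A, b)
    return u

# log-LTHT "distance" from every vertex to `target`:
# d = -np.log(ltht(P, target, lam)) / np.sqrt(2 * lam)
```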
4.1 Consistency

It was shown previously that for $n$ fixed and $\lambda_n \to \infty$, $-\log(\mathbb{E}[e^{-\lambda_n T^{x_i}_{x_j,n}}])/(\lambda_n g_n^{-1})$ converges to
the shortest path distance from $x_i$ to $x_j$. We investigate more precise behavior in terms of the scaling of
$\lambda_n$. There are two regimes: if $\lambda_n = \omega(\log(g_n^d n))$, then the shortest path dominates and the LTHT
converges to shortest path distance (see Theorem S5.2). If $\lambda_n = \Theta(\hat{\lambda} g_n^2)$, the graph log-LTHT
converges to its continuous equivalent, which for large $\hat{\lambda}$ averages over random walks concentrated
around the geodesic. To show consistency for $\lambda_n = \Theta(\hat{\lambda} g_n^2)$, we proceed in four steps: (1) we
reweight the random walk on the graph so the limiting process is Brownian motion; (2) we show
that the log-LTHT for Brownian motion recovers latent distance; (3) we show that the log-LTHT for the
reweighted walk converges to its continuous limit; (4) we conclude that the log-LTHT of the reweighted
walk recovers latent distance.

(1) Reweighting the random walk to converge to Brownian motion: We define weights using the
estimators $\hat{p}$ and $\hat{\sigma}$ for $p(x)$ and $\sigma(x)$ from [6].
Theorem 4.1. Let $\hat{p}$ and $\hat{\sigma}$ be consistent estimators of the density and local scale, and let $A$ be the
adjacency matrix. Then the random walk $\hat{X}^n_t$ defined below converges to a Brownian motion:
$$\mathbb{P}(\hat{X}^n_{t+1} = x_j \mid \hat{X}^n_t = x_i) = \begin{cases} \dfrac{A_{ij}\,\hat{p}(x_j)^{-1}}{\sum_k A_{ik}\,\hat{p}(x_k)^{-1}}\,\hat{\sigma}(x_i)^{-2} & i \neq j, \\[4pt] 1 - \hat{\sigma}(x_i)^{-2} & i = j. \end{cases}$$

Proof. Reweighting by $\hat{p}$ and $\hat{\sigma}$ is designed to cancel the drift and diffusion terms in Theorem 2.2 by
ensuring that as $n$ grows large, jumps have means approaching 0 and variances which are asymptotically equal (but decaying with $n$). See Theorem S4.1.⁴
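A direct sketch (ours) of the reweighted transition matrix of Theorem 4.1, given an adjacency matrix and plug-in estimates of the density and local scale; for the lazy self-loop to be a valid probability we assume the estimated local scale has been normalized so that $\hat{\sigma}(x_i) \geq 1$, and that $A$ has zero diagonal.

```python
import numpy as np

def reweighted_transition(A, p_hat, sigma_hat):
    """Transition matrix of Theorem 4.1: off-diagonal moves proportional to
    A_ij / p_hat(x_j), scaled by sigma_hat(x_i)^{-2}; remaining mass stays put."""
    W = A / p_hat[None, :]                 # A_ij * p_hat(x_j)^{-1}
    W = W / W.sum(axis=1, keepdims=True)   # normalize over out-neighbors
    P = W * sigma_hat[:, None] ** -2       # slow down where sigma_hat is large
    P[np.diag_indices_from(P)] += 1.0 - sigma_hat ** -2
    return P                               # rows sum to one by construction
```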
(2) Log-LTHT for a Brownian motion: Let $W_t$ be a Brownian motion with $W_0 = x_i$, and let
$\bar{T}^{x_i}_{B(x_j,s)}$ be the hitting time of $W_t$ to $B(x_j, s)$. We show that the log-LTHT converges to distance.

Lemma 4.2. For any $\xi < 0$, if $\hat{\lambda} = s^{\xi}$, then as $s \to 0$ we have
$$-\log\big(\mathbb{E}[\exp(-\hat{\lambda}\,\bar{T}^{x_i}_{B(x_j,s)})]\big)\big/\sqrt{2\hat{\lambda}} \to |x_i - x_j|.$$

Proof. We consider the hitting time of a Brownian motion started at distance $|x_i - x_j|$ from the origin to
distance $s$ of the origin, which is controlled by a Bessel process. See Subsection S6.1 for details.
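A quick Monte Carlo sanity check of this behavior in one dimension (ours, not from the paper): simulate Brownian paths and compare the empirical log-LTHT with the known closed form $\mathbb{E}[e^{-\lambda T_d}] = e^{-\sqrt{2\lambda}\,d}$ for one-sided hitting of a level at distance $d$; time discretization causes a small bias.

```python
import numpy as np

rng = np.random.default_rng(0)
d, lam, dt, n_paths, max_steps = 1.0, 50.0, 1e-3, 2000, 20000

hit = np.full(n_paths, np.inf)   # hitting times of level d (inf = not yet hit)
x = np.zeros(n_paths)
for step in range(1, max_steps + 1):
    alive = np.isinf(hit)
    x[alive] += np.sqrt(dt) * rng.normal(size=alive.sum())
    hit[alive & (x >= d)] = step * dt

# Paths that never hit contribute exp(-inf) = 0, matching the closed form.
estimate = -np.log(np.exp(-lam * hit).mean()) / np.sqrt(2 * lam)
print(estimate)  # close to d = 1.0, up to a small discretization bias
```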
(3) Convergence of the LTHT for $\lambda_n = \Theta(\hat{\lambda} g_n^2)$: To compare continuous and discrete log-LTHT's, we
first define the s-neighborhood of a vertex $x_i$ on $G_n$ as the graph equivalent of the ball $B(x_i, s)$.

Definition 4.3 (s-neighborhood). Let $\hat{\sigma}(x)$ be the consistent estimate of the local scale from [6], so
that $\hat{\sigma}(x) \to \sigma(x)$ uniformly a.s. as $n \to \infty$. The $\hat{\sigma}$-weight of a path $x_{i_1} \to \cdots \to x_{i_l}$ is the sum
$\sum_{m=1}^{l-1} \hat{\sigma}(x_{i_m})$ of vertex weights $\hat{\sigma}(x_i)$. For $s > 0$ and $x \in G_n$, the s-neighborhood of $x$ is
$$NB^s_n(x) := \{y \mid \text{there is a path } x \to y \text{ of } \hat{\sigma}\text{-weight} \leq g_n^{-1} s\}.$$
For $x_i, x_j \in G_n$, let $\hat{T}^{x_i}_{NB^s_n(x_j)}$ be the hitting time of the transformed walk on $G_n$ from $x_i$ to $NB^s_n(x_j)$.
We now verify that hitting times to the s-neighborhood on graphs and to the s-radius ball coincide.

Corollary 4.4. For $s > 0$, we have $g_n^2\,\hat{T}^{x_i}_{NB^s_n(x_j),n} \xrightarrow{d} \bar{T}^{x_i}_{B(x_j,s)}$.

Proof. We verify that the ball and the neighborhood have nearly identical sets of points and apply
Theorem 2.2. See Subsection S6.2 for details.
(4) Proving consistency of the log-LTHT: Properly accounting for boundary effects, we obtain a consistency result for the log-LTHT for small-neighborhood hitting times.

Theorem 4.5. Let $x_i, x_j \in G_n$ be connected by a geodesic not intersecting $\partial D$. For any $\epsilon > 0$,
there exists a choice of $\hat{\lambda}$ and $s > 0$ such that if $\lambda_n = \hat{\lambda} g_n^2$, for large $n$ we have with high probability
$$\Big|-\log\big(\mathbb{E}[\exp(-\lambda_n \hat{T}^{x_i}_{NB^s_n(x_j),n})]\big)\big/\sqrt{2\lambda_n g_n^{-2}} \;-\; |x_i - x_j|\Big| < \epsilon.$$

Proof of Theorem 4.5. The proof has three steps. First, we convert to the continuous setting via
Corollary 4.4. Second, we show the contribution of the boundary is negligible. The conclusion
follows from the explicit computation of Lemma S6.1. Full details are in Section S6.

The stochastic-process-limit-based proof of Theorem 4.5 implies that the log-LTHT is consistent and
robust to small perturbations of the graph which preserve the same limit (Supp. Section S8).

⁴ This is a special case of a more general theorem for transforming limits of graph random walks (Theorem
S4.1). Figure S1 shows that this modification is highly effective in practice.
4.2 Bias

Random-walk-based metrics are often motivated as recovering a cluster-preserving metric. We now
show that the log-LTHT of the unweighted simple random walk preserves the underlying cluster
structure. In the 1-D case, we provide a complete characterization.

Theorem 4.6. Suppose the spatial graph has $d = 1$ and $h(x) = \mathbb{1}_{x \in [0,1]}$. Let $T^{x_i}_{NB^{\hat{\sigma}(x_j)g_n}_n(x_j),n}$ be
the hitting time of a simple random walk from $x_i$ to the out-neighborhood of $x_j$. Then
$$-\log\big(\mathbb{E}[e^{-\lambda T^{x_i}_{NB^{\hat{\sigma}(x_j)g_n}_n(x_j),n}}]\big)\big/\sqrt{2\lambda} \;\to\; \int_{x_i}^{x_j} \sqrt{m(x)}\,dx + o\big(\log(1 + e^{-\sqrt{8\lambda}})/\sqrt{2\lambda}\big),$$
where
$$m(x) = \frac{2}{\sigma(x)^2} + \frac{1}{\lambda}\frac{\partial^2 \log(p(x))}{\partial x^2} + \frac{1}{\lambda}\left(\frac{\partial \log(p(x))}{\partial x}\right)^2$$
defines a density-sensitive metric.

Proof. Apply the WKBJ approximation for Schrödinger equations to the Feynman-Kac PDE from
Theorem 3.2. See Corollary S7.2 and Corollary S2.13 for a full proof.

The leading-order terms of the density-sensitive metric appropriately penalize crossing regions with
large changes in the log density; this is not the case for the expected hitting time (Theorem S2.12).
4.3 Robustness

While shortest path distance is a consistent measure of the underlying metric, it breaks down catastrophically with the addition of a single non-geometric edge and does not meaningfully rank vertices
that share an edge. In contrast, we show that the LTHT breaks ties between vertices via the resource
allocation (RA) index, a robust local similarity metric, under Erdős-Rényi-type noise.⁵

Definition 4.7. The noisy spatial graph $G_n$ over $X_n$ with noise terms $q_1(n), \ldots, q_n(n)$ is constructed by drawing an edge from $x_i$ to $x_j$ with probability
$$p_{ij} = h(|x_i - x_j|\,\sigma_n(x_i)^{-1})(1 - q_j(n)) + q_j(n).$$
Define the directed RA index in terms of the out-neighborhood set $NB_n(x_i)$ and the in-neighborhood
set $NB^{in}_n(x_j)$ as
$$R_{ij} := \sum_{x_k \in NB_n(x_i) \cap NB^{in}_n(x_j)} |NB_n(x_k)|^{-1},$$
and the two-step log-LTHT by $M^{ts}_{ij} := -\log(\mathbb{E}[\exp(-\lambda T^{x_i}_{x_j,n}) \mid T^{x_i}_{x_j,n} > 1])$.⁶ We show that the two-step log-LTHT and the RA index give equivalent
methods for testing whether vertices are within distance $\sigma_n(x)$.

Theorem 4.8. If $\lambda = \omega(\log(g_n^d n))$ and $x_i$ and $x_j$ have at least one common neighbor, then
$$M^{ts}_{ij} - 2\lambda \to -\log(R_{ij}) + \log(|NB_n(x_i)|).$$
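The directed RA index is straightforward to compute from an adjacency matrix; a minimal sketch (ours) following the neighborhood conventions of Definition 4.7:

```python
import numpy as np

def ra_index(A, i, j):
    """Directed resource allocation index R_ij: sum of 1/out-degree over
    common neighbors x_k with edges i -> k and k -> j. Any such x_k has
    out-degree >= 1 (it has the edge to x_j), so the division is safe."""
    out_i = A[i, :] > 0            # out-neighborhood NB_n(x_i)
    in_j = A[:, j] > 0             # in-neighborhood NB^in_n(x_j)
    common = np.where(out_i & in_j)[0]
    out_deg = (A[common, :] > 0).sum(axis=1)
    return float(np.sum(1.0 / out_deg))
```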
Proof. Let $P_{ij}(t)$ be the probability of going from $x_i$ to $x_j$ in $t$ steps, and $H_{ij}(t)$ the probability of
not hitting before time $t$. Factoring the two-step hitting time yields
$$M^{ts}_{ij} = 2\lambda - \log(P_{ij}(2)) - \log\Big(1 + \sum_{t=3}^{\infty} \frac{P_{ij}(t)}{P_{ij}(2)}\,H_{ij}(t)\,e^{-\lambda(t-2)}\Big).$$
Let $k_{\max}$ be the maximal out-degree in $G_n$. The contribution of paths of length greater than 2
vanishes because $H_{ij}(t) \leq 1$ and $P_{ij}(t)/P_{ij}(2) \leq k_{\max}^2$, which is dominated by $e^{-\lambda}$ for $\lambda = \omega(\log(g_n^d n))$. Noting that $P_{ij}(2) = \frac{R_{ij}}{|NB_n(x_i)|}$ concludes. For full details see Theorem S9.1.

For edge identification within distance $\sigma_n(x)$, the RA index is robust even at noise level $q = o(g_n^{d/2})$.

⁵ Modifying the graph by changing fewer than $g_n^2/n$ edges does not affect the continuum limit of the random
graph, and therefore preserves the LTHT with parameter $\lambda = \Theta(g_n^2)$. While this weak bound allows on average
$o(1)$ noise edges per vertex, it does show that the LTHT is substantially more robust than shortest paths without
modification. See Section S8 for proofs.

⁶ The conditioning $T^{x_i}_{x_j,n} > 1$ is natural in link-prediction tasks, where only pairs of disconnected vertices
are queried. Empirically, we observe it is critical to performance (Figure 3).
Figure 2: The LTHT recovered deleted edges most consistently on a citation network.

Figure 3: The two-step LTHT (defined above Theorem 4.8) outperforms the others at word similarity estimation, including the basic log-LTHT.
Theorem 4.9. If $q_i = q = o(g_n^{d/2})$ for all $i$, then for any $\delta > 0$ there are $c_1$, $c_2$, and $h_n$ such that for any
$i, j$, with probability at least $1 - \delta$ we have
- $|x_i - x_j| < \min\{\sigma_n(x_i), \sigma_n(x_j)\}$ if $R_{ij} h_n < c_1$;
- $|x_i - x_j| > 2\max\{\sigma_n(x_i), \sigma_n(x_j)\}$ if $R_{ij} h_n > c_2$.

Proof. The minimal RA index bound follows from standard concentration arguments (see S9.2).
5 Link prediction tasks

We compare the LTHT against other baseline measures of vertex similarity: shortest path distance,
expected hitting time, number of common neighbors, and the RA index. A comprehensive evaluation
of these quasi-walk metrics was performed in [8], which showed that a metric equivalent to the LTHT
performed best. We consider two separate link prediction tasks on the largest connected component
of vertices of degree at least five, fixing $\lambda = 0.2$.⁷ The degree constraint is to ensure that local
methods using the number of common neighbors, such as the resource allocation index, do not have an
excessive number of ties. Code to generate the figures in this paper is contained in the supplement.

Citation network: The KDD 2003 challenge dataset [5] includes a directed, unweighted network
of e-print arXiv citations whose dense connected component has 11,042 vertices and 222,027 edges.
We use the same benchmark method as [9]: we delete a single edge and compare the similarity
of the deleted edge against a set of control pairs of vertices $i, j$ which do not share an edge. We
count the fraction of pairs on which each method ranks the deleted edge higher than the control pair.
We find that the LTHT is consistently best at this task (Figure 2).⁸
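For concreteness, here is a sketch (ours, not the paper's released code) of the single-edge-deletion benchmark just described, generic over any similarity function `score(A, i, j)` and assuming a 0/1 adjacency matrix:

```python
import numpy as np

def deletion_benchmark(A, score, n_trials=100, rng=None):
    """Hide one edge at a time; report how often `score(A, i, j)` ranks the
    hidden pair above a random non-edge control pair."""
    rng = rng or np.random.default_rng(0)
    edges = np.argwhere(A > 0)
    n, wins = A.shape[0], 0
    for _ in range(n_trials):
        i, j = edges[rng.integers(len(edges))]
        A[i, j] = 0                      # temporarily delete the edge
        while True:                      # sample a disconnected control pair
            u, v = rng.integers(n, size=2)
            if u != v and A[u, v] == 0:
                break
        wins += score(A, i, j) > score(A, u, v)
        A[i, j] = 1                      # restore
    return wins / n_trials
```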
Associative Thesaurus network: The Edinburgh Associative Thesaurus [7] is a network with a dense
connected component of 7,754 vertices and 246,609 edges, in which subjects were shown a set of ten
words and for each word were asked to respond with the first word that occurred to them. Each vertex
represents a word, and each edge is a weighted, directed edge where the weight from $x_i$ to $x_j$ is the
number of subjects who responded with word $x_j$ given word $x_i$.

We measure performance by whether strong associations with more than ten responses can be distinguished from weak ones with only one response. We find that the LTHT performs best and that
preventing one-step jumps is critical to performance, as predicted by Theorem 4.8 (Figure 3).
6 Conclusion

Our work has developed an asymptotic equivalence between hitting times for random walks on
graphs and those for diffusion processes. Using this, we have provided a short extension of the
proof for the divergence of expected hitting times, and derived a new consistent graph metric that
is theoretically principled, computationally tractable, and empirically successful at well-established
link prediction benchmarks. These results open the way for the development of other principled
quasi-walk metrics that can provably recover underlying latent similarities for spatial graphs.

⁷ Results are qualitatively identical when varying $\lambda$ from 0.1 to 1; see the supplement for details.
⁸ The two-step LTHT is not shown since it is equivalent to the LTHT in missing link prediction.
References
[1] M. Alamgir and U. von Luxburg. Phase transition in the family of p-resistances. In Advances in Neural Information Processing Systems, pages 379–387, 2011.
[2] M. Alamgir and U. von Luxburg. Shortest path distance in random k-nearest neighbor graphs. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 1031–1038, 2012.
[3] P. Chebotarev. A class of graph-geodetic distances generalizing the shortest-path and the resistance distances. Discrete Applied Mathematics, 159(5):295–302, 2011.
[4] D. A. Croydon and B. M. Hambly. Local limit theorems for sequences of simple random walks on graphs. Potential Analysis, 29(4):351–389, 2008.
[5] J. Gehrke, P. Ginsparg, and J. Kleinberg. Overview of the 2003 KDD Cup. ACM SIGKDD Explorations Newsletter, 5(2):149–151, 2003.
[6] T. B. Hashimoto, Y. Sun, and T. S. Jaakkola. Metric recovery from directed unweighted graphs. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, pages 342–350, 2015.
[7] G. R. Kiss, C. Armstrong, R. Milroy, and J. Piper. An associative thesaurus of English and its computer analysis. The Computer and Literary Studies, pages 153–165, 1973.
[8] I. Kivimäki, M. Shimbo, and M. Saerens. Developments in the theory of randomized shortest paths with a comparison of graph node distances. Physica A: Statistical Mechanics and its Applications, 393:600–616, 2014.
[9] L. Lü and T. Zhou. Link prediction in complex networks: A survey. Physica A: Statistical Mechanics and its Applications, 390(6):1150–1170, 2011.
[10] B. Øksendal. Stochastic Differential Equations: An Introduction with Applications. Universitext. Springer-Verlag, Berlin, sixth edition, 2003.
[11] P. Sarkar, D. Chakrabarti, and A. W. Moore. Theoretical justification of popular link prediction heuristics. In IJCAI Proceedings-International Joint Conference on Artificial Intelligence, volume 22, page 2722, 2011.
[12] P. Sarkar and A. W. Moore. A tractable approach to finding closest truncated-commute-time neighbors in large graphs. In Proc. UAI, 2007.
[13] B. Shaw and T. Jebara. Structure preserving embedding. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 937–944. ACM, 2009.
[14] S. T. Smith, E. K. Kao, K. D. Senne, G. Bernstein, and S. Philips. Bayesian discovery of threat networks. IEEE Transactions on Signal Processing, 62:5324–5338, 2014.
[15] D. W. Stroock and S. S. Varadhan. Multidimensional Diffusion Processes, volume 233. Springer Science & Business Media, 1979.
[16] A. Tahbaz-Salehi and A. Jadbabaie. A one-parameter family of distributed consensus algorithms with boundary: From shortest paths to mean hitting times. In Decision and Control, 2006 45th IEEE Conference on, pages 4664–4669. IEEE, 2006.
[17] D. Ting, L. Huang, and M. I. Jordan. An analysis of the convergence of graph Laplacians. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 1079–1086, 2010.
[18] U. von Luxburg, M. Belkin, and O. Bousquet. Consistency of spectral clustering. The Annals of Statistics, pages 555–586, 2008.
[19] U. von Luxburg, A. Radl, and M. Hein. Hitting and commute times in large random neighborhood graphs. Journal of Machine Learning Research, 15:1751–1798, 2014.
[20] M. Yazdani. Similarity Learning Over Large Collaborative Networks. PhD thesis, École Polytechnique Fédérale de Lausanne, 2013.
[21] L. Yen, M. Saerens, A. Mantrach, and M. Shimbo. A family of dissimilarity measures between nodes generalizing both the shortest-path and the commute-time distances. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785–793. ACM, 2008.
5,536 | 601 | Learning Curves, Model Selection and
Complexity of Neural Networks
Noboru Murata
Department of IVIathematical Engineering and Information Physics
University of Tokyo, Tokyo 113, JAPAN
E-mail: mura~sat.t.u-tokyo.ac.jp
Shuji Yoshizawa
Dept. Mech. Info.
University of Tokyo
ShUll-ichi Amari
Dept. Math. Eng. and Info. Phys.
University of Tokyo
Abstract
Learning curves show how a neural network is improved as the
number of t.raiuing examples increases and how it is related to
the network complexity. The present paper clarifies asymptotic
properties and their relation of t.wo learning curves, one concerning
the predictive loss or generalization loss and the other the training
loss. The result gives a natural definition of the complexity of a
neural network. Moreover, it provides a new criterion of model
selection.
1 INTRODUCTION
The learning curve shows how well the behavior of a neural network is improved as
the number of training examples increases and how it is related with the
complexity of neural networks. This provides us with a criterion for choosing an
adequate network in relation to the number of training examples. Some researchers
have attacked this problem by using statistical mechanical methods (see Levin et
al. [1990], Seung et al. [1991], etc.) and some by information theory and
algorithmic methods (see Baum and Haussler [1989], etc.). The present paper
elucidates asymptotic properties of the learning curve from the statistical point
of view, giving a new criterion for model selection.
2 STATEMENT OF THE PROBLEM
Let us consider a stochastic neural network, which is parameterized by a set of m
weights θ = (θ^1, ..., θ^m) and whose input-output relation is specified by a
conditional probability p(y|x, θ). In other words, for an input signal
x ∈ R^{n_in}, the probability distribution of the output y ∈ R^{n_out} is given
by p(y|x, θ).
A typical form of the stochastic neural network is as follows: let us consider a
multi-layered network f(x, θ), where θ is a set of m parameters
θ = (θ^1, ..., θ^m) and its components correspond to the weights and thresholds
of the network. When some input x is given, the network produces an output
    y = f(x, θ) + η(x),                                        (1)
where η(x) is noise whose conditional distribution is given by a(η|x). Then the
conditional distribution of the network, which specifies the input-output
relation, is given by
    p(y|x, θ) = a(y − f(x, θ) | x).                            (2)
We define a training sample ξ = {(x_1, y_1), ..., (x_t, y_t)} as a set of t
examples generated from the true conditional distribution q(y|x), where each x_i
is generated independently from a probability distribution r(x). We should note
that both r(x) and q(y|x) are unknown and we need not assume the faithfulness of
the model, that is, we do not assume that there exists a parameter θ* which
realizes the true distribution q(y|x) such that p(y|x, θ*) = q(y|x).
Our purpose is to find an appropriate parameter θ which realizes a good
approximation p(y|x, θ) to q(y|x). For this purpose, we use a loss function
    L(θ) = D(r; q‖p(θ)) + S(θ)                                 (3)
as a criterion to be minimized, where D(r; q‖p(θ)) represents a general
divergence measure between the two conditional probabilities q(y|x) and
p(y|x, θ) in the expectation form under the true input-output probability
    D(r; q‖p(θ)) = ∫ r(x) q(y|x) k(x, y, θ) dx dy,             (4)
and S(θ) is a regularization term to fit the smoothness condition of outputs
(Moody [1992]). So the loss function is rewritten in the expectation form
    L(θ) = ∫ r(x) q(y|x) d(x, y, θ) dx dy,
    d(x, y, θ) = k(x, y, θ) + S(θ),                            (5)
and d(x, y, θ) is called the pointwise loss function.
A typical case of the divergence D for the multi-layered network f(x, θ) with
noise is the squared error
    D(r; q‖p(θ)) = ∫ r(x) q(y|x) ‖y − f(x, θ)‖² dx dy.         (6)
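As a concrete illustration of the model (1) and the empirical version of the
squared-error divergence (6), the following sketch (a hypothetical example, not
part of the original paper; the network shape and noise level are arbitrary
choices) draws a sample and evaluates the empirical loss in NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, theta):
    """One-hidden-layer network f(x, theta); theta = (W1, b1, w2, b2)."""
    W1, b1, w2, b2 = theta
    return np.tanh(x @ W1 + b1) @ w2 + b2

# Draw a sample xi = {(x_i, y_i)} from r(x) q(y|x): here r = N(0, I) and
# q(y|x) adds Gaussian output noise eta ~ N(0, 0.1^2) to f(x, theta_true).
theta_true = (rng.normal(size=(3, 5)), rng.normal(size=5),
              rng.normal(size=5), 0.0)
t = 200
X = rng.normal(size=(t, 3))
y = f(X, theta_true) + 0.1 * rng.normal(size=t)

# Empirical squared-error loss (1/t) sum_i ||y_i - f(x_i, theta)||^2,
# the sample analogue of the divergence in equation (6).
def empirical_loss(theta):
    return np.mean((y - f(X, theta)) ** 2)

print(empirical_loss(theta_true))  # close to the noise variance 0.01
```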
The error function of an ordinary multi-layered network is of this form, and the
conventional Back-Propagation method is derived from this type of loss function.
Another typical case is the Kullback-Leibler divergence
    D(r; q‖p(θ)) = ∫ r(x) q(y|x) log [q(y|x) / p(y|x, θ)] dx dy.   (7)
The integration ∫ r(x) q(y|x) log q(y|x) dx dy is a constant called a conditional
entropy, and we usually use the following abbreviated form instead of the
previous divergence:
    D(r; q‖p(θ)) = − ∫ r(x) q(y|x) log p(y|x, θ) dx dy.        (8)
Next, we define an optimum of the parameter in the sense of the loss function
that we introduced. We denote by θ* the optimal parameter that minimizes the
loss function L(θ), that is,
    L(θ*) = min_θ L(θ),                                        (9)
and we regard p(y|x, θ*) as the best realization of the model.
When a training sample ξ is given, we can also define an empirical loss
function:
    L̂(θ) = D(r̂; q̂‖p(θ)) + S(θ),                              (10)
where r̂, q̂ are the empirical distributions given by the sample ξ, that is,
    D(r̂; q̂‖p(θ)) = (1/t) Σ_{i=1}^{t} k(x_i, y_i, θ),   (x_i, y_i) ∈ ξ.   (11)
In the practical case, we consider the empirical loss function and search for
the quasi-optimal parameter θ̂ defined by
    L̂(θ̂) = min_θ L̂(θ),                                       (12)
because the true distributions r(x) and q(y|x) are unknown and we can only use
the examples (x_i, y_i) observed from the true distribution r(x)q(y|x). We
should note that the quasi-optimal parameter θ̂ is a random variable depending
on the sample ξ, each element of which is chosen randomly.
The following lemma guarantees that we can use the empirical loss function
instead of the actual loss function when the number of examples t is large.
Lemma 1 If the number of examples t is large enough, it is shown that the
quasi-optimal parameter θ̂ is normally distributed around the optimal parameter
θ*, that is,
    θ̂ ∼ N(θ*, (1/t) Q⁻¹ G Q⁻¹),                               (13)
where
    G = ∫ r(x) q(y|x) ∇d(x, y, θ*) ∇d(x, y, θ*)ᵀ dx dy,        (14)
    Q = ∫ r(x) q(y|x) ∇∇d(x, y, θ*) dx dy,                     (15)
and ∇ denotes the differential operator with respect to θ.
This lemma is proved by using the usual statistical methods.
3 LEARNING PROCEDURE
In many cases, however, it is difficult to obtain the quasi-optimal parameter θ̂
by minimizing the equation (10) directly. We therefore often use a stochastic
descent method to get an approximation to the quasi-optimal parameter θ̂.
Definition 1 (Stochastic Descent Method) In each learning step, an example is
re-sampled from the given sample ξ randomly, and the following modification is
applied to the parameter θ_n at step n,
    θ_{n+1} = θ_n − ε ∇d(x_{i(n)}, y_{i(n)}, θ_n),             (16)
where ε is a positive value called a learning coefficient and
(x_{i(n)}, y_{i(n)}) is the re-sampled example at step n.
This is a sequential learning method, and the operation of random sampling from
ξ in each learning step is called the re-sampling plan. The parameter θ_n at
step n is a random variable as a function of the re-sampled sequence
ω = {(x_{i(1)}, y_{i(1)}), ..., (x_{i(n)}, y_{i(n)})}. However, if the initial
value of θ is appropriate (this assumption prevents being stuck in local minima)
and if the learning step n is large enough, it is shown that the learned
parameter θ_n is normally distributed around the quasi-optimal parameter θ̂.
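A minimal sketch of the stochastic descent update (16), assuming a linear model
and the halved squared error as the pointwise loss d (both hypothetical choices
for illustration; the learning coefficient and step counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed training sample xi of t examples for a linear model f(x, theta) = x . theta
t, p = 500, 4
theta_star = rng.normal(size=p)
X = rng.normal(size=(t, p))
y = X @ theta_star + 0.1 * rng.normal(size=t)

eps = 0.01           # learning coefficient epsilon
theta = np.zeros(p)  # theta_0
for n in range(20000):
    i = rng.integers(t)                    # re-sample one example from xi
    grad = (X[i] @ theta - y[i]) * X[i]    # grad of d = (y - x.theta)^2 / 2
    theta = theta - eps * grad             # the update of equation (16)

# For small eps and large n, theta_n fluctuates around the quasi-optimal
# (here: least squares) parameter with variance of order eps, as in Lemma 2.
theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.linalg.norm(theta - theta_hat))
```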
Lemma 2 If the learning step n is large enough and the learning coefficient ε is
small enough, the parameter θ_n is normally distributed asymptotically, that is,
    θ_n ∼ N(θ̂, εV),                                           (17)
where V satisfies the following relation
    Ĝ = Q̂V + VQ̂,                                              (18)
    Ĝ = (1/t) Σ_{i=1}^{t} ∇d(x_i, y_i, θ̂) ∇d(x_i, y_i, θ̂)ᵀ,
    Q̂ = (1/t) Σ_{i=1}^{t} ∇∇d(x_i, y_i, θ̂).
In the following discussion, we assume that n is large enough and ε is small
enough, and we denote the learned parameter by
    θ̃ = θ_n.                                                  (19)
The distribution of the random variable θ̃, therefore, can be regarded as the
normal distribution N(θ̂, εV).
4 LEARNING CURVES
It is important to evaluate the difference between the two quantities L(θ̃) and
L̂(θ̃). The quantity L(θ̃) is called the predictive loss or the generalization
error, which shows the average loss of the trained network when a novel example
is given. On the other hand, the quantity L̂(θ̃) is called the training loss or
the training error, which shows the average loss evaluated by the examples used
in training. Since these quantities depend on the sample ξ and the re-sampled
sequence ω, we take the expectation E and the variance Var with respect to the
sample ξ and the re-sampling sequence ω.
First, let us consider the predictive loss, which is the average loss of the
trained network when a new example (which does not belong to the sample ξ) is
given. This averaging operation is replaced by averaging over all the
input-output pairs, because the measure of the sample ξ is zero. Then the
predictive loss is written as
    L(θ̃) = ∫ r(x) q(y|x) d(x, y, θ̃) dx dy.                   (20)
From the properties of θ̂ and θ̃, we can prove the following important relations.
Theorem 1 The predictive loss asymptotically satisfies
    E[L(θ̃)] = L(θ*) + (1/2t) tr GQ⁻¹ + (ε/2) tr QV,           (21)
    Var[L(θ̃)] = (1/2t²) tr GQ⁻¹GQ⁻¹ + (ε²/2) tr QVQV + (ε/t) tr GV.   (22)
Roughly speaking, there exist two random values Y₁ and Y₂, and the predictive
loss can be written in the following form:
    L(θ̃) = L(θ*) + (1/2t) tr GQ⁻¹ + (ε/2) tr QV
            + (1/t)Y₁ + εY₂ + o_p(1/t) + o_p(ε),               (23)
where Y₁ and Y₂ satisfy
    E[Y₁] = 0,   E[Y₂] = 0,
    Var[Y₁] = (1/2) tr GQ⁻¹GQ⁻¹,   Var[Y₂] = (1/2) tr QVQV,
    Cov[Y₁, Y₂] = (1/2) tr GV.
Here E, Var and Cov denote the expectation, the variance and the covariance
respectively.
Next, we consider the training loss, i.e., the average loss evaluated by the
examples used in training. Just as we did in the previous theorem, we can get
the following relations.
Theorem 2 The training loss asymptotically satisfies
    E[L̂(θ̃)] = L(θ*) − (1/2t) tr GQ⁻¹ + (ε/2) tr QV,          (24)
    Var[L̂(θ̃)] = (1/t) ( ∫ r(x) q(y|x) d(x, y, θ*)² dx dy
                  − ( ∫ r(x) q(y|x) d(x, y, θ*) dx dy )² ).    (25)
Intuitively speaking, like the predictive loss, the training loss can be
expanded as
    L̂(θ̃) = L(θ*) − (1/2t) tr GQ⁻¹ + (ε/2) tr QV + (1/√t)Y₃ + ...,   (26)
where Y₃ satisfies
    E[Y₃] = 0,
    Var[Y₃] = ∫ r(x) q(y|x) d(x, y, θ*)² dx dy
              − ( ∫ r(x) q(y|x) d(x, y, θ*) dx dy )².
When we look at the two curves E[L(θ̃)] and E[L̂(θ̃)] as functions of t, they
are called learning curves, which represent the characteristics of learning. The
expectations of the predictive loss and the training loss look quite similar.
They differ in the sign of the 1/t term. As the learning coefficient ε
increases, the expectations E[L(θ̃)] and E[L̂(θ̃)] increase, but as the number
of examples t increases, the average predictive loss E[L(θ̃)] decreases and the
average training loss E[L̂(θ̃)] conversely increases. Moreover, their variances
are different in the order of t. The coefficients tr GQ⁻¹, tr QV, etc. are
calculated from the matrices G, Q and V, which reflect the architecture of the
network and the loss criterion to be minimized. We can consider these matrices
as representing the complexity of the network. In earlier work, Amari and Murata
[1991] introduced an effective complexity of the network, tr GQ⁻¹, by analogy to
Akaike's Information Criterion (AIC) (see Akaike [1974]).
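The opposite signs of the 1/t terms in (21) and (24) can be checked numerically.
The sketch below is a hypothetical illustration using ordinary least squares, so
that the quasi-optimal parameter is computed exactly (i.e. ε = 0); for this
model and the squared pointwise loss, tr GQ⁻¹ = 2pσ², so the gap between the
average predictive and training losses should be close to 2pσ²/t.

```python
import numpy as np

rng = np.random.default_rng(2)
p, sigma = 5, 0.5
theta_star = rng.normal(size=p)

def average_losses(t, trials=2000):
    pred, train = 0.0, 0.0
    for _ in range(trials):
        X = rng.normal(size=(t, p))
        y = X @ theta_star + sigma * rng.normal(size=t)
        theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]  # quasi-optimal parameter
        train += np.mean((y - X @ theta_hat) ** 2)
        # Fresh data approximates the predictive loss L(theta_hat)
        Xn = rng.normal(size=(t, p))
        yn = Xn @ theta_star + sigma * rng.normal(size=t)
        pred += np.mean((yn - Xn @ theta_hat) ** 2)
    return pred / trials, train / trials

for t in (20, 40, 80):
    L_pred, L_train = average_losses(t)
    # Theory predicts the gap E[L] - E[L_hat] = tr(G Q^{-1}) / t = 2 p sigma^2 / t
    print(t, L_pred - L_train, 2 * p * sigma**2 / t)
```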
5 AN APPLICATION FOR MODEL SELECTION
These results naturally lead us to a model selection criterion, which is like
the AIC criterion of statistical model selection and which is related to those
proposed by some researchers (see Murata et al. [1991], Moody [1992]). From the
previous relations, we can easily show the following relation
    L(θ̃) = L̂(θ̃) + (1/t) tr GQ⁻¹ + c,                         (27)
where c is a quantity of order 1/√t and common to all the networks of the same
architecture. We compare the abilities of two different networks, which have the
same architecture and are trained by the same sample, but differ in the number
of weights or neurons (see Fig. 1). We can use a quantity, NIC (Network
Information Criterion),
    NIC(θ̂) = L̂(θ̂) + (1/t) tr ĜQ̂⁻¹,                          (28)
where
    Ĝ = (1/t) Σ_{i=1}^{t} ∇d(x_i, y_i, θ̂) ∇d(x_i, y_i, θ̂)ᵀ,
    Q̂ = (1/t) Σ_{i=1}^{t} ∇∇d(x_i, y_i, θ̂),                   (29)
for selecting an optimal network model. Note that this quantity NIC is directly
calculable, since all elements of it, L̂(θ̂), Ĝ, Q̂, are given by summing over
the sample ξ. When we have two models M₁ and M₂, and the NIC of M₁ is smaller
than that of M₂, the predictive loss of M₁ is expected to be smaller than that
of M₂, so M₁ can be regarded as a better model in the sense of the loss
function.
This criterion cannot be used when we compare two networks of different
architectures, for example a multi-layered network and a radial basis expansion
network. This is because the value c of the order 1/√t term is common only to
two networks in which one is included in the other as a submodel. The criterion
is in general valid only for such a family of networks (see Fig. 2).
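The following sketch computes the NIC of (28) for a misspecified linear model
with the squared pointwise loss, for which Ĝ and Q̂ have closed forms. The
data-generating process and the nested feature subsets are hypothetical choices
for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sample from a nonlinear truth; candidate models are linear in nested
# subsets of the features, so the NIC comparison of Section 5 applies.
t = 400
x = rng.uniform(-1, 1, size=(t, 3))
y = np.sin(2 * x[:, 0]) + 0.3 * x[:, 1] + 0.2 * rng.normal(size=t)

def nic(X, y):
    """NIC = empirical loss + tr(G_hat Q_hat^{-1}) / t for d = (y - x.theta)^2."""
    theta = np.linalg.lstsq(X, y, rcond=None)[0]
    r = y - X @ theta
    grads = -2 * r[:, None] * X                  # per-example gradient of d
    G = grads.T @ grads / len(y)                 # G_hat of equation (29)
    Q = 2 * X.T @ X / len(y)                     # Q_hat of equation (29)
    return np.mean(r ** 2) + np.trace(G @ np.linalg.inv(Q)) / len(y)

X1 = x[:, :1]                    # smaller model M2: one feature
X2 = x[:, :2]                    # larger model M1: two features (M1 includes M2)
print(nic(X1, y), nic(X2, y))    # the smaller NIC indicates the better model
```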
6 CONCLUSIONS
In this paper, we show that there is a nice relation between the expectation of
the predictive loss and that of the training loss. This result naturally leads
us to a new model selection criterion.
We will consider the application of this result as an algorithm for
automatically changing the number of hidden units during learning as future
work.
References
H. Akaike. (1974) A new look at the statistical model identification. IEEE
Trans. AC, 19(6):716-723.
S. Amari. (1967) Theory of adaptive pattern classifiers. IEEE Trans. EC,
16(3):299-307.
S. Amari and N. Murata. (1991) Statistical theory of learning curves under
entropic loss criterion. Technical Report METR 91-12, University of Tokyo,
Tokyo, Japan.
E. B. Baum and D. Haussler. (1989) What size net gives valid generalization?
Neural Computation, 1:151-160.
E. Levin, N. Tishby, and S. A. Solla. (1990) A statistical approach to learning
and generalization in layered neural networks. Proc. of IEEE, 78(10):1568-1574.
J. E. Moody. (1992) The effective number of parameters: An analysis of
generalization and regularization in nonlinear learning systems. In J. E. Moody,
S. J. Hanson, and R. P. Lippmann, (eds.), Advances in Neural Information
Processing Systems 4. San Mateo, CA: Morgan Kaufmann.
N. Murata. (1992) Statistical asymptotic study on learning (In Japanese). PhD
thesis, University of Tokyo, Tokyo, Japan.
N. Murata, S. Yoshizawa, and S. Amari. (1991) A criterion for determining the
number of parameters in an artificial neural network model. In T. Kohonen et
al., (eds.), Artificial Neural Networks, 9-14. Holland: Elsevier Science
Publishers.
H. S. Seung, H. Sompolinsky, and N. Tishby. (1991) Statistical mechanics of
learning from examples II: quenched theory and unrealizable rules. Submitted to
Physical Review A.
[Figure: two hierarchical models and the true distribution q(y|x); an annotation
marks the origin of the large variance.]
Figure 1: Geometrical representation of hierarchical models: the solid lines
between q(y|x) and θ_i show predictive losses, and the dashed lines between
q(y|x) and θ̂_i show training losses. The large variance of the training loss
originates in the discrepancy of q(y|x) and q̂(y|x). When we estimate the
prediction loss from the training loss, the large variance still remains. But in
the case that the model M₁ includes the model M₂, this variance is common to the
two models, so we do not have to take care of it.
[Figure: two non-hierarchical models and the true distribution q(y|x).]
Figure 2: Geometrical representation of non-hierarchical models: the solid lines
between q(y|x) and θ_i show predictive losses, and the dashed lines between
q(y|x) and θ̂_i show training losses. The discrepancy of q̂(y|x) and q(y|x)
works differently on the two models M₁ and M₂ in estimating predictive losses.
5,537 | 6,010 | Robust Regression via Hard Thresholding
Kush Bhatia†, Prateek Jain†, and Purushottam Kar†,‡
†Microsoft Research, India
‡Indian Institute of Technology Kanpur, India
{t-kushb,prajain}@microsoft.com, purushot@cse.iitk.ac.in
Abstract
We study the problem of Robust Least Squares Regression (RLSR) where several
response variables can be adversarially corrupted. More specifically, for a data
matrix X ∈ R^{p×n} and an underlying model w*, the response vector is generated
as y = Xᵀw* + b where b ∈ Rⁿ is the corruption vector supported over at most
C·n coordinates. Existing exact recovery results for RLSR focus solely on
L1-penalty based convex formulations and impose relatively strict model
assumptions such as requiring the corruptions b to be selected independently of
X.
In this work, we study a simple hard-thresholding algorithm called TORRENT
which, under mild conditions on X, can recover w* exactly even if b corrupts the
response variables in an adversarial manner, i.e. both the support and entries
of b are selected adversarially after observing X and w*. Our results hold under
deterministic assumptions which are satisfied if X is sampled from any
sub-Gaussian distribution. Finally, unlike existing results that apply only to a
fixed w*, generated independently of X, our results are universal and hold for
any w* ∈ R^p.
Next, we propose gradient descent-based extensions of TORRENT that can scale
efficiently to large scale problems, such as high dimensional sparse recovery,
and prove similar recovery guarantees for these extensions. Empirically we find
TORRENT, and more so its extensions, offering significantly faster recovery than
the state-of-the-art L1 solvers. For instance, even on moderate-sized datasets
(with p = 50K) with around 40% corrupted responses, a variant of our proposed
method called TORRENT-HYB is more than 20× faster than the best L1 solver.
"If among these errors are some which appear too large to be admissible, then
those equations which produced these errors will be rejected, as coming from too
faulty experiments, and the unknowns will be determined by means of the other
equations, which will then give much smaller errors."
A. M. Legendre, On the Method of Least Squares. 1805.
1 Introduction
Robust Least Squares Regression (RLSR) addresses the problem of learning a
reliable set of regression coefficients in the presence of several arbitrary
corruptions in the response vector. Owing to the wide applicability of
regression, RLSR features as a critical component of several important
real-world applications in a variety of domains such as signal processing [1],
economics [2], computer vision [3, 4], and astronomy [2].
Given a data matrix X = [x₁, ..., xₙ] with n data points in R^p and the
corresponding response vector y ∈ Rⁿ, the goal of RLSR is to learn a ŵ such
that,
    (ŵ, Ŝ) = arg min_{w ∈ R^p, S ⊂ [n]: |S| ≥ (1−β)·n}  Σ_{i∈S} (y_i − x_iᵀw)².   (1)
∗This work was done while P.K. was a postdoctoral researcher at Microsoft
Research India.
That is, we wish to simultaneously determine the set of corruption-free points Ŝ
and also estimate the best model parameters over the set of clean points.
However, the optimization problem given above is non-convex (jointly in w and S)
in general and might not directly admit efficient solutions; a brute-force
rendering of it is sketched below. Indeed there exist reformulations of this
problem that are known to be NP-hard to optimize [1].
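To make the combinatorial nature of (1) concrete, the following sketch (a
hypothetical illustration; the function name and problem sizes are our own, and
the approach is tractable only for tiny n) solves (1) exactly by enumerating
candidate clean sets and running least squares on each.

```python
import numpy as np
from itertools import combinations

def rlsr_brute_force(X, y, beta):
    """Exact minimizer of (1) by enumerating subsets of size (1 - beta) * n."""
    p, n = X.shape
    k = int((1 - beta) * n)
    best = (np.inf, None, None)
    for S in combinations(range(n), k):
        idx = list(S)
        w = np.linalg.lstsq(X[:, idx].T, y[idx], rcond=None)[0]
        err = np.sum((y[idx] - X[:, idx].T @ w) ** 2)
        if err < best[0]:
            best = (err, w, idx)
    return best[1], best[2]

rng = np.random.default_rng(8)
p, n = 2, 10
w_star = rng.normal(size=p)
X = rng.normal(size=(p, n))
y = X.T @ w_star
y[:2] += 10.0                          # corrupt 2 of the 10 responses
w_hat, S_hat = rlsr_brute_force(X, y, beta=0.2)
print(np.linalg.norm(w_hat - w_star))  # ~ 0: the clean set is identified
```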
To address this problem, most existing methods with provable guarantees assume
that the observations are obtained from some generative model. A commonly
adopted model is the following
    y = Xᵀw* + b,                                              (2)
where w* ∈ R^p is the true model vector that we wish to estimate and b ∈ Rⁿ is
the corruption vector that can have arbitrary values. A common assumption is
that the corruption vector is sparsely supported, i.e. ‖b‖₀ ≤ α·n for some
α > 0.
Recently, [4] and [5] obtained a surprising result which shows that one can
recover w* exactly even when α ≲ 1, i.e., when almost all the points are
corrupted, by solving an L1-penalty based convex optimization problem:
min_{w,b} ‖w‖₁ + λ‖b‖₁, s.t. Xᵀw + b = y. However, these results require the
corruption vector b to be selected oblivious of X and w*. Moreover, the results
impose severe restrictions on the data distribution, requiring that the data be
either sampled from an isotropic Gaussian ensemble [4], or row-sampled from an
incoherent orthogonal matrix [5]. Finally, these results hold only for a fixed
w* and are not universal in general.
In contrast, [6] studied RLSR with less stringent assumptions, allowing
arbitrary corruptions in response variables as well as in the data matrix X, and
proposed a trimmed inner product based algorithm for the problem. However, their
recovery guarantees are significantly weaker. Firstly, they are able to recover
w* only up to an additive error α√p (or α√s if w* is s-sparse). Hence, they
require α ≲ 1/√p just to claim a non-trivial bound. Note that this amounts to
being able to tolerate only a vanishing fraction of corruptions. More
importantly, even with n → ∞ and extremely small α they are unable to guarantee
exact recovery of w*. A similar result was obtained by [7], albeit using a
sub-sampling based algorithm with stronger assumptions on b.
using a sub-sampling based algorithm with stronger assumptions on b.
In this paper, we focus on a simple and natural thresholding based algorithm for RLSR. At a high
level, at each step t, our algorithm alternately estimates an active set St of ?clean? points and then
updates the model to obtain wt+1 by minimizing the least squares error on the active set. This
intuitive algorithm seems to embody a long standing heuristic first proposed by Legendre [8] over
two centuries ago (see introductory quotation in this paper) that has been adopted in later literature
[9, 10] as well. However, to the best of our knowledge, this technique has never been rigorously
analyzed before in non-asymptotic settings, despite its appealing simplicity.
Our Contributions: The main contribution of this paper is an exact recovery guarantee for the
thresholding algorithm mentioned above that we refer to as T ORRENT-FC (see Algorithm 1). We
provide our guarantees in the model given in 2 where the corruptions b are selected adversarially but
restricted to have at most ? ? n non-zero entries where ? is a global constant dependent only on X 1 .
Under deterministic conditions on X, namely the subset strong convexity (SSC) and smoothness
(SSS) properties (see Definition 1), we guarantee that T ORRENT-FC converges at a geometric rate
and recovers w? exactly. We further show that these properties (SSC and SSS) are satisfied w.h.p.
if a) the data X is sampled from a sub-Gaussian distribution and, b) n ? p log p.
We would like to stress three key advantages of our result over the results of [4, 5]: a) we allow b
to be adversarial, i.e., both support and values of b to be selected adversarially based on X and w? ,
b) we make assumptions on data that are natural, as well as significantly less restrictive than what
existing methods make, and c) our analysis admits universal guarantees, i.e., holds for any w? .
We would also like to stress that while hard-thresholding based methods have been studied rigorously for the sparse-recovery problem [11, 12], hard-thresholding has not been studied formally
for the robust regression problem. [13] study soft-thresholding approaches to the robust regression
problem but without any formal guarantees. Moreover, the two problems are completely different
and hence techniques from sparse-recovery analysis do not extend to robust regression.
1
Note that for an adaptive adversary, as is the case in our work, recovery cannot be guaranteed for ? ? 1/2
e This
e ? w? ) for an adversarially chosen model w.
since the adversary can introduce corruptions as bi = x>
i (w
e thus making recovery impossible.
would make it impossible for any algorithm to distinguish between w? and w
Despite its simplicity, TORRENT-FC does not scale very well to datasets with
large p as it solves least squares problems at each iteration. We address this
issue by designing a gradient descent based algorithm (TORRENT-GD), and a hybrid
algorithm (TORRENT-HYB), both of which enjoy a geometric rate of convergence and
can recover w* under the model assumptions mentioned above. We also propose
extensions of TORRENT for the RLSR problem in the sparse regression setting
where p ≫ n but ‖w*‖₀ = s* ≪ p. Our algorithm TORRENT-HD is based on TORRENT-FC
but uses the Iterative Hard Thresholding (IHT) algorithm, a popular algorithm
for sparse regression. As before, we show that TORRENT-HD also converges
geometrically to w* if a) the corruption index α is less than some constant C,
b) X is sampled from a sub-Gaussian distribution and, c) n ≥ Ω(s* log p).
Finally, we experimentally evaluate existing L1-based algorithms and our hard
thresholding-based algorithms. The results demonstrate that our proposed
algorithms (TORRENT-(FC/GD/HYB)) can be significantly faster than the best L1
solvers, exhibit better recovery properties, as well as be more robust to dense
white noise. For instance, on a problem with 50K dimensions and 40% corruption,
TORRENT-HYB was found to be 20× faster than L1 solvers, as well as achieve lower
error rates.
Problem Formulation
Given a set of data points X = [x1 , x2 , . . . , xn ], where xi ? Rp and the corresponding response
vector y ? Rn , the goal is to recover a parameter vector w? which solves the RLSR problem (1).
We assume that the response vector y is generated using the following model:
y = y? + b + ?, where y? = X > w? .
Hence, in the above model, (1) reduces to estimating w? . We allow the model w? representing the
regressor, to be chosen in an adaptive manner after the data features have been generated.
The above model allows two kinds of perturbations to yi ? dense but bounded noise ?i (e.g. white
noise ?i ? N (0, ? 2 ), ? ? 0), as well as potentially unbounded corruptions bi ? to be introduced
by an adversary. The only requirement we enforce is that the gross corruptions be sparse. ? shall
represent the dense noise vector, for example ? ? N (0, ? 2 ?In?n ), and b, the corruption vector such
that kbk0 ? ??n for some corruption index ? > 0. We shall use the notation S? = supp(b) ? [n] to
denote the set of ?clean? points, i.e. points that have not faced unbounded corruptions. We consider
adaptive adversaries that are able to view the generated data points xi , as well as the clean responses
yi? and dense noise values ?i before deciding which locations to corrupt and by what amount.
We denote the unit sphere in p dimensions using S p?1 . For any ? ? (0, 1], we let S? =
{S ? [n] : |S| = ? ? n} denote the set of all subsets of size ? ? n. For any set S, we let XS :=
[xi ]i?S ? Rp?|S| denote the matrix whose columns are composed of points in that set. Also, for
any vector v ? Rn we use the notation vS to denote the |S|-dimensional vector consisting of those
components that are in S. We use ?min (X) and ?max (X) to denote, respectively, the smallest and
largest eigenvalues of a square symmetric matrix X. We now introduce two properties, namely,
Subset Strong Convexity and Subset Strong Smoothness, which are key to our analyses.
Definition 1 (SSC and SSS Properties). A matrix X ? Rp?n satisfies the Subset Strong Convexity
Property (resp. Subset Strong Smoothness Property) at level ? with strong convexity constant ??
(resp. strong smoothness constant ?? ) if the following holds:
?? ? min ?min (XS XS> ) ? max ?max (XS XS> ) ? ?? .
S?S?
S?S?
Remark 1. We note that the uniformity enforced in the definitions of the SSC and SSS properties is
not for the sake of convenience but rather a necessity. Indeed, a uniform bound is required in face of
an adversary which can perform corruptions after data and response variables have been generated,
and choose to corrupt precisely that set of points where the SSC and SSS parameters are the worst.
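For intuition, the SSC/SSS constants of Definition 1 can be computed by brute
force on toy problems. The sketch below is a hypothetical illustration (the
helper name is our own, and it is feasible only for very small n, since it
enumerates all subsets in S_γ):

```python
import numpy as np
from itertools import combinations

def ssc_sss(X, gamma):
    """Brute-force lambda_gamma and Lambda_gamma of Definition 1 (tiny n only)."""
    p, n = X.shape
    k = int(gamma * n)
    lam, Lam = np.inf, -np.inf
    for S in combinations(range(n), k):
        idx = list(S)
        eigs = np.linalg.eigvalsh(X[:, idx] @ X[:, idx].T)
        lam, Lam = min(lam, eigs[0]), max(Lam, eigs[-1])
    return lam, Lam

rng = np.random.default_rng(4)
X = rng.normal(size=(2, 12))              # p = 2, n = 12
lam_075, _ = ssc_sss(X, gamma=0.75)       # lambda_{1-beta} for beta = 0.25
_, Lam_025 = ssc_sss(X, gamma=0.25)       # Lambda_beta
# The quantity below is the contraction factor appearing in Theorem 3
print((1 + np.sqrt(2)) * Lam_025 / lam_075)
```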
3 TORRENT: Thresholding Operator-based Robust Regression Method
We now present TORRENT, a Thresholding Operator-based Robust regrEssioN meThod
for performing robust regression at scale. Key to our algorithms is the Hard
Thresholding Operator, which we define below.
Algorithm 1 TORRENT: Thresholding Operator-based Robust RegrEssioN meThod
Input: Training data {x_i, y_i}, i = 1...n, step length η, thresholding
parameter β, tolerance ε
1: w⁰ ← 0, S₀ = [n], t ← 0, r⁰ ← y
2: while ‖r^t_{S_t}‖₂ > ε do
3:   w^{t+1} ← UPDATE(w^t, S_t, η, r^t, S_{t−1})
4:   r_i^{t+1} ← y_i − ⟨w^{t+1}, x_i⟩
5:   S_{t+1} ← HT(r^{t+1}, (1 − β)n)
6:   t ← t + 1
7: end while
8: return w^t

Algorithm 2 UPDATE TORRENT-FC
Input: Current model w, current active set S
1: return arg min_w Σ_{i∈S} (y_i − ⟨w, x_i⟩)²

Algorithm 3 UPDATE TORRENT-GD
Input: Current model w, current active set S, step size η
1: g ← X_S (X_Sᵀ w − y_S)
2: return w − η·g

Algorithm 4 UPDATE TORRENT-HYB
Input: Current model w, current active set S, step size η, current residuals r,
previous active set S′
1: // Use the GD update if the active set S is changing a lot
2: if |S \ S′| > ∆ then
3:   w′ ← UPDATE-GD(w, S, η, r, S′)
4: else
5:   // If stable, use the FC update
6:   w′ ← UPDATE-FC(w, S)
7: end if
8: return w′
Definition 2 (Hard Thresholding Operator). For any vector v ∈ Rⁿ, let σ_v ∈ S_n
be the permutation that orders elements of v in ascending order of their
magnitudes, i.e. |v_{σ_v(1)}| ≤ |v_{σ_v(2)}| ≤ ... ≤ |v_{σ_v(n)}|. Then for any
k ≤ n, we define the hard thresholding operator as
    HT(v; k) = { i ∈ [n] : σ_v⁻¹(i) ≤ k }.
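A direct NumPy rendering of this operator (a minimal sketch; using
np.argpartition avoids a full sort):

```python
import numpy as np

def hard_threshold(v, k):
    """HT(v; k): indices of the k smallest-magnitude entries of v (Definition 2)."""
    return np.argpartition(np.abs(v), k - 1)[:k]

r = np.array([0.1, -3.0, 0.2, 5.0, -0.05])
print(sorted(hard_threshold(r, 3)))  # -> [0, 2, 4]
```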
Using this operator, we present our algorithm TORRENT (Algorithm 1) for robust
regression. TORRENT follows a most natural iterative strategy of, alternately,
estimating an active set of points which have the least residual error on the
current regressor, and then updating the regressor to provide a better fit on
this active set. We offer three variants of our algorithm, based on how
aggressively the algorithm tries to fit the regressor to the current active set.
We first propose a fully corrective algorithm TORRENT-FC (Algorithm 2) that
performs a fully corrective least squares regression step in an effort to
minimize the regression error on the active set. This algorithm makes
significant progress in each step, but at a cost of more expensive updates. To
address this, we then propose a milder, gradient descent-based variant
TORRENT-GD (Algorithm 3) that performs a much cheaper update of taking a single
step in the direction of the gradient of the objective function on the active
set. This reduces the regression error on the active set but does not minimize
it. This turns out to be beneficial in situations where dense noise is present
along with sparse corruptions, since it prevents the algorithm from overfitting
to the current active set.
Both the algorithms proposed above have their pros and cons: the FC algorithm
provides significant improvements with each step, but is expensive to execute,
whereas the GD variant, although efficient in executing each step, offers slower
progress. To get the best of both these algorithms, we propose a third, hybrid
variant TORRENT-HYB (Algorithm 4) that adaptively selects either the FC or the
GD update depending on whether the active set is stable across iterations or
not; a sketch of these updates in code is given below.
In the next section we show that this hard thresholding-based strategy offers a
linear convergence rate for the algorithm in all its three variations. We shall
also demonstrate the applicability of this technique to high dimensional sparse
recovery settings in a subsequent section.
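As promised above, here is a compact sketch of Algorithms 1-3 in NumPy. This is
our illustrative rendering, not the authors' code; the stopping tolerance,
iteration cap and corruption scale in the usage example are placeholder choices.

```python
import numpy as np

def hard_threshold(v, k):
    return np.argpartition(np.abs(v), k - 1)[:k]

def torrent(X, y, beta, step=None, tol=1e-8, max_iter=100, variant="FC"):
    """X: p x n data matrix, y: responses, beta: thresholding parameter (>= alpha).
    step is only needed for the GD variant."""
    p, n = X.shape
    k = int((1 - beta) * n)            # size of the active set
    w = np.zeros(p)
    S = np.arange(n)                   # S_0 = [n]
    for _ in range(max_iter):
        if variant == "FC":            # fully corrective: least squares on S
            w = np.linalg.lstsq(X[:, S].T, y[S], rcond=None)[0]
        else:                          # GD: one gradient step on the active set
            g = X[:, S] @ (X[:, S].T @ w - y[S])
            w = w - step * g
        r = y - X.T @ w                # residuals on all points
        S = hard_threshold(r, k)       # new active set: k smallest residuals
        if np.linalg.norm(r[S]) <= tol:
            break
    return w

# Usage on synthetic corrupted data
rng = np.random.default_rng(5)
p, n, alpha = 20, 1000, 0.3
w_star = rng.normal(size=p); w_star /= np.linalg.norm(w_star)
X = rng.normal(size=(p, n))
y = X.T @ w_star
bad = rng.choice(n, int(alpha * n), replace=False)
y[bad] += rng.uniform(-5, 5, size=bad.size) * np.abs(y).max()
w = torrent(X, y, beta=0.35)
print(np.linalg.norm(w - w_star))      # ~ 0: exact recovery despite corruptions
```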
4 Convergence Guarantees
For the sake of ease of exposition, we will first present our convergence
analyses for cases where dense noise is not present, i.e. y = Xᵀw* + b, and will
handle cases with dense noise and sparse corruptions later. We first analyze the
fully corrective TORRENT-FC algorithm. The convergence proof in this case relies
on the optimality of the two steps carried out by the algorithm: the fully
corrective step that selects the best regressor on the active set, and the hard
thresholding step that discovers a new active set by selecting points with the
least residual error on the current regressor.
Theorem 3. Let X = [x₁, ..., xₙ] ∈ R^{p×n} be the given data matrix and
y = Xᵀw* + b be the corrupted output with ‖b‖₀ ≤ α·n. Let Algorithm 2 be
executed on this data with the thresholding parameter set to β ≥ α. Let Σ₀ be an
invertible matrix such that X̃ = Σ₀^{-1/2}X satisfies the SSC and SSS properties
at level γ with constants λ_γ and Λ_γ respectively (see Definition 1). If the
data satisfies (1 + √2)·Λ_β / λ_{1−β} < 1, then after
t = O(log(‖b‖₂ / (√n·ε))) iterations, Algorithm 2 obtains an ε-accurate
solution w^t, i.e. ‖w^t − w*‖₂ ≤ ε.
Proof (Sketch). Let r^t = y − Xᵀw^t be the vector of residuals at time t and
C_t = X_{S_t} X_{S_t}ᵀ. Also let S* be the set of uncorrupted points, so that
b_{S*} = 0. The fully corrective step ensures that
    w^{t+1} = C_t⁻¹ X_{S_t} y_{S_t} = C_t⁻¹ X_{S_t} (X_{S_t}ᵀ w* + b_{S_t})
            = w* + C_t⁻¹ X_{S_t} b_{S_t},
whereas the hard thresholding step ensures that
‖r^{t+1}_{S_{t+1}}‖₂² ≤ ‖r^{t+1}_{S*}‖₂². Combining the two gives us
    ‖b_{S_{t+1}}‖₂² ≤ ‖X_{S*\S_{t+1}}ᵀ C_t⁻¹ X_{S_t} b_{S_t}‖₂²
                      + 2 b_{S_{t+1}}ᵀ X_{S_{t+1}}ᵀ C_t⁻¹ X_{S_t} b_{S_t}
      = ‖X̃_{S*\S_{t+1}}ᵀ (X̃_{S_t} X̃_{S_t}ᵀ)⁻¹ X̃_{S_t} b_{S_t}‖₂²
        + 2 b_{S_{t+1}}ᵀ X̃_{S_{t+1}}ᵀ (X̃_{S_t} X̃_{S_t}ᵀ)⁻¹ X̃_{S_t} b_{S_t}
      ≤ (Λ_β² / λ_{1−β}²) ‖b_{S_t}‖₂²
        + 2 (Λ_β / λ_{1−β}) ‖b_{S_t}‖₂ ‖b_{S_{t+1}}‖₂,
where the equality uses X̃ = Σ₀^{-1/2}X together with
X_Sᵀ C_t⁻¹ X_{S'} = X̃_Sᵀ (X̃_{S_t} X̃_{S_t}ᵀ)⁻¹ X̃_{S'}, and the last inequality
follows from the SSC and SSS properties together with
‖b_{S_t}‖₀ ≤ ‖b‖₀ ≤ α·n and |S*\S_{t+1}| ≤ β·n. Solving this quadratic
inequality in ‖b_{S_{t+1}}‖₂ and performing other manipulations gives us the
claimed result.
Theorem 3 relies on a deterministic (fixed design) assumption, specifically
(1 + √2)·Λ_β / λ_{1−β} < 1, in order to guarantee convergence. We can show that
a large class of random designs, including Gaussian and sub-Gaussian designs,
actually satisfies this requirement. That is to say, data generated from these
distributions satisfy the SSC and SSS conditions such that
(1 + √2)·Λ_β / λ_{1−β} < 1 with high probability. Theorem 4 explicates this for
the class of Gaussian designs.
Theorem 4. Let X = [x₁, ..., xₙ] ∈ R^{p×n} be the given data matrix with each
x_i ∼ N(0, Σ). Let y = Xᵀw* + b and ‖b‖₀ ≤ α·n. Also, let α ≤ β < 1/65 and
n ≥ Ω(p + log(1/δ)). Then, with probability at least 1 − δ, the data satisfies
(1 + √2)·Λ_β / λ_{1−β} < 9/10. More specifically, after
T ≥ 10 log(‖b‖₂ / (√n·ε)) iterations of Algorithm 1 with the thresholding
parameter set to β, we have ‖w^T − w*‖₂ ≤ ε.
Remark 2. Note that Theorem 4 provides rates that are independent of the
condition number λ_max(Σ)/λ_min(Σ) of the distribution. We also note that
results similar to Theorem 4 can be proven for the larger class of sub-Gaussian
distributions. We refer the reader to Section G for the same.
Remark 3. We remind the reader that our analyses can readily accommodate dense
noise in addition to sparse unbounded corruptions. We direct the reader to
Appendix A, which presents convergence proofs for our algorithms in these
settings.
Remark 4. We would like to point out that the design requirements made by our
analyses are very mild when compared to existing literature. Indeed, the work of
[4] assumes the Bouquet Model where distributions are restricted to be isotropic
Gaussians, whereas the work of [5] assumes a more stringent model of
sub-orthonormal matrices, something that even Gaussian designs do not satisfy.
Our analyses, on the other hand, hold for the general class of sub-Gaussian
distributions.
We now analyze the TORRENT-GD algorithm, which performs cheaper, gradient-style
updates on the active set. We will show that this method nevertheless enjoys a
linear rate of convergence.
Theorem 5. Let the data settings be as stated in Theorem 3 and let Algorithm 3
be executed on this data with the thresholding parameter set to β ≥ α and the
step length set to η = 1/Λ_{1−β}. If the data satisfies
max{ √(Λ_β / Λ_{1−β}), 1 − λ_{1−β}/Λ_{1−β} } ≤ 1/4, then after
t = O(log(‖b‖₂ / (√n·ε))) iterations, Algorithm 1 obtains an ε-accurate
solution w^t, i.e. ‖w^t − w*‖₂ ≤ ε.
Similar to TORRENT-FC, the assumptions made by the TORRENT-GD algorithm are also
satisfied by the class of sub-Gaussian distributions. The proof of Theorem 5,
given in Appendix D, details these arguments. Given the convergence analyses for
TORRENT-FC and GD, we now move on to provide a convergence analysis for the
hybrid TORRENT-HYB algorithm which interleaves FC and GD steps. Since the exact
interleaving adopted by the algorithm depends on the data, and is not known in
advance, this poses a problem. We address this problem by giving below a uniform
convergence guarantee, one that applies to every interleaving of the FC and GD
update steps.
Theorem 6. Suppose Algorithm 4 is executed on data that allows Algorithms 2 and
3 a convergence rate of η_FC and η_GD respectively. Suppose we have
2·η_FC·η_GD < 1. Then for any interleavings of the FC and GD steps that the
policy may enforce, after t = O(log(‖b‖₂ / (√n·ε))) iterations, Algorithm 4
ensures an ε-optimal solution, i.e. ‖w^t − w*‖ ≤ ε.
We point out to the reader that the assumption made by Theorem 6, i.e.
2·η_FC·η_GD < 1, is readily satisfied by random sub-Gaussian designs, albeit at
the cost of reducing the noise tolerance limit. As we shall see, TORRENT-HYB
offers attractive convergence properties, merging the fast convergence rates of
the FC step, as well as the speed and protection against overfitting provided by
the GD step.
5 High-dimensional Robust Regression
In this section, we extend our approach to the robust high-dimensional sparse
recovery setting. As before, we assume that the response vector y is obtained as
y = Xᵀw* + b, where ‖b‖₀ ≤ α·n. However, this time, we also assume that w* is
s*-sparse, i.e. ‖w*‖₀ ≤ s*. As before, we shall neglect white/dense noise for
the sake of simplicity. We reiterate that it is not possible to use existing
results from sparse recovery (such as [11, 12]) directly to solve this problem.
Our objective would be to recover a sparse model ŵ so that ‖ŵ − w*‖₂ ≤ ε. The
challenge here is to forgo a sample complexity of n ≳ p and instead, perform
recovery with n ∼ s* log p samples alone. For this setting, we modify the FC
update step of the TORRENT-FC method to the following:
    w^{t+1} ← arg min_{‖w‖₀ ≤ s} Σ_{i∈S_t} (y_i − ⟨w, x_i⟩)²,   (3)
for some target sparsity level s ≪ p. We refer to this modified algorithm as
TORRENT-HD. Assuming X satisfies the RSC/RSS properties (defined below), (3) can
be solved efficiently using results from sparse recovery (for example the IHT
algorithm [11, 14] analyzed in [12]).
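A sketch of how (3) can be solved with IHT over the coordinates of w follows;
this is our illustrative rendering of the inner solver, not the authors'
implementation, and the step size and iteration count are heuristic choices.

```python
import numpy as np

def iht_least_squares(X, y, s, iters=200, eta=None):
    """Approximately solve min_{||w||_0 <= s} sum_i (y_i - <w, x_i>)^2 by IHT.
    X: p x n (columns are the active points), y: their responses."""
    p, n = X.shape
    if eta is None:
        eta = 1.0 / np.linalg.norm(X, 2) ** 2    # step from largest singular value
    w = np.zeros(p)
    for _ in range(iters):
        w = w + eta * X @ (y - X.T @ w)          # gradient step on the LS objective
        keep = np.argpartition(np.abs(w), p - s)[p - s:]
        mask = np.zeros(p, dtype=bool); mask[keep] = True
        w[~mask] = 0.0                           # project onto s-sparse vectors
    return w

# In TORRENT-HD this would replace the fully corrective step: at iteration t,
# w_{t+1} = iht_least_squares(X[:, S_t], y[S_t], s).
rng = np.random.default_rng(6)
p, n, s = 100, 60, 5
w_star = np.zeros(p); w_star[rng.choice(p, s, replace=False)] = rng.normal(size=s)
X = rng.normal(size=(p, n))
y = X.T @ w_star
print(np.linalg.norm(iht_least_squares(X, y, s) - w_star))  # should be small
```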
Definition 7 (RSC and RSS Properties). A matrix X ∈ R^{p×n} will be said to
satisfy the Restricted Strong Convexity Property (resp. Restricted Strong
Smoothness Property) at level s = s₁ + s₂ with strong convexity constant
α_{s₁+s₂} (resp. strong smoothness constant L_{s₁+s₂}) if the following holds
for all ‖w₁‖₀ ≤ s₁ and ‖w₂‖₀ ≤ s₂:
    α_s ‖w₁ − w₂‖₂² ≤ ‖Xᵀ(w₁ − w₂)‖₂² ≤ L_s ‖w₁ − w₂‖₂².
For our results, we shall require the subset versions of both these properties.
Definition 8 (SRSC and SRSS Properties). A matrix X ∈ R^{p×n} will be said to
satisfy the Subset Restricted Strong Convexity (resp. Subset Restricted Strong
Smoothness) Property at level (γ, s) with strong convexity constant α_{(γ,s)}
(resp. strong smoothness constant L_{(γ,s)}) if for all subsets S ∈ S_γ, the
matrix X_S satisfies the RSC (resp. RSS) property at level s with constant α_s
(resp. L_s).
We now state the convergence result for the TORRENT-HD algorithm.
Theorem 9. Let X ∈ R^{p×n} be the given data matrix and y = Xᵀw* + b be the
corrupted output with ‖w*‖₀ ≤ s* and ‖b‖₀ ≤ α·n. Let Σ₀ be an invertible matrix
such that Σ₀^{-1/2}X satisfies the SRSC and SRSS properties at level
(γ, 2s + s*) with constants α_{(γ,2s+s*)} and L_{(γ,2s+s*)} respectively (see
Definition 8). Let Algorithm 2 be executed on this data with the TORRENT-HD
update, thresholding parameter set to β ≥ α, and
s ≥ 32 (L_{(1−β,2s+s*)} / α_{(1−β,2s+s*)})².
[Figure 1: panels (a) TORRENT-FC (p = 50, σ = 0), (b) TORRENT-HYB (p = 50,
σ = 0) and (c) L1-DALM (p = 50, σ = 0), plotting corrupted points against total
points; panel (d) plots ‖w − w*‖₂ against the magnitude of corruption for
TORRENT-FC, TORRENT-HYB and L1-DALM with p = 500, n = 2000, α = 0.25, σ = 0.2.]
Figure 1: (a), (b) and (c) Phase-transition diagrams depicting the recovery
properties of the TORRENT-FC, TORRENT-HYB and L1 algorithms. The colors red and
blue represent a high and low probability of success resp. A method is
considered successful in an experiment if it recovers w* up to a 10⁻⁴ relative
error. Both variants of TORRENT can be seen to recover w* in the presence of a
larger number of corruptions than the L1 solver. (d) Variation in recovery error
with the magnitude of corruption. As the corruption is increased, TORRENT-FC and
TORRENT-HYB show improved performance while the problem becomes more difficult
for the L1 solver.
If X also satisfies 4·L_{(β,s+s*)} / α_{(1−β,s+s*)} < 1, then after
t = O(log(‖b‖₂ / (√n·ε))) iterations, Algorithm 2 obtains an ε-accurate
solution w^t, i.e. ‖w^t − w*‖₂ ≤ ε.
In particular, if X is sampled from a Gaussian distribution N(0, Σ) and
n ≥ Ω( (λ_max(Σ)/λ_min(Σ)) · s* log p ), then for all values of α ≤ β < 1/65,
we can guarantee ‖w^t − w*‖₂ ≤ ε after t = O(log(‖b‖₂ / (√n·ε))) iterations of
the algorithm (w.p. ≥ 1 − 1/n¹⁰).
Remark 5. The sample complexity required by Theorem 9 is identical to the one
required by analyses for high dimensional sparse recovery [12], save constants.
Also note that TORRENT-HD can tolerate the same corruption index as TORRENT-FC.
6 Experiments
Several numerical simulations were carried out on linear regression problems in
low-dimensional, as well as sparse high-dimensional settings. The experiments
show that TORRENT not only offers statistically better recovery properties as
compared to L1-style approaches, but that it can be more than an order of
magnitude faster as well.
Data: For the low dimensional setting, the regressor w* ∈ R^p was chosen to be a
random unit norm vector. Data was sampled as x_i ∼ N(0, I_p) and response
variables were generated as y_i* = ⟨w*, x_i⟩. The set of corrupted points S̄*
was selected as a uniformly random (α·n)-sized subset of [n] and the corruptions
were set to b_i ∼ U(−5‖y*‖∞, 5‖y*‖∞) for i ∈ S̄*. The corrupted responses were
then generated as y_i = y_i* + b_i + ε_i where ε_i ∼ N(0, σ²). For the sparse
high-dimensional setting, supp(w*) was selected to be a random s*-sized subset
of [p]. Phase-transition diagrams (Figure 1) were generated by repeating each
experiment 100 times. For all other plots, each experiment was run over 20
random instances of the data and the plots were drawn to depict the mean
results; a sketch of this data-generation protocol is given below.
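A sketch of this data-generation protocol (our rendering of the description
above; the seed and the helper name are hypothetical):

```python
import numpy as np

def make_rlsr_data(n, p, alpha, sigma, rng):
    w_star = rng.normal(size=p); w_star /= np.linalg.norm(w_star)   # random unit norm
    X = rng.normal(size=(p, n))                                     # x_i ~ N(0, I_p)
    y_true = X.T @ w_star                                           # y_i* = <w*, x_i>
    y = y_true + sigma * rng.normal(size=n)                         # dense noise
    S_bad = rng.choice(n, int(alpha * n), replace=False)            # corrupted set
    scale = np.abs(y_true).max()                                    # ||y*||_inf
    y[S_bad] += rng.uniform(-5 * scale, 5 * scale, size=S_bad.size) # corruptions b_i
    return X, y, w_star, S_bad

X, y, w_star, S_bad = make_rlsr_data(n=2000, p=500, alpha=0.25, sigma=0.2,
                                     rng=np.random.default_rng(7))
```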
Algorithms: We compared various variants of our algorithm TORRENT to the
regularized L1 algorithm for robust regression [4, 5]. Note that the L1 problem
can be written as min_z ‖z‖₁ s.t. Az = y, where A = [Xᵀ  λ⁻¹·I_{n×n}] and
z = [wᵀ  λbᵀ]ᵀ. We used the Dual Augmented Lagrange Multiplier (DALM) L1 solver
implemented by [15] to solve the L1 problem. We ran a finely tuned grid search
over the λ parameter for the L1 solver and quoted the best results obtained from
the search. In the low-dimensional setting, we compared the recovery properties
of TORRENT-FC (Algorithm 2) and TORRENT-HYB (Algorithm 4) with the DALM-L1
solver, while for the high-dimensional case, we compared TORRENT-HD against the
DALM-L1 solver. Both the L1 solver, as well as our methods, were implemented in
Matlab and were run on a single core 2.4GHz machine with 8 GB RAM.
Choice of L1-solver: An extensive comparative study of various L1 minimization
algorithms was performed by [15], who showed that the DALM and Homotopy solvers
outperform other counterparts both in terms of recovery properties and timings.
We extended their study to our observation model and found the DALM solver to be
significantly better than the other L1 solvers; see Figure 3 in the appendix. We
also observed, similar to [15], that the Approximate Message Passing (AMP)
solver diverges on our problem as the input matrix to the L1 solver is a
non-Gaussian matrix A = [Xᵀ  λ⁻¹·I].
[Figure 2: panel (a) p = 500, n = 2000, σ = 0.2: ‖w − w*‖₂ against the fraction
of corrupted points for TORRENT-FC, TORRENT-HYB and L1-DALM; panel (b) p = 300,
n = 1800, α = 0.41, κ = 5: recovery error against time (in sec) for TORRENT-FC,
TORRENT-HYB, TORRENT-GD and L1-DALM; panel (c) p = 10000, n = 2303, s = 50:
recovery error against the fraction of corrupted points for TORRENT-HD and
L1-DALM; panel (d) p = 50000, n = 5410, α = 0.4, s = 100: recovery error against
time (in sec) for TORRENT-HD and L1-DALM.]
Figure 2: In low-dimensional (a, b), as well as sparse high-dimensional (c, d)
settings, TORRENT offers better recovery as the fraction of corrupted points α
is varied. In terms of runtime, TORRENT is an order of magnitude faster than L1
solvers in both settings. In the low-dimensional setting, TORRENT-HYB is the
fastest of all the variants.
Evaluation Metric: We measure the performance of various algorithms using the
standard L2 error: r_ŵ = ‖ŵ − w*‖₂. For the phase-transition plots (Figure 1),
we deemed an algorithm successful on an instance if it obtained a model ŵ with
error r_ŵ < 10⁻⁴·‖w*‖₂. We also measured the CPU time required by each of the
methods, so as to compare their scalability.
6.1 Low Dimensional Results
Recovery Property: The phase-transition plots presented in Figure 1 represent
our recovery experiments in graphical form. Both the fully-corrective and hybrid
variants of TORRENT show better recovery properties than the L1-minimization
approach, indicated by the number of runs in which the algorithm was able to
correctly recover w* out of 100 runs. Figure 2 shows the variation in recovery
error as a function of α in the presence of white noise and exhibits the
superiority of TORRENT-FC and TORRENT-HYB over L1-DALM. Here again, TORRENT-FC
and TORRENT-HYB achieve significantly lower recovery error than L1-DALM for all
α ≤ 0.5. Figure 3 in the appendix shows that the variations of ‖ŵ − w*‖₂ with
varying p, α and n follow a similar trend, with TORRENT having significantly
lower recovery error in comparison to the L1 approach.
Figure 1(d) brings out an interesting trend in the recovery property of TORRENT.
As we increase the magnitude of corruption from U(−‖y*‖∞, ‖y*‖∞) to
U(−20‖y*‖∞, 20‖y*‖∞), the recovery error for TORRENT-HYB and TORRENT-FC
decreases, as expected, since it becomes easier to identify the grossly
corrupted points. However, the L1 solver was unable to exploit this observation
and in fact exhibited an increase in recovery error.
Run Time: In order to ascertain the recovery guarantees for TORRENT on
ill-conditioned problems, we performed an experiment where data was sampled as
x_i ∼ N(0, Σ) with diag(Σ) ∼ U(0, 5). Figure 2 plots the recovery error as a
function of time. TORRENT-HYB was able to correctly recover w* about 50× faster
than L1-DALM, which spent a considerable amount of time pre-processing the data
matrix X. Even after allowing the L1 algorithm to run for 500 iterations, it was
unable to reach the desired residual error of 10⁻⁴. Figure 2 also shows that our
TORRENT-HYB algorithm is able to converge to the optimal solution much faster
than TORRENT-FC or TORRENT-GD. This is because TORRENT-FC solves a least squares
problem at each step and thus, even though it requires significantly fewer
iterations to converge, each iteration in itself is very expensive. While each
iteration of TORRENT-GD is cheap, it is still limited by the slow O((1 − 1/κ)^t)
convergence rate of the gradient descent algorithm, where κ is the condition
number of the covariance matrix. TORRENT-HYB, on the other hand, is able to
combine the strengths of both the methods to achieve faster convergence.
6.2 High Dimensional Results
Recovery Property: Figure 2 shows the variation in recovery error in the
high-dimensional setting as the number of corrupted points was varied. For these
experiments, n was set to 5·s* log(p) and the fraction of corrupted points α was
varied from 0.1 to 0.7. While L1-DALM fails to recover w* for α > 0.5,
TORRENT-HD offers perfect recovery even for α values up to 0.7.
Run Time: Figure 2 shows the variation in recovery error as a function of run
time in this setting. L1-DALM was found to be an order of magnitude slower than
TORRENT-HD, making it infeasible for sparse high-dimensional settings. One key
reason for this is that the L1-DALM solver is significantly slower in
identifying the set of clean points. For instance, whereas TORRENT-HD was able
to identify the clean set of points in only 5 iterations, it took L1 around 250
iterations to do the same.
References
[1] Christoph Studer, Patrick Kuppinger, Graeme Pope, and Helmut Bölcskei.
Recovery of Sparsely Corrupted Signals. IEEE Transactions on Information Theory,
58(5):3115-3130, 2012.
[2] Peter J. Rousseeuw and Annick M. Leroy. Robust Regression and Outlier
Detection. John Wiley and Sons, 1987.
[3] John Wright, Allen Y. Yang, Arvind Ganesh, S. Shankar Sastry, and Yi Ma.
Robust Face Recognition via Sparse Representation. IEEE Transactions on Pattern
Analysis and Machine Intelligence, 31(2):210-227, 2009.
[4] John Wright and Yi Ma. Dense Error Correction via L1-Minimization. IEEE
Transactions on Information Theory, 56(7):3540-3560, 2010.
[5] Nam H. Nguyen and Trac D. Tran. Exact recoverability from dense corrupted
observations via L1 minimization. IEEE Transactions on Information Theory,
59(4):2036-2058, 2013.
[6] Yudong Chen, Constantine Caramanis, and Shie Mannor. Robust Sparse
Regression under Adversarial Corruption. In 30th International Conference on
Machine Learning (ICML), 2013.
[7] Brian McWilliams, Gabriel Krummenacher, Mario Lucic, and Joachim M. Buhmann.
Fast and Robust Least Squares Estimation in Corrupted Linear Models. In 28th
Annual Conference on Neural Information Processing Systems (NIPS), 2014.
[8] Adrien-Marie Legendre (1805). On the Method of Least Squares. In (Translated
from the French) D. E. Smith, editor, A Source Book in Mathematics, pages
576-579. New York: Dover Publications, 1959.
[9] Peter J. Rousseeuw. Least Median of Squares Regression. Journal of the
American Statistical Association, 79(388):871-880, 1984.
[10] Peter J. Rousseeuw and Katrien Driessen. Computing LTS Regression for Large
Data Sets. Journal of Data Mining and Knowledge Discovery, 12(1):29-45, 2006.
[11] Thomas Blumensath and Mike E. Davies. Iterative Hard Thresholding for
Compressed Sensing. Applied and Computational Harmonic Analysis, 27(3):265-274,
2009.
[12] Prateek Jain, Ambuj Tewari, and Purushottam Kar. On Iterative Hard
Thresholding Methods for High-dimensional M-Estimation. In 28th Annual
Conference on Neural Information Processing Systems (NIPS), 2014.
[13] Yiyuan She and Art B. Owen. Outlier Detection Using Nonconvex Penalized
Regression. arXiv:1006.2592 (stat.ME).
[14] Rahul Garg and Rohit Khandekar. Gradient descent with sparsification: an
iterative algorithm for sparse recovery with restricted isometry property. In
26th International Conference on Machine Learning (ICML), 2009.
[15] Allen Y. Yang, Arvind Ganesh, Zihan Zhou, Shankar Sastry, and Yi Ma. A
Review of Fast L1-Minimization Algorithms for Robust Face Recognition. CoRR
abs/1007.3753, 2012.
[16] Beatrice Laurent and Pascal Massart. Adaptive estimation of a quadratic
functional by model selection. The Annals of Statistics, 28(5):1302-1338, 2000.
[17] Thomas Blumensath. Sampling and reconstructing signals from a union of
linear subspaces. IEEE Transactions on Information Theory, 57(7):4660-4671,
2011.
[18] Roman Vershynin. Introduction to the non-asymptotic analysis of random
matrices. In Y. Eldar and G. Kutyniok, editors, Compressed Sensing, Theory and
Applications, chapter 5, pages 210-268. Cambridge University Press, 2012.
| 6010 |@word mild:2 version:1 norm:1 stronger:1 seems:1 r:4 simulation:1 covariance:1 accommodate:1 necessity:1 selecting:1 offering:1 tuned:1 amp:1 existing:7 current:11 com:1 surprising:1 protection:1 written:1 readily:2 john:3 additive:1 subsequent:1 numerical:1 cheap:1 plot:5 update:16 depict:1 v:1 alone:1 generative:1 selected:7 fewer:1 intelligence:1 isotropic:2 dover:1 vanishing:1 core:1 smith:1 provides:2 mannor:1 cse:1 location:1 firstly:1 purushot:1 unbounded:3 along:1 direct:1 bouquet:1 prove:1 blumensath:2 introductory:1 combine:1 yst:1 introduce:2 manner:2 expected:1 indeed:3 embody:1 bility:1 xti:1 cpu:1 solver:20 kwk0:1 becomes:2 provided:1 estimating:2 underlying:1 moreover:2 bounded:1 notation:2 prateek:2 what:2 kind:1 astronomy:1 sparsification:1 guarantee:14 every:1 runtime:1 exactly:3 kutyniok:1 k2:17 bst:6 unit:2 mcwilliams:1 enjoy:1 appear:1 superiority:1 before:5 timing:1 modify:1 limit:1 despite:2 laurent:1 solely:1 might:1 garg:1 studied:3 christoph:1 ease:1 fastest:1 limited:1 bi:4 kw2:1 statistically:1 union:1 universal:3 significantly:9 trac:1 davy:1 pre:1 studer:1 get:1 cannot:1 convenience:1 selection:1 operator:7 shankar:2 faulty:1 impossible:2 optimize:1 restriction:1 deterministic:3 economics:1 zihan:1 independently:2 convex:3 l:2 simplicity:3 recovery:41 identifying:1 importantly:1 orthonormal:1 nam:1 hd:12 century:1 handle:1 coordinate:1 variation:6 resp:9 target:1 suppose:2 annals:1 exact:5 us:1 designing:1 element:1 trend:2 expensive:3 kappa:1 updating:1 recognition:2 sparsely:2 observed:1 mike:1 solved:1 worst:1 ensures:3 rlsr:9 decrease:1 ran:1 mentioned:2 gross:1 convexity:8 complexity:2 rigorously:2 uniformity:1 solving:2 explicates:1 completely:1 translated:1 interleavings:1 k0:6 various:3 caramanis:1 corrective:6 chapter:1 jain:2 fast:3 bhatia:1 whose:1 heuristic:1 larger:2 solve:2 say:1 compressed:2 statistic:1 jointly:1 itself:1 ip:1 advantage:1 eigenvalue:1 took:1 propose:5 tran:1 coming:1 product:1 combining:1 achieve:3 graeme:1 intuitive:1 ky:6 az:1 scalability:1 convergence:17 requirement:3 diverges:1 comparative:1 perfect:1 converges:2 executing:1 spent:1 depending:1 ac:1 stat:1 pose:1 measured:1 progress:2 solves:3 strong:15 implemented:2 direction:1 owing:1 stringent:2 require:3 beatrice:1 homotopy:1 brian:1 im:1 extension:4 correction:1 hold:7 around:2 considered:1 wright:2 deciding:1 claim:1 smallest:1 estimation:3 largest:1 minimization:5 gaussian:14 modified:1 rather:1 zhou:1 varying:1 publication:1 focus:2 joachim:1 improvement:1 she:1 contrast:1 adversarial:3 helmut:1 dim:1 milder:1 dependent:1 selects:2 corrupts:1 arg:2 among:1 issue:1 dual:1 ill:1 pascal:1 eldar:1 adrien:1 art:2 never:1 having:1 sampling:2 identical:1 adversarially:5 kw:13 icml:2 np:1 roman:1 oblivious:1 composed:1 simultaneously:1 kwt:4 cheaper:2 phase:4 consisting:1 microsoft:3 proba:1 ab:1 detection:2 message:1 mining:1 evaluation:1 severe:1 analyzed:2 accurate:3 minw:1 orthogonal:1 desired:1 rsc:3 instance:5 column:1 soft:1 increased:1 applicability:2 cost:2 entry:2 subset:12 uniform:2 successful:2 too:2 corrupted:19 gd:21 adaptively:1 st:15 vershynin:1 international:2 standing:1 regressor:7 invertible:2 w1:1 again:1 satisfied:4 choose:1 ssc:8 admit:1 book:1 american:1 style:2 return:4 supp:3 sec:2 coefficient:1 satisfy:5 depends:1 reiterate:1 later:2 view:1 lot:1 try:1 performed:2 observing:1 analyze:2 red:1 mario:1 recover:10 contribution:2 minimize:2 square:11 who:1 efficiently:2 ensemble:1 identify:2 produced:1 researcher:1 corruption:30 ago:1 n10:1 
srsc:2 reach:1 iht:2 definition:8 against:2 grossly:1 proof:4 recovers:2 con:1 sampled:8 popular:1 knowledge:2 color:1 actually:1 tolerate:2 follow:1 response:14 improved:1 rahul:1 formulation:2 done:1 execute:1 though:1 rejected:1 just:1 torrent:11 sketch:1 hand:2 ganesh:2 french:1 brings:1 indicated:1 requiring:2 true:1 multiplier:1 counterpart:1 hence:3 aggressively:1 symmetric:1 lts:1 white:4 attractive:1 stress:2 demonstrate:2 performs:3 l1:42 allen:1 pro:1 lucic:1 harmonic:1 discovers:1 recently:1 common:1 functional:1 empirically:1 extend:2 association:1 refer:3 significant:2 cambridge:1 smoothness:8 grid:1 sastry:2 mathematics:1 kw1:3 stable:2 interleaf:1 patrick:1 something:1 isometry:1 showed:1 purushottam:2 constantine:1 moderate:1 inf:1 manipulation:1 claimed:1 nonconvex:1 kar:2 success:1 kwk1:1 yi:13 uncorrupted:1 seen:1 impose:2 r0:1 determine:1 converge:2 signal:3 reduces:2 alan:1 faster:9 offer:7 long:1 sphere:1 arvind:2 y:1 variant:9 regression:26 vision:1 metric:1 arxiv:1 iteration:14 represent:3 whereas:4 addition:1 fine:1 xst:6 else:1 diagram:2 source:1 median:1 w2:3 unlike:1 sr:2 exhibited:1 strict:1 massart:1 shie:1 presence:3 yang:2 variety:1 fit:2 orrent:69 inner:1 lesser:1 kush:1 whether:1 gb:1 trimmed:1 effort:1 penalty:2 peter:3 passing:1 york:1 remark:5 matlab:1 gabriel:1 tewari:1 amount:3 repeating:1 rousseeuw:3 rw:2 outperform:1 exist:1 driessen:1 correctly:2 blue:1 shall:6 key:4 nevertheless:1 drawn:1 changing:1 marie:1 clean:6 ht:2 ls1:1 ram:1 geometrically:1 fraction:5 enforced:1 realworld:1 run:8 kuppinger:1 almost:1 reader:4 appendix:3 bound:2 ct:6 guaranteed:1 distinguish:1 quadratic:2 annual:2 leroy:1 strength:1 precisely:1 krummenacher:1 x2:1 sake:3 speed:1 argument:1 min:7 extremely:1 optimality:1 performing:2 relatively:1 legendre:3 smaller:1 beneficial:1 across:1 ascertain:1 son:1 reconstructing:1 appealing:1 making:2 b:2 s1:3 kbk:5 outlier:2 restricted:7 equation:3 turn:1 ascending:1 prajain:1 end:2 reformulations:1 adopted:3 gaussians:1 apply:1 upto:3 enforce:2 save:1 slower:3 rp:14 thomas:2 assumes:2 graphical:1 neglect:1 exploit:1 giving:1 restrictive:1 objective:2 move:1 strategy:2 rt:3 said:2 exhibit:2 gradient:7 subspace:1 kbk1:1 unable:3 w0:4 me:1 trivial:1 reason:1 provable:1 khandekar:1 assuming:1 length:2 index:3 remind:1 minimizing:1 difficult:1 executed:4 potentially:1 kzk1:1 sigma:5 stated:1 design:7 policy:1 unknown:1 perform:2 allowing:2 observation:4 datasets:2 descent:5 situation:1 extended:1 rn:6 perturbation:1 varied:3 arbitrary:3 recoverability:1 yiyuan:1 introduced:1 namely:2 required:4 extensive:1 alternately:2 nip:2 address:5 able:8 adversary:5 below:3 pattern:1 sparsity:1 challenge:1 ambuj:1 reliable:1 max:6 including:1 critical:1 natural:3 hybrid:4 regularized:1 buhmann:1 residual:5 representing:1 technology:1 carried:2 deemed:1 incoherent:1 ss:8 faced:1 review:1 literature:2 geometric:2 l2:1 discovery:1 rohit:1 asymptotic:2 relative:1 fully:6 permutation:1 interesting:1 proven:1 thresholding:25 editor:2 corrupt:2 row:1 penalized:1 supported:2 free:1 infeasible:1 enjoys:1 formal:1 weaker:1 allow:2 institute:1 india:3 wide:1 face:3 taking:1 sparse:24 tolerance:2 ghz:1 yudong:1 dimension:2 xn:4 transition:4 commonly:1 adaptive:4 made:3 nguyen:1 transaction:5 alpha:3 obtains:3 approximate:1 iitk:1 global:1 active:18 overfitting:2 xi:10 quoted:1 postdoctoral:1 search:2 iterative:5 learn:1 robust:20 depicting:1 domain:1 diag:1 main:1 dense:11 s2:4 noise:12 n2:1 x1:4 augmented:1 pope:1 slow:1 wiley:1 sub:10 fails:1 wish:2 rent:3 
third:1 minz:1 hw:3 admissible:1 kanpur:1 theorem:13 interleaving:2 sensing:2 x:11 admits:1 olcskei:1 albeit:2 merging:1 corr:1 magnitude:7 conditioned:1 chen:1 easier:1 kbk2:1 fc:36 prevents:1 lagrange:1 kbk0:7 applies:1 satisfies:9 relies:2 ma:3 sized:3 goal:2 exposition:1 owen:1 considerable:1 hard:15 experimentally:1 specifically:3 determined:1 reducing:1 uniformly:1 wt:9 called:2 total:3 forgo:1 e:5 est:1 formally:1 support:2 indian:1 evaluate:1 |
5,538 | 6,011 | Column Selection via Adaptive Sampling
Saurabh Paul
Global Risk Sciences, Paypal Inc.
[email protected]
Malik Magdon-Ismail
CS Dept., Rensselaer Polytechnic Institute
[email protected]
Petros Drineas
CS Dept., Rensselaer Polytechnic Institute
[email protected]
Abstract
Selecting a good column (or row) subset of massive data matrices has found many
applications in data analysis and machine learning. We propose a new adaptive sampling algorithm that can be used to improve any relative-error column
selection algorithm. Our algorithm delivers a tighter theoretical bound on the approximation error which we also demonstrate empirically using two well known
relative-error column subset selection algorithms. Our experimental results on
synthetic and real-world data show that our algorithm outperforms non-adaptive
sampling as well as prior adaptive sampling approaches.
1 Introduction
In numerous machine learning and data analysis applications, the input data are modelled as a matrix
$A \in \mathbb{R}^{m \times n}$, where $m$ is the number of objects (data points) and $n$ is the number of features. Often,
it is desirable to represent your solution using a few features (to promote better generalization and
interpretability of the solutions), or using a few data points (to identify important coresets of the
data), for example PCA, sparse PCA, sparse regression, coreset based regression, etc. [1, 2, 3, 4].
These problems can be reduced to identifying a good subset of the columns (or rows) in the data
matrix, the column subset selection problem (CSSP). For example, finding an optimal sparse linear
encoder for the data (dimension reduction) can be explicitly reduced to CSSP [5]. Motivated by the
fact that in many practical applications the left and right singular vectors of a matrix $A$ lack any physical interpretation, a long line of work [6, 7, 8, 9, 10, 11, 12, 13, 14, 15] focused on extracting a subset of columns of the matrix $A$ which are approximately as good as $A_k$ at reconstructing $A$.
To make our discussion more concrete, let us formally define CSSP.
Column Subset Selection Problem, CSSP: Find a matrix $C \in \mathbb{R}^{m \times c}$ containing $c$ columns of $A$ for which $\|A - CC^+A\|_F$ is small.¹ In the prior work, one measures the quality of a CSSP-solution against $A_k$, the best rank-$k$ approximation to $A$ obtained via the singular value decomposition (SVD), where $k$ is a user-specified target rank parameter. For example, [15] gives efficient algorithms to find $C$ with $c \approx 2k/\epsilon$ columns, for which $\|A - CC^+A\|_F \le (1 + \epsilon)\|A - A_k\|_F$.
Our contribution is not to directly attack CSSP. We present a novel algorithm that can improve an
existing CSSP algorithm by adaptively invoking it, in a sense actively learning which columns to
sample next based on the columns you have already sampled. If you use the CSSP-algorithm from
[15] as a strawman benchmark, you can obtain $c$ columns all at once and incur an error roughly $(1 + 2k/c)\|A - A_k\|_F$. Or, you can invoke the algorithm to obtain, for example, $c/2$ columns, and then allow the algorithm to adapt to the columns already chosen (for example by modifying $A$) before choosing the remaining $c/2$ columns. We refer to the former as continued sampling and to the
¹ $CC^+A$ is the best possible reconstruction of $A$ by projection into the space spanned by the columns of $C$.
latter as adaptive sampling. We prove performance guarantees which show that adaptive sampling
improves upon continued sampling, and we present experiments on synthetic and real data that
demonstrate significant empirical performance gains.
1.1 Notation
A, B, ... denote matrices and a, b, ... denote column vectors; $I_n$ is the $n \times n$ identity matrix. $[A, B]$ and $[A; B]$ denote matrix concatenation operations in a column-wise and row-wise manner, respectively. Given a set $S \subseteq \{1, \dots, n\}$, $A_S$ is the matrix that contains the columns of $A \in \mathbb{R}^{m \times n}$
indexed by $S$. Let $\mathrm{rank}(A) = \rho \le \min\{m, n\}$. The (economy) SVD of $A$ is
$$A = (U_k \;\; U_{\rho-k}) \begin{pmatrix} \Sigma_k & 0 \\ 0 & \Sigma_{\rho-k} \end{pmatrix} \begin{pmatrix} V_k^T \\ V_{\rho-k}^T \end{pmatrix} = \sum_{i=1}^{\rho} \sigma_i(A)\, u_i v_i^T,$$
where $U_k \in \mathbb{R}^{m \times k}$ and $U_{\rho-k} \in \mathbb{R}^{m \times (\rho-k)}$ contain the left singular vectors $u_i$, $V_k \in \mathbb{R}^{n \times k}$ and $V_{\rho-k} \in \mathbb{R}^{n \times (\rho-k)}$ contain the right singular vectors $v_i$, and $\Sigma \in \mathbb{R}^{\rho \times \rho}$ is a diagonal matrix containing the singular values $\sigma_1(A) \ge \dots \ge \sigma_\rho(A) > 0$. The Frobenius norm of $A$ is $\|A\|_F^2 = \sum_{i,j} A_{ij}^2$; $\mathrm{Tr}(A)$ is the trace of $A$; the pseudoinverse of $A$ is $A^+ = V\Sigma^{-1}U^T$; and $A_k$, the best rank-$k$ approximation to $A$ under any unitarily invariant norm, is $A_k = U_k \Sigma_k V_k^T = \sum_{i=1}^k \sigma_i(A)\, u_i v_i^T$.
1.2 Our Contribution: Adaptive Sampling
We design a novel CSSP-algorithm that adaptively selects columns from the matrix A in rounds. In
each round we remove from $A$ the information that has already been "captured" by the columns that
have been thus far selected. Algorithm 1 selects tc columns of A in t rounds, where in each round
c columns of A are selected using a relative-error CSSP-algorithm from prior work.
Input: $A \in \mathbb{R}^{m \times n}$; target rank $k$; # rounds $t$; columns per round $c$
Output: $C \in \mathbb{R}^{m \times tc}$, $tc$ columns of $A$, and $S$, the indices of those columns.
1: $S = \{\}$; $E_0 = A$
2: for $\ell = 1, \dots, t$ do
3:    Sample indices $S_\ell$ of $c$ columns from $E_{\ell-1}$ using a CSSP-algorithm.
4:    $S \leftarrow S \cup S_\ell$.
5:    Set $C = A_S$ and $E_\ell = A - (CC^+A)_{\ell k}$.
6: return $C$, $S$
Algorithm 1: Adaptive Sampling
At round $\ell$ in Step 3, we compute column indices $S$ (and $C = A_S$) using a CSSP-algorithm on the residual $E_{\ell-1}$ of the previous round. To compute this residual, remove from $A$ the best rank-$(\ell-1)k$ approximation to $A$ in the span of the columns selected from the first $\ell - 1$ rounds,
$$E_{\ell-1} = A - (CC^+A)_{(\ell-1)k}.$$
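A minimal numerical sketch of Algorithm 1 follows, assuming a generic `cssp_select(E, k, c)` routine: a hypothetical interface standing in for any relative-error CSSP-algorithm that returns $c$ column indices of its input matrix.

```python
import numpy as np

def adaptive_sampling(A, k, t, c, cssp_select):
    """Sketch of Algorithm 1: adaptively sample t rounds of c columns,
    recomputing the residual E_ell = A - (C C^+ A)_{ell k} each round."""
    S = []
    E = A.copy()
    for ell in range(1, t + 1):
        S.extend(cssp_select(E, k, c))      # Step 3: sample from the residual
        C = A[:, S]
        Q, _ = np.linalg.qr(C)              # orthonormal basis for span(C)
        U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
        r = ell * k                          # rank of the truncation
        # Step 5: subtract the best rank-(ell*k) approximation within span(C)
        E = A - Q @ (U[:, :r] * s[:r]) @ Vt[:r, :]
    return A[:, S], S
```

Because $Q$ has orthonormal columns, truncating the SVD of $Q^T A$ to rank $r$ and mapping back through $Q$ yields exactly $(CC^+A)_r$.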
A similar strategy was developed in [8] with sequential adaptive use of (additive error) CSSP-algorithms. These (additive error) CSSP-algorithms select columns according to column norms [11]. In [8], the residual in Step 5 is defined differently, as $E_\ell = A - CC^+A$. To motivate our result, it helps to take a closer look at the reconstruction error $E = A - CC^+A$ after $t$ adaptive rounds of the strategy in [8] with the CSSP-algorithm in [11].
# rounds | Continued sampling: $tc$ columns using the CSSP-algorithm from [11] ($\epsilon = k/c$) | Adaptive sampling: $t$ rounds of the strategy in [8] with the CSSP-algorithm from [11]
$t = 2$ | $\|E\|_F^2 \le \|A - A_k\|_F^2 + \frac{\epsilon}{2}\|A\|_F^2$ | $\|E\|_F^2 \le (1+\epsilon)\|A - A_k\|_F^2 + \epsilon^2\|A\|_F^2$
$t$ | $\|E\|_F^2 \le \|A - A_k\|_F^2 + \frac{\epsilon}{t}\|A\|_F^2$ | $\|E\|_F^2 \le (1+O(\epsilon))\|A - A_k\|_F^2 + \epsilon^t\|A\|_F^2$

Typically $\|A\|_F^2 \gg \|A - A_k\|_F^2$ and $\epsilon$ is small (i.e., $c \gg k$), so adaptive sampling à la [8] wins over continued sampling for additive error CSSP-algorithms. This is especially apparent after $t$ rounds, where continued sampling only attenuates the big term $\|A\|_F^2$ by $\epsilon/t$, but adaptive sampling exponentially attenuates this term by $\epsilon^t$.
Recently, powerful CSSP-algorithms have been developed which give relative-error guarantees [15].
We can use the adaptive strategy from [8] together with these newer relative error CSSP-algorithms.
If one carries out the analysis from [8] by replacing the additive error CSSP-algorithm from [11]
with the relative error CSSP-algorithm in [15], the comparison of continued and adaptive sampling
using the strategy from [8] becomes (t = 2 rounds suffices to see the problem):
# rounds | Continued sampling: $tc$ columns using the CSSP-algorithm from [15] ($\epsilon = 2k/c$) | Adaptive sampling: $t$ rounds of the strategy in [8] with the CSSP-algorithm from [15]
$t = 2$ | $\|E\|_F^2 \le \big(1 + \frac{\epsilon}{2}\big)\|A - A_k\|_F^2$ | $\|E\|_F^2 \le \big(1 + \epsilon + \frac{\epsilon^2}{2}\big)\|A - A_k\|_F^2$

Adaptive sampling from [8] gives a worse theoretical guarantee than continued sampling for relative error CSSP-algorithms. In a nutshell, no matter how many rounds of adaptive sampling you do, the theoretical bound will not be better than $(1 + k/c)\|A - A_k\|_F^2$ if you are using a relative error CSSP-algorithm. This raises an obvious question: is it possible to combine relative-error CSSP-algorithms with adaptive sampling to get (provably and empirically) improved CSSP-algorithms?
The approach of [8] does not achieve this objective. We provide a positive answer to this question. Our approach is a subtle modification to the approach in [8], in Step 5 of Algorithm 1: when we compute the residual matrix in round $\ell$, we subtract from $A$ the matrix $(CC^+A)_{\ell k}$, the best rank-$\ell k$ approximation to the projection of $A$ onto the current columns selected, as opposed to subtracting the full projection $CC^+A$. This subtle change is critical in our new analysis, which gives a tighter bound on the final error, allowing us to boost relative-error CSSP-algorithms. For $t = 2$ rounds of adaptive sampling, we get a reconstruction error of
$$\|E\|_F^2 \le (1+\epsilon)\|A - A_{2k}\|_F^2 + \epsilon(1+\epsilon)\|A - A_k\|_F^2,$$
where $\epsilon = 2k/c$. The critical improvement in the bound is that the dominant $O(1)$-term depends on $\|A - A_{2k}\|_F^2$, and the dependence on $\|A - A_k\|_F^2$ is now $O(\epsilon)$. To highlight this improved theoretical bound in an extreme case, consider a matrix $A$ that has rank exactly $2k$; then $\|A - A_{2k}\|_F = 0$. Continued sampling gives an error bound $(1 + \frac{\epsilon}{2})\|A - A_k\|_F^2$, whereas our adaptive sampling gives an error bound $(\epsilon + \epsilon^2)\|A - A_k\|_F^2$, which is clearly better in this extreme case. In practice, data matrices have rapidly decaying singular values, so this extreme case is not far from reality (see Figure 1).
[Figure 1: Figure showing the singular value decay for two real-world datasets. Left panel: singular values of HGDP averaged over 22 chromosomes; right panel: singular values of TechTC-300 averaged over 49 datasets.]
To state our main theoretical result, we need to more formally define a relative error CSSP-algorithm.
Definition 1 (Relative Error CSSP-algorithm A(X, k, c)). A relative error CSSP-algorithm A takes
as input a matrix X, a rank parameter k < rank(X) and a number of columns c, and outputs column
indices S with |S| = c, so that the columns C = XS satisfy:
$$\mathbb{E}_C\big[\|X - (CC^+X)_k\|_F^2\big] \le (1 + \epsilon(c,k))\,\|X - X_k\|_F^2,$$
where $\epsilon(c,k)$ depends on $\mathcal{A}$ and the expectation is over random choices made in the algorithm.²
Our main theorem bounds the reconstruction error when our adaptive sampling approach is used to boost $\mathcal{A}$. The boost in performance depends on the decay of the spectrum of $A$.
Theorem 1. Let $A \in \mathbb{R}^{m \times n}$ be a matrix of rank $\rho$ and let $k < \rho$ be a target rank. If, in Step 3 of Algorithm 1, we use the relative error CSSP-algorithm $\mathcal{A}$ with $\epsilon(c,k) = \epsilon < 1$, then
$$\mathbb{E}_C\big[\|A - (CC^+A)_{tk}\|_F^2\big] \le (1+\epsilon)\|A - A_{tk}\|_F^2 + \sum_{i=1}^{t-1} \epsilon(1+\epsilon)^{t-i}\|A - A_{ik}\|_F^2.$$
Comments.
1. The dominant $O(1)$ term in our bound is $\|A - A_{tk}\|_F$, not $\|A - A_k\|_F$. This is a major improvement since the former is typically much smaller than the latter in real data. Further, we need a bound on the reconstruction error $\|A - CC^+A\|_F$. Our theorem gives a stronger result than needed because $\|A - CC^+A\|_F \le \|A - (CC^+A)_{tk}\|_F$.
2. We presented our result for the case of a relative error CSSP-algorithm with a guarantee on the expected reconstruction error. Clearly, if the CSSP-algorithm is deterministic, then Theorem 1 will also hold deterministically. The result in Theorem 1 can also be boosted to hold with high probability, by repeating the process $\log\frac{1}{\delta}$ times and picking the columns which performed best (a code sketch of this repetition scheme is given after these comments). Then, with probability at least $1 - \delta$,
$$\|A - (CC^+A)_{tk}\|_F^2 \le (1 + 2\epsilon)\|A - A_{tk}\|_F^2 + 2\sum_{i=1}^{t-1} \epsilon(1+\epsilon)^{t-i}\|A - A_{ik}\|_F^2.$$
If the CSSP-algorithm itself only gives a high-probability (at least $1-\delta$) guarantee, then the bound in Theorem 1 also holds with high probability, at least $1 - t\delta$, which is obtained by applying a union bound to the probability of failure in each round.
3. Our results hold for any relative error CSSP-algorithm combined with our adaptive sampling strategy. The relative error CSSP-algorithm in [15] has $\epsilon(c,k) \approx 2k/c$. The relative error CSSP-algorithm in [16] has $\epsilon(c,k) = O(k \log k / c)$. Other algorithms can be found in [8, 9, 17].
We presented the simplest form of the result, which can be generalized to sample a different
number of columns in each round, or even use a different CSSP-algorithm in each round. We
have not optimized the sampling schedule, i.e. how many columns to sample in each round. At
the moment, this is largely dictated by the CSSP algorithm itself, which requires a minimum
number of samples in each round to give a theoretical guarantee. From the empirical perspective
(for example using leverage score sampling to select columns), strongest performance may be
obtained by adapting after every column is selected.
4. In the context of the additive error CSSP-algorithm from [11], our adaptive sampling strategy
gives a theoretical performance guarantee which is at least as good as the adaptive sampling
strategy from [8].
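The repetition scheme from Comment 2 is simple to realize; the sketch below assumes `adaptive_sampling` is the routine sketched earlier and scores each repetition by the achieved reconstruction error (an assumption about how "performed best" is measured).

```python
import numpy as np

def boosted_adaptive_sampling(A, k, t, c, cssp_select, delta):
    """Repeat Algorithm 1 about log(1/delta) times and keep the column
    set achieving the smallest reconstruction error ||A - C C^+ A||_F."""
    reps = max(1, int(np.ceil(np.log(1.0 / delta))))
    best, best_err = None, np.inf
    for _ in range(reps):
        C, S = adaptive_sampling(A, k, t, c, cssp_select)
        Q, _ = np.linalg.qr(C)
        err = np.linalg.norm(A - Q @ (Q.T @ A), "fro")
        if err < best_err:
            best, best_err = (C, S), err
    return best
```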
Lastly, we also provide the first empirical evaluation of adaptive sampling algorithms. We implemented our algorithm using two relative-error column selection algorithms (the near-optimal column
selection algorithm of [18, 15] and the leverage-score sampling algorithm of [19]) and compared it
against the adaptive sampling algorithm of [8] on synthetic and real-world data. The experimental
results show that our algorithm outperforms prior approaches.
1.3 Related Work
Column selection algorithms have been extensively studied in prior literature. Such algorithms
include rank-revealing QR factorizations [6, 20] for which only weak performance guarantees can
be derived. The QR approach was improved in [21] where the authors proposed a memory efficient
implementation. The randomized additive error CSSP-algorithm [11] was a breakthrough, which led
to a series of improvements producing relative CSSP-algorithms using a variety of randomized and
² For an additive-error CSSP algorithm, $\mathbb{E}_C\big[\|X - (CC^+X)_k\|_F^2\big] \le \|X - X_k\|_F^2 + \epsilon(c,k)\|X\|_F^2$.
deterministic techniques. These include leverage score sampling [19, 16], volume sampling [8, 9,
17], the two-stage hybrid sampling approach of [22], the near-optimal column selection algorithms
of [18, 15], as well as deterministic variants presented in [23]. We refer the reader to Section 1.5
of [15] for a detailed overview of prior work. Our focus is not on CSSP-algorithms per se, but rather
on adaptively invoking existing CSSP-algorithms. The only prior adaptive sampling with a provable guarantee was introduced in [8] and further analyzed in [24, 9, 25]; this strategy specifically boosts the additive error CSSP-algorithm, but does not work with relative error CSSP-algorithms which are currently in use. Our modification of the approach in [8] is delicate, but crucial to the new analysis we perform in the context of relative error CSSP-algorithms.
Our work is motivated by relative error CSSP-algorithms satisfying Definition 1. Such algorithms exist which give expected guarantees [15] as well as high probability guarantees [19]. Specifically, given $X \in \mathbb{R}^{m \times n}$ and a target rank $k$, the leverage-score sampling approach of [19] selects $c = O\big(\frac{k}{\epsilon^2}\log\frac{k}{\epsilon^2}\big)$ columns of $A$ to form a matrix $C \in \mathbb{R}^{m \times c}$ to give a $(1+\epsilon)$-relative error with probability at least $1 - \delta$. Similarly, [18, 15] proposed near-optimal relative error CSSP-algorithms selecting $c \approx 2k/\epsilon$ columns and giving a $(1+\epsilon)$-relative error guarantee in expectation, which can be boosted to a high probability guarantee via independent repetition.
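For concreteness, here is a sketch of rank-$k$ leverage-score column sampling in the spirit of [19]; sampling without replacement and the normalization used here are our simplifications, not necessarily the exact scheme of [19].

```python
import numpy as np

def leverage_score_sample(A, k, c, seed=None):
    """Sample c column indices of A with probability proportional to
    their rank-k leverage scores ||(V_k)_j||_2^2."""
    rng = np.random.default_rng(seed)
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    scores = (Vt[:k] ** 2).sum(axis=0)    # leverage scores; they sum to k
    return rng.choice(A.shape[1], size=c, replace=False, p=scores / k)
```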
2 Proof of Theorem 1
We now prove the main result which analyzes the performance of our adaptive sampling in Algorithm 1 for a relative error CSSP-algorithm. We will need the following linear algebraic Lemma.
Lemma 1. Let $X, Y \in \mathbb{R}^{m \times n}$ and suppose that $\mathrm{rank}(Y) = r$. Then,
$$\sigma_i(X - Y) \ge \sigma_{r+i}(X).$$
Proof. Observe that $\sigma_i(X - Y) = \|(X - Y) - (X - Y)_{i-1}\|_2$. The claim is now immediate from the Eckart-Young theorem, because $Y + (X - Y)_{i-1}$ has rank at most $r + i - 1$; therefore
$$\sigma_i(X - Y) = \|X - (Y + (X - Y)_{i-1})\|_2 \ge \|X - X_{r+i-1}\|_2 = \sigma_{r+i}(X).$$
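A quick numerical illustration of Lemma 1 (not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 40))
r = 5
Y = rng.standard_normal((50, r)) @ rng.standard_normal((r, 40))  # rank(Y) = r
s_diff = np.linalg.svd(X - Y, compute_uv=False)
s_X = np.linalg.svd(X, compute_uv=False)
# Lemma 1: sigma_i(X - Y) >= sigma_{r+i}(X) for all valid i
assert all(s_diff[i] >= s_X[r + i] - 1e-10 for i in range(len(s_X) - r))
```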
We are now ready to prove Theorem 1 by induction on $t$, the number of rounds of adaptive sampling. When $t = 1$, the claim is that
$$\mathbb{E}\big[\|A - (CC^+A)_k\|_F^2\big] \le (1+\epsilon)\|A - A_k\|_F^2,$$
which is immediate from the definition of the relative error CSSP-algorithm. Now for the induction.
Suppose that after $t$ rounds, columns $C_t$ are selected, and we have the induction hypothesis that
$$\mathbb{E}_{C_t}\big[\|A - (C_t C_t^+ A)_{tk}\|_F^2\big] \le (1+\epsilon)\|A - A_{tk}\|_F^2 + \sum_{i=1}^{t-1} \epsilon(1+\epsilon)^{t-i}\|A - A_{ik}\|_F^2. \qquad (1)$$
In the $(t+1)$th round, we use the residual $E_t = A - (C_t C_t^+ A)_{tk}$ to select new columns $C'$. Our relative error CSSP-algorithm $\mathcal{A}$ gives the following guarantee:
$$\begin{aligned}
\mathbb{E}_{C'}\big[\|E_t - (C' C'^+ E_t)_k\|_F^2 \mid E_t\big] &\le (1+\epsilon)\,\|E_t - (E_t)_k\|_F^2 \\
&= (1+\epsilon)\Big(\|E_t\|_F^2 - \sum_{i=1}^k \sigma_i^2(E_t)\Big) \\
&\le (1+\epsilon)\Big(\|E_t\|_F^2 - \sum_{i=1}^k \sigma_{tk+i}^2(A)\Big). \qquad (2)
\end{aligned}$$
(The last step follows because $\sigma_i^2(E_t) = \sigma_i^2(A - (C_t C_t^+ A)_{tk})$ and we can apply Lemma 1 with $X = A$, $Y = (C_t C_t^+ A)_{tk}$ and $r = \mathrm{rank}(Y) = tk$, to obtain $\sigma_i^2(E_t) \ge \sigma_{tk+i}^2(A)$.) We now take
the expectation of both sides with respect to the columns $C_t$:
$$\begin{aligned}
\mathbb{E}_{C_t}\Big[\mathbb{E}_{C'}\big[\|E_t - (C' C'^+ E_t)_k\|_F^2 \mid E_t\big]\Big]
&\le (1+\epsilon)\Big(\mathbb{E}_{C_t}\big[\|E_t\|_F^2\big] - \sum_{i=1}^k \sigma_{tk+i}^2(A)\Big) \\
&\overset{(a)}{\le} (1+\epsilon)^2\|A - A_{tk}\|_F^2 + \sum_{i=1}^{t-1}\epsilon(1+\epsilon)^{t+1-i}\|A - A_{ik}\|_F^2 - (1+\epsilon)\sum_{i=1}^k \sigma_{tk+i}^2(A) \\
&= (1+\epsilon)\Big(\|A - A_{tk}\|_F^2 - \sum_{i=1}^k \sigma_{tk+i}^2(A)\Big) + \epsilon(1+\epsilon)\|A - A_{tk}\|_F^2 + \sum_{i=1}^{t-1}\epsilon(1+\epsilon)^{t+1-i}\|A - A_{ik}\|_F^2 \\
&= (1+\epsilon)\|A - A_{(t+1)k}\|_F^2 + \sum_{i=1}^{t}\epsilon(1+\epsilon)^{t+1-i}\|A - A_{ik}\|_F^2. \qquad (3)
\end{aligned}$$
(a) follows because of the induction hypothesis (eqn. 1). The columns chosen after round $t + 1$ are $C_{t+1} = [C_t, C']$. By the law of iterated expectation,
$$\mathbb{E}_{C_t}\Big[\mathbb{E}_{C'}\big[\|E_t - (C' C'^+ E_t)_k\|_F^2 \mid E_t\big]\Big] = \mathbb{E}_{C_{t+1}}\big[\|E_t - (C' C'^+ E_t)_k\|_F^2\big].$$
Observe that $E_t - (C' C'^+ E_t)_k = A - (C_t C_t^+ A)_{tk} - (C' C'^+ E_t)_k = A - Y$, where $Y$ is in the column space of $C_{t+1} = [C_t, C']$; further, $\mathrm{rank}(Y) \le (t+1)k$. Since $(C_{t+1} C_{t+1}^+ A)_{(t+1)k}$ is the best rank-$(t+1)k$ approximation to $A$ in the column space of $C_{t+1}$, for any realization of $C_{t+1}$,
$$\|A - (C_{t+1} C_{t+1}^+ A)_{(t+1)k}\|_F^2 \le \|E_t - (C' C'^+ E_t)_k\|_F^2. \qquad (4)$$
Combining (4) with (3), we have that
$$\mathbb{E}_{C_{t+1}}\big[\|A - (C_{t+1} C_{t+1}^+ A)_{(t+1)k}\|_F^2\big] \le (1+\epsilon)\|A - A_{(t+1)k}\|_F^2 + \sum_{i=1}^{t}\epsilon(1+\epsilon)^{t+1-i}\|A - A_{ik}\|_F^2.$$
This is the desired bound after $t + 1$ rounds, concluding the induction.
It is instructive to understand where our new adaptive sampling strategy is needed for the proof to
go through. The crucial step is (2), where we use Lemma 1; it is essential that the residual is a low-rank perturbation of $A$.
3 Experiments
We compared three adaptive column sampling methods, using two real and two synthetic data sets.3
Adaptive Sampling Methods
ADP-AE: the prior adaptive method which uses the additive error CSSP-algorithm [8].
ADP-LVG: our new adaptive method using the relative error CSSP-algorithm [19].
ADP-Nopt: our adaptive method using the near optimal relative error CSSP-algorithm [15].
Data Sets
HGDP 22 chromosomes: SNPs human chromosome data from the HGDP database [26]. We
use all 22 chromosome matrices (1043 rows; 7,334-37,493 columns) and report the average.
Each matrix contains +1, 0, ?1 entries, and we randomly filled in missing entries.
TechTC-300: 49 document-term matrices [27] (150-300 rows (documents); 10,000-40,000
columns (words)). We kept 5-letter or larger words and report averages over 49 data-sets.
Synthetic 1: Random $1000 \times 10000$ matrices with $\sigma_i = i^{-0.3}$ (power law).
Synthetic 2: Random $1000 \times 10000$ matrices with $\sigma_i = e^{(1-i)/10}$ (exponential).
³ ADP-Nopt has two stages. The first stage is a deterministic dual-set spectral-Frobenius column selection in which ties could occur. We break ties in favor of the column not already selected with the maximum norm.
For randomized algorithms, we repeat the experiments five times and take the average. We use the
synthetic data sets to provide a controlled environment in which we can see performance for different
types of singular value spectra on very large matrices. In prior work it is common to report on the
quality of the columns selected $C$ by comparing the best rank-$k$ approximation within the column span of $C$ to $A_k$. Hence, we report the relative error $\|A - (CC^+A)_k\|_F / \|A - A_k\|_F$ when
comparing the algorithms. We set the target rank k = 5 and the number of columns in each round to
c = 2k. We have tried several choices for k and c and the results are qualitatively identical so we only
report on one choice. Our first set of results in Figure 2 is to compare the prior adaptive algorithm
ADP-AE with the new adaptive ones ADP-LVG and ADP-Nopt, which boost relative error CSSP-algorithms. Our two new algorithms both perform better than the prior existing adaptive sampling algorithm. Further, ADP-Nopt performs better than ADP-LVG, and this is also not surprising, because ADP-Nopt produces near-optimal columns: if you boost a better CSSP-algorithm, you get
better results. Further, by comparing the performance on Synthetic 1 with Synthetic 2, we see that
our algorithm (as well as prior algorithms) gains significantly in performance for rapidly decaying
singular values; our new theoretical analysis reflects this behavior, whereas prior results do not.
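For reference, the reported relative error ratio can be computed as follows (a sketch; `cols` holds the selected column indices):

```python
import numpy as np

def relative_error_ratio(A, cols, k):
    """Compute ||A - (C C^+ A)_k||_F / ||A - A_k||_F for C = A[:, cols]."""
    Q, _ = np.linalg.qr(A[:, cols])
    U, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    proj_k = Q @ (U[:, :k] * s[:k]) @ Vt[:k, :]        # (C C^+ A)_k
    s_A = np.linalg.svd(A, compute_uv=False)
    return np.linalg.norm(A - proj_k, "fro") / np.sqrt((s_A[k:] ** 2).sum())
```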
[Figure: HGDP 22 chromosomes, k = 10, c = 2k; relative error $\|A - (CC^+A)_k\|_F / \|A - A_k\|_F$ versus the number of rounds for ADP-AE, ADP-LVG and ADP-Nopt.]
The theory bound depends on the ratio $c/k$. The figure to the right shows a result for $k = 10$, $c = 2k$ ($k$ increases but $\epsilon$ is constant). Comparing the figure with the HGDP plot in Figure 2, we see that the quantitative performance is approximately the same, as the theory predicts (since $c/k$ has not changed). The percentage error stays the same even when we are sampling more columns, because the benchmark $\|A - A_k\|_F$ also gets smaller when $k$ increases. Since ADP-Nopt is the superior algorithm, we continue with results only for this algorithm.
[Figure: TechTC-300 49 datasets, k = 5, c = 2k; relative error versus the number of rounds for ADP-AE, ADP-LVG and ADP-Nopt, all given the same near-optimal initial columns.]
Our next experiment is to test which adaptive strategy works better in practice given the same initial selection of columns. That is, in Figure 2, ADP-AE uses an adaptive sampling based on the residual $A - CC^+A$ and then adaptively samples according to the adaptive strategy in [8]; the initial columns are chosen with the additive error algorithm. Our approach chooses initial columns with the relative error CSSP-algorithm and then continues to sample adaptively based on the relative error CSSP-algorithm and the residual $A - (CC^+A)_{tk}$. We now give all the adaptive sampling algorithms the benefit of the near-optimal initial columns chosen in the first round by the algorithm from [15]. The result shown to the right confirms that ADP-Nopt is best even if all adaptive strategies start from the same initial near-optimal columns.
[Figure: TechTC-300 49 datasets, k = 5, c = 2k; relative error versus the number of rounds for ADP-Nopt and SEQ-Nopt.]
Adaptive versus Continued Sequential Sampling. Our last experiment is to demonstrate that adaptive sampling works better than continued sequential sampling. We consider the relative error CSSP-algorithm in [15] in two modes. The first is ADP-Nopt, which is our adaptive sampling algorithm that selects $tc$ columns in $t$ rounds of $c$ columns each. The second is SEQ-Nopt, which is just the relative error CSSP-algorithm in [15] sampling $tc$ columns, all in one go. The results are shown on the right. The adaptive boosting of the relative error CSSP-algorithm can give up to a 1% improvement in this data set.
[Figure 2 panels: HGDP 22 chromosomes, TechTC-300 49 datasets, Synthetic Data 1, and Synthetic Data 2 (all with k = 5, c = 2k); each panel plots the relative error ratio against the number of rounds for ADP-AE, ADP-LVG and ADP-Nopt.]
Figure 2: Plots of relative error ratio $\|A - (CC^+A)_k\|_F / \|A - A_k\|_F$ for various adaptive sampling algorithms for k = 5 and c = 2k. In all cases, performance improves with more rounds of sampling, and rapidly converges to a relative reconstruction error of 1. This is most so in data matrices with singular values that decay quickly (such as TechTC and Synthetic 2). The HGDP singular values decay slowly because missing entries are selected randomly, and Synthetic 1 has slowly decaying power-law singular values by construction.
4 Conclusion
We present a new approach for adaptive sampling algorithms which can boost relative error CSSP-algorithms, in particular the near-optimal CSSP-algorithm in [15]. We showed theoretical and experimental evidence that our new adaptively boosted CSSP-algorithm is better than the prior existing
adaptive sampling algorithm which is based on the additive error CSSP-algorithm in [11]. We also
showed evidence (theoretical and empirical) that our adaptive sampling algorithms are better than
sequentially sampling all the columns at once. In particular, our theoretical bounds give a result
which is tighter for matrices whose singular values decay rapidly.
Several interesting questions remain. We showed that the simplest adaptive sampling algorithm
which samples a constant number of columns in each round improves upon sequential sampling all
at once. What is the optimal sampling schedule, and does it depend on the singular value spectrum
of the data matrix? In particular, can improved theoretical bounds or empirical performance be
obtained by carefully choosing how many columns to select in each round?
It would also be interesting to see the improved adaptive sampling boosting of CSSP-algorithms in
the actual applications which require column selection (such as sparse PCA or unsupervised feature
selection). How do the improved theoretical estimates we have derived carry over to these problems
(theoretically or empirically)? We leave these directions for future work.
Acknowledgements
Most of the work was done when SP was a graduate student at RPI. PD was supported by IIS-1447283 and IIS-1319280.
References
[1] Christos Boutsidis, Petros Drineas, and Malik Magdon-Ismail. Near optimal coresets for least-squares regression. IEEE Transactions on Information Theory, 59(10), October 2013.
[2] C. Boutsidis and M. Magdon-Ismail. A note on sparse least-squares regression. Information Processing Letters, 115(5):273-276, 2014.
[3] Christos Boutsidis and Malik Magdon-Ismail. Deterministic feature selection for k-means clustering. IEEE Transactions on Information Theory, 59(9), September 2013.
[4] Christos Boutsidis, Petros Drineas, and Malik Magdon-Ismail. Sparse features for PCA-like regression. In Proc. 25th Annual Conference on Neural Information Processing Systems (NIPS), 2011. To appear.
[5] Malik Magdon-Ismail and Christos Boutsidis. Optimal sparse linear auto-encoders and sparse PCA. arXiv:1502.06626, 2015.
[6] T. F. Chan and P. C. Hansen. Some applications of the rank revealing QR factorization. SIAM J. Sci. Stat. Comput., 13(3):727-741, 1992.
[7] A. Deshpande and L. Rademacher. Efficient volume sampling for row/column subset selection. In Proceedings of the IEEE 51st FOCS, pages 329-338, 2010.
[8] A. Deshpande and S. Vempala. Adaptive sampling and fast low-rank matrix approximation. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pages 292-303. Springer, 2006.
[9] A. Deshpande, L. Rademacher, S. Vempala, and G. Wang. Matrix approximation and projective clustering via volume sampling. Theory of Computing, 2(1):225-247, 2006.
[10] P. Drineas, I. Kerenidis, and P. Raghavan. Competitive recommendation systems. In Proceedings of the 34th STOC, pages 82-90, 2002.
[11] A. Frieze, R. Kannan, and S. Vempala. Fast Monte-Carlo algorithms for finding low-rank approximations. Journal of the ACM (JACM), 51(6):1025-1041, 2004.
[12] N. Halko, P. G. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev., 53(2):217-288, May 2011.
[13] E. Liberty, F. Woolfe, P. G. Martinsson, V. Rokhlin, and M. Tygert. Randomized algorithms for the low-rank approximation of matrices. PNAS, 104(51):20167-20172, 2007.
[14] Michael W. Mahoney and Petros Drineas. CUR matrix decompositions for improved data analysis. PNAS, 106(3):697-702, 2009.
[15] C. Boutsidis, P. Drineas, and M. Magdon-Ismail. Near-optimal column-based matrix reconstruction. SIAM Journal of Computing, 43(2):687-717, 2014.
[16] P. Drineas, M. W. Mahoney, and S. Muthukrishnan. Subspace sampling and relative-error matrix approximation: Column-based methods. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pages 316-326. Springer, 2006.
[17] Venkatesan Guruswami and Ali Kemal Sinop. Optimal column-based low-rank matrix reconstruction. In Proceedings of the 23rd Annual ACM-SIAM Symposium on Discrete Algorithms, pages 1207-1214, 2012.
[18] C. Boutsidis, P. Drineas, and M. Magdon-Ismail. Near optimal column-based matrix reconstruction. In IEEE 54th Annual Symposium on FOCS, pages 305-314, 2011.
[19] Petros Drineas, Michael W. Mahoney, and S. Muthukrishnan. Relative-error CUR matrix decompositions. SIAM Journal on Matrix Analysis and Applications, 30(2):844-881, 2008.
[20] T. F. Chan. Rank revealing QR factorizations. Linear Algebra and its Applications, 88-89(0):67-82, 1987.
[21] Crystal Maung and Haim Schweitzer. Pass-efficient unsupervised feature selection. In Advances in Neural Information Processing Systems, pages 1628-1636, 2013.
[22] C. Boutsidis, M. W. Mahoney, and P. Drineas. An improved approximation algorithm for the column subset selection problem. In Proceedings of the 20th SODA, pages 968-977, 2009.
[23] D. Papailiopoulos, A. Kyrillidis, and C. Boutsidis. Provable deterministic leverage score sampling. In Proc. SIGKDD, pages 997-1006, 2014.
[24] A. Deshpande, L. Rademacher, S. Vempala, and G. Wang. Matrix approximation and projective clustering via volume sampling. In Proc. SODA, pages 1117-1126, 2006.
[25] P. Drineas and M. W. Mahoney. A randomized algorithm for a tensor-based generalization of the singular value decomposition. Linear Algebra and its Applications, 420(2):553-571, 2007.
[26] P. Paschou, J. Lewis, A. Javed, and P. Drineas. Ancestry informative markers for fine-scale individual assignment to worldwide populations. Journal of Medical Genetics, 47(12):835-847, 2010.
[27] D. Davidov, E. Gabrilovich, and S. Markovitch. Parameterized generation of labeled datasets for text categorization based on a hierarchical directory. In Proc. SIGIR, pages 250-257, 2004.
| 6011 |@word norm:4 stronger:1 c0:17 confirms:1 tried:1 decomposition:5 invoking:2 tr:1 carry:2 moment:1 reduction:1 initial:4 contains:2 score:5 selecting:2 series:1 document:2 outperforms:2 existing:4 ka:47 com:1 current:1 comparing:4 surprising:1 rpi:3 additive:11 informative:1 remove:2 plot:2 selected:9 directory:1 xk:2 boosting:2 attack:1 five:1 schweitzer:1 ect:6 symposium:2 focs:2 prove:3 combine:1 manner:1 theoretically:1 expected:2 behavior:1 roughly:1 gabrilovich:1 techtc:3 actual:1 becomes:1 notation:1 what:1 ec0:3 developed:2 finding:3 guarantee:14 quantitative:1 every:1 nutshell:1 tie:2 exactly:1 rm:9 k2:3 uk:3 medical:1 lvg:9 appear:1 producing:1 sinop:1 before:1 positive:1 ak:29 approximately:2 studied:1 kxk2f:1 factorization:3 projective:2 graduate:1 practical:1 kek2f:6 practice:2 union:1 xr:1 empirical:5 adapting:1 revealing:3 projection:3 significantly:1 word:2 get:4 onto:1 selection:17 risk:1 applying:1 context:2 deterministic:6 missing:2 go:2 vit:2 focused:1 sigir:1 identifying:1 coreset:1 continued:11 spanned:1 population:1 markovitch:1 papailiopoulos:1 target:5 suppose:2 construction:1 massive:1 user:1 aik:7 us:2 hypothesis:2 satisfying:1 continues:1 predicts:1 database:1 labeled:1 wang:2 eckart:1 environment:1 pd:1 ui:3 motivate:1 raise:1 depend:1 algebra:2 ali:1 incur:1 upon:2 drineas:12 differently:1 various:1 muthukrishnan:2 fast:2 monte:1 choosing:2 apparent:1 whose:1 larger:1 encoder:1 favor:1 itself:2 final:1 propose:1 reconstruction:10 subtracting:1 combining:1 realization:1 rapidly:4 achieve:1 ismail:8 frobenius:2 qr:4 rademacher:3 produce:1 categorization:1 converges:1 leave:1 object:1 help:1 tk:15 stat:1 a2k:3 lowrank:1 implemented:1 c:4 direction:1 liberty:1 modifying:1 human:1 raghavan:1 atk:7 require:1 suffices:1 generalization:2 randomization:2 tighter:3 hold:4 exp:1 claim:2 major:1 proc:4 combinatorial:2 currently:1 hansen:1 repetition:1 reflects:1 clearly:2 rather:1 boosted:3 derived:2 focus:1 vk:1 improvement:4 rank:27 sigkdd:1 sense:1 economy:1 typically:2 selects:4 provably:1 dual:1 breakthrough:1 saurabh:1 once:3 sampling:69 identical:1 look:1 k2f:7 unsupervised:2 promote:1 future:1 report:5 few:2 randomly:2 frieze:1 individual:1 delicate:1 evaluation:1 mahoney:5 analyzed:1 extreme:3 closer:1 indexed:1 filled:1 ples:1 desired:1 e0:1 theoretical:13 column:83 tial:1 assignment:1 subset:8 entry:3 avgd:2 encoders:1 answer:1 synthetic:13 combined:1 adaptively:6 chooses:1 st:1 randomized:5 siam:5 stay:1 probabilistic:1 invoke:1 picking:1 michael:2 together:1 quickly:1 concrete:1 containing:2 opposed:1 slowly:2 ket:5 worse:1 cssp:64 return:1 actively:1 vtk:2 student:1 coresets:2 inc:1 matter:1 satisfy:1 explicitly:1 vi:1 depends:4 performed:1 break:1 start:1 decaying:3 competitive:1 contribution:2 square:2 largely:1 identify:1 modelled:1 weak:1 iterated:1 carlo:1 cc:28 randomness:1 strongest:1 definition:3 against:2 failure:1 boutsidis:9 deshpande:4 obvious:1 proof:3 petros:5 cur:2 sampled:1 gain:2 improves:3 subtle:2 schedule:2 carefully:1 improved:8 done:1 though:1 just:1 stage:3 lastly:1 eqn:1 tropp:1 replacing:1 marker:1 lack:1 mode:1 quality:2 unitarily:1 contain:2 former:2 hence:1 i2:4 round:42 generalized:1 crystal:1 demonstrate:3 delivers:1 snp:1 wise:2 novel:2 recently:1 common:1 superior:1 empirically:3 physical:1 overview:1 exponentially:1 volume:4 interpretation:1 adp:32 martinsson:2 refer:2 significant:1 rd:1 similarly:1 tygert:1 etc:1 pling:1 dominant:2 showed:3 dictated:1 perspective:1 chan:2 continue:1 drinep:1 captured:1 minimum:1 
analyzes:1 venkatesan:1 ii:3 full:1 desirable:1 pnas:2 worldwide:1 adapt:1 long:1 controlled:1 variant:1 regression:5 ae:9 expectation:4 woolfe:1 arxiv:1 represent:1 whereas:1 fine:1 singular:19 crucial:2 comment:1 extracting:1 near:11 leverage:5 paypal:2 variety:1 kyrillidis:1 motivated:2 pca:5 guruswami:1 algebraic:1 etk:1 detailed:1 se:1 repeating:1 extensively:1 simplest:2 reduced:2 exist:1 percentage:1 per:2 discrete:1 paschou:1 kept:1 letter:2 you:8 powerful:1 soda:2 parameterized:1 reader:1 seq:2 bound:16 ct:24 haim:1 annual:3 occur:1 your:1 min:1 span:2 concluding:1 performing:2 vempala:4 according:2 smaller:2 remain:1 reconstructing:1 sam:1 newer:1 rev:1 modification:2 invariant:1 needed:2 magdon:9 operation:1 apply:1 polytechnic:2 observe:2 hierarchical:1 spectral:1 remaining:1 include:2 clustering:3 matric:1 giving:1 especially:1 tensor:1 malik:5 objective:1 already:4 question:3 strategy:14 dependence:1 diagonal:1 september:1 win:1 subspace:1 sci:1 concatenation:1 kak2f:4 kekf:1 provable:2 induction:5 kannan:1 index:4 ratio:1 kemal:1 october:1 stoc:1 trace:1 design:1 attenuates:2 implementation:1 perform:1 allowing:1 javed:1 datasets:6 benchmark:2 immediate:2 rn:1 perturbation:1 introduced:1 specified:1 optimized:1 boost:6 akf:2 nip:1 maung:1 interpretability:1 memory:1 power:2 critical:2 hybrid:1 residual:8 improve:2 numerous:1 ready:1 auto:1 strawman:1 text:1 prior:14 literature:1 acknowledgement:1 kf:47 relative:43 law:3 kakf:3 highlight:1 interesting:2 generation:1 versus:1 row:6 genetics:1 changed:1 repeat:1 last:2 supported:1 aij:1 side:1 allow:1 understand:1 institute:2 sparse:8 benefit:1 dimension:1 world:3 author:1 made:1 adaptive:55 qualitatively:1 far:2 ec:3 transaction:2 approximate:1 global:1 pseudoinverse:1 sequentially:1 nopt:17 spectrum:3 ancestry:1 rensselaer:2 reality:1 chromosome:6 constructing:1 sp:1 pk:1 main:3 big:1 paul:1 gorithms:1 christos:4 deterministically:1 exponential:1 comput:1 young:1 theorem:9 showing:1 decay:5 x:1 evidence:2 essential:1 sequential:4 kx:6 subtract:1 tc:7 led:1 halko:1 jacm:1 recommendation:1 springer:2 lewis:1 acm:2 identity:1 change:1 specifically:2 lemma:4 pas:1 experimental:3 svd:2 la:1 formally:2 select:4 rokhlin:1 latter:2 dept:2 instructive:1 |
5,539 | 6,012 | Multi-class SVMs: From Tighter Data-Dependent
Generalization Bounds to Novel Algorithms
? un
? Dogan
Ur
Microsoft Research
Cambridge CB1 2FB, UK
[email protected]
Yunwen Lei
Department of Mathematics
City University of Hong Kong
[email protected]
Alexander Binder
ISTD Pillar
Singapore University of Technology and Design
Machine Learning Group, TU Berlin
alexander [email protected]
Marius Kloft
Department of Computer Science
Humboldt University of Berlin
[email protected]
Abstract
This paper studies the generalization performance of multi-class classification algorithms, for which we obtain, for the first time, a data-dependent generalization error bound with a logarithmic dependence on the class size, substantially improving the state-of-the-art linear dependence in the existing data-dependent generalization analysis. The theoretical analysis motivates us to introduce a new multi-class classification machine based on $\ell_p$-norm regularization, where the parameter $p$ controls the complexity of the corresponding bounds. We derive an efficient optimization algorithm based on Fenchel duality theory. Benchmarks on several real-world datasets show that the proposed algorithm can achieve significant accuracy gains over the state of the art.
1 Introduction
Typical multi-class application domains such as natural language processing [1], information retrieval [2], image annotation [3] and web advertising [4] involve tens or hundreds of thousands of
classes, and yet these datasets are still growing [5]. To handle such learning tasks, it is essential
to build algorithms that scale favorably with respect to the number of classes. Over the past years,
much progress in this respect has been achieved on the algorithmic side [4-7], including efficient stochastic gradient optimization strategies [8].
Although theoretical properties such as consistency [9-11] and finite-sample behavior [1, 12-15] have also been studied, there still is a discrepancy between algorithms and theory in the sense that the
corresponding theoretical bounds often do not scale well with respect to the number of classes. This discrepancy occurs most strongly in research on data-dependent generalization bounds, that is,
bounds that can measure generalization performance of prediction models purely from the training
samples, and which thus are very appealing in model selection [16]. A crucial advantage of these
bounds is that they can better capture the properties of the distribution that has generated the data,
which can lead to tighter estimates [17] than conservative data-independent bounds.
To our best knowledge, for multi-class classification, the first data-dependent error bounds were
given by [14]. These bounds exhibit a quadratic dependence on the class size and were used by [12]
and [18] to derive bounds for kernel-based multi-class classification and multiple kernel learning
(MKL) problems, respectively. More recently, [13] improved the quadratic dependence to a linear dependence by introducing a novel surrogate for the multi-class margin that is independent of the true realization of the class label.
However, a heavy dependence on the class size, such as linear or quadratic, implies a poor generalization guarantee for large-scale multi-class classification problems with a massive number of
classes. In this paper, we show data-dependent generalization bounds for multi-class classification
problems that, for the first time, exhibit a sublinear dependence on the number of classes. Choosing appropriate regularization, this dependence can be as mild as logarithmic. We achieve these improved bounds via the use of Gaussian complexities, while previous bounds are based on a well-known structural result on Rademacher complexities for classes induced by the maximum operator. The proposed proof technique based on Gaussian complexities exploits potential coupling among different components of the multi-class classifier, whereas this is ignored by previous analyses. The result shows that the generalization ability is strongly impacted by the employed regularization, which motivates us to propose a new learning machine performing block-norm regularization over the multi-class components. As a natural choice we investigate here the application of the proven $\ell_p$ norm [19]. This results in a novel $\ell_p$-norm multi-class support vector machine (MC-SVM), which
contains the classical model by Crammer & Singer [20] as a special case for p = 2. The bounds
indicate that the parameter p crucially controls the complexity of the resulting prediction models.
We develop an efficient optimization algorithm for the proposed method based on its Fenchel dual
representation. We empirically evaluate its effectiveness on several standard benchmarks for multiclass classification taken from various domains, where the proposed approach significantly outperforms the state-of-the-art method of [20].
The remainder of this paper is structured as follows. Section 2 introduces the problem setting and
presents the main theoretical results, motivated by which we propose a new multi-class classification
model in Section 3 and give an efficient optimization algorithm based on Fenchel duality theory. In
Section 4 we evaluate the approach for the application of visual image recognition and on several
standard benchmark datasets taken from various application domains. Section 5 concludes.
2 Theory
2.1 Problem Setting and Notations
This paper considers multi-class classification problems with $c \ge 2$ classes. Let $\mathcal{X}$ denote the input space and $\mathcal{Y} = \{1, 2, \dots, c\}$ denote the output space. Assume that we are given a sequence of examples $S = \{(x_1, y_1), \dots, (x_n, y_n)\} \in (\mathcal{X} \times \mathcal{Y})^n$, independently drawn according to a probability measure $P$ defined on the sample space $\mathcal{Z} = \mathcal{X} \times \mathcal{Y}$. Based on the training examples $S$, we wish to learn a prediction rule $h$ from a space $H$ of hypotheses mapping from $\mathcal{Z}$ to $\mathbb{R}$ and use the mapping $x \mapsto \arg\max_{y \in \mathcal{Y}} h(x, y)$ to predict (ties are broken by favoring classes with a lower index, for which our loss function defined below always counts an error). For any hypothesis $h \in H$, the margin $\rho_h(x, y)$ of the function $h$ at a labeled example $(x, y)$ is $\rho_h(x, y) := h(x, y) - \max_{y' \ne y} h(x, y')$. The prediction rule $h$ makes an error at $(x, y)$ if $\rho_h(x, y) \le 0$ and thus the expected risk incurred from using $h$ for prediction is $R(h) := \mathbb{E}[1_{\rho_h(x,y) \le 0}]$.
Any function $h : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ can be equivalently represented by the vector-valued function $(h_1, \dots, h_c)$ with $h_j(x) = h(x, j)$, $\forall j = 1, \dots, c$. We denote by $\tilde{H} := \{\rho_h : h \in H\}$ the class of margin functions associated to $H$. Let $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ be a Mercer kernel with $\phi$ being the associated feature map, i.e., $k(x, \tilde{x}) = \langle \phi(x), \phi(\tilde{x}) \rangle$ for all $x, \tilde{x} \in \mathcal{X}$. We denote by $\|\cdot\|_*$ the dual norm of $\|\cdot\|$, i.e., $\|w\|_* := \sup_{\|\tilde{w}\| \le 1} \langle w, \tilde{w} \rangle$. For a convex function $f$, we denote by $f^*$ its Fenchel conjugate, i.e., $f^*(v) := \sup_w [\langle w, v \rangle - f(w)]$. For any $w = (w_1, \dots, w_c)$ we define the $\ell_{2,p}$-norm by $\|w\|_{2,p} := \big[\sum_{j=1}^c \|w_j\|_2^p\big]^{1/p}$. For any $p \ge 1$, we denote by $p^*$ the dual exponent of $p$ satisfying $1/p + 1/p^* = 1$, and $\bar{p} := p(2-p)^{-1}$. We require the following definitions.
Definition 1 (Strong Convexity). A function $f : \mathcal{X} \to \mathbb{R}$ is said to be $\beta$-strongly convex w.r.t. a norm $\|\cdot\|$ iff $\forall x, y \in \mathcal{X}$ and $\forall \alpha \in (0, 1)$, we have
$$f(\alpha x + (1-\alpha)y) \le \alpha f(x) + (1-\alpha)f(y) - \frac{\beta}{2}\alpha(1-\alpha)\|x - y\|^2.$$
Definition 2 (Regular Loss). We call $\ell$ an $L$-regular loss if it satisfies the following properties:
(i) $\ell(t)$ bounds the 0-1 loss from above: $\ell(t) \ge 1_{t \le 0}$;
(ii) $\ell$ is $L$-Lipschitz in the sense $|\ell(t_1) - \ell(t_2)| \le L|t_1 - t_2|$;
(iii) $\ell(t)$ is decreasing and it has a zero point $c_\ell$, i.e., $\ell(c_\ell) = 0$.
Some examples of $L$-regular loss functions include the hinge loss $\ell_h(t) = (1 - t)_+$ and the margin loss
$$\ell_\gamma(t) = 1_{t \le 0} + (1 - t\gamma^{-1})\,1_{0 < t \le \gamma}, \qquad \gamma > 0. \qquad (1)$$
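For illustration, the multi-class margin and the margin loss of equation (1) translate directly into code (a sketch with 0-indexed class labels):

```python
import numpy as np

def margin(scores, y):
    """rho_h(x, y) = h(x, y) - max_{y' != y} h(x, y') for a score vector
    scores = (h(x, 1), ..., h(x, c)) and 0-indexed label y."""
    return scores[y] - np.delete(scores, y).max()

def margin_loss(t, gamma):
    """The margin loss l_gamma(t) from equation (1)."""
    if t <= 0:
        return 1.0
    return 1.0 - t / gamma if t <= gamma else 0.0
```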
2.2 Main results
Our discussion on data-dependent generalization error bounds is based on the established methodology of Rademacher and Gaussian complexities [21].
Definition 3 (Rademacher and Gaussian Complexity). Let H be a family of real-valued functions
defined on Z and S = (z1 , . . . , zn ) a fixed sample of size n with elements in Z. Then, the empirical
Rademacher and Gaussian complexities of H with respect to the sample S are defined by
$$R_S(H) = \mathbb{E}_\sigma\Big[\sup_{h \in H} \frac{1}{n}\sum_{i=1}^n \sigma_i h(z_i)\Big], \qquad G_S(H) = \mathbb{E}_g\Big[\sup_{h \in H} \frac{1}{n}\sum_{i=1}^n g_i h(z_i)\Big],$$
where $\sigma_1, \dots, \sigma_n$ are independent random variables taking values $+1$ or $-1$ with equal probability, and $g_1, \dots, g_n$ are independent $N(0, 1)$ random variables.
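For a finite hypothesis class, both quantities can be estimated by straightforward Monte Carlo; a sketch follows (for infinite $H$ the supremum must of course be handled analytically, so the finite `H_values` matrix is an assumption for illustration):

```python
import numpy as np

def empirical_complexity(H_values, gaussian=True, n_draws=2000, seed=0):
    """Estimate G_S(H) (or R_S(H) with gaussian=False) for a finite class.
    H_values is a (num_hypotheses, n) array whose rows are (h(z_1), ..., h(z_n))."""
    rng = np.random.default_rng(seed)
    n = H_values.shape[1]
    if gaussian:
        noise = rng.standard_normal((n_draws, n))           # g_i ~ N(0, 1)
    else:
        noise = rng.choice([-1.0, 1.0], size=(n_draws, n))  # sigma_i Rademacher
    # per draw: sup over hypotheses of (1/n) * sum_i noise_i * h(z_i)
    return (noise @ H_values.T / n).max(axis=1).mean()
```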
Note that we have the following comparison inequality relating Rademacher and Gaussian complexities (Cf. Section 4.2 in [22]):
$$\sqrt{\frac{\pi}{2}}\, R_S(H) \le G_S(H) \le 3\sqrt{\frac{\pi \log n}{2}}\, R_S(H). \qquad (2)$$
Existing work on data-dependent generalization bounds for multi-class classifiers [12-14, 18] builds
on the following structural result on Rademacher complexities (e.g., [12], Lemma 8.1):
$$R_S\big(\{\max\{h_1, \dots, h_c\} : h_j \in H_j,\, j = 1, \dots, c\}\big) \le \sum_{j=1}^c R_S(H_j), \qquad (3)$$
where $H_1, \dots, H_c$ are $c$ hypothesis sets. This result is crucial for the standard generalization analysis of multi-class classification since the margin $\rho_h$ involves the maximum operator, which is removed by (3), but at the expense of a linear dependency on the class size. In the following we show that this linear dependency is suboptimal because (3) does not take into account the coupling among different classes. For example, a common regularizer used in multi-class learning algorithms is $r(h) = \sum_{j=1}^c \|h_j\|_2^2$ [20], for which the components $h_1, \dots, h_c$ are correlated via a $\|\cdot\|_{2,2}$ regularizer, and the bound (3) ignoring this correlation would not be effective in this case [12-14, 18].
As a remedy, we here introduce a new structural complexity result on function classes induced
by general classes via the maximum operator, while preserving the correlations among different components. Instead of considering the Rademacher complexity, Lemma 4
concerns the structural relationship of Gaussian complexities since it is based on a comparison result
among different Gaussian processes.
Lemma 4 (Structural result on Gaussian complexity). Let $H$ be a class of functions defined on $\mathcal{X} \times \mathcal{Y}$ with $\mathcal{Y} = \{1, \dots, c\}$. Let $g_1, \dots, g_{nc}$ be independent $N(0, 1)$ distributed random variables. Then, for any sample $S = \{x_1, \dots, x_n\}$ of size $n$, we have
$$G_S\big(\{\max\{h_1, \dots, h_c\} : h = (h_1, \dots, h_c) \in H\}\big) \le \frac{1}{n}\,\mathbb{E}_g \sup_{h = (h_1, \dots, h_c) \in H} \sum_{i=1}^n \sum_{j=1}^c g_{(j-1)n+i}\, h_j(x_i), \qquad (4)$$
where $\mathbb{E}_g$ denotes the expectation w.r.t. the Gaussian variables $g_1, \dots, g_{nc}$.
The proof of Lemma 4 is given in Supplementary Material A. Equipped with Lemma 4, we are
now able to present a general data-dependent margin-based generalization bound. The proof of the
following results (Theorem 5, Theorem 7 and Corollary 8) is given in Supplementary Material B.
Theorem 5 (Data-dependent generalization bound for multi-class classification). Let H ⊆ R^{X×Y} be a hypothesis class with Y = {1, ..., c}. Let ℓ be an L-regular loss function and denote B_ℓ := sup_{(x,y),h} ℓ(ρ_h(x, y)). Suppose that the examples S = {(x₁, y₁), ..., (x_n, y_n)} are independently drawn from a probability measure defined on X × Y. Then, for any δ > 0, with probability at least 1 − δ, the following multi-class classification generalization bound holds for any h ∈ H:
R(h) ≤ (1/n) Σ_{i=1}^{n} ℓ(ρ_h(x_i, y_i)) + (2L√(2π)/n)·E_g sup_{h=(h₁,...,h_c)∈H} Σ_{i=1}^{n} Σ_{j=1}^{c} g_{(j−1)n+i} h_j(x_i) + 3B_ℓ √(log(2/δ)/(2n)),
where g₁, ..., g_{nc} are independent N(0, 1) distributed random variables.
Remark 6. Under the same condition of Theorem 5, [12] derive the following data-dependent generalization bound (cf. Corollary 8.1 in [12]):
R(h) ≤ (1/n) Σ_{i=1}^{n} ℓ(ρ_h(x_i, y_i)) + (4Lc/n)·R_S({x ↦ h(x, y) : y ∈ Y, h ∈ H}) + 3B_ℓ √(log(2/δ)/(2n)).
This linear dependence on c is due to the use of (3). For comparison, Theorem 5 implies that the dependence on c is governed by the term E_g sup_{h∈H} Σ_{i=1}^{n} Σ_{j=1}^{c} g_{(j−1)n+i} h_j(x_i), an advantage of which is that the components h₁, ..., h_c are jointly coupled. As we will see, this allows us to derive an improved result with a favorable dependence on c when a constraint is imposed on (h₁, ..., h_c).
The following Theorem 7 applies the general result in Theorem 5 to kernel-based methods. The
hypothesis space is defined by imposing a constraint with a general strongly convex function.
Theorem 7 (Data-dependent generalization bound for kernel-based multi-class learning algorithms and MC-SVMs). Suppose that the hypothesis space is defined by
H := H_{f,Λ} = {h_w = (⟨w₁, φ(x)⟩, ..., ⟨w_c, φ(x)⟩) : f(w) ≤ Λ},
where f is a β-strongly convex function w.r.t. a norm ‖·‖ defined on H satisfying f*(0) = 0. Let ℓ be an L-regular loss function and denote B_ℓ := sup_{(x,y),h} ℓ(ρ_h(x, y)). Let g₁, ..., g_{nc} be independent N(0, 1) distributed random variables. Then, for any δ > 0, with probability at least 1 − δ we have
R(h_w) ≤ (1/n) Σ_{i=1}^{n} ℓ(ρ_{h_w}(x_i, y_i)) + (4L/n)·√(2Λ/β)·E_g ‖( Σ_{i=1}^{n} g_{(j−1)n+i} φ(x_i) )_{j=1,...,c}‖_* + 3B_ℓ √(log(2/δ)/(2n)),
where ‖·‖_* denotes the dual norm of ‖·‖.
We now consider the following specific hypothesis spaces using a ‖·‖_{2,p} constraint:
H_{p,Λ} := {h_w = (⟨w₁, φ(x)⟩, ..., ⟨w_c, φ(x)⟩) : ‖w‖_{2,p} ≤ Λ},   1 ≤ p ≤ 2.   (5)
Corollary 8 (ℓ_p-norm MC-SVM generalization bound). Let ℓ be an L-regular loss function and denote B_ℓ := sup_{(x,y),h} ℓ(ρ_h(x, y)). Then, with probability at least 1 − δ, for any h_w ∈ H_{p,Λ} the generalization error R(h_w) can be upper bounded by:
(1/n) Σ_{i=1}^{n} ℓ(ρ_{h_w}(x_i, y_i)) + 3B_ℓ √(log(2/δ)/(2n)) + (2LΛ/n)·√(Σ_{i=1}^{n} k(x_i, x_i)) · { e(4 log c)^{1+1/(2 log c)}, if p* ≥ 2 log c;  (2p*)^{1+1/p*} c^{1/p*}, otherwise },
where p* := p/(p − 1) is the conjugate exponent of p.
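For intuition about the class-size dependence, the sketch below tabulates the piecewise factor appearing in the bound, as reconstructed above (so the exact constants should be read with that caveat); the switch from the polynomial regime to the logarithmic regime at p* = 2 log c is clearly visible:

import numpy as np

def class_size_factor(c, p):
    # Piecewise class-size factor from Corollary 8 (as reconstructed above);
    # p_star is the conjugate exponent of p.
    p_star = p / (p - 1.0)
    if p_star >= 2.0 * np.log(c):
        return np.e * (4.0 * np.log(c)) ** (1.0 + 1.0 / (2.0 * np.log(c)))
    return (2.0 * p_star) ** (1.0 + 1.0 / p_star) * c ** (1.0 / p_star)

for p in (1.1, 1.5, 2.0):
    print(p, [round(class_size_factor(c, p), 1) for c in (10, 100, 1000)])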
Remark 9. The bounds in Corollary 8 enjoy a mild dependence on the number of classes. The dependence is polynomial with exponent 1/p* for 2 < p* < 2 log c, and it becomes logarithmic if p* ≥ 2 log c. Even in the theoretically unfavorable case p = 2 [20], the bounds still exhibit only a radical (square-root) dependence on the number of classes, which is substantially milder than the quadratic dependence established in [12, 14, 18] and the linear dependence established in [13]. Our generalization bound is data-dependent and shows clearly how the margin would affect the generalization performance (when ℓ is the margin loss ℓ_γ): a large margin γ increases the empirical error while decreasing the model's complexity, and vice versa.
2.3 Comparison of the Achieved Bounds to the State of the Art
Related work on data-independent bounds. The large body of theoretical work on multi-class learning considers data-independent bounds. Based on the ℓ∞-norm covering number bound of linear operators, [15] obtain a generalization bound exhibiting a linear dependence on the class size, which is improved by [9] to a radical dependence of the form O(n^{−1/2} log^{3/2}(n) √c). Under conditions
analogous to Corollary 8, [23] derive a class-size independent generalization guarantee. However,
their bound is based on a delicate definition of margin, which is why it is commonly not used in the
mainstream multi-class literature. [1] derive the following generalization bound:
E[(1/p) log(1 + Σ_{ỹ≠y} e^{p(ρ − ⟨w_y − w_{ỹ}, φ(x)⟩)})] − inf_{w∈H} E[(1/p) log(1 + Σ_{ỹ≠y} e^{p(ρ − ⟨w_y − w_{ỹ}, φ(x)⟩)})] ≤ (λ/(2(n+1)))·‖w‖²_{2,2} + 2 sup_{x∈X} k(x, x)/(λn),   (6)
where ρ is a margin condition, p > 0 a scaling factor, and λ a regularization parameter. Eq. (6) is
class-size independent, yet Corollary 8 shows superiority in the following aspects. First, for SVMs (i.e., the margin loss ℓ_γ), our bound consists of an empirical error term (1/n) Σ_{i=1}^{n} ℓ_γ(ρ_{h_w}(x_i, y_i)) and a complexity term divided by the margin value (note that L = 1/γ in Corollary 8). When the margin is large (which is often desirable) [14], the last term in the bound given by Corollary 8 becomes small, while, on the contrary, the bound (6) is an increasing function of ρ, which is undesirable. Secondly, Theorem 7 applies to general loss functions, expressed through a strongly convex function over a general hypothesis space, while the bound (6) only applies to a specific regularization algorithm. Lastly, all the above mentioned results are conservative data-independent estimates.
Related work on data-dependent bounds. The techniques used in the above mentioned papers do not straightforwardly translate to data-dependent bounds, which is the type of bounds in the focus of the present work. The investigation of these was initiated, to our best knowledge, by [14]: with the structural complexity bound (3) for function classes induced via the maximal operator, [14] derive a margin bound admitting a quadratic dependency on the number of classes. [12] use these results of [14] to study the generalization performance of MC-SVMs, where the components h₁, ..., h_c are coupled with a ‖·‖_{2,p}, p ≥ 1, constraint. Due to the usage of the suboptimal Eq. (3), [12] obtain a margin bound growing quadratically w.r.t. the number of classes. [18] develop a new multi-class classification algorithm based on a natural notion called the multi-class margin of a kernel. [18] also present a novel multi-class Rademacher complexity margin bound based on Eq. (3), and this bound also depends quadratically on the class size. More recently, [13] give a refined Rademacher complexity bound with a linear dependence on the class size. The key reason for this improvement is the introduction of ρ̃_{γ,h} := min_{y′∈Y} [h(x, y) − h(x, y′) + γ·1_{y′=y}], which bounds the margin ρ_h from below; since the maximum operation in ρ̃_{γ,h} is applied to the set Y rather than to the subset Y \ {y_i} used for ρ_h, one need not consider the random realization of y_i. We also use this trick in our proof of Theorem 5. However, [13] fail to improve this linear dependence to a logarithmic dependence, as we achieve in Corollary 8, due to the use of the suboptimal structural result (3).
3 Algorithms
Motivated by the generalization analysis given in Section 2, we now present a new multi-class learning algorithm, based on performing empirical risk minimization in the hypothesis space (5).
This corresponds to the following ℓ_p-norm MC-SVM (1 ≤ p ≤ 2):
Problem 10 (Primal problem: ℓ_p-norm MC-SVM).
min_w  (1/2)·[ Σ_{j=1}^{c} ‖w_j‖₂^p ]^{2/p} + C Σ_{i=1}^{n} ℓ(t_i),   (P)
s.t.  t_i = ⟨w_{y_i}, φ(x_i)⟩ − max_{y≠y_i} ⟨w_y, φ(x_i)⟩,   i = 1, ..., n.
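For concreteness, a direct evaluation of the objective of (P) for a linear kernel might look as follows (a sketch under our own naming, with W ∈ R^{c×d} holding the rows w_j):

import numpy as np

def primal_objective(W, X, y, C, p):
    # Objective of Problem (P): 0.5 * (sum_j ||w_j||_2^p)^(2/p) + C * sum_i hinge(t_i),
    # with t_i the margin of example i. X has shape (n, d); y holds labels in {0,...,c-1}.
    scores = X @ W.T                               # (n, c)
    n = X.shape[0]
    correct = scores[np.arange(n), y]
    rivals = scores.copy()
    rivals[np.arange(n), y] = -np.inf              # max over y' != y_i
    t = correct - rivals.max(axis=1)
    reg = 0.5 * (np.sum(np.linalg.norm(W, axis=1) ** p)) ** (2.0 / p)
    return reg + C * np.maximum(1.0 - t, 0.0).sum()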
For p = 2 we recover the seminal multi-class algorithm by Crammer & Singer [20] (CS), which is thus a special case of the proposed formulation. An advantage of the proposed approach over [20] is that, as shown in Corollary 8, the dependence of the generalization performance on the class size becomes milder as p decreases to 1.
3.1 Dual problems
Since the optimization problem (P) is convex, we can derive the associated dual problem for the construction of efficient optimization algorithms. The derivation of the following dual problem is deferred to Supplementary Material C. For a matrix α ∈ R^{n×c}, we denote by α_i the i-th row. Denote by e_j the j-th unit vector in R^c and by 1 the vector in R^c with all components equal to one.
Problem 11 (Completely dualized problem for general loss). The Lagrangian dual of Problem 10 is:
sup_{α∈R^{n×c}}  −(1/2)·[ Σ_{j=1}^{c} ‖Σ_{i=1}^{n} α_{ij} φ(x_i)‖₂^{p/(p−1)} ]^{2(p−1)/p} − C Σ_{i=1}^{n} ℓ*(−α_{iy_i}/C)   (D)
s.t.  α_{ij} ≤ 0 ∧ α_i · 1 = 0,   ∀j ≠ y_i, i = 1, ..., n.
Theorem 12 (Representer theorem). For any dual variable α ∈ R^{n×c}, the associated primal variable w = (w₁, ..., w_c) minimizing the Lagrangian saddle problem can be represented by:
w_j = [ Σ_{j̃=1}^{c} ‖Σ_{i=1}^{n} α_{ij̃} φ(x_i)‖₂^{p*} ]^{2/p* − 1} · ‖Σ_{i=1}^{n} α_{ij} φ(x_i)‖₂^{p* − 2} · Σ_{i=1}^{n} α_{ij} φ(x_i),
where p* = p/(p − 1).
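With a linear kernel φ(x) = x, Theorem 12 turns a dual matrix α into the primal weights by rescaling the vectors v_j = Σᵢ α_{ij} x_i with block norms. A minimal sketch (ours):

import numpy as np

def primal_from_dual(alpha, X, p):
    # Representer formula of Theorem 12 for a linear kernel: v_j = sum_i alpha_ij x_i,
    # then w_j is a rescaling of v_j by block norms.
    p_star = p / (p - 1.0)
    V = alpha.T @ X                                        # (c, d), rows v_j
    norms = np.maximum(np.linalg.norm(V, axis=1), 1e-12)   # guard against zero blocks
    scale = np.sum(norms ** p_star) ** (2.0 / p_star - 1.0)
    return scale * (norms ** (p_star - 2.0))[:, None] * V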
For the hinge loss ℓ_h(t) = (1 − t)₊, we know its Fenchel–Legendre conjugate is ℓ*_h(t) = t if −1 ≤ t ≤ 0 and ∞ elsewise. Hence ℓ*_h(−α_{iy_i}/C) = −α_{iy_i}/C if −1 ≤ −α_{iy_i}/C ≤ 0 and ∞ elsewise. Now we have the following dual problem for the hinge loss function:
Problem 13 (Completely dualized problem for the hinge loss (ℓ_p-norm MC-SVM)).
sup_{α∈R^{n×c}}  −(1/2)·[ Σ_{j=1}^{c} ‖Σ_{i=1}^{n} α_{ij} φ(x_i)‖₂^{p/(p−1)} ]^{2(p−1)/p} + Σ_{i=1}^{n} α_{iy_i}   (7)
s.t.  α_i ≤ C·e_{y_i} ∧ α_i · 1 = 0,   ∀i = 1, ..., n.
3.2 Optimization Algorithms
The dual problems (D) and (7) are not quadratic programs for p ≠ 2, and thus generally not easy to solve. To circumvent this difficulty, we rewrite Problem 10 as the following equivalent problem:
min_{w,β}  Σ_{j=1}^{c} ‖w_j‖₂² / (2β_j) + C Σ_{i=1}^{n} ℓ(t_i)
s.t.  t_i ≤ ⟨w_{y_i}, φ(x_i)⟩ − ⟨w_y, φ(x_i)⟩,   y ≠ y_i, i = 1, ..., n,   (8)
      ‖β‖_{p̄} ≤ 1,   p̄ = p(2 − p)^{−1},   β_j ≥ 0.
The class weights β₁, ..., β_c in Eq. (8) play a similar role as the kernel weights in ℓ_p-norm MKL algorithms [19]. The equivalence between problem (P) and Eq. (8) follows directly from Lemma 26 in [24], which shows that the optimal β = (β₁, ..., β_c) in Eq. (8) can be explicitly represented in closed form. Motivated by the recent work on ℓ_p-norm MKL, we propose to solve problem (8) by alternately optimizing w and β. As we will show, given temporarily fixed β, the optimization of w reduces to a standard multi-class classification problem. Furthermore, the update of β, given fixed w, can be achieved via an analytic formula.
Problem 14 (Partially dualized problem for a general loss). For fixed β, the partial dual problem for the sub-optimization problem (8) w.r.t. w is
sup_{α∈R^{n×c}}  −(1/2) Σ_{j=1}^{c} β_j ‖Σ_{i=1}^{n} α_{ij} φ(x_i)‖₂² − C Σ_{i=1}^{n} ℓ*(−α_{iy_i}/C)   (9)
s.t.  α_{ij} ≤ 0 ∧ α_i · 1 = 0,   ∀j ≠ y_i, i = 1, ..., n.
The primal variable w minimizing the associated Lagrangian saddle problem is
w_j = β_j Σ_{i=1}^{n} α_{ij} φ(x_i).   (10)
We defer the proof to Supplementary Material C. Analogous to Problem 13, we have the following partial dual problem for the hinge loss.
Problem 15 (Partially dualized problem for the hinge loss (ℓ_p-norm MC-SVM)).
sup_{α∈R^{n×c}}  f(α) := −(1/2) Σ_{j=1}^{c} β_j ‖Σ_{i=1}^{n} α_{ij} φ(x_i)‖₂² + Σ_{i=1}^{n} α_{iy_i}   (11)
s.t.  α_i ≤ C·e_{y_i} ∧ α_i · 1 = 0,   ∀i = 1, ..., n.
Problems 14 and 15 are quadratic, so we can use the dual coordinate ascent algorithm [25] to solve them very efficiently in the case of linear kernels. To this end, we need to compute the gradient and solve the restricted problem of optimizing only one α_i at a time, keeping all other dual variables fixed [25]. The gradient of f can be exactly represented through w:
∂f/∂α_{ij} = −β_j Σ_{ĩ=1}^{n} α_{ĩj} k(x_i, x_ĩ) + 1_{y_i=j} = 1_{y_i=j} − ⟨w_j, φ(x_i)⟩.   (12)
Suppose the additive change to be applied to the current α_i is Δα_i; then from (12) we have
f(α₁, ..., α_{i−1}, α_i + Δα_i, α_{i+1}, ..., α_n) = Σ_{j=1}^{c} (∂f/∂α_{ij}) Δα_{ij} − (1/2) Σ_{j=1}^{c} β_j k(x_i, x_i) [Δα_{ij}]² + const.
Therefore, the sub-problem of optimizing Δα_i is given by
max_{Δα_i}  −(1/2) Σ_{j=1}^{c} β_j k(x_i, x_i) [Δα_{ij}]² + Σ_{j=1}^{c} (∂f/∂α_{ij}) Δα_{ij}   (13)
s.t.  Δα_i ≤ C·e_{y_i} − α_i ∧ Δα_i · 1 = 0.
We now consider the subproblem of updating the class weights β with temporarily fixed w, for which we have the following analytic solution. The proof is deferred to Supplementary Material C.1.
Proposition 16 (Solving the subproblem with respect to the class weights). Given fixed w_j, the minimal β_j optimizing problem (8) is attained at
β_j = ‖w_j‖₂^{2−p} · [ Σ_{j̃=1}^{c} ‖w_{j̃}‖₂^{p} ]^{(p−2)/p}.   (14)
The update of β_j based on Eq. (14) requires calculating ‖w_j‖₂², which can be easily fulfilled by recalling the representation established in Eq. (10).
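Eq. (14) is a one-line computation; a sketch (ours) for the linear-kernel case, where w_j is available explicitly:

import numpy as np

def update_class_weights(W, p):
    # Closed-form update (14): beta_j = ||w_j||^{2-p} * (sum_j ||w_j||^p)^{(p-2)/p}.
    # One can check that the resulting beta satisfies ||beta||_{p_bar} = 1 with
    # p_bar = p / (2 - p), as required by problem (8).
    norms = np.maximum(np.linalg.norm(W, axis=1), 1e-12)
    return norms ** (2.0 - p) * np.sum(norms ** p) ** ((p - 2.0) / p)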
The resulting training algorithm for the proposed ℓ_p-norm MC-SVM is given in Algorithm 1. The algorithm alternates between solving an MC-SVM problem for fixed class weights (Line 3) and updating the class weights in a closed-form manner (Line 5). Recall that Problem 11 establishes a completely dualized problem, which can be used as a sound stopping criterion for Algorithm 1.

Algorithm 1: Training algorithm for ℓ_p-norm MC-SVM.
input: examples {(x_i, y_i)}_{i=1}^{n} and the kernel k.
initialize β_j = (1/c)^{1/p̄}, w_j = 0 for all j = 1, ..., c
while optimality conditions are not satisfied do
    optimize the multi-class classification problem (9)
    compute ‖w_j‖₂² for all j = 1, ..., c, according to Eq. (10)
    update β_j for all j = 1, ..., c, according to Eq. (14)
end
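A schematic rendering of Algorithm 1 is given below. The inner solver is a placeholder of our own (any dual coordinate ascent implementation of (9), as in [25], can play this role); everything else follows the pseudocode above and reuses update_class_weights from the previous sketch:

import numpy as np

def lp_norm_mcsvm(X, y, c, C, p, solver, T=20, tol=1e-4):
    # Schematic Algorithm 1. `solver(X, y, beta, C)` is assumed to return the primal
    # weights W = (w_1, ..., w_c) of the weighted MC-SVM problem (9)-(10).
    beta = np.full(c, (1.0 / c) ** ((2.0 - p) / p))   # beta_j = (1/c)^{1/p_bar}
    W = np.zeros((c, X.shape[1]))
    for _ in range(T):
        W = solver(X, y, beta, C)                     # Line 3: inner MC-SVM solve
        new_beta = update_class_weights(W, p)         # Line 5: closed-form Eq. (14)
        if np.linalg.norm(new_beta - beta) < tol:     # a simple surrogate stopping rule
            break
        beta = new_beta
    return W, beta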
4 Empirical Analysis
We implemented the proposed ℓ_p-norm MC-SVM algorithm (Algorithm 1) in C++ and solved the involved MC-SVM problem using dual coordinate ascent [25]. We experiment on six benchmark datasets: the Sector dataset studied in [26], the News 20 dataset collected by [27], the Rcv1 dataset collected by [28], Birds 15 and Birds 50, both derived from [29], and the Caltech 256 dataset collected by Griffin et al. (2007). We used fc6 features from the BVLC reference caffenet of [30]. Table 1 gives information on these datasets.
We compare with the classical CS in [20], which constitutes a strong baseline for these datasets [25]. We employ 5-fold cross validation on the training set to tune the regularization parameter C by grid search over the set {2^{−12}, 2^{−11}, ..., 2^{12}} and p over 10 equidistant points from 1.1 to 2. We repeat the experiments 10 times, and report in Table 2 the average accuracy and standard deviations attained on the test set.
Dataset      | No. of Classes | No. of Training Examples | No. of Test Examples | No. of Attributes
Sector       | 105            | 6,412                    | 3,207                | 55,197
News 20      | 20             | 15,935                   | 3,993                | 62,060
Rcv1         | 53             | 15,564                   | 518,571              | 47,236
Birds 15     | 200            | 3,000                    | 8,788                | 4,096
Birds 50     | 200            | 9,958                    | 1,830                | 4,096
Caltech 256  | 256            | 12,800                   | 16,980               | 4,096
Table 1: Description of datasets used in the experiments.
Method / Dataset  | Sector     | News 20    | Rcv1       | Birds 15   | Birds 50   | Caltech 256
ℓ_p-norm MC-SVM   | 94.20±0.34 | 86.19±0.12 | 85.74±0.71 | 13.73±1.4  | 27.86±0.2  | 56.00±1.2
Crammer & Singer  | 93.89±0.27 | 85.12±0.29 | 85.21±0.32 | 12.53±1.6  | 26.28±0.3  | 54.96±1.1
Table 2: Accuracies achieved by CS and the proposed ℓ_p-norm MC-SVM on the benchmark datasets.
We observe that the proposed ℓ_p-norm MC-SVM consistently outperforms CS [20] on all considered datasets. Specifically, our method attains a 0.31% accuracy gain on Sector, a 1.07% accuracy gain on News 20, a 0.53% accuracy gain on Rcv1, a 1.2% accuracy gain on Birds 15, a 1.58% accuracy gain on Birds 50, and a 1.04% accuracy gain on Caltech 256. We perform a Wilcoxon signed rank test between the accuracies of CS and our method on the benchmark datasets; the p-value is 0.03, which means that our method is significantly better than CS at the significance level of 0.05. These promising results indicate that the proposed ℓ_p-norm MC-SVM could further lift the state of the art in multi-class classification, even in real-world applications beyond the ones studied in this paper.
5 Conclusion
Motivated by the ever growing size of multi-class datasets in real-world applications such as image annotation and web advertising, which involve tens or hundreds of thousands of classes, we studied the influence of the class size on the generalization behavior of multi-class classifiers. We focus here on data-dependent generalization bounds, which enjoy the ability to capture the properties of the distribution that has generated the data. Of independent interest, for hypothesis classes that are given as a maximum over base classes, we developed a new structural result on Gaussian complexities that is able to preserve the coupling among different components, while the existing structural results ignore this coupling and may yield suboptimal generalization bounds. We applied the new structural result to study learning rates for multi-class classifiers and derived, for the first time, a data-dependent bound with a logarithmic dependence on the class size, which substantially improves on the linear dependence in the state-of-the-art data-dependent generalization bounds.
Motivated by the theoretical analysis, we proposed a novel ℓ_p-norm MC-SVM, where the parameter p controls the complexity of the corresponding bounds. This class of algorithms contains the classical CS [20] as a special case for p = 2. We developed an effective optimization algorithm based on the Fenchel dual representation. For several standard benchmarks taken from various domains, the proposed approach surpassed the state-of-the-art method of CS [20] by up to 1.5%.
A future direction will be to derive a data-dependent bound that is completely independent of the class size (even overcoming the mild logarithmic dependence obtained here). To this end, we will study structural results more powerful than Lemma 4 for controlling the complexities of function classes induced via the maximum operator. As a good starting point, we will consider ℓ∞-norm covering numbers.
Acknowledgments
We thank Mehryar Mohri for helpful discussions. This work was partly funded by the German
Research Foundation (DFG) award KL 2698/2-1.
References
[1] T. Zhang, "Class-size independent generalization analysis of some discriminative multi-category classification," in Advances in Neural Information Processing Systems, pp. 1625–1632, 2004.
[2] T. Hofmann, L. Cai, and M. Ciaramita, "Learning with taxonomies: Classifying documents and words," in NIPS Workshop on Syntax, Semantics, and Statistics, 2003.
[3] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," in Computer Vision and Pattern Recognition (CVPR), pp. 248–255, IEEE, 2009.
[4] A. Beygelzimer, J. Langford, Y. Lifshits, G. Sorkin, and A. Strehl, "Conditional probability tree estimation analysis and algorithms," in Proceedings of UAI, pp. 51–58, AUAI Press, 2009.
[5] S. Bengio, J. Weston, and D. Grangier, "Label embedding trees for large multi-class tasks," in Advances in Neural Information Processing Systems, pp. 163–171, 2010.
[6] P. Jain and A. Kapoor, "Active learning for large multi-class problems," in Computer Vision and Pattern Recognition (CVPR), pp. 762–769, IEEE, 2009.
[7] O. Dekel and O. Shamir, "Multiclass-multilabel classification with more classes than examples," in International Conference on Artificial Intelligence and Statistics, pp. 137–144, 2010.
[8] M. R. Gupta, S. Bengio, and J. Weston, "Training highly multiclass classifiers," The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1461–1492, 2014.
[9] T. Zhang, "Statistical analysis of some multi-category large margin classification methods," The Journal of Machine Learning Research, vol. 5, pp. 1225–1251, 2004.
[10] A. Tewari and P. L. Bartlett, "On the consistency of multiclass classification methods," The Journal of Machine Learning Research, vol. 8, pp. 1007–1025, 2007.
[11] T. Glasmachers, "Universal consistency of multi-class support vector classification," in Advances in Neural Information Processing Systems, pp. 739–747, 2010.
[12] M. Mohri, A. Rostamizadeh, and A. Talwalkar, Foundations of Machine Learning. MIT Press, 2012.
[13] V. Kuznetsov, M. Mohri, and U. Syed, "Multi-class deep boosting," in Advances in Neural Information Processing Systems, pp. 2501–2509, 2014.
[14] V. Koltchinskii and D. Panchenko, "Empirical margin distributions and bounding the generalization error of combined classifiers," Annals of Statistics, pp. 1–50, 2002.
[15] Y. Guermeur, "Combining discriminant models with new multi-class SVMs," Pattern Analysis & Applications, vol. 5, no. 2, pp. 168–179, 2002.
[16] L. Oneto, D. Anguita, A. Ghio, and S. Ridella, "The impact of unlabeled patterns in Rademacher complexity theory for kernel classifiers," in Advances in Neural Information Processing Systems, pp. 585–593, 2011.
[17] V. Koltchinskii and D. Panchenko, "Rademacher processes and bounding the risk of function learning," in High Dimensional Probability II, pp. 443–457, Springer, 2000.
[18] C. Cortes, M. Mohri, and A. Rostamizadeh, "Multi-class classification with maximum margin multiple kernel," in ICML-13, pp. 46–54, 2013.
[19] M. Kloft, U. Brefeld, S. Sonnenburg, and A. Zien, "Lp-norm multiple kernel learning," The Journal of Machine Learning Research, vol. 12, pp. 953–997, 2011.
[20] K. Crammer and Y. Singer, "On the algorithmic implementation of multiclass kernel-based vector machines," The Journal of Machine Learning Research, vol. 2, pp. 265–292, 2002.
[21] P. L. Bartlett and S. Mendelson, "Rademacher and Gaussian complexities: Risk bounds and structural results," J. Mach. Learn. Res., vol. 3, pp. 463–482, 2002.
[22] M. Ledoux and M. Talagrand, Probability in Banach Spaces: Isoperimetry and Processes, vol. 23. Berlin: Springer, 1991.
[23] S. I. Hill and A. Doucet, "A framework for kernel-based multi-category classification," J. Artif. Intell. Res. (JAIR), vol. 30, pp. 525–564, 2007.
[24] C. A. Micchelli and M. Pontil, "Learning the kernel function via regularization," Journal of Machine Learning Research, pp. 1099–1125, 2005.
[25] S. S. Keerthi, S. Sundararajan, K.-W. Chang, C.-J. Hsieh, and C.-J. Lin, "A sequential dual method for large scale multi-class linear SVMs," in 14th ACM SIGKDD, pp. 408–416, ACM, 2008.
[26] J. D. Rennie and R. Rifkin, "Improving multiclass text classification with the support vector machine," Tech. Rep. AIM-2001-026, MIT, 2001.
[27] K. Lang, "Newsweeder: Learning to filter netnews," in Proceedings of the 12th International Conference on Machine Learning, pp. 331–339, 1995.
[28] D. D. Lewis, Y. Yang, T. G. Rose, and F. Li, "Rcv1: A new benchmark collection for text categorization research," The Journal of Machine Learning Research, vol. 5, pp. 361–397, 2004.
[29] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona, "Caltech-UCSD Birds 200," Tech. Rep. CNS-TR-2010-001, California Institute of Technology, 2010.
[30] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, "Caffe: Convolutional architecture for fast feature embedding," arXiv preprint arXiv:1408.5093, 2014.
5,540 | 6,013 | Optimal Linear Estimation under Unknown
Nonlinear Transform
Xinyang Yi
The University of Texas at Austin
[email protected]
Zhaoran Wang
Princeton University
[email protected]
Constantine Caramanis
The University of Texas at Austin
[email protected]
Han Liu
Princeton University
[email protected]
Abstract
Linear regression studies the problem of estimating a model parameter β* ∈ R^p from n observations {(y_i, x_i)}_{i=1}^{n} of the linear model y_i = ⟨x_i, β*⟩ + ε_i. We consider a significant generalization in which the relationship between ⟨x_i, β*⟩ and y_i is noisy, quantized to a single bit, potentially nonlinear, noninvertible, as well as unknown. This model is known as the single-index model in statistics, and, among other things, it represents a significant generalization of one-bit compressed sensing. We propose a novel spectral-based estimation procedure and show that we can recover β* in settings (i.e., classes of link function f) where previous algorithms fail. In general, our algorithm requires only very mild restrictions on the (unknown) functional relationship between y_i and ⟨x_i, β*⟩. We also consider the high dimensional setting where β* is sparse, and introduce a two-stage nonconvex framework that addresses estimation challenges in high dimensional regimes where p ≫ n. For a broad class of link functions between ⟨x_i, β*⟩ and y_i, we establish minimax lower bounds that demonstrate the optimality of our estimators in both the classical and high dimensional regimes.
1 Introduction
We consider a generalization of the one-bit quantized regression problem, where we seek to recover the regression coefficient β* ∈ R^p from one-bit measurements. Specifically, suppose that X is a random vector in R^p and Y is a binary random variable taking values in {−1, 1}. We assume the conditional distribution of Y given X takes the form
P(Y = 1 | X = x) = (1/2)·f(⟨x, β*⟩) + 1/2,   (1.1)
where f : R → [−1, 1] is called the link function. We aim to estimate β* from n i.i.d. observations {(y_i, x_i)}_{i=1}^{n} of the pair (Y, X). In particular, we assume the link function f is unknown. Without any loss of generality, we take β* to be on the unit sphere S^{p−1}, since its magnitude can always be incorporated into the link function f.
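For concreteness, data from model (1.1) can be simulated as follows (a sketch with our own naming; f is any link function mapping into [−1, 1]):

import numpy as np

def sample_model(n, beta_star, f, rng=np.random.default_rng(0)):
    # Draw (y_i, x_i) pairs from (1.1): P(Y = 1 | X = x) = f(<x, beta*>)/2 + 1/2.
    p = beta_star.shape[0]
    X = rng.standard_normal((n, p))
    prob_pos = 0.5 * f(X @ beta_star) + 0.5
    y = np.where(rng.random(n) < prob_pos, 1.0, -1.0)
    return y, X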
The model in (1.1) is simple but general. Under specific choices of the link function f, (1.1) immediately leads to many practical models in machine learning and signal processing, including logistic regression and one-bit compressed sensing. In settings where the link function is assumed to be known, a popular estimation procedure is to compute an estimator that minimizes a certain loss function. However, for particular link functions, this approach involves minimizing a nonconvex objective function for which the global minimizer is in general intractable to obtain. Furthermore, it is difficult or even impossible to know the link function in practice, and a poor choice of link function may result in inaccurate parameter estimation and high prediction error. We take a more general approach, and in particular, target the setting where f is unknown. We propose an algorithm that can estimate the parameter β* in the absence of prior knowledge of the link function f. As our results make precise, our algorithm succeeds as long as the function f satisfies a single moment condition. As we demonstrate, this moment condition is only a mild restriction on f. In particular, our methods and theory are widely applicable even to settings where f is non-smooth, e.g., f(z) = sign(z), or noninvertible, e.g., f(z) = sin(z).
In particular, as we show in §2, our restrictions on f are sufficiently flexible that our results provide a unified framework encompassing a broad range of problems, including logistic regression, one-bit compressed sensing, one-bit phase retrieval, as well as their robust extensions. We use these important examples to illustrate our results, and discuss them at several points throughout the paper.
Main contributions. The key conceptual contribution of this work is a novel use of the method of moments. Rather than considering moments of the covariate X and the response variable Y, we look at moments of differences of covariates and differences of response variables. Such a simple yet critical observation enables everything that follows and leads to our spectral-based procedure.
We also make two theoretical contributions. First, we simultaneously establish the statistical and computational rates of convergence of the proposed spectral algorithm. We consider both the low dimensional setting where the number of samples exceeds the dimension and the high dimensional setting where the dimensionality may (greatly) exceed the number of samples. In both these settings, our proposed algorithm achieves the same statistical rate of convergence as that of linear regression applied to data generated by the linear model without quantization. Second, we provide minimax lower bounds for the statistical rate of convergence, and thereby establish the optimality of our procedure within a broad model class. In the low dimensional setting, our results obtain the optimal rate with the optimal sample complexity. In the high dimensional setting, our algorithm requires estimating a sparse eigenvector, and thus our sample complexity coincides with what is believed to be the best achievable via polynomial time methods [2]; the error rate itself, however, is information-theoretically optimal. We discuss this further in §3.4.
Related works. Our model in (1.1) is close to the single-index model (SIM) in statistics. In the SIM, we assume that the response–covariate pair (Y, X) is determined by
Y = f(⟨X, β*⟩) + W   (1.2)
with unknown link function f and noise W. Our setting is a special case of this, as we restrict Y to be a binary random variable. The single-index model is a classical topic, and there is therefore extensive literature, too much to review exhaustively. We therefore outline the pieces of work most relevant to our setting and our results. For estimating β* in (1.2), a feasible approach is M-estimation [8, 9, 12], in which the unknown link function f is jointly estimated using nonparametric estimators. Although these M-estimators have been shown to be consistent, they are not computationally efficient, since they involve solving a nonconvex optimization problem. Another approach to estimating β* is the average derivative estimator (ADE; [24]). Further improvements of ADE are considered in [13, 22]. ADE and its related methods require that the link function f is at least differentiable, and thus exclude important models such as one-bit compressed sensing with f(z) = sign(z). Beyond estimating β*, the works in [15, 16] focus on iteratively estimating a function f and a vector β that are good for prediction, and they attempt to control the generalization error. Their algorithms are based on isotonic regression, and are therefore only applicable when the link function is monotonic and satisfies Lipschitz constraints. The work discussed above focuses on the low dimensional setting where p ≪ n. Another related line of work is sufficient dimension reduction, where the goal is to find a subspace U of the input space such that the response Y only depends on the projection U^⊤X. The single-index model and our problem can be regarded as special cases of this problem, as we are primarily interested in recovering a one-dimensional subspace. Due to space limits, we refer readers to the long version of this paper for a detailed survey [29].
In the high dimensional regime with p ≫ n, where β* has some structure (for us this means sparsity), we note there has been some recent progress [1] on estimating f via PAC-Bayesian methods. In the special case when f is a linear function, sparse linear regression has attracted extensive study over the years. The recent work by Plan et al. [21] is closest to our setting. They consider the setting of normal covariates, X ∼ N(0, I_p), and they propose a marginal regression estimator for estimating β* that, like our approach, requires no prior knowledge about f. Their proposed algorithm relies on the assumption that E_{z∼N(0,1)}[zf(z)] ≠ 0, and hence cannot work for link functions that are even. As we describe below, our algorithm is based on a novel moment-based estimator and avoids requiring such a condition, thus allowing us to handle even link functions under a very mild moment restriction, which we describe in detail below. Generally, the work in [21] requires different conditions, and thus, beyond the discussion above, is not directly comparable to the work here. In cases where both approaches apply, the results are minimax optimal.
2 Example models
In this section, we discuss several popular (and important) models in machine learning and signal processing that fall into our general model (1.1) under specific link functions. Variants of these models have been studied extensively in the recent literature. These examples trace through the paper, and we use them to illustrate the details of our algorithms and results.
Logistic regression. In logistic regression (LR), we assume that
P(Y = 1 | X = x) = 1 / (1 + exp(−⟨x, β*⟩ − η)),
where η is the intercept. The link function corresponds to f(z) = (exp(z + η) − 1)/(exp(z + η) + 1). One robust variant of LR is called flipped logistic regression, where we assume that the labels Y generated from the standard LR model are flipped with probability p_e, i.e.,
P(Y = 1 | X = x) = (1 − p_e) / (1 + exp(−⟨x, β*⟩ − η)) + p_e / (1 + exp(⟨x, β*⟩ + η)).
This reduces to the standard LR model when p_e = 0. For flipped LR, the link function f can be written as
f(z) = (exp(z + η) − 1)/(exp(z + η) + 1) + 2p_e · (1 − exp(z + η))/(1 + exp(z + η)).   (2.1)
Flipped LR has been studied by [19, 25]. In both papers, estimating β* is based on minimizing a surrogate loss function involving a certain tuning parameter connected to p_e. However, p_e is unknown in practice. In contrast to their approaches, our method does not hinge on the unknown parameter p_e. Our approach has the same formulation for both standard and flipped LR, and thus unifies the two models.
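A direct implementation of the link (2.1) is a one-liner; algebraically, (2.1) simplifies to (1 − 2p_e)·tanh((z + η)/2), which is the numerically stable form used in this sketch (ours; p_e = 0 recovers the standard logistic link):

import numpy as np

def flipped_lr_link(z, eta=0.0, pe=0.0):
    # Link (2.1): (e^{z+eta}-1)/(e^{z+eta}+1) + 2*pe*(1-e^{z+eta})/(1+e^{z+eta}),
    # which equals (1 - 2*pe) * tanh((z + eta) / 2).
    return (1.0 - 2.0 * pe) * np.tanh((z + eta) / 2.0)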
One-bit compressed sensing. One-bit compressed sensing (CS) aims at recovering sparse signals from quantized linear measurements (see e.g., [11, 20]). In detail, we define B₀(s, p) := {β ∈ R^p : |supp(β)| ≤ s} as the set of sparse vectors in R^p with at most s nonzero elements. We assume (Y, X) ∈ {−1, 1} × R^p satisfies
Y = sign(⟨X, β*⟩),   (2.2)
where β* ∈ B₀(s, p). In this paper, we also consider its robust version with noise ε, i.e., Y = sign(⟨X, β*⟩ + ε). Assuming ε ∼ N(0, σ²), the link function f of robust one-bit CS thus corresponds to
f(z) = 2·(1/√(2πσ²)) ∫₀^∞ e^{−(u−z)²/(2σ²)} du − 1.   (2.3)
Note that (2.2) also corresponds to the probit regression model without the sparsity constraint on β*. Throughout the paper, we do not distinguish between the two model names: model (2.2) is referred to as one-bit compressed sensing even in the case where β* is not sparse.
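The Gaussian integral in (2.3) evaluates in closed form to f(z) = 2Φ(z/σ) − 1 = erf(z/(σ√2)), where Φ is the standard normal CDF, which gives a one-line implementation (ours, for illustration):

from math import erf, sqrt

def robust_one_bit_cs_link(z, sigma):
    # f(z) = 2 * Phi(z / sigma) - 1 = erf(z / (sigma * sqrt(2))),
    # i.e., E[Y | X = x] for Y = sign(<x, beta*> + eps), eps ~ N(0, sigma^2).
    return erf(z / (sigma * sqrt(2.0)))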
One-bit phase retrieval. The goal of phase retrieval (e.g., [5]) is to recover signals based on linear measurements with the phase information erased, i.e., the pair (Y, X) ∈ R × R^p is determined by the equation Y = |⟨X, β*⟩|. Analogous to one-bit compressed sensing, we consider a new model named one-bit phase retrieval, where the linear measurement with phase information erased is quantized to one bit. In detail, the pair (Y, X) ∈ {−1, 1} × R^p is linked through Y = sign(|⟨X, β*⟩| − θ), where θ is the quantization threshold. Compared with one-bit compressed sensing, this problem is more difficult because Y depends on β* only through the magnitude of ⟨X, β*⟩ instead of its value. It is also more difficult than the original phase retrieval problem due to the additional quantization. In our general model, the link function thus corresponds to
f(z) = sign(|z| − θ).   (2.4)
It is worth noting that, unlike in the previous models, here f is neither odd nor monotonic.
3 Main results
We now turn to our algorithms for estimating β* in both the low and high dimensional settings. We first introduce a second moment estimator based on pairwise differences. We prove that the eigenstructure of the constructed second moment estimator encodes the information of β*. We then propose algorithms to estimate β* based upon this second moment estimator. In the high dimensional setting where β* is sparse, computing the top eigenvector of our pairwise-difference matrix reduces to computing a sparse eigenvector. Beyond algorithms, we discuss the minimax lower bound in §3.5. We present simulation results in §3.6.
3.1 Conditions for success
We now introduce several key quantities, which allow us to state precisely the conditions required for the success of our algorithm.
Definition 3.1. For any (unknown) link function f, define the quantity φ(f) as follows:
φ(f) := μ₁² − μ₀μ₂ + μ₀²,   (3.1)
where μ₀, μ₁ and μ₂ are given by
μ_k := E[f(Z)Z^k],   k = 0, 1, 2, ...,   (3.2)
where Z ∼ N(0, 1).
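These moments are easy to estimate numerically. The following Monte-Carlo sketch (ours, for illustration) computes φ(f) for two of the example links; e.g., for f = sign one gets φ(f) = μ₁² = 2/π > 0:

import numpy as np
rng = np.random.default_rng(0)
Z = rng.standard_normal(2_000_000)

def phi(f):
    # phi(f) = mu_1^2 - mu_0 * mu_2 + mu_0^2 with mu_k = E[f(Z) Z^k], Z ~ N(0, 1),
    # estimated by plain Monte Carlo.
    fz = f(Z)
    mu0, mu1, mu2 = fz.mean(), (fz * Z).mean(), (fz * Z ** 2).mean()
    return mu1 ** 2 - mu0 * mu2 + mu0 ** 2

print(phi(np.sign))                                  # noiseless one-bit CS: about 2/pi
print(phi(lambda z: np.sign(np.abs(z) - 2.0)))       # one-bit phase retrieval, theta = 2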
As we discuss in detail below, the key condition for the success of our algorithm is φ(f) ≠ 0. As we show below, this is a relatively mild condition; in particular, it is satisfied by the three examples introduced in §2. For odd and monotonic f, we have φ(f) > 0 unless f(z) = 0 for all z, in which case no algorithm is able to recover β*. For even f, we have μ₁ = 0; thus φ(f) ≠ 0 if and only if μ₀ ≠ μ₂.
3.2 Second moment estimator
We describe a novel moment estimator that enables our algorithm. Let {(y_i, x_i)}_{i=1}^{n} be the n i.i.d. observations of (Y, X). Assuming without loss of generality that n is even, we consider the following key transformation
Δy_i := y_{2i} − y_{2i−1},   Δx_i := x_{2i} − x_{2i−1},   (3.3)
for i = 1, 2, ..., n/2. Our procedure is based on the following second moment
M := (2/n) Σ_{i=1}^{n/2} Δy_i² Δx_i Δx_i^⊤ ∈ R^{p×p}.   (3.4)
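Constructing M from data takes a few lines (a sketch with our own naming; y ∈ {−1, 1}ⁿ and X ∈ R^{n×p} with n even):

import numpy as np

def second_moment(y, X):
    # Construct M from (3.3)-(3.4): pair consecutive samples, take differences,
    # and average Delta_y^2 * Delta_x Delta_x^T over the n/2 pairs.
    dy = y[1::2] - y[0::2]              # Delta y_i, length n/2
    dX = X[1::2] - X[0::2]              # Delta x_i, shape (n/2, p)
    return (2.0 / len(y)) * (dX * (dy ** 2)[:, None]).T @ dX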
The intuition behind this second moment is as follows. By (1.1), the variation of X along the direction β* has the largest impact on the variation of ⟨X, β*⟩. Thus, the variation of Y directly depends on the variation of X along β*. Consequently, {(Δy_i, Δx_i)}_{i=1}^{n/2} encodes the information of such a dependency relationship. In the following, we make this intuition more rigorous by analyzing the eigenstructure of E(M) and its relationship with β*.
Lemma 3.2. For β* ∈ S^{p−1}, assume that (Y, X) ∈ {−1, 1} × R^p satisfies (1.1). For X ∼ N(0, I_p), we have
E(M) = 4φ(f)·β*β*^⊤ + 4(1 − μ₀²)·I_p,   (3.5)
where μ₀ and φ(f) are defined in (3.2) and (3.1).
Lemma 3.2 proves that β* is the leading eigenvector of E(M) as long as the eigengap φ(f) is positive. If instead we have φ(f) < 0, we can use a related moment estimator which has analogous properties.
To this end, define M′ := (2/n) Σ_{i=1}^{n/2} (y_{2i} + y_{2i−1})² Δx_i Δx_i^⊤. In parallel to Lemma 3.2, we have a similar result for M′, stated below.
Corollary 3.3. Under the setting of Lemma 3.2,
E(M′) = −4φ(f)·β*β*^⊤ + 4(1 + μ₀²)·I_p.
Corollary 3.3 therefore shows that when φ(f) < 0, we can construct another second moment estimator M′ such that β* is the leading eigenvector of E(M′). As discussed above, this is precisely the setting for one-bit phase retrieval when the quantization threshold in (2.4) satisfies θ < θ_m. For simplicity of the discussion, hereafter we assume that φ(f) > 0 and focus on the second moment estimator M defined in (3.4).
A natural question to ask is whether φ(f) ≠ 0 holds for specific models. The following lemma demonstrates exactly this for the example models introduced in §2.
Lemma 3.4. (a) Consider flipped logistic regression, where f is given in (2.1). Setting the intercept to be η = 0, we have φ(f) ≳ (1 − 2p_e)². (b) For robust one-bit compressed sensing, where f is given in (2.3), we have
φ(f) ≳ min{ (1 − σ²)²/(1 + σ²)², C′σ⁴/(1 + σ³)² }.
(c) For one-bit phase retrieval, where f is given in (2.4): for Z ∼ N(0, 1), let θ_m be the median of |Z|, i.e., P(|Z| ≤ θ_m) = 1/2. We have |φ(f)| ≳ θ|θ − θ_m| exp(−θ²) and sign[φ(f)] = sign(θ − θ_m). We thus obtain φ(f) > 0 for θ > θ_m.
3.3 Low dimensional recovery
We consider estimating β* in the classical (low dimensional) setting where p ≪ n. Based on the second moment estimator M defined in (3.4), estimating β* amounts to solving a noisy eigenvalue problem. We solve this by a simple iterative algorithm: provided an initial vector β⁰ ∈ S^{p−1} (which may be chosen at random), we perform power iterations as shown in Algorithm 1.
Theorem 3.5. We assume X ∼ N(0, I_p) and that (Y, X) follows (1.1). Let {(y_i, x_i)}_{i=1}^{n} be n i.i.d. samples of the response–input pair (Y, X). Consider any link function f in (1.1) with μ₀, φ(f) defined in (3.2) and (3.1), and with φ(f) > 0 (recall that we have an analogous treatment, and thus analogous results, for φ(f) < 0). We let
ξ := (1 − μ₀²)/(φ(f) + 1 − μ₀²) + 1/2,   and   γ := [ξφ(f) + (ξ − 1)(1 − μ₀²)] / [(1 + ξ)(φ(f) + 1 − μ₀²)].   (3.6)
There exist constants C_i such that when n ≥ C₁p/ξ², for Algorithm 1 we have, with probability at least 1 − 2 exp(−C₂p),
‖β^t − β*‖₂ ≤ C₃·(φ(f) + 1 − μ₀²)/φ(f)·√(p/n)  [Statistical Error]  +  √((1 − ν²)/ν²)·γ^t  [Optimization Error],   for t = 1, ..., T_max.   (3.7)
Here ν = ⟨β⁰, β̂⟩, where β̂ is the first leading eigenvector of M.
Note that by (3.6) we have γ ∈ (0, 1). Thus, the optimization error term in (3.7) decreases at a geometric rate to zero as t increases. For T_max sufficiently large, such that the statistical and optimization error terms in (3.7) are of the same order, we have ‖β^{T_max} − β*‖₂ ≲ √(p/n). This statistical rate of convergence matches the rate of estimating a p-dimensional vector in linear regression without any quantization, and will later be shown to be optimal. This result shows that the lack of prior knowledge of the link function and the information loss from quantization do not keep our procedure from attaining the optimal statistical rate.
3.4 High dimensional recovery
Next we consider the high dimensional setting where p ≫ n and β* is sparse, i.e., β* ∈ S^{p−1} ∩ B₀(s, p) with s being the support size. Although this high dimensional estimation problem is closely
related to the well-studied sparse PCA problem, the existing works [4, 6, 17, 23, 27, 28, 31, 32] on sparse PCA do not provide a direct solution to our problem. In particular, they either lack statistical guarantees on the convergence rate of the obtained estimator [6, 23, 28] or rely on the properties of the sample covariance matrix of Gaussian data [4, 17], which are violated by the second moment estimator defined in (3.4). For the sample covariance matrix of sub-Gaussian data, [27] prove that the convex relaxation proposed by [7] achieves a suboptimal s√(log p/n) rate of convergence. Yuan and Zhang [31] propose the truncated power method, and show that it attains the optimal √(s log p/n) rate locally; that is, it exhibits this rate of convergence only in a neighborhood of the true solution where ⟨β⁰, β*⟩ > C, where C > 0 is some constant. It is well understood that, for a random initialization on S^{p−1}, such a condition fails with probability going to one as p → ∞.

Algorithm 1 Low dimensional recovery
Input: {(y_i, x_i)}_{i=1}^{n}, number of iterations T_max
1: Second moment estimation: construct M from the samples according to (3.4).
2: Initialization: choose a random vector β⁰ ∈ S^{p−1}.
3: For t = 1, 2, ..., T_max do
4:   β^t ← M · β^{t−1}
5:   β^t ← β^t / ‖β^t‖₂
6: end For
Output: β^{T_max}

Algorithm 2 Sparse recovery
Input: {(y_i, x_i)}_{i=1}^{n}, number of iterations T_max, regularization parameter ρ, sparsity level ŝ.
1: Second moment estimation: construct M from the samples according to (3.4).
2: Initialization:
3:   Π⁰ ← argmin_{Π∈R^{p×p}} { −⟨M, Π⟩ + ρ‖Π‖_{1,1} | Tr(Π) = 1, 0 ⪯ Π ⪯ I }   (3.8)
4:   β⁰ ← first leading eigenvector of Π⁰
5:   β⁰ ← trunc(β⁰, ŝ)
6:   β⁰ ← β⁰ / ‖β⁰‖₂
7: For t = 1, 2, ..., T_max do
8:   β^t ← trunc(M · β^{t−1}, ŝ)
9:   β^t ← β^t / ‖β^t‖₂
10: end For
Output: β^{T_max}
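Both algorithms are straightforward to implement; a minimal sketch (ours) of the power step of Algorithm 1 and of the truncation-based iterations 7-10 of Algorithm 2 follows (the convex initialization (3.8) would be solved separately, e.g. by ADMM):

import numpy as np

def power_iteration(M, T, rng=np.random.default_rng(0)):
    # Algorithm 1: plain power method on the second moment estimator M.
    b = rng.standard_normal(M.shape[0])
    b /= np.linalg.norm(b)
    for _ in range(T):
        b = M @ b
        b /= np.linalg.norm(b)
    return b

def trunc(v, s):
    # Keep the s largest entries of v in magnitude, zero out the rest.
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

def truncated_power(M, b0, s_hat, T):
    # Iterations 7-10 of Algorithm 2, started from the convex-relaxation output b0.
    b = trunc(b0, s_hat)
    b /= np.linalg.norm(b)
    for _ in range(T):
        b = trunc(M @ b, s_hat)
        b /= np.linalg.norm(b)
    return b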
Instead, we propose a two-stage procedure for estimating β* in our setting. In the first stage, we adapt the convex relaxation proposed by [27] and use it as an initialization step, in order to obtain a good enough initial point satisfying the condition ⟨β⁰, β*⟩ > C. The convex optimization problem can be easily solved by the alternating direction method of multipliers (ADMM) algorithm (see [3, 27] for details). We then adapt the truncated power method. The overall procedure is given in Algorithm 2. In particular, we define the truncation operator trunc(·, ·) by [trunc(β, s)]_j = 1(j ∈ S)·β_j, where S is the index set corresponding to the top s largest |β_j|. The initialization phase of our algorithm requires O(s² log p) samples to succeed (see below for more precise details). As the work in [2] suggests, it is unlikely that a polynomial time algorithm can avoid such a dependence. However, once we are near the solution, as we show, this two-step procedure achieves the optimal error rate of √(s log p/n).
Theorem 3.6. Let
ζ := [4(1 − μ₀²) + φ(f)] / [4(1 − μ₀²) + 3φ(f)] < 1,   (3.9)
and let the minimum sample size be
n_min := C·s² log p·[(1 − μ₀²) + φ(f)]² / ( φ(f)²·[min{ζ(1 − ζ^{1/2})/2, ζ/8}]² ).   (3.10)
Suppose ρ = C′[φ(f) + (1 − μ₀²)]·√(log p/n) with a sufficiently large constant C′, where φ(f) and μ₀ are specified in (3.1) and (3.2). Meanwhile, assume the sparsity parameter ŝ in Algorithm 2 is set to be ŝ = C″ max{1/(ζ^{−1/2} − 1)², 1}·s*. For n ≥ n_min with n_min defined in (3.10), we have
‖β^t − β*‖₂ ≤ C·[φ(f) + (1 − μ₀²)]^{5/2}(1 − μ₀²)^{1/2} / ( φ(f)³·min{(1 − ζ^{1/2})/2, 1/8} )·√(s log p/n)  [Statistical Error]  +  ζ^{t/2}  [Optimization Error]   (3.11)
with high probability. Here ζ is defined in (3.9).
The first term on the right-hand side of (3.11) is the statistical error, while the second term gives the optimization error. Note that the optimization error decays at a geometric rate since ζ < 1. For T_max sufficiently large, we have ‖β^{T_max} − β*‖₂ ≲ √(s log p/n). In the sequel, we show that the right-hand side gives the optimal statistical rate of convergence for a broad model class under the high dimensional setting with p ≫ n.
3.5 Minimax lower bound
We establish the minimax lower bound for estimating β* in the model defined in (1.1). In the sequel we define the family of link functions that are Lipschitz continuous and bounded away from ±1. Formally, for any m ∈ (0, 1) and L > 0, we define
F(m, L) := { f : |f(z)| ≤ 1 − m, |f(z) − f(z′)| ≤ L|z − z′|, for all z, z′ ∈ R }.   (3.12)
Let X_f^n := {(y_i, x_i)}_{i=1}^{n} be the n i.i.d. realizations of (Y, X), where X follows N(0, I_p) and Y satisfies (1.1) with link function f. Correspondingly, we denote the estimator of β* ∈ B by β̂(X_f^n), where B is the domain of β*. We define the minimax risk for estimating β* as
R(n, m, L, B) := inf_{β̂(X_f^n)} inf_{f∈F(m,L)} sup_{β*∈B} E ‖β̂(X_f^n) − β*‖₂.   (3.13)
In the above definition, we take the infimum not only over all possible estimators β̂, but also over all possible link functions in F(m, L). For a fixed f, our formulation recovers the standard definition of minimax risk [30]. By taking the infimum over all link functions, our formulation characterizes the minimax lower bound under the least challenging f in F(m, L). In the sequel we prove that our procedure attains such a minimax lower bound for the least challenging f, given any unknown link function in F(m, L). That is to say, even when f is unknown, our estimation procedure is as accurate as in the setting where we are provided the least challenging f, and the achieved accuracy is not improvable due to the information-theoretic limit. The following theorem establishes the minimax lower bound in the high dimensional setting.
Theorem 3.7. Let B = S^{p−1} ∩ B₀(s, p). We assume that n > m(1 − m)(2L²)^{−1}·[Cs log(p/s)/2 − log 2]. For any s ∈ (0, p/4], the minimax risk defined in (3.13) satisfies
R(n, m, L, B) ≥ C′·(√(m(1 − m))/L)·√(s log(p/s)/n).
Here C and C′ are absolute constants, while m and L are defined in (3.12).
Theorem 3.7 establishes the minimax optimality of the statistical rate attained by our procedure for p ≫ n and s-sparse β*. In particular, for arbitrary f ∈ F(m, L) ∩ {f : φ(f) > 0}, the estimator β̂ attained by Algorithm 2 is minimax-optimal in the sense that its √(s log p/n) rate of convergence is not improvable, even when the information on the link function f is available. For general β* ∈ R^p, one can show that the best possible convergence rate is Ω(√(m(1 − m)p/n)/L), by setting s = p/4 in Theorem 3.7.
It is worth noting that our lower bound becomes trivial for m = 0, i.e., when there exists some z such that |f(z)| = 1. One example is noiseless one-bit compressed sensing, for which we have f(z) = sign(z). In fact, for noiseless one-bit compressed sensing, the √(s log p/n) rate is not optimal. For example, Jacques et al. [14] provide an algorithm (with exponential running time) that achieves the rate s log p/n. Understanding such a rate-transition phenomenon for link functions with zero margin, i.e., m = 0 in (3.12), is an interesting future direction.
3.6 Numerical results
We now turn to the numerical results that support our theory. For the three models introduced in §2, we apply Algorithm 1 and Algorithm 2 to parameter estimation in the classical and high dimensional regimes. Our simulations are based on synthetic data. For classical recovery, β* is randomly chosen from S^{p−1}; for sparse recovery, we set β*_j = s^{−1/2}·1(j ∈ S) for all j ∈ [p], where S is a random index subset of [p] with size s. In Figure 1, as predicted by Theorem 3.5, we observe that the same value of √(p/n) leads to nearly identical estimation error. Figure 2 demonstrates similar results for the predicted rate √(s log p/n) of sparse recovery, and thus validates Theorem 3.6.
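An end-to-end run of the low dimensional experiment can be sketched as follows (ours, reusing second_moment and power_iteration from the sketches above; the leading eigenvector is identified only up to sign, hence the minimum over ±):

import numpy as np
rng = np.random.default_rng(0)

p, n, sigma, T = 20, 4000, 0.1, 50
beta_star = rng.standard_normal(p)
beta_star /= np.linalg.norm(beta_star)

X = rng.standard_normal((n, p))
y = np.sign(X @ beta_star + sigma * rng.standard_normal(n))  # robust one-bit CS

M = second_moment(y, X)          # construction (3.4)
b = power_iteration(M, T)        # Algorithm 1
err = min(np.linalg.norm(b - beta_star), np.linalg.norm(b + beta_star))
print(err, np.sqrt(p / n))       # the error tracks sqrt(p/n) up to constants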
[Figure 1: Estimation error of low dimensional recovery, plotted against √(p/n) for p ∈ {10, 20, 40}. (a) Flipped logistic regression, p_e = 0.1. (b) One-bit compressed sensing, σ² = 0.1. (c) One-bit phase retrieval, θ = 1.]
[Figure 2: Estimation error of sparse recovery, plotted against √(s log p/n) for p ∈ {100, 200} and s ∈ {5, 10}. (a) Flipped logistic regression, p_e = 0.1. (b) One-bit compressed sensing, σ² = 0.1. (c) One-bit phase retrieval, θ = 1.]
4 Discussion
Sample complexity. In the high dimensional regime, while our algorithm achieves the optimal convergence rate, the sample complexity we need is Ω(s² log p). The natural question is whether it can be reduced to O(s log p). We note that breaking the s² log p barrier is challenging. Consider the simpler problem of sparse phase retrieval, where y_i = |⟨x_i, β*⟩|: despite a fairly extensive body of literature, the state-of-the-art efficient algorithms (i.e., with polynomial running time) for recovering sparse β* require sample complexity Ω(s² log p) [10]. It remains open whether it is possible to perform consistent sparse recovery with O(s log p) samples by any polynomial time algorithm.
Acknowledgment
XY and CC would like to acknowledge NSF grants 1056028, 1302435 and 1116955. This research
was also partially supported by the U.S. Department of Transportation through the Data-Supported
Transportation Operations and Planning (D-STOP) Tier 1 University Transportation Center. HL is
grateful for the support of NSF CAREER Award DMS1454377, NSF IIS1408910, NSF IIS1332109,
NIH R01MH102339, NIH R01GM083084, and NIH R01HG06841. ZW was partially supported by
an MSR PhD fellowship while this work was done.
References
[1] Alquier, P. and Biau, G. (2013). Sparse single-index model. Journal of Machine Learning Research, 14 243–280.
[2] Berthet, Q. and Rigollet, P. (2013). Complexity theoretic lower bounds for sparse principal component detection. In Conference on Learning Theory.
[3] Boyd, S., Parikh, N., Chu, E., Peleato, B. and Eckstein, J. (2011). Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3 1–122.
[4] Cai, T. T., Ma, Z. and Wu, Y. (2013). Sparse PCA: Optimal rates and adaptive estimation. Annals of Statistics, 41 3074–3110.
[5] Candès, E. J., Eldar, Y. C., Strohmer, T. and Voroninski, V. (2013). Phase retrieval via matrix completion. SIAM Journal on Imaging Sciences, 6 199–225.
[6] d'Aspremont, A., Bach, F. and El Ghaoui, L. (2008). Optimal solutions for sparse principal component analysis. Journal of Machine Learning Research, 9 1269–1294.
[7] d'Aspremont, A., El Ghaoui, L., Jordan, M. I. and Lanckriet, G. R. (2007). A direct formulation for sparse PCA using semidefinite programming. SIAM Review 434–448.
[8] Delecroix, M., Hristache, M. and Patilea, V. (2000). Optimal smoothing in semiparametric index approximation of regression functions. Tech. rep., Interdisciplinary Research Project: Quantification and Simulation of Economic Processes.
[9] Delecroix, M., Hristache, M. and Patilea, V. (2006). On semiparametric M-estimation in single-index regression. Journal of Statistical Planning and Inference, 136 730–769.
[10] Eldar, Y. C. and Mendelson, S. (2014). Phase retrieval: Stability and recovery guarantees. Applied and Computational Harmonic Analysis, 36 473–494.
[11] Gopi, S., Netrapalli, P., Jain, P. and Nori, A. (2013). One-bit compressed sensing: Provable support and vector recovery. In International Conference on Machine Learning.
[12] Härdle, W., Hall, P. and Ichimura, H. (1993). Optimal smoothing in single-index models. Annals of Statistics, 21 157–178.
[13] Hristache, M., Juditsky, A. and Spokoiny, V. (2001). Direct estimation of the index coefficient in a single-index model. Annals of Statistics, 29 595–623.
[14] Jacques, L., Laska, J. N., Boufounos, P. T. and Baraniuk, R. G. (2011). Robust 1-bit compressive sensing via binary stable embeddings of sparse vectors. arXiv preprint arXiv:1104.3160.
[15] Kakade, S. M., Kanade, V., Shamir, O. and Kalai, A. (2011). Efficient learning of generalized linear and single index models with isotonic regression. In Advances in Neural Information Processing Systems.
[16] Kalai, A. T. and Sastry, R. (2009). The isotron algorithm: High-dimensional isotonic regression. In Conference on Learning Theory.
[17] Ma, Z. (2013). Sparse principal component analysis and iterative thresholding. The Annals of Statistics, 41 772–801.
[18] Massart, P. and Picard, J. (2007). Concentration inequalities and model selection, vol. 1896. Springer.
[19] Natarajan, N., Dhillon, I., Ravikumar, P. and Tewari, A. (2013). Learning with noisy labels. In Advances in Neural Information Processing Systems.
[20] Plan, Y. and Vershynin, R. (2013). One-bit compressed sensing by linear programming. Communications on Pure and Applied Mathematics, 66 1275–1297.
[21] Plan, Y., Vershynin, R. and Yudovina, E. (2014). High-dimensional estimation with geometric constraints. arXiv preprint arXiv:1404.3749.
[22] Powell, J. L., Stock, J. H. and Stoker, T. M. (1989). Semiparametric estimation of index coefficients. Econometrica, 57 1403–1430.
[23] Shen, H. and Huang, J. (2008). Sparse principal component analysis via regularized low rank matrix approximation. Journal of Multivariate Analysis, 99 1015–1034.
[24] Stoker, T. M. (1986). Consistent estimation of scaled coefficients. Econometrica, 54 1461–1481.
[25] Tibshirani, J. and Manning, C. D. (2013). Robust logistic regression using shift parameters. arXiv preprint arXiv:1305.4987.
[26] Vershynin, R. (2010). Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027.
[27] Vu, V. Q., Cho, J., Lei, J. and Rohe, K. (2013). Fantope projection and selection: A near-optimal convex relaxation of sparse PCA. In Advances in Neural Information Processing Systems.
[28] Witten, D., Tibshirani, R. and Hastie, T. (2009). A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics, 10 515–534.
[29] Yi, X., Wang, Z., Caramanis, C. and Liu, H. (2015). Optimal linear estimation under unknown nonlinear transform. arXiv preprint arXiv:1505.03257.
[30] Yu, B. (1997). Assouad, Fano, and Le Cam. In Festschrift for Lucien Le Cam. Springer, 423–435.
[31] Yuan, X.-T. and Zhang, T. (2013). Truncated power method for sparse eigenvalue problems. Journal of Machine Learning Research, 14 899–925.
[32] Zou, H., Hastie, T. and Tibshirani, R. (2006). Sparse principal component analysis. Journal of Computational and Graphical Statistics, 15 265–286.
5,541 | 6,014 | Risk-Sensitive and Robust Decision-Making:
a CVaR Optimization Approach
Yinlam Chow
Stanford University
[email protected]
Aviv Tamar
UC Berkeley
[email protected]
Shie Mannor
Technion
[email protected]
Marco Pavone
Stanford University
[email protected]
Abstract
In this paper we address the problem of decision making within a Markov decision process (MDP) framework where risk and modeling errors are taken into
account. Our approach is to minimize a risk-sensitive conditional-value-at-risk
(CVaR) objective, as opposed to a standard risk-neutral expectation. We refer to
such a problem as a CVaR MDP. Our first contribution is to show that a CVaR objective, besides capturing risk sensitivity, has an alternative interpretation as expected
cost under worst-case modeling errors, for a given error budget. This result, which
is of independent interest, motivates CVaR MDPs as a unifying framework for
risk-sensitive and robust decision making. Our second contribution is to present
an approximate value-iteration algorithm for CVaR MDPs and analyze its convergence rate. To our knowledge, this is the first solution algorithm for CVaR MDPs
that enjoys error guarantees. Finally, we present results from numerical experiments that corroborate our theoretical findings and show the practicality of our
approach.
1 Introduction
Decision making within the Markov decision process (MDP) framework typically involves the minimization of a risk-neutral performance objective, namely the expected total discounted cost [3].
This approach, while very popular, natural, and attractive from a computational standpoint, neither
takes into account the variability of the cost (i.e., fluctuations around the mean), nor its sensitivity
to modeling errors, which may significantly affect overall performance [12]. Risk-sensitive MDPs
[9] address the first aspect by replacing the risk-neutral expectation with a risk-measure of the total
discounted cost, such as variance, Value-at-Risk (VaR), or Conditional-VaR (CVaR). Robust MDPs
[15], on the other hand, address the second aspect by defining a set of plausible MDP parameters,
and optimize decision with respect to the expected cost under worst-case parameters.
In this work we consider risk-sensitive MDPs with a CVaR objective, referred to as CVaR MDPs.
CVaR [1, 20] is a risk-measure that is rapidly gaining popularity in various engineering applications, e.g., finance, due to its favorable computational properties [1] and superior ability to safeguard a decision maker from the "outcomes that hurt the most" [22]. In this paper, by relating risk
to robustness, we derive a novel result that further motivates the usage of a CVaR objective in a
decision-making context. Specifically, we show that the CVaR of a discounted cost in an MDP is
equivalent to the expected value of the same discounted cost in presence of worst-case perturbations
of the MDP parameters (specifically, transition probabilities), provided that such perturbations are
within a certain error budget. This result suggests CVaR MDP as a method for decision making
under both cost variability and model uncertainty, motivating it as a unified framework for planning
under uncertainty.
Literature review: Risk-sensitive MDPs have been studied for over four decades, with earlier efforts
focusing on exponential utility [9], mean-variance [24], and percentile risk criteria [7]. Recently,
for the reasons explained above, several authors have investigated CVaR MDPs [20]. Specifically,
in [4], the authors propose a dynamic programming algorithm for finite-horizon risk-constrained
MDPs where risk is measured according to CVaR. The algorithm is proven to asymptotically converge to an optimal risk-constrained policy. However, the algorithm involves computing integrals
over continuous variables (Algorithm 1 in [4]) and, in general, its implementation appears particularly difficult. In [2], the authors investigate the structure of CVaR optimal policies and show that a
Markov policy is optimal on an augmented state space, where the additional (continuous) state variable is represented by the running cost. In [8], the authors leverage such result to design an algorithm
for CVaR MDPs that relies on discretizing occupation measures in the augmented-state MDP. This
approach, however, involves solving a non-convex program via a sequence of linear-programming
approximations, which can only be shown to converge asymptotically. A different approach is taken
by [5], [19] and [25], which consider a finite dimensional parameterization of control policies, and
show that a CVaR MDP can be optimized to a local optimum using stochastic gradient descent (policy gradient). A recent result by Pflug and Pichler [17] showed that CVaR MDPs admit a dynamic
programming formulation by using a state-augmentation procedure different from the one in [2].
The augmented state is also continuous, making the design of a solution algorithm challenging.
Contributions: The contribution of this paper is twofold. First, as discussed above, we provide a
novel interpretation for CVaR MDPs in terms of robustness to modeling errors. This result is of
independent interest and further motivates the usage of CVaR MDPs for decision making under uncertainty. Second, we provide a new optimization algorithm for CVaR MDPs, which leverages the
state augmentation procedure introduced by Pflug and Pichler [17]. We overcome the aforementioned computational challenges (due to the continuous augmented state) by designing an algorithm
that merges approximate value iteration [3] with linear interpolation. Remarkably, we are able to
provide explicit error bounds and convergence rates based on contraction-style arguments. In contrast to the algorithms in [4, 8, 5, 25], given the explicit MDP model our approach leads to finite-time
error guarantees, with respect to the globally optimal policy. In addition, our algorithm is significantly simpler than previous methods, and calculates the optimal policy for all CVaR confidence
intervals and initial states simultaneously. The practicality of our approach is demonstrated in numerical experiments involving planning a path on a grid with thousands of states. To the best of our
knowledge, this is the first algorithm to approximate globally-optimal policies for non-trivial CVaR
MDPs whose error depends on the resolution of interpolation.
Organization: This paper is structured as follows. In Section 2 we provide background on CVaR
and MDPs, we state the problem we wish to solve (i.e., CVaR MDPs), and motivate the CVaR
MDP formulation by establishing a novel relation between CVaR and model perturbations. Section
3 provides the basis for our solution algorithm, based on a Bellman-style equation for the CVaR.
Then, in Section 4 we present our algorithm and correctness analysis. In Section 5 we evaluate our
approach via numerical experiments. Finally, in Section 6, we draw some conclusions and discuss
directions for future work.
2 Preliminaries, Problem Formulation, and Motivation
2.1 Conditional Value-at-Risk
Let $Z$ be a bounded-mean random variable, i.e., $\mathbb{E}[|Z|] < \infty$, on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$,
with cumulative distribution function $F(z) = \mathbb{P}(Z \leq z)$. In this paper we interpret $Z$ as a cost.
The value-at-risk (VaR) at confidence level $\alpha \in (0, 1)$ is the $1-\alpha$ quantile of $Z$, i.e., $\mathrm{VaR}_\alpha(Z) = \min\{z \mid F(z) \geq 1-\alpha\}$. The conditional value-at-risk (CVaR) at confidence level $\alpha \in (0, 1)$ is
defined as [20]:
$$\mathrm{CVaR}_\alpha(Z) = \min_{w \in \mathbb{R}} \Big\{ w + \frac{1}{\alpha}\, \mathbb{E}\big[(Z - w)^+\big] \Big\}, \qquad (1)$$
where $(x)^+ = \max(x, 0)$ represents the positive part of $x$. If there is no probability atom at
$\mathrm{VaR}_\alpha(Z)$, it is well known from Theorem 6.2 in [23] that $\mathrm{CVaR}_\alpha(Z) = \mathbb{E}\big[Z \mid Z \geq \mathrm{VaR}_\alpha(Z)\big]$.
Therefore, $\mathrm{CVaR}_\alpha(Z)$ may be interpreted as the expected value of $Z$, conditioned on the $\alpha$-portion
of the tail distribution. It is well known that $\mathrm{CVaR}_\alpha(Z)$ is decreasing in $\alpha$, $\mathrm{CVaR}_1(Z)$ equals
$\mathbb{E}(Z)$, and $\mathrm{CVaR}_\alpha(Z)$ tends to $\max(Z)$ as $\alpha \to 0$. During the last decade, the CVaR risk-measure
has gained popularity in financial applications, among others. It is especially useful for controlling
rare, but potentially disastrous events, which occur above the $1-\alpha$ quantile, and are neglected by
the VaR [22]. Furthermore, CVaR enjoys desirable axiomatic properties, such as coherence [1]. We
refer to [26] for further motivation about CVaR and a comparison with other risk measures such as
VaR.
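Both the variational form (1) and the tail-conditional characterization are easy to evaluate on cost samples; the following minimal sketch (our own, assuming i.i.d. samples and no atom at the quantile) is useful for sanity-checking the two against each other:

```python
import numpy as np

def cvar_tail(z, alpha):
    """CVaR as the mean of the worst alpha-fraction of cost samples."""
    var = np.quantile(z, 1 - alpha)          # VaR_alpha(Z)
    return z[z >= var].mean()

def cvar_variational(z, alpha, n_grid=400):
    """CVaR via the variational formula (1), minimizing over w on a
    quantile grid (the exact minimizer is VaR_alpha, so the grid suffices)."""
    ws = np.quantile(z, np.linspace(0.0, 1.0, n_grid))
    obj = ws + np.maximum(z[None, :] - ws[:, None], 0.0).mean(axis=1) / alpha
    return obj.min()

rng = np.random.default_rng(0)
z = rng.lognormal(size=20_000)               # heavy-tailed cost samples
for alpha in (0.5, 0.1, 0.01):
    print(alpha, cvar_tail(z, alpha), cvar_variational(z, alpha))
```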
A useful property of CVaR, which we exploit in this paper, is its alternative dual representation [1]:
$$\mathrm{CVaR}_\alpha(Z) = \max_{\xi \in \mathcal{U}_{\mathrm{CVaR}}(\alpha, \mathbb{P})} \mathbb{E}_\xi[Z], \qquad (2)$$
where $\mathbb{E}_\xi[Z]$ denotes the $\xi$-weighted expectation of $Z$, and the risk envelope $\mathcal{U}_{\mathrm{CVaR}}$ is given by
$\mathcal{U}_{\mathrm{CVaR}}(\alpha, \mathbb{P}) = \big\{\xi : \xi(\omega) \in [0, \tfrac{1}{\alpha}], \int_\omega \xi(\omega)\mathbb{P}(\omega)\,d\omega = 1\big\}$. Thus, the CVaR of a random variable $Z$ may be interpreted as the worst-case expectation of $Z$ under a perturbed distribution $\xi\mathbb{P}$.
In this paper, we are interested in the CVaR of the total discounted cost in a sequential decision-making setting, as discussed next.
2.2 Markov Decision Processes
An MDP is a tuple $\mathcal{M} = (\mathcal{X}, \mathcal{A}, C, P, x_0, \gamma)$, where $\mathcal{X}$ and $\mathcal{A}$ are finite state and action spaces;
$C(x, a) \in [-C_{\max}, C_{\max}]$ is a bounded deterministic cost; $P(\cdot|x, a)$ is the transition probability
distribution; $\gamma \in [0, 1)$ is the discounting factor, and $x_0$ is the initial state. (Our results easily
generalize to random initial states and random costs.)
Let the space of admissible histories up to time $t$ be $\mathcal{H}_t = \mathcal{H}_{t-1} \times \mathcal{A} \times \mathcal{X}$, for $t \geq 1$, and $\mathcal{H}_0 = \mathcal{X}$.
A generic element $h_t \in \mathcal{H}_t$ is of the form $h_t = (x_0, a_0, \ldots, x_{t-1}, a_{t-1}, x_t)$. Let $\Pi_{H,t}$ be the set of
all history-dependent policies with the property that at each time $t$ the randomized control action is
a function of $h_t$. In other words, $\Pi_{H,t} := \big\{\{\mu_0 : \mathcal{H}_0 \to \mathcal{P}(\mathcal{A}),\ \mu_1 : \mathcal{H}_1 \to \mathcal{P}(\mathcal{A}),\ \ldots,\ \mu_t : \mathcal{H}_t \to \mathcal{P}(\mathcal{A})\} \mid \mu_j(h_j) \in \mathcal{P}(\mathcal{A}) \text{ for all } h_j \in \mathcal{H}_j,\ 1 \leq j \leq t\big\}$. We also let $\Pi_H = \lim_{t \to \infty} \Pi_{H,t}$ be the set of
all history-dependent policies.
2.3 Problem Formulation
Let $C(x_t, a_t)$ denote the stage-wise costs observed along a state/control trajectory in the MDP
model, and let $C_{0,T} = \sum_{t=0}^{T} \gamma^t C(x_t, a_t)$ denote the total discounted cost up to time $T$. The risk-sensitive discounted-cost problem we wish to address is as follows:
$$\min_{\mu \in \Pi_H} \mathrm{CVaR}_\alpha\Big(\lim_{T \to \infty} C_{0,T} \,\Big|\, x_0, \mu\Big), \qquad (3)$$
where $\mu = \{\mu_0, \mu_1, \ldots\}$ is the policy sequence with actions $a_t = \mu_t(h_t)$ for $t \in \{0, 1, \ldots\}$. We
refer to problem (3) as CVaR MDP. (One may also consider a related formulation combining mean
and CVaR, the details of which are presented in the supplementary material.)
The problem formulation in (3) directly addresses the aspect of risk sensitivity, as demonstrated by
the numerous applications of CVaR optimization in finance (see, e.g., [21, 11, 6]) and the recent
approaches for CVaR optimization in MDPs [4, 8, 5, 25]. In the following, we show a new result
providing additional motivation for CVaR MDPs, from the point of view of robustness to modeling
errors.
2.4 Motivation - Robustness to Modeling Errors
We show a new result relating the CVaR objective in (3) to the expected discounted-cost in presence
of worst-case perturbations of the MDP parameters, where the perturbations are budgeted according
to the "number of things that can go wrong". Thus, by minimizing CVaR, the decision maker also
guarantees robustness of the policy.
Consider a trajectory $(x_0, a_0, \ldots, x_T)$ in a finite-horizon MDP problem with transitions
$P_t(x_t|x_{t-1}, a_{t-1})$. We explicitly denote the time index of the transition matrices for reasons
that will become clear shortly. The total probability of the trajectory is $P(x_0, a_0, \ldots, x_T) = P_0(x_0)P_1(x_1|x_0, a_0) \cdots P_T(x_T|x_{T-1}, a_{T-1})$, and we let $C_{0,T}(x_0, a_0, \ldots, x_T)$ denote its discounted cost, as defined above.
We consider an adversarial setting, where an adversary is allowed to change the transition probabilities at each stage, under some budget constraints. We will show that, for a specific budget and
perturbation structure, the expected cost under the worst-case perturbation is equivalent to the CVaR
of the cost. Thus, we shall establish that, in this perspective, being risk sensitive is equivalent to
being robust against model perturbations.
For each stage $1 \leq t \leq T$, consider a perturbed transition matrix $\hat{P}_t = P_t \circ \delta_t$, where $\delta_t \in \mathbb{R}^{\mathcal{X} \times \mathcal{A} \times \mathcal{X}}$
is a multiplicative probability perturbation and $\circ$ is the Hadamard product, under the condition that
$\hat{P}_t$ is a stochastic matrix. Let $\Delta_t$ denote the set of perturbation matrices that satisfy this condition,
and let $\Delta = \Delta_1 \times \cdots \times \Delta_T$ denote the set of all possible perturbations to the trajectory distribution.
We now impose a budget constraint on the perturbations as follows. For some budget $\eta \geq 1$, we
consider the constraint
$$\delta_1(x_1|x_0, a_0)\,\delta_2(x_2|x_1, a_1) \cdots \delta_T(x_T|x_{T-1}, a_{T-1}) \leq \eta, \quad \forall x_0, \ldots, x_T \in \mathcal{X},\ \forall a_0, \ldots, a_{T-1} \in \mathcal{A}. \qquad (4)$$
Essentially, the product in Eq. (4) states that with a small budget the worst cannot happen at each
time. Instead, the perturbation budget has to be split (multiplicatively) along the trajectory. We note
that Eq. (4) is in fact a constraint on the perturbation matrices, and we denote by $\Delta_\eta \subset \Delta$ the set of
perturbations that satisfy this constraint with budget $\eta$. The following result shows an equivalence
between the CVaR and the worst-case expected loss.
Proposition 1 (Interpretation of CVaR as a Robustness Measure) It holds that
$$\mathrm{CVaR}_{\frac{1}{\eta}}\big(C_{0,T}(x_0, a_0, \ldots, x_T)\big) = \sup_{(\delta_1, \ldots, \delta_T) \in \Delta_\eta} \mathbb{E}_{\hat{P}}\big[C_{0,T}(x_0, a_0, \ldots, x_T)\big], \qquad (5)$$
where $\mathbb{E}_{\hat{P}}[\cdot]$ denotes expectation with respect to a Markov chain with transitions $\hat{P}_t$.
The proof of Proposition 1 is in the supplementary material. It is instructive to compare Proposition
1 with the dual representation of CVaR in (2) where both results convert the CVaR risk into a robustness measure. Note, in particular, that the perturbation budget in Proposition 1 has a temporal
structure, which constrains the adversary from choosing the worst perturbation at each time step.
Remark 1 An equivalence between robustness and risk-sensitivity was previously suggested by Osogami [16]. In that study, the iterated (dynamic) coherent risk was shown to be equivalent to a
robust MDP [10] with a rectangular uncertainty set. The iterated risk (and, correspondingly, the
rectangular uncertainty set) is very conservative [27], in the sense that the worst can happen at each
time step. In contrast, the perturbations considered here are much less conservative. In general,
solving robust MDPs without the rectangularity assumption is NP-hard. Nevertheless, Mannor et al. [13] showed that, for cases where the number of perturbations to the parameters along a trajectory is upper bounded (budget-constrained perturbation), the corresponding robust MDP problem is
tractable. Analogous to the constraint set (1) in [13], the perturbation set in Proposition 1 limits the
total number of log-perturbations along a trajectory. Accordingly, we shall later see that optimizing
problem (3) with perturbation structure (4) is indeed also tractable.
The next section provides the fundamental theoretical ideas behind our approach to the solution of (3).
3 Bellman Equation for CVaR
In this section, by leveraging a recent result from [17], we present a dynamic programming (DP) formulation for the CVaR MDP problem in (3). As we shall see, the value function in this formulation
depends on both the state and the CVaR confidence level $\alpha$. We then establish important properties of such DP formulation, which will later enable us to derive an efficient DP-based approximate
solution algorithm and provide correctness guarantees on the approximation error. All proofs are
presented in the supplementary material.
Our starting point is a recursive decomposition of CVaR, whose proof is detailed in Theorem 10 of
[17].
Theorem 2 (CVaR Decomposition, Theorem 21 in [17]) For any $t \geq 0$, denote by $\mathbf{Z} = (Z_{t+1}, Z_{t+2}, \ldots)$ the cost sequence from time $t+1$ onwards. The conditional CVaR under policy $\mu$, i.e., $\mathrm{CVaR}_\alpha(\mathbf{Z} \mid h_t, \mu)$, obeys the following decomposition:
$$\mathrm{CVaR}_\alpha(\mathbf{Z} \mid h_t, \mu) = \max_{\xi \in \mathcal{U}_{\mathrm{CVaR}}(\alpha, P(\cdot|x_t, a_t))} \mathbb{E}\big[\xi(x_{t+1}) \cdot \mathrm{CVaR}_{\alpha\xi(x_{t+1})}(\mathbf{Z} \mid h_{t+1}, \mu) \,\big|\, h_t, \mu\big],$$
where $a_t$ is the action induced by policy $\mu_t(h_t)$, and the expectation is with respect to $x_{t+1}$.
Theorem 2 concerns a fixed policy $\mu$; we now extend it to a general DP formulation. Note that in
the recursive decomposition in Theorem 2 the right-hand side involves CVaR terms with different
confidence levels than that on the left-hand side. Accordingly, we augment the state space $\mathcal{X}$ with an
additional continuous state $\mathcal{Y} = (0, 1]$, which corresponds to the confidence level. For any $x \in \mathcal{X}$
and $y \in \mathcal{Y}$, the value-function $V(x, y)$ for the augmented state $(x, y)$ is defined as:
$$V(x, y) = \min_{\mu \in \Pi_H} \mathrm{CVaR}_y\Big(\lim_{T \to \infty} C_{0,T} \,\Big|\, x_0 = x, \mu\Big).$$
Similar to standard DP, it is convenient to work with operators defined on the space of value functions
[3]. In our case, Theorem 2 leads to the following definition of the CVaR Bellman operator $\mathbf{T} : \mathcal{X} \times \mathcal{Y} \to \mathcal{X} \times \mathcal{Y}$:
$$\mathbf{T}[V](x, y) = \min_{a \in \mathcal{A}} \bigg[ C(x, a) + \gamma \max_{\xi \in \mathcal{U}_{\mathrm{CVaR}}(y, P(\cdot|x,a))} \sum_{x' \in \mathcal{X}} \xi(x')\, V\big(x', y\xi(x')\big)\, P(x'|x, a) \bigg]. \qquad (6)$$
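To make the structure of (6) concrete, here is a minimal sketch of the inner maximization for one fixed state-action pair. The objective is concave in $\xi$ whenever $yV(x', \cdot)$ is concave (Lemma 3 below), so a generic local solver suffices for this sketch; the implementation in Section 5 instead casts the piecewise-linear interpolated version as a linear program. The helper names are ours:

```python
import numpy as np
from scipy.optimize import minimize

def inner_max(y, p_xa, v_next):
    """max over xi of sum_x' xi(x') * V(x', y*xi(x')) * P(x'|x,a),
    subject to 0 <= xi(x') <= 1/y and sum_x' xi(x') * P(x'|x,a) = 1.
    v_next(j, yy) returns the value estimate at successor j, level yy."""
    n = len(p_xa)
    neg_obj = lambda xi: -sum(xi[j] * v_next(j, min(y * xi[j], 1.0)) * p_xa[j]
                              for j in range(n))
    res = minimize(neg_obj, np.ones(n),            # xi = 1 is always feasible
                   bounds=[(0.0, 1.0 / y)] * n,
                   constraints=({'type': 'eq', 'fun': lambda xi: xi @ p_xa - 1.0},),
                   method='SLSQP')
    return -res.fun, res.x

# One Bellman backup (6) then reads, for stage costs c[a] and transitions P[a]:
# T_V(x, y) = min over a of c[a] + gamma * inner_max(y, P[a], v_next)[0]
```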
We now establish several useful properties of the Bellman operator $\mathbf{T}[V]$.
Lemma 3 (Properties of CVaR Bellman Operator) The Bellman operator $\mathbf{T}[V]$ has the following properties:
1. (Contraction.) $\|\mathbf{T}[V_1] - \mathbf{T}[V_2]\|_\infty \leq \gamma \|V_1 - V_2\|_\infty$, where $\|f\|_\infty = \sup_{x \in \mathcal{X}, y \in \mathcal{Y}} |f(x, y)|$.
2. (Concavity preserving in $y$.) For any $x \in \mathcal{X}$, suppose $yV(x, y)$ is concave in $y \in \mathcal{Y}$. Then
the maximization problem in (6) is concave. Furthermore, $y\mathbf{T}[V](x, y)$ is concave in $y$.
The first property in Lemma 3 is similar to standard DP [3], and is instrumental to the design of
a converging value-iteration approach. The second property is nonstandard and specific to our approach. It will be used to show that the computation of value-iteration updates involves concave,
and therefore tractable optimization problems. Furthermore, it will be used to show that a linear interpolation of $V(x, y)$ in the augmented state $y$ has a bounded error.
Equipped with the results in Theorem 2 and Lemma 3, we can now show that the fixed-point solution
of $\mathbf{T}[V](x, y) = V(x, y)$ is unique and equals the solution of the CVaR MDP problem (3) with
$x_0 = x$ and $\alpha = y$.
Theorem 4 (Optimality Condition) For any $x \in \mathcal{X}$ and $y \in (0, 1]$, the solution to $\mathbf{T}[V](x, y) = V(x, y)$ is unique and equals $V^*(x, y) = \min_{\mu \in \Pi_H} \mathrm{CVaR}_y(\lim_{T \to \infty} C_{0,T} \mid x_0 = x, \mu)$.
Next, we show that the optimal value of the CVaR MDP problem (3) can be attained by a stationary Markov policy, defined as a greedy policy with respect to the value function $V^*(x, y)$. Thus,
while the original problem is defined over the intractable space of history-dependent policies, a
stationary Markov policy (over the augmented state space) is optimal, and can be readily derived
from $V^*(x, y)$. Furthermore, an optimal history-dependent policy can be readily obtained from an
(augmented) optimal Markov policy according to the following theorem.
Theorem 5 (Optimal Policies) Let $\mu_H^* = \{\mu_0, \mu_1, \ldots\} \in \Pi_H$ be a history-dependent policy recursively defined as:
$$\mu_k(h_k) = u^*(x_k, y_k), \quad \forall k \geq 0, \qquad (7)$$
with initial conditions $x_0$ and $y_0 = \alpha$, and state transitions
$$x_k \sim P(\cdot \mid x_{k-1}, u^*(x_{k-1}, y_{k-1})), \qquad y_k = y_{k-1}\,\xi^*_{x_{k-1}, y_{k-1}, u^*}(x_k), \quad \forall k \geq 1, \qquad (8)$$
where the stationary Markovian policy $u^*(x, y)$ and risk factor $\xi^*_{x, y, u^*}(\cdot)$ are the solution to the min-max optimization problem in the CVaR Bellman operator $\mathbf{T}[V^*](x, y)$. Then, $\mu_H^*$ is an optimal
policy for problem (3) with initial state $x_0$ and CVaR confidence level $\alpha$.
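The policy construction of Theorem 5 amounts to a rollout on the augmented state $(x_k, y_k)$. A minimal sketch, where `u_star` and `xi_star` are hypothetical helpers returning the greedy action and the maximizing risk factor (a vector over successor states) extracted from the converged value function:

```python
import numpy as np

def rollout(P, C, u_star, xi_star, x0, alpha, gamma=0.95, T=200, rng=None):
    """Simulate the optimal policy of Theorem 5 on the augmented state (x, y).
    P[x][a]: probability vector over successors; C[x][a]: stage cost."""
    rng = rng or np.random.default_rng()
    x, y, cost, disc = x0, alpha, 0.0, 1.0
    for _ in range(T):
        a = u_star(x, y)                    # greedy action, Eq. (7)
        cost += disc * C[x][a]
        disc *= gamma
        xi = xi_star(x, y)                  # maximizing risk factor in T[V*]
        x_next = rng.choice(len(P[x][a]), p=P[x][a])
        y = y * xi[x_next]                  # confidence-level update, Eq. (8)
        x = x_next
    return cost
```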
Theorems 4 and 5 suggest that a value-iteration DP method [3] can be used to solve the CVaR MDP
problem (3). Let an initial value-function guess $V_0 : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$ be chosen arbitrarily. Value
iteration proceeds recursively as follows:
$$V_{k+1}(x, y) = \mathbf{T}[V_k](x, y), \quad \forall (x, y) \in \mathcal{X} \times \mathcal{Y},\ k \in \{0, 1, \ldots\}. \qquad (9)$$
Specifically, by combining the contraction property in Lemma 3 and the uniqueness of fixed-point
solutions from Theorem 4, one concludes that $\lim_{k \to \infty} V_k(x, y) = V^*(x, y)$. By selecting $x = x_0$ and $y = \alpha$, one immediately obtains $V^*(x_0, \alpha) = \min_{\mu \in \Pi_H} \mathrm{CVaR}_\alpha(\lim_{T \to \infty} C_{0,T} \mid x_0, \mu)$.
Furthermore, an optimal policy may be derived from $V^*(x, y)$ according to the policy construction
procedure in Theorem 5.
Unfortunately, while value iteration is conceptually appealing, its direct implementation in our setting is generally impractical since, e.g., the state $y$ is continuous. In the following, we pursue an
approximation to the value iteration algorithm (9), based on a linear interpolation scheme for $y$.
Algorithm 1 CVaR Value Iteration with Linear Interpolation
1: Given:
   - $N(x)$ interpolation points $\mathbf{Y}(x) = \{y_1, \ldots, y_{N(x)}\} \in [0, 1]^{N(x)}$ for every $x \in \mathcal{X}$, with
     $y_i < y_{i+1}$, $y_1 = 0$ and $y_{N(x)} = 1$.
   - Initial value function $V_0(x, y)$ that satisfies Assumption 1.
2: For $t = 1, 2, \ldots$
   - For each $x \in \mathcal{X}$ and each $y_i \in \mathbf{Y}(x)$, update the value function estimate as follows:
     $V_t(x, y_i) = \mathbf{T}_I[V_{t-1}](x, y_i)$.
3: Set the converged value iteration estimate as $\widehat{V}^*(x, y_i)$, for any $x \in \mathcal{X}$ and $y_i \in \mathbf{Y}(x)$.
4 Value Iteration with Linear Interpolation
In this section we present an approximate DP algorithm for solving CVaR MDPs, based on the
theoretical results of Section 3. The value iteration algorithm in Eq. (9) presents two main implementation challenges. The first is due to the fact that the augmented state $y$ is continuous. We
handle this challenge by using interpolation, and exploit the concavity of $yV(x, y)$ to bound the
error introduced by this procedure. The second challenge stems from the fact that applying $\mathbf{T}$
involves maximizing over $\xi$. Our strategy is to exploit the concavity of the maximization problem
to guarantee that such optimization can indeed be performed effectively.
As discussed, our approach relies on the fact that the Bellman operator $\mathbf{T}$ preserves concavity, as
established in Lemma 3. Accordingly, we require the following assumption for the initial guess
$V_0(x, y)$.
Assumption 1 The guess for the initial value function $V_0(x, y)$ satisfies the following properties:
1) $yV_0(x, y)$ is concave in $y \in \mathcal{Y}$, and 2) $V_0(x, y)$ is continuous in $y \in \mathcal{Y}$ for any $x \in \mathcal{X}$.
Assumption 1 may easily be satisfied, for example, by choosing $V_0(x, y) = \mathrm{CVaR}_y(Z \mid x_0 = x)$,
where $Z$ is any arbitrary bounded random variable. As stated earlier, a key difficulty in applying
value iteration (9) is that, for each state $x \in \mathcal{X}$, the Bellman operator has to be calculated for each
$y \in \mathcal{Y}$, and $\mathcal{Y}$ is continuous. As an approximation, we propose to calculate the Bellman operator
only for a finite set of values $y$, and interpolate the value function in between such interpolation
points.
Formally, let $N(x)$ denote the number of interpolation points. For every $x \in \mathcal{X}$, denote by $\mathbf{Y}(x) = \{y_1, \ldots, y_{N(x)}\} \in [0, 1]^{N(x)}$ the set of interpolation points. We denote by $I_x[V](y)$ the linear
interpolation of the function $yV(x, y)$ on these points, i.e.,
$$I_x[V](y) = y_i V(x, y_i) + \frac{y_{i+1} V(x, y_{i+1}) - y_i V(x, y_i)}{y_{i+1} - y_i}\,(y - y_i),$$
where $y_i = \max\{y' \in \mathbf{Y}(x) : y' \leq y\}$ and $y_{i+1}$ is the closest interpolation point such that
$y \in [y_i, y_{i+1}]$, i.e., $y_{i+1} = \min\{y' \in \mathbf{Y}(x) : y' \geq y\}$. The interpolation of $yV(x, y)$ instead of
$V(x, y)$ is key to our approach. The motivation is twofold: first, it can be shown [20] that for a
discrete random variable $Z$, $y\,\mathrm{CVaR}_y(Z)$ is piecewise linear in $y$. Second, one can show that the
Lipschitzness of $yV(x, y)$ is preserved during value iteration, and exploit this fact to bound the
linear interpolation error.
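A minimal sketch of the interpolation $I_x[V](y)$, storing, for one state $x$, the grid `ys` and values `vs` (names ours):

```python
import numpy as np

def interp_yV(ys, vs, y):
    """Linear interpolation I_x[V](y) of g(y) = y * V(x, y), given sorted
    grid points ys (with ys[0] = 0) and values vs[i] = V(x, ys[i])."""
    g = ys * vs                               # interpolate y*V, not V (see text)
    i = np.searchsorted(ys, y, side='right') - 1
    i = min(max(i, 0), len(ys) - 2)           # clamp to the last interval
    slope = (g[i + 1] - g[i]) / (ys[i + 1] - ys[i])
    return g[i] + slope * (y - ys[i])
```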
We now define the interpolated Bellman operator $\mathbf{T}_I$ as follows:
$$\mathbf{T}_I[V](x, y) = \min_{a \in \mathcal{A}} \bigg[ C(x, a) + \gamma \max_{\xi \in \mathcal{U}_{\mathrm{CVaR}}(y, P(\cdot|x,a))} \sum_{x' \in \mathcal{X}} \frac{I_{x'}[V](y\xi(x'))}{y}\, P(x'|x, a) \bigg]. \qquad (10)$$
Remark 2 Notice that by L'Hospital's rule one has $\lim_{y \to 0} I_x[V](y\xi(x))/y = V(x, 0)\xi(x)$. This
implies that at $y = 0$ the interpolated Bellman operator is equivalent to the original Bellman operator, i.e., $\mathbf{T}[V](x, 0) = \min_{a \in \mathcal{A}}\big[C(x, a) + \gamma \max_{x' \in \mathcal{X}: P(x'|x,a) > 0} V(x', 0)\big] = \mathbf{T}_I[V](x, 0)$.
Algorithm 1 presents CVaR value iteration with linear interpolation. The only difference between
this algorithm and standard value iteration (9) is the linear interpolation procedure described above.
In the following, we show that Algorithm 1 converges, and bound the error due to interpolation.
We begin by showing that the useful properties established in Lemma 3 for the Bellman operator $\mathbf{T}$
extend to the interpolated Bellman operator $\mathbf{T}_I$.
Lemma 6 (Properties of Interpolated Bellman Operator) $\mathbf{T}_I[V]$ has the same properties as
$\mathbf{T}[V]$ in Lemma 3, namely 1) contraction and 2) concavity preservation.
Lemma 6 implies several important consequences for Algorithm 1. The first one is that the maximization problem in (10) is concave, and thus may be solved efficiently at each step. This
guarantees that the algorithm is tractable. Second, the contraction property in Lemma 6 guarantees that Algorithm 1 converges, i.e., there exists a value function $\widehat{V}^* \in \mathbb{R}^{|\mathcal{X}| \times |\mathbf{Y}|}$ such that
$\lim_{n \to \infty} \mathbf{T}_I^n[V_0](x, y_i) = \widehat{V}^*(x, y_i)$. In addition, the convergence rate is geometric and equals $\gamma$.
The following theorem provides an error bound between approximate value iteration and exact value
iteration (3) in terms of the interpolation resolution.
Theorem 7 (Convergence and Error Bound) Suppose the initial value function $V_0(x, y)$ satisfies
Assumption 1 and let $\epsilon > 0$ be an error tolerance parameter. For any state $x \in \mathcal{X}$ and step $t \geq 0$,
choose $y_2 > 0$ such that $V_t(x, y_2) - V_t(x, 0) \geq -\epsilon$, and update the interpolation points according
to the logarithmic rule: $y_{i+1} = \theta y_i$, $\forall i \geq 2$, with uniform constant $\theta \geq 1$. Then, Algorithm 1 has
the following error bound:
$$0 \leq \widehat{V}^*(x_0, \alpha) - \min_{\mu \in \Pi_H} \mathrm{CVaR}_\alpha\Big(\lim_{T \to \infty} C_{0,T} \mid x_0, \mu\Big) \leq \frac{\gamma}{1-\gamma}\, O\big((\theta - 1) + \epsilon\big),$$
and the following finite-time convergence error bound:
$$\Big|\mathbf{T}_I^n[V_0](x_0, \alpha) - \min_{\mu \in \Pi_H} \mathrm{CVaR}_\alpha\Big(\lim_{T \to \infty} C_{0,T} \mid x_0, \mu\Big)\Big| \leq \frac{\gamma}{1-\gamma}\, O\big((\theta - 1) + \epsilon\big) + O(\gamma^n).$$
Theorem 7 shows that 1) the interpolation-based value function is a conservative estimate of the
optimal solution to problem (3); 2) the interpolation procedure is consistent, i.e., when the number
of interpolation points is arbitrarily large (specifically, $\epsilon \to 0$ and $y_{i+1}/y_i \to 1$), the approximation
error tends to zero; and 3) the approximation error bound is $O((\theta - 1) + \epsilon)$, where $\log \theta$ is the
log-difference of the interpolation points, i.e., $\log \theta = \log y_{i+1} - \log y_i$, $\forall i$.
For a pre-specified $\epsilon$, the condition $V_t(x, y_2) - V_t(x, 0) \geq -\epsilon$ may be satisfied by a simple adaptive
procedure for selecting the interpolation points $\mathbf{Y}(x)$. At each iteration $t > 0$, after calculating
$V_t(x, y_i)$ in Algorithm 1, at each state $x$ in which the condition does not hold, add a new interpolation
point $y_2' = \frac{\epsilon y_2}{|V_t(x, y_2) - V_t(x, 0)|}$, and additional points between $y_2'$ and $y_2$ such that the condition $\log \theta \geq \log y_{i+1} - \log y_i$ is maintained. Since all the additional points belong to the segment $[0, y_2]$, the
linearly interpolated $V_t(x, y_i)$ remains unchanged, and Algorithm 1 proceeds as is. For bounded
costs and $\epsilon > 0$, the number of additional points required is bounded.
costs and > 0, the number of additional points required is bounded.
The full proof of Theorem 7 is detailed in the supplementary material; we highlight the main ideas
and challenges involved. In the first part of the proof we bound, for all t > 0, the Lipschitz constant
of yVt (x, y) in y. The key to this result is to show that the Bellman operator T preserves the
Lipschitz property for yVt (x, y). Using the Lipschitz bound and the concavity of yVt (x, y), we then
bound the error Ix [Vyt ](y) ? Vt (x, y) for all y. The condition on y2 is required for this bound to hold
when y ? 0. Finally, we use this result to bound kTI [Vt ](x, y) ? T[Vt ](x, y)k? . The results of
Theorem 7 follow from contraction arguments, similar to approximate dynamic programming [3].
5
Experiments
We validate Algorithm 1 on a rectangular grid world, where states represent grid points on a 2D
terrain map. An agent (e.g., a robotic vehicle) starts in a safe region and its objective is to travel to a
given destination. At each time step the agent can move to any of its four neighboring states. Due to
sensing and control noise, however, with probability ? a move to a random neighboring state occurs.
The stage-wise cost of each move until reaching the destination is 1, to account for fuel usage. In
between the starting point and the destination there are a number of obstacles that the agent should
avoid. Hitting an obstacle costs M >> 1 and terminates the mission. The objective is to compute a
safe (i.e., obstacle-free) path that is fuel efficient.
For our experiments, we choose a 64 ? 53 grid-world (see Figure 1), for a total of 3,312 states.
The destination is at position (60, 2), and there are 80 obstacles plotted in yellow. By leveraging
Theorem 7, we use 21 log-spaced interpolation points for Algorithm 1 in order to achieve a small
value function error. We choose ? = 0.05, and a discount factor ? = 0.95 for an effective horizon
of 200 steps. Furthermore, we set the penalty cost equal to M = 2/(1 ? ?)?such choice trades off
high penalty for collisions and computational complexity (that increases as M increases). For the
7
Figure 1: Grid-world simulation. Left three plots show the value functions and corresponding paths
for different CVaR confidence levels. The rightmost plot shows a cost histogram (for 400 Monte
Carlo trials) for a risk-neutral policy and a CVaR policy with confidence level ? = 0.11.
interpolation parameters discussed in Theorem 7, we set = 0.1 and ? = 2.067 (in order to have 21
logarithmically distributed grid points for the CVaR confidence parameter in [0, 1]).
In Figure 1 we plot the value function V (x, y) for three different values of the CVaR confidence
parameter ?, and the corresponding paths starting from the initial position (60, 50). The first three
figures in Figure 1 show how by decreasing the confidence parameter ? the average travel distance
(and hence fuel consumption) slightly increases but the collision probability decreases, as expected.
We next discuss robustness to modeling errors. We conducted simulations in which with probability
0.5 each obstacle position is perturbed in a random direction to one of the neighboring grid cells.
This emulates, for example, measurement errors in the terrain map. We then trained both the riskaverse (? = 0.11) and risk-neutral (? = 1) policies on the nominal (i.e., unperturbed) terrain map,
and evaluated them on 400 perturbed scenarios (20 perturbed maps with 20 Monte Carlo evaluations
each). While the risk-neutral policy finds a shorter route (with average cost equal to 18.137 on
successful runs), it is vulnerable to perturbations and fails more often (with over 120 failed runs). In
contrast, the risk-averse policy chooses slightly longer routes (with average cost equal to 18.878 on
successful runs), but is much more robust to model perturbations (with only 5 failed runs).
For the computation of Algorithm 1 we represented the concave piecewise linear maximization
problem in (10) as a linear program, and concatenated several problems to reduce repeated overhead stemming from the initialization of the CPLEX linear programming solver. This resulted in
a computation time on the order of two hours. We believe there is ample room for improvement,
for example by leveraging parallelization and sampling-based methods. Overall, we believe our
proposed approach is currently the most practical method available for solving CVaR MDPs (as a
comparison, the recently proposed method in [8] involves infinite dimensional optimization). The
Matlab code used for the experiments is provided in the supplementary material.
6
Conclusion
In this paper we presented an algorithm for CVaR MDPs, based on approximate value-iteration on
an augmented state space. We established convergence of our algorithm, and derived finite-time
error bounds. These bounds are useful to stop the algorithm at a desired error threshold.
In addition, we uncovered an interesting relationship between the CVaR of the total cost and the
worst-case expected cost under adversarial model perturbations. In this formulation, the perturbations are correlated in time, and lead to a robustness framework significantly less conservative than
the popular robust-MDP framework, where the uncertainty is temporally independent.
Collectively, our work suggests CVaR MDPs as a unifying and practical framework for computing
control policies that are robust with respect to both stochasticity and model perturbations. Future
work should address extensions to large state-spaces. We conjecture that a sampling-based approximate DP approach [3] should be feasible since, as proven in this paper, the CVaR Bellman equation
is contracting (as required by approximate DP methods).
Acknowledgement
The authors would like to thank Mohammad Ghavamzadeh for helpful comments on the technical
details, and Daniel Vainsencher for practical optimization advice. Y-L. Chow and M. Pavone are partially supported by the Croucher Foundation doctoral scholarship and the Office of Naval Research,
Science of Autonomy Program, under Contract N00014-15-1-2673. Funding for Shie Mannor and
Aviv Tamar were partially provided by the European Community?s Seventh Framework Programme
(FP7/2007-2013) under grant agreement 306638 (SUPREL).
8
References
[1] P. Artzner, F. Delbaen, J. Eber, and D. Heath. Coherent measures of risk. Mathematical finance, 9(3):
203?228, 1999.
[2] N. B?auerle and J. Ott. Markov decision processes with average-value-at-risk criteria. Mathematical
Methods of Operations Research, 74(3):361?379, 2011.
[3] D. Bertsekas. Dynamic programming and optimal control, Vol II. Athena Scientific, 4th edition, 2012.
[4] V. Borkar and R. Jain. Risk-constrained Markov decision processes. IEEE Transaction of Automatic
Control, 59(9):2574 ? 2579, 2014.
[5] Y. Chow and M. Ghavamzadeh. Algorithms for CVaR optimization in MDPs. In Advances in Neural
Information Processing Systems 27, pages 3509?3517, 2014.
[6] K. Dowd. Measuring market risk. John Wiley & Sons, 2007.
[7] J. Filar, D. Krass, and K. Ross. Percentile performance criteria for limiting average Markov decision
processes. Automatic Control, IEEE Transactions on, 40(1):2?10, 1995.
[8] W. Haskell and R. Jain. A convex analytic approach to risk-aware Markov decision processes. SIAM
Journal of Control and Optimization, 2014.
[9] R. A. Howard and J. E. Matheson. Risk-sensitive Markov decision processes. Management Science, 18
(7):356?369, 1972.
[10] G. Iyengar. Robust dynamic programming. Mathematics of Operations Research, 30(2):257280, 2005.
[11] G. Iyengar and A. Ma. Fast gradient descent method for mean-CVaR optimization. Annals of Operations
Research, 205(1):203?212, 2013.
[12] S. Mannor, D. Simester, P. Sun, and J. Tsitsiklis. Bias and variance approximation in value function
estimates. Management Science, 53(2):308?322, 2007.
[13] S. Mannor, O. Mebel, and H. Xu. Lightning does not strike twice: Robust MDPs with coupled uncertainty.
In International Conference on Machine Learning, pages 385?392, 2012.
[14] P. Milgrom and I. Segal. Envelope theorems for arbitrary choice sets. Econometrica, 70(2):583?601,
2002.
[15] A. Nilim and L. El Ghaoui. Robust control of Markov decision processes with uncertain transition matrices. Operations Research, 53(5):780?798, 2005.
[16] T. Osogami. Robustness and risk-sensitivity in markov decision processes. In Advances in Neural Information Processing Systems, pages 233?241, 2012.
[17] G. Pflug and A. Pichler. Time consistent decisions and temporal decomposition of coherent risk functionals. Optimization online, 2015.
[18] M. Phillips. Interpolation and approximation by polynomials, volume 14. Springer Science & Business
Media, 2003.
[19] L. Prashanth. Policy gradients for cvar-constrained mdps. In Algorithmic Learning Theory, pages 155?
169. Springer, 2014.
[20] R. Rockafellar and S. Uryasev. Optimization of conditional value-at-risk. Journal of risk, 2:21?42, 2000.
[21] R. Rockafellar, S. Uryasev, and M. Zabarankin. Master funds in portfolio analysis with general deviation
measures. Journal of Banking & Finance, 30(2):743?778, 2006.
[22] G. Serraino and S. Uryasev. Conditional value-at-risk (CVaR). In Encyclopedia of Operations Research
and Management Science, pages 258?266. Springer, 2013.
[23] A. Shapiro, D. Dentcheva, and A. Ruszczy?nski. Lectures on stochastic programming. SIAM, 2009.
[24] M. Sobel. The variance of discounted Markov decision processes. Journal of Applied Probability, pages
794?802, 1982.
[25] A. Tamar, Y. Glassner, and S. Mannor. Optimizing the CVaR via sampling. In AAAI, 2015.
[26] S. Uryasev, S. Sarykalin, G. Serraino, and K. Kalinchenko. VaR vs CVaR in risk management and
optimization. In CARISMA conference, 2010.
[27] H. Xu and S. Mannor. The robustness-performance tradeoff in Markov decision processes. In Advances
in Neural Information Processing Systems, pages 1537?1544, 2006.
9
| 6014 |@word trial:1 polynomial:1 instrumental:1 c0:10 yv0:1 simulation:2 contraction:6 p0:1 decomposition:5 recursively:2 initial:11 uncovered:1 selecting:2 daniel:1 yvt:3 rightmost:1 readily:2 john:1 stemming:1 numerical:3 happen:2 analytic:1 kv1:1 plot:3 update:3 fund:1 v:1 stationary:3 greedy:1 guess:3 parameterization:1 accordingly:3 xk:5 provides:3 mannor:7 simpler:1 mathematical:2 along:4 direct:1 become:1 overhead:1 x0:30 indeed:2 market:1 expected:10 p1:1 nor:1 planning:2 bellman:18 discounted:10 globally:2 decreasing:2 equipped:1 solver:1 provided:3 begin:1 bounded:7 medium:1 fuel:3 interpreted:2 pursue:1 unified:1 finding:1 lipschitzness:1 impractical:1 guarantee:7 temporal:2 berkeley:2 every:2 ti:7 concave:7 glassner:1 finance:4 wrong:1 control:10 grant:1 suprel:1 yn:3 bertsekas:1 positive:1 engineering:1 local:1 tends:2 limit:1 consequence:1 establishing:1 fluctuation:1 interpolation:28 path:4 twice:1 initialization:1 studied:1 doctoral:1 equivalence:2 suggests:2 challenging:1 obeys:1 unique:2 practical:3 recursive:2 procedure:7 significantly:3 convenient:1 confidence:12 word:1 pre:1 suggest:1 cannot:1 operator:15 risk:49 context:1 applying:2 optimize:1 equivalent:5 deterministic:1 demonstrated:2 yt:1 maximizing:1 map:4 go:1 starting:3 convex:2 rectangular:3 resolution:2 immediately:1 rule:2 financial:1 handle:1 hurt:1 analogous:1 limiting:1 annals:1 controlling:1 pt:4 suppose:2 construction:1 exact:1 programming:9 nominal:1 designing:1 agreement:1 element:1 logarithmically:1 particularly:1 observed:1 ep:2 solved:1 worst:11 thousand:1 calculate:1 region:1 averse:1 sun:1 trade:1 decrease:1 yk:5 complexity:1 constrains:1 econometrica:1 dynamic:7 neglected:1 ghavamzadeh:2 motivate:1 trained:1 solving:4 segment:1 delbaen:1 basis:1 avivt:1 easily:2 various:1 represented:2 jain:2 fast:1 effective:1 monte:2 outcome:1 h0:1 choosing:2 whose:2 stanford:4 plausible:1 solve:2 supplementary:5 ability:1 online:1 sequence:3 propose:2 mission:1 product:2 neighboring:3 combining:2 hadamard:1 rapidly:1 matheson:1 achieve:1 tni:1 validate:1 convergence:6 optimum:1 decisionmaking:1 converges:2 derive:2 ac:1 measured:1 eq:3 involves:7 implies:2 direction:2 safe:2 stochastic:3 enable:1 material:5 require:1 preliminary:1 proposition:5 extension:1 marco:1 hold:3 around:1 considered:1 algorithmic:1 uniqueness:1 favorable:1 travel:2 axiomatic:1 currently:1 maker:2 ross:1 sensitive:8 correctness:2 weighted:1 minimization:1 iyengar:2 reaching:1 avoid:1 hj:3 office:1 derived:3 naval:1 vk:3 improvement:1 hk:1 contrast:3 adversarial:2 sense:1 helpful:1 dependent:5 el:1 typically:1 chow:3 a0:9 relation:1 interested:1 overall:2 aforementioned:1 among:1 dual:2 augment:1 constrained:5 ychow:1 uc:1 equal:7 aware:1 sampling:3 atom:1 represents:1 future:2 others:1 np:1 piecewise:2 simultaneously:1 preserve:2 interpolate:1 resulted:1 cplex:1 organization:1 interest:2 onwards:1 investigate:1 risksensitive:1 evaluation:1 behind:1 sobel:1 chain:1 kt:1 integral:1 tuple:1 shorter:1 pflug:3 mebel:1 desired:1 plotted:1 theoretical:3 uncertain:1 modeling:7 earlier:2 markovian:1 obstacle:5 corroborate:1 measuring:1 maximization:4 ott:1 cost:30 deviation:1 neutral:6 rare:1 uniform:1 technion:2 successful:2 conducted:1 seventh:1 motivating:1 perturbed:5 supx:1 chooses:1 nski:1 fundamental:1 sensitivity:5 randomized:1 siam:2 international:1 destination:4 off:1 contract:1 safeguard:1 augmentation:2 aaai:1 satisfied:2 management:4 opposed:1 choose:3 admit:1 style:2 oper:1 account:3 segal:1 rockafellar:2 satisfy:2 explicitly:1 
depends:2 vehicle:1 multiplicative:1 h1:1 view:1 later:2 analyze:1 sup:1 portion:1 yv:4 start:1 prashanth:1 contribution:4 minimize:1 il:1 vyt:1 variance:4 emulates:1 efficiently:1 spaced:1 yellow:1 conceptually:1 generalize:1 iterated:2 carlo:2 trajectory:7 rx:1 history:6 converged:1 nonstandard:1 definition:1 against:1 involved:1 proof:5 vainsencher:1 stop:1 popular:2 knowledge:2 lim:4 focusing:1 appears:1 attained:1 follow:1 formulation:11 evaluated:1 furthermore:6 stage:4 until:1 hand:3 replacing:1 scientific:1 aviv:2 mdp:23 believe:2 usage:3 y2:8 discounting:1 hence:1 attractive:1 during:2 maintained:1 percentile:2 croucher:1 criterion:3 mina:1 mohammad:1 wise:2 novel:3 recently:2 funding:1 superior:1 performed:1 volume:1 discussed:4 interpretation:3 tail:1 relating:2 interpret:1 extend:2 belong:1 refer:3 measurement:1 phillips:1 automatic:2 grid:7 mathematics:1 stochasticity:1 lightning:1 portfolio:1 longer:1 v0:9 add:1 closest:1 recent:3 showed:2 perspective:1 optimizing:2 scenario:1 route:2 certain:1 n00014:1 discretizing:1 arbitrarily:2 vt:13 yi:33 preserving:1 additional:6 impose:1 converge:2 strike:1 preservation:1 ii:1 full:1 desirable:1 stem:1 technical:1 a1:1 calculates:1 converging:1 involving:1 essentially:1 expectation:6 iteration:20 maxx0:1 limt:3 represent:1 histogram:1 cell:1 preserved:1 background:1 remarkably:1 addition:3 interval:1 yinlam:1 limn:1 standpoint:1 envelope:2 parallelization:1 limk:1 heath:1 comment:1 induced:1 shie:3 thing:1 ample:1 leveraging:3 ee:1 presence:2 leverage:2 split:1 affect:1 reduce:1 idea:2 tamar:3 tradeoff:1 utility:1 effort:1 penalty:2 action:4 remark:2 matlab:1 useful:5 generally:1 clear:1 detailed:2 collision:2 discount:1 encyclopedia:1 shapiro:1 notice:1 popularity:2 discrete:1 shall:3 vol:1 key:3 four:2 nevertheless:1 threshold:1 budgeted:1 neither:1 ht:13 v1:1 asymptotically:2 convert:1 run:4 uncertainty:7 master:1 draw:1 decision:23 coherence:1 vb:4 banking:1 capturing:1 bound:16 occur:1 constraint:6 x2:1 interpolated:5 aspect:3 argument:2 min:12 optimality:1 conjecture:1 structured:1 according:5 terminates:1 slightly:2 son:1 y0:1 osogami:2 appealing:1 making:8 explained:1 ghaoui:1 taken:2 equation:3 previously:1 remains:1 discus:2 tractable:4 fp7:1 milgrom:1 available:1 operation:5 v2:2 generic:1 alternative:2 robustness:12 shortly:1 original:2 denotes:2 running:1 cmax:2 unifying:2 calculating:1 exploit:4 practicality:2 quantile:2 especially:1 establish:3 concatenated:1 scholarship:1 unchanged:1 objective:8 move:3 occurs:1 ruszczy:1 strategy:1 gradient:4 dp:10 distance:1 thank:1 athena:1 cvar:95 consumption:1 trivial:1 pavone:3 reason:2 besides:1 code:1 index:1 relationship:1 multiplicatively:1 providing:1 minimizing:1 filar:1 difficult:1 unfortunately:1 disastrous:1 potentially:1 stated:1 dentcheva:1 implementation:3 design:3 motivates:3 policy:34 zt:2 upper:1 markov:17 howard:1 finite:8 descent:2 defining:1 variability:2 y1:3 krass:1 perturbation:28 arbitrary:2 community:1 introduced:2 namely:2 required:3 specified:1 optimized:1 auerle:1 coherent:3 merges:1 established:3 hour:1 address:6 able:1 adversary:2 suggested:1 proceeds:2 challenge:5 program:3 gaining:1 max:8 event:1 natural:1 difficulty:1 business:1 ator:1 scheme:1 mdps:28 numerous:1 temporally:1 concludes:1 coupled:1 review:1 literature:1 geometric:1 acknowledgement:1 kf:1 occupation:1 loss:1 contracting:1 highlight:1 lecture:1 interesting:1 artzner:1 proven:2 var:9 foundation:1 kti:1 agent:3 consistent:2 autonomy:1 supported:1 last:1 free:1 enjoys:2 tsitsiklis:1 side:2 
bias:1 correspondingly:1 tolerance:1 distributed:1 overcome:1 calculated:1 transition:9 cumulative:1 world:3 concavity:6 author:5 adaptive:1 programme:1 uryasev:4 transaction:2 functionals:1 approximate:10 obtains:1 robotic:1 terrain:3 continuous:9 decade:2 robust:13 investigated:1 european:1 main:2 linearly:1 motivation:5 noise:1 edition:1 allowed:1 repeated:1 x1:3 augmented:10 advice:1 referred:1 xu:2 simester:1 wiley:1 fails:1 position:3 nilim:1 explicit:2 wish:2 exponential:1 ix:4 admissible:1 theorem:22 xt:20 specific:2 showing:1 sensing:1 unperturbed:1 concern:1 intractable:1 exists:1 sequential:1 effectively:1 gained:1 budget:11 conditioned:1 horizon:3 logarithmic:1 borkar:1 failed:2 hitting:1 partially:2 vulnerable:1 collectively:1 springer:3 corresponds:1 satisfies:3 relies:2 pichler:3 eber:1 ma:1 conditional:7 twofold:2 lipschitz:3 room:1 y20:1 change:1 hard:1 feasible:1 specifically:5 infinite:1 dowd:1 lemma:10 conservative:4 total:8 hospital:1 formally:1 evaluate:1 instructive:1 correlated:1 |
5,542 | 6,015 | Learning with Incremental Iterative Regularization
Lorenzo Rosasco
DIBRIS, Univ. Genova, ITALY
LCSL, IIT & MIT, USA
[email protected]
Silvia Villa
LCSL, IIT & MIT, USA
[email protected]
Abstract
Within a statistical learning setting, we propose and study an iterative regularization algorithm for least squares defined by an incremental gradient method. In
particular, we show that, if all other parameters are fixed a priori, the number of
passes over the data (epochs) acts as a regularization parameter, and prove strong
universal consistency, i.e. almost sure convergence of the risk, as well as sharp
finite sample bounds for the iterates. Our results are a step towards understanding
the effect of multiple epochs in stochastic gradient techniques in machine learning
and rely on integrating statistical and optimization results.
1 Introduction
Machine learning applications often require efficient statistical procedures to process potentially
massive amount of high dimensional data. Motivated by such applications, the broad objective of
our study is designing learning procedures with optimal statistical properties, and, at the same time,
computational complexities proportional to the generalization properties allowed by the data, rather
than their raw amount [6]. We focus on iterative regularization as a viable approach towards this
goal. The key observation behind these techniques is that iterative optimization schemes applied to
scattered, noisy data exhibit a self-regularizing property, in the sense that early termination (early stopping) of the iterative process has a regularizing effect [21, 24]. Indeed, iterative regularization algorithms are classical in inverse problems [15], and have been recently considered in machine learning
[36, 34, 3, 5, 9, 26], where they have been proved to achieve optimal learning bounds, matching
those of variational regularization schemes such as Tikhonov [8, 31].
In this paper, we consider an iterative regularization algorithm for the square loss, based on a recursive procedure processing one training set point at each iteration. Methods of the latter form, often
broadly referred to as online learning algorithms, have become standard in the processing of large
data-sets, because of their low iteration cost and good practical performance. Theoretical studies
for this class of algorithms have been developed within different frameworks. In composite and
stochastic optimization [19, 20, 29], in online learning, a.k.a. sequential prediction [11], and finally,
in statistical learning [10]. The latter is the setting of interest in this paper, where we aim at developing an analysis taking into account both statistical and computational aspects simultaneously.
To place our contribution in context, it is useful to emphasize the role of regularization and different
ways in which it can be incorporated in online learning algorithms. The key idea of regularization
is that controlling the complexity of a solution can help avoiding overfitting and ensure stability and
generalization [33]. Classically, regularization is achieved penalizing the objective function with
some suitable functional, or minimizing the risk on a restricted space of possible solutions [33].
Model selection is then performed to determine the amount of regularization suitable for the data
at hand. More recently, there has been an interest in alternative, possibly more efficient, ways to
incorporate regularization. We mention in particular [1, 35, 32] where there is no explicit regularization by penalization, and the step-size of an iterative procedure is shown to act as a regularization
parameter. Here, for each fixed step-size, each data point is processed once, but multiple passes are
typically needed to perform model selection (that is, to pick the best step-size). We also mention
1
[22] where an interesting adaptive approach is proposed, which seemingly avoids model selection
under certain assumptions.
In this paper, we consider a different regularization strategy, widely used in practice. Namely, we
consider no explicit penalization, fix the step size a priori, and analyze the effect of the number of
passes over the data, which becomes the only free parameter to avoid overfitting, i.e. regularize.
The associated regularization strategy, that we dub incremental iterative regularization, is hence
based on early stopping. The latter is a well-known "trick", for example in training large neural
networks [18], and is known to perform very well in practice [16]. Interestingly, early stopping
with the square loss has been shown to be related to boosting [7], see also [2, 17, 36]. Our goal
here is to provide a theoretical understanding of the generalization property of the above heuristic
for incremental/online techniques. Towards this end, we analyze the behavior of both the excess
risk and the iterates themselves. For the latter we obtain sharp finite sample bounds matching those
for Tikhonov regularization in the same setting. Universal consistency and finite sample bounds for
the excess risk can then be easily derived, albeit possibly suboptimal. Our results are developed
in a capacity independent setting [12, 30], that is under no conditions on the covering or entropy
numbers [30]. In this sense our analysis is worst case and dimension free. To the best of our
knowledge the analysis in the paper is the first theoretical study of regularization by early stopping
in incremental/online algorithms, and thus a first step towards understanding the effect of multiple
passes of stochastic gradient for risk minimization.
The rest of the paper is organized as follows. In Section 2 we describe the setting and the main
assumptions, and in Section 3 we state the main results, discuss them and provide the main elements
of the proof, which is deferred to the supplementary material. In Section 4 we present some experimental results on real and synthetic datasets.
Notation. We denote by $\mathbb{R}_+ = [0, +\infty[$, $\mathbb{R}_{++} = \,]0, +\infty[$, and $\mathbb{N}^* = \mathbb{N} \setminus \{0\}$. Given a normed space $\mathcal{B}$ and linear operators $(A_i)_{1\le i\le m}$, $A_i : \mathcal{B} \to \mathcal{B}$ for every $i$, their composition $A_m \cdots A_1$ will be denoted as $\prod_{i=1}^m A_i$. By convention, if $j > m$, we set $\prod_{i=j}^m A_i = I$, where $I$ is the identity of $\mathcal{B}$. The operator norm will be denoted by $\|\cdot\|$ and the Hilbert-Schmidt norm by $\|\cdot\|_{HS}$. Also, if $j > m$, we set $\sum_{i=j}^m A_i = 0$.
2 Setting and Assumptions
We first describe the setting we consider, and then introduce and discuss the main assumptions that
will hold throughout the paper. We build on ideas proposed in [13, 27] and further developed in a
series of follow up works [8, 3, 28, 9]. Unlike these papers where a reproducing kernel Hilbert space
(RKHS) setting is considered, here we consider a formulation within an abstract Hilbert space. As
discussed in the Appendix A, results in a RKHS can be recovered as a special case. The formulation we consider is close to the setting of functional regression [25] and reduces to standard linear
regression if H is finite dimensional, see Appendix A.
Let $\mathcal{H}$ be a separable Hilbert space with inner product and norm denoted by $\langle\cdot,\cdot\rangle_{\mathcal{H}}$ and $\|\cdot\|_{\mathcal{H}}$. Let $(X, Y)$ be a pair of random variables on a probability space $(\Omega, \mathcal{S}, \mathbb{P})$, with values in $\mathcal{H}$ and $\mathbb{R}$, respectively. Denote by $\rho$ the distribution of $(X, Y)$, by $\rho_X$ the marginal measure on $\mathcal{H}$, and by $\rho(\cdot|x)$ the conditional measure on $\mathbb{R}$ given $x \in \mathcal{H}$. Considering the square loss function, the problem under study is the minimization of the risk,
$$\inf_{w\in\mathcal{H}} \mathcal{E}(w), \qquad \mathcal{E}(w) = \int_{\mathcal{H}\times\mathbb{R}} (\langle w, x\rangle_{\mathcal{H}} - y)^2\, d\rho(x, y), \qquad (1)$$
provided the distribution $\rho$ is fixed but known only through a training set $z = \{(x_1, y_1), \dots, (x_n, y_n)\}$, that is, a realization of $n \in \mathbb{N}^*$ independent identical copies of $(X, Y)$.
In the following, we measure the quality of an approximate solution $\hat{w} \in \mathcal{H}$ (an estimator) considering the excess risk
$$\mathcal{E}(\hat{w}) - \inf_{\mathcal{H}} \mathcal{E}. \qquad (2)$$
If the set of solutions of Problem (1) is non-empty, that is $O = \operatorname{argmin}_{\mathcal{H}} \mathcal{E} \neq \varnothing$, we also consider
$$\|\hat{w} - w^\dagger\|_{\mathcal{H}}, \qquad \text{where} \qquad w^\dagger = \operatorname*{argmin}_{w \in O} \|w\|_{\mathcal{H}}. \qquad (3)$$
More precisely we are interested in deriving almost sure convergence results and finite sample
bounds on the above error measures. This requires making some assumptions that we discuss next.
We make throughout the following basic assumption.
Assumption 1. There exist $M \in \,]0, +\infty[$ and $\kappa \in \,]0, +\infty[$ such that $|y| \le M$ $\rho$-almost surely, and $\|x\|^2_{\mathcal{H}} \le \kappa$ $\rho_X$-almost surely.
The above assumption is fairly standard. The boundedness assumption on the output is satisfied in classification, see Appendix A, and can be easily relaxed, see e.g. [8]. The boundedness assumption on the input can also be relaxed, but the resulting analysis is more involved. We omit these developments for the sake of clarity. It is well known (see e.g. [14]) that, under Assumption 1, the risk is a convex and continuous functional on $L^2(\mathcal{H}, \rho_X)$, the space of square-integrable functions with norm $\|f\|_\rho^2 = \int_{\mathcal{H}} |f(x)|^2 d\rho_X(x)$. The minimizer of the risk on $L^2(\mathcal{H}, \rho_X)$ is the regression function $f_\rho(x) = \int y\, d\rho(y|x)$ for $\rho_X$-almost every $x \in \mathcal{H}$. By considering Problem (1) we are restricting the search for a solution to linear functions. Note that, since $\mathcal{H}$ is in general infinite dimensional, the minimum in (1) might not be achieved. Indeed, bounds on the error measures in (2) and (3) depend on whether, and how well, the regression function can be linearly approximated. The following assumption quantifies this requirement in a precise way.
Assumption 2. Consider the space $\mathcal{L}_\Phi = \{f : \mathcal{H} \to \mathbb{R} \mid \exists w \in \mathcal{H} \text{ with } f(x) = \langle w, x\rangle\ \rho_X\text{-a.s.}\}$, and let $\overline{\mathcal{L}_\Phi}$ be its closure in $L^2(\mathcal{H}, \rho_X)$. Moreover, consider the operator
$$L : L^2(\mathcal{H}, \rho_X) \to L^2(\mathcal{H}, \rho_X), \qquad Lf(x) = \int_{\mathcal{H}} \langle x, x'\rangle f(x')\, d\rho_X(x'), \quad \forall f \in L^2(\mathcal{H}, \rho_X). \qquad (4)$$
Define $g_\rho = \operatorname{argmin}_{g \in \overline{\mathcal{L}_\Phi}} \|f_\rho - g\|_\rho$. Let $r \in [0, +\infty[$, and assume that
$$(\exists g \in L^2(\mathcal{H}, \rho_X)) \quad \text{such that} \quad g_\rho = L^r g. \qquad (5)$$
The above assumption is standard in the context of RKHS [8]. Since its statement is somewhat technical, and since we provide a formulation in a Hilbert space rather than in the usual RKHS setting, we further comment on its interpretation. We begin by noting that $\mathcal{L}_\Phi$ is the space of linear functions indexed by $\mathcal{H}$ and is a proper subspace of $L^2(\mathcal{H}, \rho_X)$ if Assumption 1 holds. Moreover, under the same assumption, it is easy to see that the operator $L$ is linear, self-adjoint, positive definite and trace class, hence compact, so that its fractional power in (4) is well defined. Most importantly, the following equality, which is analogous to Mercer's theorem [30], can be shown fairly easily:
$$\overline{\mathcal{L}_\Phi} = L^{1/2}\big(L^2(\mathcal{H}, \rho_X)\big). \qquad (6)$$
This last observation allows to provide an interpretation of Condition (5). Indeed, given (6), for $r = 1/2$, Condition (5) states that $g_\rho$ belongs to $\mathcal{L}_\Phi$, rather than its closure. In this case, Problem (1) has at least one solution, and the set $O$ in (3) is not empty. Vice versa, if $O \neq \varnothing$ then $g_\rho \in \mathcal{L}_\Phi$, and $w^\dagger$ is well-defined. If $r > 1/2$ the condition is stronger than for $r = 1/2$, since the subspaces $L^r(L^2(\mathcal{H}, \rho_X))$ are nested subspaces of $L^2(\mathcal{H}, \rho_X)$ for increasing $r$.¹
2.1 Iterative Incremental Regularized Learning
The learning algorithm we consider is defined by the following iteration.
Let $\hat{w}_0 \in \mathcal{H}$ and $\gamma \in \mathbb{R}_{++}$. Consider the sequence $(\hat{w}_t)_{t\in\mathbb{N}}$ generated through the following procedure: given $t \in \mathbb{N}$, define
$$\hat{w}_{t+1} = \hat{u}_t^n, \qquad (7)$$
where $\hat{u}_t^n$ is obtained at the end of one cycle, namely as the last step of the recursion
$$\hat{u}_t^0 = \hat{w}_t; \qquad \hat{u}_t^i = \hat{u}_t^{i-1} - \frac{\gamma}{n}\big(\langle \hat{u}_t^{i-1}, x_i\rangle_{\mathcal{H}} - y_i\big)x_i, \quad i = 1, \dots, n. \qquad (8)$$

¹ If $r < 1/2$ then the regression function does not have a best linear approximation since $g_\rho \notin \mathcal{L}_\Phi$, and in particular, for $r = 0$ we are making no assumption. Intuitively, for $0 < r < 1/2$, the condition quantifies how far $g_\rho$ is from $\mathcal{L}_\Phi$, that is, from being well approximated by a linear function.
Each cycle, called an epoch, corresponds to one pass over the data. The above iteration can be seen as the incremental gradient method [4, 19] for the minimization of the empirical risk corresponding to $z$, that is, the functional
$$\hat{\mathcal{E}}(w) = \frac{1}{n} \sum_{i=1}^n (\langle w, x_i\rangle_{\mathcal{H}} - y_i)^2 \qquad (9)$$
(see also Section B.2). Indeed, there is a vast literature on how the iterations (7), (8) can be used to minimize the empirical risk [4, 19]. Unlike these studies, in this paper we are interested in how the iterations (7), (8) can be used to approximately minimize the risk $\mathcal{E}$. The key idea is that while $\hat{w}_t$ is close to a minimizer of the empirical risk when $t$ is sufficiently large, a good approximate solution of Problem (1) can be found by terminating the iterations earlier (early stopping). The analysis in the next few sections grounds this latter intuition theoretically.
Remark 1 (Representer theorem). Let $\mathcal{H}$ be a RKHS of functions from $X$ to $Y$ defined by a kernel $K : X \times X \to \mathbb{R}$. Let $\hat{w}_0 = 0$; then the iterate after $t$ epochs of the algorithm in (7)-(8) can be written as $\hat{w}_t(\cdot) = \sum_{k=1}^n (\alpha_t)_k K_{x_k}$, for suitable coefficients $\alpha_t = ((\alpha_t)_1, \dots, (\alpha_t)_n) \in \mathbb{R}^n$, updated as follows: $\alpha_{t+1} = c_t^n$, $c_t^0 = \alpha_t$, and
$$(c_t^i)_k = \begin{cases} (c_t^{i-1})_k - \dfrac{\gamma}{n}\Big(\sum_{j=1}^n K(x_i, x_j)(c_t^{i-1})_j - y_i\Big), & k = i\\ (c_t^{i-1})_k, & k \neq i.\end{cases}$$
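A sketch of this kernelized update follows (our illustration; `kiir_epoch` is a hypothetical helper). Only the $i$-th coefficient changes when the $i$-th point is processed, so one epoch costs $O(n^2)$ operations given the Gram matrix.

```python
import numpy as np

def kiir_epoch(K, y, alpha, gamma):
    """One epoch of kernel IIR. K is the (n, n) Gram matrix K(x_i, x_j)."""
    c = alpha.copy()                               # c_t^0 = alpha_t
    n = len(y)
    for i in range(n):
        c[i] -= (gamma / n) * (K[i] @ c - y[i])    # update only the i-th coefficient
    return c                                       # alpha_{t+1} = c_t^n
```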
3 Early stopping for incremental iterative regularization
In this section, we present and discuss the main results of the paper, together with a sketch of the
proof. The complete proofs can be found in Appendix B. We first present convergence results and
then finite sample bounds for the quantities in (2) and (3).
Theorem 1. In the setting of Section 2, let Assumption 1 hold. Let $\gamma \in \,]0, \kappa^{-1}]$. Then the following hold:
(i) If we choose a stopping rule $t^* : \mathbb{N}^* \to \mathbb{N}^*$ such that
$$\lim_{n\to+\infty} t^*(n) = +\infty \quad \text{and} \quad \lim_{n\to+\infty} \frac{t^*(n)^3 \log n}{n} = 0 \qquad (10)$$
then
$$\lim_{n\to+\infty} \mathcal{E}(\hat{w}_{t^*(n)}) - \inf_{w\in\mathcal{H}} \mathcal{E}(w) = 0 \quad \mathbb{P}\text{-almost surely}. \qquad (11)$$
(ii) Suppose additionally that the set $O$ of minimizers of (1) is nonempty and let $w^\dagger$ be defined as in (3). If we choose a stopping rule $t^* : \mathbb{N}^* \to \mathbb{N}^*$ satisfying the conditions in (10) then
$$\|\hat{w}_{t^*(n)} - w^\dagger\|_{\mathcal{H}} \to 0 \quad \mathbb{P}\text{-almost surely}. \qquad (12)$$
The above result shows that, for an a priori fixed step-size, consistency is achieved by computing a suitable number $t^*(n)$ of iterations of algorithm (7)-(8) given $n$ points. The number of required iterations tends to infinity as the number of available training points increases. Condition (10) can be interpreted as an early stopping rule, since it requires the number of epochs not to grow too fast. In particular, this excludes the choice $t^*(n) = 1$ for all $n \in \mathbb{N}^*$, namely considering only one pass over the data. In the following remark we show that, if we let $\gamma = \gamma(n)$ depend on the length of one epoch, convergence is recovered also for one pass.
Remark 2 (Recovering stochastic gradient descent). The algorithm in (7)-(8) for $t = 1$ is a stochastic gradient descent (one pass over a sequence of i.i.d. data) with stepsize $\gamma/n$. Choosing $\gamma(n) = \kappa^{-1} n^{\theta}$, with $\theta < 1/5$, in Algorithm (7)-(8), we can derive almost sure convergence of $\mathcal{E}(\hat{w}_1) - \inf_{\mathcal{H}} \mathcal{E}$ as $n \to +\infty$, relying on a proof similar to that of Theorem 1.
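A small sketch of this one-pass regime (our illustration; the exponent value is an arbitrary admissible choice): one epoch of (7)-(8) is one-pass SGD with per-sample step $\gamma/n$, and letting $\gamma(n)$ grow keeps each step non-expansive while the total step budget diverges.

```python
import numpy as np

def one_pass_sgd(X, y, theta=0.15):
    n, d = X.shape
    kappa = (X ** 2).sum(axis=1).max()   # kappa >= ||x_i||^2 (Assumption 1)
    gamma = (n ** theta) / kappa         # step-size scaling of Remark 2 (theta < 1/5)
    w = np.zeros(d)
    for i in range(n):                   # a single pass over the data
        w -= (gamma / n) * (X[i] @ w - y[i]) * X[i]
    return w
```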
To derive finite sample bounds further assumptions are needed. Indeed, we will see that the behavior
of the bias of the estimator depends on the smoothness Assumption 2. We are now in a position to state
our main result, giving a finite sample bound.
Theorem 2 (Finite sample bounds in $\mathcal{H}$). In the setting of Section 2, let $\gamma \in \,]0, \kappa^{-1}]$ for every $t \in \mathbb{N}$. Suppose that Assumption 2 is satisfied for some $r \in \,]1/2, +\infty[$. Then the set $O$ of minimizers of (1) is nonempty, and $w^\dagger$ in (3) is well defined. Moreover, the following hold:
(i) There exists $c \in \,]0, +\infty[$ such that, for every $t \in \mathbb{N}^*$, with probability greater than $1-\delta$,
$$\|\hat{w}_t - w^\dagger\|_{\mathcal{H}} \le \frac{32\log\frac{16}{\delta}}{\sqrt{n}}\Big(M\kappa^{-1/2} + 2M^2\kappa^{-1} + 3\|g\|_\rho\,\kappa^{r-\frac{3}{2}}\Big)\gamma t + \Big(\frac{r-\frac12}{\gamma}\Big)^{r-\frac12}\|g\|_\rho\, t^{\frac12 - r}. \qquad (13)$$
(ii) For the stopping rule $t^* : \mathbb{N}^* \to \mathbb{N}^*$, $t^*(n) = \lceil n^{\frac{1}{2r+1}} \rceil$, with probability greater than $1-\delta$,
$$\|\hat{w}_{t^*(n)} - w^\dagger\|_{\mathcal{H}} \le 4\Big[32\log\tfrac{16}{\delta}\Big(M\kappa^{-1/2} + 2M^2\kappa^{-1} + 3\|g\|_\rho\,\kappa^{r-\frac{3}{2}}\Big)\gamma + \Big(\tfrac{r-\frac12}{\gamma}\Big)^{r-\frac12}\|g\|_\rho\Big]\, n^{-\frac{r-1/2}{2r+1}}. \qquad (14)$$
The dependence on $\kappa$ suggests that a big $\kappa$, which corresponds to a small $\gamma$, helps in decreasing the sample error, but increases the approximation error. Next we present the result for the excess risk. We consider only the attainable case, that is, the case $r > 1/2$ in Assumption 2. The case $r \le 1/2$ is deferred to Appendix A, since both the proof and the statement are conceptually similar to the attainable case.
Theorem 3 (Finite sample bounds for the risk, attainable case). In the setting of Section 2, let Assumption 1 hold, and let $\gamma \in \,]0, \kappa^{-1}]$. Let Assumption 2 be satisfied for some $r \in \,]1/2, +\infty]$. Then the following hold:
(i) For every $t \in \mathbb{N}^*$, with probability greater than $1-\delta$,
$$\mathcal{E}(\hat{w}_t) - \inf_{\mathcal{H}}\mathcal{E} \le \frac{\big(32\log(16/\delta)\big)^2}{n}\Big[M + 2M\kappa^{-1/2} + 3\kappa^r\|g\|_\rho\Big]^2 \gamma^2 t^2 + 2\Big(\frac{r}{\gamma t}\Big)^{2r}\|g\|_\rho^2 \qquad (15)$$
(ii) For the stopping rule $t^* : \mathbb{N}^* \to \mathbb{N}^*$, $t^*(n) = \lceil n^{\frac{1}{2(r+1)}} \rceil$, with probability greater than $1-\delta$,
$$\mathcal{E}(\hat{w}_{t^*(n)}) - \inf_{\mathcal{H}}\mathcal{E} \le 8\Big[\big(32\log\tfrac{16}{\delta}\big)^2\big(M + 2M\kappa^{-1/2} + 3\kappa^r\|g\|_\rho\big)^2\gamma^2 + 2\big(\tfrac{r}{\gamma}\big)^{2r}\|g\|_\rho^2\Big]\, n^{-\frac{r}{r+1}} \qquad (16)$$
Equations (13) and (15) arise from a form of bias-variance (sample-approximation) decomposition
of the error. Choosing the number of epochs that optimize the bounds in (13) and (15) we derive
a priori stopping rules and corresponding bounds (14) and (16). Again, these results confirm that
the number of epochs acts as a regularization parameter and the best choices following from equations (13) and (15) suggest multiple passes over the data to be beneficial. In both cases, the stopping
rule depends on the smoothness parameter r which is typically unknown, and hold-out cross validation is often used in practice. Following [9], it is possible to show that this procedure allows to
adaptively achieve the same convergence rate as in (16).
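Since $r$ is unknown, here is a hedged sketch of that hold-out procedure (our illustration; the split fraction and names are arbitrary): run the incremental updates and keep the iterate with the smallest validation error.

```python
import numpy as np

def iir_holdout(X, y, gamma, max_epochs, val_frac=0.2, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    n_val = int(val_frac * len(y))
    val, tr = idx[:n_val], idx[n_val:]
    w = np.zeros(X.shape[1])
    best_w, best_err = w.copy(), np.inf
    for _ in range(max_epochs):
        for i in tr:                                  # one epoch on the training split
            w -= (gamma / len(tr)) * (X[i] @ w - y[i]) * X[i]
        err = np.mean((X[val] @ w - y[val]) ** 2)     # validation risk after this epoch
        if err < best_err:                            # early-stopping by hold-out
            best_err, best_w = err, w.copy()
    return best_w
```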
3.1 Discussion
In Theorem 2, the obtained bound can be compared to known lower bounds, as well as to previous results for least squares algorithms obtained under Assumption 2. Minimax lower bounds
and individual lower bounds [8, 31] suggest that, for $r > 1/2$, $O(n^{-(r-1/2)/(2r+1)})$ is the optimal capacity-independent bound for the $\mathcal{H}$-norm.² In this sense, Theorem 2 provides sharp bounds on the iterates. Bounds can be improved only under stronger assumptions, e.g. on the covering numbers or on the eigenvalues of $L$ [30]. This question is left for future work. The lower bounds for the excess risk [8, 31] are of the form $O(n^{-2r/(2r+1)})$ and in this case the results in Theorems 3
and 7 are not sharp. Our results can be contrasted with online learning algorithms that use step-size
² In a recent manuscript, it has been proved that this is indeed the minimax lower bound (G. Blanchard, personal communication).
as regularization parameter. Optimal capacity independent bounds are obtained in [35], see also
[32] and indeed such results can be further improved considering capacity assumptions, see [1] and
references therein. Interestingly, our results can also be contrasted with non incremental iterative
regularization approaches [36, 34, 3, 5, 9, 26]. Our results show that incremental iterative regularization, with distribution independent step-size, behaves as a batch gradient descent, at least in terms
of iterates convergence. Proving advantages of incremental regularization over the batch one is an
interesting future research direction. Finally, we note that optimal capacity independent and dependent bounds are known for several least squares algorithms, including Tikhonov regularization, see
e.g. [31], and spectral filtering methods [3, 9]. These algorithms are essentially equivalent from a
statistical perspective but different from a computational perspective.
3.2 Elements of the proof
The proofs of the main results are based on a suitable decomposition of the error to be estimated as
the sum of two quantities that can be interpreted as a sample and an approximation error, respectively. Bounds on these two terms are then provided. The main technical contribution of the paper is
the sample error bound. The difficulty in proving this result is due to multiple passes over the data,
which induce statistical dependencies in the iterates.
Error decomposition. We consider an auxiliary iteration $(w_t)_{t\in\mathbb{N}}$ which is the expectation of the iterations (7) and (8), starting from $w_0 \in \mathcal{H}$ with step-size $\gamma \in \mathbb{R}_{++}$. More explicitly, the considered iteration generates $w_{t+1}$ according to
$$w_{t+1} = u_t^n, \qquad (17)$$
where $u_t^n$ is given by
$$u_t^0 = w_t; \qquad u_t^i = u_t^{i-1} - \frac{\gamma}{n}\int_{\mathcal{H}\times\mathbb{R}} \big(\langle u_t^{i-1}, x\rangle_{\mathcal{H}} - y\big)\, x\, d\rho(x, y). \qquad (18)$$
If we let $S : \mathcal{H} \to L^2(\mathcal{H}, \rho_X)$ be the linear map $w \mapsto \langle w, \cdot\rangle_{\mathcal{H}}$, which is bounded by $\sqrt{\kappa}$ under Assumption 1, then it is well-known that [13]
$$(\forall t \in \mathbb{N})\quad \mathcal{E}(\hat{w}_t) - \inf_{\mathcal{H}}\mathcal{E} = \|S\hat{w}_t - g_\rho\|_\rho^2 \le 2\|S\hat{w}_t - Sw_t\|_\rho^2 + 2\|Sw_t - g_\rho\|_\rho^2 \le 2\kappa\|\hat{w}_t - w_t\|_{\mathcal{H}}^2 + 2(\mathcal{E}(w_t) - \inf_{\mathcal{H}}\mathcal{E}). \qquad (19)$$
In this paper, we refer to the gap between the empirical and the expected iterates, $\|\hat{w}_t - w_t\|_{\mathcal{H}}$, as the sample error, and to $\mathcal{A}(t, \gamma, n) = \mathcal{E}(w_t) - \inf_{\mathcal{H}}\mathcal{E}$ as the approximation error. Similarly, if $w^\dagger$ (as defined in (3)) exists, using the triangle inequality we obtain
$$\|\hat{w}_t - w^\dagger\|_{\mathcal{H}} \le \|\hat{w}_t - w_t\|_{\mathcal{H}} + \|w_t - w^\dagger\|_{\mathcal{H}}. \qquad (20)$$
Proof main steps. In the setting of Section 2, we summarize the key steps to derive a general
bound for the sample error (the proof of the behavior of the approximation error is more standard).
The bound on the sample error is derived through many technical lemmas and uses concentration
inequalities applied to martingales (the crucial point is the inequality in STEP 5 below). Its complete
derivation is reported in Appendix B.2. We introduce the additional linear operators: $T : \mathcal{H} \to \mathcal{H}$, $T = S^*S$; and, for every $x \in X$, $S_x : \mathcal{H} \to \mathbb{R}$, $S_x w = \langle w, x\rangle$, and $T_x : \mathcal{H} \to \mathcal{H}$, $T_x = S_x^* S_x$. Moreover, set $\hat{T} = \frac{1}{n}\sum_{i=1}^n T_{x_i}$. We are now ready to state the main steps of the proof.
Sample error bound (STEP 1 to 5)
STEP 1 (see Proposition 1): Find equivalent formulations for the sequences $\hat{w}_t$ and $w_t$:
$$\hat{w}_{t+1} = (I - \gamma \hat{T})\hat{w}_t + \gamma\Big(\frac{1}{n}\sum_{j=1}^n S_{x_j}^* y_j\Big) + \gamma^2\big(\hat{A}\hat{w}_t - \hat{b}\big),$$
$$w_{t+1} = (I - \gamma T)w_t + \gamma S^* g_\rho + \gamma^2\big(A w_t - b\big),$$
where
$$\hat{A} = \frac{1}{n^2}\sum_{k=2}^n \Big[\prod_{i=k+1}^n \Big(I - \frac{\gamma}{n}T_{x_i}\Big)\Big] T_{x_k} \sum_{j=1}^{k-1} T_{x_j}, \qquad \hat{b} = \frac{1}{n^2}\sum_{k=2}^n \Big[\prod_{i=k+1}^n \Big(I - \frac{\gamma}{n}T_{x_i}\Big)\Big] T_{x_k} \sum_{j=1}^{k-1} S_{x_j}^* y_j,$$
$$A = \frac{1}{n^2}\sum_{k=2}^n \Big[\prod_{i=k+1}^n \Big(I - \frac{\gamma}{n}T\Big)\Big] T \sum_{j=1}^{k-1} T, \qquad b = \frac{1}{n^2}\sum_{k=2}^n \Big[\prod_{i=k+1}^n \Big(I - \frac{\gamma}{n}T\Big)\Big] T \sum_{j=1}^{k-1} S^* g_\rho.$$
STEP 2 (see Lemma 5): Use the formulation obtained in STEP 1 to derive the following recursive inequality,
$$\hat{w}_t - w_t = \big(I - \gamma \hat{T} + \gamma^2 \hat{A}\big)^t (\hat{w}_0 - w_0) + \gamma \sum_{k=0}^{t-1} \big(I - \gamma \hat{T} + \gamma^2 \hat{A}\big)^{t-k-1} \zeta_k,$$
with $\zeta_k = (T - \hat{T})w_k + \gamma(\hat{A} - A)w_k + \big(\frac{1}{n}\sum_{i=1}^n S_{x_i}^* y_i - S^* g_\rho\big) + \gamma(b - \hat{b})$.
STEP 3 (see Lemmas 6 and 7): Initialize $\hat{w}_0 = w_0 = 0$, prove that $\|I - \gamma\hat{T} + \gamma^2\hat{A}\| \le 1$, and derive from STEP 2 that
$$\|\hat{w}_t - w_t\|_{\mathcal{H}} \le \gamma\big(\|T - \hat{T}\| + \gamma\|\hat{A} - A\|\big)\sum_{k=0}^{t-1}\|w_k\|_{\mathcal{H}} + \gamma t\Big(\Big\|\frac{1}{n}\sum_{i=1}^n S_{x_i}^* y_i - S^* g_\rho\Big\| + \gamma\|b - \hat{b}\|\Big).$$
STEP 4 (see Lemma 8): Let Assumption 2 hold for some $r \in \mathbb{R}_+$ and $g \in L^2(\mathcal{H}, \rho_X)$. Prove that
$$(\forall t \in \mathbb{N})\qquad \|w_t\|_{\mathcal{H}} \le \begin{cases} \max\{\kappa^{r-1/2}, (\gamma t)^{1/2-r}\}\,\|g\|_\rho & \text{if } r \in [0, 1/2[,\\ \kappa^{r-1/2}\|g\|_\rho & \text{if } r \in [1/2, +\infty[.\end{cases}$$
STEP 5 (see Lemma 9 and Proposition 2): Prove that, with probability greater than $1-\delta$, the following inequalities hold:
$$\|\hat{A} - A\|_{HS} \le \frac{32\kappa^2}{3\sqrt{n}}\log\frac{4}{\delta}, \qquad \|\hat{T} - T\|_{HS} \le \frac{16\kappa}{3\sqrt{n}}\log\frac{2}{\delta},$$
$$\|\hat{b} - b\|_{\mathcal{H}} \le \frac{32\kappa^{3/2} M}{3\sqrt{n}}\log\frac{4}{\delta}, \qquad \Big\|\frac{1}{n}\sum_{i=1}^n S_{x_i}^* y_i - S^* g_\rho\Big\|_{\mathcal{H}} \le \frac{16\sqrt{\kappa}\, M}{3\sqrt{n}}\log\frac{2}{\delta}.$$
STEP 6 (approximation error bound, see Theorem 6): Prove that, if Assumption 2 holds for some $r \in \,]0, +\infty[$, then $\mathcal{E}(w_t) - \inf_{\mathcal{H}}\mathcal{E} \le \big(\frac{r}{\gamma t}\big)^{2r}\|g\|_\rho^2$. Moreover, if Assumption 2 holds with $r = 1/2$, then $\|w_t - w^\dagger\|_{\mathcal{H}} \to 0$, and if Assumption 2 holds for some $r \in \,]1/2, +\infty[$, then $\|w_t - w^\dagger\|_{\mathcal{H}} \le \big(\frac{r-1/2}{\gamma t}\big)^{r-1/2}\|g\|_\rho$.
STEP 7: Plug the sample and approximation error bounds obtained in STEP 1-5 and STEP 6 in
(19) and (20), respectively.
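The closed form in STEP 1 can be checked numerically. The following sketch (ours; finite-dimensional random data and illustrative constants) runs one cycle of (8) directly and compares it with $\hat{w}_{t+1} = (I - \gamma\hat{T})\hat{w}_t + \frac{\gamma}{n}\sum_j y_j x_j + \gamma^2(\hat{A}\hat{w}_t - \hat{b})$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, gamma = 6, 4, 0.3
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
w = rng.standard_normal(d)

# Direct inner recursion (8): one cycle over the n points.
u = w.copy()
for i in range(n):
    u = u - (gamma / n) * (X[i] @ u - y[i]) * X[i]

# Closed form of STEP 1.
I = np.eye(d)
T = [np.outer(X[i], X[i]) for i in range(n)]         # T_{x_i}
That = sum(T) / n

def tail_prod(k):
    """prod_{i=k+1}^{n} (I - (gamma/n) T_{x_i}), with 1-based k."""
    P = I.copy()
    for i in range(k, n):                            # 0-based i = k..n-1 <-> 1-based k+1..n
        P = (I - (gamma / n) * T[i]) @ P
    return P

Ahat = sum(tail_prod(k) @ T[k - 1] @ sum(T[j] for j in range(k - 1))
           for k in range(2, n + 1)) / n**2
bhat = sum(tail_prod(k) @ T[k - 1] @ sum(y[j] * X[j] for j in range(k - 1))
           for k in range(2, n + 1)) / n**2
w_next = (I - gamma * That + gamma**2 * Ahat) @ w + (gamma / n) * (y @ X) - gamma**2 * bhat

print(np.allclose(u, w_next))                        # expected output: True
```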
4 Experiments
Synthetic data. We consider a scalar linear regression problem with random design. The input
points $(x_i)_{1\le i\le n}$ are uniformly distributed in $[0, 1]$ and the output points are obtained as $y_i = \langle w^*, \Phi(x_i)\rangle + N_i$, where $N_i$ is Gaussian noise with zero mean and standard deviation 1 and $\Phi = (\varphi_k)_{1\le k\le d}$ is a dictionary of functions whose $k$-th element is $\varphi_k(x) = \cos((k-1)x) + \sin((k-1)x)$.
In Figure 1, we plot the test error for d = 5 (with n = 80 in (a) and 800 in (b)). The plots show
that the number of the epochs acts as a regularization parameter, and that early stopping is beneficial
to achieve a better test error. Moreover, according to the theory, the experiments suggest that the
number of performed epochs increases if the number of available training points increases.
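The data-generating process just described can be reproduced with a short script (a sketch; the draw of $w^*$ is an assumption, since the paper does not specify it):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 80, 5
x = rng.uniform(0.0, 1.0, n)
# phi_k(x) = cos((k-1)x) + sin((k-1)x); with 0-based k the (k-1) shift is built in
Phi = np.stack([np.cos(k * x) + np.sin(k * x) for k in range(d)], axis=1)
w_star = rng.standard_normal(d)                 # assumed draw of the target
y = Phi @ w_star + rng.standard_normal(n)       # Gaussian noise, standard deviation 1
# (Phi, y) can be fed to the IIR sketch of Section 2.1, tracking test error per epoch.
```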
[Figure 1: two panels, (a) and (b), plotting test error against the number of iterations.]
Figure 1: Test error as a function of the number of iterations. In (a), n = 80, and the total number of iterations of IIR is 8000, corresponding to 100 epochs. In (b), n = 800 and the total number of epochs is 400. The best test error is obtained for 9 epochs in (a) and for 31 epochs in (b).

³ Available at http://www.cs.toronto.edu/~delve/data/comp-activ/desc.html
⁴ Adult and Breast Cancer Wisconsin (Diagnostic), UCI repository, 2013.

Real data. We tested the kernelized version of our algorithm (see Remark 1 and Appendix A) on the cpuSmall³, Adult, and Breast Cancer Wisconsin (Diagnostic)⁴ real-world datasets. We considered a subset of Adult, with n = 1600. The results are shown in Figure 2. A
comparison of the test errors obtained with the kernelized version of the method proposed in this
paper (Kernel Incremental Iterative Regularization (KIIR)), Kernel Iterative Regularization (KIR),
that is the kernelized version of gradient descent, and Kernel Ridge Regression (KRR) is reported in
Table 1. The results show that the test error of KIIR is comparable to that of KIR and KRR.
[Figure 2: a single panel plotting training and validation error against the number of iterations (up to 4 x 10^6).]
Figure 2: Training (orange) and validation (blue) classification errors obtained by KIIR on the Breast Cancer dataset as a function of the number of iterations. The test error increases after a certain number of iterations, while the training error is "decreasing" with the number of iterations.
Table 1: Test error comparison on real datasets. Median values over 5 trials.

Dataset       | ntr  | d   | Error Measure | KIIR   | KRR    | KIR
cpuSmall      | 5243 | 12  | RMSE          | 5.9125 | 3.6841 | 5.4665
Adult         | 1600 | 123 | Class. Err.   | 0.167  | 0.164  | 0.154
Breast Cancer | 400  | 30  | Class. Err.   | 0.0118 | 0.0118 | 0.0237
Acknowledgments
This material is based upon work supported by CBMM, funded by NSF STC award CCF-1231216.
and by the MIUR FIRB project RBFR12M3AC. S. Villa is member of GNAMPA of the Istituto
Nazionale di Alta Matematica (INdAM).
References
[1] F. Bach and A. Dieuleveut. Non-parametric stochastic approximation with large step sizes. arXiv:1408.0361, 2014.
[2] P. Bartlett and M. Traskin. Adaboost is consistent. J. Mach. Learn. Res., 8:2347–2368, 2007.
[3] F. Bauer, S. Pereverzev, and L. Rosasco. On regularization algorithms in learning theory. J. Complexity, 23(1):52–72, 2007.
[4] D. P. Bertsekas. A new class of incremental gradient methods for least squares problems. SIAM J. Optim., 7(4):913–926, 1997.
[5] G. Blanchard and N. Krämer. Optimal learning rates for kernel conjugate gradient regression. In Advances in Neural Inf. Proc. Systems (NIPS), pages 226–234, 2010.
[6] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In Suvrit Sra, Sebastian Nowozin, and Stephen J. Wright, editors, Optimization for Machine Learning, pages 351–368. MIT Press, 2011.
[7] P. Buhlmann and B. Yu. Boosting with the l2 loss: Regression and classification. J. Amer. Stat. Assoc., 98:324–339, 2003.
[8] A. Caponnetto and E. De Vito. Optimal rates for regularized least-squares algorithm. Found. Comput. Math., 2006.
[9] A. Caponnetto and Y. Yao. Cross-validation based adaptation for regularization operators in learning theory. Anal. Appl., 08:161–183, 2010.
[10] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. IEEE Trans. Information Theory, 50(9):2050–2057, 2004.
[11] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[12] F. Cucker and D. X. Zhou. Learning Theory: An Approximation Theory Viewpoint. Cambridge University Press, 2007.
[13] E. De Vito, L. Rosasco, A. Caponnetto, U. De Giovannini, and F. Odone. Learning from examples as an inverse problem. J. Mach. Learn. Res., 6:883–904, 2005.
[14] E. De Vito, L. Rosasco, A. Caponnetto, M. Piana, and A. Verri. Some properties of regularized kernel methods. Journal of Machine Learning Research, 5:1363–1390, 2004.
[15] H. W. Engl, M. Hanke, and A. Neubauer. Regularization of Inverse Problems. Kluwer, 1996.
[16] P.-S. Huang, H. Avron, T. Sainath, V. Sindhwani, and B. Ramabhadran. Kernel methods match deep neural networks on TIMIT. In IEEE ICASSP, 2014.
[17] W. Jiang. Process consistency for adaboost. Ann. Stat., 32:13–29, 2004.
[18] Y. LeCun, L. Bottou, G. Orr, and K. Muller. Efficient backprop. In G. Orr and K. Muller, editors, Neural Networks: Tricks of the Trade. Springer, 1998.
[19] A. Nedic and D. P. Bertsekas. Incremental subgradient methods for nondifferentiable optimization. SIAM Journal on Optimization, 12(1):109–138, 2001.
[20] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM J. Optim., 19(4):1574–1609, 2008.
[21] A. Nemirovskii. The regularization properties of adjoint gradient method in ill-posed problems. USSR Computational Mathematics and Mathematical Physics, 26(2):7–16, 1986.
[22] F. Orabona. Simultaneous model selection and optimization through parameter-free stochastic learning. NIPS Proceedings, 2014.
[23] I. Pinelis. Optimum bounds for the distributions of martingales in Banach spaces. Ann. Probab., 22(4):1679–1706, 1994.
[24] B. Polyak. Introduction to Optimization. Optimization Software, New York, 1987.
[25] J. Ramsay and B. Silverman. Functional Data Analysis. Springer-Verlag, New York, 2005.
[26] G. Raskutti, M. Wainwright, and B. Yu. Early stopping for non-parametric regression: An optimal data-dependent stopping rule. In 49th Annual Allerton Conference, pages 1318–1325. IEEE, 2011.
[27] S. Smale and D. Zhou. Shannon sampling II: Connections to learning theory. Appl. Comput. Harmon. Anal., 19(3):285–302, November 2005.
[28] S. Smale and D.-X. Zhou. Learning theory estimates via integral operators and their approximations. Constr. Approx., 26(2):153–172, 2007.
[29] N. Srebro, K. Sridharan, and A. Tewari. Optimistic rates for learning with a smooth loss. arXiv:1009.3896, 2012.
[30] I. Steinwart and A. Christmann. Support Vector Machines. Springer, 2008.
[31] I. Steinwart, D. R. Hush, and C. Scovel. Optimal rates for regularized least squares regression. In COLT, 2009.
[32] P. Tarrès and Y. Yao. Online learning as stochastic approximation of regularization paths: optimality and almost-sure convergence. IEEE Trans. Inform. Theory, 60(9):5716–5735, 2014.
[33] V. Vapnik. Statistical Learning Theory. Wiley, New York, 1998.
[34] Y. Yao, L. Rosasco, and A. Caponnetto. On early stopping in gradient descent learning. Constr. Approx., 26:289–315, 2007.
[35] Y. Ying and M. Pontil. Online gradient descent learning algorithms. Found. Comput. Math., 8:561–596, 2008.
[36] T. Zhang and B. Yu. Boosting with early stopping: Convergence and consistency. Annals of Statistics, pages 1538–1579, 2005.
5,543 | 6,016 | No-Regret Learning in Bayesian Games
Vasilis Syrgkanis
Microsoft Research
New York, NY
[email protected]
Jason Hartline
Northwestern University
Evanston, IL
[email protected]
Éva Tardos
Cornell University
Ithaca, NY
[email protected]
Abstract
Recent price-of-anarchy analyses of games of complete information suggest that
coarse correlated equilibria, which characterize outcomes resulting from no-regret
learning dynamics, have near-optimal welfare. This work provides two main technical results that lift this conclusion to games of incomplete information, a.k.a.,
Bayesian games. First, near-optimal welfare in Bayesian games follows directly
from the smoothness-based proof of near-optimal welfare in the same game when
the private information is public. Second, no-regret learning dynamics converge
to Bayesian coarse correlated equilibrium in these incomplete information games.
These results are enabled by interpretation of a Bayesian game as a stochastic
game of complete information.
1 Introduction
A recent confluence of results from game theory and learning theory gives a simple explanation for
why good outcomes in large families of strategically-complex games can be expected. The advance
comes from (a) a relaxation of the classical notion of equilibrium in games to one that corresponds to the outcome attained when players' behavior ensures asymptotic no-regret, e.g., via standard online
learning algorithms such as weighted majority, and (b) an extension theorem that shows that the
standard approach for bounding the quality of classical equilibria automatically implies the same
bounds on the quality of no-regret equilibria. This paper generalizes these results from static games
to Bayesian games, for example, auctions.
Our motivation for considering learning outcomes in Bayesian games is the following. Many important games model repeated interactions between an uncertain set of participants. Sponsored search,
and more generally, online ad-auction market places, are important examples of such games. Platforms are running millions of auctions, with each individual auction slightly different and of only
very small value, but such market places have high enough volume to be the financial basis of large
industries. This online auction environment is best modeled by a repeated Bayesian game: the auction game is repeated over time, with the set of participants slightly different each time, depending
on many factors from budgets of the players to subtle differences in the opportunities.
A canonical example to which our methods apply is a single-item first-price auction with players'
values for the item drawn from a product distribution. In such an auction, players simultaneously
submit sealed bids and the player with the highest bid wins and pays her bid. The utility of the
winner is her value minus her bid; the utilities of the losers are zero. When the values are drawn from
non-identical continuous distributions the Bayes-Nash equilibrium is given by a differential equation
1
that is not generally analytically tractable, cf. [8] (and generalizations of this model, computationally
hard, see [3]). Again, though their Bayes-Nash equilibria are complex, we show that good outcomes
can be expected in these kinds of auctions.
Our approach to proving that good equilibria can be expected in repeated Bayesian games is to
extend an analogous result for static games,¹ i.e., the setting where the same game with the same
payoffs and the same players is repeated. Nash equilibrium is the classical model of equilibrium for
each stage of the static game. In such an equilibrium the strategies of players may be randomized;
however, the randomizations of the players are independent. To measure the quality of outcomes in
games Koutsoupias and Papadimitriou [9] introduced the price of anarchy, the ratio of the quality
of the worst Nash equilibrium over a socially optimal solution. Price of anarchy results have been
shown for large families of games, with a focus on those relevant for computer networks. Roughgarden [11] identified the canonical approach for bounding the price of anarchy of a game as showing
that it satisfies a natural smoothness condition.
There are two fundamental flaws with Nash equilibrium as a description of strategic behavior. First,
computing a Nash equilibrium can be PPAD hard and, thus, neither should efficient algorithms for
computing a Nash equilibrium be expected nor should any dynamics (of players with bounded computational capabilities) converge to a Nash equilibrium. Second, natural behavior tends to introduce
correlations in strategies and therefore does not converge to Nash equilibrium even in the limit.
Both of these issues can be resolved for large families of games. First, there are relaxations of Nash
equilibrium which allow for correlation in the players? strategies. Of these, this paper will focus
on coarse correlated equilibrium which requires the expected payoff of a player for the correlated
strategy be no worse than the expected payoff of any action at the player's disposal. Second, it was
proven by Blum et al. [2] that the (asymptotic) no-regret property of many online learning algorithms
implies convergence to the set of coarse correlated equilibria.²
Blum et al. [2] extended the definition of the price of anarchy to outcomes obtained when each
player follows a no-regret learning algorithm.³ As coarse correlated equilibrium generalizes Nash
equilibrium it could be that the worst case equilibrium under the former is worse than the latter.
Roughgarden [11], however, observed that there is often no degradation; specifically, the very same
smoothness property that he identified as implying good welfare in Nash equilibrium also proves
good welfare of coarse correlated equilibrium (equivalently: for outcomes from no-regret learners).
Thus, for a large family of static games, we can expect strategic behavior to lead to good outcomes.
This paper extends this theory to Bayesian games. Our contribution is two-fold: (i) We show an
analog of the convergence of no-regret learning to coarse correlated equilibria in Bayesian games,
which is of interest independently of our price of anarchy analysis; and (ii) we show that the coarse
correlated equilibria of the Bayesian version of any smooth static game have good welfare. Combining these results, we conclude that no-regret learning in smooth Bayesian games achieves good
welfare.
These results are obtained as follows. It is possible to view a Bayesian game as a stochastic game,
i.e., where the payoff structure is fixed but there is a random action on the part of Nature. This
viewpoint applied to the above auction example considers a population of bidders associated for
each player and, in each stage, Nature uniformly at random selects one bidder from each population
to participate in the auction. We re-interpret and strengthen a result of Syrgkanis and Tardos [12]
by showing that the smoothness property of the static game (for any fixed profile of bidder values)
implies smoothness of this stochastic game. From the perspective of coarse correlated equilibrium,
there is no difference between a stochastic game and the non-stochastic game with each random
variable replaced with its expected value. Thus, the smoothness framework of Roughgarden [11]
extends this result to imply that the coarse correlated equilibria of the stochastic game are good.
To show that we can expect good outcomes in Bayesian games, it suffices to show that no-regret
learning converges to the coarse correlated equilibrium of this stochastic game. Importantly, when
we consider learning algorithms there is a distinction between the stochastic game where players' payoffs are random variables and the non-stochastic game where players' payoffs are the expectation
¹ In the standard terms of the game theory literature, we extend results for learning in games of complete information to games of incomplete information.
² This result is a generalization of one of Foster and Vohra [7].
³ They referred to this price of anarchy for no-regret learners as the price of total anarchy.
of these variables. Our analysis addresses this distinction and, in particular, shows that, in the
stochastic game on populations, no-regret learning converges almost surely to the set of coarse
correlated equilibrium. This result implies that the average welfare of no-regret dynamics will be
good, almost surely, and not only in expectation over the random draws of Nature.
2 Preliminaries
This section describes a general game theoretic environment which includes auctions and resource
allocation mechanisms. For this general environment we review the results from the literature for
analyzing the social welfare that arises from no-regret learning dynamics in repeated game play.
The subsequent sections of the paper will generalize this model and these results to Bayesian games,
a.k.a., games of incomplete information.
General Game Form. A general game $\mathcal{M}$ is specified by a mapping from a profile $a \in \mathcal{A} \triangleq A_1 \times \dots \times A_n$ of allowable actions of players to an outcome. Behavior in a game may result in (possibly correlated) randomized actions $\mathbf{a} \in \Delta(\mathcal{A})$.⁴ Player i's utility in this game is determined by a profile of individual values $v \in \mathcal{V} \triangleq V_1 \times \dots \times V_n$ and the (implicit) outcome of the game; it is denoted $U_i(\mathbf{a}; v_i) = \mathbb{E}_{a\sim\mathbf{a}}[U_i(a; v_i)]$. In games with a social planner or principal who does not take an action in the game, the utility of the principal is $R(\mathbf{a}) = \mathbb{E}_{a\sim\mathbf{a}}[R(a)]$. In many games of interest, such as auctions or allocation mechanisms, the utility of the principal is the revenue from payments from the players. We will use the terms mechanism and game interchangeably.
In a static game the payoffs of the players (given by v) are fixed. Subsequent sections will consider
Bayesian games in the independent private value model, i.e., where player i's value $v_i$ is drawn independently from the other players' values and is known only privately to player i. Classical
game theory assumes complete information for static games, i.e., that v is known, and incomplete
information in Bayesian games, i.e., that the distribution over V is known. For our study of learning
in games no assumptions of knowledge are made; however, to connect to the classical literature
we will use its terminology of complete and incomplete information to refer to static and Bayesian
games, respectively.
Social Welfare. We will be interested in analyzing the quality of the outcome of the game as
defined by the social welfare, which is the sum of the utilities of the players and the principal. We will denote by $SW(\mathbf{a}; v) = \sum_{i\in[n]} U_i(\mathbf{a}; v_i) + R(\mathbf{a})$ the expected social welfare of mechanism $\mathcal{M}$ under a randomized action profile $\mathbf{a}$. For any valuation profile $v \in \mathcal{V}$ we will denote the optimal social welfare, i.e., the maximum over outcomes of the game of the sum of utilities, by $OPT(v)$.
No-regret Learning and Coarse Correlated Equilibria. For complete information games, i.e.,
fixed valuation profile v, Blum et al. [2] analyzed repeated play of players using no-regret learning
algorithms, and showed that this play converges to a relaxation of Nash equilibrium, namely, coarse
correlated equilibrium.
Definition 1 (no regret). A player achieves no regret in a sequence of play $a^1, \dots, a^T$ if his regret against any fixed strategy $a_i'$ vanishes to zero:
$$\lim_{T\to\infty} \frac{1}{T}\sum_{t=1}^T \big(U_i(a_i', a_{-i}^t; v_i) - U_i(a^t; v_i)\big) = 0. \qquad (1)$$
Definition 2 (coarse correlated equilibrium, CCE). A randomized action profile $\mathbf{a} \in \Delta(\mathcal{A})$ is a coarse correlated equilibrium of a complete information game with valuation profile $v$ if for every player $i$ and $a_i' \in A_i$:
$$\mathbb{E}_{\mathbf{a}}[U_i(\mathbf{a}; v_i)] \ge \mathbb{E}_{\mathbf{a}}[U_i(a_i', \mathbf{a}_{-i}; v_i)] \qquad (2)$$
Theorem 3 (Blum et al. [2]). The empirical distribution of actions of any no-regret sequence in a
repeated game converges to the set of CCE of the static game.
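As a concrete illustration of this convergence (ours, not from the paper), the sketch below has two players run Hedge (multiplicative weights) on a fixed bimatrix game; the empirical distribution of joint play `counts / T` approximates a coarse correlated equilibrium as T grows.

```python
import numpy as np

def hedge_play(U1, U2, T=5000, eta=0.1, seed=0):
    """Two Hedge learners on a bimatrix game; returns the empirical joint play."""
    rng = np.random.default_rng(seed)
    n1, n2 = U1.shape
    w1, w2 = np.zeros(n1), np.zeros(n2)
    counts = np.zeros((n1, n2))
    for _ in range(T):
        p1 = np.exp(w1 - w1.max()); p1 /= p1.sum()
        p2 = np.exp(w2 - w2.max()); p2 /= p2.sum()
        a1, a2 = rng.choice(n1, p=p1), rng.choice(n2, p=p2)
        counts[a1, a2] += 1.0
        w1 += eta * U1[:, a2]          # payoff of every row action vs. realized column
        w2 += eta * U2[a1, :]          # payoff of every column action vs. realized row
    return counts / T                  # approximate CCE for large T
```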
Price of Anarchy of CCE. Roughgarden [11] gave a unifying framework for comparing the social
welfare, under various equilibrium notions including coarse correlated equilibrium, to the optimal
social welfare by defining the notion of a smooth game. This framework was extended to games like
auctions and allocation mechanisms by Syrgkanis and Tardos [12].
⁴ Bold-face symbols denote random variables.
Game/Mechanism                                           | (λ, μ)       | POA      | Reference
Simultaneous First Price Auction with Submodular Bidders | (1 - 1/e, 1) | e/(e-1)  | [12]
First Price Multi-Unit Auction                           | (1 - 1/e, 1) | e/(e-1)  | [5]
First Price Position Auction                             | (1/2, 1)     | 2        | [12]
All-Pay Auction                                          | (1/2, 1)     | 2        | [12]
Greedy Combinatorial Auction with d-complements          | (1 - 1/e, d) | de/(e-1) | [10]
Proportional Bandwidth Allocation Mechanism              | (1/4, 1)     | 4        | [12]
Submodular Welfare Games                                 | (1, 1)       | 2        | [13, 11]
Congestion Games with Linear Delays                      | (5/3, 1/3)   | 5/2      | [11]

Figure 1: Examples of smooth games and mechanisms
Definition 4 (smooth mechanism). A mechanism $\mathcal{M}$ is $(\lambda, \mu)$-smooth for some $\lambda, \mu \ge 0$ if there exists an independent randomized action profile $a^*(v) \in \Delta(A_1) \times \dots \times \Delta(A_n)$ for each valuation profile $v$, such that for any action profile $a \in \mathcal{A}$ and valuation profile $v \in \mathcal{V}$:
$$\sum_{i\in[n]} U_i(a_i^*(v), a_{-i}; v_i) \ge \lambda \cdot OPT(v) - \mu \cdot R(a). \qquad (3)$$
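As an example of this definition, the following numeric check (our illustration) verifies that the single-item first-price auction is $(1/2, 1)$-smooth with the deviation $a_i^*(v) = v_i/2$: for random values and bids, the highest-value player's deviation alone already gives $\sum_i U_i(a_i^*(v), a_{-i}; v_i) \ge \frac12 OPT(v) - R(a)$.

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(10000):
    n = int(rng.integers(2, 6))
    v = rng.uniform(0.0, 1.0, n)          # private values
    a = rng.uniform(0.0, 1.0, n)          # arbitrary bid profile
    R = a.max()                           # revenue of the first-price auction
    lhs = 0.0
    for i in range(n):
        b = a.copy(); b[i] = v[i] / 2.0   # unilateral deviation to v_i / 2
        if b[i] > np.delete(b, i).max():  # ties broken against the deviator
            lhs += v[i] - b[i]            # deviator wins and pays its own bid
    assert lhs >= 0.5 * v.max() - R - 1e-12
print("smoothness inequality held on all samples")
```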
Many important games and mechanisms satisfy this smoothness definition for various parameters
of ? and ? (see Figure 1); the following theorem shows that the welfare of any coarse correlated
equilibrium in any of these games is nearly optimal.
Theorem 5 (efficiency of CCE; [12]). If a mechanism is $(\lambda, \mu)$-smooth then the social welfare of any coarse correlated equilibrium is at least $\frac{\lambda}{\max\{1,\mu\}}$ of the optimal welfare, i.e., the price of anarchy satisfies $POA \le \frac{\max\{1,\mu\}}{\lambda}$.
Price of Anarchy of No-regret Learning. Following Blum et al. [2], Theorem 3 and Theorem 5
imply that no-regret learning dynamics have near-optimal social welfare.
Corollary 6 (efficiency of no-regret dynamics; [12]). If a mechanism is $(\lambda, \mu)$-smooth then any no-regret dynamics of the repeated game with a fixed player set and valuation profile achieves average social welfare at least $\frac{\lambda}{\max\{1,\mu\}}$ of the optimal welfare, i.e., the price of anarchy satisfies $POA \le \frac{\max\{1,\mu\}}{\lambda}$.
Importantly, Corollary 6 holds the valuation profile $v \in \mathcal{V}$ fixed throughout the repeated game play.
The main contribution of this paper is in extending this theory to games of incomplete information,
e.g., where the values of the players are drawn at random in each round of game play.
3 Population Interpretation of Bayesian Games
In the standard independent private value model of a Bayesian game there are n players. Player i
has type $v_i$ drawn uniformly from the set of types $V_i$ (and this distribution is denoted $F_i$).⁵ We will restrict attention to the case when the type space $V_i$ is finite. A player's strategy in this Bayesian game is a mapping $s_i : V_i \to A_i$ from a valuation $v_i \in V_i$ to an action $a_i \in A_i$. We will denote with $\Sigma_i = A_i^{V_i}$ the strategy space of each player and with $\Sigma = \Sigma_1 \times \dots \times \Sigma_n$. In the game, each player i realizes his type $v_i$ from the distribution and then plays action $s_i(v_i)$ in the game.
In the population interpretation of the Bayesian game, also called the agent normal form representation [6], there are n finite populations of players. Each player in population i has a type vi which we
assume to be distinct for each player in each population and across populations.⁶ The set of players in the population is denoted $V_i$, and the player in population i with type $v_i$ is called player $v_i$. In the population game, each player $v_i$ chooses an action $s_i(v_i)$. Nature uniformly draws one player from
⁵ The restriction to the uniform distribution is without loss of generality for any finite type space and for any distribution over the type space that involves only rational probabilities.
⁶ The restriction to distinct types is without loss of generality as we can always augment a type space with an index that does not affect player utilities.
each population, and the game is played with those players' actions. In other words, the utility of player $v_i$ from population i is:
$$U^{AG}_{i,v_i}(s) = \mathbb{E}_{\mathbf{v}}\big[U_i(s(\mathbf{v}); v_i) \cdot \mathbf{1}\{\mathbf{v}_i = v_i\}\big] \qquad (4)$$
Notice that the population interpretation of the Bayesian game is in fact a stochastic game of complete information.
There are multiple generalizations of coarse correlated equilibria from games of complete information to games of incomplete information (c.f. [6], [1], [4]). One of the canonical definitions is simply
the coarse correlated equilibrium of the stochastic game of complete information that is defined by
the population interpretation above.⁷
Definition 7 (Bayesian coarse correlated equilibrium, BAYES-CCE). A randomized strategy profile $\mathbf{s} \in \Delta(\Sigma)$ is a Bayesian coarse correlated equilibrium if for every $a_i' \in A_i$ and for every $v_i \in V_i$:
$$\mathbb{E}_{\mathbf{s}}\mathbb{E}_{\mathbf{v}}[U_i(\mathbf{s}(\mathbf{v}); v_i) \mid \mathbf{v}_i = v_i] \ge \mathbb{E}_{\mathbf{s}}\mathbb{E}_{\mathbf{v}}[U_i(a_i', \mathbf{s}_{-i}(\mathbf{v}_{-i}); v_i) \mid \mathbf{v}_i = v_i] \qquad (5)$$
In a game of incomplete information the welfare in equilibrium will be compared to the expected ex-post optimal social welfare $\mathbb{E}_{\mathbf{v}}[OPT(\mathbf{v})]$. We will refer to the worst-case ratio of the expected optimal social welfare over the expected social welfare of any BAYES-CCE as the BAYES-CCE-POA.
4 Learning in Repeated Bayesian Game
Consider a repeated version of the population interpretation of a Bayesian game. At each iteration
one player vi from each population is sampled uniformly and independently from other populations.
The set of chosen players then participate in an instance of a mechanism M. We assume that each
player $v_i \in V_i$ uses some no-regret learning rule to play in this repeated game.⁸ In Definition 8, we
describe the structure of the game and our notation more elaborately.
Definition 8. The repeated Bayesian game of $\mathcal{M}$ proceeds as follows. In stage t:
1. Each player $v_i \in V_i$ in each population i picks an action $s_i^t(v_i) \in A_i$. We denote with $s_i^t \in A_i^{|V_i|}$ the function that maps a player $v_i \in V_i$ to his action.
2. From each population i one player $v_i^t \in V_i$ is selected uniformly at random. Let $v^t = (v_1^t, \dots, v_n^t)$ be the chosen profile of players and $s^t(v^t) = (s_1^t(v_1^t), \dots, s_n^t(v_n^t))$ be the profile of chosen actions.
3. Each player $v_i^t$ participates in an instance of game $\mathcal{M}$, in the role of player $i \in [n]$, with action $s_i^t(v_i^t)$ and experiences a utility of $U_i(s^t(v^t); v_i^t)$. All players not selected in Step 2 experience zero utility.
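A hedged simulation sketch of this repeated game follows (our illustration, with assumed parameters): each population member keeps its own Hedge learner over a discrete bid grid for a first-price auction, one member per population is drawn each stage, and only the drawn members update. We use full-information counterfactual payoffs for simplicity; the remark below notes that utility-only feedback also suffices with suitable learners.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pop, pop_size, T, eta = 2, 3, 20000, 0.05
bids = np.linspace(0.0, 1.0, 11)                      # shared discrete bid grid
values = rng.uniform(0.2, 1.0, (n_pop, pop_size))     # the type of each population member
W = np.zeros((n_pop, pop_size, len(bids)))            # Hedge weights per member

for t in range(T):
    chosen = rng.integers(0, pop_size, n_pop)         # step 2: one member per population
    acts = []
    for i in range(n_pop):                            # step 1: drawn members sample bids
        w = W[i, chosen[i]]
        p = np.exp(w - w.max()); p /= p.sum()
        acts.append(rng.choice(len(bids), p=p))
    for i in range(n_pop):                            # step 3: payoffs and learning updates
        vi = values[i, chosen[i]]
        others = max(bids[acts[j]] for j in range(n_pop) if j != i)
        u = np.where(bids > others, vi - bids, 0.0)   # counterfactual first-price utilities
        W[i, chosen[i]] += eta * u                    # full-information update, for simplicity
```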
Remark. We point out that for each player in a population to achieve no-regret he does not need
to know the distribution of values in other populations. There exist algorithms that can achieve the
no-regret property and simply require an oracle that returns the utility of a player at each iteration.
Thus all we need to assume is that each player receives as feedback his utility at each iteration.
Remark. We also note that our results would extend to the case where at each period multiple matchings are sampled independently and players potentially participate in more than one instance of the mechanism M, potentially with different players from the remaining population. The only thing that the players need to observe in such a setting is their average utility resulting from their action $s_i^t(v_i) \in A_i$ across all the instances in which they participated during the given period. Such a scenario seems an appealing model in online ad auction marketplaces, where players receive only average utility feedback from their bids.
Footnote 7: This notion is the coarse analog of the agent normal form Bayes correlated equilibrium defined in Section 4.2 of Forges [6].
Footnote 8: An equivalent and standard way to view a Bayesian game is that each player draws his value independently from his distribution each time the game is played. In this interpretation the player plays by choosing a strategy that maps his value to an action (or a distribution over actions), and our no-regret condition requires that the player not regret his actions for each possible value.
Bayesian Price of Anarchy for No-regret Learners. In this repeated game setting we want to
compare the average social welfare of any sequence of play where each player uses a vanishing
regret algorithm versus the average optimal welfare. Moreover, we want to quantify the worst-case
such average welfare over all possible valuation distributions within each population:
$$\sup_{F_1,\ldots,F_n}\ \limsup_{T\to\infty}\ \frac{\sum_{t=1}^{T} \mathrm{OPT}(v^t)}{\sum_{t=1}^{T} SW^{M}\big(s^t(v^t); v^t\big)} \qquad (6)$$
We will refer to this quantity as the Bayesian price of anarchy for no-regret learners. The numerator
of this term is simply the average optimal welfare when players from each population are drawn
independently in each stage; it converges almost surely to the expected ex-post optimal welfare $\mathbb{E}_v[\mathrm{OPT}(v)]$ of the stage game. Our main theorem is that if the mechanism is smooth and players
follow no-regret strategies then the expected welfare is guaranteed to be close to the optimal welfare.
Theorem 9 (Main Theorem). If a mechanism is $(\lambda, \mu)$-smooth then the average (over time) welfare of any no-regret dynamics of the repeated Bayesian game achieves average social welfare at least $\frac{\lambda}{\max\{1,\mu\}}$ of the average optimal welfare, i.e. $\mathrm{POA} \le \frac{\max\{1,\mu\}}{\lambda}$, almost surely.
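As a hedged illustration — the smoothness constants below come from the composable mechanisms framework of [12] and are not restated in this excerpt — the single-item first-price auction is $(1 - 1/e,\, 1)$-smooth, so Theorem 9 specializes to
$$\mathrm{POA} \ \le\ \frac{\max\{1,\mu\}}{\lambda} \ =\ \frac{1}{1 - 1/e} \ =\ \frac{e}{e-1} \ \approx\ 1.58,$$
almost surely, for any no-regret play of the repeated Bayesian first-price auction.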
Roadmap of the proof. In Section 5, we show that any vanishing-regret sequence of play of the repeated Bayesian game will converge almost surely to the Bayesian version of a coarse correlated equilibrium of the incomplete information stage game. Therefore the Bayesian price of total anarchy will be upper bounded by the efficiency guarantee of any Bayesian coarse correlated equilibrium.
Finally, in Section 6 we show that the price of anarchy bound of smooth mechanisms directly extends
to Bayesian coarse correlated equilibria, thereby providing an upper bound on the Bayesian price of
total anarchy of the repeated game.
Remark. We point out that our definition of BAYES-CCE is inherently different from, and more restrictive than, the one defined in Caragiannis et al. [4]. There, a BAYES-CCE is defined as a joint distribution D over $V \times A$, such that if $(v, a) \sim D$ then for any $v_i \in V_i$ and $a_i'(v_i) \in A_i$:
$$\mathbb{E}_{(v,a)}\big[U_i(a; v_i)\big] \ \ge\ \mathbb{E}_{(v,a)}\big[U_i(a_i'(v_i), a_{-i}; v_i)\big] \qquad (7)$$
The main difference is that the product of a distribution in $\Delta(\Sigma)$ with the distribution of values cannot produce every possible joint distribution over $(V, A)$; rather, the resulting joint distributions are restricted to satisfy a conditional independence property described by [6], namely that player $i$'s action is conditionally independent of some other player $j$'s value, given player $i$'s type. Such a conditional independence property is essential for the guarantees that we present in this work, and hence they do not seem to extend to the notion given in [4]. However, as we show in Section 5, the no-regret dynamics that we analyze, which are mathematically equivalent to the dynamics in [4], do converge to this smaller set of BAYES-CCE, and for this set our efficiency guarantees extend. This extra convergence property is not needed when the mechanism satisfies the stronger semi-smoothness property defined in [4], and thereby was not needed to show efficiency bounds in their setting.
5 Convergence of Bayesian No-Regret to BAYES-CCE
In this section we show that no-regret learning in the repeated Bayesian game converges almost surely to the set of Bayesian coarse correlated equilibria. Any given sequence of play of the repeated Bayesian game, which we defined in Definition 8, gives rise to a sequence of strategy-value pairs $(s^t, v^t)$, where $s^t = (s_1^t, \ldots, s_n^t)$ and $s_i^t \in A_i^{|V_i|}$ captures the actions that each player $v_i$ in population $i$ would have chosen, had they been picked. Then observe that all that matters to compute the average social welfare of the game for any given time step T is the empirical distribution of pairs (s, v) up till time step T, denoted $D^T$; i.e., if $(s^T, v^T)$ is a random sample from $D^T$:
$$\frac{1}{T}\sum_{t=1}^{T} SW\big(s^t(v^t); v^t\big) = \mathbb{E}_{(s^T, v^T)}\Big[SW\big(s^T(v^T); v^T\big)\Big] \qquad (8)$$
Lemma 10 (Almost sure convergence to BAYES-CCE). Consider a sequence of play of the random matching game, where each player uses a vanishing regret algorithm, and let $D^T$ be the empirical distribution of (strategy, valuation) profile pairs up till time step T. Consider any subsequence of $\{D^T\}_T$ that converges in distribution to some distribution D. Then, almost surely, D is a product distribution, i.e. $D = D_s \times D_v$, with $D_s \in \Delta(\Sigma)$ and $D_v \in \Delta(V)$, such that $D_v = F$ and $D_s \in$ BAYES-CCE of the static incomplete information game with distributional beliefs F.
Proof. We will denote with
$$r_i(a_i^*, a; v_i) = U_i(a_i^*, a_{-i}; v_i) - U_i(a; v_i)$$
the regret of player $v_i$ from population $i$ for action $a_i^*$ at action profile $a$. For a $v_i \in V_i$ let $x_i^t(v_i) = \mathbf{1}\{v_i^t = v_i\}$. Since the sequence has vanishing regret for each player $v_i$ in population $i$, it must be that for any $s_i^* \in \Sigma_i$:
$$\sum_{t=1}^{T} x_i^t(v_i)\cdot r_i\big(s_i^*(v_i), s^t(v^t); v_i\big) \ \le\ o(T) \qquad (9)$$
For any fixed T, let $D_s^T \in \Delta(\Sigma)$ denote the empirical distribution of $s^t$ and let $s$ be a random sample from $D_s^T$. For each $s \in \Sigma$, let $T_s \subseteq [T]$ denote the time steps such that $s^t = s$ for each $t \in T_s$. Then we can re-write Equation (9) as:
$$\mathbb{E}_s\Big[\frac{1}{|T_s|}\sum_{t\in T_s} x_i^t(v_i)\cdot r_i\big(s_i^*(v_i), s^t(v^t); v_i\big)\Big] \ \le\ \frac{o(T)}{T} \qquad (10)$$
For any $s \in \Sigma$ and $w \in V$, let $T_{s,w} = \{t \in T_s : v^t = w\}$. Then we can re-write Equation (10) as:
$$\mathbb{E}_s\Big[\sum_{w\in V}\frac{|T_{s,w}|}{|T_s|}\,\mathbf{1}\{w_i = v_i\}\cdot r_i\big(s_i^*(v_i), s(w); v_i\big)\Big] \ \le\ \frac{o(T)}{T} \qquad (11)$$
Now we observe that $\frac{|T_{s,w}|}{|T_s|}$ is the empirical frequency of the valuation vector $w \in V$ when we restrict to the time steps where the strategy vector was $s$. Since at each time step $t$ the valuation vector $v^t$ is picked independently from the distribution of valuation profiles F, this is the empirical frequency of $|T_s|$ independent samples from F.
By standard arguments from empirical process theory, if $T_s \to \infty$ then this empirical distribution converges almost surely to the distribution F. On the other hand, if $T_s$ does not go to $\infty$, then the empirical frequency of strategy $s$ vanishes to 0 as $T \to \infty$ and therefore has measure zero in the above expectation as $T \to \infty$. Thus, for any convergent subsequence of $\{D^T\}$, if D is the limit distribution, then for any $s$ in the support of D, almost surely the distribution of $w$ conditional on strategy $s$ is F. Thus we can write D as a product distribution $D_s \times F$.
Moreover, if we denote with $w$ the random variable that follows distribution F, then the limit of Equation (11) along any convergent subsequence gives:
a.s.: $\ \mathbb{E}_{s\sim D_s}\,\mathbb{E}_{w\sim F}\big[\mathbf{1}\{w_i = v_i\}\cdot r_i(s_i^*(v_i), s(w); v_i)\big] \le 0$
Equivalently, we get that $D_s$ satisfies, for all $v_i \in V_i$ and for all $s_i^*$:
a.s.: $\ \mathbb{E}_{s\sim D_s}\,\mathbb{E}_{w\sim F}\big[r_i(s_i^*(w_i), s(w); w_i) \mid w_i = v_i\big] \le 0$
The latter is exactly the BAYES-CCE condition from Definition 7. Thus $D_s$ is in the set of BAYES-CCE of the static incomplete information game among $n$ players, where the type profile is drawn from F.
Given the latter convergence theorem, we can easily conclude the following theorem, whose proof is given in the supplementary material.
Theorem 11. The price of anarchy for Bayesian no-regret dynamics is upper bounded by the price
of anarchy of Bayesian coarse correlated equilibria, almost surely.
6 Efficiency of Smooth Mechanisms at Bayes Coarse Correlated Equilibria
In this section we show that smoothness of a mechanism M implies that any BAYES-CCE of the incomplete information setting achieves at least $\frac{\lambda}{\max\{1,\mu\}}$ of the expected optimal welfare. To show this we will adopt the interpretation of BAYES-CCE that we used in the previous section, as coarse correlated equilibria of a more complex normal form game: the stochastic agent normal form representation of the Bayesian game. We can interpret this complex normal form game as the game that arises from a complete information mechanism $M^{AG}$ among $\sum_i |V_i|$ players, which randomly samples one player from each of the $n$ populations and where the utility of a player in the complete information mechanism $M^{AG}$ is given by Equation (4). The set of possible outcomes in this agent game corresponds to the set of mappings from a profile of chosen players to an outcome in the underlying mechanism M. The optimal welfare of this game is then the expected ex-post optimal welfare $\mathrm{OPT}^{AG} = \mathbb{E}_v[\mathrm{OPT}(v)]$.
The main theorem that we will show is that whenever mechanism M is $(\lambda, \mu)$-smooth, then mechanism $M^{AG}$ is also $(\lambda, \mu)$-smooth. Then we will invoke a theorem of [12, 11], which shows that any coarse correlated equilibrium of a complete information mechanism achieves at least $\frac{\lambda}{\max\{1,\mu\}}$ of the optimal welfare. By the equivalence between BAYES-CCE and the CCE of this complete information game, we get that every BAYES-CCE of the Bayesian game achieves at least $\frac{\lambda}{\max\{1,\mu\}}$ of the expected optimal welfare.
Theorem 12 (From complete information to Bayesian smoothness). If a mechanism M is $(\lambda, \mu)$-smooth, then for any vector of independent valuation distributions $F = (F_1, \ldots, F_n)$, the complete information mechanism $M^{AG}$ is also $(\lambda, \mu)$-smooth.
Proof. Consider the following randomized deviation for each player $v_i \in V_i$ in population $i$: he randomly samples a valuation profile $w \sim F$, then plays according to the randomized action $s_i^*(v_i, w_{-i})$, i.e., the player deviates using the randomized action guaranteed by the smoothness property of mechanism M for his type $v_i$ and the random sample $w_{-i}$ of the types of the others.
Consider an arbitrary action profile $s = (s_1, \ldots, s_n)$ for all players in all populations. In this context it is better to think of each $s_i$ as a $|V_i|$-dimensional vector in $A_i^{|V_i|}$ and to view $s$ as a $\sum_i |V_i|$-dimensional vector. Then with $s_{-v_i}$ we will denote all the components of this large vector except the one corresponding to player $v_i \in V_i$. Moreover, we will denote with $v$ a sample from F drawn by mechanism $M^{AG}$. We now argue about the expected utility of player $v_i$ from this deviation, which is:
$$\mathbb{E}_w\, U^{AG}_{i,v_i}\big(s_i^*(v_i, w_{-i}),\, s_{-v_i}\big) = \mathbb{E}_w\,\mathbb{E}_v\big[U_i\big(s_i^*(v_i, w_{-i}),\, s_{-i}(v_{-i});\, v_i\big)\cdot\mathbf{1}\{v_i = v_i\}\big]$$
Summing the latter over all players $v_i \in V_i$ in population $i$:
$$\sum_{v_i\in V_i}\mathbb{E}_w\, U^{AG}_{i,v_i}\big(s_i^*(v_i, w_{-i}),\, s_{-v_i}\big) = \mathbb{E}_{w,v}\Big[\sum_{v_i\in V_i} U_i\big(s_i^*(v_i, w_{-i}),\, s_{-i}(v_{-i});\, v_i\big)\cdot\mathbf{1}\{v_i = v_i\}\Big]$$
$$= \mathbb{E}_{v,w}\big[U_i(s_i^*(v_i, w_{-i}), s_{-i}(v_{-i}); v_i)\big] = \mathbb{E}_{v,w}\big[U_i(s_i^*(w_i, w_{-i}), s_{-i}(v_{-i}); w_i)\big] = \mathbb{E}_{v,w}\big[U_i(s_i^*(w), s_{-i}(v_{-i}); w_i)\big],$$
where the second-to-last equality is an exchange of variable names and a regrouping using independence. Summing over populations and using smoothness of M, we get smoothness of $M^{AG}$:
$$\sum_{i\in[n]}\sum_{v_i\in V_i}\mathbb{E}_w\, U^{AG}_{i,v_i}\big(s_i^*(v_i, w_{-i}),\, s_{-v_i}\big) = \mathbb{E}_{v,w}\Big[\sum_{i\in[n]} U_i\big(s_i^*(w),\, s_{-i}(v_{-i});\, w_i\big)\Big] \ \ge\ \mathbb{E}_{v,w}\big[\lambda\,\mathrm{OPT}(w) - \mu\, R(s(v))\big] = \lambda\,\mathbb{E}_w[\mathrm{OPT}(w)] - \mu\, R^{AG}(s)$$
Corollary 13. Every BAYES-CCE of the incomplete information setting of a smooth mechanism M achieves expected welfare at least $\frac{\lambda}{\max\{1,\mu\}}$ of the expected optimal welfare.
7 Finite Time Analysis and Convergence Rates
In the previous section we argued about the limit average efficiency of the game as time goes to infinity. In this section we analyze the convergence rate to BAYES-CCE and we show approximate efficiency results even for finite time, when players are allowed to have some $\epsilon$-regret.
Theorem 14. Consider the repeated matching game with a $(\lambda, \mu)$-smooth mechanism. Suppose that for any $T \ge T_0$, each player in each of the $n$ populations has regret at most $\frac{\epsilon}{n}$. Then for every $\delta$ and $\epsilon$, there exists a $T^*(\epsilon, \delta)$, such that for any $T \ge \max\{T_0, T^*\}$, with probability $1 - \delta$:
$$\frac{1}{T}\sum_{t=1}^{T} SW\big(s^t(v^t); v^t\big) \ \ge\ \frac{\lambda}{\max\{1,\mu\}}\,\mathbb{E}_v[\mathrm{OPT}(v)] - \epsilon - \delta\cdot H \qquad (12)$$
Moreover, $T^*(\epsilon, \delta) \le \frac{54\, n^3\, |\Sigma|\, |V|^2\, H^3}{\epsilon^3}\,\log\frac{2}{\delta}$.
References
[1] Dirk Bergemann and Stephen Morris. Correlated equilibrium in games with incomplete information. Cowles Foundation Discussion Papers 1822, Cowles Foundation for Research in Economics, Yale University, October 2011.
[2] Avrim Blum, MohammadTaghi Hajiaghayi, Katrina Ligett, and Aaron Roth. Regret minimization and the price of total anarchy. In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, STOC '08, pages 373-382, New York, NY, USA, 2008. ACM.
[3] Yang Cai and Christos Papadimitriou. Simultaneous bayesian auctions and computational complexity. In Proceedings of the Fifteenth ACM Conference on Economics and Computation, EC '14, pages 895-910, New York, NY, USA, 2014. ACM.
[4] Ioannis Caragiannis, Christos Kaklamanis, Panagiotis Kanellopoulos, Maria Kyropoulou, Brendan Lucier, Renato Paes Leme, and Éva Tardos. Bounding the inefficiency of outcomes in generalized second price auctions. Journal of Economic Theory, 2014.
[5] Bart de Keijzer, Evangelos Markakis, Guido Schäfer, and Orestis Telelis. Inefficiency of standard multi-unit auctions. In Hans L. Bodlaender and Giuseppe F. Italiano, editors, Algorithms - ESA 2013, volume 8125 of Lecture Notes in Computer Science, pages 385-396. Springer Berlin Heidelberg, 2013.
[6] Françoise Forges. Five legitimate definitions of correlated equilibrium in games with incomplete information. Theory and Decision, 35(3):277-310, 1993.
[7] Dean P. Foster and Rakesh V. Vohra. Asymptotic calibration. Biometrika, 85(2):379-390, 1998.
[8] Todd R. Kaplan and Shmuel Zamir. Asymmetric first-price auctions with uniform distributions: analytic solutions to the general case. Economic Theory, 50(2):269-302, 2012.
[9] Elias Koutsoupias and Christos Papadimitriou. Worst-case equilibria. In Proceedings of the 16th Annual Conference on Theoretical Aspects of Computer Science, STACS '99, pages 404-413, Berlin, Heidelberg, 1999. Springer-Verlag.
[10] B. Lucier and A. Borodin. Price of anarchy for greedy auctions. In Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '10, pages 537-553, Philadelphia, PA, USA, 2010. Society for Industrial and Applied Mathematics.
[11] T. Roughgarden. Intrinsic robustness of the price of anarchy. In Proceedings of the 41st Annual ACM Symposium on Theory of Computing, STOC '09, pages 513-522, New York, NY, USA, 2009. ACM.
[12] Vasilis Syrgkanis and Éva Tardos. Composable and efficient mechanisms. In Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, STOC '13, pages 211-220, New York, NY, USA, 2013. ACM.
[13] A. Vetta. Nash equilibria in competitive societies, with applications to facility location, traffic routing and auctions. In Foundations of Computer Science, 2002. Proceedings. The 43rd Annual IEEE Symposium on, pages 416-425, 2002.
5,544 | 6,017 | Sparse and Low-Rank Tensor Decomposition
Parikshit Shah
[email protected]
Nikhil Rao
[email protected]
Gongguo Tang
[email protected]
Abstract
Motivated by the problem of robust factorization of a low-rank tensor, we study
the question of sparse and low-rank tensor decomposition. We present an efficient
computational algorithm that modifies Leurgans' algorithm for tensor factorization. Our method relies on a reduction of the problem to sparse and low-rank matrix decomposition via the notion of tensor contraction. We use well-understood
convex techniques for solving the reduced matrix sub-problem which then allows
us to perform the full decomposition of the tensor. We delineate situations where
the problem is recoverable and provide theoretical guarantees for our algorithm.
We validate our algorithm with numerical experiments.
1 Introduction
Tensors are useful representational objects to model a variety of problems such as graphical models
with latent variables [1], audio classification [20], psychometrics [8], and neuroscience [3]. One
concrete example proposed in [1] involves topic modeling in an exchangeable bag-of-words model
wherein given a corpus of documents one wishes to estimate parameters related to the different topics of the different documents (each document has a unique topic associated to it). By computing
the empirical moments associated to (exchangeable) bi-grams and tri-grams of words in the documents, [1] shows that this problem reduces to that of a (low rank) tensor decomposition. A number of
other machine learning tasks, such as Independent Component Analysis [11], and learning Gaussian
mixtures [2] are reducible to that of tensor decomposition. While most tensor problems are computationally intractable [12] there has been renewed interest in developing tractable and principled
approaches for the same [4, 5, 12, 15, 19, 21, 24?27].
In this paper we consider the problem of performing tensor decompositions when a subset of the
entries of a low-rank tensor X are corrupted adversarially, so that the tensor observed is Z = X +Y
where Y is the corruption. One may view this problem as the tensor version of a sparse and low-rank
matrix decomposition problem as studied in [6, 9, 10, 13]. We develop an algorithm for performing
such a decomposition and provide theoretical guarantees as to when such decomposition is possible. Our work draws on two sets of tools: (a) the line of work addressing the Robust PCA problem in the matrix case [6, 9], and (b) application of Leurgans' algorithm for tensor decomposition and tensor inverse problems [4, 17, 24].
Our algorithm is computationally efficient and scalable; it relies on the key notion of tensor contraction, which effectively reduces a tensor problem of dimension $n \times n \times n$ to four decomposition problems for matrices of size $n \times n$. One can then apply convex methods for sparse and low-rank matrix decomposition, followed by certain linear algebraic operations, to recover the constituent tensors. Our algorithm not only produces the correct decomposition of Z into X and Y, but also produces the low-rank factorization of X. We are able to avoid tensor unfolding based approaches [14, 21, 26], which are expensive and would lead to solving convex problems that are larger by orders of magnitude; in the third-order case the unfolded matrix would be $n^2 \times n$. Furthermore, our method is conceptually simple to implement as well as to analyze theoretically. Finally, our method is also modular: it can be extended to the higher-order case as well as to settings where the corrupted tensor Z has missing entries, as described in Section 4.
1.1 Problem Setup
In this paper, vectors are denoted using lower-case characters (e.g. x, y, a, b, etc.), matrices by upper-case characters (e.g. X, Y, etc.), and tensors by upper-case bold characters (e.g. X, T, A, etc.). We will work with tensors of third order (representationally to be thought of as three-way arrays), and the term mode refers to one of the axes of the tensor. A slice of a tensor refers to a two-dimensional matrix generated from the tensor by varying indices along two modes while keeping the third mode fixed. For a tensor X we will refer to the indices of the $i$-th mode-1 slice (i.e., the slice corresponding to the indices $\{i\}\times[n_2]\times[n_3]$) by $S_i^{(1)}$, where $[n_2] = \{1, 2, \ldots, n_2\}$ and $[n_3]$ is defined similarly. We denote the matrix corresponding to $S_i^{(1)}$ by $X_i^1$. Similarly, the indices of the $k$-th mode-3 slice will be denoted by $S_k^{(3)}$ and the matrix by $X_k^3$.
Given a tensor of interest X, consider its decomposition into rank-one tensors
$$X = \sum_{i=1}^{r} \lambda_i\, u_i \otimes v_i \otimes w_i, \qquad (1)$$
where $\{u_i\}_{i=1,\ldots,r} \subset \mathbb{R}^{n_1}$, $\{v_i\}_{i=1,\ldots,r} \subset \mathbb{R}^{n_2}$, and $\{w_i\}_{i=1,\ldots,r} \subset \mathbb{R}^{n_3}$ are unit vectors. Here $\otimes$ denotes the tensor product, so that $X \in \mathbb{R}^{n_1\times n_2\times n_3}$ is a tensor of order 3 and dimension $n_1\times n_2\times n_3$. Without loss of generality, throughout this paper we assume that $n_1 \le n_2 \le n_3$. We will present our results for third-order tensors; analogous results for higher orders follow in a transparent manner. We will be dealing with low-rank tensors, i.e. those tensors with $r \le n_1$. Tensors can have rank larger than the dimension; indeed $r \ge n_3$ is an interesting regime, but far more challenging and is a topic left for future work.
Kruskal's Theorem [16] guarantees that tensors satisfying Assumption 1.1 below have a unique minimal decomposition into rank-one terms of the form (1). The number of terms is called the (Kruskal) rank.
Assumption 1.1. $\{u_i\}_{i=1,\ldots,r} \subset \mathbb{R}^{n_1}$, $\{v_i\}_{i=1,\ldots,r} \subset \mathbb{R}^{n_2}$, and $\{w_i\}_{i=1,\ldots,r} \subset \mathbb{R}^{n_3}$ are sets of linearly independent vectors.
While rank decomposition of tensors in the worst case is known to be computationally intractable [12], it is known that the (mild) assumption stated in Assumption 1.1 above suffices for an algorithm known as Leurgans' algorithm [4, 18] to correctly identify the factors in this unique decomposition. In this paper, we will make this assumption about our tensor X throughout. This assumption may be viewed as a "genericity" or "smoothness" assumption [4].
In (1), r is the rank, $\lambda_i \in \mathbb{R}$ are scalars, and $u_i \in \mathbb{R}^{n_1}$, $v_i \in \mathbb{R}^{n_2}$, $w_i \in \mathbb{R}^{n_3}$ are the tensor factors. Let $U \in \mathbb{R}^{n_1\times r}$ denote the matrix whose columns are $u_i$, and correspondingly define $V \in \mathbb{R}^{n_2\times r}$ and $W \in \mathbb{R}^{n_3\times r}$. Let $Y \in \mathbb{R}^{n_1\times n_2\times n_3}$ be a sparse tensor to be viewed as a "corruption" or adversarial noise added to X, so that one observes:
$$Z = X + Y.$$
The problem of interest is that of decomposition, i.e. recovering X and Y from Z.
For a tensor X, we define its mode-3 contraction with respect to a contraction vector $a \in \mathbb{R}^{n_3}$, denoted by $X_a^3 \in \mathbb{R}^{n_1\times n_2}$, as the following matrix:
$$\big(X_a^3\big)_{ij} = \sum_{k=1}^{n_3} X_{ijk}\, a_k, \qquad (2)$$
so that the resulting $n_1\times n_2$ matrix is a weighted sum of the mode-3 slices of the tensor X. Under this notation, the $k$-th mode-3 slice matrix $X_k^3$ is a mode-3 contraction with respect to the $k$-th canonical basis vector. We similarly define the mode-1 contraction with respect to a vector $c \in \mathbb{R}^{n_1}$ as
$$\big(X_c^1\big)_{jk} = \sum_{i=1}^{n_1} X_{ijk}\, c_i. \qquad (3)$$
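As a quick illustration, the contractions (2)-(3) are single `tensordot` calls in NumPy; the following is a minimal sketch with arbitrary dimensions (the sizes and random inputs are ours, purely for illustration):

```python
# Minimal sketch of the mode-3 and mode-1 contractions in Eqs. (2)-(3).
import numpy as np

n1, n2, n3 = 4, 5, 6
X = np.random.randn(n1, n2, n3)
a = np.random.randn(n3)                      # mode-3 contraction vector
c = np.random.randn(n1)                      # mode-1 contraction vector

Xa3 = np.tensordot(X, a, axes=([2], [0]))    # (X_a^3)_{ij} = sum_k X_{ijk} a_k
Xc1 = np.tensordot(X, c, axes=([0], [0]))    # (X_c^1)_{jk} = sum_i X_{ijk} c_i

assert Xa3.shape == (n1, n2) and Xc1.shape == (n2, n3)
```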
In the subsequent discussion we will also use the following notation. For a matrix M, $\|M\|$ refers to the spectral norm, $\|M\|_*$ the nuclear norm, $\|M\|_1 := \sum_{i,j}|M_{ij}|$ the elementwise $\ell_1$ norm, and $\|M\|_\infty := \max_{i,j}|M_{ij}|$ the elementwise $\ell_\infty$ norm.
1.2 Incoherence
The problem of sparse and low-rank decomposition for matrices has been studied in [6, 9, 13, 22],
and it is well understood that exact decomposition is not always possible. In order for the problem to
be identifiable, two situations must be avoided: (a) the low-rank component X must not be sparse,
and (b) the sparse component Y must not be low-rank. In fact, something stronger is both necessary
and sufficient: the tangent spaces of the low-rank matrix (with respect to the rank variety) and the
sparse matrix (with respect to the variety of sparse matrices) must have a transverse intersection [9].
For the problem to be amenable to recovery using computationally tractable (convex) methods, somewhat stronger incoherence assumptions are standard in the matrix case [6, 7, 9]. We will make similar
assumptions for the tensor case, which we now describe.
Given the decomposition (1) of X we define the following subspaces of matrices:
$$T_{U,V} = \big\{UA^T + BV^T : A \in \mathbb{R}^{n_2\times r},\ B \in \mathbb{R}^{n_1\times r}\big\}, \qquad T_{V,W} = \big\{VC^T + DW^T : C \in \mathbb{R}^{n_3\times r},\ D \in \mathbb{R}^{n_2\times r}\big\}. \qquad (4)$$
Thus $T_{U,V}$ is the set of rank-r matrices whose column spaces are contained in span(U) or row spaces are contained in span(V), and a similar definition holds for $T_{V,W}$ and matrices V, W. If Q is a rank-r matrix with column space span(U) and row space span(V), then $T_{U,V}$ is the tangent space at Q with respect to the variety of rank-r matrices.
For a tensor Y, the support of Y refers to the indices corresponding to the non-zero entries of Y. Let $\Omega \subseteq [n_1]\times[n_2]\times[n_3]$ denote the support of Y. Further, for a slice $Y_i^3$, let $\Omega_i^{(3)} \subseteq [n_1]\times[n_2]$ denote the corresponding sparsity pattern of the slice $Y_i^3$ (more generally, $\Omega_i^{(k)}$ can be defined as the sparsity of the matrix resulting from the $i$-th mode-k slice). When a tensor contraction of Y is computed along mode k, the sparsity of the resulting matrix is the union of the sparsity patterns of each (matrix) slice, i.e. $\Omega^{(k)} = \bigcup_{i=1}^{n_k}\Omega_i^{(k)}$. Let $S(\Omega^{(k)})$ denote the set of (sparse) matrices with support $\Omega^{(k)}$. We define the following incoherence parameters:
$$\mu(U,V) := \max_{M\in T_{U,V}:\,\|M\|\le 1} \|M\|_\infty, \qquad \mu(V,W) := \max_{M\in T_{V,W}:\,\|M\|\le 1} \|M\|_\infty, \qquad \xi\big(\Omega^{(k)}\big) := \max_{N\in S(\Omega^{(k)}):\,\|N\|_\infty\le 1} \|N\|.$$
The quantities $\mu(U,V)$ and $\mu(V,W)$ being small implies that for contractions of the tensor Z, all matrices in the tangent space of those contractions with respect to the variety of rank-r matrices are "diffuse", i.e. do not have sparse elements [9]. Similarly, $\xi(\Omega^{(k)})$ being small implies that all matrices with the contracted sparsity pattern $\Omega^{(k)}$ are such that their spectrum is "diffuse", i.e. they do not have low rank. We will see specific settings where these forms of incoherence hold for tensors in Section 3.
2 Algorithm for Sparse and Low-Rank Tensor Decomposition
We now introduce our algorithm to perform sparse and low-rank tensor decompositions. We begin with a lemma:
Lemma 2.1. Let $X \in \mathbb{R}^{n_1\times n_2\times n_3}$, with $n_1 \le n_2 \le n_3$, be a tensor of rank $r \le n_1$. Then the rank of $X_a^3$ is at most r. Similarly, the rank of $X_c^1$ is at most r.
Proof. Consider a tensor $X = \sum_{i=1}^{r}\lambda_i\, u_i\otimes v_i\otimes w_i$. The reader may verify in a straightforward manner that $X_a^3$ enjoys the decomposition:
$$X_a^3 = \sum_{i=1}^{r} \lambda_i\,\langle w_i, a\rangle\, u_i v_i^T. \qquad (5)$$
The proof for the rank of $X_c^1$ is analogous.
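Both Lemma 2.1 and the identity (5) are easy to check numerically; a minimal sketch (the random rank-r tensor and all sizes are of our choosing, for illustration only):

```python
# Numerical check of Eq. (5): a mode-3 contraction of a rank-r tensor has rank <= r.
import numpy as np

n1, n2, n3, r = 8, 9, 10, 3
rng = np.random.default_rng(1)
lam = rng.uniform(0.5, 1.0, r)
U = rng.standard_normal((n1, r))
V = rng.standard_normal((n2, r))
W = rng.standard_normal((n3, r))
X = np.einsum('i,ai,bi,ci->abc', lam, U, V, W)   # X = sum_i lam_i u_i x v_i x w_i

a = rng.standard_normal(n3)
Xa3 = np.tensordot(X, a, axes=([2], [0]))
Xa3_alt = (U * (lam * (W.T @ a))) @ V.T          # sum_i lam_i <w_i, a> u_i v_i^T
assert np.allclose(Xa3, Xa3_alt)
assert np.linalg.matrix_rank(Xa3) <= r
```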
Note that while (5) is a matrix decomposition of the contraction, it is not a singular value decomposition (the components need not be orthogonal, for instance). Recovering the factors needs an application of simultaneous diagonalization, which we describe next.
Lemma 2.2. [4, 18] Suppose we are given an order-3 tensor $X = \sum_{i=1}^{r}\lambda_i\, u_i\otimes v_i\otimes w_i$ of size $n_1\times n_2\times n_3$ satisfying the conditions of Assumption 1.1. Suppose the contractions $X_a^3$ and $X_b^3$ are computed with respect to unit vectors $a, b \in \mathbb{R}^{n_3}$ distributed independently and uniformly on the unit sphere $S^{n_3-1}$, and consider the matrices $M_1$ and $M_2$ formed as:
$$M_1 = X_a^3\,\big(X_b^3\big)^\dagger, \qquad M_2 = \big(X_b^3\big)^\dagger\, X_a^3.$$
Then the eigenvectors of $M_1$ (corresponding to the non-zero eigenvalues) are $\{u_i\}_{i=1,\ldots,r}$, and the eigenvectors of $M_2^T$ are $\{v_i\}_{i=1,\ldots,r}$.
Remark. Note that while the eigenvectors $\{u_i\}, \{v_j\}$ are thus determined, a source of ambiguity remains: for a fixed ordering of $\{u_i\}$ one needs to determine the order in which the $\{v_j\}$ are to be arranged. This can be (generically) achieved by using the (common) eigenvalues of $M_1$ and $M_2$ for pairing (if the contractions $X_a^3, X_b^3$ are computed with respect to random vectors a, b, the eigenvalues are distinct almost surely). Since the eigenvalues of $M_1, M_2$ are distinct, they can be used to pair the columns of U and V.
Lemma 2.2 is essentially a simultaneous diagonalization result [17] that facilitates tensor decomposition [4]. Given a tensor T, one can compute two contractions for mode 1 and apply simultaneous diagonalization as described in Lemma 2.2; this would yield the factors $v_i, w_i$ (up to sign and reordering). One can then repeat the same process with mode-3 contractions to obtain $u_i, v_i$. In the final step one can then obtain the $\lambda_i$ by solving a system of linear equations. The full algorithm is described in Algorithm 2 in the supplementary material.
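Below is a minimal noiseless sketch of Lemma 2.2 and of the pairing trick from the remark, using NumPy's `pinv` for the pseudo-inverse; the random instance and all sizes are illustrative assumptions:

```python
# Sketch of Lemma 2.2: recover {u_i}, {v_i} from two random mode-3 contractions.
import numpy as np

n1, n2, n3, r = 8, 9, 10, 3
rng = np.random.default_rng(2)
lam = rng.uniform(0.5, 1.0, r)
U = rng.standard_normal((n1, r))
V = rng.standard_normal((n2, r))
W = rng.standard_normal((n3, r))
X = np.einsum('i,ai,bi,ci->abc', lam, U, V, W)

a = rng.standard_normal(n3)
b = rng.standard_normal(n3)
Xa = np.tensordot(X, a, axes=([2], [0]))
Xb = np.tensordot(X, b, axes=([2], [0]))

M1 = Xa @ np.linalg.pinv(Xb)       # eigenvectors (nonzero eigenvalues): columns of U
M2 = np.linalg.pinv(Xb) @ Xa       # eigenvectors of M2^T: columns of V
e1, P = np.linalg.eig(M1)
e2, Q = np.linalg.eig(M2.T)

# Keep the r largest-magnitude eigenvalues; pair columns by sorting both spectra,
# which coincide and are generically distinct (see the remark above).
i1 = np.argsort(-np.abs(e1))[:r]
i2 = np.argsort(-np.abs(e2))[:r]
U_hat = P[:, i1[np.argsort(e1[i1].real)]].real
V_hat = Q[:, i2[np.argsort(e2[i2].real)]].real

# Each recovered column matches a true factor up to sign and scale.
cosU = np.abs((U / np.linalg.norm(U, axis=0)).T @ (U_hat / np.linalg.norm(U_hat, axis=0)))
cosV = np.abs((V / np.linalg.norm(V, axis=0)).T @ (V_hat / np.linalg.norm(V_hat, axis=0)))
assert np.allclose(cosU.max(axis=0), 1.0, atol=1e-6)
assert np.allclose(cosV.max(axis=0), 1.0, atol=1e-6)
```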
For a contraction $Z_v^k$ of a tensor Z with respect to a vector v along mode k, consider solving the convex problem:
$$\underset{\mathcal X,\,\mathcal Y}{\text{minimize}}\ \ \|\mathcal X\|_* + \lambda_k\|\mathcal Y\|_1 \qquad \text{subject to}\ \ Z_v^k = \mathcal X + \mathcal Y. \qquad (6)$$
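For intuition, here is one standard way to solve a problem of the form (6): ADMM alternating singular-value thresholding and entrywise soft thresholding. This particular solver is our illustrative choice, not the one the paper prescribes (the paper's experiments use CVX in MATLAB):

```python
# Minimal ADMM sketch for (6): minimize ||X||_* + lam * ||Y||_1  s.t.  C = X + Y.
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: the prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entrywise soft thresholding: the prox operator of tau * l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def split_sparse_lowrank(C, lam, rho=1.0, iters=500):
    X = np.zeros_like(C); Y = np.zeros_like(C); dual = np.zeros_like(C)
    for _ in range(iters):
        X = svt(C - Y + dual / rho, 1.0 / rho)
        Y = soft(C - X + dual / rho, lam / rho)
        dual += rho * (C - X - Y)        # dual ascent on the constraint C = X + Y
    return X, Y
```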
Our algorithm, stated in Algorithm 1, proceeds as follows: given a tensor Z = X + Y, we perform two random contractions (w.r.t. vectors a, b) of the tensor along mode 3 to obtain matrices $Z_a^3$, $Z_b^3$. Since Z is a sum of sparse and low-rank components, by Lemma 2.1 so are the matrices $Z_a^3$, $Z_b^3$. We thus use (6) to decompose them into constituent sparse and low-rank components, which are the contractions $X_a^3$, $X_b^3$, $Y_a^3$, $Y_b^3$. We then use $X_a^3$, $X_b^3$ and Lemma 2.2 to obtain the factors U, V. We perform the same operations along mode 1 to obtain factors V, W. In the last step, we solve for the scale factors $\lambda_i$ (a system of linear equations).
Algorithm 2 in the supplementary material, which we adopt for our decomposition problem in Algorithm 1, essentially relies on the idea of simultaneous diagonalization of matrices sharing common row and column spaces [17]. In this paper we do not analyze the situation where random noise is added to all the entries, but only the sparse adversarial noise setting. We note, however, that the key algorithmic insight of using contractions to perform tensor recovery is numerically stable and robust with respect to noise, as has been studied in [4, 11, 17].
Parameters that need to be picked to implement our algorithm are the regularization coefficients $\lambda_1, \lambda_3$. In the theoretical guarantees we will see that these can be picked in a stable manner, and that a range of values guarantees exact decomposition when the suitable incoherence conditions hold. In practice these coefficients would need to be determined by a cross-validation method. Note also that under suitable random sparsity assumptions [6], the regularization coefficient may be picked to be the inverse of the square root of the dimension.
2.1 Computational Complexity
The computational complexity of our algorithm is dominated by the complexity of performing the sparse and low-rank matrix decomposition of the contractions via (6). For simplicity, let us consider
Algorithm 1 Algorithm for sparse and low-rank tensor decomposition
1: Input: Tensor Z, parameters $\lambda_1, \lambda_3$.
2: Generate contraction vectors $a, b \in \mathbb{R}^{n_3}$ independently and uniformly distributed on the unit sphere.
3: Compute mode-3 contractions $Z_a^3$ and $Z_b^3$ respectively.
4: Solve the convex problem (6) with v = a, k = 3 and regularization parameter $\lambda_3$. Call the resulting solution matrices $X_a^3$, $Y_a^3$.
5: Solve the convex problem (6) with v = b, k = 3 and regularization parameter $\lambda_3$. Call the resulting solution matrices $X_b^3$, $Y_b^3$.
6: Compute the eigen-decompositions of $M_1 := X_a^3(X_b^3)^\dagger$ and $M_2 := (X_b^3)^\dagger X_a^3$. Let U and V denote the matrices whose columns are the eigenvectors of $M_1$ and $M_2^T$ respectively, corresponding to the non-zero eigenvalues, in sorted order. (Let r be the (common) rank of $M_1$ and $M_2$.) The eigenvectors, thus arranged, are denoted $\{u_i\}_{i=1,\ldots,r}$ and $\{v_i\}_{i=1,\ldots,r}$.
7: Generate contraction vectors $c, d \in \mathbb{R}^{n_1}$ independently and uniformly distributed on the unit sphere.
8: Solve the convex problem (6) with v = c, k = 1 and regularization parameter $\lambda_1$. Call the resulting solution matrices $X_c^1$, $Y_c^1$.
9: Solve the convex problem (6) with v = d, k = 1 and regularization parameter $\lambda_1$. Call the resulting solution matrices $X_d^1$, $Y_d^1$.
10: Compute the eigen-decompositions of $M_3 := X_c^1(X_d^1)^\dagger$ and $M_4 := (X_c^1)^\dagger X_d^1$. Let $\tilde V$ and $\tilde W$ denote the matrices whose columns are the eigenvectors of $M_3$ and $M_4^T$ respectively, corresponding to the non-zero eigenvalues, in sorted order. (Let r be the (common) rank of $M_3$ and $M_4$.)
11: Simultaneously reorder the columns of $\tilde V, \tilde W$, also performing simultaneous sign reversals as necessary, so that the columns of $\tilde V$ and V are equal; call the resulting matrix W, with columns $\{w_i\}_{i=1,\ldots,r}$.
12: Solve for $\lambda_i$ in the linear system
$$X_a^3 = \sum_{i=1}^{r} \lambda_i\, u_i v_i^T\,\langle w_i, a\rangle.$$
13: Output: Decomposition $\hat X := \sum_{i=1}^{r}\lambda_i\, u_i\otimes v_i\otimes w_i$, $\hat Y := Z - \hat X$.
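Putting the pieces together, the following is a compact end-to-end sketch of the mode-3 pass (steps 2-6) of Algorithm 1 under illustrative assumptions: it uses the ADMM splitting sketched after (6) instead of an exact interior-point solve, fixes $\lambda_3 = 1/\sqrt n$ as in the experiments of Section 2.2, and only recovers U and V (the mode-1 pass and the final linear solve for the $\lambda_i$ proceed analogously):

```python
# End-to-end sketch of steps 2-6 of Algorithm 1 (mode-3 pass only).
import numpy as np

def svt(M, t):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

def soft(M, t):
    return np.sign(M) * np.maximum(np.abs(M) - t, 0.0)

def split(C, lam, rho=1.0, iters=500):
    X = np.zeros_like(C); Y = np.zeros_like(C); dual = np.zeros_like(C)
    for _ in range(iters):
        X = svt(C - Y + dual / rho, 1.0 / rho)
        Y = soft(C - X + dual / rho, lam / rho)
        dual += rho * (C - X - Y)
    return X, Y

n, r, p = 50, 2, 0.02
rng = np.random.default_rng(3)
fac = [rng.standard_normal((n, r)) for _ in range(3)]
fac = [F / np.linalg.norm(F, axis=0) for F in fac]         # unit-norm factors
lam_true = rng.uniform(0.5, 1.0, r)
X = np.einsum('i,ai,bi,ci->abc', lam_true, *fac)
Y = (rng.random((n, n, n)) < p) * rng.standard_normal((n, n, n))
Z = X + Y

a, b = rng.standard_normal(n), rng.standard_normal(n)
lam3 = 1.0 / np.sqrt(n)
Xa_hat, _ = split(np.tensordot(Z, a, axes=([2], [0])), lam3)
Xb_hat, _ = split(np.tensordot(Z, b, axes=([2], [0])), lam3)

# Step 6: simultaneous diagonalization (the truncated pinv guards against the
# small residual singular values left by the inexact ADMM solve).
M1 = Xa_hat @ np.linalg.pinv(Xb_hat, rcond=1e-6)
M2 = np.linalg.pinv(Xb_hat, rcond=1e-6) @ Xa_hat
e1, P = np.linalg.eig(M1)
e2, Q = np.linalg.eig(M2.T)
i1 = np.argsort(-np.abs(e1))[:r]
i2 = np.argsort(-np.abs(e2))[:r]
U_hat = P[:, i1[np.argsort(e1[i1].real)]].real
V_hat = Q[:, i2[np.argsort(e2[i2].real)]].real

cos = np.abs(fac[0].T @ (U_hat / np.linalg.norm(U_hat, axis=0)))
print("alignment of each recovered u_i (should be near 1):", cos.max(axis=0))
```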
the case where the target tensor $Z \in \mathbb{R}^{n\times n\times n}$ has equal dimensions in different modes. Using a standard first-order method, the solution of (6) has a per-iteration complexity of $O(n^3)$, and to achieve an accuracy of $\epsilon$, $O(1/\epsilon)$ iterations are required [22]. Since only four such steps need be performed, the complexity of the method is $O(n^3/\epsilon)$, where $\epsilon$ is the accuracy to which (6) is solved. Another alternative is to reformulate (6) such that it is amenable to greedy atomic approaches [23], which yields an order-of-magnitude improvement. We note that, in contrast, a tensor unfolding for this problem [14, 21, 26] results in the need to solve much larger convex programs. For instance, for $Z \in \mathbb{R}^{n\times n\times n}$, the resulting flattened matrix would be of size $n^2\times n$ and the resulting convex problem would then have a complexity of $O(n^4/\epsilon)$. For higher-order tensors, the gap in computational complexity would increase by further orders of n.
2.2 Numerical Experiments
We now present numerical results to validate our approach. We perform experiments for tensors of size 50 x 50 x 50 (non-symmetric). A tensor Z is generated as the sum of a low-rank tensor X and a sparse tensor Y. The low-rank component is generated as follows: three sets of r unit vectors $u_i, v_i, w_i \in \mathbb{R}^{50}$ are generated randomly, independently and uniformly distributed on the unit sphere. Also a random positive scale factor $\lambda_i$ (uniformly distributed on [0, 1]) is chosen, and the tensor is $X = \sum_{i=1}^{r}\lambda_i\, u_i\otimes v_i\otimes w_i$. The tensor Y is generated by (Bernoulli) randomly sampling its entries with probability p. For each such p, we perform 10 trials and apply our algorithm. In all our experiments, the regularization parameter was picked to be $\lambda = \frac{1}{\sqrt n}$. The optimization problem (6) is solved using CVX in MATLAB. We report success if the MSE is smaller than $10^{-5}$, separately for both the X and Y components. We plot the empirical probability of success as a function of p in Fig. 1 (a), (b), for multiple values of the true rank r. In Fig. 1 (c), (d) we test the scalability
[Figure 1 plots omitted in this text extraction. Panels: (a) low-rank component and (b) sparse component plot P(recovery) against sparsity x 100 with legend r = 1, 2, 3, 4; panels (c) low-rank component and (d) sparse component plot the number of inexact recoveries against corruption sparsity.]
Figure 1: Recovery of the low-rank and sparse components from our proposed methods. In figures (a) and (b) we see that the probability of recovery is high when both the rank and sparsity are low. In figures (c) and (d) we study the recovery error for a tensor of dimensions 300 x 300 x 300 and rank 50.
of our method, by generating a random 300 x 300 x 300 tensor of rank 50 and corrupting it with a sparse tensor of varying sparsity level. We run 5 independent trials and see that for low levels of corruption, both the low-rank and sparse components are accurately recovered by our method.
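For readers working in Python rather than MATLAB, a CVXPY analogue of the CVX solve of (6) used in these experiments reads as follows (the library choice, the default solver, and the random test matrix are our assumptions, not the paper's setup):

```python
# CVXPY sketch of problem (6) for a single contraction (illustrative only).
import cvxpy as cp
import numpy as np

n = 50
C = np.random.randn(n, n)            # stands in for a contraction Z_v^k
lam = 1.0 / np.sqrt(n)               # the regularization choice used above
X = cp.Variable((n, n))
Y = cp.Variable((n, n))
prob = cp.Problem(cp.Minimize(cp.norm(X, "nuc") + lam * cp.sum(cp.abs(Y))),
                  [X + Y == C])
prob.solve()
print("solver status:", prob.status)
```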
3 Main Results
We now present the main rigorous guarantees related to the performance of our algorithm. Due to
space constraints, the proofs are deferred to the supplementary materials.
Theorem 3.1. Suppose $Z = X + Y$, where $X = \sum_{i=1}^{r}\lambda_i\, u_i\otimes v_i\otimes w_i$ has rank $r \le n_1$ and is such that the factors satisfy Assumption 1.1. Suppose Y has support $\Omega$ and the following condition is satisfied:
$$\xi\big(\Omega^{(3)}\big)\,\mu(U,V) < \frac{1}{6} \qquad \text{and} \qquad \xi\big(\Omega^{(1)}\big)\,\mu(V,W) < \frac{1}{6}.$$
Then Algorithm 1 succeeds in exactly recovering the component tensors, i.e. $(X, Y) = (\hat X, \hat Y)$, whenever the $\lambda_k$ are picked so that
$$\lambda_3 \in \left[\frac{\mu(U,V)}{1 - 4\,\mu(U,V)\,\xi(\Omega^{(3)})},\ \frac{1 - 3\,\mu(U,V)\,\xi(\Omega^{(3)})}{\xi(\Omega^{(3)})}\right] \quad \text{and} \quad \lambda_1 \in \left[\frac{\mu(V,W)}{1 - 4\,\mu(V,W)\,\xi(\Omega^{(1)})},\ \frac{1 - 3\,\mu(V,W)\,\xi(\Omega^{(1)})}{\xi(\Omega^{(1)})}\right].$$
Specifically, the choice $\lambda_3 = \frac{(3\mu(U,V))^p}{(\xi(\Omega^{(3)}))^{1-p}}$ and $\lambda_1 = \frac{(3\mu(V,W))^p}{(\xi(\Omega^{(1)}))^{1-p}}$ for any $p \in [0,1]$ lies in these respective intervals and guarantees exact recovery.
For a matrix M, the degree of M, denoted by deg(M), is the maximum number of non-zeros in any row or column of M. For a tensor Y, we define the degree along mode k, denoted by $\deg_k(Y)$, to be the maximum number of non-zero entries in any row or column of a matrix supported on $\Omega^{(k)}$ (defined in Section 1.2). The degree of Y is denoted by $\deg(Y) := \max_{k\in\{1,2,3\}}\deg_k(Y)$.
Lemma 3.2. We have $\xi\big(\Omega^{(k)}\big) \le \deg(Y)$ for all k.
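The degree is directly computable from the support of Y; a minimal sketch for mode 3 (the contracted support $\Omega^{(3)}$ is the union of the mode-3 slice supports; sizes and density are arbitrary):

```python
# Sketch: deg_3(Y) from the contracted support Omega^(3).
import numpy as np

rng = np.random.default_rng(4)
n = 30
Y = (rng.random((n, n, n)) < 0.01) * rng.standard_normal((n, n, n))

support3 = np.any(Y != 0, axis=2)    # Omega^(3): union of slice supports, n1 x n2
deg3 = max(support3.sum(axis=0).max(), support3.sum(axis=1).max())
print("deg_3(Y) =", deg3)
```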
For a subspace $S \subseteq \mathbb{R}^n$, let us define the incoherence of the subspace as:
$$\mu(S) := \max_i \|P_S e_i\|_2,$$
where $P_S$ denotes the projection operator onto S, $e_i$ is a standard unit vector, and $\|\cdot\|_2$ is the Euclidean norm of a vector. Let us define:
$$\mathrm{inc}(X) := \max\{\mu(\mathrm{span}(U)),\ \mu(\mathrm{span}(V)),\ \mu(\mathrm{span}(W))\},$$
$$\mathrm{inc}_3(X) := \max\{\mu(\mathrm{span}(U)),\ \mu(\mathrm{span}(V))\}, \qquad \mathrm{inc}_1(X) := \max\{\mu(\mathrm{span}(V)),\ \mu(\mathrm{span}(W))\}.$$
Note that $\mathrm{inc}(X) < 1$ always. For many random ensembles of interest, the incoherence scales gracefully with the dimension n, i.e.: $\mathrm{inc}(X) \le K\sqrt{\frac{\max\{r,\,\log n\}}{n}}$.
Lemma 3.3. We have $\mu(U,V) \le 2\,\mathrm{inc}(X)$ and $\mu(V,W) \le 2\,\mathrm{inc}(X)$.
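The subspace incoherence is likewise easy to evaluate: $\|P_S e_i\|_2$ equals the norm of the $i$-th row of any orthonormal basis of S. A minimal sketch comparing inc(X) of random factors to the stated scaling (the sizes and seed are arbitrary):

```python
# Sketch: mu(S) = max_i ||P_S e_i||_2 via an orthonormal basis, and inc(X).
import numpy as np

def mu(Q):
    B, _ = np.linalg.qr(Q)                    # orthonormal basis of span(Q)
    return np.linalg.norm(B, axis=1).max()    # row norms of B equal ||P_S e_i||_2

n, r = 500, 5
rng = np.random.default_rng(5)
U, V, W = (rng.standard_normal((n, r)) for _ in range(3))
inc_X = max(mu(U), mu(V), mu(W))
print("inc(X) =", inc_X)
print("sqrt(max(r, log n)/n) =", np.sqrt(max(r, np.log(n)) / n))
```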
Corollary 3.4. Let $Z = X + Y$, with $X = \sum_{i=1}^{r}\lambda_i\, u_i\otimes v_i\otimes w_i$ of rank $r \le n_1$, the factors satisfying Assumption 1.1, and incoherence $\mathrm{inc}(X)$. Suppose Y is sparse and has degree $\deg(Y)$. If the condition
$$\mathrm{inc}(X)\,\deg(Y) < \frac{1}{12}$$
holds, then Algorithm 1 successfully recovers the true solution, i.e. $(X, Y) = (\hat X, \hat Y)$, when the parameters satisfy
$$\lambda_3 \in \left[\frac{2\,\mathrm{inc}_3(X)}{1 - 8\deg_3(Y)\,\mathrm{inc}_3(X)},\ \frac{1 - 6\deg_3(Y)\,\mathrm{inc}_3(X)}{\deg_3(Y)}\right], \qquad \lambda_1 \in \left[\frac{2\,\mathrm{inc}_1(X)}{1 - 8\deg_1(Y)\,\mathrm{inc}_1(X)},\ \frac{1 - 6\deg_1(Y)\,\mathrm{inc}_1(X)}{\deg_1(Y)}\right].$$
Specifically, a choice of $\lambda_3 = \frac{(6\,\mathrm{inc}_3(X))^p}{(2\deg_3(Y))^{1-p}}$, $\lambda_1 = \frac{(6\,\mathrm{inc}_1(X))^p}{(2\deg_1(Y))^{1-p}}$ for any $p \in [0, 1]$ is a valid choice that guarantees exact recovery.
Remark Note that Corollary 3.4 presents a deterministic guarantee on the recoverability of a sparse
corruption of a low rank tensor, and can be viewed as a tensor extension of [9, Corollary 3].
We now consider, for the sake of simplicity, tensors of uniform dimension, i.e. $X, Y, Z \in \mathbb{R}^{n\times n\times n}$. We show that when the low-rank and sparse components are suitably random, the approach outlined in Algorithm 1 achieves exact recovery.
We define the random sparsity model to be one where each entry of the tensor Y is non-zero independently and with identical probability $\rho$. We make no assumption about the magnitude of the entries of Y, only that its non-zero entries are thus sampled.
Lemma 3.5. Let $X = \sum_{i=1}^{r}\lambda_i\, u_i\otimes v_i\otimes w_i$, where $u_i, v_i, w_i \in \mathbb{R}^n$ are uniformly randomly distributed on the unit sphere $S^{n-1}$. Then the incoherence of the tensor X satisfies
$$\mathrm{inc}(X) \le c_1\sqrt{\frac{\max\{r,\,\log n\}}{n}}$$
with probability exceeding $1 - c_2 n^{-3}\log n$ for some constants $c_1, c_2$.
Lemma 3.6. Suppose the entries of Y are sampled according to the random sparsity model, and $\rho = O\big(\big(n^{3/2}\max(\log n, r)\big)^{-1}\big)$. Then the tensor Y satisfies $\deg(Y) \le \frac{\sqrt n}{12\, c_1 \max(\log n, r)}$ with probability exceeding $1 - \exp\big(-c_3\,\frac{\sqrt n}{\max(\log n, r)}\big)$ for some constant $c_3 > 0$.
Corollary 3.7. Let $Z = X + Y$ where X is low-rank with random factors as per the conditions of Lemma 3.5 and Y is sparse with random support as per the conditions in Lemma 3.6. Provided $r \le o\big(n^{1/2}\big)$, Algorithm 1 successfully recovers the correct decomposition, i.e. $(\hat X, \hat Y) = (X, Y)$, with probability exceeding $1 - n^{-\beta}$ for some $\beta > 0$.
Remarks. 1) Under this sampling model, the cardinality of the support of Y is allowed to be as large as $m = O(n^{3/2}\log^{-1} n)$ when the rank r is constant (independent of n).
2) We could equivalently have looked at a uniformly random sampling model, i.e. one where a support set of size m is chosen uniformly at random from the set of all possible support sets of cardinality at most m, and our results for exact recovery would have gone through. This follows from the equivalence principle for successful recovery between Bernoulli sampling and uniform sampling; see [6, Appendix 7.1].
3) Note that for the random sparsity ensemble, [6] shows that a choice of $\lambda = \frac{1}{\sqrt n}$ ensures exact recovery (an additional condition regarding the magnitudes of the factors is needed, however). By extension, the same choice can be shown to work for our setting.
4 Extensions
The approach described in Algorithm 1 and its analysis are quite modular and can be adapted to various settings to account for different forms of measurements and robustness models. We do not present an analysis of these situations due to space constraints, but outline how these extensions follow from the current development in a straightforward manner.
1) Higher-order tensors: Algorithm 1 can be extended naturally to the higher-order setting. Recall that in the third-order case, one needs to recover two contractions along the third mode to discover factors U, V and then two contractions along the first mode to discover factors V, W. For an order-K tensor of the form $Z \in \mathbb{R}^{n_1\times\cdots\times n_K}$ which is the sum of a low-rank component $X = \sum_{i=1}^{r}\lambda_i\bigotimes_{l=1}^{K}u_i^{(l)}$ and a sparse component Y, one needs to compute higher-order contractions of Z along K - 1 different modes. For each of these K - 1 modes the resulting contraction is the sum of a sparse and a low-rank matrix, and thus pairs of matrix problems of the form (6) reveal the sparse and low-rank components of the contractions. The low-rank factors can then be recovered via application of Lemma 2.2, and the full decomposition can thus be recovered. The same guarantees as in Theorem 3.1 and Corollary 3.4 hold verbatim (the notions of incoherence inc(X) and degree deg(Y) of tensors need to be extended to the higher-order case in the natural way).
2) Block sparsity: Situations where entire slices of the tensor are corrupted may happen in recommender systems with adversarial ratings [10]. A natural approach in this case is to use a convex relaxation of the form
$$\underset{M_1,\,M_2}{\text{minimize}}\ \ \lambda_k\|M_1\|_* + \|M_2\|_{1,2} \qquad \text{subject to}\ \ Z_v^k = M_1 + M_2$$
in place of (6) in Algorithm 1. In the above, $\|M\|_{1,2} := \sum_i\|M_i\|_2$, where $M_i$ is the $i$-th column of M. Since exact recovery of the block-sparse and low-rank components of the contractions is guaranteed via this relaxation under suitable assumptions [10], the algorithm would inherit the associated provable guarantees.
3) Tensor completion: In applications such as recommendation systems, it may be desirable to perform tensor completion in the presence of sparse corruptions. In [24], an adaptation of Leurgans' algorithm was presented for performing completion from measurements restricted to only four slices of the tensor, with near-optimal sample complexity (under suitable genericity assumptions about the tensor). We note that it is straightforward to blend Algorithm 1 with this method to achieve completion with sparse corruptions. Recalling that Z = X + Y and therefore $Z_k^3 = X_k^3 + Y_k^3$ (i.e. the $k$-th mode-3 slice of Z is a sum of sparse and low-rank slices of X and Y), if only a subset of elements of $Z_k^3$ (say $P_\Omega(Z_k^3)$) is observed for some index set $\Omega$, we can replace (6) in Algorithm 1 with
$$\underset{M_1,\,M_2}{\text{minimize}}\ \ \lambda_k\|M_1\|_* + \|M_2\|_1 \qquad \text{subject to}\ \ P_\Omega\big(Z_v^k\big) = P_\Omega(M_1 + M_2).$$
Under suitable incoherence assumptions [6, Theorem 1.2], the above will achieve exact recovery of the slices. Once four slices are accurately recovered, one can then use Leurgans' algorithm to recover the full tensor [24, Theorem 3.6]. Indeed the above idea can be extended more generally to the concept of deconvolving a sum of sparse and low-rank tensors from separable measurements [24].
4) Non-convex approaches: A basic primitive for sparse and low-rank tensor decomposition used in this paper is that of using (6) for matrix decomposition. More efficient non-convex approaches, such as the ones described in [22], may be used instead to speed up Algorithm 1. These alternative non-convex methods [22] require $O(rn^2)$ steps per iteration and $O\big(\log\frac{1}{\epsilon}\big)$ iterations, resulting in a total complexity of $O\big(rn^2\log\frac{1}{\epsilon}\big)$ for solving the decomposition of the contractions to an accuracy of $\epsilon$.
References
[1] A. Anandkumar, R. Ge, D. Hsu, and S. M. Kakade, A tensor approach to learning mixed membership community models, The Journal of Machine Learning Research, 15 (2014), pp. 2239-2312.
[2] A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky, Tensor decompositions for learning latent variable models, Tech. Rep. 1, 2014.
[3] C. Beckmann and S. Smith, Tensorial extensions of independent component analysis for multisubject FMRI analysis, NeuroImage, 25 (2005), pp. 294-311.
[4] A. Bhaskara, M. Charikar, A. Moitra, and A. Vijayaraghavan, Smoothed analysis of tensor decompositions, in Proceedings of the 46th Annual ACM Symposium on Theory of Computing, ACM, 2014, pp. 594-603.
[5] S. Bhojanapalli and S. Sanghavi, A new sampling technique for tensors, arXiv preprint arXiv:1502.05023, (2015).
[6] E. J. Candès, X. Li, Y. Ma, and J. Wright, Robust principal component analysis?, Journal of the ACM, 58 (2011), pp. 11-37.
[7] E. J. Candès and B. Recht, Exact matrix completion via convex optimization, Foundations of Computational Mathematics, 9 (2009), pp. 717-772.
[8] R. B. Cattell, Parallel proportional profiles and other principles for determining the choice of factors by rotation, Psychometrika, 9 (1944), pp. 267-283.
[9] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky, Rank-sparsity incoherence for matrix decomposition, SIAM Journal on Optimization, 21 (2011), pp. 572-596.
[10] Y. Chen, H. Xu, C. Caramanis, and S. Sanghavi, Robust matrix completion and corrupted columns, in Proceedings of the 28th International Conference on Machine Learning (ICML-11), L. Getoor and T. Scheffer, eds., New York, NY, USA, 2011, ACM, pp. 873-880.
[11] N. Goyal, S. Vempala, and Y. Xiao, Fourier PCA and robust tensor decomposition, in Proceedings of the 46th Annual ACM Symposium on Theory of Computing, ACM, 2014, pp. 584-593.
[12] C. J. Hillar and L.-H. Lim, Most tensor problems are NP-hard, Journal of the ACM, 60 (2013), pp. 45:1-45:39.
[13] D. Hsu, S. Kakade, and T. Zhang, Robust matrix decomposition with sparse corruptions, Information Theory, IEEE Transactions on, 57 (2011), pp. 7221-7234.
[14] B. Huang, C. Mu, D. Goldfarb, and J. Wright, Provable models for robust low-rank tensor completion, Pacific Journal of Optimization, 11 (2015), pp. 339-364.
[15] A. Krishnamurthy and A. Singh, Low-rank matrix and tensor completion via adaptive sampling, in Advances in Neural Information Processing Systems, 2013.
[16] J. B. Kruskal, Three-way arrays: Rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics, Linear Algebra Applicat., 18 (1977).
[17] V. Kuleshov, A. Chaganty, and P. Liang, Tensor factorization via matrix factorization, arXiv.org, (2015).
[18] S. Leurgans, R. Ross, and R. Abel, A decomposition for three-way arrays, SIAM Journal on Matrix Analysis and Applications, 14 (1993), pp. 1064-1083.
[19] Q. Li, A. Prater, L. Shen, and G. Tang, Overcomplete tensor decomposition via convex optimization, in IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Cancun, Mexico, Dec. 2015.
[20] N. Mesgarani, M. Slaney, and S. A. Shamma, Discrimination of speech from non-speech based on multiscale spectro-temporal modulations, Audio, Speech and Language Processing, IEEE Transactions on, 14 (2006), pp. 920-930.
[21] C. Mu, B. Huang, J. Wright, and D. Goldfarb, Square deal: Lower bounds and improved relaxations for tensor recovery, preprint arXiv:1307.5870, 2013.
[22] P. Netrapalli, U. Niranjan, S. Sanghavi, A. Anandkumar, and P. Jain, Non-convex robust PCA, in Advances in Neural Information Processing Systems, 2014.
[23] N. Rao, P. Shah, and S. Wright, Forward-backward greedy algorithms for signal demixing, in Signals, Systems and Computers, 2013 Asilomar Conference on, IEEE, 2014.
[24] P. Shah, N. Rao, and G. Tang, Optimal low-rank tensor recovery from separable measurements: Four contractions suffice, arXiv.org, (2015).
[25] G. Tang and P. Shah, Guaranteed tensor decomposition: A moment approach, International Conference on Machine Learning (ICML 2015), (2015), pp. 1491-1500.
[26] R. Tomioka, K. Hayashi, and H. Kashima, Estimation of low-rank tensors via convex optimization, preprint arXiv:1010.0789, 2011.
[27] M. Yuan and C.-H. Zhang, On tensor completion via nuclear norm minimization, preprint arXiv:1405.1773, 2014.
5,545 | 6,018 | Analysis of Robust PCA via Local Incoherence
Huishuai Zhang
Department of EECS
Syracuse University
Syracuse, NY 13244
[email protected]
Yi Zhou
Department of EECS
Syracuse University
Syracuse, NY 13244
[email protected]
Yingbin Liang
Department of EECS
Syracuse University
Syracuse, NY 13244
[email protected]
Abstract
We investigate the robust PCA problem of decomposing an observed matrix into the sum of a low-rank matrix and a sparse error matrix via the convex program Principal Component Pursuit (PCP). In contrast to previous studies that assume the
support of the error matrix is generated by uniform Bernoulli sampling, we allow
non-uniform sampling, i.e., entries of the low-rank matrix are corrupted by errors with unequal probabilities. We characterize conditions on error corruption of
each individual entry based on the local incoherence of the low-rank matrix, under
which correct matrix decomposition by PCP is guaranteed. Such a refined analysis of robust PCA captures how robust each entry of the low rank matrix combats
error corruption. In order to deal with non-uniform error corruption, our technical
proof introduces a new weighted norm and develops/exploits the concentration
properties that such a norm satisfies.
1 Introduction
We consider the problem of robust Principal Component Analysis (PCA). Suppose an n-by-n¹ data matrix M can be decomposed into a low-rank matrix L and a sparse matrix S as
$$M = L + S. \tag{1}$$
Robust PCA aims to find L and S with M given. This problem has been extensively studied recently.
In [1, 2], Principal Component Pursuit (PCP) has been proposed to solve the robust PCA problem
via the following convex programming
PCP:
$$\min_{L,S}\; \|L\|_* + \lambda\|S\|_1 \quad \text{subject to} \quad M = L + S, \tag{2}$$
where ‖·‖* denotes the nuclear norm, i.e., the sum of singular values, and ‖·‖₁ denotes the ℓ₁ norm, i.e., the sum of absolute values of all entries. It was shown in [1, 2] that PCP successfully
recovers L and S if the two matrices are distinguishable from each other in properties, i.e., L is not
sparse and S is not low-rank. One important quantity that determines similarity of L to a sparse
matrix is the incoherence of L, which measures how column and row spaces of L are aligned with
canonical basis and between themselves. Namely, suppose that L is a rank-r matrix with SVD
L = UΣV*, where Σ is an r × r diagonal matrix with singular values as its diagonal entries, U is an n × r matrix with columns as the left singular vectors of L, V is an n × r matrix with columns as the right singular vectors of L, and V* denotes the transpose of V. The incoherence of L is measured
by µ = max{µ₀, µ₁}, where µ₀ and µ₁ are defined as
$$\|U^* e_i\| \le \sqrt{\frac{\mu_0 r}{n}}, \qquad \|V^* e_j\| \le \sqrt{\frac{\mu_0 r}{n}}, \qquad \text{for all } i, j = 1, \cdots, n, \tag{3}$$
$$\|U V^*\|_\infty \le \sqrt{\frac{\mu_1 r}{n^2}}. \tag{4}$$
¹ In this paper, we focus on square matrices for simplicity. Our results can be extended to rectangular matrices in a standard way.
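As a concrete illustration of (3) and (4), the following minimal Python sketch (assuming numpy; the function name `incoherence` is ours) computes µ₀ and µ₁ from the rank-r SVD of L.

```python
# Minimal sketch (assuming numpy): compute the incoherence parameters of
# Eqs. (3)-(4) from the top-r singular subspaces of L.
import numpy as np

def incoherence(L, r):
    n = L.shape[0]
    U, s, Vt = np.linalg.svd(L, full_matrices=False)
    U, V = U[:, :r], Vt[:r, :].T
    mu0_rows = (n / r) * np.sum(U**2, axis=1)      # (n/r) * ||U^T e_i||^2
    mu0_cols = (n / r) * np.sum(V**2, axis=1)      # (n/r) * ||V^T e_j||^2
    mu0 = max(mu0_rows.max(), mu0_cols.max())
    mu1 = (n**2 / r) * np.max(np.abs(U @ V.T))**2  # (n^2/r) * ||UV^T||_inf^2
    return mu0, mu1
```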
Previous studies suggest that the incoherence crucially determines conditions on sparsity of S in
order for PCP to succeed. For example, Theorem 2 in [3] explicitly shows that the matrix L with
larger µ can tolerate only a smaller error density to guarantee correct matrix decomposition by PCP.
In all previous work on robust PCA, the incoherence is defined to be the maximum over all column
and row spaces of L as in (3) and (4), which can be viewed as the global parameter for the entire
matrix L, and consequently, characterization of error density is based on such global (and in fact the
worst case) incoherence.
In fact, each (i, j) entry of the low rank matrix L can be associated with a local incoherence parameter µij, which is less than or equal to the global parameter µ, and then the allowable entry-wise error
density can be potentially higher than that characterized based on the global incoherence. Thus,
the total number of errors that the matrix can tolerate in robust PCA can be much higher than that
characterized based on the global incoherence when errors are distributed accordingly. Motivated
by such an observation, this paper aims to characterize conditions on error corruption of each entry
of the low rank matrix based on the corresponding local incoherence parameter, which guarantee
success of PCP. Such conditions imply how robust each individual entry of L to resist error corruption. Naturally, the error corruption probability is allowed to be non-uniform over the matrix (i.e.,
locations of non-zero entries in S are sampled non-uniformly).
We note that the notion of local incoherence was first introduced in [4] for studying the matrix
completion problem, in which local incoherence determines the local sampling density in order to
guarantee correct matrix completion. Here, local incoherence plays a similar role, and determines
the maximum allowable error density at each entry to guarantee correct matrix decomposition. The
difference lies in that local incoherence here depends on both localized ?0 and ?1 rather than only
on localized ?0 in matrix completion due to further difficulty of robust PCA, in which locations of
error corrupted entries are unknown, as pointed out in [1, 3].
Our Contribution. In this paper, we investigate a more general robust PCA problem, in which
entries of the low rank matrix are corrupted by non-uniformly distributed Bernoulli errors. We
characterize the conditions that guarantee correct matrix decomposition by PCP. Our result identifies
the local incoherence (defined by localized ?0 and ?1 for each entry of the low rank matrix) to
determine the condition that each local Bernoulli error corruption parameter should satisfy. Our
results provide the following useful understanding of the robust PCA problem:
• Our characterization provides a localized (and hence more refined) view of robust PCA, and determines how robust each entry of the low rank matrix is in combating error corruption.
• Our results suggest that the total number of errors that the low-rank matrix can tolerate depends on how errors are distributed over the matrix.
• Via cluster problems, our results provide evidence that µ₁ is necessary in characterizing conditions for robust PCA.
In order to deal with non-uniform error corruption, our technical proof introduces a new weighted norm, denoted ℓ_{w(∞)}, which involves the information of both localized µ₀ and µ₁ and is hence different from the weighted norms introduced in [4] for matrix completion. Thus, our proof necessarily
involves new technical developments associated with such a new norm.
Related Work. A closely related but different problem from robust PCA is matrix completion, in
which a low-rank matrix is partially observed and is to be completed. Such a problem has been
previously studied in [5–8], and it was shown that a rank-r n-by-n matrix can be provably recoverable by convex optimization with as few as Θ(max{µ₀, µ₁} nr log² n)² observed entries. Later on, it was shown in [4] that µ₁ does not affect sample complexity for matrix completion and hence Θ(µ₀ nr log² n) observed entries are sufficient for guaranteeing correct matrix completion. It was further shown in [9] that a coherent low-rank matrix (i.e., with large µ₀) can be recovered with Θ(nr log² n) observations as long as the sampling probability is proportional to the leverage score (i.e., localized µ₀). Our problem can be viewed as its counterpart in robust PCA, where the difference lies in that the local incoherence in our problem depends on both localized µ₀ and µ₁.
² f(n) ∈ Θ(g(n)) means k₁ · g(n) ≤ f(n) ≤ k₂ · g(n) for some positive k₁, k₂.
Robust PCA aims to decompose an observed matrix into the sum of a low-rank matrix and a sparse
matrix. In [2, 10], robust PCA with fixed error matrix was studied, and it was shown that the maximum number of errors in any row or column should be bounded from above in order to guarantee
correct decomposition by PCP. Robust PCA with random error matrix was investigated in a number
of studies. It has been shown in [1] that such decomposition can be exact with high probability if
the percentage of corrupted entries is small enough, under the assumptions that the low-rank matrix
is incoherent and the support set of the sparse matrix is uniformly distributed. It was further shown
in [11] that if signs of nonzero entries in the sparse matrix are randomly chosen, then an adjusted
convex optimization can produce exact decomposition even when the percentage of corrupted entries goes to one (i.e., error is dense). The problem was further studied in [1, 3, 12] for the case
with the error-corrupted low-rank matrix only partially observed. Our work provides a more refined
(i.e. entry-wise) view of robust PCA with random error matrix, aiming at understanding how local
incoherence affects susceptibility of each matrix entry to error corruption.
2 Model and Main Result
2.1 Problem Statement
We consider the robust PCA problem introduced in Section 1. Namely, suppose an n-by-n matrix
M can be decomposed into two parts: M = L + S, where L is a low rank matrix and S is a sparse
(error) matrix. We assume that the rank of L is r, and the support of S is selected randomly but
non-uniformly. More specifically, let Ω denote the support of S; then Ω ⊆ [n] × [n], where [n] denotes the set {1, 2, . . . , n}. The event {(i, j) ∈ Ω} is independent across different pairs (i, j) and
$$P((i, j) \in \Omega) = \rho_{ij}, \tag{5}$$
where ρij represents the probability that the (i, j)-entry of L is corrupted by error. Hence, Ω is
determined by Bernoulli sampling with non-uniform probabilities.
We study both the random sign and fixed sign models for S. For the fixed sign model, we assume
signs of nonzero entries in S are arbitrary and fixed, whereas for the random sign model, we assume
that signs of nonzero entries in S are independently distributed Bernoulli variables, randomly taking
values +1 or −1 with probability 1/2 as follows:
$$[\operatorname{sgn}(S)]_{ij} = \begin{cases} 1 & \text{with prob. } \rho_{ij}/2, \\ 0 & \text{with prob. } 1 - \rho_{ij}, \\ -1 & \text{with prob. } \rho_{ij}/2. \end{cases} \tag{6}$$
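The error model in (5)-(6) is straightforward to simulate; below is a minimal sketch assuming numpy, with our own helper name `sample_sparse_errors`.

```python
# Minimal sketch (assuming numpy): draw the support Omega entrywise from
# Bernoulli(rho_ij) as in Eq. (5), with random +1/-1 signs as in Eq. (6).
import numpy as np

def sample_sparse_errors(rho, seed=0):
    rng = np.random.default_rng(seed)
    support = rng.random(rho.shape) < rho           # (i,j) in Omega w.p. rho_ij
    signs = rng.choice([-1.0, 1.0], size=rho.shape)
    return support * signs                          # S = sgn(S) on Omega
```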
In this paper, our goal is to characterize conditions on ρij that guarantee correct recovery of L and
S with observation of M .
We provide some notation that is used throughout this paper. A matrix X is associated with five norms: ‖X‖F denotes the Frobenius norm, ‖X‖* denotes the nuclear norm (i.e., the sum of singular values), ‖X‖ denotes the spectral norm (i.e., the largest singular value), and ‖X‖₁ and ‖X‖∞ represent respectively the ℓ₁ and ℓ∞ norms of the long vector stacked by X. The inner product between two matrices is defined as ⟨X, Y⟩ := trace(X*Y). For a linear operator A that acts on the space of matrices, ‖A‖ denotes the operator norm given by ‖A‖ = sup_{‖X‖F = 1} ‖A(X)‖F.
2.2 Main Theorems
We adopt the PCP to solve the robust PCA problem. We define the following local incoherence
parameters, which play an important role in our characterization of conditions on the entry-wise ρij:
$$\mu_{0ij} := \frac{n}{2r}\left(\|U^* e_i\|^2 + \|V^* e_j\|^2\right), \qquad \mu_{1ij} := \frac{n^2\,([UV^*]_{ij})^2}{r}, \tag{7}$$
$$\mu_{ij} := \max\{\mu_{0ij}, \mu_{1ij}\}. \tag{8}$$
It is clear that µ₀ij ≤ µ₀ and µ₁ij ≤ µ₁ for all i, j = 1, · · · , n. We note that although max_{i,j} µij ≥ 1, some µij might take values as small as zero.
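For concreteness, the local parameters in (7)-(8) can be evaluated directly from the singular factors U and V; a minimal numpy sketch (the function name is ours) follows.

```python
# Minimal sketch (assuming numpy): local incoherence of Eqs. (7)-(8),
# where U and V are the n x r left/right singular factors of L.
import numpy as np

def local_incoherence(U, V):
    n, r = U.shape
    row = np.sum(U**2, axis=1)                           # ||U^T e_i||^2
    col = np.sum(V**2, axis=1)                           # ||V^T e_j||^2
    mu0 = (n / (2 * r)) * (row[:, None] + col[None, :])  # Eq. (7), mu_0ij
    mu1 = (n**2 / r) * (U @ V.T)**2                      # Eq. (7), mu_1ij
    return np.maximum(mu0, mu1)                          # Eq. (8), mu_ij
```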
We first consider the robust PCA problem under the random sign model as introduced in Section
2.1. The following theorem characterizes the condition that guarantees correct recovery by PCP.
Theorem 1. Consider the robust PCA problem under the random sign model. If
$$1 - \rho_{ij} \ge \max\left\{C_0\sqrt{\frac{\mu_{ij}\, r}{n}}\log n,\; \frac{1}{n^3}\right\}$$
for some sufficiently large constant C₀ and for all i, j ∈ [n], then PCP yields correct matrix recovery with $\lambda = \frac{1}{32\sqrt{n \log n}}$, with probability at least 1 − cn⁻¹⁰ for some constant c.
We note that the term 1/n³ is introduced to justify the dual certificate conditions in the proof (see Appendix A.2). We further note that satisfying the condition in Theorem 1 implies $C_0\sqrt{\mu r/n}\,\log n \le 1$, which is an essential bound required in our proof and coincides with the conditions in previous studies [1, 12]. Although we set $\lambda = \frac{1}{32\sqrt{n \log n}}$ for the sake of the proof, in practice λ is often determined via cross validation.
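As an illustration, the entry-wise condition of Theorem 1 can be checked numerically. The sketch below assumes numpy, treats the unspecified constant C₀ as an input, and takes a matrix `mu` of local incoherence parameters such as the one returned by the `local_incoherence` sketch above.

```python
# Minimal sketch (assuming numpy): verify the entry-wise condition of
# Theorem 1, i.e. 1 - rho_ij >= max{C0*sqrt(mu_ij*r/n)*log(n), 1/n^3}.
import numpy as np

def theorem1_condition_holds(rho, mu, r, C0=1.0):
    n = rho.shape[0]
    threshold = np.maximum(C0 * np.sqrt(mu * r / n) * np.log(n), 1.0 / n**3)
    return bool(np.all(1.0 - rho >= threshold))
```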
The above theorem suggests that the local incoherence parameter µij is closely related to how robust each entry of L is to error corruption in matrix recovery. An entry corresponding to smaller µij tolerates larger error density ρij. This is consistent with the result in [4] for matrix completion, in which a smaller local incoherence parameter requires a lower local sampling rate. The difference lies in that here both µ₀ij and µ₁ij play roles in µij, whereas only µ₀ij matters in matrix completion. The necessity of µ₁ij for robust PCA is further demonstrated in Section 2.3 via an example.
Theorem 1 also provides a more refined view for robust PCA in the dense error regime, in which
the error corruption probability approaches one. Such an interesting regime was previously studied
in [3, 11]. In [11], it is argued that PCP with adaptive λ yields exact recovery even when the error corruption probability approaches one if errors take random signs and the dimension n is sufficiently large. In [3], it is further shown that PCP with a fixed λ also yields exact recovery, and the scaling behavior of the error corruption probability is characterized. The above Theorem 1 further provides the scaling behavior of the local entry-wise error corruption probability ρij as it approaches one, and captures how such scaling behavior depends on the local incoherence parameter µij. Such a result implies that robustness of PCP depends not only on the error density but also on how errors are distributed over the matrix with regard to µij.
We next consider the robust PCA problem under the fixed sign model as introduced in Section 2.1.
In this case, non-zero entries of the error matrix S can take arbitrary and fixed values, and only
locations of non-zero entries are random.
Theorem 2. Consider the robust PCA problem under the fixed sign model. If
$$1 - 2\rho_{ij} \ge \max\left\{C_0\sqrt{\frac{\mu_{ij}\, r}{n}}\log n,\; \frac{1}{n^3}\right\}$$
for some sufficiently large constant C₀ and for all i, j ∈ [n], then PCP yields correct recovery with $\lambda = \frac{1}{32\sqrt{n \log n}}$, with probability at least 1 − cn⁻¹⁰ for some constant c.
Theorem 2 follows from Theorem 1 by adapting the elimination and derandomization arguments [1,
Section 2.2] as follows. Let ρ be the matrix with each (i, j)-entry being ρij. If PCP yields exact recovery with a certain probability for the random sign model with the parameter 2ρ, then it also yields exact recovery with at least the same probability for the fixed sign model with locations of non-zero entries sampled using the Bernoulli model with the parameter ρ.
We now compare Theorem 2 for robust PCA with non-uniform error corruption to Theorem 1.1 in [1]
for robust PCA with uniform error corruption. It is clear that if we set ρij = ρ for all i, j ∈ [n], then the two models are the same. It can then be easily checked that the conditions $\sqrt{\mu r/n}\,\log n \le \rho_r$ and $\rho \le \rho_s$ in Theorem 1.1 of [1] imply the conditions in Theorem 2. Thus, Theorem 2 provides
a more relaxed condition than Theorem 1.1 in [1]. Such benefit of condition relaxation should be
attributed to the new golfing scheme introduced in [3, 12], and this paper provides a more refined
view of robust PCA by further taking advantage of such a new golfing scheme to analyze local
conditions.
More importantly, Theorem 2 characterizes the relationship between local incoherence parameters and local error corruption probabilities, which implies that different areas of the low-rank matrix have different levels of ability to resist errors: a more incoherent area (i.e., with smaller µij) can tolerate more errors. Thus, Theorem 2 illustrates the following interesting fact. Whether PCP yields correct recovery depends not only on the total number of errors but also on how errors are distributed. If more errors are distributed to more incoherent areas (i.e., with smaller µij), then more errors in total can be tolerated. However, if errors are distributed in an opposite manner, then only a smaller number of errors can be tolerated.
2.3 Implication on Cluster Matrix
In this subsection, we further illustrate our result when the low rank matrix is a cluster matrix.
Although robust PCA and even more sophisticated approaches have been applied to solve clustering
problems, e.g., [13–15], our perspective here is to demonstrate how local incoherence affects entry-wise robustness to error corruption, which has not been illustrated in previous studies.
Suppose there are n elements to be clustered. We use a cluster matrix L to represent the clustering
relationship of these n elements with Lij = 1 if elements i and j are in the same cluster and Lij = 0
otherwise. Thus, with appropriate ordering of the elements, L is a block diagonal matrix with all
diagonal blocks containing all '1's and off-diagonal blocks containing all '0's. Hence, the rank r of
L equals the number of clusters, which is typically small compared to n. Suppose these entries are
corrupted by errors that flip entries from one to zero or from zero to one. This can be thought of as
adding a (possibly sparse) error matrix S to L so that the observed matrix is L + S. Then PCP can
be applied to recover the cluster matrix L.
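A minimal numpy sketch of the cluster matrix just described (the helper name is ours):

```python
# Minimal sketch (assuming numpy): block-diagonal cluster matrix with
# all-ones blocks, one block per cluster.
import numpy as np

def cluster_matrix(sizes):
    n = sum(sizes)
    L = np.zeros((n, n))
    start = 0
    for k in sizes:
        L[start:start + k, start:start + k] = 1.0   # one all-ones block
        start += k
    return L

L = cluster_matrix([150, 150, 150, 150])   # n = 600, r = 4 equal clusters
```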
We first consider an example with clusters having equal size n/r. We set n = 600 and r = 4 (i.e.,
four equal-size clusters). We apply errors to diagonal-block entries and off-diagonal-block entries
respectively with the probabilities ρd and ρod. In Fig. 1a, we plot the recovery accuracy of PCP for each pair of (ρod, ρd). It is clear from the figure that failure occurs for larger ρod than ρd, which
thus implies that off-diagonal blocks are more robust to errors than diagonal blocks. This can be
explained by Theorem 2 as follows. For a cluster matrix with equal cluster size n/r, the local
incoherence parameters are given by
$$\mu_{0ij} = 1 \text{ for all } (i, j), \quad \text{and} \quad \mu_{1ij} = \begin{cases} r, & (i, j) \text{ is in diagonal blocks} \\ 0, & (i, j) \text{ is in off-diagonal blocks,} \end{cases}$$
and thus
$$\mu_{ij} = \max\{\mu_{0ij}, \mu_{1ij}\} = \begin{cases} r, & (i, j) \text{ is in diagonal blocks} \\ 1, & (i, j) \text{ is in off-diagonal blocks.} \end{cases}$$
[Figure 1 here: panel (a) plots diagonal-block error ρd versus off-diagonal-block error ρod; panel (b) plots cluster2 error ρ2 versus cluster1 error ρ1.]
(a) Diagonal-block error vs. off-diagonal-block error, n = 600, r = 4 with equal cluster sizes. (b) Error vulnerability with respect to cluster sizes 500 vs. 100.
Figure 1: Error vulnerability on different parts of a cluster matrix. In both cases, for each probability pair, we generate 10 trials of independent random error matrices and count the number of successes of PCP. We declare a trial to be successful if the recovered L̂ satisfies ‖L̂ − L‖F/‖L‖F ≤ 10⁻³. Color from white to black represents the number of successful trials changing from 10 to 0.
Based on Theorem 2, it is clear that diagonal-block entries are more locally coherent and hence are
more vulnerable to errors, whereas off-diagonal-block entries are more locally incoherent and hence
are more robust to errors.
Moreover, this example also demonstrates the necessity of µ₁ in the robust PCA problem. [4] showed that µ₁ is not necessary for matrix completion and argued informally that µ₁ is necessary for robust PCA by connecting the robust PCA problem to the hardness of finding a small clique in a large random graph. Here, the above example provides evidence for such a fact. In the example, the µ₀ij are the same over the entire matrix, and hence it is µ₁ij that differentiates incoherence between diagonal
blocks and off-diagonal blocks, and thus differentiates their robustness to errors.
We then consider the case with two clusters that have different sizes: cluster1 size 500 versus cluster2
size 100. Hence, r = 2. We apply errors to block diagonal entries corresponding to clusters 1
and 2 respectively with the probabilities ρ1 and ρ2. In Fig. 1b, we plot the recovery accuracy of PCP for each pair of (ρ1, ρ2). It is clear from the figure that failure occurs for larger ρ1 than ρ2, which thus implies that entries corresponding to the larger cluster are more robust to errors than entries corresponding to the smaller cluster. This can be explained by Theorem 2 because the local incoherence of a block diagonal entry is given by $\mu_{ij} = \frac{n^2}{rK^2}$, where K is the corresponding cluster size, and hence the error corruption probability should satisfy $1 - 2\rho_{ij} > C_0 \frac{\sqrt{n}}{K}\log n$ for correct recovery. Thus, a larger cluster can resist denser errors. This also coincides with the results on graph
clustering in [13, 16].
2.4 Outline of the Proof of Theorem 1
The proof of Theorem 1 follows the idea established in [1] and further developed in [3, 12]. Our
main technical development lies in the analysis of non-uniform error corruption based on local incoherence parameters, for which we introduce a new weighted norm ℓ_{w(∞)} and establish concentration properties and bounds associated with this norm. As a generalization of the matrix infinity norm, ℓ_{w(∞)} incorporates both µ₀ij and µ₁ij, and is hence different from the weighted norms ℓ_{µ(∞)} and ℓ_{µ(∞,2)} in [9] by its role in the analysis for the robust PCA problem. We next outline the proof here; the
detailed proofs are provided in Appendix A.
We first introduce some notation. We define the subspace T := {UX* + YV* : X, Y ∈ ℝ^{n×r}}, where U, V are the left and right singular matrices of L. Then T induces a projection operator P_T given by P_T(M) = UU*M + MVV* − UU*MVV*. Moreover, T⊥, the complement subspace to T, induces an orthogonal projection operator P_{T⊥} with P_{T⊥}(M) = (I − UU*)M(I − VV*). We further define two operators associated with Bernoulli sampling. Let Ω₀ denote a generic subset of [n] × [n]. We define a corresponding projection operator P_{Ω₀} as $P_{\Omega_0}(M) = \sum_{ij} I_{\{(i,j)\in\Omega_0\}} \langle M, e_i e_j^*\rangle e_i e_j^*$, where I_{·} is the indicator function. If Ω₀ is a random set generated by Bernoulli sampling with P((i, j) ∈ Ω₀) = t_{ij} with 0 < t_{ij} ≤ 1 for all i, j ∈ [n], we further define a linear operator R_{Ω₀} as $R_{\Omega_0}(M) = \sum_{ij} \frac{1}{t_{ij}} I_{\{(i,j)\in\Omega_0\}} \langle M, e_i e_j^*\rangle e_i e_j^*$.
We further note that throughout this paper 'with high probability' means 'with probability at least 1 − cn⁻¹⁰', where the constant c may be different in various contexts.
Our proof includes two main steps: establishing that existence of a certain dual certificate is sufficient to guarantee correct recovery and constructing such a dual certificate. For the first step, we
establish the following proposition.
Proposition 1. If $1 - \rho_{ij} \ge \max\left\{C_0\sqrt{\frac{\mu_{ij} r}{n}}\log n,\; \frac{1}{n^3}\right\}$, PCP yields a unique solution which agrees with the correct (L, S) with high probability if there exists a dual certificate Y obeying
$$P_\Omega Y = 0, \tag{9}$$
$$\|Y\|_\infty \le \frac{\lambda}{4}, \tag{10}$$
$$\|P_{T^\perp}(\lambda\,\mathrm{sgn}(S) + Y)\| \le \frac{1}{4}, \tag{11}$$
$$\|P_T(Y + \lambda\,\mathrm{sgn}(S) - UV^*)\|_F \le \frac{\lambda}{n^2}, \tag{12}$$
where $\lambda = \frac{1}{32\sqrt{n \log n}}$.
The proof of the above proposition adapts the idea in [1,12] for uniform errors to non-uniform errors.
In particular, the proof exploits the properties of R_{Ω₀} associated with non-uniform errors, which are
presented as Lemma 1 (established in [9]) and Lemma 2 in Appendix A.1.
Proposition 1 suggests that it suffices to prove Theorem 1 if we find a dual certificate Y that satisfies
the dual certificate conditions (9)-(12). Thus, the second step is to construct Y via the golfing
scheme. Although we adapt the steps in [12] to construct the dual certificate Y , our analysis requires
new technical development based on local incoherence parameters. Recall the following definitions
in Section 2.1: P((i, j) ∈ Ω) = ρij and P((i, j) ∈ Γ) = pij, where Γ = Ωᶜ and pij = 1 − ρij.
Consider the golfing scheme with nonuniform sizes as suggested in [12] to establish bounds with fewer log factors. Let Γ = Γ₁ ∪ Γ₂ ∪ · · · ∪ Γₗ, where {Γₖ} are independent random sets given by
$$P((i, j) \in \Gamma_1) = \frac{p_{ij}}{6}, \qquad P((i, j) \in \Gamma_k) = q_{ij}, \quad \text{for } k = 2, \cdots, l.$$
Thus, if $\rho_{ij} = (1 - \frac{p_{ij}}{6})(1 - q_{ij})^{l-1}$, the two sampling strategies are equivalent. Due to the overlap between {Γₖ}, we have $q_{ij} \ge \frac{5 p_{ij}}{6(l-1)}$. We set l = ⌊5 log n + 1⌋ and construct a dual certificate Y in the following iterative way:
$$Z_0 = P_T(UV^* - \lambda\,\mathrm{sgn}(S)), \tag{13}$$
$$Z_k = (P_T - P_T R_{\Gamma_k} P_T)\, Z_{k-1}, \quad \text{for } k = 1, \cdots, l, \tag{14}$$
$$Y = \sum_{k=1}^{l} R_{\Gamma_k} Z_{k-1}. \tag{15}$$
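For readers who prefer pseudocode, the construction (13)-(15) is a short loop. The sketch below (assuming numpy) treats the projection P_T and the re-weighted sampling operators R_{Γk} as user-supplied callables; all names are ours, and this is a structural illustration of the iteration rather than the certificate analysis itself.

```python
# Minimal sketch (assuming numpy): golfing-scheme certificate of
# Eqs. (13)-(15). P_T and each R_k are callables on matrices.
import numpy as np

def golfing_certificate(P_T, R_Gammas, UV, sgnS, lam):
    Z = P_T(UV - lam * sgnS)        # Eq. (13): Z_0
    Y = np.zeros_like(UV)
    for R_k in R_Gammas:            # k = 1, ..., l
        RZ = R_k(Z)                 # R_{Gamma_k} Z_{k-1}
        Y = Y + RZ                  # Eq. (15), accumulated term by term
        Z = Z - P_T(RZ)             # Eq. (14), using P_T Z_{k-1} = Z_{k-1}
    return Y
```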
It is then sufficient to show that such constructed Y satisfies the dual certificate conditions (9)-(12).
Condition (9) is due to the construction of Y. Condition (12) can be shown by a concentration property of each iteration step (14) with ‖·‖F, characterized in Lemma 3 in Appendix A.1. In order
to show that Y satisfies conditions (10) and (11), we introduce the following weighted norm. Let $\tilde{w}_{ij} = \sqrt{\frac{\mu_{ij} r}{n^2}}$ and $w_{ij} = \max\{\tilde{w}_{ij}, \epsilon\}$, where ε is the smallest nonzero $\tilde{w}_{ij}$. Here ε is introduced to avoid singularity. Then for any matrix Z, define
$$\|Z\|_{w(\infty)} = \max_{i,j} \frac{|Z_{ij}|}{w_{ij}}. \tag{16}$$
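A minimal numpy sketch of the norm in (16); the ε-flooring mirrors the definition above, and the function name is ours.

```python
# Minimal sketch (assuming numpy): weighted infinity norm of Eq. (16).
import numpy as np

def weighted_inf_norm(Z, mu, r):
    n = Z.shape[0]
    w_tilde = np.sqrt(mu * r / n**2)
    eps = w_tilde[w_tilde > 0].min()    # smallest nonzero w_tilde
    w = np.maximum(w_tilde, eps)        # avoid division by zero
    return np.max(np.abs(Z) / w)
```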
It is easy to verify that ‖·‖_{w(∞)} is a well-defined norm. We can then show that each iteration step (14) with the ‖·‖ and ‖·‖_{w(∞)} norms satisfies two concentration properties, characterized respectively in
Lemmas 4 and 5, which are essential to prove conditions (10) and (11).
3 Numerical Experiments
In this section, we provide numerical experiments to demonstrate our theoretical results. In these experiments, we adopt an augmented Lagrange multiplier algorithm in [17] to solve the PCP. We set λ = 1/√(n log n). A trial of PCP (for a given realization of error locations) is declared to be successful if the L̂ recovered by PCP satisfies ‖L̂ − L‖F/‖L‖F ≤ 10⁻³.
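A minimal sketch of such a solver is given below (assuming numpy). It follows the standard singular-value-thresholding / soft-thresholding ALM iteration in the spirit of [17], but the penalty initialization `mu_alm` and the stopping rule are simple choices of ours rather than the exact schedule of [17].

```python
# Minimal sketch (assuming numpy): augmented Lagrange multiplier PCP solver.
import numpy as np

def svt(X, tau):                        # singular value thresholding
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(X, tau):                     # entrywise soft thresholding
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def pcp_alm(M, lam, iters=500, tol=1e-7):
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    mu_alm = 0.25 * M.size / np.abs(M).sum()   # a common heuristic choice
    for _ in range(iters):
        L = svt(M - S + Y / mu_alm, 1.0 / mu_alm)
        S = shrink(M - L + Y / mu_alm, lam / mu_alm)
        Y = Y + mu_alm * (M - L - S)
        if np.linalg.norm(M - L - S) <= tol * np.linalg.norm(M):
            break
    return L, S
```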
We apply the following three models to construct the low rank matrix L (see the sketch after this list).
• Bernoulli model: L = XX*, where X is an n × r matrix with entries independently taking values +1/√n and −1/√n equally likely.
• Gaussian model: L = XX*, where X is an n × r matrix with entries independently sampled from the Gaussian distribution N(0, 1/n).
• Cluster model: L is a block diagonal matrix with r equal-size blocks containing all '1's.
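A minimal numpy sketch of the three generators (all function names are ours):

```python
# Minimal sketch (assuming numpy): the three low-rank matrix models.
import numpy as np

def bernoulli_model(n, r, rng):
    X = rng.choice([-1.0, 1.0], size=(n, r)) / np.sqrt(n)
    return X @ X.T

def gaussian_model(n, r, rng):
    X = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, r))   # entries ~ N(0, 1/n)
    return X @ X.T

def cluster_model(n, r):
    L = np.zeros((n, n))
    k = n // r
    for b in range(r):
        L[b * k:(b + 1) * k, b * k:(b + 1) * k] = 1.0
    return L
```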
In order to demonstrate that the local incoherence parameter affects local robustness to error corruptions, we study the following two types of error corruption models (a sketch of the adaptive probabilities follows the list).
• Uniform error corruption: sgn(S_ij) is generated as (6) with ρij = ρ for all i, j ∈ [n], and S = sgn(S).
• Adaptive error corruption: sgn(S_ij) is generated as (6) with $\rho_{ij} = \rho\,\frac{n^2\sqrt{1/\mu_{ij}}}{\sum_{ij}\sqrt{1/\mu_{ij}}}$ for all i, j ∈ [n], and S = sgn(S).
It is clear that in both cases the error matrix has the same average error corruption percentage ρ, but in adaptive error corruption, the local error corruption probability is adaptive to the local incoherence.
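A minimal numpy sketch of the adaptive probabilities (clipping zero µij and probabilities above one are our own safeguards for illustration):

```python
# Minimal sketch (assuming numpy): adaptive corruption probabilities
# rho_ij proportional to 1/sqrt(mu_ij), normalized to average rate rho.
import numpy as np

def adaptive_rho(mu, rho, floor=1e-12):
    inv = 1.0 / np.sqrt(np.maximum(mu, floor))   # guard against mu_ij = 0
    return np.clip(rho * mu.size * inv / inv.sum(), 0.0, 1.0)
```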
Our first experiment demonstrates that robustness of PCP to error corruption not only depends on
the number of errors but also depends on how errors are distributed over the matrix. For all three
[Figure 2 here: three panels, (a) Bernoulli model, (b) Gaussian model, (c) Cluster model; each plots the failure frequency of PCP versus the error percentage ρ, for uniform and adaptive errors.]
Figure 2: Recovery failure of PCP versus error corruption percentage.
low rank matrix models, we set n = 1200 and rank r = 10. For each low rank matrix model,
we apply the uniform and adaptive error matrices, and plot the failure frequency of PCP versus the
error corruption percentage ρ in Fig. 2. For each value of ρ, we perform 50 trials of independent
error corruption and count the number of failures of PCP. Each plot of Fig. 2 compares robustness
of PCP to uniform error corruption (the red square line) and adaptive error corruption (the blue
circle line). We observe that PCP can tolerate more errors in the adaptive case. This is because the
adaptive error matrix is distributed based on the local incoherence parameter, where error density is
higher in areas where matrices can tolerate more errors. Furthermore, comparison among the three
plots in Fig. 2 illustrates that the gap between uniform and adaptive error matrices is the smallest
for the Bernoulli model and the largest for the cluster model. Our theoretic results suggest that the gap is due to the variation of the local incoherence parameter across the matrix, which can be measured by the variance of µij. A larger variance of µij should yield a larger gap. Our numerical calculation of the variances for the three models yields Var(µ_Bernoulli) = 1.2109, Var(µ_Gaussian) = 2.1678, and Var(µ_cluster) = 7.29, which confirms our explanation.
[Figure 3 here: three panels, (a) Bernoulli model, (b) Gaussian model, (c) Cluster model; each plots the largest allowable error percentage ρ versus the rank r, for uniform and adaptive errors.]
(c) Cluster model
Figure 3: Largest allowable error corruption percentage versus rank of L so that PCP yields correct
recovery.
We next study the phase transition in rank and error corruption probability. For the three low-rank
matrix models, we set n = 1200. In Fig. 3, we plot the error corruption percentage versus the rank
of L for both uniform and adaptive error corruption models. Each point on the curve records the
maximum allowable error corruption percentage under the corresponding rank such that PCP yields
correction recovery. We count a (r, ?) pair to be successful if nine trials out of ten are successful.
We first observe that in each plot of Fig. 3, PCP is more robust in adaptive error corruption due to
the same reason explained above. We further observe that the gap between the uniform and adaptive
error corruption changes as the rank changes. In the low-rank regime, the gap is largely determined
by the variance of incoherence parameter ?ij as we argued before. As the rank increases, the gap is
more dominated by the rank and less affected by the local incoherence. Eventually for large enough
rank, no error can be tolerated no matter how errors are distributed.
4 Conclusion
We characterize refined conditions under which PCP succeeds to solve the robust PCA problem.
Our result shows that the ability of PCP to correctly recover a low-rank matrix from errors is related
not only to the total number of corrupted entries but also to locations of corrupted entries, more
essentially to the local incoherence of the low rank matrix. Such result is well supported by our
numerical experiments. Moreover, our result has rich implication when the low rank matrix is a
cluster matrix, and our result coincides with state-of-the-art studies on clustering problems via low
rank cluster matrix. Our result may motivate the development of weighted PCP to improve recovery
performance similar to the weighted algorithms developed for matrix completion in [9, 18].
References
[1] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Journal of the ACM (JACM), 58(3):11, 2011.
[2] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky. Rank-sparsity incoherence for matrix decomposition. SIAM Journal on Optimization, 21(2):572–596, 2011.
[3] Y. Chen, A. Jalali, S. Sanghavi, and C. Caramanis. Low-rank matrix recovery from errors and erasures. IEEE Transactions on Information Theory, 59(7):4324–4337, 2013.
[4] Y. Chen. Incoherence-optimal matrix completion. IEEE Transactions on Information Theory, 61(5):2909–2923, May 2015.
[5] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[6] E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053–2080, 2010.
[7] D. Gross. Recovering low-rank matrices from few coefficients in any basis. IEEE Transactions on Information Theory, 57(3):1548–1566, 2011.
[8] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.
[9] Y. Chen, S. Bhojanapalli, S. Sanghavi, and R. Ward. Completing any low-rank matrix, provably. arXiv preprint arXiv:1306.2979, 2013.
[10] D. Hsu, S. M. Kakade, and T. Zhang. Robust matrix decomposition with sparse corruptions. IEEE Transactions on Information Theory, 57(11):7221–7234, 2011.
[11] A. Ganesh, J. Wright, X. Li, E. J. Candès, and Y. Ma. Dense error correction for low-rank matrices via principal component pursuit. In IEEE International Symposium on Information Theory (ISIT), pages 1513–1517, Austin, TX, US, June 2010.
[12] X. Li. Compressed sensing and matrix completion with constant proportion of corruptions. Constructive Approximation, 37(1):73–99, 2013.
[13] S. Oymak and B. Hassibi. Finding dense clusters via "low rank + sparse" decomposition. arXiv preprint arXiv:1104.5186, 2011.
[14] Y. Chen, S. Sanghavi, and H. Xu. Clustering sparse graphs. In Advances in Neural Information Processing Systems (NIPS), pages 2204–2212, Lake Tahoe, Nevada, US, December 2012.
[15] Y. Chen, S. Sanghavi, and H. Xu. Improved graph clustering. IEEE Transactions on Information Theory, 60(10):6440–6455, Oct 2014.
[16] Y. Chen, A. Jalali, S. Sanghavi, and H. Xu. Clustering partially observed graphs via convex optimization. Journal of Machine Learning Research, 15(1):2213–2238, 2014.
[17] Z. Lin, M. Chen, and Y. Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv preprint arXiv:1009.5055, 2010.
[18] N. Srebro and R. R. Salakhutdinov. Collaborative filtering in a non-uniform world: Learning with the weighted trace norm. In Advances in Neural Information Processing Systems (NIPS), pages 2056–2064, Hyatt Regency, Vancouver, Canada, December 2010.
[19] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. arXiv preprint arXiv:1011.3027, 2010.
[20] J. A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4):389–434, 2012.
5,546 | 6,019 | Algorithmic Stability and Uniform Generalization
Ibrahim Alabdulmohsin
King Abdullah University of Science and Technology
Thuwal 23955, Saudi Arabia
[email protected]
Abstract
One of the central questions in statistical learning theory is to determine the conditions under which agents can learn from experience. This includes the necessary and sufficient conditions for generalization from a given finite training set
to new observations. In this paper, we prove that algorithmic stability in the inference process is equivalent to uniform generalization across all parametric loss
functions. We provide various interpretations of this result. For instance, a relationship is proved between stability and data processing, which reveals that algorithmic stability can be improved by post-processing the inferred hypothesis or by
augmenting training examples with artificial noise prior to learning. In addition,
we establish a relationship between algorithmic stability and the size of the observation space, which provides a formal justification for dimensionality reduction
methods. Finally, we connect algorithmic stability to the size of the hypothesis
space, which recovers the classical PAC result that the size (complexity) of the
hypothesis space should be controlled in order to improve algorithmic stability
and improve generalization.
1 Introduction
One fundamental goal of any learning algorithm is to strike a right balance between underfitting
and overfitting. In mathematical terms, this is often translated into two separate objectives. First,
we would like the learning algorithm to produce a hypothesis that is reasonably consistent with the
empirical evidence (i.e. to have a small empirical risk). Second, we would like to guarantee that the
empirical risk (training error) is a valid estimate of the true unknown risk (test error). The former
condition protects against underfitting while the latter condition protects against overfitting.
The rationale behind these two objectives can be understood if we define the generalization risk R_gen by the absolute difference between the empirical and true risks: $R_{gen} \doteq |R_{emp} - R_{true}|$. Then, it is elementary to observe that the true risk R_true is bounded from above by the sum R_emp + R_gen. Hence, by minimizing both the empirical risk (underfitting) and the generalization risk (overfitting), one obtains an inference procedure whose true risk is minimal.
Minimizing the empirical risk alone can be carried out using the empirical risk minimization (ERM)
procedure [1] or some approximations to it. However, the generalization risk is often impossible to
deal with directly. Instead, it is a common practice to bound it analytically so that we can establish
conditions under which it is guaranteed to be small. By establishing conditions for generalization,
one hopes to design better learning algorithms that both perform well empirically and generalize
well to novel observations in the future. A prominent example of such an approach is the Support
Vector Machines (SVM) algorithm for binary classification [2].
However, bounding the generalization risk is quite intricate because it can be approached from
various angles. In fact, several methods have been proposed in the past to prove generalization bounds including uniform convergence, algorithmic stability, Rademacher and Gaussian complexities, generic chaining bounds, the PAC-Bayesian framework, and robustness-based analysis
[1, 3, 4, 5, 6, 7, 8, 9]. Concentration of measure inequalities form the building blocks of these rich
theories.
The proliferation of generalization bounds can be understood if we look into the general setting of
learning introduced by Vapnik [1]. In this setting, we have an observation space Z and a hypothesis
space H. A learning algorithm, henceforth denoted $\mathcal{L} : \cup_{m=1}^{\infty} \mathcal{Z}^m \to \mathcal{H}$, uses a finite set of observations to infer a hypothesis H ∈ H. In the general setting, the inference process end-to-end
is influenced by three key factors: (1) the nature of the observation space Z, (2) the nature of the
hypothesis space H, and (3) the details of the learning algorithm L. By imposing constraints on
any of these three components, one may be able to derive new generalization bounds. For example,
the Vapnik-Chervonenkis (VC) theory derives generalization bounds by assuming constraints on H,
while stability bounds, e.g. [6, 10, 11, 12], are derived by assuming constraints on L.
Given that different generalization bounds can be established by imposing constraints on any of
Z, H, or L, it is intriguing to ask if there exists a single view for generalization that ties all of these
different components together. In this paper, we answer this question in the affirmative by establishing that algorithmic stability alone is equivalent to uniform generalization. Informally speaking, an
inference process is said to generalize uniformly if the generalization risk vanishes uniformly across
all bounded parametric loss functions at the limit of large training sets. A more precise definition
will be presented in the sequel. We will show why constraints that are imposed on either H, Z, or
L to improve uniform generalization can be interpreted as methods of improving the stability of the
learning algorithm L. This is similar in spirit to a result by Kearns and Ron, who showed that having a finite VC dimension in the hypothesis space H implies a certain notion of algorithmic stability
in the inference process [13]. Our statement, however, is more general as it applies to all learning
algorithms that fall under Vapnik's general setting of learning, well beyond uniform convergence.
The rest of the paper is as follows. First, we review the current literature on algorithmic stability,
generalization, and learnability. Then, we introduce key definitions that will be repeatedly used
throughout the paper. Next, we prove the central theorem, which reveals that algorithmic stability is
equivalent to uniform generalization, and provide various interpretations of this result afterward.
2 Related Work
Perhaps, the two most fundamental concepts in statistical learning theory are those of learnability
and generalization [12, 14]. The two concepts are distinct from each other. As will be discussed
in more details next, whereas learnability is concerned with measuring the excess risk within a
hypothesis space, generalization is concerned with estimating the true risk.
In order to define learnability and generalization, suppose we have an observation space Z, a probability distribution of observations P(z), and a bounded stochastic loss function L(·; H) : Z → [0, 1], where H ∈ H is an inferred hypothesis. Note that L is implicitly a function of (parameterized by) H as well. We define the true risk of a hypothesis H ∈ H by the risk functional:
$$R_{true}(H) = \mathbb{E}_{Z \sim P(z)}\, L(Z; H) \tag{1}$$
Then, a learning algorithm is called consistent if the true risk of its inferred hypothesis H converges
to the optimal true risk within the hypothesis space H at the limit of large training sets m → ∞.
A problem is called learnable if it admits a consistent learning algorithm [14]. It has been known
that learnability for supervised classification and regression problems is equivalent to uniform convergence [3, 14]. However, Shalev-Shwartz et al. recently showed that uniform convergence is not
necessary in Vapnik's general setting of learning and proposed algorithmic stability as an alternative
key condition for learnability [14].
Unlike learnability, the question of generalization is concerned primarily with how representative
the empirical risk Remp is to the true risk Rtrue . To elaborate, suppose we have a finite training set
Sm = {Zi}_{i=1,..,m}, which comprises m i.i.d. observations Zi ∼ P(z). We define the empirical risk of a hypothesis H with respect to Sm by:
$$R_{emp}(H; S_m) = \frac{1}{m} \sum_{Z_i \in S_m} L(Z_i; H) \tag{2}$$
We also let R_true(H) be the true risk as defined in Eq. (1). Then, a learning algorithm L is said to generalize if the empirical risk of its inferred hypothesis converges to its true risk as m → ∞.
Similar to learnability, uniform convergence is, by definition, sufficient for generalization [1], but
it is not necessary because the learning algorithm can always restrict its search space to a smaller
subset of H (artificially so to speak). By contrast, it is not known whether algorithmic stability is
necessary for generalization. It has been shown that various notions of algorithmic stability can be
defined that are sufficient for generalization [6, 10, 11, 12, 15, 16]. However, it is not known whether
an appropriate notion of algorithmic stability can be defined that is both necessary and sufficient for
generalization in Vapnik's general setting of learning. In this paper, we answer this question by
showing that stability in the inference process is not only sufficient for generalization, but it is, in
fact, equivalent to uniform generalization, which is a notion of generalization that is stronger than
the one traditionally considered in the literature.
3 Preliminaries
To simplify the discussion, we will always assume that all sets are countable, including the observation space Z and the hypothesis space H. This is similar to the assumptions used in some previous
works such as [6]. However, the main results, which are presented in Section 4, can be readily
generalized. In addition, we assume that all learning algorithms are invariant to permutations of the
training set. Hence, the order of training examples is irrelevant.
Moreover, if X ∼ P(x) is a random variable drawn from the alphabet X and f(X) is a function of X, we write $\mathbb{E}_{X \sim P(x)} f(X)$ to mean $\sum_{x \in \mathcal{X}} P(x)\, f(x)$. Often, we will simply write $\mathbb{E}_X f(X)$ to mean $\mathbb{E}_{X \sim P(x)} f(X)$ if the distribution of X is clear from the context. If X takes its values from a finite set S uniformly at random, we write X ∼ S to denote this distribution of X. If X is a boolean random variable, then I{X} = 1 if and only if X is true, otherwise I{X} = 0. In general,
random variables are denoted with capital letters, instances of random variables are denoted with
small letters, and alphabets are denoted with calligraphic typeface. Also, given two probability mass
functions P and Q defined on the same alphabet A, we will write ⟨P, Q⟩ to denote the overlapping coefficient, i.e. intersection, between P and Q. That is, $\langle P, Q \rangle \doteq \sum_{a \in \mathcal{A}} \min\{P(a), Q(a)\}$. Note that ⟨P, Q⟩ = 1 − ‖P, Q‖_T, where ‖P, Q‖_T is the total variation distance. Last, we will write $B(k; \phi, n) = \binom{n}{k} \phi^k (1 - \phi)^{n-k}$ to denote the binomial distribution.
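A minimal numpy sketch of the overlapping coefficient and its stated relation to total variation:

```python
# Minimal sketch (assuming numpy): overlapping coefficient <P, Q>.
import numpy as np

def overlap(P, Q):
    return np.minimum(P, Q).sum()       # sum_a min{P(a), Q(a)}

P = np.array([0.5, 0.3, 0.2])
Q = np.array([0.2, 0.3, 0.5])
tv = 0.5 * np.abs(P - Q).sum()          # total variation distance
assert np.isclose(overlap(P, Q), 1.0 - tv)
```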
In this paper, we consider the general setting of learning introduced by Vapnik [1]. To reiterate, we
have an observation space Z and a hypothesis space H. Our learning algorithm L receives a set of
m observations Sm = {Zi}_{i=1,..,m} ∈ Z^m generated i.i.d. from a fixed unknown distribution P(z), and picks a hypothesis H ∈ H with probability $P_{\mathcal{L}}(H = h \mid S_m)$. Formally, $\mathcal{L} : \cup_{m=1}^{\infty} \mathcal{Z}^m \to \mathcal{H}$ is a stochastic map. In this paper, we allow the hypothesis H to be any summary statistic of the training
set. It can be a measure of central tendency, as in unsupervised learning, or it can be a mapping from
an input space to an output space, as in supervised learning. In fact, we even allow H to be a subset
of the training set itself. In formal terms, L is a stochastic map between the two random variables
H ∈ H and Sm ∈ Z^m, where the exact interpretation of those random variables is irrelevant.
In any learning task, we assume a non-negative bounded loss function L(Z; H) : Z → [0, 1] is used to measure the quality of the inferred hypothesis H ∈ H on the observation Z ∈ Z. Most importantly, we assume that L(·; H) : Z → [0, 1] is parametric:
Definition 1 (Parametric Loss Functions). A loss function L(·; H) : Z → [0, 1] is called parametric if it is independent of the training set Sm given the inferred hypothesis H. That is, a parametric loss function satisfies the Markov chain: Sm → H → L(·; H).
For any fixed hypothesis H ∈ H, we define its true risk R_true(H) by Eq. (1), and define its
empirical risk on a training set Sm , denoted Remp (H; Sm ), by Eq. (2). We also define the true and
empirical risks of the learning algorithm L by the expected risk of its inferred hypothesis:
$$\hat{R}_{true}(\mathcal{L}) = \mathbb{E}_{S_m}\, \mathbb{E}_{H \sim P_{\mathcal{L}}(h|S_m)}\, R_{true}(H) = \mathbb{E}_{S_m}\, \mathbb{E}_{H|S_m}\, R_{true}(H) \tag{3}$$
$$\hat{R}_{emp}(\mathcal{L}) = \mathbb{E}_{S_m}\, \mathbb{E}_{H \sim P_{\mathcal{L}}(h|S_m)}\, R_{emp}(H; S_m) = \mathbb{E}_{S_m}\, \mathbb{E}_{H|S_m}\, R_{emp}(H; S_m) \tag{4}$$
To simplify notation, we will write $\hat{R}_{true}$ and $\hat{R}_{emp}$ instead of $\hat{R}_{true}(\mathcal{L})$ and $\hat{R}_{emp}(\mathcal{L})$. We will
consider the following definition of generalization:
Definition 2 (Generalization). A learning algorithm $\mathcal{L} : \cup_{m=1}^{\infty} \mathcal{Z}^m \to \mathcal{H}$ with a parametric loss function L(·; H) : Z → [0, 1] generalizes if for any distribution P(z) on Z, we have $\lim_{m\to\infty} |\hat{R}_{emp} - \hat{R}_{true}| = 0$, where $\hat{R}_{true}$ and $\hat{R}_{emp}$ are given in Eq. (3) and Eq. (4) respectively.
In other words, a learning algorithm L generalizes according to Definition 2 if its empirical performance (training loss) becomes an unbiased estimator of the true risk as m → ∞. Next, we define
uniform generalization:
Definition 3 (Uniform Generalization). A learning algorithm $\mathcal{L} : \cup_{m=1}^{\infty} \mathcal{Z}^m \to \mathcal{H}$ generalizes uniformly if for any ε > 0, there exists m₀(ε) > 0 such that for all distributions P(z) on Z, all parametric loss functions, and all sample sizes m > m₀(ε), we have $|\hat{R}_{emp}(\mathcal{L}) - \hat{R}_{true}(\mathcal{L})| \le \epsilon$.
Uniform generalization is stronger than the original notion of generalization in Definition 2. In
particular, if a learning algorithm generalizes uniformly, then it generalizes according to Definition
2 as well. The converse, however, is not true. Even though uniform generalization appears to be
quite a strong condition, at first sight, a key contribution of this paper is to show that it is not a strong
condition because it is equivalent to a simple condition, namely algorithmic stability.
4 Main Results
Before we prove that algorithmic stability is equivalent to uniform generalization, we introduce a
probabilistic notion of mutual stability between two random variables. In order to abstract away any
labeling information the random variables might possess, e.g. the observation space may or may not
be a metric space, we define stability by the impact of observations on probability distributions:
Definition 4 (Mutual Stability). Let X ∈ X and Y ∈ Y be two random variables. Then, the mutual stability between X and Y is defined by:
$$S(X; Y) \doteq \langle P(X)\,P(Y),\; P(X, Y)\rangle = \mathbb{E}_X \langle P(Y), P(Y|X)\rangle = \mathbb{E}_Y \langle P(X), P(X|Y)\rangle$$
If we recall that 0 ≤ ⟨P, Q⟩ ≤ 1 is the overlapping coefficient between the two probability distributions P and Q, we see that S(X; Y) given by Definition 4 is indeed a probabilistic measure
of mutual stability. It measures how stable the distribution of Y is before and after observing an
instance of X, and vice versa. A small value of S(X; Y ) means that the probability distribution of
X or Y is heavily perturbed by a single observation of the other random variable. Perfect mutual
stability is achieved when the two random variables are independent of each other.
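To make Definition 4 concrete, the following small sketch (an illustration we add here, not part of the original text; all names are ours) evaluates S(X; Y) for finite random variables directly from a joint probability table, using the overlapping coefficient ⟨P, Q⟩ = Σ_i min(p_i, q_i):

import numpy as np

def overlap(p, q):
    # Overlapping coefficient <P, Q> = sum_i min(p_i, q_i) = 1 - TV(P, Q).
    return np.minimum(p, q).sum()

def mutual_stability(joint):
    # S(X; Y) = E_X <P(Y), P(Y|X)> for joint[i, j] = P(X = i, Y = j).
    px = joint.sum(axis=1)                     # marginal P(X)
    py = joint.sum(axis=0)                     # marginal P(Y)
    s = 0.0
    for i in range(joint.shape[0]):
        if px[i] > 0:
            s += px[i] * overlap(py, joint[i] / px[i])
    return s

indep = np.outer([0.3, 0.7], [0.5, 0.5])       # independent X and Y
print(mutual_stability(indep))                 # -> 1.0 (perfect stability)
coupled = np.diag([0.5, 0.5])                  # deterministic Y = X
print(mutual_stability(coupled))               # -> 0.5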
With this probabilistic notion of mutual stability in mind, we define the stability of a learning algorithm L by the mutual stability between its inferred hypothesis and a random training example.
Definition 5 (Algorithmic Stability). Let L : ∪_{m=1}^∞ Z^m → H be a learning algorithm that receives a finite set of training examples S_m = {Z_i}_{i=1,..,m} ∈ Z^m drawn i.i.d. from a fixed distribution P(z). Let H ∼ P_L(h|S_m) be the hypothesis inferred by L, and let Z_trn ∈ S_m be a single random training example. We define the stability of L by: S(L) = inf_{P(z)} S(H; Z_trn), where the infimum is taken over all possible distributions of observations P(z). A learning algorithm is called algorithmically stable if lim_{m→∞} S(L) = 1.
Note that the above definition of algorithmic stability is rather weak; it only requires that the contribution of any single training example to the overall inference process become more and more negligible
as the sample size increases. In addition, it is well-defined even if the learning algorithm is deterministic because the hypothesis H, if it is a deterministic function of an entire training set of m
observations, remains a stochastic function of any individual observation. We illustrate this concept
with the following example:
Example 1. Suppose that observations Z_i ∈ {0, 1} are i.i.d. Bernoulli trials with P(Z_i = 1) = φ, and that the hypothesis produced by L is the empirical average H = (1/m) Σ_{i=1}^m Z_i. Because P(H = k/m | Z_trn = 1) = B(k − 1; φ, m − 1) and P(H = k/m | Z_trn = 0) = B(k; φ, m − 1), it can be shown using Stirling's approximation [17] that the algorithmic stability of this learning algorithm is asymptotically given by S(L) ≈ 1 − 1/√(2πm), which is achieved when φ = 1/2. A more general statement will be proved later in Section 5.
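The binomial conditionals in Example 1 can be checked numerically. In the sketch below (our addition; the sample size m = 400 is an arbitrary choice), S(H; Z_trn) is computed exactly from the stated conditionals and compared with the asymptotic value 1 − 1/√(2πm) at φ = 1/2:

import numpy as np
from math import comb, pi, sqrt

def binom_pmf(n, p):
    return np.array([comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)])

def stability_empirical_mean(m, phi):
    # S(H; Z_trn) for H = (1/m) sum_i Z_i, using the conditionals of Example 1:
    # m*H | Z_trn = 1 is 1 + Binomial(m-1, phi); m*H | Z_trn = 0 is Binomial(m-1, phi).
    p_h = binom_pmf(m, phi)
    p_h_1 = np.concatenate(([0.0], binom_pmf(m - 1, phi)))
    p_h_0 = np.concatenate((binom_pmf(m - 1, phi), [0.0]))
    ovl = lambda p, q: np.minimum(p, q).sum()
    return phi * ovl(p_h, p_h_1) + (1 - phi) * ovl(p_h, p_h_0)

m = 400
print(stability_empirical_mean(m, 0.5))   # close to the asymptotic value
print(1 - 1 / sqrt(2 * pi * m))           # -> about 0.9800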
Next, we show that the notion of algorithmic stability in Definition 5 is equivalent to the notion of
uniform generalization in Definition 3. Before we do that, we first state the following lemma.
Lemma 1 (Data Processing Inequality). Let A, B, and C be three random variables that satisfy the Markov chain A → B → C. Then: S(A; B) ≤ S(A; C).
Proof. The proof consists of two steps.¹ First, we note that because the Markov chain implies that P(C|B, A) = P(C|B), we have S(A; (B, C)) = S(A; B) by direct substitution into Definition 4. Second, similar to the information-cannot-hurt inequality in information theory [18], it can be shown that S(A; (B, C)) ≤ S(A; C) for any random variables A, B and C. This is proved using some algebraic manipulation and the fact that the minimum of the sums is always larger than the sum of minimums, i.e. min{Σ_i α_i, Σ_i β_i} ≥ Σ_i min{α_i, β_i}. Combining both results yields S(A; B) = S(A; (B, C)) ≤ S(A; C), which is the desired result.
Now, we are ready to state the main result of this paper.
Theorem 1. For any learning algorithm L : ∪_{m=1}^∞ Z^m → H, algorithmic stability as given in Definition 5 is both necessary and sufficient for uniform generalization (see Definition 3). In addition, |\hat{R}_true − \hat{R}_emp| ≤ 1 − S(H; Z_trn) ≤ 1 − S(L), where \hat{R}_true and \hat{R}_emp are the true and empirical risks of the learning algorithm defined in Eq. (3) and (4) respectively.
Proof. Here is an outline of the proof. First, because a parametric loss function L(·; H) : Z → [0, 1] is itself a random variable that satisfies the Markov chain S_m → H → L(·; H), it is not independent of Z_trn ∈ S_m. Hence, the empirical risk is given by \hat{R}_emp = E_{L(·;H)} E_{Z_trn | L(·;H)} L(Z_trn; H). By contrast, the true risk is given by \hat{R}_true = E_{L(·;H)} E_{Z_trn ∼ P(z)} L(Z_trn; H). The difference is:

\hat{R}_true − \hat{R}_emp = E_{L(·;H)} [ E_{Z_trn} L(Z_trn; H) − E_{Z_trn | L(·;H)} L(Z_trn; H) ]

To sandwich the right-hand side between an upper and a lower bound, we note that if P_1(z) and P_2(z) are two distributions defined on the same alphabet Z and F(·) : Z → [0, 1] is a bounded loss function, then |E_{Z∼P_1(z)} F(Z) − E_{Z∼P_2(z)} F(Z)| ≤ ||P_1(z), P_2(z)||_T, where ||P, Q||_T is the total variation distance. The proof of this result can be immediately deduced by considering the two regions {z ∈ Z : P_1(z) > P_2(z)} and {z ∈ Z : P_1(z) < P_2(z)} separately. This is, then, used to deduce the inequalities:

|\hat{R}_true − \hat{R}_emp| ≤ 1 − S(L(·; H); Z_trn) ≤ 1 − S(H; Z_trn) ≤ 1 − S(L),

where the second inequality follows by the data processing inequality in Lemma 1, whereas the last inequality follows by definition of algorithmic stability (see Definition 5). This proves that if L is algorithmically stable, i.e. S(L) → 1 as m → ∞, then |\hat{R}_true − \hat{R}_emp| converges to zero uniformly across all parametric loss functions. Therefore, algorithmic stability is sufficient for uniform generalization. The converse is proved by showing that for any ε > 0, there exists a bounded parametric loss and a distribution P_ε(z) such that 1 − S(L) − ε ≤ |\hat{R}_true − \hat{R}_emp| ≤ 1 − S(L). Therefore, algorithmic stability is also necessary for uniform generalization.
5 Interpreting Algorithmic Stability and Uniform Generalization
In this section, we provide several interpretations of algorithmic stability and uniform generalization.
In addition, we show how Theorem 1 recovers some classical results in learning theory.
5.1 Algorithmic Stability and Data Processing
The relationship between algorithmic stability and data processing is presented in Lemma 1. Given the random variables A, B, and C and the Markov chain A → B → C, we always have S(A; B) ≤ S(A; C). This presents us with qualitative insights into the design of machine learning algorithms.
First, suppose we have two different hypotheses H_1 and H_2. We will say that H_2 is less informative than H_1 if the Markov chain S_m → H_1 → H_2 holds. For example, if observations Z_i ∈ {0, 1} are Bernoulli trials, then H_1 ∈ R can be the empirical average as given in Example 1 while H_2 ∈ {0, 1} can be the label that occurs most often in the training set. Because H_2 = I{H_1 ≥ 1/2}, the hypothesis H_2 contains strictly less information about the original training set than H_1. Formally, we have S_m → H_1 → H_2. In this case, H_2 enjoys a better uniform generalization bound than H_1 because of data processing. Intuitively, we know that such a result should hold because H_2 is less tied to the original training set than H_1. This brings us to the following remark.
¹ Detailed proofs are available in the supplementary file.
Remark 1. We can improve the uniform generalization bound (or equivalently algorithmic stability)
of a learning algorithm by post-processing its inferred hypothesis H in a manner that is conditionally independent of the original training set given H.
Example 2. Post-processing hypotheses is a common technique used in machine learning. This
includes sparsifying the coefficient vector w ∈ R^d in linear methods, where w_j is set to zero if it has
a small absolute magnitude. It also includes methods that have been proposed to reduce the number
of support vectors in SVM by exploiting linear dependence [19]. By the data processing inequality,
such methods improve algorithmic stability and uniform generalization.
Needless to mention, better generalization does not immediately translate into a smaller true risk.
This is because the empirical risk itself may increase when the inferred hypothesis is post-processed
independently of the original training set.
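As a minimal sketch of Remark 1 and Example 2 (ours; the threshold is an arbitrary illustrative value), the post-processing below reads only the inferred hypothesis w, so the Markov chain S_m → w → w′ holds by construction and the data processing inequality applies:

import numpy as np

def sparsify(w, threshold=1e-3):
    # Zero out coefficients with small absolute magnitude. The output depends
    # on w alone (never on the training set), so S_m -> w -> w' holds and the
    # sparsified hypothesis has no worse algorithmic stability.
    out = np.asarray(w, dtype=float).copy()
    out[np.abs(out) < threshold] = 0.0
    return out

print(sparsify([0.8, -0.0004, 0.02, 1e-5]))   # -> [0.8, 0.0, 0.02, 0.0]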
Second, if the Markov chain A → B → C holds, we also obtain S(A; C) ≥ S(B; C) by applying the data processing inequality to the reverse Markov chain C → B → A. As a result, we can improve algorithmic stability by contaminating training examples with artificial noise prior to learning. This is because if S̃_m is a perturbed version of a training set S_m, then S_m → S̃_m → H implies that S(Z_trn; H) ≥ S(Z̃_trn; H), when Z_trn ∈ S_m and Z̃_trn ∈ S̃_m are random training examples drawn uniformly at random from each training set respectively. This brings us to the following remark:
Remark 2. We can improve the algorithmic stability of a learning algorithm by introducing artificial
noise to training examples, and applying the learning algorithm on the perturbed training set.
Example 3. Corrupting training examples with artificial noise, as in the recent dropout method, is a popular technique in neural networks for improving generalization [20]. By the data processing
inequality, such methods indeed improve algorithmic stability and uniform generalization.
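Remark 2 admits a similar sketch (ours; the flip probability and the toy learner are placeholder choices): perturbing the examples before learning inserts the noisy copy between S_m and the hypothesis.

import numpy as np

rng = np.random.default_rng(0)

def perturb_then_learn(S_m, learn, flip_prob=0.1):
    # Corrupt binary examples with i.i.d. label-flip noise, then learn. The
    # hypothesis depends on S_m only through the perturbed copy, so the chain
    # S_m -> perturbed S_m -> H holds, improving stability (Remark 2).
    S_m = np.asarray(S_m)
    flips = rng.random(len(S_m)) < flip_prob
    return learn(np.where(flips, 1 - S_m, S_m))

empirical_mean = lambda S: float(np.mean(S))
print(perturb_then_learn([1, 1, 0, 1, 0, 1], empirical_mean))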
5.2 Algorithmic Stability and the Size of the Observation Space
Next, we look into how the size of the observation space Z influences algorithmic stability. First,
we start with the following definition:
Definition 6 (Lazy Learning). A learning algorithm L is called lazy if its hypothesis H ∈ H is mapped one-to-one with the training set S_m, i.e. the mapping H ↔ S_m is injective.
A lazy learner is so called because its hypothesis is equivalent to the original training set in its information content. Hence, no learning actually takes place. One example is instance-based learning
when H = S_m. Despite their simple nature, lazy learners are useful in practice. They are useful theoretical tools as well. In particular, because of the equivalence H ≡ S_m and the data processing
inequality, the algorithmic stability of a lazy learner provides a lower bound to the stability of any
possible learning algorithm. Therefore, we can relate algorithmic stability (uniform generalization)
to the size of the observation space by quantifying the algorithmic stability of lazy learners. Because
the size of Z is usually infinite, however, we introduce the following definition of effective set size.
Definition 7. In a countable space Z endowed with a probability mass function P(z), the effective size of Z w.r.t. P(z) is defined by:

Ess[Z; P(z)] := 1 + ( Σ_{z∈Z} √(P(z) (1 − P(z))) )²
At one extreme, if P(z) is uniform over a finite alphabet Z, then Ess [Z; P(z)] = |Z|. At the
other extreme, if P(z) is a Kronecker delta distribution, then Ess [Z; P(z)] = 1. As proved next,
this notion of effective set size determines the rate of convergence of an empirical probability mass
function to its true distribution when the distance is measured in the total variation sense. As a result,
it allows us to relate algorithmic stability to a property of the observation space Z.
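Assuming the squared-sum reading of Definition 7 above (inferred from the uniform, delta, and Bernoulli special cases quoted in the text and in footnote 2), the effective size is simple to compute:

import numpy as np

def effective_size(p):
    # Ess[Z; P(z)] = 1 + (sum_z sqrt(P(z) * (1 - P(z))))**2  (Definition 7).
    p = np.asarray(p, dtype=float)
    return 1.0 + np.sqrt(p * (1.0 - p)).sum() ** 2

print(effective_size(np.full(10, 0.1)))   # uniform over 10 symbols -> 10.0
print(effective_size([1.0, 0.0, 0.0]))    # Kronecker delta -> 1.0
print(effective_size([0.5, 0.5]))         # Bernoulli(1/2) -> 2.0 = 1 + 4*(1/4)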
Theorem 2. Let Z be a countable space endowed with a probability mass function P(z). Let S_m be a set of m i.i.d. samples Z_i ∼ P(z). Define P_{S_m}(z) to be the empirical probability mass function induced by drawing samples uniformly at random from S_m. Then:

E_{S_m} ||P(z), P_{S_m}(z)||_T = √((Ess[Z; P(z)] − 1)/(2πm)) + o(1/√m),

where 1 ≤ Ess[Z; P(z)] ≤ |Z| is the effective size of Z (see Definition 7). In addition, for any learning algorithm L : ∪_{m=1}^∞ Z^m → H, we have

S(H; Z_trn) ≥ 1 − √((Ess[Z; P(z)] − 1)/(2πm)) − o(1/√m),

where the bound is achieved by lazy learners (see Definition 6).²
² A special case of Theorem 2 was proved by de Moivre in the 1730s, who showed that the empirical mean of i.i.d. Bernoulli trials with a probability of success φ converges to the true mean at a rate of √(2φ(1 − φ)/(πm)) on average. This is believed to be the first appearance of the square-root law in statistical inference in the literature [21]. Because the effective set size of the Bernoulli distribution, according to Definition 7, is given by 1 + 4φ(1 − φ), Theorem 2 agrees with, in fact generalizes, de Moivre's result.
Proof. Here is an outline of the proof. First, we know that P(S_m) = (m choose m_1, m_2, ...) p_1^{m_1} p_2^{m_2} ⋯, where (m choose m_1, m_2, ...) is the multinomial coefficient. Using the relation ||P, Q||_T = (1/2) ||P − Q||_1, the multinomial series, and De Moivre's formula for the mean deviation of the binomial random variable [22], it can be shown with some algebraic manipulations that:

E_{S_m} ||P(z), P_{S_m}(z)||_T = (1/m) Σ_{k=1,2,...} [ m! / ((p_k m)! ((1 − p_k)m − 1)!) ] (1 − p_k)^{(1−p_k)m} p_k^{1+m p_k}

Using Stirling's approximation to the factorial [17], we obtain the simple asymptotic expression:

E_{S_m} ||P(z), P_{S_m}(z)||_T ≈ Σ_{k=1,2,3,...} (1/2) √(2 p_k (1 − p_k)/(πm)) = √((Ess[Z; P(z)] − 1)/(2πm)),
which is tight due to the tightness of the Stirling approximation. The rest of the theorem follows from the Markov chain Z_trn → S_m → H, the data processing inequality, and Definition 6.
Corollary 1. Given the conditions of Theorem 2, if Z is in addition finite (i.e. |Z| < ∞), then for any learning algorithm L, we have: S(L) ≥ 1 − √((|Z| − 1)/(2πm)) − o(1/√m).
Proof. Because in a finite observation space Z, the maximum effective set size (see Definition 7) is
|Z|, which is attained at the uniform distribution P(z) = 1/|Z|.
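A quick Monte Carlo sanity check of the rate in Theorem 2 (our addition; the distribution, sample size, and trial count are arbitrary choices):

import numpy as np

rng = np.random.default_rng(1)

def expected_tv(p, m, trials=2000):
    # Monte Carlo estimate of E_{S_m} ||P(z), P_{S_m}(z)||_T.
    counts = rng.multinomial(m, p, size=trials)
    return (0.5 * np.abs(counts / m - p).sum(axis=1)).mean()

p = np.full(5, 0.2)                              # uniform over 5 symbols
m = 1000
ess = 1.0 + np.sqrt(p * (1 - p)).sum() ** 2      # effective size, = 5 here
print(expected_tv(p, m))                         # empirical mean TV distance
print(np.sqrt((ess - 1) / (2 * np.pi * m)))      # Theorem 2: about 0.0252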
Intuitively speaking, Theorem 2 and its corollary state that in order to guarantee good uniform
generalization for all possible learning algorithms, the number of observations must be sufficiently
large to cover the entire effective size of the observation space Z. Needless to mention, this is
difficult to achieve in practice so the algorithmic stability of machine learning algorithms must be
controlled in order to guarantee a good generalization from a few empirical observations. Similarly,
the uniform generalization bound can be improved by reducing the effective size of the observation
space, such as by using dimensionality reduction methods.
5.3 Algorithmic Stability and the Complexity of the Hypothesis Space
Finally, we look into the hypothesis space and how it influences algorithmic stability. First, we look
into the role of the size of the hypothesis space. This is formalized in the following theorem.
Theorem 3. Denote by H ∈ H the hypothesis inferred by a learning algorithm L : ∪_{m=1}^∞ Z^m → H. Then, the following bound on algorithmic stability always holds:

S(L) ≥ 1 − √(H(H)/(2m)) ≥ 1 − √(log|H|/(2m)),

where H(·) is the Shannon entropy measured in nats (i.e. using natural logarithms).
Proof. The proof is information-theoretic. If we let I(X; Y) be the mutual information between the r.v.'s X and Y and let S_m = {Z_1, Z_2, ..., Z_m} be a random choice of a training set, we have:

I(S_m; H) = H(S_m) − H(S_m | H) = [ Σ_{i=1}^m H(Z_i) ] − [ H(Z_1|H) + H(Z_2|Z_1, H) + ⋯ ]

Because conditioning reduces entropy, i.e. H(A|B) ≤ H(A) for any r.v.'s A and B, we have:

I(S_m; H) ≥ Σ_{i=1}^m [ H(Z_i) − H(Z_i | H) ] = m [ H(Z_trn) − H(Z_trn | H) ]

Therefore:

I(Z_trn; H) ≤ I(S_m; H)/m    (5)
Next, we use Pinsker's inequality [18], which states that for any probability distributions P and Q: ||P, Q||_T ≤ √(D(P || Q)/2), where ||P, Q||_T is total variation distance and D(P || Q) is the Kullback-Leibler divergence measured in nats (i.e. using natural logarithms). If we recall that S(S_m; H) = 1 − ||P(S_m) P(H), P(S_m, H)||_T while mutual information is I(S_m; H) = D(P(S_m, H) || P(S_m) P(H)), we deduce from Pinsker's inequality and Eq. (5):

S(Z_trn; H) = 1 − ||P(Z_trn) P(H), P(Z_trn, H)||_T ≥ 1 − √(I(Z_trn; H)/2) ≥ 1 − √(I(S_m; H)/(2m)) ≥ 1 − √(H(H)/(2m)) ≥ 1 − √(log|H|/(2m))

In the last line, we used the fact that I(X; Y) ≤ H(X) for any random variables X and Y.
Theorem 3 re-establishes the classical PAC result on the finite hypothesis space [23]. In terms of algorithmic stability, a learning algorithm will enjoy high stability if the size of the hypothesis space is small. In terms of uniform generalization, it states that the generalization risk of a learning algorithm is bounded from above uniformly across all parametric loss functions by √(H(H)/(2m)) ≤ √(log|H|/(2m)), where H(H) is the Shannon entropy of H.
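As a quick numeric illustration of the two bounds (ours, not from the paper), a low-entropy hypothesis distribution yields a strictly better stability guarantee than the worst-case log|H| bound:

import numpy as np

def stability_lower_bounds(p_h, m):
    # Theorem 3: S(L) >= 1 - sqrt(H(H)/(2m)) >= 1 - sqrt(log|H|/(2m)),
    # with the Shannon entropy H(H) measured in nats.
    p = np.asarray([q for q in p_h if q > 0], dtype=float)
    entropy = -(p * np.log(p)).sum()
    return 1 - np.sqrt(entropy / (2 * m)), 1 - np.sqrt(np.log(len(p_h)) / (2 * m))

# The first (entropy-based) bound dominates the second for skewed P(H).
print(stability_lower_bounds([0.9, 0.05, 0.03, 0.02], m=500))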
Next, we relate algorithmic stability to the Vapnik-Chervonenkis (VC) dimension. Despite the fact
that the VC dimension is defined on binary-valued functions whereas algorithmic stability is a functional of probability distributions, there exists a connection between the two concepts. To show this,
we first introduce a notion of an induced concept class that exists for any learning algorithm L:
Definition 8. The concept class C induced by a learning algorithm L : ∪_{m=1}^∞ Z^m → H is defined to be the set of total Boolean functions c(z) = I{P(Z_trn = z | H) ≥ P(Z_trn = z)} for all H ∈ H.
Intuitively, every hypothesis H ∈ H induces a total partition on the observation space Z given by
the Boolean function in Definition 8. That is, H splits Z into two disjoint sets: the set of values in
Z that are, a posteriori, less likely to have been present in the training set than before given that the
inferred hypothesis is H, and the set of all other values. The complexity (richness) of the induced
concept class C is related to algorithmic stability via the VC dimension.
Theorem 4. Let L : ∪_{m=1}^∞ Z^m → H be a learning algorithm with an induced concept class C. Let d_VC(C) be the VC dimension of C. Then, the following bound holds if m > d_VC(C) + 1:

S(L) ≥ 1 − ( 4 + √(d_VC(C) (1 + log(2m))) ) / √(2m)
In particular, L is algorithmically stable if its induced concept class C has a finite VC dimension.
Proof. The proof relies on the fact that algorithmic stability S(L) is bounded from below by 1 − sup_{P(z)} E_{S_m} sup_{h∈H} { |E_{Z∼P(z)} c_h(Z) − E_{Z∼S_m} c_h(Z)| }, where c_H(z) = I{P(Z_trn = z | H) ≥ P(Z_trn = z)}. The final bound follows by applying uniform convergence results [23].
6 Conclusions
In this paper, we showed that a probabilistic notion of algorithmic stability was equivalent to uniform
generalization. In informal terms, a learning algorithm is called algorithmically stable if the impact
of a single training example on the probability distribution of the final hypothesis always vanishes at
the limit of large training sets. In other words, the inference process never depends heavily on any
single training example. If algorithmic stability holds, then the learning algorithm generalizes well
regardless of the choice of the parametric loss function. We also provided several interpretations of
this result. For instance, the relationship between algorithmic stability and data processing reveals
that algorithmic stability can be improved by either post-processing the inferred hypothesis or by
augmenting training examples with artificial noise prior to learning. In addition, we established a
relationship between algorithmic stability and the effective size of the observation space, which provided a formal justification for dimensionality reduction methods. Finally, we connected algorithmic
stability to the complexity (richness) of the hypothesis space, which re-established the classical PAC
result that the complexity of the hypothesis space should be controlled in order to improve stability,
and, hence, improve generalization.
8
References
[1] V. N. Vapnik, "An overview of statistical learning theory," Neural Networks, IEEE Transactions on, vol. 10, September 1999.
[2] C. Cortes and V. Vapnik, "Support-vector networks," Machine Learning, vol. 20, pp. 273-297, 1995.
[3] A. Blumer, A. Ehrenfeucht, D. Haussler, and M. K. Warmuth, "Learnability and the Vapnik-Chervonenkis dimension," Journal of the ACM (JACM), vol. 36, no. 4, pp. 929-965, 1989.
[4] M. Talagrand, "Majorizing measures: the generic chaining," The Annals of Probability, vol. 24, no. 3, pp. 1049-1103, 1996.
[5] D. A. McAllester, "PAC-Bayesian stochastic model selection," Machine Learning, vol. 51, pp. 5-21, 2003.
[6] O. Bousquet and A. Elisseeff, "Stability and generalization," The Journal of Machine Learning Research (JMLR), vol. 2, pp. 499-526, 2002.
[7] P. L. Bartlett and S. Mendelson, "Rademacher and Gaussian complexities: Risk bounds and structural results," The Journal of Machine Learning Research (JMLR), vol. 3, pp. 463-482, 2002.
[8] J.-Y. Audibert and O. Bousquet, "Combining PAC-Bayesian and generic chaining bounds," The Journal of Machine Learning Research (JMLR), vol. 8, pp. 863-889, 2007.
[9] H. Xu and S. Mannor, "Robustness and generalization," Machine Learning, vol. 86, no. 3, pp. 391-423, 2012.
[10] A. Elisseeff, M. Pontil, et al., "Leave-one-out error and stability of learning algorithms with applications," NATO-ASI Series on Learning Theory and Practice, Science Series Sub Series III: Computer and Systems Sciences, 2002.
[11] S. Kutin and P. Niyogi, "Almost-everywhere algorithmic stability and generalization error," in Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence (UAI), 2002.
[12] T. Poggio, R. Rifkin, S. Mukherjee, and P. Niyogi, "General conditions for predictivity in learning theory," Nature, vol. 428, pp. 419-422, 2004.
[13] M. Kearns and D. Ron, "Algorithmic stability and sanity-check bounds for leave-one-out cross-validation," Neural Computation, vol. 11, no. 6, pp. 1427-1453, 1999.
[14] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan, "Learnability, stability and uniform convergence," The Journal of Machine Learning Research (JMLR), vol. 11, pp. 2635-2670, 2010.
[15] L. Devroye, L. Györfi, and G. Lugosi, A Probabilistic Theory of Pattern Recognition. Springer, 1996.
[16] V. Vapnik and O. Chapelle, "Bounds on error expectation for support vector machines," Neural Computation, vol. 12, no. 9, pp. 2013-2036, 2000.
[17] H. Robbins, "A remark on Stirling's formula," American Mathematical Monthly, pp. 26-29, 1955.
[18] T. M. Cover and J. A. Thomas, Elements of Information Theory. Wiley & Sons, 1991.
[19] T. Downs, K. E. Gates, and A. Masters, "Exact simplification of support vector solutions," JMLR, vol. 2, pp. 293-297, 2002.
[20] S. Wager, S. Wang, and P. S. Liang, "Dropout training as adaptive regularization," in NIPS, pp. 351-359, 2013.
[21] S. M. Stigler, The History of Statistics: The Measurement of Uncertainty before 1900. Harvard University Press, 1986.
[22] P. Diaconis and S. Zabell, "Closed form summation for classical distributions: Variations on a theme of de Moivre," Statistical Science, vol. 6, no. 3, pp. 284-302, 1991.
[23] S. Shalev-Shwartz and S. Ben-David, Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
5,547 | 602 | Unsmearing Visual Motion: Development of Long-Range Horizontal Intrinsic Connections
Kevin E. Martin
Jonathan A. Marshall
Department of Computer Science, CB 3175, Sitterson Hall
University of North Carolina, Chapel Hill, NC 27599-3175, U.S.A.
Abstract
Human vision systems integrate information nonlocally, across long
spatial ranges.
For example, a moving stimulus appears smeared
when viewed briefly (30 ms), yet sharp when viewed for a longer
exposure (100 ms) (Burr, 1980). This suggests that visual systems
combine information along a trajectory that matches the motion of
the stimulus. Our self-organizing neural network model shows how
developmental exposure to moving stimuli can direct the formation of
horizontal trajectory-specific motion integration pathways that unsmear
representations of moving stimuli. These results account for Burr's data
and can potentially also model other phenomena, such as visual inertia.
1 INTRODUCTION
Nonlocal interactions strongly influence the processing of visual motion information
and the response characteristics of visual neurons. Examples include: attentional
modulation of receptive field shape; modulation of neural response by stimuli beyond
the classical receptive field; and neural response to large-field background motion.
In this paper we present a model of the development of nonlocal neural mechanisms
for visual motion processing. Our model (Marshall, 1990a, 1991) is based on the
long-range excitatory horizontal intrinsic connections (LEHICs) that have been
identified in the visual cortex of a variety of animal species (Blasdel, Lund, &
Fitzpatrick, 1985; Callaway & Katz, 1990; Gabbott, Martin, & Whitteridge, 1987;
Gilbert & Wiesel, 1989; Luhmann, Martinez Millan, & Singer, 1986; Lund, 1987;
Michalski, Gerstein, Czarkowska, & Tarnecki, 1983; Mitchison & Crick, 1982;
Nelson & Frost, 1985; Rockland & Lund, 1982, 1983; Rockland, Lund, &
Humphrey, 1982; Ts'o, Gilbert, & Wiesel, 1986).
2 VISUAL UNSMEARING
Human visual systems summate signals over a period of approximately 120 ms
in daylight (Burr 1980; Ross & Hogben, 1974). This slow summation reinforces
stationary stimuli but would tend to smear any moving object. Nevertheless, human
observers report perceiving both stationary and moving stimuli as sharp (Anderson,
Van Essen, & Gallant, 1990; Burr, 1980; Burr, Ross, & Morrone, 1986; Morgan &
Benton, 1989; Welch & McKee, 1985). Why do moving objects not appear smeared?
Burr (1980) measured perceived smear of moving spots as a function of exposure
time. He found that a moving visual spot appears smeared (with a comet-like tail)
when it is viewed for a brief exposure (30 ms) yet perfectly sharp when viewed
for a longer exposure (100 ms) (Figure 1). The ability to counteract smear at
longer exposures suggests that human visual systems combine (or integrate) and
sharpen motion information from multiple locations along a specific spatiotemporal
trajectory that matches the motion of the stimulus (Barlow, 1979,1981; Burr, 1980;
Burr & Ross, 1986) in the domains of direction, velocity, position, and time.
This unsmearing phenomenon also suggests the existence of a memory-like effect,
or persistence, which would cause the behavior of processing mechanisms to differ
in the early, smeared stages of a spot's motion and in the later, unsmeared stages.
3 NETWORK ARCHITECTURE
We built a biologically-modeled self-organizing neural network (SONN) containing
long-range excitatory horizontal intrinsic connections (LEHICs) that learns to
integrate visual motion information nonlocally. The network laterally propagates
predictive moving stimulus information in a trajectory-specific manner to successive
image locations where a stimulus is likely to appear. The network uses this propagated information to sharpen its representation of visual motion.
3.1 LONG-RANGE EXCITATORY HORIZONTAL INTRINSIC CONNECTIONS
The network's LEHICs modeled several characteristics consistent with neurophysiological data:
? They are highly specific and anisotropic (Callaway & Katz, 1990).
[Figure 1 image omitted; panels show 0 ms, 30 ms, and 100 ms stimulus exposures.]
Figure 1: Motion unsmearing. A spot presented for 30 ms appears to have a comet-like tail, but a spot presented for 100 ms appears sharp and unsmeared (Burr, 1980).
? They typically run between neurons with similar stimulus preferences
(Callaway & Katz, 1990).
? They can run for very long distances across the network space (e.g., 10 mm
horizontally across cortex) (Luhmann, Martinez Millan, & Singer, 1986).
? They can be shaped adaptively through visual experience (Callaway &
Katz, 1990; Luhmann, Martinez Millan, & Singer, 1986).
? They may serve to predictively prime motion-sensitive neurons (Gabbott,
Martin, & Whitteridge, 1987).
Some characteristics of our modeled LEHICs are also consistent with those of
the horizontal connections described by Hirsch & Gilbert (1991). For instance,
we predicted (Marshall, 1990a) that horizontal excitatory input alone should not
cause suprathreshold activation, but horizontal excitatory input should amplify
activation when local bottom-up excitation is present. Hirsch & Gilbert (1991)
directly observed these characteristics in area 17 pyramidal neurons in the cat.
Since LEHICs are found in early vision processing areas like VI, we hypothesize
that similar connections are likely to be found within "higher" cortical areas as
well, like areas MT and STS. Our simulated networks may correspond to structures
in such higher areas. Although our long-range lateral signals are modeled as
being excitatory (Orban, Gulyas, & Vogels, 1987), they are also functionally
homologous to long-range trajectory-specific lateral inhibition of neurons tuned to
null-direction motion (Ganz & Felder, 1984; Marlin, Douglas, & Cynader, 1991; Motter, Steinmetz, Duffy, & Mountcastle, 1987).
LEHICs constitute one possible means by which nonlocal communication can take
place in visual cortex. Other means, such as large bottom-up receptive fields, can
also cause information to be transmitted nonlocally. However, the main difference
between LEHICs and bottom-up receptive fields is that LEHICs provide lateral
feedback information about the outcome of other processing within a given stage.
This generates a form of memory, or persistence. Purely bottom-up networks (without LEHICs or other feedback) would perform processing afresh at each
step, so that the outcome of processing would be influenced only by the direct,
feedforward inputs at each step.
3.2 RESULTS OF NETWORK DEVELOPMENT
In our model, developmental exposure to moving stimuli guides the formation
of motion-integration pathways that unsmear representations of moving stimuli.
Our model network is repeatedly exposed to training input sequences of smeared motion patterns through bottom-up excitatory connections. Smear is modeled as an exponential decay and represents the responses of temporally integrating neurons
to moving visual stimuli. The network contains a set of initially nonspecific LEHICs
with fixed signal transmission latencies. The moving stimuli cause the pattern of
weights across the LEHICs to become refined, eventually forming "chains" that
correspond to trajectories in the visual environment.
To model unsmearing fully, we would need a 2-D retinotopically organized layer
of neurons tuned to different directions of motion and different velocities. Each
trajectory in visual space would be represented by a set of like velocity and direction
sensitive neurons whose receptive fields are located along the trajectory. These
neurons would be connected through a trajectory-specific chain of time-delayed
LEHICs. Lateral inhibition between chains would be organized selectively to allow
representations of multiple stimuli to be simultaneously active (Marshall, 1990a),
thereby letting most trajectory representations operate independently.
Our simulation consists of a 1-D subnetwork of the full 2-D network, with 32
neurons sensitive to a single velocity and direction of motion (Figure 2a). The
lateral inhibitory connections are fixed in a Gaussian distribution, but the LEHIC weights can change according to a modified Hebbian rule (Grossberg, 1982):

d/dt z_ji = ε f(x_i) (−z_ji + h(x_j)),

where z_ji represents the weight of the LEHIC from the jth neuron to the ith neuron, x_i represents the value of the activation level of the ith neuron, ε is a slow learning rate, h(x_j) = max(0, x_j)² is a faster-than-linear signal function, and f(x_i) = max(0, x_i)² is a faster-than-linear sampling function. To model multiple-step trajectories, we used LEHICs with three different signal transmission delays. Initially the LEHICs were all represented, but their weights were zero.
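A minimal discrete-time sketch of this learning rule (our illustration; the Euler step, learning rate, network size, and the exponential-smear input generator are assumptions rather than values from the paper):

import numpy as np

def hebbian_step(z, x, eps=0.01, dt=1.0):
    # One Euler step of dz_ji/dt = eps * f(x_i) * (-z_ji + h(x_j)), with
    # z[i, j] the LEHIC weight from neuron j to neuron i and
    # f(x) = h(x) = max(0, x)**2 as in the text.
    f = np.maximum(0.0, x) ** 2
    h = np.maximum(0.0, x) ** 2
    return z + dt * eps * f[:, None] * (-z + h[None, :])

n = 32
z = np.zeros((n, n))                 # weights start at zero, as in the text
for t in range(200):                 # smeared rightward-moving activity
    pos = t % n
    x = np.exp(-0.5 * ((pos - np.arange(n)) % n))   # exponential-decay tail
    z = hebbian_step(z, x)
# After training, z[i, j] is largest when neuron i lies just ahead
# (rightward) of neuron j, i.e. the weights form direction-specific chains.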
As stimuli move across the receptive fields of the neurons in the network, many
neurons are coactive because the network is unable to resolve the smear. By the
learning rule, the weights of the LEHICs between these coactive neurons increase.
This leads to a profusion of connection weights (Figure 2b), analogous to the "crude
clusters" proposed by Callaway and Katz (1990) to describe the early (postnatal
days 14-35) structure of horizontal connections in cat area V1.
After sufficient exposure to moving stimuli, the "crude clusters" in our simulation
become sharper (Figure 2c) because of the faster-than-linear signal functions. This
refinement of the pattern of connection weights into chains might correspond to the
later (postnatal day 42+) development of "refined clusters" described by Callaway
and Katz (1990).
3.3 RESULTS OF NETWORK OPERATION
Before learning begins, the network is incapable of unsmearing a stimulus moving across the receptive fields of the neurons (Figure 3a). As the stimulus moves from
one position to the next, the pattern of neuron activations is no less smeared than
[Figure 2 image omitted; panels (a)-(c) show the network schematic with excitatory input, Gaussian lateral inhibitory connections, and LEHICs.]
Figure 2: Three phases of modeled development. (a) Initial. Lateral excitatory
connections were modifiable and had zero weight. Lateral inhibition was fixed in a
Gaussian distribution (thickness of dotted arrows). The neurons received sequences
of smeared rightward-moving excitatory input patterns. (b) Profusion. During early
development lateral excitatory connections went through a phase of weight profusion. The
output LEHIC weights (thickness of arrows) from one neuron (filled circle) are shown;
weights were biased toward rightward motion. (c) Refinement. During later development,
the pattern of weights settled into sets of regular anisotropic chains; most of the early
profuse connections were eliminated. No external parameters were manipulated during the
simulation to induce the initial-profusion-refinement phase transitions. The simulation
contained three different signal transmission latencies, but only one is shown here.
the moving input pattern. No information is built up along the trajectory since the
LEHIC weights are still zero.
After training, the network is able to resolve the smear (Figure 3b) in a manner
reminiscent of Burr's results (Figure 1). As a stimulus moves, it excites a sequence
of neurons whose receptive fields lie along its trajectory. As each neuron receives
excitatory input in turn from the moving stimulus, it becomes activated and emits
excitatory signals along its trajectory-specific LEHICs. Subsequent neurons along
the trajectory then receive both direct stimulus-generated excitation and lateral
time-delayed excitation. The combination causes these neurons to become even
more active; thus activation accumulates along the chain toward an asymptote. The
accumulating activation lets neurons farther along the trajectory more effectively
suppress (via lateral inhibition) the activation of the neurons carrying the trailing
smear. The comet-like tail contracts progressively, and the representation of the
[Figure 3 image omitted; two 12-frame sequences (Time=1 through Time=12) of the 1-D retina, before and after learning.]
Figure 3: Results of unsmearing simulation. A simulated spot moves rightward for 12 time steps along a 1-D model retina. Smeared input patterns are plotted as vertical lines,
and relative output neuron activation patterns are plotted as shading intensity of circles
(neurons). (a) Before learning (left) the network is unable to resolve the smear in the
input, but (b) after learning (right), the smear is resolved by time step 11. The same test
input patterns are used both before and after learning.
moving stimulus becomes increasingly sharp.
Each neuron's activation value x_i changes according to a shunting differential equation (Grossberg, 1982):

d/dt x_i = −A x_i + (B − x_i) E_i − (C + x_i) I_i,

where the neuron's total excitatory input E_i = K_i (1 + L_i) combines bottom-up input K_i (the smeared motion) with summed lateral excitation input L_i = β Σ_j h(x_j) z⁺_ji, the neuron's inhibitory input is I_i = γ Σ_j g(x_j) z⁻_ji, h(x_j) = max(0, x_j)² and g(x_j) = max(0, x_j)³ are faster-than-linear signal functions, and A, B, C, β, and γ are constants.
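A minimal Euler-step sketch of the reconstructed shunting dynamics (ours; the constants are placeholders and the paper's three distinct transmission delays are omitted for brevity):

import numpy as np

def shunting_step(x, K, z_plus, z_minus, dt=0.1,
                  A=1.0, B=1.0, C=0.2, beta=1.0, gamma=1.0):
    # One Euler step of dx_i/dt = -A*x_i + (B - x_i)*E_i - (C + x_i)*I_i,
    # with E_i = K_i * (1 + beta * sum_j h(x_j) z+_ji) and
    # I_i = gamma * sum_j g(x_j) z-_ji.
    h = np.maximum(0.0, x) ** 2          # faster-than-linear excitatory signal
    g = np.maximum(0.0, x) ** 3          # faster-than-linear inhibitory signal
    E = K * (1.0 + beta * (z_plus @ h))  # bottom-up input gated by lateral sum
    I = gamma * (z_minus @ g)
    return x + dt * (-A * x + (B - x) * E - (C + x) * I)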
4 CONCLUSIONS AND FUTURE RESEARCH
One might wonder why visual systems allow smear to be represented during the
first 30 ms of a stimulus' motion, since simple winner-take-all lateral inhibition
could easily eliminate the smear in the representation. Our research leads us
to suggest that representing smear lets human visual systems tolerate and even
exploit initial uncertainty in local motion measurements. A system with winner-take-all sharpening could not generate a reliable trajectory prediction from an
initial inaccurate local motion measurement because the motion can be determined
accurately only after multiple measurements along the trajectory are combined
(Figure 4a). The inaccurate trajectory predictions of such a network would impair
its ability to develop or maintain circuits for combining motion measurements
(Marshall, 1990ab). We conclude that motion perception systems need to represent
explicitly both initial smear and subsequent unsmearing.
Figure 4a illustrates that when a moving object first appears, the direction in which
it will move is uncertain. As the object continues to move, its successor positions
become increasingly predictable, in general. The initial smear in the representation
is necessary for communicating prior trajectory information to the representations
of many possible future trajectory positions.
Faster-than-linear signal functions (Figure 4b) were used so that a neuron would
generate little lateral excitation and inhibition when it is uncertain about the
presence of the moving stimulus in its receptive field (when a new stimulus appears)
and so that a highly active neuron (more certain about the presence of the stimulus
in its receptive field) would generate strong lateral excitation and inhibition.
Our results illustrate how visual systems may become able both to propagate motion
[Figure 4 image omitted; (a) shrinking uncertainty regions along a motion trajectory; (b) signal function mapping uncertain vs. certain responses.]
Figure 4: Visual motion system uncertainty. (a) When a moving object first appears, the
direction in which it will move is uncertain (top row, circular shaded region). As motion
proceeds (second, third, and fourth rows), the set of possible stimulus locations becomes
increasingly predictable (smaller shaded regions). (b) Faster-than-linear signal functions
maintain smear of uncertain data but sharpen more certain data.
information in a trajectory-specific manner and to use the propagated information
to unsmear representations of moving objects: (1) Regular anisotropic "chain"
patterns of time-delayed horizontal excitatory connections become established
through a learning procedure, in response to exposure to ordinary moving visual
scenes. (2) Accumulation of propagated motion information along these chains
causes a sharpening that unsmears representations of moving visual stimuli.
These results let us model the integration-along-trajectory revealed by Burr's (1980)
experiment, within a developmental framework that corresponds to known
neurophysiological data; they can potentially also let other nonlocal motion
phenomena, such as visual inertia (Anstis & Ramachandran, 1987), be modeled.
ACKNOWLEDGEMENTS
This work was supported in part by the National Eye Institute (EY09669), by the
Office of Naval Research (Cognitive and Neural Sciences, N00014-93-1-0130), and by an Oak Ridge Associated Universities Junior Faculty Enhancement Award.
REFERENCES
Anderson, C.H., Van Essen, D.C., & Gallant, J.L. (1990). "Blur into Focus."
Nature, 343, 419-420.
Anstis, S.M. & Ramachandran, V.S. (1987). "Visual Inertia in Apparent Motion."
Vision Research, 27(5), 755-764.
Barlow, H.B. (1979). "Reconstructing the Visual Image in Space and Time."
Nature, 279, 189-190.
Barlow, H.B. (1981). "Critical Limiting Factors in the Design of the Eye and Visual
Cortex." Proceedings of the Royal Society of Lond011, Ser. B, 212, 1-34.
Blasdel, G.G., Lund, J.S., & Fitzpatrick, D. (1985). "Intrinsic Connections of
Macaque Striate Cortex: Axonal Projections of Cells Outside Lamina 4C." Journal
of Neuroscience, 5(12), 3350-3369.
Burr, D. (1980). "Motion Smear." Nature, 284, 164-165.
Burr, D. & Ross, J. (1986). "Visual Processing of Motion." Trends in Neuroscience, 9(7), 304-307.
Burr, D.C., Ross, J., & Morrone, M.C. (1986). "Seeing Objects in Motion."
Proceedings of the Royal Society of London, Ser. B, 227,249-265.
Callaway, E.M. & Katz, L.C. (1990). "Emergence and Refinement of Clustered
Horizontal Connections in Cat Striate Cortex." J. Neurophysiol., 10, 1134-1153.
Gabbott, P.L.A., Martin, K.A.C., & Whitteridge, D. (1987). "Connections Between Pyramidal Neurons in Layer 5 of Cat Visual Cortex (Area 17)." Journal of
Comparative Neurology, 259, 364-381.
Ganz, L. & Felder, R. (1984). "Mechanism of Directional Selectivity in Simple Neurons of the Cat's Visual Cortex Analyzed with Stationary Flash Sequences."
Journal of Neurophysiology, 51,294-324.
Gilbert, C.D. & Wiesel, T.N. (1989). "Columnar Specificity of Intrinsic Horizontal
and Corticocortical Connections in Cat Visual Cortex." Journal of Neuroscience,
9, 2432-2442.
Hirsch, J. & Gilbert, C.D. (1991). "Synaptic Physiology of Horizontal Connections
in the Cat's Visual Cortex." Journal of Neuroscience, 11, 1800-1809.
Luhmann, H.J., Martinez Millan, L., & Singer, W. (1986). "Development of
Horizontal Intrinsic Connections in Cat Striate Cortex." Experimental Brain
Research, 63, 443-448.
Lund, J.S. (1987). "Local Circuit Neurons of Macaque Monkey Striate Cortex: I.
Neurons of Laminae 4C and 5A." Journal of Comparative Neurology, 257, 60-92.
Marlin, S.G., Douglas, R.M., & Cynader, M.S. (1991).
"Position-Specific
Adaptation in Simple Cell Receptive Fields of the Cat Striate Cortex." Journal
of Neurophysiology, 66(5),1769-1784.
Marshall, J.A. (1990a). "Self-Organizing Neural Networks for Perception of Visual
Motion." Neural Networks, 3, 45-74.
"Representation of Uncertainty in Self-Organizing
Marshall, J .A. ,1990b).
Neural Networks.' Proceedings of the International Neural Network Conference,
Paris, France, July 1990,809-812.
Marshall, J.A. (1991). "Challenges of Vision Theory: Self-Organization of Neural Mechanisms for Stable Steering of Object-Grouping Data in Visual Motion Perception." Invited Paper, in Stochastic and Neural Methods in Signal Processing, Image Processing, and Computer Vision, Su-Shing Chen, Ed., Proceedings of the SPIE 1569, San Diego, CA, July 1991, pp. 200-215.
Michalski, A., Gerstein, G.L., Czarkowska, J., & Tarnecki, R. (1983). "Interactions
Between Cat Striate Cortex Neurons." Experimental Brain Research, 51, 97-107.
Mitchison, G. & Crick, F. (1982). "Long Axons Within the Striate Cortex: Their Distribution, Orientation, and Patterns of Connection." Proceedings of the National
Academy of Sciences of the U.S.A., 79, 3661-3665.
Morgan, M.J. & Benton, S. (1989). "Motion-Deblurring in Human Vision." Nature,
340, 385-386.
Motter, B.C., Steinmetz, M.A., Duffy, C.J., & Mountcastle, V.B. (1987).
"Functional Properties of Parietal Visual Neurons: Mechanisms of Directionality
Along a Single Axis." Journal of Neuroscience, 7(1), 154-176.
Nelson, J.I. & Frost, B.J. (1985). "Intracortical Facilitation Among Co-Oriented,
Co-Axially Aligned Simple Cells in Cat Striate Cortex." Experimental Brain
Research, 61, 54-6l.
Orban, G.A., Gulyas, B., & Vogels, R. (1987).
"Influence of a Moving
Textured Background on Direction Selectivity of Cat Striate Neurons." Journal
of Neurophysiology, 57(6), 1792-1812.
Rockland, K.S. & Lund, J .S. (1982). "Widespread Periodic Intrinsic Connections
in the Tree Shrew Visual Cortex." Science, 215, 1532-1534.
Rockland, K.S. & Lund, J .S. (1983). "Intrinsic Laminar Lattice Connections in
Primate Visual Cortex." Journal of Comparative Neurology, 216, 303-318.
Rockland, K.S., Lund, J .S., & Humphrey, A.L. (1982). "Anatomical Banding of
Intrinsic Connections in Striate Cortex of Tree Shrews (Tupaia glis )." Journal of
Comparative Neurology, 209, 41-58.
Ross, J. & Hogben, J.H. (1974). Vision Research, 14,1195-1201.
Ts'o, D.Y., Gilbert, C.D., & Wiesel, T.N. (1986).
"Relationships Between
Horizontal Interactions and Functional Architecture in Cat Striate Cortex as
Revealed by Cross-Correlation Analysis." Journal of Neuroscience, 6(4), 1160-1170.
Welch, L. & McKee, S.P. (1985). "Colliding Targets: Evidence for Spatial
Localization Within the Motion System." Vision Research, 25(12), 1901-1910.
5,548 | 6,020 | Mixing Time Estimation in Reversible Markov
Chains from a Single Sample Path
Daniel Hsu
Columbia University
Aryeh Kontorovich
Ben-Gurion University
Csaba Szepesvári
University of Alberta
[email protected]
[email protected]
[email protected]
Abstract
This article provides the first procedure for computing a fully data-dependent interval that traps the mixing time tmix of a finite reversible ergodic Markov chain at
a prescribed confidence level. The interval is computed from a single finite-length
sample path from the Markov chain, and does not require the knowledge of any
parameters of the chain. This stands in contrast to previous approaches, which either only provide point estimates, or require a reset mechanism, or additional prior
knowledge. The interval is constructed around the relaxation time trelax, which is strongly related
to the mixing time, and the width of the interval converges to zero
roughly at a √n rate, where n is the length of the sample path. Upper and lower
bounds are given on the number of samples required to achieve constant-factor
multiplicative accuracy. The lower bounds indicate that, unless further restrictions are placed on the chain, no procedure can achieve this accuracy level before
seeing each state at least Ω(trelax) times on the average. Finally, future directions
of research are identified.
1 Introduction
This work tackles the challenge of constructing fully empirical bounds on the mixing time of
Markov chains based on a single sample path. Let (Xt)t=1,2,... be an irreducible, aperiodic time-homogeneous Markov chain on a finite state space [d] := {1, 2, . . . , d} with transition matrix P.
Under this assumption, the chain converges to its unique stationary distribution π = (π_i)_{i=1}^d regardless of the initial state distribution q:
lim_{t→∞} Pr_q(Xt = i) = lim_{t→∞} (q Pᵗ)_i = π_i   for each i ∈ [d].
The mixing time tmix of the Markov chain is the number of time steps required for the chain to be
within a fixed threshold of its stationary distribution:
tmix := min { t ∈ ℕ : sup_q max_{A⊆[d]} |Pr_q(Xt ∈ A) − π(A)| ≤ 1/4 }.    (1)
Here, π(A) = Σ_{i∈A} π_i is the probability assigned to set A by π, and the supremum is over all possible initial distributions q. The problem studied in this work is the construction of a non-trivial confidence interval Cn = Cn(X1, X2, . . . , Xn, δ) ⊆ [0, ∞], based only on the observed sample path (X1, X2, . . . , Xn) and δ ∈ (0, 1), that succeeds with probability 1 − δ in trapping the value of the mixing time tmix.
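For intuition, the quantity in (1) can be computed by brute force for a small chain: the total-variation distance to stationarity is convex in the initial distribution q, so the supremum is attained at a point mass and it suffices to scan over starting states. A minimal sketch follows (the three-state transition matrix is an arbitrary illustrative choice, not from this paper).

import numpy as np

def mixing_time(P, eps=0.25, t_max=10_000):
    """Smallest t with max_q TV(q P^t, pi) <= eps, per definition (1).

    By convexity, the sup over initial distributions is attained at a
    point mass, so it suffices to check every starting state.
    """
    d = P.shape[0]
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    pi = pi / pi.sum()
    Pt = np.eye(d)
    for t in range(1, t_max + 1):
        Pt = Pt @ P
        # TV distance between each row of P^t and pi, maximized over rows.
        tv = 0.5 * np.abs(Pt - pi).sum(axis=1).max()
        if tv <= eps:
            return t
    return None

# Example: lazy random walk on a 3-cycle.
P = np.array([[0.5 , 0.25, 0.25],
              [0.25, 0.5 , 0.25],
              [0.25, 0.25, 0.5 ]])
print(mixing_time(P))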
This problem is motivated by the numerous scientific applications and machine learning tasks in
which the quantity of interest is the mean π(f) = Σ_i π_i f(i) for some function f of the states
of a Markov chain. This is the setting of the celebrated Markov Chain Monte Carlo (MCMC)
paradigm [1], but the problem also arises in performance prediction involving time-correlated data,
as is common in reinforcement learning [2]. Observable bounds on mixing times are useful in the
design and diagnostics of these methods; they yield effective approaches to assessing the estimation
quality, even when a priori knowledge of the mixing time or correlation structure is unavailable.
Main results. We develop the first procedure for constructing non-trivial and fully empirical confidence intervals for Markov mixing time. Consider a reversible ergodic Markov chain on d states
with absolute spectral gap γ⋆ and stationary distribution minorized by π⋆. As is well-known [3, Theorems 12.3 and 12.4],
(trelax − 1) ln 2 ≤ tmix ≤ trelax ln(4/π⋆)    (2)
where trelax := 1/γ⋆ is the relaxation time. Hence, it suffices to estimate γ⋆ and π⋆. Our main results are summarized as follows.
1. In Section 3.1, we show that in some problems n = Ω((d log d)/γ⋆ + 1/π⋆) observations are necessary for any procedure to guarantee constant multiplicative accuracy in estimating γ⋆ (Theorems 1 and 2). Essentially, in some problems every state may need to be visited about log(d)/γ⋆ times, on average, before an accurate estimate of the mixing time can be provided, regardless of the actual estimation procedure used.
2. In Section 3.2, we give a point-estimator for γ⋆, and prove in Theorem 3 that it achieves multiplicative accuracy from a single sample path of length Õ(1/(π⋆ γ⋆³)).¹ We also provide a point-estimator for π⋆ that requires a sample path of length Õ(1/(π⋆ γ⋆)). This establishes the feasibility of estimating the mixing time in this setting. However, the valid confidence intervals suggested by Theorem 3 depend on the unknown quantities π⋆ and γ⋆. We also discuss the importance of reversibility, and some possible extensions to nonreversible chains.
3. In Section 4, the construction of valid fully empirical confidence intervals for γ⋆ and π⋆ are considered. First, the difficulty of the task is explained, i.e., why the standard approach of turning the finite time confidence intervals of Theorem 3 into a fully empirical one fails. Combining several results from perturbation theory in a novel fashion we propose a new procedure and prove that it avoids slow convergence (Theorem 4). We also explain how to combine the empirical confidence intervals from Algorithm 1 with the non-empirical bounds from Theorem 3 to produce valid empirical confidence intervals. We prove in Theorem 5 that the width of these new intervals converge to zero asymptotically at least as fast as those from either Theorem 3 and Theorem 4.
Related work. There is a vast statistical literature on estimation in Markov chains. For instance, it is known that under the assumptions on (Xt)t from above, the law of large numbers guarantees that the sample mean π̂_n(f) := (1/n) Σ_{t=1}^n f(Xt) converges almost surely to π(f) [4], while the central limit theorem tells us that as n → ∞, the distribution of the deviation √n(π̂_n(f) − π(f)) will be normal with mean zero and asymptotic variance lim_{n→∞} n Var(π̂_n(f)) [5].
Although these asymptotic results help us understand the limiting behavior of the sample mean over a Markov chain, they say little about the finite-time non-asymptotic behavior, which is often needed for the prudent evaluation of a method or even its algorithmic design [6–13]. To address this need, numerous works have developed Chernoff-type bounds on Pr(|π̂_n(f) − π(f)| > ε), thus providing valuable tools for non-asymptotic probabilistic analysis [6, 14–16]. These probability bounds are larger than corresponding bounds for independent and identically distributed (iid) data due to the temporal dependence; intuitively, for the Markov chain to yield a fresh draw Xt′ that behaves as if it was independent of Xt, one must wait Θ(tmix) time steps. Note that the bounds generally depend on distribution-specific properties of the Markov chain (e.g., P, tmix, π⋆), which are often unknown a priori in practice. Consequently, much effort has been put towards estimating these unknown quantities, especially in the context of MCMC diagnostics, in order to provide data-dependent assessments of estimation accuracy [e.g., 11, 12, 17–19]. However, these approaches generally only provide asymptotic guarantees, and hence fall short of our goal of empirical bounds that are valid with any finite-length sample path.
Learning with dependent data is another main motivation to our work. Many results from statistical learning and empirical process theory have been extended to sufficiently fast mixing, dependent
¹The Õ(·) notation suppresses logarithmic factors.
data [e.g., 20–26], providing learnability assurances (e.g., generalization error bounds). These results are often given in terms of mixing coefficients, which can be consistently estimated in some cases [27]. However, the convergence rates of the estimates from [27], which are needed to derive confidence bounds, are given in terms of unknown mixing coefficients. When the data comes from a Markov chain, these mixing coefficients can often be bounded in terms of mixing times, and hence our main results provide a way to make them fully empirical, at least in the limited setting we study.
It is possible to eliminate many of the difficulties presented above when allowed more flexible access to the Markov chain. For example, given a sampling oracle that generates independent transitions from any given state (akin to a "reset" device), the mixing time becomes an efficiently testable property in the sense studied in [28, 29]. On the other hand, when one only has a circuit-based description of the transition probabilities of a Markov chain over an exponentially-large state space, there are complexity-theoretic barriers for many MCMC diagnostic problems [30].
2 Preliminaries
2.1 Notations
We denote the set of positive integers by ℕ, and the set of the first d positive integers {1, 2, . . . , d} by [d]. The non-negative part of a real number x is [x]₊ := max{0, x}, and ⌈x⌉₊ := max{0, ⌈x⌉}. We use ln(·) for natural logarithm, and log(·) for logarithm with an arbitrary constant base. Boldface symbols are used for vectors and matrices (e.g., v, M), and their entries are referenced by subindexing (e.g., v_i, M_{i,j}). For a vector v, ‖v‖ denotes its Euclidean norm; for a matrix M, ‖M‖ denotes its spectral norm. We use Diag(v) to denote the diagonal matrix whose (i, i)-th entry is v_i. The probability simplex is denoted by Δ^{d−1} = {p ∈ [0, 1]^d : Σ_{i=1}^d p_i = 1}, and we regard vectors in Δ^{d−1} as row vectors.
2.2 Setting
Let P ∈ (Δ^{d−1})^d ⊂ [0, 1]^{d×d} be a d × d row-stochastic matrix for an ergodic (i.e., irreducible and aperiodic) Markov chain. This implies there is a unique stationary distribution π ∈ Δ^{d−1} with π_i > 0 for all i ∈ [d] [3, Corollary 1.17]. We also assume that P is reversible (with respect to π):
π_i P_{i,j} = π_j P_{j,i},   i, j ∈ [d].    (3)
The minimum stationary probability is denoted by π⋆ := min_{i∈[d]} π_i.
Define the matrices
M := Diag(π) P   and   L := Diag(π)^{−1/2} M Diag(π)^{−1/2}.
The (i, j)th entry of the matrix M contains the doublet probabilities associated with P: M_{i,j} = π_i P_{i,j} is the probability of seeing state i followed by state j when the chain is started from its stationary distribution. The matrix M is symmetric on account of the reversibility of P, and hence it follows that L is also symmetric. (We will strongly exploit the symmetry in our results.) Further, L = Diag(π)^{1/2} P Diag(π)^{−1/2}, hence L and P are similar and thus their eigenvalue systems are identical. Ergodicity and reversibility imply that the eigenvalues of L are contained in the interval (−1, 1], and that 1 is an eigenvalue of L with multiplicity 1 [3, Lemmas 12.1 and 12.2]. Denote and order the eigenvalues of L as
1 = λ₁ > λ₂ ≥ · · · ≥ λ_d > −1.
Let λ⋆ := max{λ₂, |λ_d|}, and define the (absolute) spectral gap to be γ⋆ := 1 − λ⋆, which is strictly positive on account of ergodicity.
Let (Xt)_{t∈ℕ} be a Markov chain whose transition probabilities are governed by P. For each t ∈ ℕ, let π^{(t)} ∈ Δ^{d−1} denote the marginal distribution of Xt, so
π^{(t+1)} = π^{(t)} P,   t ∈ ℕ.
Note that the initial distribution π^{(1)} is arbitrary, and need not be the stationary distribution π.
The goal is to estimate γ⋆ and π⋆ from the length n sample path (Xt)_{t∈[n]}, and also to construct fully empirical confidence intervals that trap γ⋆ and π⋆ with high probability; in particular, the construction of the intervals should not depend on any unobservable quantities, including γ⋆ and π⋆ themselves. As mentioned in the introduction, it is well-known that the mixing time of the Markov chain tmix (defined in Eq. 1) is bounded in terms of γ⋆ and π⋆, as shown in Eq. (2). Moreover, convergence rates for empirical processes on Markov chain sequences are also often given in terms of mixing coefficients that can ultimately be bounded in terms of γ⋆ and π⋆ (as we will show in the proof of our first result). Therefore, valid confidence intervals for γ⋆ and π⋆ can be used to make these rates fully observable.
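As a concrete illustration of the quantities defined above, the following sketch simulates a sample path and evaluates γ⋆, π⋆, trelax, and the two sides of Eq. (2) for a small birth-death chain (birth-death chains are automatically reversible); the particular matrix is an arbitrary illustrative choice.

import numpy as np

rng = np.random.default_rng(0)

def simulate(P, n, x0=0):
    """Draw a length-n sample path (X_1, ..., X_n) from transition matrix P."""
    d = P.shape[0]
    path = np.empty(n, dtype=int)
    path[0] = x0
    for t in range(1, n):
        path[t] = rng.choice(d, p=P[path[t - 1]])
    return path

# A small reversible (birth-death) chain.
P = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.6, 0.4]])
# Stationary distribution (left eigenvector) and spectral quantities.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))]); pi /= pi.sum()
L = np.diag(pi ** 0.5) @ P @ np.diag(pi ** -0.5)   # similar to P; symmetric here
lam = np.sort(np.linalg.eigvalsh(L))               # eigenvalues of L, ascending
gamma_star = 1 - max(lam[-2], abs(lam[0]))          # absolute spectral gap
pi_star, t_relax = pi.min(), 1 / gamma_star
print((t_relax - 1) * np.log(2), t_relax * np.log(4 / pi_star))  # Eq. (2) bounds
path = simulate(P, 5000)   # a sample path, as used by the estimators below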
3 Point estimation
In this section, we present lower and upper bounds on achievable rates for estimating the spectral
gap as a function of the length of the sample path n.
3.1 Lower bounds
The purpose of this section is to show lower bounds on the number of observations necessary to achieve a fixed multiplicative (or even just additive) accuracy in estimating the spectral gap γ⋆. By Eq. (2), the multiplicative accuracy lower bound for γ⋆ gives the same lower bound for estimating the mixing time. Our first result holds even for two state Markov chains and shows that a sequence length of Ω(1/π⋆) is necessary to achieve even a constant additive accuracy in estimating γ⋆.
Theorem 1. Pick any π̄ ∈ (0, 1/4). Consider any estimator γ̂⋆ that takes as input a random sample path of length n ≤ 1/(4π̄) from a Markov chain starting from any desired initial state distribution. There exists a two-state ergodic and reversible Markov chain distribution with spectral gap γ⋆ ≥ 1/2 and minimum stationary probability π⋆ ≥ π̄ such that
Pr[|γ̂⋆ − γ⋆| ≥ 1/8] ≥ 3/8.
Next, considering d state chains, we show that a sequence of length Ω(d log(d)/γ⋆) is required to estimate γ⋆ up to a constant multiplicative accuracy. Essentially, the sequence may have to visit all d states at least log(d)/γ⋆ times each, on average. This holds even if π⋆ is within a factor of two of the largest possible value of 1/d that it can take, i.e., when π is nearly uniform.
Theorem 2. There is an absolute constant c > 0 such that the following holds. Pick any positive integer d ≥ 3 and any γ̄ ∈ (0, 1/2). Consider any estimator γ̂⋆ that takes as input a random sample path of length n < c d log(d)/γ̄ from a d-state reversible Markov chain starting from any desired initial state distribution. There is an ergodic and reversible Markov chain distribution with spectral gap γ⋆ ∈ [γ̄, 2γ̄] and minimum stationary probability π⋆ ≥ 1/(2d) such that
Pr[|γ̂⋆ − γ⋆| ≥ γ̄/2] ≥ 1/4.
The proofs of Theorems 1 and 2 are given in Appendix A.²
3.2 A plug-in based point estimator and its accuracy
Let us now consider the problem of estimating γ⋆. For this, we construct a natural plug-in estimator.
Along the way, we also provide an estimator for the minimum stationary probability, allowing one
to use the bounds from Eq. (2) to trap the mixing time.
Define the random matrix M̂ ∈ [0, 1]^{d×d} and random vector π̂ ∈ Δ^{d−1} by
M̂_{i,j} := |{t ∈ [n−1] : (Xt, Xt+1) = (i, j)}| / (n − 1),   i, j ∈ [d],
π̂_i := |{t ∈ [n] : Xt = i}| / n,   i ∈ [d].
Furthermore, define
Sym(L̂) := (1/2)(L̂ + L̂ᵀ)
to be the symmetrized version of the (possibly non-symmetric) matrix
L̂ := Diag(π̂)^{−1/2} M̂ Diag(π̂)^{−1/2}.
Let λ̂₁ ≥ λ̂₂ ≥ · · · ≥ λ̂_d be the eigenvalues of Sym(L̂). Our estimator of the minimum stationary probability π⋆ is π̂⋆ := min_{i∈[d]} π̂_i, and our estimator of the spectral gap γ⋆ is γ̂⋆ := 1 − max{λ̂₂, |λ̂_d|}.
²A full version of this paper, with appendices, is available on arXiv [31].
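A direct transcription of these plug-in estimators follows (a sketch; it assumes every state is visited at least once so that Diag(π̂)^{−1/2} is well defined).

import numpy as np

def plug_in_estimates(path, d):
    """Plug-in estimators of pi_star and the spectral gap from one sample path.

    Follows the construction above: empirical doublet matrix M-hat, empirical
    state frequencies pi-hat, and the eigenvalues of Sym(L-hat).
    """
    n = len(path)
    M = np.zeros((d, d))
    for s, t in zip(path[:-1], path[1:]):
        M[s, t] += 1.0
    M /= n - 1
    pi_hat = np.bincount(path, minlength=d) / n
    D = np.diag(pi_hat ** -0.5)          # assumes every state was visited
    L = D @ M @ D
    lam = np.sort(np.linalg.eigvalsh((L + L.T) / 2))
    gamma_hat = 1 - max(lam[-2], abs(lam[0]))
    return pi_hat.min(), gamma_hat

rng = np.random.default_rng(1)
demo_path = rng.integers(0, 3, size=5000)   # stand-in; use a real chain path in practice
print(plug_in_estimates(demo_path, d=3))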
These estimators have the following accuracy guarantees:
Theorem 3. There exists an absolute constant C > 0 such that the following holds. Assume the estimators π̂⋆ and γ̂⋆ described above are formed from a sample path of length n from an ergodic and reversible Markov chain. Let γ⋆ > 0 denote the spectral gap and π⋆ > 0 the minimum stationary probability. For any δ ∈ (0, 1), with probability at least 1 − δ,
|π̂⋆ − π⋆| ≤ C ( √( π⋆ log(d/δ) / (γ⋆ n) ) + log(d/δ) / (γ⋆ n) )    (4)
and
|γ̂⋆ − γ⋆| ≤ C ( √( log(d/δ) · log(n/δ) / (π⋆ γ⋆ n) ) + log(1/π⋆) / (π⋆ n) ).    (5)
Theorem 3 implies that the sequence lengths required to estimate π⋆ and γ⋆ to within constant multiplicative factors are, respectively, Õ(1/(π⋆ γ⋆)) and Õ(1/(π⋆ γ⋆³)). By Eq. (2), the second of these is also a bound on the required sequence length to estimate tmix.
The proof of Theorem 3 is based on analyzing the convergence of the sample averages M̂ and π̂ to their expectation, and then using perturbation bounds for eigenvalues to derive a bound on the error of γ̂⋆. However, since these averages are formed using a single sample path from a (possibly) non-stationary Markov chain, we cannot use standard large deviation bounds; moreover applying Chernoff-type bounds for Markov chains to each entry of M̂ would result in a significantly worse sequence length requirement, roughly a factor of d larger. Instead, we adapt probability tail bounds for sums of independent random matrices [32] to our non-iid setting by directly applying a blocking technique of [33] as described in the article of [20]. Due to ergodicity, the convergence rate can be bounded without any dependence on the initial state distribution π^{(1)}. The proof of Theorem 3 is given in Appendix B.
Note that because the eigenvalues of L are the same as that of the transition probability matrix P, we could have instead opted to estimate P, say, using simple frequency estimates obtained from the sample path, and then computing the second largest eigenvalue of this empirical estimate P̂. In fact, this approach is a way to extend to non-reversible chains, as we would no longer rely on the symmetry of M or L. The difficulty with this approach is that P lacks the structure required by certain strong eigenvalue perturbation results. One could instead invoke the Ostrowski-Elsner theorem [cf. Theorem 1.4 on Page 170 of 34], which bounds the matching distance between the eigenvalues of a matrix A and its perturbation A + E by O(‖E‖^{1/d}). Since ‖P̂ − P‖ is expected to be of size O(n^{−1/2}), this approach will give a confidence interval for γ⋆ whose width shrinks at a rate of O(n^{−1/(2d)}), an exponential slow-down compared to the rate from Theorem 3. As demonstrated through an example from [34], the dependence on the d-th root of the norm of the perturbation cannot be avoided in general. Our approach based on estimating a symmetric matrix affords us the use of perturbation results that exploit more structure.
Returning to the question of obtaining a fully empirical confidence interval for γ⋆ and π⋆, we notice that, unfortunately, Theorem 3 falls short of being directly suitable for this, at least without further assumptions. This is because the deviation terms themselves depend inversely both on γ⋆ and π⋆, and hence can never rule out 0 (or an arbitrarily small positive value) as a possibility for γ⋆ or π⋆.³
³Using Theorem 3, it is possible to trap γ⋆ in the union of two empirical confidence intervals, one around γ̂⋆ and the other around zero, both of which shrink in width as the sequence length increases.
Algorithm 1 Empirical confidence intervals
Input: Sample path (X1, X2, . . . , Xn), confidence parameter δ ∈ (0, 1).
1: Compute state visit counts and smoothed transition probability estimates:
   N_i := |{t ∈ [n−1] : Xt = i}|, i ∈ [d];   N_{i,j} := |{t ∈ [n−1] : (Xt, Xt+1) = (i, j)}|, (i, j) ∈ [d]²;
   P̂_{i,j} := (N_{i,j} + 1/d) / (N_i + 1).
2: Let Â# be the group inverse of Â := I − P̂.
3: Let π̂ ∈ Δ^{d−1} be the unique stationary distribution for P̂.
4: Compute eigenvalues λ̂₁ ≥ λ̂₂ ≥ · · · ≥ λ̂_d of Sym(L̂), where L̂ := Diag(π̂)^{1/2} P̂ Diag(π̂)^{−1/2}.
5: Spectral gap estimate:
   γ̂⋆ := 1 − max{λ̂₂, |λ̂_d|}.
6: Empirical bounds for |P̂_{i,j} − P_{i,j}| for (i, j) ∈ [d]²: c := 1.01, τ_{n,δ} := inf{t ≥ 0 : 2d²(1 + ⌈log_c(2n/t)⌉₊) e^{−t} ≤ δ}, and
   B̂_{i,j} := √( c τ_{n,δ} / (2N_i) ) + √( c τ_{n,δ} / (2N_i) + 2c P̂_{i,j}(1 − P̂_{i,j}) τ_{n,δ} / N_i ) + ( (5/3) τ_{n,δ} + |P̂_{i,j} − 1/d| ) / N_i.
7: Relative sensitivity of π:
   κ̂ := (1/2) max{ Â#_{j,j} − min{Â#_{i,j} : i ∈ [d]} : j ∈ [d] }.
8: Empirical bounds for max_{i∈[d]} |π̂_i − π_i| and max_{i∈[d]} {|√(π_i/π̂_i) − 1|, |√(π̂_i/π_i) − 1|}:
   b̂ := κ̂ max{ B̂_{i,j} : (i, j) ∈ [d]² },   ρ̂ := (1/2) max_{i∈[d]} max{ b̂/π̂_i , b̂/[π̂_i − b̂]₊ }.
9: Empirical bounds for |γ̂⋆ − γ⋆|:
   ŵ := 2ρ̂ + ρ̂² + (1 + 2ρ̂ + ρ̂²) ( Σ_{(i,j)∈[d]²} (π̂_i/π̂_j) B̂_{i,j}² )^{1/2}.
In effect, the fact that the Markov chain could be slow mixing and the long-term frequency of some states could be small makes it difficult to be confident in the estimates provided by γ̂⋆ and π̂⋆. This suggests that in order to obtain fully empirical confidence intervals, we need an estimator that is not subject to such effects; we pursue this in Section 4. Theorem 3 thus primarily serves as a point of comparison for what is achievable in terms of estimation accuracy when one does not need to provide empirical confidence bounds.
4 Fully empirical confidence intervals
In this section, we address the shortcoming of Theorem 3 and give fully empirical confidence intervals for the stationary probabilities and the spectral gap γ⋆. The main idea is to use the Markov property to eliminate the dependence of the confidence intervals on the unknown quantities (including γ⋆ and π⋆). Specifically, we estimate the transition probabilities from the sample path using simple frequency estimates: as a consequence of the Markov property, for each state, the frequency estimates converge at a rate that depends only on the number of visits to the state, and in particular the rate (given the visit count of the state) is independent of the mixing time of the chain.
As discussed in Section 3, it is possible to form a confidence interval for γ⋆ based on the eigenvalues of an estimated transition probability matrix by appealing to the Ostrowski-Elsner theorem. However, as explained earlier, this would lead to a slow O(n^{−1/(2d)}) rate. We avoid this slow rate by using an estimate of the symmetric matrix L, so that we can use a stronger perturbation result (namely Weyl's inequality, as in the proof of Theorem 3) available for symmetric matrices.
To form an estimate of L based on an estimate of the transition probabilities, one possibility is to estimate π using a frequency-based estimate for π as was done in Section 3, and appeal to the relation L = Diag(π)^{1/2} P Diag(π)^{−1/2} to form a plug-in estimate. However, as noted in Section 3.2, confidence intervals for the entries of π formed this way may depend on the mixing time. Indeed, such an estimate of π does not exploit the Markov property.
We adopt a different strategy for estimating π, which leads to our construction of empirical confidence intervals, detailed in Algorithm 1. We form the matrix P̂ using smoothed frequency estimates of P (Step 1), then compute the group inverse Â# of Â = I − P̂ (Step 2), followed by finding the unique stationary distribution π̂ of P̂ (Step 3), this way decoupling the accuracy of π̂ from the mixing time. The group inverse Â# of Â is uniquely defined; and if P̂ defines an ergodic chain (which is the case here due to the use of the smoothed estimates), Â# can be computed at the cost of inverting an (d−1)×(d−1) matrix [35, Theorem 5.2].⁴ Further, once given Â#, the unique stationary distribution π̂ of P̂ can be read out from the last row of Â# [35, Theorem 5.3]. The group inverse is also used to compute the sensitivity of π. Based on π̂ and P̂, we construct the plug-in estimate L̂ of L, and use the eigenvalues of its symmetrization to form the estimate γ̂⋆ of the spectral gap (Steps 4 and 5). In the remaining steps, we use perturbation analyses to relate π̂ and π, viewing P as the perturbation of P̂; and also to relate γ̂⋆ and γ⋆, viewing L as a perturbation of Sym(L̂). Both analyses give error bounds entirely in terms of observable quantities (e.g., π̂), tracing back to empirical error bounds for the smoothed frequency estimates of P.
The most computationally expensive step in Algorithm 1 is the computation of the group inverse Â#, which, as noted, reduces to matrix inversion. Thus, with a standard implementation of matrix inversion, the algorithm's time complexity is O(n + d³), while its space complexity is O(d²).
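A minimal numpy sketch of Steps 1-5 and 7 follows. Two liberties are taken and should be read as assumptions: π̂ is obtained as the leading left eigenvector of P̂ rather than read off the last row of Â# as in [35, Theorem 5.3], and the group inverse is formed through the identity Â# = (Â + 1π̂ᵀ)^{−1} − 1π̂ᵀ, which holds for ergodic chains, rather than the (d−1)×(d−1) inversion of [35, Theorem 5.2].

import numpy as np

def empirical_spectral_gap(path, d):
    """Steps 1-5 and 7 of Algorithm 1 (sketch).

    Assumptions vs. the paper: pi-hat comes from an eigendecomposition of
    P-hat, and the group inverse uses A# = (A + 1 pi^T)^{-1} - 1 pi^T,
    valid for ergodic chains, instead of [35, Theorem 5.2].
    """
    n = len(path)
    N = np.zeros((d, d))
    for s, t in zip(path[:-1], path[1:]):
        N[s, t] += 1.0
    Ni = N.sum(axis=1)
    P_hat = (N + 1.0 / d) / (Ni + 1.0)[:, None]   # Step 1: smoothed estimates
    # Step 3: stationary distribution of the (ergodic) smoothed chain.
    evals, evecs = np.linalg.eig(P_hat.T)
    pi_hat = np.real(evecs[:, np.argmax(np.real(evals))])
    pi_hat /= pi_hat.sum()
    # Step 2: group inverse of A-hat = I - P-hat.
    one_pi = np.outer(np.ones(d), pi_hat)
    A_sharp = np.linalg.inv(np.eye(d) - P_hat + one_pi) - one_pi
    # Steps 4-5: spectral gap of the symmetrized similarity transform.
    D = np.diag(pi_hat ** 0.5)
    L_hat = D @ P_hat @ np.linalg.inv(D)
    lam = np.sort(np.linalg.eigvalsh((L_hat + L_hat.T) / 2))
    gamma_hat = 1 - max(lam[-2], abs(lam[0]))
    # Step 7: relative sensitivity of pi.
    kappa_hat = 0.5 * (np.diag(A_sharp) - A_sharp.min(axis=0)).max()
    return pi_hat, gamma_hat, kappa_hat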
To state our main theorem concerning Algorithm 1, we first define κ to be analogous to κ̂ from Step 7, with Â# replaced by the group inverse A# of A := I − P. The result is as follows.
Theorem 4. Suppose Algorithm 1 is given as input a sample path of length n from an ergodic and reversible Markov chain and confidence parameter δ ∈ (0, 1). Let γ⋆ > 0 denote the spectral gap, π the unique stationary distribution, and π⋆ > 0 the minimum stationary probability. Then, on an event of probability at least 1 − δ,
π_i ∈ [π̂_i − b̂, π̂_i + b̂] for all i ∈ [d],   and   γ⋆ ∈ [γ̂⋆ − ŵ, γ̂⋆ + ŵ].
Moreover, b̂ and ŵ almost surely satisfy (as n → ∞)
b̂ = O( κ max_{(i,j)∈[d]²} √( P_{i,j} log log n / (π_i n) ) ),   ŵ = O( κ √( log log n / (π⋆ n) ) + √( d log log n / (π⋆ n) ) ).⁵
The proof of Theorem 4 is given in Appendix C. As mentioned above, the obstacle encountered in Theorem 3 is avoided by exploiting the Markov property. We establish fully observable upper and lower bounds on the entries of P that converge at a √(n / log log n) rate using standard martingale tail inequalities; this justifies the validity of the bounds from Step 6. Properties of the group inverse [35, 36] and eigenvalue perturbation theory [34] are used to validate the empirical bounds on π_i and γ⋆ developed in the remaining steps of the algorithm.
The first part of Theorem 4 provides valid empirical confidence intervals for each π_i and for γ⋆, which are simultaneously valid at confidence level δ.
⁴The group inverse of a square matrix A, a special case of the Drazin inverse, is the unique matrix A# satisfying AA#A = A, A#AA# = A# and A#A = AA#.
⁵In Theorems 4 and 5, our use of big-O notation is as follows. For a random sequence (Yn)n and a (non-random) positive sequence (θ_{δ,n})n parameterized by δ, we say "Yn = O(θ_{δ,n}) holds almost surely as n → ∞" if there is some universal constant C > 0 such that for all δ, lim sup_{n→∞} Yn/θ_{δ,n} ≤ C holds almost surely.
The second part of Theorem 4 shows that the widths of the intervals decrease as the sequence length increases. We show in Appendix C.5 that κ ≤ d/γ⋆, and hence
b̂ = O( (d/γ⋆) max_{(i,j)∈[d]²} √( P_{i,j} log log n / (π_i n) ) ),   ŵ = O( (d/γ⋆) √( log log n / (π⋆ n) ) ).
It is easy to combine Theorems 3 and 4 to yield intervals whose widths shrink at least as fast as both the non-empirical intervals from Theorem 3 and the empirical intervals from Theorem 4. Specifically, determine lower bounds on π⋆ and γ⋆ using Algorithm 1, π⋆ ≥ min_{i∈[d]} [π̂_i − b̂]₊, γ⋆ ≥ [γ̂⋆ − ŵ]₊; then plug in these lower bounds for π⋆ and γ⋆ in the deviation bounds in Eq. (5) from Theorem 3. This yields a new interval centered around the estimate of γ⋆ from Theorem 3, and it no longer depends on unknown quantities. The interval is a valid 1 − 2δ probability confidence interval for γ⋆, and for sufficiently large n, the width shrinks at the rate given in Eq. (5). We can similarly construct an empirical confidence interval for π⋆ using Eq. (4), which is valid on the same 1 − 2δ probability event.⁶ Finally, we can take the intersection of these new intervals with the corresponding intervals from Algorithm 1. This is summarized in the following theorem, which we prove in Appendix D.
Theorem 5. The following holds under the same conditions as Theorem 4. For any δ ∈ (0, 1), the confidence intervals Û and V̂ described above for γ⋆ and π⋆, respectively, satisfy γ⋆ ∈ Û and π⋆ ∈ V̂ with probability at least 1 − 2δ. Furthermore, the widths of these intervals almost surely satisfy (as n → ∞)
|Û| = O( min{ √( log(d/δ) · log(n) / (π⋆ γ⋆ n) ), ŵ } ),   |V̂| = O( √( π⋆ log(d/(π⋆ δ)) / (γ⋆ n) ) ),
where ŵ is the width from Algorithm 1.
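A schematic of this combination step follows; theorem3_halfwidth is a hypothetical stand-in for the right-hand side of Eq. (5) evaluated at the plugged-in lower bounds, and the remaining inputs are Algorithm 1's outputs.

import numpy as np

def combined_gap_interval(gamma_hat_thm3, pi_hat, b_hat, gamma_hat, w_hat,
                          theorem3_halfwidth):
    """Intersect the Theorem 3 interval (with plugged-in lower bounds on
    pi_star and gamma_star) with Algorithm 1's interval for gamma_star.

    `theorem3_halfwidth(pi_lb, gamma_lb)` is a placeholder for the deviation
    bound on the right-hand side of Eq. (5); it is not specified here.
    """
    pi_lb = max((pi_hat - b_hat).min(), 0.0)       # lower bound on pi_star
    gamma_lb = max(gamma_hat - w_hat, 0.0)         # lower bound on gamma_star
    r = theorem3_halfwidth(pi_lb, gamma_lb)        # now fully observable
    lo = max(gamma_hat_thm3 - r, gamma_hat - w_hat)
    hi = min(gamma_hat_thm3 + r, gamma_hat + w_hat)
    return lo, hi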
5 Discussion
The construction used in Theorem 5 applies more generally: Given a confidence interval of the form I_n = I_n(γ⋆, π⋆, δ) for some confidence level δ and a fully empirical confidence set E_n(δ) for (γ⋆, π⋆) for the same level, I′_n = E_n(δ) ∩ ⋃_{(γ,π)∈E_n(δ)} I_n(γ, π, δ) is a valid fully empirical 2δ-level confidence interval whose asymptotic width matches that of I_n up to lower order terms under reasonable assumptions on E_n and I_n. In particular, this suggests that future work should focus on closing the gap between the lower and upper bounds on the accuracy of point-estimation. Another interesting direction is to reduce the computation cost: The current cubic cost in the number of states can be too high even when the number of states is only moderately large.
Perhaps more important, however, is to extend our results to large state space Markov chains: In most practical applications the state space is continuous or is exponentially large in some natural parameters. As follows from our lower bounds, without further assumptions, the problem of fully data dependent estimation of the mixing time is intractable for information theoretical reasons. Interesting directions for future work thus must consider Markov chains with specific structure. Parametric classes of Markov chains, including but not limited to Markov chains with factored transition kernels with a few factors, are a promising candidate for such future investigations. The results presented here are a first step in the ambitious research agenda outlined above, and we hope that they will serve as a point of departure for further insights in the area of fully empirical estimation of Markov chain parameters based on a single sample path.
References
[1] J. S. Liu. Monte Carlo Strategies in Scientific Computing. Springer Series in Statistics. Springer-Verlag,
2001.
[2] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction (Adaptive Computation and
Machine Learning). A Bradford Book, 1998.
[3] D. Levin, Y. Peres, and E. Wilmer. Markov Chains and Mixing Times. AMS, 2008.
[4] S. P. Meyn and R. L. Tweedie. Markov Chains and Stochastic Stability. Springer, 1993.
[5] C. Kipnis and S. R. S. Varadhan. Central limit theorem for additive functionals of reversible Markov processes and applications to simple exclusions. Comm. Math. Phys., 104(1):1–19, 1986.
⁶For the π⋆ interval, we only plug in lower bounds on γ⋆ and π⋆ where these quantities appear as 1/γ⋆ and 1/π⋆ in Eq. (4). It is then possible to "solve" for observable bounds on π⋆. See Appendix D for details.
[6] I. Kontoyiannis, L. A. Lastras-Montaño, and S. P. Meyn. Exponential bounds and stopping rules for MCMC and general Markov chains. In VALUETOOLS, page 45, 2006.
[7] M.-F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. In ICML, pages 65–72, 2006.
[8] V. Mnih, Cs. Szepesvári, and J.-Y. Audibert. Empirical Bernstein stopping. In ICML, pages 672–679, 2008.
[9] A. Maurer and M. Pontil. Empirical Bernstein bounds and sample-variance penalization. In COLT, 2009.
[10] L. Li, M. L. Littman, T. J. Walsh, and A. L. Strehl. Knows what it knows: a framework for self-aware learning. Machine Learning, 82(3):399–443, 2011.
[11] J. M. Flegal and G. L. Jones. Implementing MCMC: estimating with confidence. In Handbook of Markov chain Monte Carlo, pages 175–197. Chapman & Hall/CRC, 2011.
[12] B. M. Gyori and D. Paulin. Non-asymptotic confidence intervals for MCMC in practice. arXiv:1212.2016, 2014.
[13] A. Swaminathan and T. Joachims. Counterfactual risk minimization: Learning from logged bandit feedback. In ICML, 2015.
[14] D. Gillman. A Chernoff bound for random walks on expander graphs. SIAM Journal on Computing, 27(4):1203–1220, 1998.
[15] C. A. León and F. Perron. Optimal Hoeffding bounds for discrete reversible Markov chains. Annals of Applied Probability, pages 958–970, 2004.
[16] D. Paulin. Concentration inequalities for Markov chains by Marton couplings and spectral methods. Electronic Journal of Probability, 20:1–32, 2015.
[17] S. T. Garren and R. L. Smith. Estimating the second largest eigenvalue of a Markov transition matrix. Bernoulli, 6:215–242, 2000.
[18] G. L. Jones and J. P. Hobert. Honest exploration of intractable probability distributions via Markov chain Monte Carlo. Statist. Sci., 16(4):312–334, 2001.
[19] Y. Atchadé. Markov Chain Monte Carlo confidence intervals. Bernoulli, 2015. (to appear).
[20] B. Yu. Rates of convergence for empirical processes of stationary mixing sequences. The Annals of Probability, 22(1):94–116, January 1994.
[21] R. L. Karandikar and M. Vidyasagar. Rates of uniform convergence of empirical means with mixing processes. Statistics and Probability Letters, 58(3):297–307, 2002.
[22] D. Gamarnik. Extension of the PAC framework to finite and countable Markov chains. IEEE Transactions on Information Theory, 49(1):338–345, 2003.
[23] M. Mohri and A. Rostamizadeh. Stability bounds for non-iid processes. In NIPS, 2008.
[24] M. Mohri and A. Rostamizadeh. Rademacher complexity bounds for non-i.i.d. processes. In NIPS, 2009.
[25] I. Steinwart and A. Christmann. Fast learning from non-i.i.d. observations. In NIPS, 2009.
[26] I. Steinwart, D. Hush, and C. Scovel. Learning from dependent observations. Journal of Multivariate Analysis, 100(1):175–194, 2009.
[27] D. McDonald, C. Shalizi, and M. Schervish. Estimating beta-mixing coefficients. In AISTATS, pages 516–524, 2011.
[28] T. Batu, L. Fortnow, R. Rubinfeld, W. D. Smith, and P. White. Testing that distributions are close. In FOCS, pages 259–269. IEEE, 2000.
[29] T. Batu, L. Fortnow, R. Rubinfeld, W. D. Smith, and P. White. Testing closeness of discrete distributions. Journal of the ACM (JACM), 60(1):4:2–4:25, 2013.
[30] N. Bhatnagar, A. Bogdanov, and E. Mossel. The computational complexity of estimating MCMC convergence time. In RANDOM, pages 424–435. Springer, 2011.
[31] D. Hsu, A. Kontorovich, and C. Szepesvári. Mixing time estimation in reversible Markov chains from a single sample path. CoRR, abs/1506.02903, 2015.
[32] J. Tropp. An introduction to matrix concentration inequalities. Foundations and Trends in Machine Learning, 2015.
[33] S. Bernstein. Sur l'extension du théorème limite du calcul des probabilités aux sommes de quantités dépendantes. Mathematische Annalen, 97:1–59, 1927.
[34] G. W. Stewart and J. Sun. Matrix perturbation theory. Academic Press, Boston, 1990.
[35] C. D. Meyer Jr. The role of the group generalized inverse in the theory of finite Markov chains. SIAM Review, 17(3):443–464, 1975.
[36] G. Cho and C. Meyer. Comparison of perturbation bounds for the stationary distribution of a Markov chain. Linear Algebra and its Applications, 335:137–150, 2001.
5,549 | 6,021 | Efficient Compressive Phase Retrieval
with Constrained Sensing Vectors
Sohail Bahmani, Justin Romberg
School of Electrical and Computer Engineering.
Georgia Institute of Technology
Atlanta, GA 30332
{sohail.bahmani,jrom}@ece.gatech.edu
Abstract
We propose a robust and efficient approach to the problem of compressive phase
retrieval in which the goal is to reconstruct a sparse vector from the magnitude
of a number of its linear measurements. The proposed framework relies on constrained sensing vectors and a two-stage reconstruction method that consists of
two standard convex programs that are solved sequentially.
In recent years, various methods are proposed for compressive phase retrieval, but
they have suboptimal sample complexity or lack robustness guarantees. The main
obstacle has been that there is no straightforward convex relaxations for the type
of structure in the target. Given a set of underdetermined measurements, there is a
standard framework for recovering a sparse matrix, and a standard framework for
recovering a low-rank matrix. However, a general, efficient method for recovering
a jointly sparse and low-rank matrix has remained elusive.
Deviating from the models with generic measurements, in this paper we show that
if the sensing vectors are chosen at random from an incoherent subspace, then the
low-rank and sparse structures of the target signal can be effectively decoupled.
We show that a recovery algorithm that consists of a low-rank recovery stage followed by a sparse recovery stage will produce an accurate estimate of the target
when the number of measurements is O(k log(d/k)), where k and d denote the sparsity level and the dimension of the input signal. We also evaluate the algorithm
through numerical simulation.
1 Introduction
1.1 Problem setting
The problem of Compressive Phase Retrieval (CPR) is generally stated as the problem of estimating a k-sparse vector x⋆ ∈ ℝ^d from noisy measurements of the form
y_i = |⟨a_i, x⋆⟩|² + z_i    (1)
for i = 1, 2, . . . , n, where a_i is the sensing vector and z_i denotes the additive noise. In this paper, we study the CPR problem with specific sensing vectors a_i of the form
a_i = Φᵀ w_i,    (2)
where Φ ∈ ℝ^{m×d} and w_i ∈ ℝ^m are known. In words, the measurement vectors live in a fixed low-dimensional subspace (i.e., the row space of Φ). These types of measurements can be applied in imaging systems that have control over how the scene is illuminated; examples include systems that use structured illumination with a spatial light modulator or a scattering medium [1, 2].
By a standard lifting of the signal x⋆ to X⋆ = x⋆ x⋆ᵀ, the quadratic measurements (1) can be expressed as
y_i = ⟨a_i a_iᵀ, X⋆⟩ + z_i = ⟨Φᵀ w_i w_iᵀ Φ, X⋆⟩ + z_i.    (3)
With the linear operators W and A defined as
W : B ↦ (⟨w_i w_iᵀ, B⟩)_{i=1}^n   and   A : X ↦ W(Φ X Φᵀ),
we can write the measurements compactly as
y = A(X⋆) + z.
Our goal is to estimate the sparse, rank-one, and positive semidefinite matrix X⋆ from the measurements (3), which also solves the CPR problem and provides an estimate for the sparse signal x⋆ up to the inevitable global phase ambiguity.
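For concreteness, the measurement model (1)-(2) can be simulated as follows (the problem sizes and noise scale are arbitrary illustrative choices, not values from the paper).

import numpy as np

rng = np.random.default_rng(0)
d, m, n, k = 64, 24, 72, 4          # illustrative sizes

# k-sparse target signal x_star.
x = np.zeros(d)
support = rng.choice(d, size=k, replace=False)
x[support] = rng.standard_normal(k)

Phi = rng.standard_normal((m, d)) / np.sqrt(m)   # plays the role of the RIP matrix
W = rng.standard_normal((n, m))                  # rows are the Gaussian w_i (A1)
A = W @ Phi                                      # row i equals a_i^T = w_i^T Phi
y = (A @ x) ** 2 + 1e-2 * rng.standard_normal(n) # measurements per (1)-(2)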
Assumptions. We make the following assumptions throughout the paper.
A1. The vectors w_i are independent and have the standard Gaussian distribution on ℝ^m: w_i ∼ N(0, I).
A2. The matrix Φ is a restricted isometry matrix for 2k-sparse vectors and for a constant δ_{2k} ∈ [0, 1]. Namely, it obeys
(1 − δ_{2k}) ‖x‖₂² ≤ ‖Φx‖₂² ≤ (1 + δ_{2k}) ‖x‖₂²,    (4)
for all 2k-sparse vectors x ∈ ℝ^d.
A3. The noise vector z is bounded as ‖z‖₂ ≤ ε.
As will be seen in Theorem 1 and its proof below, the Gaussian distribution imposed by the assumption A1 will be used merely to guarantee successful estimation of a rank-one matrix through trace norm minimization. However, other distributions (e.g., uniform distribution on the unit sphere) can also be used to obtain similar guarantees. Furthermore, the restricted isometry condition imposed by the assumption A2 is not critical and can be replaced by weaker assumptions. However, the guarantees obtained under these weaker assumptions usually require more intricate derivations, provide weaker noise robustness, and often do not hold uniformly for all potential target signals. Therefore, to keep the exposition simple and straightforward we assume (4) which is known to hold (with high probability) for various ensembles of random matrices (e.g., Gaussian, Rademacher, partial Fourier, etc). Because in many scenarios we have the flexibility of selecting Φ, the assumption (4) is realistic as well.
Notation. Let us first set the notation used throughout the paper. Matrices and vectors are denoted by bold capital and small letters, respectively. The set of positive integers less than or equal to n is denoted by [n]. The notation f = O(g) is used when f ≤ cg for some absolute constant c > 0. For any matrix M, the Frobenius norm, the nuclear norm, the entrywise ℓ₁-norm, and the largest entrywise absolute value of the entries are denoted by ‖M‖_F, ‖M‖_*, ‖M‖₁, and ‖M‖_∞, respectively. To indicate that a matrix M is positive semidefinite we write M ≽ 0.
1.2 Contributions
The main challenge in the CPR problem in its general formulation is to design an accurate estimator that has optimal sample complexity and is computationally tractable. In this paper we address this challenge in the special setting where the sensing vectors can be factored as (2). Namely, we propose an algorithm that
• provably produces an accurate estimate of the lifted target X⋆ from only n = O(k log(d/k)) measurements, and
• can be computed in polynomial time through efficient convex optimization methods.
1.3 Related work
Several papers including [3, 4, 5, 6, 7] have already studied the application of convex programming for (non-sparse) phase retrieval (PR) in various settings and have established estimation accuracy through different mathematical techniques. These phase retrieval methods attain nearly optimal sample complexities that scale with the dimension of the target signal up to a constant factor [4, 5, 6] or at most a logarithmic factor [3]. However, to the best of our knowledge, the existing methods for CPR either lack accuracy and robustness guarantees or have suboptimal sample complexities.
The problem of recovering a sparse signal from the magnitude of its subsampled Fourier transforms is cast in [8] as an ℓ₁-minimization with non-convex constraints. While [8] shows that a sufficient number of measurements would grow quadratically in k (i.e., the sparsity of the signal), the numerical simulations suggest that the non-convex method successfully estimates the sparse signal with only about k log(d/k) measurements. Another non-convex approach to CPR is considered in [9] which poses the problem as finding a k-sparse vector that minimizes the residual error, which takes a quartic form. A local search algorithm called GESPAR [10] is then applied to (approximate) the solution to the formulated sparsity-constrained optimization. This approach is shown to be effective through simulations, but it also lacks global convergence or statistical accuracy guarantees. An alternating minimization method for both PR and CPR is studied in [11]. This method is appealing in large scale problems because of computationally inexpensive iterations. More importantly, [11] proposes a specific initialization using which the alternating minimization method is shown to converge linearly in noise-free PR and CPR. However, the number of measurements required to establish this convergence is effectively quadratic in k. In [12] and [13] the ℓ₁-regularized form of the trace minimization
argmin_{X≽0} trace(X) + λ ‖X‖₁   subject to A(X) = y    (5)
is proposed for the CPR problem. The guarantees of [13] are based on the restricted isometry property of the sensing operator X ↦ [⟨a_i a_i^*, X⟩]_{i=1}^n for sparse matrices. In [12], however, the analysis is based on construction of a dual certificate through an adaptation of the golfing scheme [14]. Assuming standard Gaussian sensing vectors a_i and with an appropriate choice of the regularization parameter λ, it is shown in [12] that (5) solves the CPR when n = O(k² log d). Furthermore, this method fails to recover the target sparse and rank-one matrix if n is dominated by k². Estimation of simultaneously structured matrices through convex relaxations similar to (5) is also studied in [15] where it is shown that these methods do not attain optimal sample complexity. More recently, assuming that the sparse target has a Bernoulli-Gaussian distribution, a generalized approximate message passing framework is proposed in [16] to solve the CPR problem. Performance of this method is evaluated through numerical simulations for standard Gaussian sensing matrices which show the empirical phase transition for successful estimation occurs at n = O(k log(d/k)) and also the algorithms can have a significantly lower runtime compared to some of the competing algorithms including GESPAR [10] and CPRL [13]. The PhaseCode algorithm is proposed in [17] to solve the CPR problem with sensing vectors designed using sparse graphs and techniques adapted from coding theory. Although PhaseCode is shown to achieve the optimal sample complexity, it lacks robustness guarantees.
While preparing the final version of the current paper, we became aware of [18] which has independently proposed an approach similar to ours to address the CPR problem.
2 Main Results
2.1 Algorithm
We propose a two-stage algorithm outlined in Algorithm 1. Each stage of the algorithm is a convex program for which various efficient numerical solvers exist. In the first stage we solve (6) to obtain a low-rank matrix B̂ which is an estimator of the matrix
B⋆ = Φ X⋆ Φᵀ.
Then B̂ is used in the second stage of the algorithm as the measurements for a sparse estimation expressed by (7). The constraint of (7) depends on an absolute constant C > 0 that should be sufficiently large.
Algorithm 1:
input : the measurements y, the operator W, and the matrix Φ
output: the estimate X̂
1 Low-rank estimation stage:
  B̂ ← argmin_{B≽0} trace(B)   subject to ‖W(B) − y‖₂ ≤ ε    (6)
2 Sparse estimation stage:
  X̂ ← argmin_X ‖X‖₁   subject to ‖Φ X Φᵀ − B̂‖_F ≤ Cε/√n    (7)
Post-processing. The result of the low-rank estimation stage (6) is generally not rank-one. Similarly, the sparse estimation stage does not necessarily produce an X̂ that is k × k-sparse (i.e., has at most k nonzero rows and columns) and rank-one. In fact, since we have not imposed the positive semidefiniteness constraint (i.e., X ≽ 0) in (7), the estimate X̂ is not even guaranteed to be positive semidefinite (PSD). However, we can enforce the rank-one or the sparsity structure in post-processing steps simply by projecting the produced estimate on the set of rank-one or k × k-sparse PSD matrices. The simple but important observation is that projecting X̂ onto the desired sets at most doubles the estimation error. This fact is shown by Lemma 2 in Section 4 in a general setting.
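A sketch of such post-processing projections follows. The rank-one PSD step is the exact Frobenius-norm projection for a symmetric input; the k × k-sparse step shown here keeps the k highest-energy rows and columns, a simple heuristic rather than an exact joint projection.

import numpy as np

def project_rank1_psd(X):
    """Nearest (Frobenius) rank-one PSD matrix: keep the top eigenpair."""
    S = (X + X.T) / 2                       # symmetrize first
    evals, evecs = np.linalg.eigh(S)
    lam, v = evals[-1], evecs[:, -1]
    return max(lam, 0.0) * np.outer(v, v)

def project_kxk_sparse(X, k):
    """Keep the k rows/columns with the largest energy, zero out the rest."""
    scores = np.linalg.norm(X, axis=0) ** 2 + np.linalg.norm(X, axis=1) ** 2
    keep = np.argsort(scores)[-k:]
    Y = np.zeros_like(X)
    Y[np.ix_(keep, keep)] = X[np.ix_(keep, keep)]
    return Y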
Alternatives. There are alternative convex relaxations for the low-rank estimation and the sparse estimation stages of Algorithm 1. For example, (6) can be replaced by its regularized least squares analog
B̂ ← argmin_{B≽0} (1/2) ‖W(B) − y‖₂² + λ ‖B‖_*,
for an appropriate choice of the regularization parameter λ. Similarly, instead of (7) we can use an ℓ₁-regularized least squares. Furthermore, to perform the low-rank estimation and the sparse estimation we can use non-convex greedy type algorithms that typically have lower computational costs. For example, the low-rank estimation stage can be performed via the Wirtinger flow method proposed in [19]. Furthermore, various greedy compressive sensing algorithms such as the Iterative Hard Thresholding [20] and CoSaMP [21] can be used to solve the desired sparse estimation. To guarantee the accuracy of these compressive sensing algorithms, however, we might need to adjust the assumption A2 to have the restricted isometry property for ck-sparse vectors with c being some small positive integer.
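The two stages can be prototyped with a generic convex solver such as cvxpy; this is a sketch rather than the TFOCS implementation used in the experiments of Section 3, and the value of the constant C is an arbitrary assumption since the paper does not specify it.

import cvxpy as cp
import numpy as np

def two_stage_cpr(y, W, Phi, eps, C=3.0):
    """Sketch of Algorithm 1 with generic convex solvers.

    `C` is an unspecified absolute constant in the paper; 3.0 is an
    arbitrary illustrative choice.
    """
    n, m = W.shape
    # Stage 1: trace minimization over PSD matrices, constraint (6).
    B = cp.Variable((m, m), PSD=True)
    meas = cp.hstack([cp.sum(cp.multiply(np.outer(w, w), B)) for w in W])
    prob1 = cp.Problem(cp.Minimize(cp.trace(B)),
                       [cp.norm(meas - y, 2) <= eps])
    prob1.solve()
    B_hat = B.value
    # Stage 2: entrywise l1 minimization, constraint (7).
    d = Phi.shape[1]
    X = cp.Variable((d, d))
    prob2 = cp.Problem(cp.Minimize(cp.sum(cp.abs(X))),
                       [cp.norm(Phi @ X @ Phi.T - B_hat, 'fro')
                        <= C * eps / np.sqrt(n)])
    prob2.solve()
    return X.value

For instance, with the toy data generated earlier one could call X_hat = two_stage_cpr(y, W, Phi, eps=0.1), where 0.1 is a rough bound on ‖z‖₂ for that noise level.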
2.2 Accuracy guarantees
The following theorem shows that any solution of the proposed algorithm is an accurate estimator of X⋆.
Theorem 1. Suppose that the assumptions A1, A2, and A3 hold with a sufficiently small constant δ_{2k}. Then, there exist positive absolute constants C₁, C₂, and C₃ such that if
n ≥ C₁ m,    (8)
then any estimate X̂ of the Algorithm 1 obeys
‖X̂ − X⋆‖_F ≤ C₂ ε / √n,
for all rank-one and k × k-sparse matrices X⋆ ≽ 0 with probability exceeding 1 − e^{−C₃ n}.
The proof of Theorem 1 is straightforward and is provided in Section 4. The main idea is first to show the low-rank estimation stage produces an accurate estimate of B⋆. Because this stage can be viewed as a standard phase retrieval through lifting, we can simply use accuracy guarantees that are already established in the literature (e.g., [3, 6, 5]). In particular, we use [5, Theorem 2] which established an error bound that holds uniformly for all valid B⋆. Thus we can ensure that X⋆ is feasible in the sparse estimation stage. Then the accuracy of the sparse estimation stage can also be established by a simple adaptation of the analyses based on the restricted isometry property such as [22].
The dependence of n (i.e., the number of measurements) on k (i.e., the sparsity of the signal) is not explicit in Theorem 1. This dependence is absorbed in m which must be sufficiently large for Assumption A2 to hold. Considering a Gaussian matrix Φ, the following corollary gives a concrete example where the dependence of n on k through m is exposed.
Corollary 1. Suppose that the assumptions of Theorem 1 including (8) hold. Furthermore, suppose that Φ is a Gaussian matrix with iid N(0, 1/m) entries and
m ≥ c₁ k log(d/k),    (9)
for some absolute constant c₁ > 0. Then any estimate X̂ produced by Algorithm 1 obeys
‖X̂ − X⋆‖_F ≤ C₂ ε / √n,
for all rank-one and k × k-sparse matrices X⋆ ≽ 0 with probability exceeding 1 − 3e^{−c₂ m} for some constant c₂ > 0.
Proof. It is well-known that if Φ has iid N(0, 1/m) entries and we have (9) then (4) holds with high probability. For example, using a standard covering argument and a union bound [23] shows that if (9) holds for a sufficiently large constant c₁ > 0 then we have (4) for a sufficiently small constant δ_{2k} with probability exceeding 1 − 2e^{−cm} for some constant c > 0 that depends only on δ_{2k}. Therefore, Theorem 1 yields the desired result which holds with probability exceeding 1 − 2e^{−cm} − e^{−C₃ n} ≥ 1 − 3e^{−c₂ m} for some constant c₂ > 0 depending only on δ_{2k}.
3 Numerical Experiments
We evaluated the performance of Algorithm 1 through some numerical simulations. The low-rank estimation stage and the sparse estimation stage are implemented using the TFOCS package [24]. We considered the target k-sparse signal x⋆ to be in ℝ²⁵⁶ (i.e., d = 256). The support set of the target signal is selected uniformly at random and the entry values on this support are drawn independently from N(0, 1). The noise vector z is also Gaussian with independent N(0, 10⁻⁴) entries. The operator W and the matrix Φ are drawn from Gaussian ensembles as described in Corollary 1. We measured the relative error ‖X̂ − X⋆‖_F / ‖X⋆‖_F achieved by the compared methods over 100 trials with sparsity level (i.e., k) varying in the set {2, 4, 6, . . . , 20}.
In the first experiment, for each value of k, the pair (m, n) that determines the size of W and Φ is selected from {(8k, 24k), (8k, 32k), (12k, 36k), (12k, 48k), (16k, 48k)}. Figure 1 illustrates the 0.9 quantiles of the relative error versus k for the mentioned choices of m.
In the second experiment we compared the performance of Algorithm 1 to the convex optimization methods that do not exploit the structure of the sensing vectors. The setup for this experiment is the same as in the first experiment except for the size of W and Φ; we chose m = 2k⌈1 + log(d/k)⌉ and n = 3m, where ⌈r⌉ denotes the smallest integer greater than r. Figure 2 illustrates the 0.9 quantiles of the measured relative errors for Algorithm 1, the semidefinite program (5) for λ = 0 and λ = 0.2, and the ℓ₁-minimization
argmin_X ‖X‖₁   subject to A(X) = y,
Figure 1: The empirical 0.9 quantile of the relative estimation error vs. sparsity for various choices
of m and n with d = 256.
Figure 2: The empirical 0.9 quantile of the relative estimation error vs. sparsity
for Algorithm
1
and different trace- and/or `1 - minimization methods with d = 256, m = 2k 1 + log kd , and
n = 3m.
which are denoted by 2-stage, SDP, SDP+`1 , and `1 , respectively. The SDP-based method did not
perform significantly different for other values of ? in our complementary simulations. The relative
error for each trial is also overlaid in Figure 2 visualize its empirical distribution. The empirical
performance of the algorithms are
in agreement with the theoretical results. Namely in a regime
where n = O (m) = O k log kd , Algorithm 1 can produce accurate estimates whereas while the
other approaches fail in this regime. The SDP and SDP+`1 show nearly identical performance. The
`1 -minimization, however, competes with Algorithm 1 for small values of k. This observation
can be
explained intuitively by the fact that the `1 -minimization succeeds with n = O k 2 measurements
which for small values of k can be sufficiently close to the considered n = 3 2k 1 + log kd
measurements.
4 Proofs
Proof of Theorem 1. Clearly, $B^\star = \Phi X^\star \Phi^T$ is feasible in (6) because of A3. Therefore, we can show that any solution $\widehat{B}$ of (6) accurately estimates $B^\star$ using existing results on nuclear-norm minimization. In particular, we can invoke [5, Theorem 2 and Section 4.3], which guarantees that for some positive absolute constants $C_1$, $C_2'$, and $C_3$, if (8) holds then
$$\big\|\widehat{B} - B^\star\big\|_F \leq \frac{C_2'\,\varepsilon}{\sqrt{n}}$$
holds for all valid $B^\star$, and thereby for all valid $X^\star$, with probability exceeding $1 - e^{-C_3 n}$. Therefore, with $C = C_2'$, the target matrix $X^\star$ is feasible in (7). Now, it suffices to show that the sparse estimation stage can produce an accurate estimate of $X^\star$. Recall that by A2, the matrix $\Phi$ is restricted isometry for $2k$-sparse vectors. Let $X$ be a matrix that is $2k \times 2k$-sparse, i.e., a matrix whose entries except for some $2k \times 2k$ submatrix are all zeros. Applying (4) to the columns of $X$ and adding the inequalities yields
$$(1 - \delta_{2k})\,\|X\|_F^2 \leq \|\Phi X\|_F^2 \leq (1 + \delta_{2k})\,\|X\|_F^2. \tag{10}$$
Because the columns of $X^T\Phi^T$ are also $2k$-sparse, we can repeat the same argument and obtain
$$(1 - \delta_{2k})\,\big\|X^T\Phi^T\big\|_F^2 \leq \big\|\Phi X^T \Phi^T\big\|_F^2 \leq (1 + \delta_{2k})\,\big\|X^T\Phi^T\big\|_F^2. \tag{11}$$
Using the facts that $\|X^T\Phi^T\|_F = \|\Phi X\|_F$ and $\|\Phi X^T \Phi^T\|_F = \|\Phi X \Phi^T\|_F$, the inequalities (10) and (11) imply that
$$(1 - \delta_{2k})^2\,\|X\|_F^2 \leq \big\|\Phi X \Phi^T\big\|_F^2 \leq (1 + \delta_{2k})^2\,\|X\|_F^2. \tag{12}$$
The proof proceeds with an adaptation of the arguments used to prove the accuracy of $\ell_1$-minimization in compressive sensing based on the restricted isometry property (see, e.g., [22]). Let $E = \widehat{X} - X^\star$. Furthermore, let $S_0 \subseteq [d] \times [d]$ denote the support set of the $k \times k$-sparse target $X^\star$. Define $E_0$ to be the $d \times d$ matrix that is identical to $E$ over the index set $S_0$ and zero elsewhere. By optimality of $\widehat{X}$ and feasibility of $X^\star$ in (7) we have
$$\|X^\star\|_1 \geq \big\|\widehat{X}\big\|_1 = \|X^\star + E - E_0 + E_0\|_1 \geq \|X^\star + E - E_0\|_1 - \|E_0\|_1 = \|X^\star\|_1 + \|E - E_0\|_1 - \|E_0\|_1,$$
where the last equality follows from the fact that $X^\star$ and $E - E_0$ have disjoint supports. Thus, we have
$$\|E - E_0\|_1 \leq \|E_0\|_1 \leq k\,\|E_0\|_F. \tag{13}$$
Now consider a decomposition of $E - E_0$ as the sum
$$E - E_0 = \sum_{j=1}^{J} E_j, \tag{14}$$
such that for $j \geq 0$ the $d \times d$ matrices $E_j$ have disjoint support sets of size $k \times k$, except perhaps for the last few matrices, which might have smaller supports. More importantly, the partitioning matrices $E_j$ are chosen to have decreasing Frobenius norms (i.e., $\|E_j\|_F \geq \|E_{j+1}\|_F$) for $j \geq 1$. We have
$$\Big\|\sum_{j=2}^{J} E_j\Big\|_F \leq \sum_{j=2}^{J}\|E_j\|_F \leq \frac{1}{k}\sum_{j=2}^{J}\|E_{j-1}\|_1 \leq \frac{1}{k}\,\|E - E_0\|_1 \leq \|E_0\|_F \leq \|E_0 + E_1\|_F, \tag{15}$$
where the chain of inequalities follows from the triangle inequality, the fact that $\|E_j\|_\infty \leq \frac{1}{k^2}\|E_{j-1}\|_1$ by construction (so that $\|E_j\|_F \leq k\,\|E_j\|_\infty \leq \frac{1}{k}\|E_{j-1}\|_1$), the fact that the matrices $E_j$ have disjoint supports and satisfy (14), the bound (13), and the fact that $E_0$ and $E_1$ are orthogonal.
Furthermore, we have
$$\big\|\Phi(E_0 + E_1)\Phi^T\big\|_F^2 = \Big\langle \Phi(E_0 + E_1)\Phi^T,\ \Phi\Big(E - \sum_{j=2}^{J} E_j\Big)\Phi^T\Big\rangle \leq \big\|\Phi(E_0+E_1)\Phi^T\big\|_F\,\big\|\Phi E\,\Phi^T\big\|_F + \sum_{i=0}^{1}\sum_{j=2}^{J}\Big|\big\langle \Phi E_i \Phi^T,\ \Phi E_j \Phi^T\big\rangle\Big|, \tag{16}$$
where the first term is obtained by the Cauchy-Schwarz inequality and the summation is obtained by the triangle inequality. Because $E = \widehat{X} - X^\star$ by definition, the triangle inequality and the fact that $\widehat{X}$ and $X^\star$ are feasible in (7) imply that
$$\big\|\Phi E\,\Phi^T\big\|_F \leq \big\|\Phi \widehat{X}\Phi^T - \widehat{B}\big\|_F + \big\|\Phi X^\star \Phi^T - \widehat{B}\big\|_F \leq \frac{2C\varepsilon}{\sqrt{n}}.$$
Furthermore, Lemma 1 below, which is adapted from [22, Lemma 2.1], guarantees that for $i \in \{0, 1\}$ and $j \geq 2$ we have $\big|\langle \Phi E_i \Phi^T, \Phi E_j \Phi^T\rangle\big| \leq 2\delta_{2k}\,\|E_i\|_F\,\|E_j\|_F$. Therefore, we obtain
$$\begin{aligned}
(1 - \delta_{2k})^2\,\|E_0 + E_1\|_F^2 &\leq \big\|\Phi(E_0 + E_1)\Phi^T\big\|_F^2\\
&\leq \frac{2C\varepsilon}{\sqrt{n}}\,\big\|\Phi(E_0 + E_1)\Phi^T\big\|_F + 2\delta_{2k}\sum_{i=0}^{1}\sum_{j=2}^{J}\|E_i\|_F\,\|E_j\|_F\\
&\leq \frac{2C\varepsilon}{\sqrt{n}}\,(1 + \delta_{2k})\,\|E_0 + E_1\|_F + 2\delta_{2k}\,(\|E_0\|_F + \|E_1\|_F)\,\|E_0 + E_1\|_F\\
&\leq \|E_0 + E_1\|_F\,\Big(\frac{2C\varepsilon}{\sqrt{n}}\,(1 + \delta_{2k}) + 2\sqrt{2}\,\delta_{2k}\,\|E_0 + E_1\|_F\Big),
\end{aligned}$$
where the chain of inequalities follows from the lower bound in (12), the bound (16), the upper bound in (12), the bound (15), and the fact that $\|E_0\|_F + \|E_1\|_F \leq \sqrt{2}\,\|E_0 + E_1\|_F$. If $\delta_{2k} < \sqrt{1+\sqrt{2}}\,\big(\sqrt{1+\sqrt{2}} - \sqrt{2}\big) \approx 0.216$, then we have $\mu := (1 - \delta_{2k})^2 - 2\sqrt{2}\,\delta_{2k} > 0$ and thus
$$\|E_0 + E_1\|_F \leq \frac{2C\,(1 + \delta_{2k})\,\varepsilon}{\mu\,\sqrt{n}}.$$
Adding the above inequality to (13) and applying the triangle inequality then yields the desired result.
Lemma 1. Let $\Phi$ be a matrix obeying (4). Then for any pair of $k \times k$-sparse matrices $X$ and $X'$ with disjoint supports we have
$$\Big|\big\langle \Phi X \Phi^T,\ \Phi X' \Phi^T\big\rangle\Big| \leq 2\delta_{2k}\,\|X\|_F\,\|X'\|_F.$$
Proof. Suppose that $X$ and $X'$ have unit Frobenius norm. Using the identity
$$\big\langle \Phi X \Phi^T,\ \Phi X' \Phi^T\big\rangle = \tfrac{1}{4}\Big(\big\|\Phi (X + X') \Phi^T\big\|_F^2 - \big\|\Phi (X - X') \Phi^T\big\|_F^2\Big)$$
and the fact that $X$ and $X'$ have disjoint supports, it follows from (12) that
$$-2\delta_{2k} = \frac{(1-\delta_{2k})^2 - (1+\delta_{2k})^2}{2} \leq \big\langle \Phi X \Phi^T,\ \Phi X' \Phi^T\big\rangle \leq \frac{(1+\delta_{2k})^2 - (1-\delta_{2k})^2}{2} = 2\delta_{2k}.$$
The general result follows immediately, as the desired inequality is homogeneous in the Frobenius norms of $X$ and $X'$.
Lemma 2 (Projected estimator). Let $S$ be a closed nonempty subset of a normed vector space $(V, \|\cdot\|)$. Suppose that for $v^\star \in S$ we have an estimator $\widehat{v} \in V$, not necessarily in $S$, that obeys $\|\widehat{v} - v^\star\| \leq \epsilon$. If $\widetilde{v}$ denotes a projection of $\widehat{v}$ onto $S$, then we have $\|\widetilde{v} - v^\star\| \leq 2\epsilon$.

Proof. By definition $\widetilde{v} \in \operatorname{argmin}_{v\in S}\|v - \widehat{v}\|$. Therefore, because $v^\star \in S$ we have
$$\|\widetilde{v} - v^\star\| \leq \|\widehat{v} - v^\star\| + \|\widetilde{v} - \widehat{v}\| \leq 2\,\|\widehat{v} - v^\star\| \leq 2\epsilon.$$
Acknowledgements
This work was supported by ONR grant N00014-11-1-0459, and NSF grants CCF-1415498 and
CCF-1422540.
References
[1] Jacopo Bertolotti, Elbert G. van Putten, Christian Blum, Ad Lagendijk, Willem L. Vos, and Allard P. Mosk. Non-invasive imaging through opaque scattering layers. Nature, 491(7423):232-234, Nov. 2012.
[2] Antoine Liutkus, David Martina, Sébastien Popoff, Gilles Chardon, Ori Katz, Geoffroy Lerosey, Sylvain Gigan, Laurent Daudet, and Igor Carron. Imaging with nature: Compressive imaging using a multiply scattering medium. Scientific Reports, volume 4, article no. 5552, Jul. 2014.
[3] Emmanuel J. Candès, Thomas Strohmer, and Vladislav Voroninski. PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. Communications on Pure and Applied Mathematics, 66(8):1241-1274, 2013.
[4] Emmanuel J. Candès and Xiaodong Li. Solving quadratic equations via PhaseLift when there are about as many equations as unknowns. Foundations of Computational Mathematics, 14(5):1017-1026, 2014.
[5] R. Kueng, H. Rauhut, and U. Terstiege. Low rank matrix recovery from rank one measurements. Applied and Computational Harmonic Analysis, 2015. In press. Preprint arXiv:1410.6913 [cs.IT].
[6] Joel A. Tropp. Convex recovery of a structured signal from independent random linear measurements. Preprint arXiv:1405.1102 [cs.IT], 2014.
[7] Irène Waldspurger, Alexandre d'Aspremont, and Stéphane Mallat. Phase recovery, MaxCut and complex semidefinite programming. Mathematical Programming, 149(1-2):47-81, 2015.
[8] Matthew L. Moravec, Justin K. Romberg, and Richard G. Baraniuk. Compressive phase retrieval. In Proceedings of SPIE Wavelets XII, volume 6701, pages 670120 1-11, 2007.
[9] Yoav Shechtman, Yonina C. Eldar, Alexander Szameit, and Mordechai Segev. Sparsity based subwavelength imaging with partially incoherent light via quadratic compressed sensing. Optics Express, 19(16):14807-14822, Aug. 2011.
[10] Yoav Shechtman, Amir Beck, and Yonina C. Eldar. GESPAR: Efficient phase retrieval of sparse signals. Signal Processing, IEEE Transactions on, 62(4):928-938, Feb. 2014.
[11] Praneeth Netrapalli, Prateek Jain, and Sujay Sanghavi. Phase retrieval using alternating minimization. In Advances in Neural Information Processing Systems 26 (NIPS 2013), pages 2796-2804, 2013.
[12] Xiaodong Li and Vladislav Voroninski. Sparse signal recovery from quadratic measurements via convex programming. SIAM Journal on Mathematical Analysis, 45(5):3019-3033, 2013.
[13] Henrik Ohlsson, Allen Yang, Roy Dong, and Shankar Sastry. CPRL - an extension of compressive sensing to the phase retrieval problem. In Advances in Neural Information Processing Systems 25 (NIPS 2012), pages 1367-1375, 2012.
[14] David Gross. Recovering low-rank matrices from few coefficients in any basis. Information Theory, IEEE Transactions on, 57(3):1548-1566, Mar. 2011.
[15] Samet Oymak, Amin Jalali, Maryam Fazel, Yonina Eldar, and Babak Hassibi. Simultaneously structured models with application to sparse and low-rank matrices. Information Theory, IEEE Transactions on, 61(5):2886-2908, 2015.
[16] P. Schniter and S. Rangan. Compressive phase retrieval via generalized approximate message passing. Signal Processing, IEEE Transactions on, 63(4):1043-1055, February 2015.
[17] Ramtin Pedarsani, Kangwook Lee, and Kannan Ramchandran. Phasecode: Fast and efficient compressive phase retrieval based on sparse-graph codes. In Communication, Control, and Computing (Allerton), 52nd Annual Allerton Conference on, pages 842-849, Sep. 2014. Extended preprint arXiv:1408.0034 [cs.IT].
[18] Mark Iwen, Aditya Viswanathan, and Yang Wang. Robust sparse phase retrieval made easy. Applied and Computational Harmonic Analysis, 2015. In press. Preprint arXiv:1410.5295 [math.NA].
[19] Emmanuel J. Candès, Xiaodong Li, and Mahdi Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and algorithms. Information Theory, IEEE Transactions on, 61(4):1985-2007, Apr. 2015.
[20] Thomas Blumensath and Mike E. Davies. Iterative hard thresholding for compressed sensing. Applied and Computational Harmonic Analysis, 27(3):265-274, 2009.
[21] Deanna Needell and Joel A. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis, 26(3):301-321, 2009.
[22] Emmanuel J. Candès. The restricted isometry property and its implications for compressed sensing. Comptes Rendus Mathematique, 346(9-10):589-592, 2008.
[23] Richard Baraniuk, Mark Davenport, Ronald DeVore, and Michael Wakin. A simple proof of the restricted isometry property for random matrices. Constructive Approximation, 28(3):253-263, 2008.
[24] Stephen R. Becker, Emmanuel J. Candès, and Michael C. Grant. Templates for convex cone problems with applications to sparse signal recovery. Mathematical Programming Computation, 3(3):165-218, 2011.
Unified View of Matrix Completion under General
Structural Constraints
Suriya Gunasekar
UT at Austin, USA
[email protected]
Arindam Banerjee
UMN Twin Cities, USA
[email protected]
Joydeep Ghosh
UT at Austin, USA
[email protected]
Abstract
Matrix completion problems have been widely studied under special low dimensional structures such as low rank or structure induced by decomposable norms.
In this paper, we present a unified analysis of matrix completion under general
low-dimensional structural constraints induced by any norm regularization. We
consider two estimators for the general problem of structured matrix completion,
and provide unified upper bounds on the sample complexity and the estimation
error. Our analysis relies on generic chaining, and we establish two intermediate
results of independent interest: (a) in characterizing the size or complexity of low
dimensional subsets in high dimensional ambient space, a certain partial complexity measure encountered in the analysis of matrix completion problems is characterized in terms of a well understood complexity measure of Gaussian widths, and
(b) it is shown that a form of restricted strong convexity holds for matrix completion problems under general norm regularization. Further, we provide several
non-trivial examples of structures included in our framework, notably including
the recently proposed spectral k-support norm.
1 Introduction
The task of completing the missing entries of a matrix from an incomplete subset of (potentially
noisy) entries is encountered in many applications including recommendation systems, data imputation, covariance matrix estimation, and sensor localization, among others. Traditionally, ill-posed high dimensional estimation problems, where the number of parameters to be estimated is much higher than the number of observations, have been extensively studied in the recent literature. However, matrix completion problems are particularly ill-posed, as the observations are both limited (high dimensional) and extremely localized, i.e., the observations consist of individual matrix entries. The localized measurement model, in contrast to random Gaussian or sub-Gaussian measurements, poses additional complications in general high dimensional estimation.

For well-posed estimation in high dimensional problems including matrix completion, it is imperative that low dimensional structural constraints are imposed on the target. For matrix completion, the special case of the low-rank constraint has been widely studied. Several existing works propose tractable estimators with near-optimal recovery guarantees for (approximate) low-rank matrix completion [8, 7, 28, 26, 18, 19, 22, 11, 20, 21]. A recent work [16] addresses the extension to structures with decomposable norm regularization. However, the scope of matrix completion extends to low dimensional structures far beyond simple low-rankness or decomposable norm structures.
In this paper, we consider a unified statistical analysis of matrix completion under a general set of
low dimensional structures that are induced by any suitable norm regularization. We provide statistical analysis of two generalized matrix completion estimators, the constrained norm minimizer, and
the generalized matrix Dantzig selector (Section 2.2). The main results in the paper (Theorem 1a-1b) provide unified upper bounds on the sample complexity and estimation error of these estimators
for matrix completion under any norm regularization. Existing results on matrix completion with
low rank or other decomposable structures can be obtained as special cases of our general results.
Our unified analysis of sample complexity is motivated by recent work on high dimensional estimation using global (sub) Gaussian measurements [10, 1, 35, 3, 37, 5]. A key ingredient in the recovery analysis of high dimensional estimation involves establishing a certain variation of Restricted
Isometry Property (RIP) [9] of the measurement operator. It has been shown that such properties
are satisfied by Gaussian and sub-Gaussian measurement operators with high probability. Unfortunately, as has been noted before by Candès et al. [8], owing to highly localized measurements,
such conditions are not satisfied in the matrix completion problem, and the existing results based on
global (sub) Gaussian measurements are not directly applicable. In fact, a key question we consider
is: given the radically limited measurement model in matrix completion, by how much would the
sample complexity of estimation increase beyond the known sample complexity bounds for global
(sub) Gaussian measurements? Our results upper bound the sample complexity for matrix completion to within only a $\log d$ factor of that for global (sub) Gaussian measurements [10, 3, 5].
While the result is known for low rank matrix completion using nuclear norm minimization [26, 20],
with a careful use of generic chaining, we show that the log d factor suffices for structures induced
by any norm! As a key intermediate result, we show that a useful form of restricted strong convexity
(RSC) [27] holds for the localized measurements encountered in matrix completion under general
norm regularized structures. The result substantially generalizes existing RSC results for matrix
completion under the special cases of nuclear norm and decomposable norm regularization [26, 16].
For our analysis, we use tools from generic chaining [33] to characterize the main results (Theorem 1a-1b) in terms of the Gaussian width (Definition 1) of certain error sets. Gaussian widths
provide a powerful geometric characterization for quantifying the complexity of a structured low
dimensional subset in a high dimensional ambient space. Such a unified characterization in terms
of Gaussian width has the advantage that numerous tools have been developed in the literature for
bounding the Gaussian width for structured sets, and this literature can be readily leveraged to derive
new recovery guarantees for matrix completion under suitable structural constraints (Appendix D.2).
In addition to the theoretical elegance of such a unified framework, identifying useful but potentially
non-decomposable low dimensional structures is of significant practical interest. The broad class
of structures enforced through symmetric convex bodies and symmetric atomic sets [10] can be
analyzed under this paradigm (Section 2.1). Such specialized structures can potentially capture the
constraints in certain applications better than simple low-rankness. In particular, we discuss in detail a non-trivial example, the spectral k-support norm introduced by McDonald et al. [25].
To summarize the key contributions of the paper:
• Theorem 1a-1b provide unified upper bounds on sample complexity and estimation error for matrix completion estimators using general norm regularization: a substantial generalization of the existing results on matrix completion under structural constraints.
• Theorem 1a is applied to derive statistical results for the special case of matrix completion under spectral k-support norm regularization.
• An intermediate result, Theorem 5, shows that under any norm regularization, a form of Restricted Strong Convexity (RSC) holds in the matrix completion setting with extremely localized measurements. Further, a certain partial measure of the complexity of a set is encountered in the matrix completion analysis (12). Another intermediate result, Theorem 2, provides bounds on the partial complexity measures in terms of a better understood complexity measure, the Gaussian width. These intermediate results are of independent interest beyond the scope of the paper.
Notations and Preliminaries
Indexes $i, j$ are typically used to index rows and columns of matrices, respectively, and index $k$ is used to index the observations. $e_i, e_j, e_k$, etc. denote the standard basis in appropriate dimensions (for brevity we omit the explicit dependence on the dimension unless necessary). The notations $G$ and $g$ are used to denote a matrix and a vector, respectively, with independent standard Gaussian random variables. $\mathbb{P}(.)$ and $\mathbb{E}(.)$ denote the probability of an event and the expectation of a random variable, respectively. Given an integer $N$, let $[N] = \{1, 2, \ldots, N\}$. The Euclidean norm in a vector space is denoted $\|x\|_2 = \sqrt{\langle x, x\rangle}$. For a matrix $X$ with singular values $\sigma_1 \geq \sigma_2 \geq \ldots$, common norms include the Frobenius norm $\|X\|_F = \sqrt{\sum_i \sigma_i^2}$, the nuclear norm $\|X\|_* = \sum_i \sigma_i$, the spectral norm $\|X\|_{op} = \sigma_1$, and the maximum norm $\|X\|_\infty = \max_{ij}|X_{ij}|$. Also let $\mathbb{S}^{d_1 d_2 - 1} = \{X \in \mathbb{R}^{d_1\times d_2} : \|X\|_F = 1\}$ and $\mathcal{B}_{d_1 d_2} = \{X \in \mathbb{R}^{d_1\times d_2} : \|X\|_F \leq 1\}$. Finally, given a norm $\|\cdot\|$ defined on a vector space $\mathcal{V}$, its dual norm is given by $\|X\|^* = \sup_{\|Y\|\leq 1}\langle X, Y\rangle$.
Definition 1 (Gaussian Width). The Gaussian width of a set $S \subseteq \mathbb{R}^{d_1\times d_2}$ is a widely studied measure of the complexity of a subset in a high dimensional ambient space and is given by:
$$w_G(S) = \mathbb{E}_G \sup_{X\in S}\,\langle X, G\rangle, \tag{1}$$
where recall that $G$ is a matrix of independent standard Gaussian random variables. Some key results on Gaussian width are discussed in Appendix D.2.
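Since $w_G(S)$ is an expectation of a supremum, it can be estimated by Monte Carlo whenever the supremum over $S$ can be evaluated (exactly or approximately) for a fixed $G$. The sketch below is our illustration; the oracle `sup_over_S` is a placeholder that the user must supply for the set of interest.

```python
import numpy as np

def gaussian_width_mc(sup_over_S, shape, n_trials=200, seed=0):
    """Monte Carlo estimate of w_G(S) = E_G sup_{X in S} <X, G>.
    `sup_over_S(G)` must return (an approximation of) sup_{X in S} <X, G>."""
    rng = np.random.default_rng(seed)
    vals = [sup_over_S(rng.standard_normal(shape)) for _ in range(n_trials)]
    return float(np.mean(vals))

# Example: for S the unit Frobenius ball, sup_{||X||_F <= 1} <X, G> = ||G||_F,
# so the estimate should be close to sqrt(d1 * d2).
w_hat = gaussian_width_mc(lambda G: np.linalg.norm(G), shape=(20, 30))
```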
Definition 2 (Sub-Gaussian Random Variable [36]). The sub-Gaussian norm of a random variable $X$ is given by $\|X\|_{\psi_2} = \sup_{p\geq 1} p^{-1/2}\,(\mathbb{E}|X|^p)^{1/p}$. $X$ is said to be $b$-sub-Gaussian if $\|X\|_{\psi_2} \leq b$. Equivalently, $X$ is sub-Gaussian if one of the following conditions is satisfied for some constants $k_1$, $k_2$, and $k_3$ [Lemma 5.5 of [36]]:
(1) $\forall p \geq 1$, $(\mathbb{E}|X|^p)^{1/p} \leq b\sqrt{p}$;  (2) $\forall t > 0$, $\mathbb{P}(|X| > t) \leq e^{1 - t^2/(k_1^2 b^2)}$;
(3) $\mathbb{E}\big[e^{k_2 X^2/b^2}\big] \leq e$; or  (4) if $\mathbb{E}X = 0$, then $\forall s > 0$, $\mathbb{E}\big[e^{sX}\big] \leq e^{k_3 s^2 b^2/2}$.
Definition 3 (Restricted Strong Convexity (RSC)). A function $\mathcal{L}$ is said to satisfy Restricted Strong Convexity (RSC) at $\theta$ with respect to a subset $\mathcal{S}$ if, for some RSC parameter $\kappa_{\mathcal{L}} > 0$,
$$\forall \Delta \in \mathcal{S},\quad \mathcal{L}(\theta + \Delta) - \mathcal{L}(\theta) - \langle \nabla\mathcal{L}(\theta), \Delta\rangle \geq \kappa_{\mathcal{L}}\,\|\Delta\|_F^2. \tag{2}$$
Definition 4 (Spikiness Ratio [26]). For $X \in \mathbb{R}^{d_1\times d_2}$, a measure of the "spikiness" is given by:
$$\alpha_{sp}(X) = \frac{\sqrt{d_1 d_2}\,\|X\|_\infty}{\|X\|_F}. \tag{3}$$
Definition 5 (Norm Compatibility Constant [27]). The compatibility constant of a norm $\mathcal{R}: \mathcal{V}\to\mathbb{R}$ under a closed convex cone $\mathcal{C}\subseteq\mathcal{V}$ is defined as follows:
$$\Psi_{\mathcal{R}}(\mathcal{C}) = \sup_{X\in\mathcal{C}\setminus\{0\}}\frac{\mathcal{R}(X)}{\|X\|_F}. \tag{4}$$
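The spikiness ratio is straightforward to compute; the short check below (our illustration) confirms its two extremes: a flat matrix attains the minimum value $1$, while a single-spike matrix attains the maximum $\sqrt{d_1 d_2}$.

```python
import numpy as np

def spikiness_ratio(X):
    """alpha_sp(X) = sqrt(d1 * d2) * ||X||_inf / ||X||_F, per Definition 4."""
    d1, d2 = X.shape
    return np.sqrt(d1 * d2) * np.abs(X).max() / np.linalg.norm(X)

assert np.isclose(spikiness_ratio(np.ones((4, 5))), 1.0)   # maximally flat
E = np.zeros((4, 5)); E[0, 0] = 1.0
assert np.isclose(spikiness_ratio(E), np.sqrt(20))         # single spike
```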
2 Structured Matrix Completion

Denote the ground truth target matrix as $\Theta^\star \in \mathbb{R}^{d_1\times d_2}$; let $d = d_1 + d_2$. In noisy matrix completion, observations consist of individual entries of $\Theta^\star$ observed through an additive noise channel.

Sub-Gaussian Noise: Given a list of independently sampled standard bases $\Omega = \{E_k = e_{i_k}e_{j_k}^\top : i_k \in [d_1],\ j_k \in [d_2]\}$, with potential duplicates, observations $(y_k) \in \mathbb{R}^{|\Omega|}$ are given by:
$$y_k = \langle \Theta^\star, E_k\rangle + \sigma\xi_k, \quad\text{for } k = 1, 2, \ldots, |\Omega|, \tag{5}$$
where $\xi \in \mathbb{R}^{|\Omega|}$ is a noise vector of independent sub-Gaussian random variables with $\mathbb{E}[\xi_k] = 0$ and $\|\xi_k\|_{\psi_2} = 1$ (recall $\|\cdot\|_{\psi_2}$ from Definition 2), and $\sigma^2$ is the scaled variance of the noise per observation (note $\operatorname{Var}(\xi_k) \leq \text{constant}$). Also, without loss of generality, assume the normalization $\|\Theta^\star\|_F = 1$.

Uniform Sampling: Assume that the entries in $\Omega$ are drawn independently and uniformly:
$$E_k \sim \operatorname{uniform}\{e_i e_j^\top : i \in [d_1],\ j \in [d_2]\}, \quad\text{for } E_k \in \Omega. \tag{6}$$
Given $\Omega$, define the linear operator $P_\Omega : \mathbb{R}^{d_1\times d_2} \to \mathbb{R}^{\Omega}$ as follows ($e_k \in \mathbb{R}^{|\Omega|}$):
$$P_\Omega(X) = \sum_{k=1}^{|\Omega|}\langle X, E_k\rangle\, e_k. \tag{7}$$
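A minimal sketch of the sampling scheme (6) and the operator (7) follows, together with the adjoint $P^*_\Omega$ that appears later in the estimators; the function names are ours. Since sampling is with replacement, duplicate indices must be accumulated in the adjoint.

```python
import numpy as np

def sample_omega(d1, d2, n_obs, rng):
    """Draw Omega per (6): n_obs index pairs, uniform over [d1] x [d2],
    independently and with replacement."""
    return rng.integers(0, d1, n_obs), rng.integers(0, d2, n_obs)

def P_Omega(X, rows, cols):
    """P_Omega(X): stack the observed entries <X, e_i e_j^T> into R^|Omega|."""
    return X[rows, cols]

def P_Omega_adjoint(y, rows, cols, d1, d2):
    """Adjoint P_Omega^*: scatter y back into a d1 x d2 matrix so that
    <P_Omega(X), y> = <X, P_Omega^*(y)>; duplicates are accumulated."""
    out = np.zeros((d1, d2))
    np.add.at(out, (rows, cols), y)
    return out
```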
Structural Constraints: For matrix completion with $|\Omega| < d_1 d_2$, low dimensional structural constraints on $\Theta^\star$ are necessary for well-posedness. We consider a generalized constraint setting wherein, for some low-dimensional model space $\mathcal{M}$, $\Theta^\star \in \mathcal{M}$ is enforced through a surrogate norm regularizer $\mathcal{R}(.)$. We make no further assumptions on $\mathcal{R}$ other than it being a norm in $\mathbb{R}^{d_1\times d_2}$.

Low Spikiness: In matrix completion under the uniform sampling model, further restrictions on $\Theta^\star$ (beyond low dimensional structure) are required to ensure that the most informative entries of the matrix are observed with high probability [8]. Early work assumed stringent matrix incoherence conditions for low-rank completion to preclude such matrices [7, 18, 19], while more recent work [11, 26] relaxes these assumptions to a more intuitive restriction on the spikiness ratio, defined in (3). However, under this relaxation only approximate recovery is typically guaranteed in the low-noise regime, as opposed to near exact recovery under incoherence assumptions.

Assumption 1 (Spikiness Ratio). There exists $\alpha^\star > 0$ such that
$$\|\Theta^\star\|_\infty = \alpha_{sp}(\Theta^\star)\,\frac{\|\Theta^\star\|_F}{\sqrt{d_1 d_2}} \leq \frac{\alpha^\star}{\sqrt{d_1 d_2}}.$$
2.1 Special Cases and Applications

We briefly introduce some interesting examples of structural constraints with practical applications.

Example 1 (Low Rank and Decomposable Norms). Low-rankness is the most common structure used in many matrix estimation problems including collaborative filtering, PCA, spectral clustering, etc. Convex estimators using nuclear norm $\|\cdot\|_*$ regularization have been widely studied statistically [8, 7, 28, 26, 18, 19, 22, 11, 20, 21]. A recent work [16] extends the analysis of low rank matrix completion to general decomposable norms, i.e. norms with $\forall X, Y \in (\mathcal{M}, \bar{\mathcal{M}}^\perp)$, $\mathcal{R}(X+Y) = \mathcal{R}(X) + \mathcal{R}(Y)$.
Example 2 (Spectral k?support Norm). A non?trivial and significant example of norm regularization that is not decomposable is the spectral k?support norm recently introduced by McDonald et al. [25]. Spectral k?support norm is essentially the vector k?support norm [2] applied on the
singular values ?(?) of a matrix ? ? Rd1?d2 . Without loss of generality, let d? = d1 = d2 .
? : |g| ? k} be the set of all subsets [d]
? of cardinality at most k, and denote the set
Let Gk = {g ? [d]
d?
V(Gk ) = {(vg )g?Gk : vg ? R , supp(vg ) ? g}. The spectral k?support norm is given by:
nX
o
X
k?kk?sp = inf
kvg k2 :
vg = ?(?) ,
(8)
v?V(Gk )
g?Gk
g?Gk
McDonald et al. [25] showed that the spectral $k$-support norm is a special case of the cluster norm [17]. It was further shown that in multi-task learning, wherein the tasks (columns of $\Theta^\star$) are assumed to be clustered into dense groups, the cluster norm provides a trade-off between intra-cluster variance, (inverse) inter-cluster variance, and the norm of the task vectors. Both [17] and [25] demonstrate superior empirical performance of cluster norms (and the $k$-support norm) over traditional trace norm and spectral elastic net minimization on benchmarked matrix completion and multi-task learning datasets. However, there is no known work on the statistical analysis of matrix completion with spectral $k$-support norm regularization. In Section 3.2, we discuss the consequence of our main theorem for this non-trivial special case.
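The variational form (8) admits a known closed-form evaluation for the vector $k$-support norm [2], built on the same split of the sorted magnitudes at an integer $r$ that Section 3.2 uses below. The sketch that follows is our illustration of that closed form (applied to singular values for the spectral version); it is an evaluation routine only, not the proximal operator of [25].

```python
import numpy as np

def k_support_norm(w, k):
    """Vector k-support norm via its closed form: with s the magnitudes
    sorted in non-ascending order, find the unique r in {0,...,k-1} with
    s_{k-r-1} > (1/(r+1)) * sum_{i>=k-r} s_i >= s_{k-r} (1-based), then
    norm^2 = (sum of top k-r-1 squares) + (tail sum)^2 / (r+1)."""
    s = np.sort(np.abs(np.asarray(w, dtype=float)))[::-1]
    for r in range(k):
        head = np.inf if k - r - 2 < 0 else s[k - r - 2]
        tail = s[k - r - 1:].sum() / (r + 1)
        if head > tail >= s[k - r - 1]:
            return float(np.sqrt((s[:k - r - 1] ** 2).sum() + (r + 1) * tail ** 2))
    return float(s.sum() / np.sqrt(k))   # defensive fallback for degenerate ties

def spectral_k_support_norm(Theta, k):
    """Spectral k-support norm: the vector norm applied to singular values."""
    return k_support_norm(np.linalg.svd(Theta, compute_uv=False), k)

# Sanity checks: k = 1 recovers the l1 norm; k = d recovers the l2 norm.
v = np.array([3.0, -2.0, 1.0])
assert np.isclose(k_support_norm(v, 1), np.abs(v).sum())
assert np.isclose(k_support_norm(v, 3), np.linalg.norm(v))
```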
Example 3 (Additive Decomposition). Elementwise sparsity is a common structure often assumed in high-dimensional estimation problems. However, in matrix completion, elementwise sparsity conflicts with Assumption 1 (and with the more traditional incoherence conditions). Indeed, it is easy to see that with high probability most of the $|\Omega| \ll d_1 d_2$ uniformly sampled observations will be zero, and an informed prediction is infeasible. However, elementwise sparse structures are often used within an additive decomposition framework, wherein $\Theta^\star = \sum_k \Theta^{(k)}$, such that each component matrix $\Theta^{(k)}$ is in turn structured (e.g. low rank plus sparse, as used for robust PCA [6]). In such structures, there is no scope for recovering sparse components outside the observed indices, and it is assumed that: $\Theta^{(k)}$ is sparse $\Rightarrow \operatorname{supp}(\Theta^{(k)}) \subseteq \Omega$. Further, the sparsity assumption might still conflict with the spikiness assumption. In such cases, consistent matrix completion is feasible under additional regularity assumptions if the superposed matrix is non-spiky. A candidate norm regularizer for such structures is the weighted infimum convolution of individual structure inducing norms [6, 39],
$$\mathcal{R}_w(\Theta) = \inf\Big\{\sum_k w_k\,\mathcal{R}_k(\Theta^{(k)}) \;:\; \sum_k \Theta^{(k)} = \Theta\Big\}.$$
Example 4 (Other Applications). Other potential applications, including cut matrices [30, 10], structures induced by compact convex sets, and norms inducing structured sparsity assumptions on the spectrum of $\Theta^\star$, can also be handled under the paradigm of this paper.
2.2 Structured Matrix Estimator

Let $\mathcal{R}$ be the norm surrogate for the structural constraints on $\Theta^\star$, and let $\mathcal{R}^*$ denote its dual norm. We propose and analyze two convex estimators for the task of structured matrix completion:

Constrained Norm Minimizer
$$\widehat{\Theta}_{cn} = \operatorname*{argmin}_{\Theta:\ \|\Theta\|_\infty \leq \frac{\alpha^\star}{\sqrt{d_1 d_2}}}\ \mathcal{R}(\Theta)\quad\text{s.t.}\quad \|P_\Omega(\Theta) - y\|_2 \leq \lambda_{cn}. \tag{9}$$

Generalized Matrix Dantzig Selector
$$\widehat{\Theta}_{ds} = \operatorname*{argmin}_{\Theta:\ \|\Theta\|_\infty \leq \frac{\alpha^\star}{\sqrt{d_1 d_2}}}\ \mathcal{R}(\Theta)\quad\text{s.t.}\quad \frac{d_1 d_2}{|\Omega|}\,\mathcal{R}^*\big(P^*_\Omega(P_\Omega(\Theta) - y)\big) \leq \lambda_{ds}, \tag{10}$$
where $P^*_\Omega : \mathbb{R}^\Omega \to \mathbb{R}^{d_1\times d_2}$ is the linear adjoint of $P_\Omega$, i.e. $\langle P_\Omega(X), y\rangle = \langle X, P^*_\Omega(y)\rangle$.

Note: Theorem 1a-1b give consistency results for (9) and (10), respectively, under certain conditions on the parameters $\lambda_{cn} > 0$, $\lambda_{ds} > 0$, and $\alpha^\star > 1$. In particular, these conditions assume knowledge of the noise variance $\sigma^2$ and the spikiness ratio $\alpha_{sp}(\Theta^\star)$. In practice, both $\sigma$ and $\alpha_{sp}(\Theta^\star)$ are typically unknown, and the parameters are tuned by validating on held-out data.
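Both programs are standard convex problems once a concrete norm is fixed. As a minimal sketch (our illustration, not a solver used in any experiments), the constrained norm minimizer (9) with $\mathcal{R}$ taken to be the nuclear norm can be transcribed directly into cvxpy; the fancy indexing of the variable and the availability of an SDP-capable solver (e.g. SCS) are assumptions about the installed cvxpy version.

```python
import numpy as np
import cvxpy as cp

def constrained_norm_minimizer(y, rows, cols, d1, d2, lam_cn, alpha_star):
    """Direct transcription of estimator (9) with R = nuclear norm as one
    concrete choice of the regularizer; any cvxpy-expressible norm works."""
    Theta = cp.Variable((d1, d2))
    constraints = [
        cp.norm(Theta[rows, cols] - y, 2) <= lam_cn,            # data-fit constraint
        cp.max(cp.abs(Theta)) <= alpha_star / np.sqrt(d1 * d2), # spikiness constraint
    ]
    prob = cp.Problem(cp.Minimize(cp.normNuc(Theta)), constraints)
    prob.solve()   # needs a solver handling the nuclear-norm (SDP) cone
    return Theta.value
```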
3 Main Results

We define the following "restricted" error cone and its subset:
$$T_{\mathcal{R}} = T_{\mathcal{R}}(\Theta^\star) = \operatorname{cone}\{\Delta : \mathcal{R}(\Theta^\star + \Delta) \leq \mathcal{R}(\Theta^\star)\}, \quad\text{and}\quad E_{\mathcal{R}} = T_{\mathcal{R}} \cap \mathbb{S}^{d_1 d_2 - 1}. \tag{11}$$
Let $\widehat{\Theta}_{cn}$ and $\widehat{\Theta}_{ds}$ be the estimates from (9) and (10), respectively. If $\lambda_{cn}$ and $\lambda_{ds}$ are chosen such that $\Theta^\star$ belongs to the feasible sets in (9) and (10), respectively, then the error matrices $\widehat{\Delta}_{cn} = \widehat{\Theta}_{cn} - \Theta^\star$ and $\widehat{\Delta}_{ds} = \widehat{\Theta}_{ds} - \Theta^\star$ are contained in $T_{\mathcal{R}}$.
Theorem 1a (Constrained Norm Minimizer). Under the problem setup in Section 2, let $\widehat{\Delta}_{cn} = \widehat{\Theta}_{cn} - \Theta^\star$, where $\widehat{\Theta}_{cn}$ is the estimate from (9) with $\lambda_{cn} = 2\sigma\sqrt{|\Omega|}$. For large enough $c_0$, if $|\Omega| > c_0^2\,w_G^2(E_{\mathcal{R}})\log d$, then there exists an RSC parameter $\lambda_{c_0} > 0$ and constants $c_1, c_2, c_3$ such that with probability greater than $1 - \exp(-c_1 w_G^2(E_{\mathcal{R}})) - 2\exp(-c_2 w_G^2(E_{\mathcal{R}})\log d)$,
$$\frac{1}{d_1 d_2}\,\big\|\widehat{\Delta}_{cn}\big\|_F^2 \leq 4\max\bigg\{\frac{c_3\,\sigma^2}{\lambda_{c_0}},\ \frac{\alpha^{\star 2}\,c_0^2\,w_G^2(E_{\mathcal{R}})\log d}{d_1 d_2\,|\Omega|}\bigg\}.$$
Theorem 1b (Matrix Dantzig Selector). Under the problem setup in Section 2, let $\widehat{\Delta}_{ds} = \widehat{\Theta}_{ds} - \Theta^\star$, where $\widehat{\Theta}_{ds}$ is the estimate from (10) with $\lambda_{ds} \geq 2\sigma\,\frac{d_1 d_2}{|\Omega|}\,\mathcal{R}^*\big(P^*_\Omega(w)\big)$. For large enough $c_0$, if $|\Omega| > c_0^2\,w_G^2(E_{\mathcal{R}})\log d$, then there exists an RSC parameter $\lambda_{c_0} > 0$ and a constant $c_1$ such that with probability greater than $1 - \exp(-c_1 w_G^2(E_{\mathcal{R}}))$,
$$\big\|\widehat{\Delta}_{ds}\big\|_F^2 \leq 4\max\bigg\{\frac{\lambda_{ds}^2\,\Psi_{\mathcal{R}}^2(T_{\mathcal{R}})}{\lambda_{c_0}^2},\ \alpha^{\star 2}\,\frac{c_0^2\,w_G^2(E_{\mathcal{R}})\log d}{|\Omega|}\bigg\}.$$
Recall the Gaussian width $w_G$ and the subspace compatibility constant $\Psi_{\mathcal{R}}$ from (1) and (4), respectively.
Remarks:
1. If $\mathcal{R}(\Theta) = \|\Theta\|_*$ and $\operatorname{rank}(\Theta^\star) = r$, then $w_G^2(E_{\mathcal{R}}) \leq 3dr$, $\Psi_{\mathcal{R}}(T_{\mathcal{R}}) \leq \sqrt{2r}$, and $\frac{d_1 d_2}{|\Omega|}\,\mathcal{R}^*\big(P^*_\Omega(w)\big) \leq 2\sqrt{\frac{d\log d}{|\Omega|}}$ w.h.p. [10, 14, 26]. Using these bounds in Theorem 1b recovers near-optimal results for low rank matrix completion under the spikiness assumption [26].
2. For both estimators, the upper bound on the sample complexity is dominated by the square of the Gaussian width, which is often considered the effective dimension of a subset in a high dimensional space and plays a key role in high dimensional estimation under Gaussian measurement ensembles. The results show that, independent of $\mathcal{R}(.)$, the upper bound on the sample complexity for consistent matrix completion with highly localized measurements is within a $\log d$ factor of the known sample complexity of $\sim w_G^2(E_{\mathcal{R}})$ for estimation from Gaussian measurements [3, 10, 37, 5].
3. The first term in the estimation error bounds in Theorem 1a-1b scales with $\sigma^2$, which is the per observation noise variance (up to a constant). The second term is an upper bound on the error that arises due to unidentifiability of $\Theta^\star$ within a certain radius under the spikiness constraints [26]; in contrast, [7] show exact recovery when $\sigma = 0$ using more stringent matrix incoherence conditions.
4. The bound on $\widehat{\Delta}_{cn}$ from Theorem 1a is comparable to the result by Candès et al. [7] for low rank matrix completion in the non-low-noise regime, where the first term dominates, and to those of [10, 35] for high dimensional estimation under Gaussian measurements. With a bound on $w_G^2(E_{\mathcal{R}})$, it is easy to specialize this result for new structural constraints. However, this bound is potentially loose and asymptotically converges to a constant error proportional to the noise variance $\sigma^2$.
5. The estimation error bound in Theorem 1b is typically sharper than that in Theorem 1a. However, application of Theorem 1b to specific structures requires additional bounds on $\mathbb{E}\,\mathcal{R}^*\big(P^*_\Omega(W)\big)$ and $\Psi_{\mathcal{R}}(T_{\mathcal{R}})$ besides $w_G^2(E_{\mathcal{R}})$.
3.1 Partial Complexity Measures
Recall that $w_G(S) = \mathbb{E}\sup_{X\in S}\langle X, G\rangle$ and that $g \in \mathbb{R}^{|\Omega|}$, $g \sim N(0, I_{|\Omega|})$, is a standard normal vector.
Definition 6 (Partial Complexity Measures). Given a randomly sampled collection $\Omega = \{E_k \in \mathbb{R}^{d_1\times d_2}\}$ and a random vector $\eta \in \mathbb{R}^{|\Omega|}$, the partial $\eta$-complexity measure of $S$ is given by:
$$w_{\Omega,\eta}(S) = \mathbb{E}_{\Omega,\eta}\sup_{X\in S-S}\,\langle X, P^*_\Omega(\eta)\rangle. \tag{12}$$
The special cases where $\eta$ is a vector of standard Gaussian ($g$) or standard Rademacher ($\epsilon$, i.e. $\epsilon_k \in \{-1, 1\}$ w.p. $1/2$) variables are of particular interest. In the case of symmetric $\eta$, like $g$ and $\epsilon$, $w_{\Omega,\eta}(S) = 2\,\mathbb{E}_{\Omega,\eta}\sup_{X\in S}\langle X, P^*_\Omega(\eta)\rangle$, and the latter expression will be used interchangeably, ignoring the constant term.
Theorem 2 (Partial Gaussian Complexity). Let $S \subseteq \mathcal{B}_{d_1 d_2}$, and let $\Omega$ be sampled according to (6). There exist universal constants $K_1$, $K_2$, $K_3$, and $K_4$ such that:
$$w_{\Omega,g}(S) \leq K_1\bigg\{\sqrt{\frac{|\Omega|}{d_1 d_2}}\,w_G(S) + \min\bigg(K_2\,\sqrt{\mathbb{E}_\Omega\sup_{X,Y\in S}\|P_\Omega(X - Y)\|_2^2},\ K_3\sup_{X\in S}\frac{\alpha_{sp}(X)}{\sqrt{d_1 d_2}}\bigg)\bigg\}. \tag{13}$$
Further, for a centered i.i.d. 1-sub-Gaussian vector $\eta \in \mathbb{R}^{|\Omega|}$, $w_{\Omega,\eta}(S) \leq K_4\,w_{\Omega,g}(S)$.

Note: For $\Omega \subsetneq [d_1]\times[d_2]$, the second term in (13) is a consequence of the localized measurements.
3.2 Spectral k-Support Norm

We introduced the spectral $k$-support norm in Section 2.1. The estimators from (9) and (10) for the spectral $k$-support norm can be efficiently solved through proximal methods using the proximal operators derived in [25]. We are interested in the statistical guarantees for matrix completion using spectral $k$-support norm regularization. We extend the analysis of [29] for upper bounding the Gaussian width of the descent cone of the vector $k$-support norm to the case of the spectral $k$-support norm. WLOG let $d_1 = d_2 = \bar{d}$. Let $\sigma^\star \in \mathbb{R}^{\bar d}$ be the vector of singular values of $\Theta^\star$ sorted in non-ascending order. Let $r \in \{0, 1, 2, \ldots, k-1\}$ be the unique integer satisfying
$$\sigma^\star_{k-r-1} > \frac{1}{r+1}\sum_{i=k-r}^{\bar d}\sigma^\star_i \geq \sigma^\star_{k-r}.$$
Denote $I_2 = \{1, 2, \ldots, k-r-1\}$, $I_1 = \{k-r, k-r+1, \ldots, s\}$, and $I_0 = \{s+1, s+2, \ldots, \bar d\}$. Finally, for $I \subseteq [\bar d]$, $(\sigma^\star_I)_i = 0\ \forall i \in I^c$, and $(\sigma^\star_I)_i = \sigma^\star_i\ \forall i \in I$.
Lemma 3. If the rank of $\Theta^\star$ is $s$ and $E_{\mathcal{R}}$ is the error set from $\mathcal{R}(\Theta) = \|\Theta\|_{k\text{-sp}}$, then
$$w_G^2(E_{\mathcal{R}}) \leq s(2\bar d - s) + \frac{(r+1)^2\,\|\sigma^\star_{I_2}\|_2^2}{\|\sigma^\star_{I_1}\|_1^2} + |I_1|\,(2\bar d - s).$$
The proof of the above lemma is provided in the appendix. Lemma 3 can be combined with Theorem 1a to obtain recovery guarantees for matrix completion under spectral $k$-support norm regularization.
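For a given spectrum, the right-hand side of Lemma 3 is easy to evaluate; the sketch below (our illustration, mirroring the definitions of $r$, $I_1$ and $I_2$ above) can be used to compare the width bound across choices of $k$.

```python
import numpy as np

def ksp_width_bound(sigma_star, k):
    """Evaluate the bound of Lemma 3 from the non-ascending singular values
    sigma_star of Theta*; indices are 0-based translations of I1 and I2."""
    sig = np.asarray(sigma_star, dtype=float)
    d_bar = len(sig)
    s = int(np.count_nonzero(sig))                  # rank of Theta*
    for r in range(k):                              # the unique r of Section 3.2
        head = np.inf if k - r - 2 < 0 else sig[k - r - 2]
        tail = sig[k - r - 1:].sum() / (r + 1)
        if head > tail >= sig[k - r - 1]:
            break
    I2 = sig[: k - r - 1]                           # 1-based {1, ..., k-r-1}
    I1 = sig[k - r - 1 : s]                         # 1-based {k-r, ..., s}
    mid = (r + 1) ** 2 * (I2 ** 2).sum() / (I1.sum() ** 2) if I1.size else 0.0
    return s * (2 * d_bar - s) + mid + I1.size * (2 * d_bar - s)
```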
4 Discussions and Related Work

Sample Complexity: For consistent recovery in high dimensional convex estimation, it is desirable that the descent cone at the target parameter $\Theta^\star$ is "small" relative to the feasible set (enforced by the observations) of the estimator. Thus, a measure of the complexity/size of the error cone at $\Theta^\star$ is crucial in establishing sample complexity and estimation error bounds. The results in this paper are largely characterized in terms of the widely used complexity measure of Gaussian width $w_G(.)$, and can be compared with the literature on estimation from Gaussian measurements.

Error Bounds: Theorem 1a provides estimation error bounds that depend only on the Gaussian width of the descent cone. In the non-low-noise regime, this result is comparable to analogous results on constrained norm minimization [6, 10, 35]. However, this bound is potentially loose owing to the mismatched data-fit term using squared loss, and asymptotically converges to a constant error proportional to the noise variance $\sigma^2$.

A tighter analysis of the estimation error can be obtained for the matrix Dantzig selector (10) from Theorem 1b. However, application of Theorem 1b requires computing a high probability upper bound on $\mathcal{R}^*\big(P^*_\Omega(W)\big)$. The literature on norms of random matrices [13, 24, 36, 34] can be exploited in deriving such bounds. Besides, in special cases, if $\mathcal{R}(.) \geq K\|\cdot\|_*$, then $K\mathcal{R}^*(.) \leq \|\cdot\|_{op}$ can be used to obtain asymptotically consistent results.

Finally, under near zero noise, the second term in the results of Theorem 1 dominates, and the bounds are weaker than those of [6, 19], owing to the relaxation of the stronger incoherence assumption.

Related Work and Future Directions: The closest related work is the result on the consistency of matrix completion under decomposable norm regularization by [16]. The results in this paper are a strict generalization to general norm regularized (not necessarily decomposable) matrix completion. We provide non-trivial examples of applications where structures enforced by such non-decomposable norms are of interest. Further, in contrast to our results, which are based on the Gaussian width, the RSC parameter in [16] depends on a modified complexity measure (see [16] for its definition). An advantage of results based on the Gaussian width is that the application of Theorem 1 to special cases can greatly benefit from the numerous tools in the literature for the computation of $w_G(.)$.

Another closely related line of work is the non-asymptotic analysis of high dimensional estimation under random Gaussian or sub-Gaussian measurements [10, 1, 35, 3, 37, 5]. However, the analyses from this literature rely on variants of the RIP of the measurement ensemble [9], which is not satisfied by the extremely localized measurements encountered in matrix completion [8]. In an intermediate result, we establish a form of RSC for matrix completion under general norm regularization: a result that was previously known only for nuclear norm and decomposable norm regularization.

In future work, it is of interest to derive matching lower bounds on the estimation error for matrix completion under general low dimensional structures, along the lines of [22, 5], and to explore special case applications of the results in the paper. We also plan to derive an explicit characterization of $\lambda_{ds}$ in terms of the Gaussian width of unit balls by exploiting generic chaining results for general Banach spaces [33].
5 Proof Sketch
Proofs of the lemmas are provided in the Appendix.
5.1 Proof of Theorem 1
Define the following set of $\gamma$-non-spiky matrices in $\mathbb{R}^{d_1\times d_2}$, for the constant $c_0$ from Theorem 1:
$$\mathcal{A}(\gamma) = \bigg\{X : \alpha_{sp}(X) = \frac{\sqrt{d_1 d_2}\,\|X\|_\infty}{\|X\|_F} \leq \gamma\bigg\}. \tag{14}$$
Define,
$$\epsilon_{c_0} = \frac{1}{c_0}\sqrt{\frac{|\Omega|}{w_G^2(E_{\mathcal{R}})\log d}}. \tag{15}$$
Case 1: Spiky Error Matrix. When the error matrix from (9) or (10) has a large spikiness ratio, the following bound on the error is immediate, using $\|\widehat{\Delta}\|_\infty \leq \|\widehat{\Theta}\|_\infty + \|\Theta^\star\|_\infty \leq 2\alpha^\star/\sqrt{d_1 d_2}$ in (3).

Proposition 4 (Spiky Error Matrix). For the constant $c_0$ in Theorem 1a, if $\widehat{\Delta}_{cn} \notin \mathcal{A}(\epsilon_{c_0})$, then
$$\big\|\widehat{\Delta}_{cn}\big\|_F \leq 2c_0\,\alpha^\star\sqrt{\frac{w_G^2(E_{\mathcal{R}})\log d}{|\Omega|}}.$$
An analogous result also holds for $\widehat{\Delta}_{ds}$.
Case 2: Non-Spiky Error Matrix. Let $\widehat{\Delta}_{ds}, \widehat{\Delta}_{cn} \in \mathcal{A}(\epsilon_{c_0})$.

5.1.1 Restricted Strong Convexity (RSC)
Recall $T_{\mathcal{R}}$ and $E_{\mathcal{R}}$ from (11). The most significant step in the proof of Theorem 1 involves showing that over a useful subset of $T_{\mathcal{R}}$, a form of RSC (2) is satisfied by a squared loss penalty.

Theorem 5 (Restricted Strong Convexity). Let $|\Omega| > c_0^2\,w_G^2(E_{\mathcal{R}})\log d$ for a large enough constant $c_0$; further, sub-sample excess samples such that $|\Omega| \leq O(w_G^2(E_{\mathcal{R}})\log^2 d)$. There exists an RSC parameter $\lambda_{c_0} > 0$ such that the following holds w.p. greater than $1 - \exp(-c_1 w_G^2(E_{\mathcal{R}}))$: for all $X \in T_{\mathcal{R}} \cap \mathcal{A}(\epsilon_{c_0})$,
$$\frac{d_1 d_2}{|\Omega|}\,\|P_\Omega(X)\|_2^2 \geq \lambda_{c_0}\,\|X\|_F^2.$$
The proof, in Appendix A, combines empirical process tools along with Theorem 2.
Recall from (5) that $y - P_\Omega(\Theta^\star) = \sigma w$, where $w \in \mathbb{R}^{|\Omega|}$ consists of independent sub-Gaussian random variables with $\mathbb{E}[w_k] = 0$ and $\|w_k\|_{\psi_2} = 1$ (recall $\|\cdot\|_{\psi_2}$ from Definition 2).
5.1.2 Constrained Norm Minimizer

Lemma 6. Under the conditions of Theorem 1, let $c_1$ be a constant such that $\forall k$, $\operatorname{Var}(w_k) \leq c_1$. There exists a universal constant $c_2$ such that, if $\lambda_{cn} \geq 2c_1\sigma\sqrt{|\Omega|}$, then with probability greater than $1 - 2\exp(-c_2|\Omega|)$: (a) $\widehat{\Delta}_{cn} \in T_{\mathcal{R}}$, and (b) $\|P_\Omega(\widehat{\Delta}_{cn})\|_2 \leq 2\lambda_{cn}$.

Using $\lambda_{cn} = 2c_1\sigma\sqrt{|\Omega|}$ in (9), if $\widehat{\Delta}_{cn} \in \mathcal{A}(\epsilon_{c_0})$, then using Theorem 5 and Lemma 6, w.h.p.
$$\frac{\|\widehat{\Delta}_{cn}\|_F^2}{d_1 d_2} \leq \frac{1}{\lambda_{c_0}}\,\frac{\|P_\Omega(\widehat{\Delta}_{cn})\|_2^2}{|\Omega|} \leq \frac{4c_1^2\,\sigma^2}{\lambda_{c_0}}. \tag{16}$$

5.1.3 Matrix Dantzig Selector
Proposition 7. $\lambda_{ds} \geq \sigma\,\frac{d_1 d_2}{|\Omega|}\,\mathcal{R}^*\big(P^*_\Omega(w)\big) \Longrightarrow$ w.h.p. (a) $\widehat{\Delta}_{ds} \in T_{\mathcal{R}}$; (b) $\frac{d_1 d_2}{|\Omega|}\,\mathcal{R}^*\big(P^*_\Omega(P_\Omega(\widehat{\Delta}_{ds}))\big) \leq 2\lambda_{ds}$.

The above result follows from the optimality of $\widehat{\Theta}_{ds}$ and the triangle inequality. Also,
$$\frac{d_1 d_2}{|\Omega|}\,\|P_\Omega(\widehat{\Delta}_{ds})\|_2^2 \leq \frac{d_1 d_2}{|\Omega|}\,\mathcal{R}^*\big(P^*_\Omega(P_\Omega(\widehat{\Delta}_{ds}))\big)\,\mathcal{R}(\widehat{\Delta}_{ds}) \leq 2\lambda_{ds}\,\Psi_{\mathcal{R}}(T_{\mathcal{R}})\,\|\widehat{\Delta}_{ds}\|_F,$$
where we recall the norm compatibility constant $\Psi_{\mathcal{R}}(T_{\mathcal{R}})$ from (4). Finally, using Theorem 5, w.h.p.
$$\frac{\|\widehat{\Delta}_{ds}\|_F^2}{d_1 d_2} \leq \frac{1}{\lambda_{c_0}}\,\frac{\|P_\Omega(\widehat{\Delta}_{ds})\|_2^2}{|\Omega|} \leq \frac{4\lambda_{ds}\,\Psi_{\mathcal{R}}(T_{\mathcal{R}})}{\lambda_{c_0}}\,\frac{\|\widehat{\Delta}_{ds}\|_F}{d_1 d_2}. \tag{17}$$

5.2 Proof of Theorem 2
Let the entries of $\Omega = \{E_k = e_{i_k}e_{j_k}^\top : k = 1, 2, \ldots, |\Omega|\}$ be sampled as in (6). Recall that $g \in \mathbb{R}^{|\Omega|}$ is a standard normal vector. For a compact $S \subseteq \mathbb{R}^{d_1\times d_2}$, it suffices to prove Theorem 2 for a dense countable subset of $S$. Overloading $S$ to denote such a countable subset, define the following random process:
$$\big(X_{\Omega,g}(X)\big)_{X\in S}, \quad\text{where}\quad X_{\Omega,g}(X) = \langle X, P^*_\Omega(g)\rangle = \sum_k \langle X, E_k\rangle\,g_k. \tag{18}$$
We start with a key lemma in the proof of Theorem 2. The proof of this lemma, provided in Appendix B, uses tools from the broad topic of generic chaining developed in recent works [31, 33].

Lemma 8. There exist constants $k_1$, $k_2$ such that for $S \subseteq \mathbb{S}^{d_1 d_2 - 1}$,
$$w_{\Omega,g}(S) = \mathbb{E}\sup_{X\in S} X_{\Omega,g}(X) \leq k_1\sqrt{\frac{|\Omega|}{d_1 d_2}}\,w_G(S) + k_2\,\mathbb{E}\sqrt{\sup_{X,Y\in S}\|P_\Omega(X - Y)\|_2^2}.$$

Lemma 9. There exist constants $k_3$, $k_4$ such that for $S \subseteq \mathbb{S}^{d_1 d_2 - 1}$,
$$\mathbb{E}\sup_{X,Y\in S}\|P_\Omega(X - Y)\|_2^2 \leq k_3\sup_{X\in S}\frac{\alpha_{sp}(X)}{\sqrt{d_1 d_2}}\,w_{\Omega,g}(S) + k_4\,\frac{|\Omega|}{d_1 d_2}\,w_G^2(S).$$

Theorem 2 follows by combining Lemma 8 and Lemma 9, and simplifying using $\sqrt{ab} \leq a/2 + b/2$ and the triangle inequality (see Appendix B). The statement in Theorem 2 about the partial sub-Gaussian complexity follows from a standard result on empirical processes given in Lemma 12.
Acknowledgments We thank the anonymous reviewers for helpful comments and suggestions. S. Gunasekar and J. Ghosh acknowledge funding from NSF grants IIS-1421729, IIS-1417697, and IIS-1116656. A. Banerjee acknowledges NSF grants IIS-1447566, IIS-1422557, CCF-1451986, CNS-1314560, IIS-0953274, IIS-1029711, and NASA grant NNX12AQ39A.
References
[1] D. Amelunxen, M. Lotz, M. B. McCoy, and J. A. Tropp. Living on the edge: A geometric theory of phase
transitions in convex optimization. Inform. Inference, 2014.
[2] A. Argyriou, R. Foygel, and N. Srebro. Sparse prediction with the k-support norm. In NIPS, 2012.
[3] A. Banerjee, S. Chen, F. Fazayeli, and V. Sivakumar. Estimation with norm regularization. In NIPS, 2014.
[4] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh. Clustering with bregman divergences. JMLR, 2005.
[5] T. Cai, T. Liang, and A. Rakhlin. Geometrizing local rates of convergence for linear inverse problems.
arXiv preprint, 2014.
[6] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? ACM, 2011.
[7] E. J. Candès and Y. Plan. Matrix completion with noise. Proceedings of the IEEE, 2010.
[8] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. FoCM, 2009.
[9] Emmanuel J Candes and Terence Tao. Decoding by linear programming. Information Theory, IEEE
Transactions on, 2005.
[10] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky. The convex geometry of linear inverse
problems. Foundations of Computational Mathematics, 2012.
[11] M. A. Davenport, Y. Plan, E. Berg, and M. Wootters. 1-bit matrix completion. Inform. Inference, 2014.
[12] R. M. Dudley. The sizes of compact subsets of hilbert space and continuity of gaussian processes. Journal
of Functional Analysis, 1967.
[13] A. Edelman. Eigenvalues and condition numbers of random matrices. Journal on Matrix Analysis and
Applications, 1988.
[14] M. Fazel, H Hindi, and S. P. Boyd. A rank minimization heuristic with application to minimum order
system approximation. In American Control Conference, 2001.
[15] J. Forster and M. Warmuth. Relative expected instantaneous loss bounds. Journal of Computer and
System Sciences, 2002.
[16] S. Gunasekar, P. Ravikumar, and J. Ghosh. Exponential family matrix completion under structural constraints. In ICML, 2014.
[17] L. Jacob, J. P. Vert, and F. R. Bach. Clustered multi-task learning: A convex formulation. In NIPS, 2009.
[18] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE Trans. IT, 2010.
[19] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. JMLR, 2010.
[20] O. Klopp. Noisy low-rank matrix completion with general sampling distribution. Bernoulli, 2014.
[21] O. Klopp. Matrix completion by singular value thresholding: sharp bounds. arXiv preprint arXiv, 2015.
[22] Vladimir Koltchinskii, Karim Lounici, Alexandre B Tsybakov, et al. Nuclear-norm penalization and
optimal rates for noisy low-rank matrix completion. The Annals of Statistics, 2011.
[23] M. Ledoux and M. Talagrand. Probability in Banach Spaces: isoperimetry and processes. Springer, 1991.
[24] A. E. Litvak, A. Pajor, M. Rudelson, and N. Tomczak-Jaegermann. Smallest singular value of random
matrices and geometry of random polytopes. Advances in Mathematics, 2005.
[25] A. M. McDonald, M. Pontil, and D. Stamos. New perspectives on k-support and cluster norms. arXiv
preprint, 2014.
[26] S. Negahban and M. J. Wainwright. Restricted strong convexity and weighted matrix completion: Optimal
bounds with noise. JMLR, 2012.
[27] S. Negahban, B. Yu, M. J. Wainwright, and P. Ravikumar. A unified framework for high-dimensional
analysis of m-estimators with decomposable regularizers. In NIPS, 2009.
[28] B. Recht. A simpler approach to matrix completion. JMLR, 2011.
[29] E. Richard, G. Obozinski, and J.-P. Vert. Tight convex relaxations for sparse matrix factorization. In
ArXiv e-prints, 2014.
[30] N. Srebro and A. Shraibman. Rank, trace-norm and max-norm. In Learning Theory. Springer, 2005.
[31] M. Talagrand. Majorizing measures: the generic chaining. The Annals of Probability, 1996.
[32] M. Talagrand. Majorizing measures without measures. Annals of probability, 2001.
[33] M. Talagrand. Upper and Lower Bounds for Stochastic Processes. Springer, 2014.
[34] J. A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational
Mathematics, 2012.
[35] J. A. Tropp. Convex recovery of a structured signal from independent random linear measurements. arXiv
preprint, 2014.
[36] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. Compressed sensing,
pages 210-268, 2012.
[37] R. Vershynin. Estimation in high dimensions: a geometric perspective. ArXiv e-prints, 2014.
[38] A. G. Watson. Characterization of the subdifferential of some matrix norms. Linear Algebra and its
Applications, 1992.
[39] E. Yang and P. Ravikumar. Dirty statistical models. In NIPS, 2013.
5,551 | 6,023 | Copeland Dueling Bandits
Masrour Zoghi
Informatics Institute
University of Amsterdam, Netherlands
[email protected]
Zohar Karnin
Yahoo Labs
New York, NY
[email protected]
Shimon Whiteson
Department of Computer Science
University of Oxford, UK
[email protected]
Maarten de Rijke
Informatics Institute
University of Amsterdam
[email protected]
Abstract
A version of the dueling bandit problem is addressed in which a Condorcet winner
may not exist. Two algorithms are proposed that instead seek to minimize regret
with respect to the Copeland winner, which, unlike the Condorcet winner, is guaranteed to exist. The first, Copeland Confidence Bound (CCB), is designed for
small numbers of arms, while the second, Scalable Copeland Bandits (SCB),
works better for large-scale problems. We provide theoretical results bounding
the regret accumulated by CCB and SCB, both substantially improving existing
results. Such existing results either offer bounds of the form O(K log T ) but
require restrictive assumptions, or offer bounds of the form O(K 2 log T ) without
requiring such assumptions. Our results offer the best of both worlds: O(K log T )
bounds without restrictive assumptions.
1 Introduction
The dueling bandit problem [1] arises naturally in domains where feedback is more reliable when
given as a pairwise preference (e.g., when it is provided by a human) and specifying real-valued
feedback instead would be arbitrary or inefficient. Examples include ranker evaluation [2, 3, 4] in
information retrieval, ad placement and recommender systems. As with other preference learning
problems [5], feedback consists of a pairwise preference between a selected pair of arms, instead of
scalar reward for a single selected arm, as in the K-armed bandit problem.
Most existing algorithms for the dueling bandit problem require the existence of a Condorcet winner, which is an arm that beats every other arm with probability greater than 0.5. If such algorithms
are applied when no Condorcet winner exists, no decision may be reached even after many comparisons. This is a key weakness limiting their practical applicability. For example, in industrial ranker
evaluation [6], when many rankers must be compared, each comparison corresponds to a costly live
experiment and thus the potential for failure if no Condorcet winner exists is unacceptable [7].
This risk is not merely theoretical. On the contrary, recent experiments on K-armed dueling bandit
problems based on information retrieval datasets show that dueling bandit problems without Condorcet winners arise regularly in practice [8, Figure 1]. In addition, we show in Appendix C.1 in the
supplementary material that there are realistic situations in ranker evaluation in information retrieval
in which the probability that the Condorcet assumption holds, decreases rapidly as the number of
arms grows. Since the K-armed dueling bandit methods mentioned above do not provide regret
bounds in the absence of a Condorcet winner, applying them remains risky in practice. Indeed, we
demonstrate empirically the danger of applying such algorithms to dueling bandit problems that do
not have a Condorcet winner (cf. Appendix A in the supplementary material).
The non-existence of the Condorcet winner has been investigated extensively in social choice theory,
where numerous definitions have been proposed, without a clear contender for the most suitable
resolution [9]. In the dueling bandit context, a few methods have been proposed to address this
issue, e.g., SAVAGE [10], PBR [11] and RankEl [12], which use some of the notions proposed by
social choice theorists, such as the Copeland score or the Borda score to measure the quality of each
arm, hence determining what constitutes the best arm (or more generally the top-k arms). In this
paper, we focus on finding Copeland winners, which are arms that beat the greatest number of other
arms, because it is a natural, conceptually simple extension of the Condorcet winner.
Unfortunately, the methods mentioned above come with bounds of the form O(K² log T). In this
paper, we propose two new K-armed dueling bandit algorithms for the Copeland setting with significantly improved bounds.
The first algorithm, called Copeland Confidence Bound (CCB), is inspired by the recently proposed Relative Upper Confidence Bound method [13], but modified and extended to address the
unique challenges that arise when no Condorcet winner exists. We prove anytime high-probability
and expected regret bounds for CCB of the form O(K 2 + K log T ). Furthermore, the denominator
of this result has much better dependence on the ?gaps? arising from the dueling bandit problem
than most existing results (cf. Sections 3 and 5.1 for the details).
However, a remaining weakness of CCB is the additive O(K²) term in its regret bounds. In applications with large K, this term can dominate for any experiment of reasonable duration. For example,
at Bing, 200 experiments are run concurrently on any given day [14], in which case the duration
of the experiment needs to be longer than the age of the universe in nanoseconds before K log T
becomes significant in comparison to K².
Our second algorithm, called Scalable Copeland Bandits (SCB), addresses this weakness by eliminating the O(K²) term, achieving an expected regret bound of the form O(K log K log T). The
price of SCB?s tighter regret bounds is that, when two suboptimal arms are close to evenly matched,
it may waste comparisons trying to determine which one wins in expectation. By contrast, CCB
can identify that this determination is unnecessary, yielding better performance unless there are very
many arms. CCB and SCB are thus complementary algorithms for finding Copeland winners.
Our main contributions are as follows:
1. We propose two algorithms that address the dueling bandit problem in the absence of a Condorcet
winner, one designed for problems with small numbers of arms and the other scaling well with
the number of arms.
2. We provide regret bounds that bridge the gap between two groups of results: those of the form
O(K log T) that make the Condorcet assumption, and those of the form O(K² log T) that do not
make the Condorcet assumption. Our bounds are similar to those of the former but are as broadly
applicable as the latter. Furthermore, the result for CCB has substantially better dependence on
the gaps than the second group of results.
3. We include an empirical evaluation of CCB and SCB using a real-life problem arising from
information retrieval (IR). The experimental results mirror the theoretical ones.
2 Problem Setting
Let K ≥ 2. The K-armed dueling bandit problem [1] is a modification of the K-armed bandit
problem [15]. The latter considers K arms {a1 , . . . , aK } and at each time-step, an arm ai can be
pulled, generating a reward drawn from an unknown stationary distribution with expected value μ_i.
The K-armed dueling bandit problem is a variation in which, instead of pulling a single arm, we
choose a pair (ai , aj ) and receive one of them as the better choice, with the probability of ai being
picked equal to an unknown constant pij and that of aj being picked equal to pji = 1 − pij. A
problem instance is fully specified by a preference matrix P = [pij], whose (i, j) entry is equal to pij.
Most previous work assumes the existence of a Condorcet winner [10]: an arm, which without loss
of generality we label a1 , such that p1i > 12 for all i > 1. In such work, regret is defined relative to
the Condorcet winner. However, Condorcet winners do not always exist [8, 13]. In this paper, we
consider a formulation of the problem that does not assume the existence of a Condorcet winner.
Instead, we consider the Copeland dueling bandit problem, which defines regret with respect to a
Copeland winner, which is an arm with maximal Copeland score. The Copeland score of ai , denoted
Cpld(ai), is the number of arms aj for which pij > 0.5. The normalized Copeland score, denoted
cpld(ai), is simply Cpld(ai)/(K − 1). Without loss of generality, we assume that a1, ..., aC are the Copeland
winners, where C is the number of Copeland winners. We define regret as follows:
Definition 1. The regret incurred by comparing ai and aj is 2cpld(a1) − cpld(ai) − cpld(aj).
Remark 2. Since our results (see §5) establish bounds on the number of queries to non-Copeland
winners, they can also be applied to other notions of regret.
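To make these quantities concrete, here is a minimal Python sketch (our illustration, not code from the paper) that computes Copeland scores, normalized scores, and the per-comparison regret of Definition 1 from a preference matrix P, where P[i, j] is assumed to hold the probability that arm i beats arm j:

import numpy as np

def copeland_scores(P):
    """Cpld(a_i): number of arms that arm i beats with probability > 1/2."""
    beats = P > 0.5
    np.fill_diagonal(beats, False)
    return beats.sum(axis=1)

def normalized_copeland(P):
    return copeland_scores(P) / (P.shape[0] - 1)

def round_regret(P, i, j):
    """Regret of comparing arms i and j (Definition 1)."""
    cpld = normalized_copeland(P)
    return 2 * cpld.max() - cpld[i] - cpld[j]

# Example: a 3-arm cycle has no Condorcet winner, yet Copeland winners exist.
P = np.array([[0.5, 0.6, 0.4],
              [0.4, 0.5, 0.7],
              [0.6, 0.3, 0.5]])
print(copeland_scores(P))     # [1 1 1]: every arm is a Copeland winner here
print(round_regret(P, 0, 1))  # 0.0, since both compared arms are winners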
3 Related Work
Numerous methods have been proposed for the K-armed dueling bandit problem, including Interleaved Filter [1], Beat the Mean [3], Relative Confidence Sampling [8], Relative Upper Confidence
Bound (RUCB) [13], Doubler and MultiSBM [16], and mergeRUCB [17], all of which require the
existence of a Condorcet winner, and often come with bounds of the form O(K log T ). However,
as observed in [13] and Appendix C.1, real-world problems do not always have Condorcet winners.
There is another group of algorithms that do not assume the existence of a Condorcet winner, but
have bounds of the form O(K² log T) in the Copeland setting: Sensitivity Analysis of VAriables
for Generic Exploration (SAVAGE) [10], Preference-Based Racing (PBR) [11] and Rank Elicitation
(RankEl) [12]. All three of these algorithms are designed to solve more general or more difficult
problems, and they solve the Copeland dueling bandit problem as a special case.
This work bridges the gap between these two groups by providing algorithms that are as broadly
applicable as the second group but have regret bounds comparable to those of the first group. Furthermore, in the case of the results for CCB, rather than depending on the smallest gap between arms
ai and aj, Δ_min := min_{i>j} |pij − 0.5|, as in the case of many results in the Copeland setting,¹ our
regret bounds depend on a larger quantity that results in a substantially lower upper bound, cf. §5.1.
In addition to the above, bounds have been proven for other notions of winners, including Borda
[10, 11, 12], Random Walk [11, 18], and very recently von Neumann [19]. The dichotomy discussed
also persists in the case of these results, which either rely on restrictive assumptions to obtain a linear
dependence on K or are more broadly applicable, at the expense of a quadratic dependence on K. A
natural question for future work is whether the improvements achieved in this paper in the case of the
Copeland winner can be obtained in the case of these other notions as well. We refer the interested
reader to Appendix C.2 for a numerical comparison of these notions of winners in practice. More
generally, there is a proliferation of notions of winners that the field of Social Choice Theory has put
forth and even though each definition has its merits, it is difficult to argue for any single definition
to be superior to all others.
A related setting is that of partial monitoring games [20]. While a dueling bandit problem can be
modeled as a partial monitoring problem, doing so yields weaker results. In [21], the authors present
problem-dependent bounds from which a regret bound of the form O(K² log T) can be deduced for
the dueling bandit problem, whereas our work achieves a linear dependence in K.
4 Method
We now present two algorithms that find Copeland winners.
4.1 Copeland Confidence Bound (CCB)
CCB (see Algorithm 1) is based on the principle of optimism followed by pessimism: it maintains
optimistic and pessimistic estimates of the preference matrix, i.e., matrices U and L (Line 6). It uses
U to choose an optimistic Copeland winner ac (Lines 7?9 and 11?12), i.e., an arm that has some
chance of being a Copeland winner. Then, it uses L to choose an opponent ad (Line 13), i.e., an arm
deemed likely to discredit the hypothesis that ac is indeed a Copeland winner.
More precisely, an optimistic estimate of the Copeland score of each arm ai is calculated using U
(Line 7), and ac is selected from the set of top scorers, with preference given to those in a shortlist, Bt
(Line 11). These are arms that have, roughly speaking, been optimistic winners throughout history.
To maintain Bt , as soon as CCB discovers that the optimistic Copeland score of an arm is lower than
the pessimistic Copeland score of another arm, it purges the former from Bt (Line 9B).
The mechanism for choosing the opponent ad is as follows. The matrices U and L define a confidence interval around pij for each i and j. In relation to ac , there are three types of arms: (1) arms
aj s.t. the confidence region of pcj is strictly above 0.5, (2) arms aj s.t. the confidence region of pcj
is strictly below 0.5, and (3) arms aj s.t. the confidence region of pcj contains 0.5. Note that an arm
of type (1) or (2) at time t0 may become an arm of type (3) at time t > t0 even without queries to the
corresponding pair as the size of the confidence intervals increases as time goes on.
¹ Cf. [10, Equation 9 in §4.1.1] and [11, Theorem 1].
Algorithm 1 Copeland Confidence Bound
Input: A Copeland dueling bandit problem and an exploration parameter α > 1/2.
1: W = [w_ij] ← 0_{K×K} // 2D array of wins: w_ij is the number of times a_i beat a_j
2: B_1 = {a_1, ..., a_K} // potential best arms
3: B_1^i = ∅ for each i = 1, ..., K // potential to beat a_i
4: L_C = K // estimated max losses of a Copeland winner
5: for t = 1, 2, ... do
6:   U := [u_ij] = W/(W + W^T) + sqrt(α ln t / (W + W^T)) and L := [l_ij] = W/(W + W^T) − sqrt(α ln t / (W + W^T)), with u_ii = l_ii = 1/2 for all i (operations entrywise)
7:   Cpld_U(a_i) = #{k | u_ik ≥ 1/2, k ≠ i} and Cpld_L(a_i) = #{k | l_ik ≥ 1/2, k ≠ i} (optimistic and pessimistic Copeland scores)
8:   C_t = {a_i | Cpld_U(a_i) = max_j Cpld_U(a_j)}
9:   Set B_t ← B_{t−1} and B_t^i ← B_{t−1}^i and update as follows:
   A. Reset disproven hypotheses: If for any i and a_j ∈ B_t^i we have l_ij > 0.5, reset B_t, L_C and B_t^k for all k (i.e., set them to their original values as in Lines 2–4 above).
   B. Remove non-Copeland winners: For each a_i ∈ B_t, if Cpld_U(a_i) < Cpld_L(a_j) holds for any j, set B_t ← B_t \ {a_i}, and if |B_t^i| ≠ L_C + 1, then set B_t^i ← {a_k | u_ik < 0.5}. However, if B_t = ∅, reset B_t, L_C and B_t^k for all k.
   C. Add Copeland winners: For any a_i ∈ C_t with Cpld_U(a_i) = Cpld_L(a_i), set B_t ← B_t ∪ {a_i}, B_t^i ← ∅ and L_C ← K − 1 − Cpld_L(a_i). For each j ≠ i, if |B_t^j| < L_C + 1, set B_t^j ← ∅, and if |B_t^j| > L_C + 1, randomly choose L_C + 1 elements of B_t^j and remove the rest.
10:  With probability 1/4, sample (c, d) uniformly from the set {(i, j) | a_j ∈ B_t^i and 0.5 ∈ [l_ij, u_ij]} (if it is non-empty) and skip to Line 14.
11:  If B_t ∩ C_t ≠ ∅, then with probability 2/3, set C_t ← B_t ∩ C_t.
12:  Sample a_c from C_t uniformly at random.
13:  With probability 1/2, choose the set B to be either B_t^c or {a_1, ..., a_K} and then set d ← arg max_{j : a_j ∈ B, l_jc ≤ 0.5} u_jc. If there is a tie, d is not allowed to be equal to c.
14:  Compare arms a_c and a_d and increment w_cd or w_dc depending on which arm wins.
15: end for
CCB always chooses ad from arms of type (3) because comparing ac and a type (3) arm is most
informative about the Copeland score of ac . Among arms of type (3), CCB favors those that have
confidently beaten arm ac in the past (Line 13), i.e., arms that in some round t0 < t were of type (2).
Such arms are maintained in a shortlist of ?formidable? opponents (Bti ) that are likely to confirm
that ai is not a Copeland winner; these arms are favored when selecting ad (Lines 10 and 13).
The sets Bti are what speed up the elimination of non-Copeland winners, enabling regret bounds that
scale asymptotically with K rather than K². Specifically, for a non-Copeland winner ai, the set
Bti will eventually contain LC + 1 strong opponents for ai (Line 9C), where LC is the number of
losses of each Copeland winner. Since LC is typically small (cf. Appendix C.3), asymptotically this
leads to a bound of only O(log T ) on the number of time-steps when ai is chosen as an optimistic
Copeland winner, instead of a bound of O(K log T ), which a more naive algorithm would produce.
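The following Python sketch illustrates the quantities of Lines 6 and 7: the confidence matrices U and L and the optimistic/pessimistic Copeland scores. It is our own illustration under the definitions above (W is the wins matrix, alpha the exploration parameter; clipping U and L to [0, 1] is a harmless tightening not spelled out in the pseudocode), not the authors' implementation:

import numpy as np

def confidence_matrices(W, t, alpha=0.51):
    N = W + W.T                                  # comparisons per pair
    with np.errstate(divide="ignore", invalid="ignore"):
        mean = np.where(N > 0, W / N, 0.5)
        radius = np.where(N > 0, np.sqrt(alpha * np.log(t) / N), np.inf)
    U = np.minimum(mean + radius, 1.0)
    L = np.maximum(mean - radius, 0.0)
    np.fill_diagonal(U, 0.5)
    np.fill_diagonal(L, 0.5)
    return U, L

def copeland_bounds(U, L):
    off = ~np.eye(U.shape[0], dtype=bool)
    cpld_opt = ((U >= 0.5) & off).sum(axis=1)    # optimistic Copeland score
    cpld_pes = ((L >= 0.5) & off).sum(axis=1)    # pessimistic Copeland score
    return cpld_opt, cpld_pes

W = np.zeros((4, 4)); W[0, 1], W[1, 0] = 7, 3
U, L = confidence_matrices(W, t=10)
print(copeland_bounds(U, L))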
4.2 Scalable Copeland Bandits (SCB)
SCB is designed to handle dueling bandit problems with large numbers of arms. It is based on an
arm-identification algorithm, described in Algorithm 2, designed for a PAC setting, i.e., it finds an
ε-Copeland winner with probability 1 − δ, although we are primarily interested in the case with
ε = 0. Algorithm 2 relies on a reduction to a K-armed bandit problem where we have direct access
Algorithm 2 Approximate Copeland Bandit Solver
Input: A Copeland dueling bandit problem with preference matrix P = [pij], failure probability δ > 0, and approximation parameter ε > 0. Also, define [K] := {1, ..., K}.
1: Define a random variable reward(i) for i ∈ [K] as the following procedure: pick a uniformly random j ≠ i from [K]; query the pair (ai, aj) sufficiently many times in order to determine w.p. at least 1 − δ/K² whether pij > 1/2; return 1 if pij > 0.5 and 0 otherwise.
2: Invoke Algorithm 4, where in each of its calls to reward(i), the feedback is determined by the
above stochastic process.
Return: The same output returned by Algorithm 4.
4
to a noisy version of the Copeland score; the process of estimating the score of arm ai consists of
comparing ai to a random arm aj until it becomes clear which arm beats the other. The sample
complexity bound, which yields the regret bound, is achieved by combining a bound for K-armed
bandits and a bound on the number of arms that can have a high Copeland score.
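A hedged sketch of the reward(i) procedure from Algorithm 2 is given below. The comparison oracle duel and the simple anytime confidence radius are our own stand-ins; the paper's Algorithm 4 uses tighter KL-divergence confidence regions instead:

import math, random

def reward(i, K, duel, delta):
    """One Bernoulli sample whose mean is roughly cpld(a_i)."""
    j = random.choice([a for a in range(K) if a != i])
    wins, trials = 0, 0
    while True:  # terminates a.s. assuming no ties (p_ij != 0.5; cf. Assumption A in Section 5)
        wins += 1 if duel(i, j) else 0
        trials += 1
        p_hat = wins / trials
        # conservative radius from a union bound over all trial counts
        rad = math.sqrt(math.log(4 * trials * trials * K * K / delta)
                        / (2 * trials))
        if abs(p_hat - 0.5) > rad:
            return 1 if p_hat > 0.5 else 0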
Algorithm 2 calls a K-armed bandit algorithm as a subroutine. To this end, we use the KL-based
arm-elimination algorithm, a slight modification of Algorithm 2 in [22]: it implements an elimination tournament with confidence regions based on the KL-divergence between probability distributions. The interested reader can find the pseudo-code in Algorithm 4 contained in Appendix J.
Combining this with the squaring trick, a modification of the doubling trick that reduces the number
of partitions from log T to log log T , the SCB algorithm, described in Algorithm 3, repeatedly calls
Algorithm 2 but force-terminates if an increasing threshold is reached. If it terminates early, then
the identified arm is played against itself until the threshold is reached.
Algorithm 3 Scalable Copeland Bandits
Input: A Copeland dueling bandit problem with preference matrix P = [pij ]
1: for all r = 1, 2, ... do
2:   Set T = 2^(2^r) and run Algorithm 2 with failure probability log(T)/T in order to find an exact Copeland winner (ε = 0); force-terminate if it requires more than T queries.
3:   Let T_0 be the number of queries used by invoking Algorithm 2, and let a_i be the arm produced by it; query the pair (a_i, a_i) T − T_0 times.
4: end for
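The outer loop can be sketched as follows; solve is a hypothetical stand-in for Algorithm 2 returning a winner and the number of queries it spent within a budget, and duel plays a pair of arms once:

import math

def scb(solve, duel, horizon):
    """Squaring-trick driver: epochs of length T = 2^(2^r), r = 1, 2, ..."""
    t, r = 0, 1
    while t < horizon:
        T = 2 ** (2 ** r)
        winner, used = solve(budget=T, delta=math.log(T) / T)  # Algorithm 2
        t += used
        while used < T and t < horizon:  # exploit: play (winner, winner)
            duel(winner, winner)
            used += 1
            t += 1
        r += 1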
5 Theoretical Results
In this section, we present regret bounds for both CCB and SCB. Assuming that the number of
Copeland winners and the number of losses of each Copeland winner are bounded,2 CCB?s regret
bound takes the form O(K 2 + K log T ), while SCB?s is of the form O(K log K log T ). Note that
these bounds are not directly comparable. When there are relatively few arms, CCB is expected to
perform better. By contrast, when there are many arms SCB is expected to be superior. Appendix A
in the supplementary material provides empirical evidence to support these expectations.
Throughout this section we impose the following condition on the preference matrix:
A There are no ties, i.e., for all pairs (ai, aj) with i ≠ j, we have pij ≠ 0.5.
This assumption is not very restrictive in practice. For example, in the ranker evaluation setting from
information retrieval, each arm corresponds to a ranker, a complex and highly engineered system,
so it is unlikely that two rankers are indistinguishable. Furthermore, some of the results we present
in this section actually hold under even weaker assumptions. However, for the sake of clarity, we
defer a discussion of these nuanced differences to Appendix F in the supplementary material.
5.1 Copeland Confidence Bound (CCB)
In this section, we provide a rough outline of our argument for the bound on the regret accumulated
by Algorithm 1. For a more detailed argument, the interested reader is referred to Appendix E.
Consider a K-armed Copeland bandit problem with arms a1 , . . . , aK and preference matrix P =
[pij ], such that arms a1 , . . . , aC are the Copeland winners, with C being the number of Copeland
winners. Moreover, we define LC to be the number of arms to which a Copeland winner loses in
expectation.
Using this notation, our expected regret bound for CCB takes the form:
O( (K² + (C + L_C²)K ln T) / Δ² ).   (1)
Here, Δ is a notion of gap defined in Appendix E, which is an improvement upon the smallest gap
between any pair of arms.
This result is proven in two steps. First, we bound the number of comparisons involving non-Copeland winners, yielding a result of the form O(K² ln T). Second, Theorem 3 closes the gap
² See Appendix C.3 in the supplementary material for experimental evidence that this is the case in practice.
between this bound and the one in (1) by showing that, beyond a certain time horizon, CCB selects
non-Copeland winning arms as the optimistic Copeland winner very infrequently.
Theorem 3. Given a Copeland bandit problem satisfying Assumption A and any δ > 0 and α > 0.5,
there exist constants A^(1) and A^(2) such that, with probability 1 − δ, the regret accumulated by CCB
is bounded by the following:
A^(1) + A^(2) √(ln T) + (2K(C + L_C + 1) / Δ²) ln T.
Using the high-probability regret bound given in Theorem 3, we can deduce the expected regret
result claimed in (1) for α > 1 as a corollary, by integrating δ over the interval [0, 1].
5.2 Scalable Copeland Bandits
We now turn to our regret result for SCB, which lowers the K² dependence in the additive constant
of CCB's regret result to K log K. We begin by defining the relevant quantities:
Definition 4. Given a K-armed Copeland bandit problem and an arm ai , we define the following:
1. Recall that cpld(a_i) := Cpld(a_i)/(K − 1) is called the normalized Copeland score.
2. a_i is an ε-Copeland-winner if 1 − cpld(a_i) ≤ (1 − cpld(a_1))(1 + ε).
3. Δ_i := max{cpld(a_1) − cpld(a_i), 1/(K − 1)} and H_i := Σ_{j≠i} 1/Δ_ij², with H_1 := max_i H_i, where Δ_ij := |p_ij − 0.5|.
4. Δ_i^ε = max{Δ_i, ε(1 − cpld(a_1))}.
We now state our main scalability result:
Theorem 5. Given a Copeland bandit problem satisfying Assumption A, the expected regret of SCB
(Algorithm 3) is bounded by
O( (1/K) Σ_{i=1}^K H_i (1 − cpld(a_i)) / Δ_i² ) log(T),
which in turn can be bounded by
O( K(L_C + log K) log T / Δ_min² ),
where L_C and Δ_min are as in Definition 10.
Recall that SCB is based on Algorithm 2, an arm-identification algorithm that identifies a Copeland
winner with high probability. As a result, Theorem 5 is an immediate corollary of Lemma 6, obtained
by using the well known squaring trick. As mentioned in Section 4.2, the squaring trick is a minor
variation on the doubling trick that reduces the number of partitions from log T to log log T .
Lemma 6 is a result for finding an ε-approximate Copeland winner (see Definition 4.2). Note that,
for the regret setting, we are only interested in the special case with ε = 0, i.e., the problem of
identifying the best arm.
Lemma 6. With probability 1 − δ, Algorithm 2 finds an ε-approximate Copeland winner by time
O( (1/K) Σ_{i=1}^K H_i (1 − cpld(a_i)) / (Δ_i^ε)² ) log(1/δ) ≤ O( H_1 (log(K) + min{ε^{−2}, L_C}) ) log(1/δ),
assuming³ δ = (KH_1)^{−Ω(1)}. In particular when there is a Condorcet winner (cpld(a_1) = 1, L_C = 0) or more generally cpld(a_1) = 1 − O(1/K), L_C = O(1), an exact solution is found with probability at least 1 − δ by using an expected number of queries of at most O(H_1(L_C + log K)) log(1/δ).
In the remainder of this section, we sketch the main ideas underlying the proof of Lemma 6, detailed
in Appendix I in the supplementary material. We first treat the simpler deterministic setting in which
a single query suffices to determine which of a pair of arms beats the other. While a solution can
easily be obtained using K(K − 1)/2 many queries, we aim for one with query complexity linear
in K. The main ingredients of the proof are as follows:
1. cpld(ai ) is the mean of a Bernoulli random variable defined as such: sample uniformly at random
an index j from the set {1, . . . , K} \ {i} and return 1 if ai beats aj and 0 otherwise.
2. Applying a KL-divergence based arm-elimination algorithm (Algorithm 4) to the K-armed bandit arising from the above observation, we obtain a bound by dividing the arms into two groups:
those with Copeland scores close to that of the Copeland winners, and the rest. For the former,
we use the result from Lemma 7 to bound the number of such arms; for the latter, the resulting
regret is dealt with using Lemma 8, which exploits the possible distribution of Copeland scores.
³ The exact expression requires replacing log(1/δ) with log(KH_1/δ).
[Figure 1 here: cumulative regret (0 to 1.2 × 10⁶, linear scale) versus time (10⁴ to 10⁸, log scale) on the MSLR Informational CM with 5 Rankers, with curves for RUCB, RankEl, PBR, SCB, SAVAGE, and CCB.]
Figure 1: Small-scale regret results for a 5-armed Copeland dueling bandit problem arising from
ranker evaluation.
Let us state the two key lemmas here:
Lemma 7. Let D ⊆ {a_1, ..., a_K} be the set of arms for which cpld(a_i) ≥ 1 − d/(K − 1), that is,
arms that are beaten by at most d arms. Then |D| ≤ 2d + 1.
Proof. Consider a fully connected directed graph, whose node set is D and the arc (ai , aj ) is in the
graph if arm ai beats arm aj . By the definition of cpld, the in-degree of any node i is upper bounded
by d. Therefore, the total number of arcs in the graph is at most |D|d. Now, the full connectivity
of the graph implies that the total number of arcs in the graph is exactly |D|(|D| − 1)/2. Thus,
|D|(|D| − 1)/2 ≤ |D|d and the claim follows.
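This counting argument is easy to sanity-check numerically; the snippet below (a quick empirical check, not part of the proof) samples random tournaments and verifies that at most 2d + 1 arms lose to at most d others:

import numpy as np

rng = np.random.default_rng(0)
K = 50
for _ in range(100):
    upper = np.triu(rng.random((K, K)) < 0.5, k=1)  # orientation for i < j
    beats = upper | np.tril(~upper.T, k=-1)         # complete the tournament
    losses = beats.sum(axis=0)                      # in-degree: arms beating each arm
    for d in range(K):
        assert np.sum(losses <= d) <= 2 * d + 1
print("Lemma 7 bound held on all sampled tournaments")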
Lemma 8. The sum Σ_{i : cpld(a_i) < 1} 1/(1 − cpld(a_i)) is in O(K log K).
Proof. Follows from Lemma 7 via a careful partitioning of arms. Details are in Appendix I.
Given the structure of Algorithm 2, the stochastic case is similar to the deterministic case for the
following reason: while the latter requires a single comparison between arms ai and aj to determine
which arm beats the other, in the stochastic case, we need roughly log(K log(Δ_ij^{−1})/δ) / Δ_ij²
comparisons between the two arms to correctly answer the same question with probability at least 1 − δ/K².
6 Experiments
To evaluate our methods CCB and SCB, we apply them to a Copeland dueling bandit problem arising
from ranker evaluation in the field of information retrieval (IR) [23].
We follow the experimental approach in [3, 13] and use a preference matrix to simulate comparisons
between each pair of arms (ai , aj ) by drawing samples from Bernoulli random variables with mean
pij. We compare our proposed algorithms against the state-of-the-art K-armed dueling bandit algorithms, RUCB [13], Copeland SAVAGE, PBR and RankEl. We include RUCB in order to verify
our claim that K-armed dueling bandit algorithms that assume the existence of a Condorcet winner
have linear regret if applied to a Copeland dueling bandit problem without a Condorcet winner.
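The simulation protocol can be sketched as below; the 5x5 matrix is an illustrative stand-in in which no arm beats all four others (so there is no Condorcet winner), not the MSLR-derived matrix of Appendix B:

import numpy as np

rng = np.random.default_rng(1)

def duel(P, i, j):
    """One simulated comparison: True iff arm i beats arm j."""
    return rng.random() < P[i, j]

P = 0.5 * np.ones((5, 5))
P[0, 1:] = [0.4, 0.6, 0.6, 0.6]   # arm 0 loses to arm 1
P[1, 2:] = [0.4, 0.6, 0.6]        # arm 1 loses to arm 2, and so on
P[2, 3:] = [0.4, 0.6]
P[3, 4:] = [0.4]
P = np.triu(P, 1) + np.tril(1 - P.T, -1) + 0.5 * np.eye(5)  # enforce p_ji = 1 - p_ij
print(duel(P, 0, 2))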
More specifically, we consider a 5-armed dueling bandit problem obtained from comparing five
rankers, none of whom beat the other four, i.e. there is no Condorcet winner. Due to lack of space,
the details of the experimental setup have been included in Appendix B.⁴ Figure 1 shows the regret
accumulated by CCB, SCB, the Copeland variants of SAVAGE, PBR, RankEl and RUCB on this
problem. The horizontal time axis uses a log scale, while the vertical axis, which measures cumulative regret, uses a linear scale. CCB outperforms all other algorithms in this 5-armed experiment.
Note that three of the baseline algorithms under consideration here (i.e., SAVAGE, PBR and RankEl)
require the horizon of the experiment as an input, either directly or through a failure probability δ,⁴
which we set to 1/T (with T being the horizon), in order to obtain a finite-horizon regret algorithm, as prescribed in [3, 10]. Therefore, we ran independent experiments with varying horizons
and recorded the accumulated regret: the markers on the curves corresponding to these algorithms
represent these numbers. Consequently, the regret curves are not monotonically increasing. For
instance, SAVAGE's cumulative regret at time 2 × 10⁷ is lower than at time 10⁷ because the runs
that produced the former number were not continuations of those that resulted in the latter, but rather
completely independent. Furthermore, RUCB's cumulative regret grows linearly, which is why the
plot does not contain the entire curve.
⁴ Sample code and the preference matrices used in the experiments can be found at http://bit.ly/nips15data.
Appendix A contains further experimental results, including those of our scalability experiment.
7 Conclusion
In many applications that involve learning from human behavior, feedback is more reliable when
provided in the form of pairwise preferences. In the dueling bandit problem, the goal is to use such
pairwise feedback to find the most desirable choice from a set of options. Most existing work in
this area assumes the existence of a Condorcet winner, i.e., an arm that beats all other arms with
probability greater than 0.5. Even though these results have the advantage that the bounds they
provide scale linearly in the number of arms, their main drawback is that in practice the Condorcet
assumption is too restrictive. By contrast, other results that do not impose the Condorcet assumption
achieve bounds that scale quadratically in the number of arms.
In this paper, we set out to solve a natural generalization of the problem, where instead of assuming
the existence of a Condorcet winner, we seek to find a Copeland winner, which is guaranteed to
exist. We proposed two algorithms to address this problem: one for small numbers of arms, called
CCB, and a more scalable one, called SCB, that works better for problems with large numbers of
arms. We provided theoretical results bounding the regret accumulated by each algorithm: these
results improve substantially over existing results in the literature, by filling the gap that exists in the
current results, namely the discrepancy between results that make the Condorcet assumption and are
of the form O(K log T) and the more general results that are of the form O(K² log T).
Moreover, we have included in the supplementary material empirical results on both a dueling bandit
problem arising from a real-life application domain and a large-scale synthetic problem used to test
the scalability of SCB. The results of these experiments show that CCB beats all existing Copeland
dueling bandit algorithms, while SCB outperforms CCB on the large-scale problem.
One open question raised by our work is how to devise an algorithm that has the benefits of both
CCB and SCB, i.e., the scalability of the latter together with the former?s better dependence on the
gaps. At this point, it is not clear to us how this could be achieved. Another interesting direction
for future work is an extension of both CCB and SCB to problems with a continuous set of arms.
Given the prevalence of cyclical preference relationships in practice, we hypothesize that the non-existence of a Condorcet winner is an even greater issue when dealing with an infinite number of
arms. Given that both our algorithms utilize confidence bounds to make their choices, we anticipate
that continuous-armed UCB-style algorithms like those proposed in [24, 25, 26, 27, 28, 29, 30] can
be combined with our ideas to produce a solution to the continuous-armed Copeland bandit problem
that does not rely on the convexity assumptions made by algorithms such as the one proposed in
[31]. Finally, it is also interesting to expand our results to handle scores other than the Copeland
score, such as an ε-insensitive variant of the Copeland score (as in [12]), or completely different
notions of winners, such as the Borda, Random Walk or von Neumann winners (see, e.g., [32, 19]).
Acknowledgments
We would like to thank Nir Ailon and Ulle Endriss for helpful discussions. This research was supported by
Amsterdam Data Science, the Dutch national program COMMIT, Elsevier, the European Community's Seventh
Framework Programme (FP7/2007-2013) under grant agreement nr 312827 (VOX-Pol), the ESF Research Network Program ELIAS, the Royal Dutch Academy of Sciences (KNAW) under the Elite Network Shifts project,
the Microsoft Research Ph.D. program, the Netherlands eScience Center under project number 027.012.105,
the Netherlands Institute for Sound and Vision, the Netherlands Organisation for Scientific Research (NWO)
under project nrs 727.011.005, 612.001.116, HOR-11-10, 640.006.013, 612.066.930, CI-14-25, SH-322-15,
the Yahoo! Faculty Research and Engagement Program, and Yandex. All content represents the opinion of the
authors, which is not necessarily shared or endorsed by their respective employers and/or sponsors.
References
[1] Y. Yue, J. Broder, R. Kleinberg, and T. Joachims. The K-armed dueling bandits problem.
Journal of Computer and System Sciences, 78(5), 2012.
[2] T. Joachims. Optimizing search engines using clickthrough data. In KDD, 2002.
[3] Y. Yue and T. Joachims. Beat the mean bandit. In ICML, 2011.
[4] K. Hofmann, S. Whiteson, and M. de Rijke. Balancing exploration and exploitation in listwise
and pairwise online learning to rank for information retrieval. Information Retrieval, 16, 2013.
[5] J. Fürnkranz and E. Hüllermeier, editors. Preference Learning. Springer-Verlag, 2010.
[6] A. Schuth, F. Sietsma, S. Whiteson, D. Lefortier, and M. de Rijke. Multileaved comparisons
for fast online evaluation. In CIKM, 2014.
[7] L. Li, J. Kim, and I. Zitouni. Toward predicting the outcome of an A/B experiment for search
relevance. In WSDM, 2015.
[8] M. Zoghi, S. Whiteson, M. de Rijke, and R. Munos. Relative confidence sampling for efficient
on-line ranker evaluation. In WSDM, 2014.
[9] M. Schulze. A new monotonic, clone-independent, reversal symmetric, and Condorcet-consistent single-winner election method. Social Choice and Welfare, 36(2):267–303, 2011.
[10] T. Urvoy, F. Clerot, R. Féraud, and S. Naamane. Generic exploration and k-armed voting
bandits. In ICML, 2013.
[11] R. Busa-Fekete, B. Szörényi, P. Weng, W. Cheng, and E. Hüllermeier. Top-k selection based
on adaptive sampling of noisy preferences. In ICML, 2013.
[12] R. Busa-Fekete, B. Szörényi, and E. Hüllermeier. PAC rank elicitation through adaptive sampling of stochastic pairwise preferences. In AAAI, 2014.
[13] M. Zoghi, S. Whiteson, R. Munos, and M. de Rijke. Relative upper confidence bound for the
K-armed dueling bandits problem. In ICML, 2014.
[14] R. Kohavi, A. Deng, B. Frasca, T. Walker, Y. Xu, and N. Pohlmann. Online controlled experiments at large scale. In KDD, 2013.
[15] W. R. Thompson. On the likelihood that one unknown probability exceeds another in view of
the evidence of two samples. Biometrika, pages 285–294, 1933.
[16] N. Ailon, Z. Karnin, and T. Joachims. Reducing dueling bandits to cardinal bandits. In ICML,
2014.
[17] M. Zoghi, S. Whiteson, and M. de Rijke. MergeRUCB: A method for large-scale online ranker
evaluation. In WSDM, 2015.
[18] S. Negahban, S. Oh, and D. Shah. Iterative ranking from pair-wise comparisons. In NIPS,
2012.
[19] M. Dudík, K. Hofmann, R. E. Schapire, A. Slivkins, and M. Zoghi. Contextual dueling bandits.
In COLT, 2015.
[20] A. Piccolboni and C. Schindelhauer. Discrete prediction games with arbitrary feedback and
loss. In COLT, 2001.
[21] G. Bartók, N. Zolghadr, and C. Szepesvári. An adaptive algorithm for finite stochastic partial
monitoring. In ICML, 2012.
[22] O. Cappé, A. Garivier, O. Maillard, R. Munos, G. Stoltz, et al. Kullback–Leibler upper confidence bounds for optimal sequential allocation. The Annals of Statistics, 41(3), 2013.
[23] C. Manning, P. Raghavan, and H. Schütze. Introduction to Information Retrieval. Cambridge
University Press, 2008.
[24] R. Kleinberg, A. Slivkins, and E. Upfal. Multi-armed bandits in metric space. In STOC, 2008.
[25] S. Bubeck, R. Munos, G. Stoltz, and C. Szepesvari. X-armed bandits. JMLR, 12, 2011.
[26] N. Srinivas, A. Krause, S. M. Kakade, and M. Seeger. Gaussian process optimization in the
bandit setting: No regret and experimental design. In ICML, 2010.
[27] R. Munos. Optimistic optimization of a deterministic function without the knowledge of its
smoothness. In NIPS, 2011.
[28] A. D. Bull. Convergence rates of efficient global optimization algorithms. JMLR, 12, 2011.
[29] N. de Freitas, A. Smola, and M. Zoghi. Exponential regret bounds for Gaussian process bandits
with deterministic observations. In ICML, 2012.
[30] M. Valko, A. Carpentier, and R. Munos. Stochastic simultaneous optimistic optimization. In
ICML, 2013.
[31] Y. Yue and T. Joachims. Interactively optimizing information retrieval systems as a dueling
bandits problem. In ICML, 2009.
[32] A. Altman and M. Tennenholtz. Axiomatic foundations for ranking systems. JAIR, 2008.
5,552 | 6,024 | Regret Lower Bound and Optimal Algorithm in
Finite Stochastic Partial Monitoring
Junpei Komiyama
The University of Tokyo
[email protected]
Junya Honda
The University of Tokyo
[email protected]
Hiroshi Nakagawa
The University of Tokyo
[email protected]
Abstract
Partial monitoring is a general model for sequential learning with limited feedback formalized as a game between two players. In this game, the learner chooses
an action and at the same time the opponent chooses an outcome, then the learner
suffers a loss and receives a feedback signal. The goal of the learner is to minimize the total loss. In this paper, we study partial monitoring with finite actions
and stochastic outcomes. We derive a logarithmic distribution-dependent regret
lower bound that defines the hardness of the problem. Inspired by the DMED
algorithm (Honda and Takemura, 2010) for the multi-armed bandit problem, we
propose PM-DMED, an algorithm that minimizes the distribution-dependent regret. PM-DMED significantly outperforms state-of-the-art algorithms in numerical experiments. To show the optimality of PM-DMED with respect to the regret
bound, we slightly modify the algorithm by introducing a hinge function (PM-DMED-Hinge). Then, we derive an asymptotically optimal regret upper bound of
PM-DMED-Hinge that matches the lower bound.
1 Introduction
Partial monitoring is a general framework for sequential decision making problems with imperfect
feedback. Many classes of problems, including prediction with expert advice [1], the multi-armed
bandit problem [2], dynamic pricing [3], the dark pool problem [4], label efficient prediction [5],
and linear and convex optimization with full or bandit feedback [6, 7] can be modeled as an instance
of partial monitoring.
Partial monitoring is formalized as a repeated game played by two players called a learner and an
opponent. At each round, the learner chooses an action, and at the same time the opponent chooses
an outcome. Then, the learner observes a feedback signal from a given set of symbols and suffers
some loss, both of which are deterministic functions of the selected action and outcome.
The goal of the learner is to find the optimal action that minimizes his/her cumulative loss. Alternatively, we can define the regret as the difference between the cumulative losses of the learner and
the single optimal action, and minimization of the loss is equivalent to minimization of the regret.
A learner with a small regret balances exploration (acquisition of information about the strategy of
the opponent) and exploitation (utilization of information). The rate of regret indicates how fast the
learner adapts to the problem: a linear regret indicates the inability of the learner to find the optimal
action, whereas a sublinear regret indicates that the learner can approach the optimal action given
sufficiently large time steps.
The study of partial monitoring is classified into two settings with respect to the assumption on the
outcomes. On one hand, in the stochastic setting, the opponent chooses an outcome distribution
before the game starts, and an outcome at each round is an i.i.d. sample from the distribution. On
the other hand, in the adversarial setting, the opponent chooses the outcomes to maximize the regret
of the learner. In this paper, we study the former setting.
1.1 Related work
The paper by Piccolboni and Schindelhauer [8] is one of the first to study the regret of the finite partial monitoring problem. They proposed the FeedExp3 algorithm, which attains O(T^{3/4}) minimax
regret on some problems. This bound was later improved by Cesa-Bianchi et al. [9] to O(T^{2/3}),
who also showed an instance in which the bound is optimal. Since then, most literature on partial
monitoring has dealt with the minimax regret, which is the worst-case regret over all possible opponent's strategies. Bartók et al. [10] classified the partial monitoring problems into four categories
in terms of the minimax regret: a trivial problem with zero regret, an easy problem with Θ̃(√T)
regret¹, a hard problem with Θ(T^{2/3}) regret, and a hopeless problem with Θ(T) regret. This shows
that the class of the partial monitoring problems is not limited to the bandit sort but also includes
larger classes of problems, such as dynamic pricing. Since then, several algorithms with a Õ(√T)
regret bound for easy problems have been proposed [11, 12, 13]. Among them, the Bayes-update
Partial Monitoring (BPM) algorithm [13] is state-of-the-art in the sense of empirical performance.
Distribution-dependent and minimax regret: we focus on the distribution-dependent regret that
depends on the strategy of the opponent. While the minimax regret in partial monitoring has been extensively studied, little has been known on distribution-dependent regret in partial monitoring. To the
authors' knowledge, the only paper focusing on the distribution-dependent regret in finite discrete
partial monitoring is the one by Bartók et al. [11], which derived O(log T) distribution-dependent regret for easy problems. In contrast to this situation, much more interest in the distribution-dependent
regret has been shown in the field of multi-armed bandit problems. Upper confidence bound (UCB),
the most well-known algorithm for the multi-armed bandits, has a distribution-dependent regret
bound [2, 14], and algorithms that minimize the distribution-dependent regret (e.g., KL-UCB) has
been shown to perform better than ones that minimize the minimax regret (e.g., MOSS), even in
instances in which the distributions are hard to distinguish (e.g., Scenario 2 in Garivier et al. [15]).
Therefore, in the field of partial monitoring, we can expect that an algorithm that minimizes the
distribution-dependent regret would perform better than the existing ones.
Contribution: the contributions of this paper lie in the following three aspects. First, we derive
the regret lower bound: in some special classes of partial monitoring (e.g., multi-armed bandits), an
O(log T ) regret lower bound is known to be achievable. In this paper, we further extend this lower
bound to obtain a regret lower bound for general partial monitoring problems. Second, we propose
an algorithm called Partial Monitoring DMED (PM-DMED). We also introduce a slightly modified
version of this algorithm (PM-DMED-Hinge) and derive its regret bound. PM-DMED-Hinge is the
first algorithm with a logarithmic regret bound for hard problems. Moreover, for both easy and hard
problems, it is the first algorithm with the optimal constant factor on the leading logarithmic term.
Third, performances of PM-DMED and existing algorithms are compared in numerical experiments.
Here, the partial monitoring problems consisted of three specific instances of varying difficulty. In
all instances, PM-DMED significantly outperformed the existing methods when a number of rounds
is large. The regret of PM-DMED on these problems quickly approached the theoretical lower
bound.
2 Problem Setup
This paper studies the finite stochastic partial monitoring problem with N actions, M outcomes,
and A symbols. An instance of the partial monitoring game is defined by a loss matrix L = (l_{i,j}) ∈ R^{N×M} and a feedback matrix H = (h_{i,j}) ∈ [A]^{N×M}, where [A] = {1, 2, ..., A}. At the beginning, the learner is informed of L and H. At each round t = 1, 2, ..., T, a learner selects an
action i(t) ∈ [N], and at the same time an opponent selects an outcome j(t) ∈ [M]. The learner
¹ Note that Θ̃ ignores a polylog factor.
suffers loss l_{i(t),j(t)}, which he/she cannot observe: the only information the learner receives is the
signal h_{i(t),j(t)} ∈ [A]. We consider a stochastic opponent whose strategy for selecting outcomes is
governed by the opponent's strategy p* ∈ P_M, where P_M is a set of probability distributions over
an M-ary outcome. The outcome j(t) of each round is an i.i.d. sample from p*.
The goal of the learner is to minimize the cumulative loss over T rounds. Let the optimal action be
the one that minimizes the loss in expectation, that is, i* = arg min_{i∈[N]} L_i^⊤ p*, where L_i is the i-th
row of L. Assume that i* is unique. Without loss of generality, we can assume that i* = 1. Let
Δ_i = (L_i − L_1)^⊤ p* ∈ [0, ∞) and N_i(t) be the number of rounds before the t-th in which action i is
selected. The performance of the algorithm is measured by the (pseudo) regret,
Regret(T) = Σ_{t=1}^T Δ_{i(t)} = Σ_{i∈[N]} Δ_i N_i(T + 1),
which is the difference between the expected loss of the learner and the optimal action. It is easy
to see that minimizing the loss is equivalent to minimizing the regret. The expectation of the regret
measures the performance of an algorithm that the learner uses.
[Figure 1 here: cell decomposition of a partial monitoring instance with M = 3; the simplex is partitioned into optimality cells C_1, ..., C_5, with p* lying in C_1 and ‖p* − C_1^c‖_M its distance to the cell boundary.]
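A minimal sketch of one round of this game follows; the toy matrices L and H and the strategy p_star are our own illustration of the interface, not an instance from the paper:

import numpy as np

rng = np.random.default_rng(0)
L = np.array([[0.0, 1.0],   # N = 2 actions, M = 2 outcomes
              [1.0, 0.0]])
H = np.array([[1, 2],       # A = 2 feedback symbols
              [1, 1]])      # action 2 reveals nothing about the outcome
p_star = np.array([0.3, 0.7])

def play(i):
    j = rng.choice(len(p_star), p=p_star)  # opponent draws outcome j ~ p*
    return L[i, j], H[i, j]                # unobserved loss, observed symbol

loss, symbol = play(0)
i_star = np.argmin(L @ p_star)             # i* = arg min_i L_i^T p* (here: 1)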
For each action i ∈ [N], let C_i be the set of opponent strategies for which action i is optimal:
C_i = {q ∈ P_M : ∀ j ≠ i, (L_i − L_j)^⊤ q ≤ 0}.
We call C_i the optimality cell of action i. Each optimality cell is a convex closed polytope. Furthermore, we call the set of optimality cells {C_1, ..., C_N} the cell decomposition as shown in Figure 1.
Let C_i^c = P_M \ C_i be the set of strategies with which action i is not optimal.
The signal matrix S_i ∈ {0, 1}^{A×M} of action i is defined as (S_i)_{k,j} = 1[h_{i,j} = k], where 1[X] = 1
if X is true and 0 otherwise. The signal matrix defined here is slightly different from the one
in the previous papers (e.g., Bartók et al. [10]) in which the number of rows of S_i is the number
of the different symbols in the i-th row of H. The advantage in using the definition here is that
S_i p* ∈ R^A is a probability distribution over symbols that the algorithm observes when it selects
an action i. Examples of signal matrices are shown in Section 5. An instance of partial monitoring
is globally observable if for all pairs i, j of actions, L_i − L_j ∈ ⊕_{k∈[N]} Im S_k^⊤. In this paper, we
exclusively deal with globally observable instances: in view of the minimax regret, this includes
trivial, easy, and hard problems.
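Under these definitions, the signal matrices and the global-observability condition can be checked numerically. The sketch below is our own construction: it builds each S_i from H and tests whether every L_i − L_j lies in the span of the rows of all signal matrices:

import numpy as np

def signal_matrix(H, i, A):
    M = H.shape[1]
    S = np.zeros((A, M))
    S[H[i] - 1, np.arange(M)] = 1.0   # (S_i)_{k,j} = 1[h_{i,j} = k]
    return S

def globally_observable(L, H, A, tol=1e-9):
    N = L.shape[0]
    rows = np.vstack([signal_matrix(H, k, A) for k in range(N)])
    for i in range(N):
        for j in range(N):
            target = L[i] - L[j]
            coef, *_ = np.linalg.lstsq(rows.T, target, rcond=None)
            if np.linalg.norm(rows.T @ coef - target) > tol:
                return False
    return True

L = np.array([[0.0, 1.0], [1.0, 0.0]])
H = np.array([[1, 2], [1, 1]])
print(globally_observable(L, H, A=2))  # True: action 1's feedback reveals j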
3 Regret Lower Bound
A good algorithm should work well against any opponent's strategy. We extend this idea by introducing the notion of strong consistency: a partial monitoring algorithm is strongly consistent if it
satisfies E[Regret(T)] = o(T^a) for any a > 0 and p ∈ P_M given L and H.
In the context of the multi-armed bandit problem, Lai and Robbins [2] derived the regret lower
bound of a strongly consistent algorithm: an algorithm must select each arm i until its number of
draws N_i(t) satisfies log t ≤ N_i(t) d(μ_i‖μ_1), where d(μ_i‖μ_1) is the KL divergence between the two
one-parameter distributions from which the rewards of action i and the optimal action are generated.
Analogously, in the partial monitoring problem, we can define the minimum number of observations.
Lemma 1. For sufficiently large T, a strongly consistent algorithm satisfies: for all q ∈ C_1^c,
Σ_{i∈[N]} E[N_i(T)] D(p_i*‖S_i q) ≥ log T − o(log T),
where p_i* = S_i p* and D(p‖q) = Σ_i (p)_i log((p)_i/(q)_i) is the KL divergence between two discrete
distributions, in which we define 0 log 0/0 = 0.
Lemma 1 can be interpreted as follows: for each round t, consistency requires the algorithm to
make sure that the possible risk that action i ≠ 1 is optimal is smaller than 1/t. The large deviation
principle [16] states that the probability that an opponent with strategy q behaves like p* is roughly
exp(−Σ_i N_i(t) D(p_i*‖S_i q)). Therefore, we need to continue exploration of the actions until
Σ_i N_i(t) D(p_i*‖S_i q) ≥ log t holds for any q ∈ C_1^c to reduce the risk to exp(−log t) = 1/t.
The proof of Lemma 1 is in Appendix B in the supplementary material. Based on the technique
used in Lai and Robbins [2], the proof considers a modified game in which another action i ?= 1 is
optimal. The difficulty in proving the lower bound in partial monitoring lies in that, the feedback
structure can be quite complex: for example, to confirm the superiority of action 1 over 2, one might
need to use the feedback from action 3 ?
/ {1, 2}. Still, we can derive the lower bound by utilizing
the consistency of the algorithm in the original and modified games.
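For reference, a minimal numpy implementation of the discrete KL divergence used throughout (with the convention 0 log 0/0 = 0) might look as follows; this is our own helper, not code from the paper.

```python
import numpy as np

def kl(p, q):
    """Discrete KL divergence D(p || q) with the convention 0*log(0/0) = 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0          # terms with p_i = 0 contribute nothing
    # if q_i = 0 while p_i > 0, the ratio is +inf and so is the divergence
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))
```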
We next derive a lower bound on the regret based on Lemma 1. Note that the expectation of the regret can be expressed as E[Regret(T)] = sum_{i != 1} E[N_i(t)] (L_i - L_1)^T p*. Let

$$\mathcal{R}_j(\{p_i\}) = \Big\{ \{r_i\}_{i \ne j} \in [0,\infty)^{N-1} : \inf_{q \in \mathrm{cl}(\mathcal{C}_j^c) :\, p_j = S_j q} \ \sum_i r_i\, D(p_i \| S_i q) \ \ge\ 1 \Big\},$$

where cl(.) denotes a closure. Moreover, let

$$C_j^*(p, \{p_i\}) = \inf_{\{r_i\}_{i \ne j} \in \mathcal{R}_j(\{p_i\})} \ \sum_{i \ne j} r_i (L_i - L_j)^\top p,$$

the optimal solution of which is

$$\mathcal{R}_j^*(p, \{p_i\}) = \Big\{ \{r_i\}_{i \ne j} \in \mathcal{R}_j(\{p_i\}) : \sum_{i \ne j} r_i (L_i - L_j)^\top p = C_j^*(p, \{p_i\}) \Big\}.$$

The value C_1^*(p*, {p_i^*}) log T is the possible minimum regret for observations such that the minimum divergence of p* from any q in C_1^c is larger than log T. Using Lemma 1 yields the following regret lower bound:

Theorem 2. The regret of a strongly consistent algorithm is lower bounded as:

$$\mathbb{E}[\mathrm{Regret}(T)] \ \ge\ C_1^*(p^*, \{p_i^*\}) \log T - o(\log T).$$

From this theorem, we can naturally measure the harshness of the instance by C_1^*(p*, {p_i^*}), whereas the past studies (e.g., Vanchinathan et al. [13]) ambiguously define the harshness as the closeness to the boundary of the cells. Furthermore, we show in Lemma 5 in the Appendix that C_1^*(p*, {p_i^*}) = O(N / ||p* - C_1^c||_M^2): the regret bound has at most quadratic dependence on ||p* - C_1^c||_M, which is defined in Appendix D as the closeness of p* to the boundary of the optimal cell.
4 PM-DMED Algorithm

In this section, we describe the partial monitoring deterministic minimum empirical divergence (PM-DMED) algorithm, which is inspired by DMED [17] for solving the multi-armed bandit problem.
Let p-hat_i(t) in [0,1]^A be the empirical distribution of the symbols under the selection of action i. Namely, the k-th element of p-hat_i(t) is

$$\Big( \sum_{t'=1}^{t-1} 1[i(t') = i \wedge h_{i(t'), j(t')} = k] \Big) \Big/ \Big( \sum_{t'=1}^{t-1} 1[i(t') = i] \Big).$$

We sometimes omit t from p-hat_i when it is clear from the context. Let the empirical divergence of q in P_M be sum_{i in [N]} N_i(t) D(p-hat_i(t) || S_i q), the exponential of which can be considered as a likelihood that q is the opponent's strategy.
The main routine of PM-DMED is in Algorithm 1. At each loop, the actions in the current list Z_C are selected once. The list of actions for the next loop, Z_N, is determined by the subroutine in Algorithm 2. The subroutine checks whether the empirical divergence of each point q in C_1^c is larger than log t or not (Eq. (3)). If it is large enough, it exploits the current information by selecting i-hat(t), the optimal action based on the estimation p-hat(t) that minimizes the empirical divergence. Otherwise, it selects the actions whose number of observations is below the minimum requirement for making the empirical divergence of each suboptimal point q in C_1^c larger than log t.

Unlike the N-armed bandit problem, in which a reward is associated with an action, in the partial monitoring problem actions, outcomes, and feedback signals can be intricately related. Therefore, we need to solve a non-trivial optimization to run PM-DMED. Later in Section 5, we discuss a practical implementation of the optimization.
Algorithm 1 Main routine of PM-DMED and PM-DMED-Hinge
1: Initialization: select each action once.
2: Z_C, Z_R <- [N], Z_N <- empty set.
3: while t <= T do
4:   for i(t) in Z_C, in an arbitrarily fixed order, do
5:     Select i(t), and receive feedback.
6:     Z_R <- Z_R \ {i(t)}.
7:     Add actions to Z_N in accordance with Algorithm 2 (PM-DMED) or Algorithm 3 (PM-DMED-Hinge).
8:     t <- t + 1.
9:   end for
10:  Z_C, Z_R <- Z_N, Z_N <- empty set.
11: end while

Algorithm 2 PM-DMED subroutine for adding actions to Z_N (without duplication).
1: Parameter: c > 0.
2: Compute an arbitrary p-hat(t) such that

   p-hat(t) in arg min_q sum_i N_i(t) D(p-hat_i(t) || S_i q)   (1)

   and let i-hat(t) = arg min_i L_i^T p-hat(t).
3: If i-hat(t) is not in Z_R, then put i-hat(t) into Z_N.
4: If there are actions i not in Z_R such that

   N_i(t) < c sqrt(log t),   (2)

   then put them into Z_N.
5: If

   {N_i(t) / log t}_{i != i-hat(t)} is not in R_{i-hat(t)}({p-hat_i(t)}),   (3)

   then compute some

   {r_i^*}_{i != i-hat(t)} in R*_{i-hat(t)}(p-hat(t), {p-hat_i(t)})   (4)

   and put all actions i such that i is not in Z_R and r_i^* > N_i(t) / log t into Z_N.
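To make the control flow above concrete, the following Python sketch (our own schematic, not the authors' code) implements the main loop of Algorithm 1; the optimization-heavy steps, Eqs. (1)-(4), are hidden behind the hypothetical callbacks `estimate_p` and `list_next_actions`, since in practice they require a nonlinear solver and an LSIP approximation (see Section 5).

```python
import numpy as np

def pm_dmed(T, N, play, estimate_p, list_next_actions):
    """Schematic main loop of PM-DMED (Algorithm 1)."""
    counts = np.zeros(N, dtype=int)                  # N_i(t)
    Z_C, Z_R, Z_N = list(range(N)), set(range(N)), set()
    t = N + 1                                        # each action was selected once
    while t <= T:
        for i in Z_C:
            play(i)                                  # select i(t), receive feedback
            counts[i] += 1
            Z_R.discard(i)
            p_hat = estimate_p(counts)               # minimizer of empirical divergence
            Z_N |= set(list_next_actions(t, counts, p_hat, Z_R))
            t += 1
            if t > T:
                break
        # guard against an empty list, for this sketch only
        Z_C = sorted(Z_N) or [int(np.argmin(counts))]
        Z_R, Z_N = set(Z_C), set()
    return counts
```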
Necessity of sqrt(log t) exploration: PM-DMED tries to observe each action to some extent (Eq. (2)), which is necessary for the following reason: consider a four-state game characterized by

$$L = \begin{pmatrix} 0 & 1 & 1 & 0 \\ 10 & 1 & 0 & 0 \\ 10 & 0 & 1 & 0 \\ 11 & 11 & 11 & 11 \end{pmatrix}, \quad H = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & 2 & 2 & 3 \\ 1 & 2 & 2 & 3 \\ 1 & 1 & 2 & 2 \end{pmatrix}, \quad \text{and } p^* = (0.1, 0.2, 0.3, 0.4)^\top.$$

The optimal action here is action 1, which does not yield any useful information. By using action 2, one receives three kinds of symbols, from which one can estimate (p*)_1, (p*)_2 + (p*)_3, and (p*)_4, where (p*)_j is the j-th component of p*. From this, an algorithm can find that (p*)_1 is not very small, and thus the expected loss of actions 2 and 3 is larger than that of action 1. Since the feedback of actions 2 and 3 is the same, one may also use action 3 in the same manner. However, the loss per observation is 1.2 and 1.3 for actions 2 and 3, respectively, and thus it is better to use action 2. This difference comes from the fact that (p*)_2 = 0.2 < 0.3 = (p*)_3. Since an algorithm does not know p* beforehand, it needs to observe action 4, the only source for distinguishing (p*)_2 from (p*)_3. Yet, an optimal algorithm cannot select it more than Theta(log T) times, because this affects the O(log T) factor in the regret. In fact, O((log T)^a) observations of action 4 with some a > 0 are sufficient to be convinced that (p*)_2 < (p*)_3 with probability 1 - o(1/T^{poly(a)}). For this reason, PM-DMED selects each action sqrt(log t) times.
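To make the arithmetic in this example explicit, a few lines of numpy (using the matrices as reconstructed above) verify the expected losses:

```python
import numpy as np

# Four-state game of Section 4, values as reconstructed above.
L = np.array([[0, 1, 1, 0],
              [10, 1, 0, 0],
              [10, 0, 1, 0],
              [11, 11, 11, 11]])
p_star = np.array([0.1, 0.2, 0.3, 0.4])

print(L @ p_star)  # [0.5, 1.2, 1.3, 11.0]: action 1 is optimal, and of the two
                   # informative actions 2 and 3, action 2 is cheaper per observation
```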
5 Experiment

Following Bartok et al. [11], we compared the performances of the algorithms in three different games: the four-state game (Section 4), a three-state game, and dynamic pricing. Experiments on the N-armed bandit game were also done, and the result is shown in Appendix C.1.
The three-state game, which is classified as easy in terms of the minimax regret, is characterized by:

$$L = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix} \quad \text{and} \quad H = \begin{pmatrix} 1 & 2 & 2 \\ 2 & 1 & 2 \\ 2 & 2 & 1 \end{pmatrix}.$$

The signal matrices of this game are

$$S_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \end{pmatrix}, \quad S_2 = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \end{pmatrix}, \quad \text{and} \quad S_3 = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}.$$
[Figure 2: Regret-round semilog plots of the algorithms (Random, FeedExp3, CBP, BPM-LEAST, BPM-TS, PM-DMED, and the lower bound LB) in seven settings: (a) three-states, benign; (b) three-states, intermediate; (c) three-states, harsh; (d) dynamic pricing, benign; (e) dynamic pricing, intermediate; (f) dynamic pricing, harsh; (g) four-states. The regrets are averaged over 100 runs. LB is the asymptotic regret lower bound of Theorem 2.]
Dynamic pricing, which is classified as hard in terms of the minimax regret, is a game that models a repeated auction between a seller (learner) and a buyer (opponent). At each round, the seller sets a price for a product, and at the same time the buyer secretly sets a maximum price he is willing to pay. The signal is "buy" or "no-buy", and the seller's loss is either a given constant (no-buy) or the difference between the buyer's and the seller's prices (buy). The loss and feedback matrices are:

$$L = \begin{pmatrix} 0 & 1 & \cdots & N-1 \\ c & 0 & \cdots & N-2 \\ \vdots & \ddots & \ddots & \vdots \\ c & \cdots & c & 0 \end{pmatrix} \quad \text{and} \quad H = \begin{pmatrix} 2 & 2 & \cdots & 2 \\ 1 & 2 & \cdots & 2 \\ \vdots & \ddots & \ddots & \vdots \\ 1 & \cdots & 1 & 2 \end{pmatrix},$$

where signals 1 and 2 correspond to no-buy and buy. The signal matrix of action i is

$$S_i = \begin{pmatrix} \overbrace{1 \cdots 1}^{i-1} & \overbrace{0 \cdots 0}^{M-i+1} \\ 0 \cdots 0 & 1 \cdots 1 \end{pmatrix}.$$

Following Bartok et al. [11], we set N = 5, M = 5, and c = 2.
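For reference, a short Python sketch (our own construction under the conventions above, with 0-based indices) builds the dynamic-pricing matrices and the corresponding signal matrices:

```python
import numpy as np

def dynamic_pricing(N=5, M=5, c=2):
    """Loss matrix L, feedback matrix H, and signal matrices for dynamic pricing."""
    L = np.empty((N, M))
    H = np.empty((N, M), dtype=int)
    for i in range(N):          # seller's price index
        for j in range(M):      # buyer's maximum-price index
            buy = j >= i
            L[i, j] = (j - i) if buy else c
            H[i, j] = 2 if buy else 1        # 2 = "buy", 1 = "no-buy"
    S = [np.vstack([(H[i] == 1).astype(int),
                    (H[i] == 2).astype(int)]) for i in range(N)]
    return L, H, S
```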
In our experiments with the three-state game and dynamic pricing, we tested three settings regarding the harshness of the opponent: at the beginning of a simulation, we sampled 1,000 points uniformly at random from P_M, then sorted them by C_1^*(p*, {p_i^*}). We chose the top 10%, 50%, and 90% harshest ones as the opponent's strategy in the harsh, intermediate, and benign settings, respectively.

We compared Random, FeedExp3 [8], CBP [11] with alpha = 1.01, BPM-LEAST, BPM-TS [13], and PM-DMED with c = 1. Random is a naive algorithm that selects an action uniformly at random. FeedExp3 requires a matrix G such that H^T G = L^T, and thus one cannot apply it to the four-state game. CBP is an algorithm with logarithmic regret for easy games. The parameters alpha and f(t) of CBP were set in accordance with Theorem 1 in their paper. BPM-LEAST is a Bayesian algorithm with O-tilde(sqrt(T)) regret for easy games, and BPM-TS is a heuristic of state-of-the-art performance. The priors of the two BPMs were set to be uninformative to avoid a misspecification, as recommended in their paper.
Algorithm 3 PM-DMED-Hinge subroutine for adding actions to Z_N (without duplication).
1: Parameters: c > 0, f(n) = b n^{-1/2} for b > 0, delta(t) = a / (log log t) for a > 0.
2: Compute an arbitrary p-hat(t) which satisfies

   p-hat(t) in arg min_q sum_i N_i(t) (D(p-hat_i(t) || S_i q) - f(N_i(t)))_+   (5)

   and let i-hat(t) = arg min_i L_i^T p-hat(t).
3: If i-hat(t) is not in Z_R, then put i-hat(t) into Z_N.
4: If

   p-hat(t) is not in C_{i-hat(t), delta(t)}   (6)

   or there exists an action i such that

   D(p-hat_i(t) || S_i p-hat(t)) > f(N_i(t)),   (7)

   then put all actions i not in Z_R into Z_N.
5: If there are actions i such that

   N_i(t) < c sqrt(log t),   (8)

   then put the actions not in Z_R into Z_N.
6: If

   {N_i(t) / log t}_{i != i-hat(t)} is not in R_{i-hat(t)}({p-hat_i(t), f(N_i(t))}),   (9)

   then compute some

   {r_i^*}_{i != i-hat(t)} in R*_{i-hat(t)}(p-hat(t), {p-hat_i(t), f(N_i(t))})   (10)

   and put all actions such that i is not in Z_R and r_i^* > N_i(t) / log t into Z_N. If such {r_i^*} is infeasible, then put all actions i not in Z_R into Z_N.
The computation of p-hat(t) in (1) and the evaluation of the condition in (3) involve convex optimizations, which were done with Ipopt [18]. Moreover, obtaining {r_i^*} in (4) is classified as a linear semi-infinite programming (LSIP) problem: a linear program (LP) with finitely many variables and infinitely many constraints. Following the optimization of BPM-LEAST [13], we resorted to a finite sample approximation and used the Gurobi LP solver [19] in computing {r_i^*}: at each round, we sampled 1,000 points from P_M, and relaxed the constraints onto the samples. To speed up the computation, we skipped these optimizations in most rounds with large t and used the result of the last computation. The computation of the coefficient C_1^*(p*, {p_i^*}) of the regret lower bound (Theorem 2) is also an LSIP, which was approximated by 100,000 sample points from C_1^c.
The experimental results are shown in Figure 2. In the four-state game and the other two games with an easy or intermediate opponent, PM-DMED outperforms the other algorithms when the number of rounds is large. In particular, in the dynamic pricing game with an intermediate opponent, the regret of PM-DMED at T = 10^6 is ten times smaller than those of the other algorithms. Even in the harsh setting, in which the minimax regret matters, PM-DMED has some advantage over all algorithms except for BPM-TS. With sufficiently large T, the slope of an optimal algorithm should converge to LB. In all games and settings, the slope of PM-DMED converges to LB, which is empirical evidence of the optimality of PM-DMED.
6 Theoretical Analysis

Section 5 shows that the empirical performance of PM-DMED is very close to the regret lower bound in Theorem 2. Although the authors conjecture that PM-DMED is optimal, it is hard to analyze PM-DMED. The technically hardest part arises from the case in which the divergence of each action is small but not yet fully converged. To circumvent this difficulty, we can introduce a discount factor. Let

$$\mathcal{R}_j(\{p_i, \delta_i\}) = \Big\{ \{r_i\}_{i \ne j} \in [0,\infty)^{N-1} : \inf_{q \in \mathrm{cl}(\mathcal{C}_j^c) :\, D(p_j \| S_j q) \le \delta_j} \ \sum_i r_i \big(D(p_i \| S_i q) - \delta_i\big)_+ \ \ge\ 1 \Big\}, \qquad (11)$$

where (X)_+ = max(X, 0). Note that R_j({p_i, delta_i}) in (11) is a natural generalization of R_j({p_i}) in Section 4, in the sense that R_j({p_i, 0}) = R_j({p_i}). The event {N_i(t) / log t}_{i != 1} in R_1({p-hat_i(t), delta_i}) means that the number of observations {N_i(t)} is enough to ensure that the "{delta_i}-discounted" empirical divergence of each q in C_1^c is larger than log t. Analogous to R_j({p_i, delta_i}), we define
$$C_j^*(p, \{p_i, \delta_i\}) = \inf_{\{r_i\}_{i \ne j} \in \mathcal{R}_j(\{p_i, \delta_i\})} \ \sum_{i \ne j} r_i (L_i - L_j)^\top p$$

and its optimal solution by

$$\mathcal{R}_j^*(p, \{p_i, \delta_i\}) = \Big\{ \{r_i\}_{i \ne j} \in \mathcal{R}_j(\{p_i, \delta_i\}) : \sum_{i \ne j} r_i (L_i - L_j)^\top p = C_j^*(p, \{p_i, \delta_i\}) \Big\}.$$

We also define C_{i,delta} = {p in P_M : L_i^T p + delta <= min_{j != i} L_j^T p}, the optimal region of action i with margin. PM-DMED-Hinge shares the main routine of Algorithm 1 with PM-DMED and lists the next actions by Algorithm 3. Unlike PM-DMED, it (i) discounts f(N_i(t)) from the empirical divergence D(p-hat_i(t) || S_i q). Moreover, (ii) when p-hat(t) is close to a cell boundary, it encourages more exploration to identify the cell it belongs to, by Eq. (6).
Theorem 3. Assume that the following regularity conditions hold for p*. (1) R_1^*(p, {p_i, delta_i}) is unique at p = p*, p_i = S_i p*, delta_i = 0. Moreover, (2) for S_delta = {q : D(p_1^* || S_1 q) <= delta}, it holds that cl(int(C_1^c) intersect S_delta) = cl(cl(C_1^c) intersect S_delta) for all delta >= 0 in some neighborhood of delta = 0, where cl(.) and int(.) denote the closure and the interior, respectively. Then,

$$\mathbb{E}[\mathrm{Regret}(T)] \ \le\ C_1^*(p^*, \{p_i^*\}) \log T + o(\log T).$$

We prove this theorem in Appendix D. Recall that R_1^*(p, {p-hat_i(t), delta_i}) is the set of optimal solutions of an LSIP. In this problem, KKT conditions and the duality theorem apply as in the case of finite constraints; thus, we can check whether Condition 1 holds or not for each p* (see, e.g., Ito et al. [20] and references therein). Condition 2 holds in most cases, and an example of an exceptional case is shown in Appendix A.

Theorem 3 states that PM-DMED-Hinge has a regret upper bound that matches the lower bound of Theorem 2.
Corollary 4. (Optimality in the N-armed bandit problem) In the N-armed Bernoulli bandit problem, the regularity conditions in Theorem 3 always hold. Moreover, the coefficient of the leading logarithmic term in the regret bound of the partial monitoring problem is equal to the bound given in Lai and Robbins [2]. Namely, C_1^*(p*, {p_i^*}) = sum_{i != 1} Delta_i / d(mu_i || mu_1), where d(p || q) = p log(p/q) + (1 - p) log((1 - p)/(1 - q)) is the KL divergence between Bernoulli distributions.

Corollary 4, which is proven in Appendix C, states that PM-DMED-Hinge attains the optimal regret of the N-armed bandit if we run it on an N-armed bandit game represented as partial monitoring.
Asymptotic analysis: it is in Theorem 6 that we lose the finite-time property. This theorem shows the continuity of the optimal solution set R_1^*(p, {p_i, delta_i}) of C_j^*(p, {p_j}), but it does not quantify how close R_1^*(p, {p_i, delta_i}) is to R_1^*(p*, {p_i^*, 0}) if max{||p - p*||_M, max_i ||p_i - p_i^*||_M, max_i delta_i} <= delta for a given delta. To obtain an explicit bound, we need sensitivity analysis: the theory of the robustness of the optimal value and the solution under small deviations of the parameters (see, e.g., Fiacco [21]). In particular, the optimal solution of partial monitoring involves an infinite number of constraints, which makes the analysis quite hard. For this reason, we do not perform a finite-time analysis. Note that the N-armed bandit problem is a special instance in which we can avoid solving the above optimization, and a finite-time optimal bound is known.
Necessity of the discount factor: we are not sure whether the discount factor f(n) in PM-DMED-Hinge is necessary or not. We also empirically tested PM-DMED-Hinge: although it is better than the other algorithms in many settings, such as dynamic pricing with an intermediate opponent, it is far worse than PM-DMED. We found that our implementation, which uses the Ipopt nonlinear optimization solver, was sometimes inaccurate at optimizing (5): there were some cases in which the true p* satisfies sum_{i in [N]} (D(p-hat_i(t) || S_i p*) - f(N_i(t)))_+ = 0, while the solution p-hat(t) we obtained had non-zero hinge values. In this case, the algorithm lists all actions from (7), which degrades performance. Determining whether the discount factor is essential or not is left for future work.
Acknowledgements

The authors gratefully acknowledge the advice of Kentaro Minami and sincerely thank the anonymous reviewers for their useful comments. This work was supported in part by JSPS KAKENHI Grant Numbers 15J09850 and 26106506.
References

[1] Nick Littlestone and Manfred K. Warmuth. The weighted majority algorithm. Inf. Comput., 108(2):212-261, February 1994.
[2] T. L. Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4-22, 1985.
[3] Robert D. Kleinberg and Frank Thomson Leighton. The value of knowing a demand curve: Bounds on regret for online posted-price auctions. In FOCS, pages 594-605, 2003.
[4] Alekh Agarwal, Peter L. Bartlett, and Max Dama. Optimal allocation strategies for the dark pool problem. In AISTATS, pages 9-16, 2010.
[5] Nicolò Cesa-Bianchi, Gábor Lugosi, and Gilles Stoltz. Minimizing regret with label efficient prediction. IEEE Transactions on Information Theory, 51(6):2152-2162, 2005.
[6] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, pages 928-936, 2003.
[7] Varsha Dani, Thomas P. Hayes, and Sham M. Kakade. Stochastic linear optimization under bandit feedback. In COLT, pages 355-366, 2008.
[8] Antonio Piccolboni and Christian Schindelhauer. Discrete prediction games with arbitrary feedback and loss. In COLT, pages 208-223, 2001.
[9] Nicolò Cesa-Bianchi, Gábor Lugosi, and Gilles Stoltz. Regret minimization under partial monitoring. Math. Oper. Res., 31(3):562-580, 2006.
[10] Gábor Bartók, Dávid Pál, and Csaba Szepesvári. Minimax regret of finite partial-monitoring games in stochastic environments. In COLT, pages 133-154, 2011.
[11] Gábor Bartók, Navid Zolghadr, and Csaba Szepesvári. An adaptive algorithm for finite stochastic partial monitoring. In ICML, 2012.
[12] Gábor Bartók. A near-optimal algorithm for finite partial-monitoring games against adversarial opponents. In COLT, pages 696-710, 2013.
[13] Hastagiri P. Vanchinathan, Gábor Bartók, and Andreas Krause. Efficient partial monitoring with prior information. In NIPS, pages 1691-1699, 2014.
[14] Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47:235-256, 2002.
[15] Aurélien Garivier and Olivier Cappé. The KL-UCB algorithm for bounded stochastic bandits and beyond. In COLT, pages 359-376, 2011.
[16] Amir Dembo and Ofer Zeitouni. Large Deviations Techniques and Applications. Applications of Mathematics. Springer, New York, 1998.
[17] Junya Honda and Akimichi Takemura. An asymptotically optimal bandit algorithm for bounded support models. In COLT, pages 67-79, 2010.
[18] Andreas Wächter and Carl D. Laird. Interior point optimizer (IPOPT).
[19] Gurobi Optimization Inc. Gurobi optimizer.
[20] S. Ito, Y. Liu, and K. L. Teo. A dual parametrization method for convex semi-infinite programming. Annals of Operations Research, 98(1-4):189-213, 2000.
[21] Anthony V. Fiacco. Introduction to Sensitivity and Stability Analysis in Nonlinear Programming. Academic Press, New York, 1983.
Price of Past Mistakes
Elad Hazan
Princeton University
New York, USA
[email protected]
Oren Anava
Technion
Haifa, Israel
[email protected]
Shie Mannor
Technion
Haifa, Israel
[email protected]
Abstract
The framework of online learning with memory naturally captures learning problems with temporal effects, and was previously studied for the experts setting. In
this work we extend the notion of learning with memory to the general Online
Convex Optimization (OCO) framework, and present two algorithms that attain
low regret. The first algorithm applies to Lipschitz continuous loss functions, obtaining optimal regret bounds for both convex and strongly convex losses. The
second algorithm attains the optimal regret bounds and applies more broadly to
convex losses without requiring Lipschitz continuity, yet is more complicated to
implement. We complement the theoretical results with two applications: statistical arbitrage in finance, and multi-step ahead prediction in statistics.
1 Introduction
Online learning is a well-established learning paradigm which has both theoretical and practical
appeals. The goal in this paradigm is to make a sequential decision, where at each trial the cost
associated with previous prediction tasks is given. In recent years, online learning has been widely
applied to several research fields including game theory, information theory, and optimization. We
refer the reader to [1, 2, 3] for a more comprehensive survey.
One of the most well-studied frameworks of online learning is Online Convex Optimization (OCO).
In this framework, an online player iteratively chooses a decision in a convex set, then a convex loss
function is revealed, and the player suffers loss that is the convex function applied to the decision
she chose. It is usually assumed that the loss functions are chosen arbitrarily, possibly by an allpowerful adversary. The performance of the online player is measured using the regret criterion,
which compares the accumulated loss of the player with the accumulated loss of the best fixed
decision in hindsight.
The above notion of regret captures only memoryless adversaries who determine the loss based on the player's current decision, and fails to cope with bounded-memory adversaries who determine the loss based on the player's current and previous decisions. However, in many scenarios such as coding, compression, portfolio selection and more, the adversary is not completely memoryless and the previous decisions of the player affect her current loss. We are particularly concerned with scenarios in which the memory is relatively short-term and simple, in contrast to state-action models for which reinforcement learning models are more suitable [4].
An important aspect of our work is that the memory is not used to relax the adaptiveness of the adversary (cf. [5, 6]), but rather to model the feedback received by the player. In particular, throughout
this work we assume that the adversary is oblivious, that is, must determine the whole set of loss
functions in advance. In addition, we assume a counterfactual feedback model: the player is aware
of the loss she would suffer had she played any sequence of m decisions in the previous m rounds.
This model is quite common in the online learning literature; see for instance [7, 8].
Our goal in this work is to extend the notion of learning with memory to one of the most general online learning frameworks, the OCO. To this end, we adapt the policy regret criterion of [5] (see footnote 1) and propose two different approaches for the extended framework, both of which attain the optimal bounds with respect to this criterion.
1.1 Summary of Results

We present and analyze two algorithms for the framework of OCO with memory, both of which attain policy regret bounds that are optimal in the number of rounds. Our first algorithm utilizes the Lipschitz property of the loss functions and, to the best of our knowledge, is the first algorithm for this framework that is not based on any blocking technique (this technique is detailed in the related work section below). This algorithm attains O(T^{1/2})-policy regret for general convex loss functions and O(log T)-policy regret for strongly convex losses.

For the case of convex and non-Lipschitz loss functions, our second algorithm attains the nearly optimal O-tilde(T^{1/2})-policy regret (see footnote 2); its downside is that it is randomized and more difficult to implement. A novel result that follows immediately from our analysis is that our second algorithm attains an expected O-tilde(T^{1/2})-regret, along with O-tilde(T^{1/2}) decision switches in the standard OCO framework. A similar result currently exists only for the special case of the experts problem [9]. We note that the two algorithms we present are related in spirit (both designed to cope with bounded-memory adversaries), but differ in the techniques and analysis.
Framework | Previous bound | Our first approach | Our second approach
Experts with Memory | O(T^{1/2}) | Not applicable | O-tilde(T^{1/2})
OCO with memory (convex losses) | O(T^{2/3}) | O(T^{1/2}) | O-tilde(T^{1/2})
OCO with memory (strongly convex losses) | O-tilde(T^{1/3}) | O(log T) | O-tilde(T^{1/2})

Table 1: State-of-the-art upper bounds on the policy regret as a function of T (number of rounds) for the framework of OCO with memory. The best known bounds are due to the works of [9], [8], and [5], which are detailed in the related work section below.
1.2 Related Work
The framework of OCO with memory was initially considered in [7] as an extension to the experts framework of [10]. Merhav et al. offered a blocking technique that guarantees a policy regret bound of O(T^{2/3}) against bounded-memory adversaries. Roughly speaking, the proposed technique divides the T rounds into T^{2/3} equal-sized blocks, while employing a constant decision throughout each of these blocks. The small number of decision switches enables the learning in the extended framework, yet the constant block size results in a suboptimal policy regret bound.

Later, [8] showed that a policy regret bound of O(T^{1/2}) can be achieved by simply adapting the Shrinking Dartboard (SD) algorithm of [9] to the framework considered in [7]. In short, the SD algorithm is aimed at ensuring an expected O(T^{1/2}) decision switches in addition to O(T^{1/2})-regret. These two properties together enable the learning in the considered framework, and the randomized block size yields an optimal policy regret bound. Note that in both [7] and [8], the presented techniques are applicable only to the variant of the experts framework for adversaries with memory, and not to the general OCO framework.

1: The policy regret compares the performance of the online player with the best fixed sequence of actions in hindsight, and thus captures the notion of adversaries with memory. A formal definition appears in Section 2.
2: The notation O-tilde(.) is a variant of the O(.) notation that ignores logarithmic factors.
The framework of online learning against adversaries with memory was studied also in the setting of the adversarial multi-armed bandit problem. In this context, [5] showed how to convert an online learning algorithm with a regret guarantee of O(T^q) into an online learning algorithm that attains O(T^{1/(2-q)})-policy regret, also using a blocking technique. This approach is in fact a generalization of [7] to the bandit setting, yet the ideas presented are somewhat simpler. Despite the original presentation of [5] in the bandit setting, their ideas can be easily generalized to the framework of OCO with memory, yielding a policy regret bound of O(T^{2/3}) for convex losses and O-tilde(T^{1/3})-policy regret for strongly convex losses.
An important concept that is captured by the framework of OCO with memory is switching costs,
which can be seen as a special case where the memory is of length 1. This special case was studied in
the works of [11], who studied the relationship between second order regret bounds and switching
costs; and [12], who proved that the blocking algorithm of [5] is optimal for the setting of the
adversarial multi-armed bandit with switching costs.
2 Preliminaries and Model
We continue to formally define the notation for both the standard OCO framework and the framework of OCO with memory. For the sake of readability, we shall use the notation g_t for memoryless loss functions (which correspond to memoryless adversaries), and f_t for loss functions with memory (which correspond to bounded-memory adversaries).
2.1 The Standard OCO Framework
In the standard OCO framework, an online player iteratively chooses a decision x_t in K, and suffers loss that is equal to g_t(x_t). The decision set K is assumed to be a bounded convex subset of R^n, and the loss functions {g_t}_{t=1}^T are assumed to be convex functions from K to [0,1]. In addition, the set {g_t}_{t=1}^T is assumed to be chosen in advance, possibly by an all-powerful adversary that has full knowledge of our learning algorithm (see [1], for instance). The performance of the player is measured using the regret criterion, defined as follows:

$$R_T = \sum_{t=1}^{T} g_t(x_t) - \min_{x \in \mathcal{K}} \sum_{t=1}^{T} g_t(x),$$

where T is a predefined integer denoting the total number of rounds played. The goal in this framework is to design efficient algorithms whose regret grows sublinearly in T, corresponding to an average per-round regret going to zero as T increases.
2.2 The Framework of OCO with Memory
In this work we consider the framework of OCO with memory, detailed as follows: at each round t, the online player chooses a decision x_t in K, a subset of R^n. Then, a loss function f_t : K^{m+1} -> R is revealed, and the player suffers loss of f_t(x_{t-m}, ..., x_t). For simplicity, we assume that 0 is in K, and that f_t(x_0, ..., x_m) is in [0,1] for any x_0, ..., x_m in K. Notice that the loss at round t depends on the previous m decisions of the player, as well as on his current one. We assume that after f_t is revealed, the player is aware of the loss she would suffer had she played any sequence of decisions x_{t-m}, ..., x_t (this corresponds to the counterfactual feedback model mentioned earlier).
Our goal in this framework is to minimize the policy regret, as defined in [5] (see footnote 3):

$$R_{T,m} = \sum_{t=m}^{T} f_t(x_{t-m},\dots,x_t) - \min_{x \in \mathcal{K}} \sum_{t=m}^{T} f_t(x,\dots,x).$$

We define the notion of convexity for the loss functions {f_t}_{t=1}^T as follows: we say that f_t is a convex loss function with memory if f-tilde_t(x) = f_t(x, ..., x) is convex in x. From now on, we assume that {f_t}_{t=1}^T are convex loss functions with memory. This assumption is necessary in some cases if efficient algorithms are considered; otherwise, the optimization problem min_{x in K} sum_{t=m}^T f_t(x, ..., x) might not be solvable efficiently.

3: The rounds in which t < m are ignored, since we assume that the loss per round is bounded by a constant; this adds at most a constant to the final regret bound.
Algorithm 1
1: Input: learning rate eta > 0, sigma-strongly convex and smooth regularization function R(x).
2: Choose x_0, ..., x_m in K arbitrarily.
3: for t = m to T do
4:   Play x_t and suffer loss f_t(x_{t-m}, ..., x_t).
5:   Set x_{t+1} = arg min_{x in K} { eta * sum_{tau=m}^{t} f-tilde_tau(x) + R(x) }
6: end for
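As a minimal numerical sketch of Algorithm 1 (our own illustration, assuming R(x) = ||x||^2 / 2, a Euclidean-ball decision set so the arg min has a closed form via projection, and a standard linearization of the RFTL step; `grad_tilde` is a hypothetical callback returning a subgradient of f-tilde_t):

```python
import numpy as np

def rftl_with_memory(T, m, n, grad_tilde, eta, radius=1.0):
    """Sketch of Algorithm 1 with R(x) = ||x||^2/2 and K a Euclidean ball.
    `grad_tilde(t, x)` returns a subgradient of f~_t(x) = f_t(x, ..., x)."""
    xs = [np.zeros(n) for _ in range(m + 1)]   # x_0, ..., x_m
    grad_sum = np.zeros(n)
    for t in range(m, T):
        x_t = xs[-1]
        grad_sum += grad_tilde(t, x_t)
        # FTRL step with linearized losses: minimize eta*<grad_sum, x> + ||x||^2/2
        x_next = -eta * grad_sum
        norm = np.linalg.norm(x_next)
        if norm > radius:                      # project back onto K
            x_next *= radius / norm
        xs.append(x_next)
    return xs
```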
3 Policy Regret for Lipschitz Continuous Loss Functions

In this section we assume that the loss functions {f_t}_{t=1}^T are Lipschitz continuous for some Lipschitz constant L, that is,

$$|f_t(x_0,\dots,x_m) - f_t(y_0,\dots,y_m)| \ \le\ L \cdot \|(x_0,\dots,x_m) - (y_0,\dots,y_m)\|,$$

and adapt the well-known Regularized Follow The Leader (RFTL) algorithm to cope with bounded-memory adversaries. In the above and throughout the paper, we use ||.|| to denote the l2-norm. Due to space constraints, we present here only the algorithm and the main theorem, and defer the complete analysis to the supplementary material.

Intuitively, Algorithm 1 relies on the fact that the corresponding functions {f-tilde_t}_{t=1}^T are memoryless and convex. Thus, standard regret minimization techniques are applicable, yielding a regret bound of O(T^{1/2}) for {f-tilde_t}_{t=1}^T. This, however, is not the policy regret bound we are interested in, but it is in fact quite close if we use the Lipschitz property of {f_t}_{t=1}^T and set the learning rate properly. The algorithm requires the following standard definitions of R and lambda (see the supplementary material for more comprehensive background and exact norm definitions):

$$\lambda = \sup_{t \in \{1,\dots,T\},\ x,y \in \mathcal{K}} \big(\|\nabla \tilde{f}_t(x)\|_y^*\big)^2 \quad \text{and} \quad R = \sup_{x,y \in \mathcal{K}} \{R(x) - R(y)\}. \qquad (1)$$

Additionally, we denote by sigma the strong convexity parameter of the regularization function R(x) (see footnote 4).
For Algorithm 1 we can prove the following:
Theorem 3.1. Let {f_t}_{t=1}^T be Lipschitz continuous loss functions with memory (from K^{m+1} to [0,1]), and let R and lambda be as defined in Equation (1). Then, Algorithm 1 generates an online sequence {x_t}_{t=1}^T for which the following holds:

$$R_{T,m} = \sum_{t=m}^{T} f_t(x_{t-m},\dots,x_t) - \min_{x \in \mathcal{K}} \sum_{t=m}^{T} f_t(x,\dots,x) \ \le\ 2T\eta L (m+1)^{3/2} \sqrt{\lambda/\sigma} + \frac{R}{\eta}.$$

Setting eta = R^{1/2} (TL)^{-1/2} (m+1)^{-3/4} (lambda/sigma)^{-1/4} yields R_{T,m} <= 3 (TRL)^{1/2} (m+1)^{3/4} (lambda/sigma)^{1/4}.
The following is an immediate corollary of Theorem 3.1 for H-strongly convex losses:

Corollary 3.2. Let {f_t}_{t=1}^T be Lipschitz continuous and H-strongly convex loss functions with memory (from K^{m+1} to [0,1]), and denote G = sup_{t, x in K} ||grad f-tilde_t(x)||. Then, Algorithm 1 (run with time-dependent learning rates eta_t) generates an online sequence {x_t}_{t=1}^T for which the following holds:

$$R_{T,m} \ \le\ 2(m+1)^{3/2} G^2 \sum_{t=m}^{T} \eta_t + \sum_{t=m}^{T} \|x_t - x^*\|^2 \Big(\frac{1}{\eta_{t+1}} - \frac{1}{\eta_t} - H\Big).$$

Setting eta_t = 1/(Ht) yields R_{T,m} <= (2(m+1)^{3/2} G^2 / H)(1 + log T).

The proof simply requires plugging a time-dependent learning rate into the proof of Theorem 3.1, and is thus omitted here.

4: f(x) is zeta-strongly convex if the Hessian of f satisfies grad^2 f(x) >= zeta * I_{n x n} for all x in K. We say that f_t : K^{m+1} -> R is a zeta-strongly convex loss function with memory if f-tilde_t(x) = f_t(x, ..., x) is zeta-strongly convex in x.
Algorithm 2
1: Input: learning parameter eta > 0.
2: Initialize w_1(x) = 1 for all x in K, and choose x_1 in K arbitrarily.
3: for t = 1 to T do
4:   Play x_t and suffer loss g_t(x_t).
5:   Define weights w_{t+1}(x) = exp(-eta * sum_{tau=1}^{t} g-hat_tau(x)), where g-hat_t(x) = g_t(x) + (alpha/2)||x||^2 and eta = alpha / (4G^2).
6:   Set x_{t+1} = x_t with probability w_{t+1}(x_t) / w_t(x_t).
7:   Otherwise, sample x_{t+1} from the density function p_{t+1}(x) = w_{t+1}(x) * (integral over K of w_{t+1}(x) dx)^{-1}.
8: end for
4 Policy Regret with Low Switches

In this section we present a different approach to the framework of OCO with memory: low switches. This approach was considered before in [8], who adapted the Shrinking Dartboard (SD) algorithm of [9] to cope with limited-delay coding. However, the authors in [9, 8] consider only the experts setting, in which the decision set is the simplex and the loss functions are linear. Here we adapt this approach to general decision sets and generally convex loss functions, and obtain optimal policy regret against bounded-memory adversaries.

Due to space constraints, we present here only the algorithm and the main theorem. The complete analysis appears in the supplementary material.
Intuitively, Algorithm 2 defines a probability distribution over K at each round t. By sampling from this probability distribution one can generate an online sequence that has an expected low-regret guarantee. This, however, is not sufficient in order to cope with bounded-memory adversaries, and thus an additional element of choosing x_{t+1} = x_t with high probability is necessary (line 6). Our analysis shows that if this probability is equal to w_{t+1}(x_t) / w_t(x_t), the regret guarantee remains, and we get an additional low-switches guarantee.
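The following Python sketch illustrates the low-switch mechanism; it is our own simplification in which K is replaced by a finite grid of points so that the continuous density can be sampled exactly, and all helper names are ours.

```python
import numpy as np

def low_switch_oco(T, points, g, eta, alpha, seed=0):
    """Sketch of Algorithm 2 on a finite grid `points` (shape: k x n)
    standing in for K. `g(t, x)` returns the loss g_t(x) in [0, 1]."""
    rng = np.random.default_rng(seed)
    k = len(points)
    cum = np.zeros(k)                 # cumulative regularized losses per point
    idx = rng.integers(k)             # current decision x_1
    switches, decisions = 0, []
    for t in range(1, T + 1):
        decisions.append(points[idx])
        losses = np.array([g(t, x) + 0.5 * alpha * np.dot(x, x) for x in points])
        w_old = np.exp(-eta * cum)
        cum += losses
        w_new = np.exp(-eta * cum)
        # stay with probability w_new/w_old (always <= 1), else resample
        if rng.random() > w_new[idx] / w_old[idx]:
            idx = rng.choice(k, p=w_new / w_new.sum())
            switches += 1
    return decisions, switches
```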
For Algorithm 2 we can prove the following:

Theorem 4.1. Let {g_t}_{t=1}^T be convex functions from K to [0,1], such that D = sup_{x,y in K} ||x - y|| and G = sup_{x,t} ||grad g_t(x)||, and define g-hat_t(x) = g_t(x) + (alpha/2)||x||^2 for alpha = 2G sqrt((1 + log(T+1)) / D). Then, Algorithm 2 generates an online sequence {x_t}_{t=1}^T for which it holds that

$$\mathbb{E}[R_T] = O\big(\sqrt{T \log(T)}\big) \quad \text{and} \quad \mathbb{E}[S] = O\big(\sqrt{T \log(T)}\big),$$

where S is the number of decision switches in the sequence {x_t}_{t=1}^T.
The exact bounds for E [RT ] and E [S] are given in the supplementary material. Notice that Algorithm 2 applies to memoryless loss functions, yet its low switches guarantee implies learning against
bounded-memory adversaries as stated and proven in Lemma C.5 (see supplementary material).
5 Application to Statistical Arbitrage

Our first application is motivated by financial models that are aimed at creating statistical arbitrage opportunities. In the literature, "statistical arbitrage" refers to statistical mispricing of one or more assets based on their expected value. One of the most common trading strategies, known as "pairs trading", seeks to create a mean reverting portfolio using two assets with the same sectorial belonging (typically using both long and short sales). Then, by buying this portfolio below its mean and selling it above, one can have an expected positive profit with low risk.
Here we extend the traditional pairs trading strategy, and present an approach that aims at constructing a mean reverting portfolio from an arbitrary (yet known in advance) number of assets. Roughly speaking, our goal is to synthetically create a mean reverting portfolio by maintaining weights upon n different assets. The main problem that arises in this context is how to quantify the amount of mean
reversion of a given portfolio? Indeed, mean reversion is somewhat an ill-defined concept, and thus
5
different proxies are usually defined to capture its notion. We refer the reader to [13, 14, 15], in
which few of these proxies (such as predictability and zero-crossing) are presented.
In this work, we consider a proxy that is aimed at preserving the mean price of the constructed
portfolio (over the last m trading periods) close to zero, while maximizing its variance. We note that
due to the very nature of the problem (weights of one trading period affect future performance), the
memory comes unavoidably into the picture.
We proceed to formally define the new mean reversion proxy and the use of our new algorithm in
this model. Thus, denote by yt ? Rn the prices of n assets at time t, and by xt ? Rn a distribution
of weights over these assets. Since short selling is allowed, the norm of xt can sum up to an arbitrary
number, determined by the loan flexibility. Without loss of generality we assume that kxt k2 = 1,
which is also assumed in the works of [14, 15]. Note that since xt determines the proportion of
wealth to be invested in each asset and not the actual wealth it self, any other constant would work
as well. Consequently, define:
$$f_t(x_{t-m},\dots,x_t) = \Big( \sum_{i=0}^{m} x_{t-i}^\top y_{t-i} \Big)^2 - \lambda \sum_{i=0}^{m} \big( x_{t-i}^\top y_{t-i} \big)^2, \qquad (2)$$
for some lambda > 0. Notice that minimizing f_t iteratively yields a process {x_t^T y_t}_{t=1}^T such that its mean is close to zero (due to the expression on the left), and its variance is maximized (due to the expression on the right). We use the regret criterion to measure our performance against the best distribution of weights in hindsight, and wish to generate a series of weights {x_t}_{t=1}^T such that the regret is sublinear. Thus, define the memoryless loss function f-tilde_t(x) = f_t(x, ..., x) and denote
$$A_t = \sum_{i=0}^{m-1} \sum_{j=0}^{m-1} y_{t-i}\, y_{t-j}^\top \quad \text{and} \quad B_t = \lambda \sum_{i=0}^{m-1} y_{t-i}\, y_{t-i}^\top.$$
Notice that we can write f-tilde_t(x) = x^T A_t x - x^T B_t x. Since f-tilde_t is not convex in general, our techniques are not straightforwardly applicable here. However, the hidden convexity of the problem allows us to bypass this issue by a simple and tight Positive Semi-Definite (PSD) relaxation. Define

$$h_t(X) = X \bullet A_t - X \bullet B_t, \qquad (3)$$

where X is a PSD matrix with Tr(X) = 1, and X • A is defined as sum_{i=1}^n sum_{j=1}^n X(i,j) * A(i,j). Now, notice that the problem of minimizing sum_{t=m}^T h_t(X) is a PSD relaxation of the minimization problem sum_{t=m}^T f-tilde_t(x), and for the optimal solution it holds that:
>
min
X
T
X
t=m
>
ht (X) ?
T
X
ht (x? x?> ) =
t=m
T
X
f?t (x? ).
t=m
PT
where x = arg minx?K t=m f?t (x). Also, we can recover a vector
Pn x from>the PSD matrix X
using an eigenvector decomposition as follows: represent X P
=
i=1 ?i vi vi , where each vi is
n
a unit vector and ?i are non-negative coefficients such that i=1 ?i = 1. Then, by sampling
the eigenvector x = vi with probability ?i , we get that E f?t (x) = ht (X). Technically, this
decomposition is possible due to the fact that X is a PSD matrix with T r(X) = 1. Notice that ht
is linear in X, and thus we can apply regret minimization techniques on the loss functions {ht }Tt=1 .
This procedure is formally given in Algorithm 3. For this algorithm we can prove the following:
?
Corollary 5.1. Let {ft }Tt=1 be as defined in Equation (2), and {ht }Tt=1 be the corresponding memoryless functions, as defined in Equation (3). Then, applying Algorithm 2 to the loss functions
{ht }Tt=1 yields an online sequence {Xt }Tt=1 , for which the following holds:
T
X
t=1
E [ht (Xt )] ? min
T
X
X0
Tr(X)=1 t=1
ht (X) = O
p
T log(T ) .
Sampling xt ? Xt using the eigenvector decomposition described above yields:
E [RT,m ] =
T
X
t=m
E [ft (xt?m , . . . , xt )] ? min
kxk=1
6
T
X
t=m
ft (x, . . . , x) = O
p
T log(T ) .
Algorithm 3 Online Statistical Arbitrage (OSA)
1: Input: learning rate eta, memory parameter m, regularizer lambda.
2: Initialize X_1 = (1/n) I_{n x n}.
3: for t = 1 to T do
4:   Randomize x_t from X_t using the eigenvector decomposition.
5:   Observe f_t and define h_t as in Equation (3).
6:   Apply Algorithm 2 to h_t(X_t) to get X_{t+1}.
7: end for

Remark: We assume here that the prices of the n assets at round t are bounded for all t by a constant which is independent of T.
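The sampling step in line 4 can be implemented directly; a minimal numpy sketch (our own, with small negative eigenvalues clipped for numerical safety):

```python
import numpy as np

def sample_from_density_matrix(X, rng=None):
    """Draw a unit vector x = v_i with probability lambda_i, where
    X = sum_i lambda_i v_i v_i^T is PSD with trace 1, so that
    E[x^T A x] equals the trace inner product X . A for symmetric A."""
    rng = rng or np.random.default_rng()
    lam, V = np.linalg.eigh(X)       # columns of V are the eigenvectors
    lam = np.clip(lam, 0.0, None)    # clip tiny negative eigenvalues
    lam = lam / lam.sum()            # renormalize to a distribution
    i = rng.choice(len(lam), p=lam)
    return V[:, i]
```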
The main novelty of our approach to the task of constructing mean reverting portfolios is the ability
to maintain the weight distributions online. This is in contrast to the traditional offline approaches
that require a training period (to learn a weight distribution), and a trading period (to apply a corresponding trading strategy).
6 Application to Multi-Step Ahead Prediction

Our second application is motivated by statistical models for time series prediction, and in particular by statistical models for multi-step ahead AR prediction. Thus, let {X_t}_{t=1}^T be a time series (that is, a series of signal observations). The traditional AR (short for autoregressive) model, parameterized by lag p and coefficient vector alpha in R^p, assumes that each observation complies with

$$X_t = \sum_{k=1}^{p} \alpha_k X_{t-k} + \epsilon_t,$$

where {epsilon_t} is white noise. In words, the model assumes that X_t is a noisy linear combination of the previous p observations. Sometimes, an additional additive term alpha_0 is included to indicate drift, but we ignore this for simplicity.
The online setting for time series prediction is well-established by now, and appears in the works of [16, 17]. Here, we adapt this setting to the task of multi-step ahead AR prediction as follows: at round t, the online player has to predict X_{t+m}, while at her disposal are all the previous observations X_1, ..., X_{t-1} (the parameter m determines the number of steps ahead). Then, X_t is revealed and she suffers loss of f_t(X_t, X-hat_t), where X-hat_t denotes her prediction for X_t. For simplicity, we consider the squared loss to be our error measure, that is, f_t(X_t, X-hat_t) = (X_t - X-hat_t)^2.
In the statistical literature, a common approach to the problem of multi-step ahead prediction is to consider 1-step ahead recursive AR predictors [18, 19]: essentially, this approach makes use of standard methods (e.g., maximum likelihood or least squares estimation) to extract the 1-step ahead estimator. For instance, a least squares estimator for alpha at round t would be:

$$\alpha^{LS} = \arg\min_{\alpha} \sum_{\tau=1}^{t-1} \big(X_\tau - \hat{X}_\tau^{AR}(\alpha)\big)^2 = \arg\min_{\alpha} \sum_{\tau=1}^{t-1} \Big(X_\tau - \sum_{k=1}^{p} \alpha_k X_{\tau-k}\Big)^2.$$

Then, alpha^{LS} is used to generate a prediction for X_t: X-hat_t^{AR}(alpha^{LS}) = sum_{i=1}^p alpha_i^{LS} X_{t-i}, which is in turn used as a proxy for the true value in order to predict X_{t+1}:

$$\hat{X}_{t+1}^{AR}(\alpha^{LS}) = \alpha_1^{LS}\, \hat{X}_t^{AR}(\alpha^{LS}) + \sum_{k=2}^{p} \alpha_k^{LS} X_{t-k+1}. \qquad (4)$$

The values of X_{t+2}, ..., X_{t+m} are predicted in the same recursive manner. The most obvious drawback of this approach is that not much can be said about the quality of this predictor even if the AR model is well-specified, let alone if it is not (see [18] for further discussion on this issue).
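A compact sketch of this recursive scheme in Python (our own illustration of Eq. (4); each 1-step forecast is fed back into the buffer as if it had been observed):

```python
def recursive_ar_forecast(history, alpha, m):
    """m-step ahead forecast by recursively applying a 1-step AR(p)
    predictor: x_hat = sum_k alpha[k-1] * X_{t-k}, with forecasts
    plugged back in place of missing observations."""
    buf = list(history)                  # ..., X_{t-2}, X_{t-1}
    p = len(alpha)
    for _ in range(m):
        x_hat = sum(alpha[k] * buf[-(k + 1)] for k in range(p))
        buf.append(x_hat)                # reuse the forecast as a proxy
    return buf[-1]                       # the m-step ahead forecast
```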
In light of this, the motivation to formulate the problem of multi-step ahead prediction in the online setting is quite clear: attaining regret in this setting would imply that our algorithm's performance is comparable with that of the best 1-step ahead recursive AR predictor in hindsight (even if the latter is misspecified). Our adaptation of Algorithm 1 to this setting is given as Algorithm 4.

Algorithm 4 Adaptation of Algorithm 1 to Multi-Step Ahead Prediction
1: Input: learning rate eta, regularization function R(x), signal {X_t}_{t=1}^T.
2: Choose w^0, ..., w^m in K_IP arbitrarily.
3: for t = m to T do
4:   Predict X-hat_t^{IP}(w^{t-m}) = sum_{k=1}^p w_k^{t-m} X_{t-m-k} and suffer loss (X_t - X-hat_t^{IP}(w^{t-m}))^2.
5:   Set w^{t+1} = arg min_{w in K_IP} { eta * sum_{tau=m}^{t} (X_tau - X-hat_tau^{IP}(w))^2 + ||w||^2 / 2 }
6: end for

Thus, our goal is to minimize the following regret term:

$$R_T = \sum_{t=1}^{T} \big(X_t - \hat{X}_t\big)^2 - \min_{\alpha \in \mathcal{K}} \sum_{t=1}^{T} \big(X_t - \hat{X}_t^{AR}(\alpha)\big)^2,$$
where K denotes the set of all 1-step ahead recursive AR predictors, against which we want to compete. Note that since the feedback is delayed (the AR coefficients chosen at round t-m are used to generate the prediction at round t), the memory comes unavoidably into the picture. Nevertheless, here also both of our techniques are not straightforwardly applicable, due to the non-convex structure of the problem: each prediction X-hat_t^{AR}(alpha) contains products of alpha coefficients that cause the losses to be non-convex in alpha.
To circumvent this issue, we use non-proper learning techniques, and let our predictions be of the form X-hat_{t+m}^{NP}(w) = sum_{k=1}^p w_k X_{t-k} for a properly chosen set K_IP in R^p of the w coefficients. Basically, the idea is to show that (a) attaining a regret bound with respect to the best predictor in the new family can be done using the techniques we present in this work; and (b) the best predictor in the new family is better than the best 1-step ahead recursive AR predictor. This would imply a regret bound with respect to the best 1-step ahead recursive AR predictor in hindsight. Our formal result is given in the following corollary:
Corollary 6.1. Let D = sup_{w_1, w_2 in K_IP} ||w_1 - w_2||_2 and G = sup_{w, t} ||grad f_t(X_t, X-hat_t(w))||_2. Then, Algorithm 4 generates an online sequence {w^t}_{t=1}^T for which it holds that

$$\sum_{t=1}^{T} \big(X_t - \hat{X}_t^{IP}(w^{t-m})\big)^2 - \min_{\alpha \in \mathcal{K}} \sum_{t=1}^{T} \big(X_t - \hat{X}_t^{AR}(\alpha)\big)^2 \ \le\ 3GD\sqrt{Tm}.$$

Remark: The tighter bound in m (m^{1/2} instead of m^{3/4}) follows directly by modifying the proof of Theorem 3.1 to this setting (f_t is affected only by w^{t-m}, and not by w^{t-m}, ..., w^t).
In the above, the values of D and G are determined by the choice of the set K. For instance, if we want to compete against the best alpha in K = [-1, 1]^p, we need to use the restriction w_k <= 2^m for all k. In this case, D <= 2^m and G <= 1. If we consider K to be the set of all alpha in R^p such that alpha_k <= (1/sqrt(2))^k, we get that D <= sqrt(m) and G <= 1.
The main novelty of our approach to the task of multi-step ahead prediction is the elimination of
generative assumptions on the data, that is, we allow the time series to be arbitrarily generated. Such
assumptions are common in the statistical literature, and needed in general to extract ML estimators.
7
Discussion and Conclusion
In this work we extended the notion of online learning with memory to capture the general OCO
framework, and proposed two algorithms with tight regret guarantees. We then applied our algorithms to two extensively studied problems: construction of mean reverting portfolios, and multistep
ahead prediction. It remains for future work to further investigate the performance of our algorithms
in these problems and other problems in which the memory naturally arises.
Acknowledgments

This work has been supported by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement 306638 (SUPREL).
References

[1] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[2] Elad Hazan. The convex optimization approach to regret minimization. Optimization for Machine Learning, page 287, 2011.
[3] Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107-194, 2012.
[4] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley Series in Probability and Statistics. Wiley, 1994.
[5] Raman Arora, Ofer Dekel, and Ambuj Tewari. Online bandit learning against an adaptive adversary: from regret to policy regret. 2012.
[6] Nicolò Cesa-Bianchi, Ofer Dekel, and Ohad Shamir. Online learning with switching costs and other adaptive adversaries. CoRR, abs/1302.4387, 2013.
[7] Neri Merhav, Erik Ordentlich, Gadiel Seroussi, and Marcelo J. Weinberger. On sequential strategies for loss functions with memory. IEEE Transactions on Information Theory, 48(7):1947-1958, 2002.
[8] András György and Gergely Neu. Near-optimal rates for limited-delay universal lossy source coding. In ISIT, pages 2218-2222, 2011.
[9] Sascha Geulen, Berthold Vöcking, and Melanie Winkler. Regret minimization for online buffering problems using the weighted majority algorithm. In COLT, pages 132-143, 2010.
[10] Nick Littlestone and Manfred K. Warmuth. The weighted majority algorithm. In FOCS, pages 256-261, 1989.
[11] Eyal Gofer. Higher-order regret bounds with switching costs. In Proceedings of The 27th Conference on Learning Theory, pages 210-243, 2014.
[12] Ofer Dekel, Jian Ding, Tomer Koren, and Yuval Peres. Bandits with switching costs: T^{2/3} regret. arXiv preprint arXiv:1310.2997, 2013.
[13] Anatoly B. Schmidt. Financial Markets and Trading: An Introduction to Market Microstructure and Trading Strategies. Wiley, 1 edition, August 2011.
[14] Alexandre d'Aspremont. Identifying small mean-reverting portfolios. Quant. Finance, 11(3):351-364, 2011.
[15] Marco Cuturi and Alexandre d'Aspremont. Mean reversion with a variance threshold. 28(3):271-279, May 2013.
[16] Oren Anava, Elad Hazan, Shie Mannor, and Ohad Shamir. Online learning for time series prediction. arXiv preprint arXiv:1302.6927, 2013.
[17] Oren Anava, Elad Hazan, and Assaf Zeevi. Online time series prediction with missing data. In ICML, 2015.
[18] Michael P. Clements and David F. Hendry. Multi-step estimation for forecasting. Oxford Bulletin of Economics and Statistics, 58(4):657-684, 1996.
[19] Massimiliano Marcellino, James H. Stock, and Mark W. Watson. A comparison of direct and iterated multistep AR methods for forecasting macroeconomic time series. Journal of Econometrics, 135(1):499-526, 2006.
[20] G. S. Maddala and I. M. Kim. Unit Roots, Cointegration, and Structural Change. Themes in Modern Econometrics. Cambridge University Press, 1998.
[21] Søren Johansen. Estimation and hypothesis testing of cointegration vectors in Gaussian vector autoregressive models. Econometrica, 59(6):1551-1580, November 1991.
[22] Jakub W. Jurek and Halla Yang. Dynamic portfolio selection in arbitrage. In EFA 2006 Meetings Paper, 2007.
[23] Elad Hazan, Amit Agarwal, and Satyen Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2-3):169-192, 2007.
[24] László Lovász and Santosh Vempala. Logconcave functions: Geometry and efficient sampling algorithms. In FOCS, pages 640-649. IEEE Computer Society, 2003.
[25] Hariharan Narayanan and Alexander Rakhlin. Random walk approach to regret minimization. In NIPS, pages 1777-1785. Curran Associates, Inc., 2010.
Revenue Optimization against Strategic Buyers
Mehryar Mohri
Courant Institute of Mathematical Sciences
251 Mercer Street
New York, NY, 10012
Andrés Muñoz Medina*
Google Research
111 8th Avenue
New York, NY, 10011
Abstract
We present a revenue optimization algorithm for posted-price auctions when facing a buyer with random valuations who seeks to optimize his γ-discounted surplus. In order to analyze this problem we introduce the notion of ε-strategic buyer, a more natural notion of strategic behavior than what has been considered in the past. We improve upon the previous state-of-the-art and achieve an optimal regret bound in O(log T + 1/log(1/γ)) when the seller selects prices from a finite set, and provide a regret bound in Õ(√T + T^{1/4}/log(1/γ)) when the prices offered are selected out of the interval [0, 1].
1 Introduction
Online advertisement is currently the fastest growing form of advertising. This growth has been
motivated, among other reasons, by the existence of well defined metrics of effectiveness such as
click-through-rate and conversion rates. Moreover, online advertisement enables the design of better
targeted campaigns by allowing advertisers to decide which type of consumers should see their
advertisement. These advantages have promoted the fast pace development of a large number of
advertising platforms. Among them, AdExchanges have increased in popularity in recent years. In
contrast to traditional advertising, AdExchanges do not involve contracts between publishers and
advertisers. Instead, advertisers are allowed to bid in real-time for the right to display their ad.
An AdExchange works as follows: when a user visits a publisher's website, the publisher sends
this information to the AdExchange which runs a second-price auction with reserve (Vickrey, 1961;
Milgrom, 2004) among all interested advertisers. Finally, the winner of the auction gets the right
to display his ad on the publisher's website and pays the maximum of the second highest bid and
the reserve price. In practice, this process is performed in milliseconds, resulting in millions of
transactions recorded daily by the AdExchange. Thus, one might expect that the AdExchange could
benefit from this information by learning how much an advertiser values the right to display his ad
and setting an optimal reserve price. This idea has recently motivated research in the learning community on revenue optimization in second-price auctions with reserve (Mohri and Medina, 2014a;
Cui et al., 2011; Cesa-Bianchi et al., 2015).
The algorithms proposed by these authors heavily rely on the assumption that the advertisers? bids
are drawn i.i.d. from some underlying distribution. However, if an advertiser is aware of the fact that
the AdExchange or publisher are using a revenue optimization algorithm, then, most likely, he would
adjust his behavior to trick the publisher into offering a more beneficial price in the future. Under
this scenario, the assumptions of (Mohri and Medina, 2014a) and (Cesa-Bianchi et al., 2015) would
be violated. In fact, empirical evidence of strategic behavior by advertisers has been documented by
Edelman and Ostrovsky (2007). It is therefore critical to analyze the interactions between publishers
and strategic advertisers.
*This work was partially done at the Courant Institute of Mathematical Sciences.
In this paper, we consider the simpler scenario of revenue optimization in posted-price auctions with
strategic buyers, first analyzed by Amin et al. (2013). As pointed out by Amin et al. (2013), the study
of this simplified problem is truly relevant since a large number of auctions run by AdExchanges
consist of only one buyer (or one buyer with a large bid and several buyers with negligible bids). In
this scenario, a second-price auction in fact reduces to a posted-price auction where the seller sets a
reserve price and the buyer decides to accept it (bid above it) or reject it (bid below).
To analyze the sequential nature of this problem, we can cast it as a repeated game between a buyer
and a seller where a strategic buyer seeks to optimize his surplus while the seller seeks to collect
the largest possible revenue from the buyer. This can be viewed as an instance of a repeated nonzero sum game with incomplete information, which is a problem that has been well studied in the
Economics and Game Theory community (Nachbar, 1997, 2001). However, such previous work has
mostly concentrated on the characterization of different types of achievable equilibria as opposed to
the design of an algorithm for the seller. Furthermore, the problem we consider admits a particular
structure that can be exploited to derive learning algorithms with more favorable guarantees for the
specific task of revenue optimization.
The problem can also be viewed as an instance of a multi-armed bandit problem (Auer et al., 2002;
Lai and Robbins, 1985), more specifically, a particular type of continuous bandit problem previously
studied by Kleinberg and Leighton (2003). Indeed, at every time t the seller can only observe the
revenue of the price he offered and his goal is to find, as fast as possible, the price that would yield the
largest expected revenue. Unlike a bandit problem, however, here, the performance of an algorithm
cannot be measured in terms of the external regret. Indeed, as observed by Bubeck and Cesa-Bianchi
(2012) and Arora et al. (2012), the notion of external regret becomes meaningless when facing an
adversary that reacts to the learner's actions. In short, instead of comparing to the best achievable
revenue by a fixed price over the sequence of rewards seen, one should compare against the simulated
sequence of rewards that would have been seen had the seller played a fixed price. This notion of
regret is known as strategic regret and regret minimization algorithms have been proposed before
under different scenarios (Amin et al., 2013, 2014; Mohri and Medina, 2014a). In this paper we
provide a regret minimization algorithm for the stochastic scenario, where, at each round, the buyer
receives an i.i.d. valuation from an underlying distribution. While this random valuation might seem
surprising, it is in fact a standard assumption in the study of auctions (Milgrom and Weber, 1982;
Milgrom, 2004; Cole and Roughgarden, 2014). Moreover, in practice, advertisers rarely interact
directly with an AdExchange. Instead, several advertisers are part of an ad network and it is that ad
network that bids on their behalf. Therefore, the valuation of the ad network is not likely to remain
fixed. Our model is also motivated by the fact that the valuation of an advertiser depends on the
user visiting the publisher's website. Since these visits can be considered random, it follows that the buyer's valuation is in fact a random variable.
A crucial component of our analysis is the definition of a strategic buyer. We consider a buyer who
seeks to optimize his cumulative discounted surplus. However, we show that a buyer who exactly
maximizes his surplus must have unlimited computational power, which is not a realistic assumption
in practice. Instead, we define the notion of an ε-strategic buyer who seeks only to approximately optimize his surplus. Our main contribution is to show that, when facing an ε-strategic buyer, a seller can achieve O(log T) regret when the set of possible prices to offer is finite, and an O(√T) regret bound when the set of prices is [0, 1]. Remarkably, these bounds on the regret match those given by
Kleinberg and Leighton (2003) in a truthful scenario where the buyer does not behave strategically.
The rest of this paper is organized as follows. In Section 2, we discuss in more detail related previous
work. Next, we define more formally the problem setup (Section 3). In particular, we give a precise
definition of the notion of ε-strategic buyer (Section 3.2). Our main algorithm for a finite set of prices is described in Section 4, where we also provide a regret analysis. In Section 5, we extend our algorithm to the continuous case where we show that a regret in O(√T) can be achieved.
2 Previous work
The problem of revenue optimization in auctions goes back to the seminal work of Myerson (1981),
who showed that under some regularity assumptions over the distribution D, the revenue optimal,
incentive-compatible mechanism is a second-price auction with reserve. This result applies to singleshot auctions where buyers and the seller interact only once and the underlying value distribution is
known to the seller. In practice, however, it is not realistic to assume that the seller has access to this
distribution. Instead, in cases such as on-line advertisement, the seller interacts with the buyer a large
number of times and can therefore infer his behavior from historical data. This fact has motivated
the design of several learning algorithms such as that of (Cesa-Bianchi et al., 2015) who proposed
a bandit algorithm for revenue optimization in second-price auctions; and the work of (Mohri and
Medina, 2014a), who provided learning guarantees and an algorithm for revenue optimization where
each auction is associated with a feature vector.
The aforementioned algorithms are formulated under the assumption of buyers bidding in an i.i.d.
fashion and do not take into account the fact that buyers can in fact react to the use of revenue
optimization algorithms by the seller. This has motivated a series of publications focusing on this
particular problem. Bikhchandani and McCardle (2012) analyzed the same problem proposed here
when the buyer and seller interact for only two rounds. Kanoria and Nazerzadeh (2014) considered a repeated game of second-price auctions where the seller knows that the value distribution
can be either high, meaning it is concentrated around high values, or low; and his goal is to find
out from which distribution the valuations are drawn under the assumption that buyers can behave
strategically.
Finally, the scenario considered here was first introduced by Amin et al. (2013), where the authors solve the problem of optimizing revenue against a strategic buyer with a fixed valuation and showed that a seller can achieve regret in O(√T/(1−γ)). Mohri and Medina (2014b) later showed that one can in fact achieve a regret in O(log T/(1−γ)), closing the gap with the lower bound to a factor of log T. The scenario of random valuations we consider here was also analyzed by Amin et al. (2013), where an algorithm achieving regret in O( |P| T^λ + 1/(1−γ)^{1/λ} + 1/Δ^{2/λ} ) was proposed when prices are offered from a finite set P, with Δ = min_{p∈P} p* D(v > p*) − p D(v > p) and λ a free parameter. Finally, an extension of this algorithm to the contextual setting was presented by the same authors in (Amin et al., 2014), where they provide an algorithm achieving O(T^{2/3}/(1−γ)) regret.
The algorithms proposed by Amin et al. (2013, 2014) consist of alternating exploration and exploitation. That is, there exist rounds where the seller only tries to estimate the value of the buyer and
other rounds where he uses this information to try to extract the largest possible revenue. It is well
known in the bandit literature (Dani and Hayes, 2006; Abernethy et al., 2008) that algorithms that
ignore information obtained on exploitation rounds tend to be sub-optimal. Indeed, even in a truthful
scenario where the UCB algorithm (Auer et al., 2002) achieves regret in O(log T/Δ), the algorithm proposed by Amin et al. (2013) achieves sub-optimal regret in Õ(√(log T · log(1/(1−γ)))) for the optimal choice of λ, which, incidentally, also requires access to the unknown value Δ.

We propose instead an algorithm inspired by the UCB strategy using exploration and exploitation simultaneously. We show that our algorithm admits a regret that is in O( log T + |P|/log(1/γ) ), which matches the UCB bound in the truthful scenario and which depends on γ only through the additive term 1/log(1/γ) ≈ 1/(1−γ), known to be unavoidable (Amin et al., 2013). Our results cannot be directly compared with those of Amin et al. (2013) since they consider a fully strategic adversary whereas we consider an ε-strategic adversary. As we will see in the next section, however, the notion of ε-strategic adversary is in fact more natural than that of a buyer who exactly optimizes his discounted surplus. Moreover, it is not hard to show that, when applied to our scenario, perhaps modulo a constant, the algorithm of Amin et al. (2013) cannot achieve a better regret than against a fully strategic adversary.
3 Setup
We consider the following scenario, similar to the one introduced by Amin et al. (2013).
3.1 Scenario
A buyer and a seller interact for T rounds. At each round t ∈ {1, . . . , T}, the seller attempts to sell some good to the buyer, such as the right to display an ad. The buyer receives a valuation v_t ∈ [0, 1] which is unknown to the seller and is sampled from a distribution D. The seller offers a price p_t, in response to which the buyer selects an action a_t ∈ {0, 1}, with a_t = 1 indicating that he accepts the price and a_t = 0 otherwise. We will say the buyer lies if he accepts the price at time t (a_t = 1) while the price offered is above his valuation (v_t ≤ p_t), or when he rejects the price (a_t = 0) while his valuation is above the price offered (v_t > p_t).

The seller seeks to optimize his expected revenue over the T rounds of interaction, that is,

    Rev = E[ Σ_{t=1}^T a_t p_t ].

Notice that, when facing a truthful buyer, for any price p, the expected revenue of the seller is given by p D(v > p). Therefore, with knowledge of D, the seller could set all prices p_t to p*, where p* ∈ argmax_{p∈[0,1]} p D(v > p). Since the actions of the buyer do not affect the choice of future prices by the seller, the buyer has no incentive to lie and the seller will obtain an expected revenue of T p* D(v > p*). It is therefore natural to measure the performance of any revenue optimization algorithm in terms of the following notion of strategic regret:

    Reg_T = T p* D(v > p*) − Rev = max_{p∈[0,1]} T p D(v > p) − E[ Σ_{t=1}^T a_t p_t ].
The objective of the seller coincides with the one assumed by Kleinberg and Leighton (2003) in the
study of repeated interactions with buyers with a random valuation. However, here, we will allow
the buyer to behave strategically, which results in a harder problem. Nevertheless, the buyer is not
assumed to be fully adversarial as in (Kleinberg and Leighton, 2003). Instead, we will assume, as
discussed in detail in the next section, that the buyer seeks to approximately optimize his surplus,
which can be viewed as a more natural assumption.
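To make the interaction concrete, the following minimal Python sketch simulates the repeated posted-price game against a truthful buyer and estimates the strategic regret of a fixed-price policy empirically. The uniform valuation distribution, the horizon, and the policy used here are illustrative assumptions, not part of the model above.

```python
import random

def simulate(prices_policy, T=10_000, seed=0):
    """Simulate T rounds against a truthful buyer with v_t ~ Uniform[0, 1]."""
    rng = random.Random(seed)
    revenue, history = 0.0, []
    for t in range(T):
        v = rng.random()                 # buyer's private valuation v_t ~ D
        p = prices_policy(t, history)    # seller posts a price p_t
        a = 1 if v > p else 0            # truthful buyer accepts iff v_t > p_t
        revenue += a * p
        history.append((p, a))
    return revenue

# For D = Uniform[0, 1], p * D(v > p) = p(1 - p) is maximized at p* = 1/2,
# so the benchmark revenue is T * p* * D(v > p*) = T / 4.
T = 10_000
rev = simulate(lambda t, h: 0.3, T)      # a fixed, suboptimal price
print(f"revenue = {rev:.1f}, strategic regret vs. p* = 1/2: {T * 0.25 - rev:.1f}")
```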
3.2 ε-strategic Buyers
Here, we define the family of buyers considered throughout this paper. We denote by x_{1:t} ∈ R^t the vector (x_1, . . . , x_t) and define the history of the game up to time t by H_t := (p_{1:t}, v_{1:t}, a_{1:t}). Before the first round, the seller decides on an algorithm A for setting prices, and this algorithm is announced to the buyer. The buyer then selects a strategy B: (H_{t−1}, v_t, p_t) ↦ a_t. For any value γ ∈ (0, 1) and strategy B, we define the buyer's discounted expected surplus by

    Sur(B) = E[ Σ_{t=1}^T γ^{t−1} a_t (v_t − p_t) ].

A buyer maximizing this discounted surplus wishes to acquire the item as inexpensively as possible, but does not wish to wait too long to obtain a favorable price.

In order to optimize his surplus, a buyer must then solve a non-homogeneous Markov decision process (MDP). Indeed, consider the scenario where at time t the seller offers prices from a distribution D_t ∈ 𝒟, where 𝒟 is a family of probability distributions over the interval [0, 1]. The seller updates his beliefs as follows: the current distribution D_t is selected as a function of the distribution at the previous round as well as the history H_{t−1} (which is all the information available to the seller). More formally, we let f_t: (D_t, H_t) ↦ D_{t+1} be a transition function for the seller. Let s_t = (D_t, H_{t−1}, v_t, p_t) denote the state of the environment at time t, that is, all the information available at time t to the buyer. Finally, let S_t(s_t) denote the maximum attainable expected surplus of a buyer that is in state s_t at time t. It is clear that S_t will satisfy the following Bellman equations:

    S_t(s_t) = max_{a_t ∈ {0,1}} { γ^{t−1} a_t (v_t − p_t) + E_{(v_{t+1}, p_{t+1}) ∼ D ⊗ f_t(D_t, H_t)} [ S_{t+1}( f_t(D_t, H_t), H_t, v_{t+1}, p_{t+1} ) ] },   (1)

with the boundary condition S_T(s_T) = γ^{T−1} (v_T − p_T) 1_{p_T ≤ v_T}.
Definition 1. A buyer is said to be strategic if his action at time t is a solution of the Bellman equation (1).

Notice that, depending on the choice of the family 𝒟, the number of states of the MDP solved by a strategic buyer may be infinite. Even for a deterministic algorithm that offers prices from a finite set P, the number of states of this MDP would be in Ω(T^{|P|}), which quickly becomes intractable. Thus, in view of the prohibitive cost of computing his actions, the model of a fully strategic buyer does not seem to be realistic. We introduce instead the concept of ε-strategic buyers.
Definition 2. A buyer is said to be ε-strategic if he behaves strategically, except when no sequence of actions can improve upon the future surplus of the truthful sequence by more than γ^{t₀} ε, or except for the first 0 < t < t₀ rounds, for some t₀ ≥ 0 depending only on the seller's algorithm, in which cases he acts truthfully.

We show in Section 4 that this definition implies the existence of t₁ > t₀ such that an ε-strategic buyer only solves an MDP over the interval [t₀, t₁], which becomes a tractable problem for t₁ ≪ T. The parameter t₀ used in the definition is introduced to consider the unlikely scenario where a buyer's algorithm deliberately ignores all information observed during the rounds 0 < t < t₀, in which case it is optimal for the buyer to behave truthfully.

Our definition is motivated by the fact that, for a buyer with bounded computational power, there is no incentive in acting non-truthfully if the gain in surplus over a truthful behavior is negligible.
4 Regret Analysis
We now turn our attention to the problem faced by the seller. The seller's goal is to maximize his revenue. When the buyer is truthful, Kleinberg and Leighton (2003) have shown that this problem can be cast as a continuous bandit problem. In that scenario, the strategic regret in fact coincides with the pseudo-regret, which is the quantity commonly minimized in a stochastic bandit setting (Auer et al., 2002; Bubeck and Cesa-Bianchi, 2012). Thus, if the set of possible prices P is finite, the seller can use the UCB algorithm of Auer et al. (2002) to minimize his pseudo-regret.

In the presence of an ε-strategic buyer, the rewards are no longer stochastic. Therefore, we need to analyze the regret of a seller in the presence of lies. Let P denote a finite set of prices offered by the seller. Define μ_p = p D(v > p) and Δ_p = μ_{p*} − μ_p. For every price p ∈ P, define also T_p(t) to be the number of times price p has been offered up to time t. We will denote by T* and μ* the corresponding quantities associated with the optimal price p*.

Lemma 1. Let L denote the number of times a buyer lies. For any Δ > 0, the strategic regret of a seller can be bounded as follows:

    Reg_T ≤ E[L] + Σ_{p : Δ_p > Δ} E[T_p(T)] Δ_p + T Δ.
Proof. Let L_t denote the event that the buyer lies at round t. Then the expected revenue of the seller is given by

    E[ Σ_{t=1}^T Σ_{p∈P} a_t p 1_{p_t=p} (1_{L_t} + 1_{L_t^c}) ] ≥ E[ Σ_{t=1}^T Σ_{p∈P} a_t p 1_{p_t=p} 1_{L_t^c} ] = E[ Σ_{p∈P} Σ_{t=1}^T 1_{v_t>p} p 1_{p_t=p} 1_{L_t^c} ],

where the last equality follows from the fact that, when the buyer is truthful, a_t = 1_{v_t>p}. Moreover, using the fact that Σ_{t=1}^T 1_{L_t} = L, we have

    E[ Σ_{p∈P} Σ_{t=1}^T 1_{v_t>p} p 1_{p_t=p} 1_{L_t^c} ] = E[ Σ_{p∈P} Σ_{t=1}^T 1_{v_t>p} p 1_{p_t=p} ] − E[ Σ_{p∈P} Σ_{t=1}^T 1_{v_t>p} p 1_{p_t=p} 1_{L_t} ]
        ≥ Σ_{p∈P} μ_p E[T_p(T)] − E[ Σ_{t=1}^T 1_{v_t>p_t} p_t 1_{L_t} ] ≥ Σ_{p∈P} μ_p E[T_p(T)] − E[L].

Since the regret of offering prices p for which Δ_p ≤ Δ is bounded by TΔ, it follows that the regret of the seller is bounded by E[L] + Σ_{p : Δ_p > Δ} Δ_p E[T_p(T)] + TΔ.
We now define a robust UCB (R-UCB_L) algorithm for which we can bound the expectations E[T_p(T)]. For every price p ∈ P, define

    μ̂_p(t) = (1/T_p(t)) Σ_{i=1}^t p 1_{p_i=p} 1_{v_i>p}

to be the true empirical mean of the reward that a seller would obtain when facing a truthful buyer. Let L_t(p) = Σ_{i=1}^t (a_i − 1_{v_i>p}) 1_{p_i=p} p denote the revenue obtained by the seller in rounds where the buyer lied. Notice that L_t(p) can be positive or negative. Finally, let

    μ̄_p(t) = μ̂_p(t) + L_t(p)/T_p(t)

be the empirical mean obtained when offering price p that is observed by the seller. For the definition of our algorithm, we will make use of the following upper confidence bound:

    B_p(t, L) = Lp/T_p(t) + √( 2 log t / T_p(t) ).

We will use B* as a shorthand for B_{p*}. Our R-UCB_L algorithm selects, at each round t, the price p_t that maximizes the quantity

    max_{p∈P} μ̄_p(t) + B_p(t, L).
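The following Python sketch shows one way the R-UCB_L index could be computed; the buyer model is left abstract, and with L = 0 the rule reduces to a standard UCB policy. This is an illustrative sketch of the index computation only, not the authors' implementation.

```python
import math, random

def r_ucb_l(prices, buyer_response, T, L):
    """Minimal sketch of the R-UCB_L policy. `buyer_response(p, t)` returns
    a_t in {0, 1} and may lie. mu_bar is the per-price empirical mean reward
    observed by the seller; B_p(t, L) = L*p/T_p(t) + sqrt(2 log t / T_p(t))."""
    pulls = {p: 0 for p in prices}
    reward = {p: 0.0 for p in prices}
    total = 0.0
    for t in range(1, T + 1):
        def index(p):
            if pulls[p] == 0:
                return float("inf")          # try each price at least once
            mu_bar = reward[p] / pulls[p]    # observed mean of a_t * p
            b = L * p / pulls[p] + math.sqrt(2.0 * math.log(t) / pulls[p])
            return mu_bar + b
        p = max(prices, key=index)
        a = buyer_response(p, t)
        pulls[p] += 1
        reward[p] += a * p
        total += a * p
    return total

# Example: truthful buyer with v_t ~ Uniform[0, 1]; L = 0 gives plain UCB.
rng = random.Random(1)
truthful = lambda p, t: 1 if rng.random() > p else 0
print(r_ucb_l([i / 10 for i in range(1, 10)], truthful, T=5000, L=0))
```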
We proceed to bound the expected number of times a sub-optimal price p is offered.
Proposition 1. Let P_t(p, L) := P( L_t(p)/T_p(t) > Lp/T_p(t)  or  L_t(p*)/T*(t) < −Lp*/T*(t) ). Then, the following inequality holds:

    E[T_p(T)] ≤ 4Lp/Δ_p + 32 log T / Δ_p² + 2 + Σ_{t=1}^T P_t(p, L).

Proof. For any p and t define ε_p(t) = √( 2 log t / T_p(t) ), and let ε* = ε_{p*}. If at time t a price p ≠ p* is offered, then

    μ̄_p(t) + B_p(t, L) − μ̄*(t) − B*(t, L) ≥ 0
    ⟺ μ̂_p(t) + L_t(p)/T_p(t) + Lp/T_p(t) + ε_p(t) − μ̂*(t) − L_t(p*)/T*(t) − Lp*/T*(t) − ε*(t) ≥ 0
    ⟺ [ μ̂_p(t) − μ_p − ε_p(t) ] + [ μ* − μ̂*(t) − ε*(t) ] + [ 2B_p(t, L) − Δ_p ]
       + [ L_t(p)/T_p(t) − Lp/T_p(t) − L_t(p*)/T*(t) − Lp*/T*(t) ] ≥ 0.   (2)

Therefore, if price p is selected, then at least one of the four bracketed terms in inequality (2) must be positive; note that the last term can be positive only on the event measured by P_t(p, L).

Let u = ⌈ 4Lp/Δ_p + 32 log T / Δ_p² ⌉. Notice that if T_p(t) > u then 2B_p(t, L) − Δ_p < 0. Thus, we can write

    E[T_p(T)] = E[ Σ_{t=1}^T 1_{p_t=p} ( 1_{T_p(t)≤u} + 1_{T_p(t)>u} ) ] ≤ u + Σ_{t=u}^T Pr( p_t = p, T_p(t) > u ).

This, combined with the positivity of at least one of the terms in (2), yields:

    E[T_p(T)] ≤ u + Σ_{t=u}^T [ Pr( μ̂_p(t) ≥ μ_p + ε_p(t) ) + Pr( μ̂*(t) ≤ μ* − ε*(t) ) + P_t(p, L) ].   (3)

We can now bound the first two probabilities appearing in (3) as follows:

    Pr( μ̂_p(t) ≥ μ_p + ε_p(t) ) ≤ Pr( ∃ s ∈ [0, t] : (1/s) Σ_{i=1}^s p 1_{v_i>p} ≥ μ_p + √(2 log t / s) ) ≤ Σ_{s=1}^t t^{−4} = t^{−3},

where the last inequality follows from an application of Hoeffding's inequality as well as the union bound. A similar argument can be made to bound the other term in (3). Using the definition of u we then have

    E[T_p(T)] ≤ 4Lp/Δ_p + 32 log T / Δ_p² + Σ_{t=u}^T 2 t^{−3} + Σ_{t=1}^T P_t(p, L)
             ≤ 4Lp/Δ_p + 32 log T / Δ_p² + 2 + Σ_{t=1}^T P_t(p, L),

which completes the proof.
Corollary 1. Let L denote the number of times a buyer lies. Then, the strategic regret of R-UCB_L can be bounded as follows:

    Reg_T ≤ 4L Σ_{p∈P} p + E[L] + Σ_{p : Δ_p > Δ} ( 32 log T / Δ_p + 2Δ_p + Δ_p Σ_{t=1}^T P_t(p, L) ) + TΔ.
Notice that the choice of parameter L of R-UCB_L is subject to a trade-off: on the one hand, L should be small to minimize the first term of this regret bound; on the other hand, the function P_t(p, L) is decreasing in L, and therefore the term Σ_{t=1}^T P_t(p, L) benefits from larger values of L.

We now show that an ε-strategic buyer can only lie a finite number of times, which will imply the existence of an appropriate choice of L for which we can ensure that P_t(p, L) = 0, thereby recovering the standard logarithmic regret of UCB.
Proposition 2. If the discounting factor satisfies γ ≤ γ₀ < 1, an ε-strategic buyer stops lying after S = ⌈ log(1/(ε(1−γ₀))) / log(1/γ₀) ⌉ rounds.

Proof. After S rounds, for any sequence of actions (a_t), the surplus that can be achieved by the buyer in the remaining rounds is bounded by

    Σ_{t=t₀+S}^T γ^{t−1} E[ a_t (v_t − p_t) ] ≤ ( γ^{S+t₀} − γ^T ) / (1 − γ) ≤ γ^{t₀} ε.

Thus, by definition, an ε-strategic buyer does not lie after S rounds.
Corollary 2. If the discounting factor satisfies γ ≤ γ₀ < 1 and the seller uses the R-UCB_L algorithm with L = ⌈ log(1/(ε(1−γ₀))) / log(1/γ₀) ⌉, then the strategic regret of the seller is bounded by

    ⌈ log(1/(ε(1−γ₀))) / log(1/γ₀) ⌉ ( 1 + 4 Σ_{p∈P} p ) + Σ_{p : Δ_p > Δ} ( 32 log T / Δ_p + 2Δ_p ) + TΔ.   (4)

Proof. Follows trivially from Corollary 1 and the previous proposition, which implies that P_t(p, L) ≡ 0.
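For intuition about the size of the lie budget L prescribed by Corollary 2, the short computation below evaluates it for a few discount factors; the specific values of ε and γ₀ are arbitrary illustrations.

```python
import math

def lie_budget(eps, gamma0):
    """S = ceil( log(1/(eps*(1-gamma0))) / log(1/gamma0) ), the bound of
    Proposition 2 on the number of lies, used as L in Corollary 2."""
    return math.ceil(math.log(1.0 / (eps * (1.0 - gamma0))) /
                     math.log(1.0 / gamma0))

for gamma0 in (0.5, 0.9, 0.99):
    print(gamma0, lie_budget(eps=0.01, gamma0=gamma0))
# The budget grows roughly like log(1/(eps*(1-gamma0))) / (1-gamma0) as gamma0 -> 1.
```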
Let us compare our results with those of Amin et al. (2013). The regret bound given in (Amin et al., 2013) is in O( |P| T^λ + |P|²/Δ^{2/λ} + |P|²/(1−γ₀)^{1/λ} ), where λ is a parameter controlling the fraction of rounds used for exploration and Δ = min_{p∈P} Δ_p. In particular, notice that the dependency of this bound on the cardinality of P is quadratic instead of linear as in our case. Moreover, the dependency on γ₀ is in O(1/(1−γ₀)^{1/λ}). Therefore, even in a truthful scenario where γ₀ ≈ 0, the dependency on T remains polynomial, whereas we recover the standard logarithmic regret. Only when the seller has access to Δ, which is a strong requirement, can he set the optimal value of λ to achieve regret in Õ( √(log T · log(1/(1−γ₀))) ).

Of course, the algorithm proposed by Amin et al. (2013) assumes that the buyer is fully strategic whereas we only require the buyer to be ε-strategic. However, the authors assume that the distribution satisfies a Lipschitz condition which technically allows them to bound the number of lies in the same way as in Proposition 2. Therefore, the regret bound achieved by their algorithm remains the same in our scenario.
5 Continuous pricing strategy
Thus far, we have assumed that the prices offered by the seller are selected out of a discrete set P. In practice, however, the optimal price may not be within P, and therefore the algorithm described in the previous section might accumulate a large regret when compared against the best price in [0, 1]. In order to solve this problem, we propose to discretize the interval [0, 1] and run our R-UCB_L algorithm on the resulting discretization. This induces a trade-off since a finer discretization implies a larger regret term in (4). To find the optimal size of the discretization we follow the ideas of Kleinberg and Leighton (2003) and consider distributions D that satisfy the condition that the function f: p ↦ p D(v > p) admits a unique maximizer p* such that f''(p*) < 0.

Throughout this section, we let K ∈ N and we consider the following finite set of prices P_K = { i/K | 1 ≤ i ≤ K } ⊆ [0, 1]. We also let p_K be an optimal price in P_K, that is, p_K ∈ argmax_{p∈P_K} f(p), and we let p* = argmax_{p∈[0,1]} f(p). Finally, we denote by Δ_p = f(p_K) − f(p) the sub-optimality gap with respect to price p_K and by Δ̄_p = f(p*) − f(p) the corresponding gap with respect to p*. The following theorem can be proven following similar ideas to those of Kleinberg and Leighton (2003). We defer its proof to the appendix.
Theorem 1. Let K = ⌈ (T / log T)^{1/4} ⌉. If the discounting factor satisfies γ ≤ γ₀ < 1 and the seller uses the R-UCB_L algorithm with the set of prices P_K and L = ⌈ log(1/(ε(1−γ₀))) / log(1/γ₀) ⌉, then the strategic regret of the seller can be bounded as follows:

    max_{p∈[0,1]} T f(p) − E[ Σ_{t=1}^T a_t p_t ] ≤ C ( √(T log T) + ⌈ log(1/(ε(1−γ₀))) / log(1/γ₀) ⌉ ( (T / log T)^{1/4} + 1 ) ).
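The discretization in Theorem 1 is easy to instantiate; the sketch below computes the grid P_K for a given horizon. The constant-free choice K = ⌈(T/log T)^{1/4}⌉ follows the statement above, and the printed values are only a sanity check.

```python
import math

def price_grid(T):
    """Grid used by Theorem 1: K = ceil((T / log T)^(1/4)) equally spaced
    prices i/K, i = 1..K. Finer grids shrink the discretization error but
    inflate the per-price regret term; this K balances the two."""
    K = math.ceil((T / math.log(T)) ** 0.25)
    return [i / K for i in range(1, K + 1)]

print(len(price_grid(10**6)))   # 17 prices for T = 1e6
print(price_grid(100))          # [1/3, 2/3, 1.0] for a tiny horizon
```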
6 Conclusion
We introduced a revenue optimization algorithm for posted-price auctions that is robust against ε-strategic buyers. Moreover, we showed that our notion of strategic behavior is more natural than what has been previously studied. Our algorithm benefits from the optimal O( log T + 1/(1−γ) ) regret bound for a finite set of prices and admits regret in O( T^{1/2} + T^{1/4}/(1−γ) ) when the buyer is offered prices in [0, 1], a scenario that had not been considered previously in the literature of revenue optimization against strategic buyers. It is known that a regret in o(T^{1/2}) is unattainable even in a truthful setting, but it remains an open problem to verify that the dependency on γ cannot be improved. Our algorithm admits a simple analysis and we believe that the idea of making truthful algorithms robust is general and can be extended to more complex auction mechanisms such as second-price auctions with reserve.
7 Acknowledgments
We thank Afshin Rostamizadeh and Umar Syed for useful discussions about the topic of this paper and the NIPS reviewers for their insightful comments. This work was partly funded by NSF IIS-1117591 and NSF CCF-1535987.
References
Abernethy, J., E. Hazan, and A. Rakhlin (2008). Competing in the dark: An efficient algorithm for bandit linear optimization. In Proceedings of COLT 2008, pp. 263-274.
Amin, K., A. Rostamizadeh, and U. Syed (2013). Learning prices for repeated auctions with strategic buyers. In Proceedings of NIPS, pp. 1169-1177.
Amin, K., A. Rostamizadeh, and U. Syed (2014). Repeated contextual auctions with strategic buyers. In Proceedings of NIPS 2014, pp. 622-630.
Arora, R., O. Dekel, and A. Tewari (2012). Online bandit learning against an adaptive adversary: from regret to policy regret. In Proceedings of ICML.
Auer, P., N. Cesa-Bianchi, and P. Fischer (2002). Finite-time analysis of the multiarmed bandit problem. Machine Learning 47(2-3), 235-256.
Bikhchandani, S. and K. McCardle (2012). Behaviour-based price discrimination by a patient seller. The B.E. Journal of Theoretical Economics 12(1), 1935-1704.
Bubeck, S. and N. Cesa-Bianchi (2012). Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning 5(1), 1-122.
Cesa-Bianchi, N., C. Gentile, and Y. Mansour (2015). Regret minimization for reserve prices in second-price auctions. IEEE Transactions on Information Theory 61(1), 549-564.
Cole, R. and T. Roughgarden (2014). The sample complexity of revenue maximization. In Proceedings of STOC 2014, pp. 243-252.
Cui, Y., R. Zhang, W. Li, and J. Mao (2011). Bid landscape forecasting in online ad exchange marketplace. In Proceedings of SIGKDD 2011, pp. 265-273.
Dani, V. and T. P. Hayes (2006). Robbing the bandit: less regret in online geometric optimization against an adaptive adversary. In Proceedings of SODA 2006, pp. 937-943.
Edelman, B. and M. Ostrovsky (2007). Strategic bidder behavior in sponsored search auctions. Decision Support Systems 43(1), 192-198.
Kanoria, Y. and H. Nazerzadeh (2014). Dynamic reserve prices for repeated auctions: Learning from bids. In Proceedings of WINE 2014, pp. 232.
Kleinberg, R. D. and F. T. Leighton (2003). The value of knowing a demand curve: Bounds on regret for online posted-price auctions. In Proceedings of FOCS 2003, pp. 594-605.
Lai, T. and H. Robbins (1985). Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics 6(1), 4-22.
Milgrom, P. and R. Weber (1982). A theory of auctions and competitive bidding. Econometrica: Journal of the Econometric Society 50(5), 1089-1122.
Milgrom, P. R. (2004). Putting auction theory to work. Cambridge University Press.
Mohri, M. and A. M. Medina (2014a). Learning theory and algorithms for revenue optimization in second price auctions with reserve. In Proceedings of ICML 2014, pp. 262-270.
Mohri, M. and A. M. Medina (2014b). Optimal regret minimization in posted-price auctions with strategic buyers. In Proceedings of NIPS 2014, pp. 1871-1879.
Myerson, R. B. (1981). Optimal auction design. Mathematics of Operations Research 6(1), 58-73.
Nachbar, J. (2001). Bayesian learning in repeated games of incomplete information. Social Choice and Welfare 18(2), 303-326.
Nachbar, J. H. (1997). Prediction, optimization, and learning in repeated games. Econometrica: Journal of the Econometric Society 65(2), 275-309.
Vickrey, W. (1961). Counterspeculation, auctions, and competitive sealed tenders. The Journal of Finance 16(1), 8-37.
On Top-k Selection in Multi-Armed Bandits and Hidden Bipartite Graphs
Wei Cao¹   Jian Li¹   Yufei Tao²   Zhize Li¹
¹Tsinghua University   ²Chinese University of Hong Kong
¹{cao-w13@mails, lijian83@mail, zz-li14@mails}.tsinghua.edu.cn   ²[email protected]
Abstract
This paper discusses how to efficiently choose from n unknown distributions the k
ones whose means are the greatest by a certain metric, up to a small relative error.
We study the topic under two standard settings?multi-armed bandits and hidden
bipartite graphs?which differ in the nature of the input distributions. In the former setting, each distribution can be sampled (in the i.i.d. manner) an arbitrary
number of times, whereas in the latter, each distribution is defined on a population
of a finite size m (and hence, is fully revealed after m samples). For both settings, we prove lower bounds on the total number of samples needed, and propose
optimal algorithms whose sample complexities match those lower bounds.
1 Introduction
This paper studies a class of problems that share a common high-level objective: from a number n
of probabilistic distributions, find the k ones whose means are the greatest by a certain metric.
Crowdsourcing. A crowdsourcing algorithm (see recent works [1, 13] and the references therein)
summons a certain number, say k, of individuals, called workers, to collaboratively accomplish
a complex task. Typically, the algorithm breaks the task into a potentially very large number of
micro-tasks, each of which makes a binary decision (yes or no) by taking the majority vote from the
participating workers. Each worker is given an (often monetary) reward for every micro-task that
s/he participates in. It is therefore crucial to identify the most reliable workers that have the highest
rates of making correct decisions. Because of this, a crowdsourcing algorithm should ideally be
preceded by an exploration phase, which selects the best k workers from n candidates by a series of
?control questions?. Every control-question must be paid for in the same way as a micro-task. The
challenge is to find the best workers with the least amount of money.
Frequent Pattern Discovery. Let B and W be two relations. Given a join predicate Q(b, w), the
joining power of a tuple b ? B equals the number of tuples w ? W such that b and w satisfy Q. A
top-k semi-join [14, 17] returns the k tuples in B with the greatest joining power. This type of semijoins is notoriously difficult to process when the evaluation of Q is complicated, and thus unfriendly
to tailored-made optimization. A well-known example from graph databases is the discovery of
frequent patterns [14], where B is a set of graph patterns, W a set of data graphs, and Q(b, w)
decides if a pattern b is a subgraph of a data graph w. In this case, top-k semi-join essentially returns
the set of k graph patterns most frequently found in the data graphs. Given a black box for resolving
subgraph isomorphism Q(b, w), the challenge is to minimize the number of calls to the black box.
We refer to the reader to [14, 15] for more examples of difficult top-k semi-joins of this sort.
1.1 Problem Formulation
The paper studies four problems that capture the essence of the above applications.
Multi-Armed Bandit. We consider a standard setting of stochastic multi-armed bandit selection.
Specifically, there is a bandit with a set B of n arms, where the i-th arm is associated with a Bernoulli distribution with an unknown mean μ_i ∈ (0, 1]. In each round, we choose an arm, pull it, and then collect a reward, which is an i.i.d. sample from the arm's reward distribution.

Given a subset V ⊆ B of arms, we denote by a_i(V) the arm with the i-th largest mean in V, and by μ_i(V) the mean of a_i(V). Define μ_avg(V) = (1/k) Σ_{i=1}^k μ_i(V), namely, the average of the means of the top-k arms in V.

Our first two problems aim to identify k arms whose means are the greatest either individually or aggregatively:

Problem 1 [Top-k Arm Selection (k-AS)] Given parameters ε ∈ (0, 1/4), δ ∈ (0, 1/48), and k ≤ n/2, we want to select a k-sized subset V of B such that, with probability at least 1 − δ, it holds that

    μ_i(V) ≥ (1 − ε) μ_i(B),  ∀i ≤ k.
We further study a variation of k-AS where we change the multiplicative guarantee μ_i(V) ≥ (1 − ε) μ_i(B) to an additive guarantee μ_i(V) ≥ μ_i(B) − ε₀. We refer to the modified problem as Top-k_add Arm Selection (k_add-AS). Due to the space constraint, we present all the details of k_add-AS in Appendix C.
Problem 2 [Top-k_avg Arm Selection (k_avg-AS)] Given the same parameters as in k-AS, we want to select a k-sized subset V of B such that, with probability at least 1 − δ, it holds that

    μ_avg(V) ≥ (1 − ε) μ_avg(B).
For both problems, the cost of an algorithm is the total number of arms pulled, or equivalently, the total number of samples drawn from the arms' distributions. For this reason, we refer to the cost as the algorithm's sample complexity. It is easy to see that k-AS is more stringent than k_avg-AS; hence, a feasible solution to the former is also a feasible solution to the latter, but not vice versa.
Hidden Bipartite Graph. The second main focus of the paper is the exploration of hidden bipartite
graphs. Let G = (B, W, E) be a bipartite graph, where the nodes in B are colored black, and those
in W colored white. Set n = |B| and m = |W |. The edge set E is hidden in the sense that an
algorithm does not see any edge at the beginning. To find out whether an edge exists between a
black vertex b and a white vertex w, the algorithm must perform a probe operation. The cost of the
algorithm equals the number of such operations performed.
If an edge exists between b and w, we say that there is a solid edge between them; otherwise,
we say that they have an empty edge. Let deg(b) be the degree of a black vertex b, namely, the
number of solid edges of b. Given a subset of black vertices V ⊆ B, we denote by b_i(V) the black vertex with the i-th largest degree in V, and by deg_i(V) the degree of b_i(V). Furthermore, define deg_avg(V) = (1/k) Σ_{i=1}^k deg_i(V).
We now state the other two problems studied in this work, which aim to identify k black vertices
whose degrees are the greatest either individually or aggregatively:
Problem 3 [k-Most Connected Vertex [14] (k-MCV)] Given parameters ε ∈ (0, 1/4), δ ∈ (0, 1/48), and k ≤ n/2, we want to select a k-sized subset V of B such that, with probability at least 1 − δ, it holds that

    deg_i(V) ≥ (1 − ε) deg_i(B),  ∀i ≤ k.
Problem 4 [k_avg-Most Connected Vertex (k_avg-MCV)] Given the same parameters as in k-MCV, we want to select a k-sized subset V of B such that, with probability at least 1 − δ, it holds that

    deg_avg(V) ≥ (1 − ε) deg_avg(B).
A feasible solution to k-MCV is also feasible for k_avg-MCV, but not vice versa. We will refer to the cost of an algorithm also as its sample complexity, by regarding a probe operation as "sampling" the edge probed. For any deterministic algorithm, the adversary can force the algorithm to always probe Ω(mn) edges. Hence, we only consider randomized algorithms.
k-MCV can be reduced to k-AS. Given a hidden bipartite graph (B, W, E), we can treat every black vertex b ∈ B as an "arm" associated with a Bernoulli reward distribution: the reward is 1 with probability deg(b)/m (recall m = |W|), and 0 with probability 1 − deg(b)/m. Any algorithm A for k-AS can be deployed to solve k-MCV as follows. Whenever A samples from arm b, we randomly choose a white vertex w ∈ W, and probe the edge between b and w. A reward of 1 is returned to A if and only if the edge exists.
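This reduction can be phrased in a few lines of Python; the sketch below wraps a black vertex as a Bernoulli arm by probing a uniformly random white vertex per pull. The probe oracle and the toy edge set are illustrative assumptions. Note that this reduction samples with replacement, so it deliberately ignores the history-awareness property discussed next.

```python
import random

class VertexArm:
    """Treats black vertex b as a Bernoulli arm: each pull probes the edge
    between b and a uniformly random white vertex, so the reward mean is
    deg(b)/m. Any k-AS algorithm can then run unchanged on these arms."""
    def __init__(self, probe_edge, b, m, seed=0):
        self.probe_edge, self.b, self.m = probe_edge, b, m
        self.rng = random.Random(seed)

    def pull(self):
        w = self.rng.randrange(self.m)          # random white vertex
        return 1 if self.probe_edge(self.b, w) else 0

# Toy hidden graph on 3 black x 5 white vertices (edge set assumed for the demo):
edges = {(0, 1), (0, 3), (1, 0), (2, 2), (2, 4)}
arm = VertexArm(lambda b, w: (b, w) in edges, b=0, m=5)
print(sum(arm.pull() for _ in range(2000)) / 2000)   # approx deg(0)/m = 0.4
```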
k-AS and k-MCV differ, however, in the size of the population that a reward distribution is defined
on. For k-AS, the reward of each arm is sampled from a population of an indefinite size, which can
even be infinite. Consequently, k-AS nicely models situations such as the crowdsourcing application
mentioned earlier.
For k-MCV, the reward distribution of each "arm" (i.e., a black vertex b) is defined on a population of size m = |W| (i.e., the edges of b). This has three implications. First, k-MCV is a better modeling of applications like top-k semi-join (where an edge exists between b ∈ B and w ∈ W if and only if Q(b, w) is true). Second, the problem admits an obvious algorithm with cost O(nm) (recall n = |B|): simply probe all the hidden edges. Third, an algorithm never needs to probe the same edge between b and w twice: once probed, whether the edge is solid or empty is perpetually revealed. We refer to the last implication as the history-awareness property.
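By contrast, a probing scheme that exploits history-awareness never revisits an edge, so at most m probes per vertex determine its degree exactly. The following sketch (with an assumed probe oracle and a toy edge set) illustrates the difference from the with-replacement reduction above.

```python
import random

class HistoryAwareProber:
    """Probes the edges of one black vertex b without replacement: once an
    edge is probed, its status is known forever, so deg(b) is determined
    exactly after at most m probes."""
    def __init__(self, probe_edge, b, m, seed=0):
        self.probe_edge, self.b = probe_edge, b
        self.remaining = list(range(m))          # unprobed white vertices
        random.Random(seed).shuffle(self.remaining)
        self.solid = 0                           # solid edges found so far

    def pull(self):
        if not self.remaining:                   # degree already exact
            return None
        w = self.remaining.pop()
        hit = 1 if self.probe_edge(self.b, w) else 0
        self.solid += hit
        return hit

edges = {(0, 1), (0, 3)}
prober = HistoryAwareProber(lambda b, w: (b, w) in edges, b=0, m=5)
while prober.pull() is not None:
    pass
print(prober.solid)    # exactly deg(0) = 2 after at most m = 5 probes
```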
The above discussion on k-AS and k-MCV also applies to kavg -AS and kavg -MCV. For each of
the above problems, we refer to an algorithm which achieves the precision and failure requirements prescribed by ε and δ as an (ε, δ)-approximate algorithm.
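As a point of reference for the bounds surveyed next, here is a naive (ε, δ)-approximate strategy for k-AS based on uniform sampling. It assumes a known lower bound mu_low ≤ μ_k(B), and its constants are illustrative rather than optimized.

```python
import math

def naive_topk(pull, n, k, eps, delta, mu_low):
    """Sample every arm t times so that each empirical mean concentrates to
    a (1 +/- eps/3)-relative factor w.h.p. (multiplicative Chernoff), then
    return the k arms with the largest empirical means. The total cost is
    n * t = O(n / (eps^2 * mu_low) * log(n / delta)), mirroring the shape
    of the bound of [14] up to constants."""
    t = math.ceil(27.0 / (eps * eps * mu_low) * math.log(2.0 * n / delta))
    means = sorted(((sum(pull(i) for _ in range(t)) / t, i) for i in range(n)),
                   reverse=True)
    return [i for _, i in means[:k]]
```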
1.2 Previous Results
Problem 1. Sheng et al. [14] presented an algorithm¹ that solves k-AS with expected cost O( n/(ε² μ_k(B)) · log(n/δ) ). No lower bound is known on the sample complexity of k-AS. The closest work is due to Kalyanakrishnan et al. [11]. They considered the EXPLORE-k problem, where the goal is to return a set V of k arms such that, with probability at least 1 − δ, the mean of each arm in V is at least μ_k(B) − ε₀. They showed an algorithm with sample complexity Θ( n/ε₀² · log(k/δ) ) in expectation and established a matching lower bound. Note that EXPLORE-k ensures an absolute-error guarantee, which is weaker than the individually relative-error guarantee of k-AS. Therefore, the same EXPLORE-k lower bound also applies to k-AS.

The readers may be tempted to set ε₀ = ε · μ_k(B) to derive a "lower bound" of Ω( n/(ε² (μ_k(B))²) · log(k/δ) ) for k-AS. This, however, is clearly wrong because when μ_k(B) = o(1) (a typical case in practice) this "lower bound" may be even higher than the upper bound of [14] mentioned earlier. The cause of the error lies in that the hard instance constructed in [11] requires μ_k(B) = Θ(1).

Problem 2. The O( n/(ε² μ_k(B)) · log(n/δ) ) upper bound of [14] on k-AS carries over to k_avg-AS (which, as mentioned before, can be solved by any k-AS algorithm). Zhou et al. [16] considered an OptMAI problem whose goal is to find a k-sized subset V such that μ_avg(V) ≥ μ_avg(B) − ε₀ holds with probability at least 1 − δ. Note, once again, that this is an absolute-error guarantee, as opposed to the relative-error guarantee of k_avg-AS. For OptMAI, Zhou et al. presented an algorithm with sample complexity O( (n/ε₀²)(1 + log(1/δ)/k) ) in expectation. Observe that if μ_avg(B) is available magically in advance, we can immediately apply the OptMAI algorithm of [16] to settle k_avg-AS by setting ε₀ = ε · μ_avg(B). The expected cost of the algorithm then becomes O( (n/(ε² (μ_avg(B))²))(1 + log(1/δ)/k) ), which is suboptimal (see the table).

No lower bound is known on the sample complexity of k_avg-AS. For OptMAI, Zhou et al. [16] proved a lower bound of Ω( (n/ε₀²)(1 + log(1/δ)/k) ), which directly applies to k_avg-AS due to its stronger quality guarantee.
Problems 3 and 4. Both problems can be trivially solved with cost O(nm). Furthermore, as
explained in Section 1.1, k-MCV and kavg-MCV can be reduced to k-AS and kavg-AS, respectively.
Indeed, the best existing k-AS and kavg-AS algorithms (surveyed above) serve as the state of
the art for k-MCV and kavg-MCV, respectively.
Prior to this work, no lower bound results were known for k-MCV and kavg-MCV. Note that none
of the lower bounds for k-AS (or kavg-AS) is applicable to k-MCV (or kavg-MCV, resp.), because
there is no reduction from the former problem to the latter.
1.3 Our Results
We obtain tight upper and lower bounds for all of the problems defined in Section 1.1. Our main results are summarized in Table 1 (all bounds are in expectation). Next, we explain several highlights
and provide an overview of our techniques.
¹ The algorithm was designed for k-MCV, but it can be adapted to k-AS as well.
Table 1: Comparison of our and previous results (all bounds are in expectation)

problem  | bound type | sample complexity                                                              | source
---------|------------|--------------------------------------------------------------------------------|-------
k-AS     | upper      | O( (n/(ε²·μ_k(B))) · log(n/δ) )                                                | [14]
k-AS     | upper      | O( (n/(ε²·μ_k(B))) · log(k/δ) )                                                | new
k-AS     | lower      | Ω( (n/ε²) · log(k/δ) )                                                         | [11]
k-AS     | lower      | Ω( (n/(ε²·μ_k(B))) · log(k/δ) )                                                | new
kavg-AS  | upper      | O( (n/(ε²·μ_k(B))) · log(n/δ) )                                                | [14]
kavg-AS  | upper      | O( (n/(ε²·(μ_avg(B))²)) · (1 + log(1/δ)/k) )                                   | [16]
kavg-AS  | upper      | O( (n/(ε²·μ_avg(B))) · (1 + log(1/δ)/k) )                                      | new
kavg-AS  | lower      | Ω( (n/ε²) · (1 + log(1/δ)/k) )                                                 | [16]
kavg-AS  | lower      | Ω( (n/(ε²·μ_avg(B))) · (1 + log(1/δ)/k) )                                      | new
k-MCV    | upper      | O( min{ (n/ε²)·(m/deg_k(B))·log(n/δ), nm } )                                   | [14]
k-MCV    | upper      | O( min{ (n/ε²)·(m/deg_k(B))·log(k/δ), nm } )                                   | new
k-MCV    | lower      | Ω( (n/ε²)·(m/deg_k(B))·log(k/δ) ) if deg_k(B) ≥ Ω((1/ε²)·log(n/δ));            | new
         |            | Ω(nm) if deg_k(B) < O(1/ε)                                                     |
kavg-MCV | upper      | O( min{ (n/ε²)·(m/deg_avg(B))²·(1 + log(1/δ)/k), nm } )                        | [16]
kavg-MCV | upper      | O( min{ (n/ε²)·(m/deg_avg(B))·(1 + log(1/δ)/k), nm } )                         | new
kavg-MCV | lower      | Ω( (n/ε²)·(m/deg_avg(B))·(1 + log(1/δ)/k) ) if deg_avg(B) ≥ Ω((1/ε²)·log(n/δ)); | new
         |            | Ω(nm) if deg_avg(B) < O(1/ε)                                                   |
k-AS. Our algorithm improves the log n factor of [14] to log k (in practice k ≪ n), thereby achieving the optimal sample complexity (Theorem 1).
Our analysis for k-AS is inspired by [8, 10, 11] (in particular the median elimination technique in
[8]). However, the details are very different and more involved than the previous ones (the application of median elimination in [8] was in a much simpler context where the analysis was considerably
easier). On the lower bound side, our argument is similar to that of [11], but we need to get rid of
the μ_k(B) = Θ(1) assumption (as explained in Section 1.2), which requires several changes in the
analysis (Theorem 2).
kavg-AS. Our algorithm improves both existing solutions in [14, 16] significantly, noticing that both
μ_k(B) and (μ_avg(B))² are never larger, but can be far smaller, than μ_avg(B). This improvement results from an enhanced version of median elimination, and once again, requires a non-trivial analysis
specific to our context (Theorem 4). Our lower bound is established with a novel reduction from the
1-AS problem (Theorem 5). It is worth noting that the reduction can be used to simplify the proof
of the lower bound in [16, Theorem 5.5].
k-MCV and kavg-MCV. The stated upper bounds for k-MCV and kavg-MCV in Table 1 can be
obtained directly from our k-AS and kavg-AS algorithms. In contrast, all the lower-bound arguments
for k-AS and kavg-AS, which crucially rely on the samples being i.i.d., break down for the two
MCV problems, due to the history-awareness property explained in Section 1.1.
For k-MCV, we remedy the issue by (i) (when deg_k(B) is large) a reduction from k-AS, and (ii)
(when deg_k(B) is small) a reduction from a sampling lower bound for distinguishing two extremely
similar distributions (Theorem 3). Analogous ideas are deployed for kavg-MCV (Theorem 6). Note
that for a small range of deg_k(B) (i.e., Ω(1/ε) < deg_k(B) < O((1/ε²)·log(n/δ))), we do not have the
optimal lower bounds yet for k-MCV and kavg-MCV. Closing the gap is left as an interesting open
problem.
Algorithm 1: ME-AS
1  input: B, ε, δ, k
2  for Δ = 1/2, 1/4, . . . do
3      S = ME(B, ε, δ, Δ, k);
4      {(a_i, μ̂^US(a_i)) | 1 ≤ i ≤ k} = US(S, ε, δ, (1 − ε/2)Δ, k);
5      if μ̂^US(a_k) ≥ 2Δ then
6          return {a_1, . . . , a_k};

Algorithm 2: Median Elimination (ME)
1  input: B, ε, δ, Δ, k
2  S_1 = B, ε_1 = ε/16, δ_1 = δ/8, Δ_1 = Δ, and ℓ = 1;
3  while |S_ℓ| > 4k do
4      sample every arm a ∈ S_ℓ for Q_ℓ = (12/ε_ℓ²)·(1/Δ_ℓ)·log(6k/δ_ℓ) times;
5      for each arm a ∈ S_ℓ do
6          its empirical value μ̂(a) = the average of the Q_ℓ samples from a;
7      a_1, . . . , a_|S_ℓ| = the arms sorted in non-increasing order of their empirical values;
8      S_{ℓ+1} = {a_1, . . . , a_{|S_ℓ|/2}};
9      ε_{ℓ+1} = 3ε_ℓ/4, δ_{ℓ+1} = δ_ℓ/2, Δ_{ℓ+1} = (1 − ε_ℓ)Δ_ℓ, and ℓ = ℓ + 1;
10 return S_ℓ;

Algorithm 3: Uniform Sampling (US)
1  input: S, ε, δ, Δ_s, k
2  sample every arm a ∈ S for Q = (96/ε²)·(1/Δ_s)·log(4|S|/δ) times;
3  for each arm a ∈ S do
4      its US-empirical value μ̂^US(a) = the average of the Q samples from a;
5  a_1, . . . , a_|S| = the arms sorted in non-increasing order of their US-empirical values;
6  return {(a_1, μ̂^US(a_1)), . . . , (a_k, μ̂^US(a_k))}
2 Top-k Arm Selection
In this section, we describe a new algorithm for the k-AS problem. We present the detailed analysis
in Appendix B.
Our k-AS algorithm consists of three components: ME-AS, Median Elimination (ME), and Uniform
Sampling (US), as shown in Algorithms 1, 2, and 3, respectively.
Given parameters B, ε, δ, k (as in Problem 1), ME-AS takes a "guess" Δ (Line 2) on the value of
μ_k(B), and then applies ME (Line 3) to prune B down to a set S of at most 4k arms. Then, at Line
4, US is invoked to process S. At Line 5, (as will be clear shortly) the value of μ̂^US(a_k) is what
ME-AS thinks should be the value of μ_k(B); thus, the algorithm performs a quality check to see
whether μ̂^US(a_k) is larger than but close to Δ. If the check fails, ME-AS halves its guess Δ (Line 2),
and repeats the above steps; otherwise, the output of US from Line 4 is returned as the final result.
ME runs in rounds. Round ℓ (= 1, 2, . . .) is controlled by parameters S_ℓ, ε_ℓ, δ_ℓ, and Δ_ℓ (their values
for Round 1 are given at Line 2). In general, S_ℓ is the set of arms from which we still want to sample.
For each arm a ∈ S_ℓ, ME takes Q_ℓ (Line 4) samples from a, and calculates its empirical value μ̂(a)
(Lines 5 and 6). ME drops (at Lines 7 and 8) half of the arms in S_ℓ with the smallest empirical
values, and then (at Line 9) sets the parameters of the next round. ME terminates by returning S_ℓ as
soon as |S_ℓ| is at most 4k (Lines 3 and 10).
US simply takes Q samples from each arm a ∈ S (Line 2), and calculates its US-empirical value
μ̂^US(a) (Lines 3 and 4). Finally, US returns the k arms in S with the largest US-empirical values
(Lines 5 and 6).
Remark. If we ignore Line 3 of Algorithm 1 and simply set S = B, then ME-AS degenerates into
the algorithm in [14].
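For concreteness, here is a compact Python rendering of Algorithms 1-3 (a sketch of ours, assuming each arm exposes a pull() method returning a reward in [0, 1]). It follows the pseudocode verbatim, including the sample sizes, which, as the experiments in Section 6 note, are far too pessimistic to use unscaled in practice:

```python
import math

def mean_of(arm, q):
    q = max(1, int(math.ceil(q)))
    return sum(arm.pull() for _ in range(q)) / q

def me(arms, eps, delta, Delta, k):
    """Algorithm 2 (ME): halve the candidate set until at most 4k arms remain."""
    S, e, d, D = list(arms), eps / 16.0, delta / 8.0, Delta
    while len(S) > 4 * k:
        q = (12.0 / e ** 2) * (1.0 / D) * math.log(6 * k / d)
        vals = {a: mean_of(a, q) for a in S}
        S = sorted(S, key=vals.get, reverse=True)[:len(S) // 2]
        # tuple assignment: (1 - e) on the right uses the old eps_l, as required
        e, d, D = 0.75 * e, 0.5 * d, (1.0 - e) * D
    return S

def us(S, eps, delta, Delta_s, k):
    """Algorithm 3 (US): estimate every surviving arm, keep the top k."""
    q = (96.0 / eps ** 2) * (1.0 / Delta_s) * math.log(4 * len(S) / delta)
    vals = {a: mean_of(a, q) for a in S}
    top = sorted(S, key=vals.get, reverse=True)[:k]
    return [(a, vals[a]) for a in top]

def me_as(arms, eps, delta, k):
    """Algorithm 1 (ME-AS): halve the guess Delta until the k-th
    US-empirical value certifies it (the quality check on Line 5)."""
    Delta = 0.5
    while True:
        S = me(arms, eps, delta, Delta, k)
        out = us(S, eps, delta, (1.0 - eps / 2.0) * Delta, k)
        if out[-1][1] >= 2.0 * Delta:        # out[-1] is (a_k, mu_hat(a_k))
            return [a for a, _ in out]
        Delta /= 2.0
```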
Theorem 1. ME-AS solves the k-AS problem with expected cost O( (n/(ε²·μ_k(B))) · log(k/δ) ).
We extend the proof in [11] and establish the lower bound for k-AS as shown in Theorem 2.
Theorem 2. For any ε ∈ (0, 1/4) and δ ∈ (0, 1/48), given any algorithm, there is an instance of the
k-AS problem on which the algorithm must entail Ω( (n/(ε²·μ_k(B))) · log(k/δ) ) cost in expectation.
3 k-MOST CONNECTED VERTEX
This section is devoted to the k-MCV problem (Problem 3). We will focus on lower bounds because
our k-AS algorithm in the previous section also settles k-MCV with the cost claimed in Table 1 by
applying the reduction described in Section 1.1. We establish matching lower bounds below:
Theorem 3. For any ε ∈ (0, 1/12) and δ ∈ (0, 1/48), the following statements are true about any
k-MCV algorithm:
• when deg_k(B) ≥ Ω((1/ε²)·log(n/δ)), there is an instance on which the algorithm must probe
Ω( (n/ε²)·(m/deg_k(B))·log(k/δ) ) edges in expectation.
• when deg_k(B) < O(1/ε), there is an instance on which the algorithm must probe Ω(nm)
edges in expectation.
For large deg_k(B) in Theorem 3, we utilize an instance of k-AS to construct a random hidden
bipartite graph and feed it to any algorithm that solves k-MCV. By doing so, we reduce k-AS to
k-MCV and thus establish our first lower bound.
For small deg_k(B), we define the single-vertex problem, where the goal is to distinguish two extremely similar distributions. We prove a lower bound for the single-vertex problem and reduce it to k-MCV,
thus establishing our second lower bound. The details are presented in Appendix D.
4 Top-kavg Arm Selection
Our kavg-AS algorithm QE-AS is similar to ME-AS described in Section 2, except that the parameters are adjusted appropriately, as shown in Algorithms 4, 5, and 6, respectively. We present the details in
Appendix E.
Theorem 4. QE-AS solves the kavg-AS problem with expected cost O( (n/(ε²·μ_avg(B))) · (1 + log(1/δ)/k) ).
We establish the lower bound for kavg-AS as shown in Theorem 5.
Theorem 5. For any ε ∈ (0, 1/12) and δ ∈ (0, 1/48), given any (ε, δ)-approximate algorithm,
there is an instance of the kavg-AS problem on which the algorithm must entail
Ω( (n/(ε²·μ_avg(B))) · (1 + log(1/δ)/k) ) cost in expectation.
We show that the lower bound for kavg-AS is the maximum of Ω( (n/(ε²·μ_avg(B))) · log(1/δ)/k ) and
Ω( n/(ε²·μ_avg(B)) ). Our proof of the first lower bound is based on a novel reduction from 1-AS. We
stress that our reduction can be used to simplify the proof of the lower bound in [16, Theorem 5.5].
5 kavg-MOST CONNECTED VERTEX
Our kavg-AS algorithm, combined with the reduction described in Section 1.1, already settles kavg-MCV with the sample complexity given in Table 1. We establish the following lower bound and
prove it in Appendix F.
Theorem 6. For any ε ∈ (0, 1/12) and δ ∈ (0, 1/48), the following statements are true about any
kavg-MCV algorithm:
• when deg_avg(B) ≥ Ω((1/ε²)·log(n/δ)), there is an instance on which the algorithm must probe
Ω( (n/ε²)·(m/deg_avg(B))·(1 + log(1/δ)/k) ) edges in expectation.
• when deg_avg(B) < O(1/ε), there is an instance on which the algorithm must probe Ω(nm)
edges in expectation.
Algorithm 4: QE-AS
1  input: B, ε, δ, k
2  for Δ = 1/2, 1/4, . . . do
3      S = QE(B, ε, δ, Δ, k);
4      {(a_i | 1 ≤ i ≤ k), μ̂^US_avg} = US(S, ε, δ, (1 − ε/2)Δ, k);
5      if μ̂^US_avg ≥ 2Δ then
6          return {a_1, . . . , a_k};

Algorithm 5: Quartile Elimination (QE)
1  input: B, ε, δ, Δ, k
2  S_1 = B, ε_1 = ε/32, δ_1 = δ/8, Δ_1 = Δ, and ℓ = 1;
3  while |S_ℓ| > 4k do
4      sample every arm a ∈ S_ℓ for Q_ℓ = (48/ε_ℓ²)·(1/Δ_ℓ)·(1 + log(2/δ_ℓ)/k) times;
5      for each arm a ∈ S_ℓ do
6          its empirical value μ̂(a) = the average of the Q_ℓ samples from a;
7      a_1, . . . , a_|S_ℓ| = the arms sorted in non-increasing order of their empirical values;
8      S_{ℓ+1} = {a_1, . . . , a_{3|S_ℓ|/4}};
9      ε_{ℓ+1} = 7ε_ℓ/8, δ_{ℓ+1} = δ_ℓ/2, Δ_{ℓ+1} = (1 − ε_ℓ)Δ_ℓ, and ℓ = ℓ + 1;
10 return S_ℓ;

Algorithm 6: Uniform Sampling (US)
1  input: S, ε, δ, Δ_s, k
2  sample every arm a ∈ S for Q = (120/ε²)·(1/Δ_s)·(1 + log(4/δ)/k) times;
3  for each arm a ∈ S do
4      its US-empirical value μ̂^US(a) = the average of the Q samples from a;
5  a_1, . . . , a_|S| = the arms sorted in non-increasing order of their US-empirical values;
6  return {(a_1, . . . , a_k), μ̂^US_avg = (1/k)·Σ_{i=1}^{k} μ̂^US(a_i)}
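The only changes relative to the ME sketch above are the quarter (rather than half) elimination and the (1 + log(·)/k) sample-size factor; a minimal Python rendering of QE (ours, for illustration):

```python
import math

def qe(arms, pull, eps, delta, Delta, k):
    """Algorithm 5 (QE): drop the bottom quarter of the arms per round,
    with the parameter schedule of the pseudocode above."""
    S, e, d, D = list(arms), eps / 32.0, delta / 8.0, Delta
    while len(S) > 4 * k:
        q = int(math.ceil((48.0 / e ** 2) * (1.0 / D)
                          * (1.0 + math.log(2.0 / d) / k)))
        vals = {a: sum(pull(a) for _ in range(q)) / q for a in S}
        S = sorted(S, key=vals.get, reverse=True)[:3 * len(S) // 4]
        e, d, D = 7.0 * e / 8.0, d / 2.0, (1.0 - e) * D  # (1 - e) uses the old eps_l
    return S
```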
6 Experiment Evaluation
Due to the space constraint, we show only the experiments that compare ME-AS and AMCV [14] on
the k-MCV problem. Additional experiments can be found in Appendix G. We use two synthetic data
sets and one real-world data set to evaluate the algorithms. Each dataset is represented as a bipartite
graph with n = m = 5000. For the synthetic data, the degrees of the black vertices follow a power
law distribution: for each black vertex b ∈ B, its degree equals d with probability c(d + 1)^(−γ), where
γ is the parameter to be set and c is the normalizing factor. Furthermore, for each black vertex with
degree d, we connect it to d randomly selected white vertices. Thus, we build two bipartite graphs
by setting the proper parameters in order to control the average degrees of the black vertices to be
50 and 3000, respectively. For the real-world data, we crawled 5000 active users from Twitter with their
corresponding relationships. We construct a bipartite graph G = (B, W, E) where each of B and
W represents all the users and E represents the 2-hop relationships. We say two users b ∈ B and
w ∈ W have a 2-hop relationship if they share at least one common friend.
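A sketch of the synthetic-instance generator just described (our own illustration; the constant c is realized implicitly by normalizing the weights, and γ would be tuned so that the average degree hits 50 or 3000):

```python
import random

def power_law_bipartite(n, m, gamma, seed=0):
    """Each black vertex b draws a degree d in {0, ..., m} with probability
    proportional to (d + 1)**(-gamma), then is joined to d random white
    vertices.  Returns the edge set; probe(b, w) is then '(b, w) in edges'."""
    rng = random.Random(seed)
    weights = [(d + 1) ** (-gamma) for d in range(m + 1)]
    edges = set()
    for b in range(n):
        d = rng.choices(range(m + 1), weights=weights, k=1)[0]
        edges.update((b, w) for w in rng.sample(range(m), d))
    return edges
```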
As the theoretical analysis is rather pessimistic due to the extensive usage of the union bound, to
make a fair comparison we adopt the same strategy as in [14], i.e., we divide the theoretical sample
cost by a heuristic constant λ. We use the same parameter λ = 2000 for AMCV as in [14].
For ME-AS, we first take λ = 10^7 for each round of the median elimination step, and then use
the previous sample cost divided by 250 as the number of samples for the uniform sampling step. Notice that this
does not conflict with the theoretical sample complexity, since the median elimination step dominates the
sample complexity of the algorithm.
We fix the parameters δ = 0.1, k = 20 and enumerate ε from 0.01 to 0.1. We then calculate
the actual failure probability by counting the successful runs over 100 repeats. Recall that due to
the heuristic nature, the algorithm may not achieve the theoretical guarantees prescribed by (ε, δ).
Whenever this happens, we label the percentage of the actual error ε_a it achieves at the failure
probability δ. For example, 2.9 means the algorithm actually achieves an error ε_a = 0.029 with
failure probability δ. The experiment results are shown in Figure 1.
[Figure 1: Performance comparison for k-MCV vs. ε. Three panels plot the sample cost (log scale, roughly 10^5 to 10^8) of AMCV and ME-AS against ε ∈ {0.01, 0.03, 0.05, 0.07, 0.09}, with the actual-error labels described above attached to runs that miss the (ε, δ) guarantee: (a) power law with average degree 50; (b) power law with average degree 3000; (c) 2-hop.]
As we can see, ME-AS outperforms AMCV in both sample complexity and actual error on all
data sets. We stress that, in the worst case, ME-AS seems to make a difference only when n ≫ k.
However, for most real-world data the degrees of the vertices usually follow a power
law distribution or a Gaussian distribution. In such cases, our algorithm only needs to take a
few samples in each round of the elimination step and drops half of the vertices with high confidence.
Therefore, the experimental results show that the sample cost of ME-AS is much smaller than that of AMCV.
7 Related Work
Multi-armed bandit problems are classical decision problems with exploration-exploitation tradeoffs, and have been extensively studied for several decades (dating back to the 1930s). In this line of
research, k-AS and kavg-AS fit into the pure exploration category, which has attracted significant
attention in recent years due to its abundant applications such as online advertisement placement [6], channel allocation for mobile communications [2], and crowdsourcing [16]. We mention some
closely related work below, and refer the interested readers to a recent survey [4].
Even-Dar et al. [8] proposed an optimal algorithm for selecting a single arm which approximates
the best arm with an additive error at most ε (a matching lower bound was established by Mannor et
al. [12]). Kalyanakrishnan et al. [10, 11] considered the EXPLORE-k problem which we mentioned
in Section 1.2. They provided an algorithm with the sample complexity O( (n/ε²)·log(k/δ) ). Similarly,
Zhou et al. [16] studied the OPT-MAI problem which, again as mentioned in Section 1.2, is the
absolute-error version of kavg-AS.
Audibert et al. [2] and Bubeck et al. [4] investigated the fixed budget setting where, given a fixed
number of samples, we want to minimize the so-called misidentification probability (informally, the
probability that the solution is not optimal). Bubeck et al. [5] also showed the links between the
simple regret (the gap between the arm we obtain and the best arm) and the cumulative regret (the
gap between the reward we obtained and the expected reward of the best arm). Gabillon et al. [9]
provide a unified approach, UGapE, for EXPLORE-k in both the fixed budget and the fixed confidence
settings. They derived the algorithms based on "lower and upper confidence bounds" (LUCB), where
the time complexity depends on the gap between μ_k(B) and the other arms. Note that each time
LUCB samples the two arms that are most difficult to distinguish. Since our problem ensures an
individual guarantee, it is unclear whether only sampling the most difficult-to-distinguish arms
would be enough. We leave it as an intriguing direction for future work. Chen et al. [6] studied how
to select the best arms under various combinatorial constraints.
Acknowledgements. Jian Li, Wei Cao, Zhize Li were supported in part by the National Basic
Research Program of China grants 2015CB358700, 2011CBA00300, 2011CBA00301, and the National NSFC grants 61202009, 61033001, 61361136003. Yufei Tao was supported in part by projects
GRF 4168/13 and GRF 142072/14 from HKRGC.
References
[1] Y. Amsterdamer, S. B. Davidson, T. Milo, S. Novgorodov, and A. Somech. OASSIS: query driven crowd mining. In SIGMOD, pages 589-600, 2014.
[2] J.-Y. Audibert, S. Bubeck, et al. Best arm identification in multi-armed bandits. COLT, 2010.
[3] Z. Bar-Yossef. The complexity of massive data set computations. PhD thesis, University of California, 2002.
[4] S. Bubeck, N. Cesa-Bianchi, et al. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1-122, 2012.
[5] S. Bubeck, R. Munos, and G. Stoltz. Pure exploration in finitely-armed and continuous-armed bandits. Theoretical Computer Science, 412(19):1832-1852, 2011.
[6] S. Chen, T. Lin, I. King, M. R. Lyu, and W. Chen. Combinatorial pure exploration of multi-armed bandits. In Advances in Neural Information Processing Systems, pages 379-387, 2014.
[7] D. P. Dubhashi and A. Panconesi. Concentration of measure for the analysis of randomized algorithms. Cambridge University Press, 2009.
[8] E. Even-Dar, S. Mannor, and Y. Mansour. Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problems. The Journal of Machine Learning Research, 7:1079-1105, 2006.
[9] V. Gabillon, M. Ghavamzadeh, and A. Lazaric. Best arm identification: A unified approach to fixed budget and fixed confidence. In Advances in Neural Information Processing Systems, pages 3212-3220, 2012.
[10] S. Kalyanakrishnan and P. Stone. Efficient selection of multiple bandit arms: Theory and practice. In ICML, pages 511-518, 2010.
[11] S. Kalyanakrishnan, A. Tewari, P. Auer, and P. Stone. PAC subset selection in stochastic multi-armed bandits. In ICML, pages 655-662, 2012.
[12] S. Mannor and J. N. Tsitsiklis. The sample complexity of exploration in the multi-armed bandit problem. The Journal of Machine Learning Research, 5:623-648, 2004.
[13] A. G. Parameswaran, S. Boyd, H. Garcia-Molina, A. Gupta, N. Polyzotis, and J. Widom. Optimal crowd-powered rating and filtering algorithms. PVLDB, 7(9):685-696, 2014.
[14] C. Sheng, Y. Tao, and J. Li. Exact and approximate algorithms for the most connected vertex problem. TODS, 37(2):12, 2012.
[15] J. Wang, E. Lo, and M. L. Yiu. Identifying the most connected vertices in hidden bipartite graphs using group testing. TKDE, 25(10):2245-2256, 2013.
[16] Y. Zhou, X. Chen, and J. Li. Optimal PAC multiple arm identification with applications to crowdsourcing. In ICML, pages 217-225, 2014.
[17] M. Zhu, D. Papadias, J. Zhang, and D. L. Lee. Top-k spatial joins. TKDE, 17(4):567-579, 2005.
5,556 | 6,028 | Improved Iteration Complexity Bounds of Cyclic
Block Coordinate Descent for Convex Problems
Ruoyu Sun*, Mingyi Hong†‡
Abstract
The iteration complexity of the block-coordinate descent (BCD) type algorithm
has been under extensive investigation. It was recently shown that for convex
problems the classical cyclic BCGD (block coordinate gradient descent) achieves
an O(1/r) complexity (r is the number of passes over all blocks). However, such
bounds depend at least linearly on K (the number of variable blocks), and
are at least K times worse than those of the gradient descent (GD) and proximal
gradient (PG) methods. In this paper, we close such theoretical performance gap
between cyclic BCD and GD/PG. First we show that for a family of quadratic
nonsmooth problems, the complexity bounds for cyclic Block Coordinate Proximal Gradient (BCPG), a popular variant of BCD, can match those of the GD/PG
in terms of dependency on K (up to a log²(K) factor). Second, we establish an
improved complexity bound for Coordinate Gradient Descent (CGD) for general
convex problems which can match that of GD in certain scenarios. Our bounds
are sharper than the known bounds, which are always at least K times worse than
GD. Our analyses do not depend on the update order of block variables inside
each cycle, thus our results also apply to BCD methods with random permutation
(random sampling without replacement, another popular variant).
1 Introduction
Consider the following convex optimization problem:

    min f(x) = g(x_1, . . . , x_K) + Σ_{k=1}^{K} h_k(x_k),   s.t. x_k ∈ X_k, ∀ k = 1, . . . , K,   (1)

where g : X → ℝ is a convex smooth function; h_k : X_k → ℝ is a convex lower semi-continuous
possibly nonsmooth function; x_k ∈ X_k ⊆ ℝ^N is a block variable. A very popular method for
solving this problem is the so-called block coordinate descent (BCD) method [5], where each time
a single block variable is optimized while the rest of the variables remain fixed. Using the classical
cyclic block selection rule, the BCD method can be described below.

Algorithm 1: The Cyclic Block Coordinate Descent (BCD)
At each iteration r, update the variable blocks by:

    x_k^{(r)} ← arg min_{x_k ∈ X_k}  g(x_k, w_{−k}^{(r)}) + h_k(x_k),   k = 1, . . . , K,   (2)
* Department of Management Science and Engineering, Stanford University, Stanford, CA,
[email protected]
† Department of Industrial & Manufacturing Systems Engineering and Department of Electrical & Computer
Engineering, Iowa State University, Ames, IA, [email protected]
‡ The authors contribute equally to this work.
where we have used the following short-handed notations:

    w_k^{(r)} := (x_1^{(r)}, . . . , x_{k−1}^{(r)}, x_k^{(r−1)}, x_{k+1}^{(r−1)}, . . . , x_K^{(r−1)}),   k = 1, . . . , K,
    w_{−k}^{(r)} := (x_1^{(r)}, . . . , x_{k−1}^{(r)}, x_{k+1}^{(r−1)}, . . . , x_K^{(r−1)}),   k = 1, . . . , K,
    x_{−k} := [x_1, . . . , x_{k−1}, x_{k+1}, . . . , x_K].
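As a concrete (and hypothetical, i.e. not from the paper) instance of Algorithm 1, the exact per-block minimization can be written out for a smooth quadratic f(x) = (1/2)·x^T Q x − c^T x, where each block update solves a small linear system (assuming each diagonal block of Q is positive definite so the block minimizer is unique):

```python
import numpy as np

def cyclic_bcd_quadratic(Q, c, blocks, T):
    """Exact cyclic BCD on f(x) = 0.5 x^T Q x - c^T x.  `blocks` is a list
    of index arrays partitioning the coordinates; each inner step minimizes
    f over one block with all other blocks held fixed."""
    x = np.zeros(len(c))
    for _ in range(T):                       # one pass over all blocks = one iteration r
        for idx in blocks:
            rest = np.setdiff1d(np.arange(len(c)), idx)
            rhs = c[idx] - Q[np.ix_(idx, rest)] @ x[rest]
            x[idx] = np.linalg.solve(Q[np.ix_(idx, idx)], rhs)
    return x

# usage: three blocks of a 6-dimensional strongly convex quadratic
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6)); Q = M @ M.T + np.eye(6); c = rng.standard_normal(6)
x = cyclic_bcd_quadratic(Q, c, [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6)], 50)
print(np.linalg.norm(Q @ x - c))   # residual of the optimality system; shrinks with T
```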
The convergence analysis of the BCD has been extensively studied in the literature, see [5, 14,
19, 15, 4, 7, 6, 10, 20]. For example, it is known that for smooth problems (i.e. f is continuously
differentiable but possibly nonconvex, h ≡ 0), if each subproblem has a unique solution and g is
non-decreasing in the interval between the current iterate and the minimizer of the subproblem (one
special case is per-block strict convexity), then every limit point of {x(r) } is a stationary point [5,
Proposition 2.7.1]. The authors of [6, 19] have derived relaxed conditions on the convergence of
BCD. In particular, when problem (1) is convex and the level sets are compact, the convergence of
the BCD is guaranteed without requiring the subproblems to have unique solutions [6]. Recently
Razaviyayn et al [15] have shown that the BCD converges if each subproblem (2) is solved inexactly,
by way of optimizing certain surrogate functions.
Luo and Tseng in [10] have shown that when problem (1) satisfies certain additional assumptions
such as having a smooth composite objective and a polyhedral feasible set, then BCD converges linearly without requiring the objective to be strongly convex. There are many recent works on showing
iteration complexity for randomized BCGD (block coordinate gradient descent), see [17, 12, 8, 16, 9]
and the references therein. However the results on the classical cyclic BCD is rather scant. Saha
and Tewari [18] show that the cyclic BCD achieves sublinear convergence for a family of special
LASSO problems. Nutini et al [13] show that when the problem is strongly convex, unconstrained
and smooth, BCGD with certain Gauss-Southwell block selection rule could be faster than the randomized rule. Recently Beck and Tetruashvili show that cyclic BCGD converges sublinearly if the
objective is smooth. Subsequently Hong et al in [7] show that such sublinear rate not only can
be extended to problems with nonsmooth objective, but is true for a large family of BCD-type algorithm (with or without per-block exact minimization, which includes BCGD as a special case).
When each block is minimized exactly and when there is no per-block strong convexity, Beck [2]
proves the sublinear convergence for certain 2-block convex problem (with only one block having
Lipschitzian gradient). It is worth mentioning that all the above results on cyclic BCD can be used
to prove the complexity for a popular randomly permuted BCD in which the blocks are randomly
sampled without replacement.
To illustrate the rates developed for the cyclic BCD algorithm, let us define X* to be the optimal
solution set for problem (1), and define the constant

    R_0 := max_{x ∈ X} max_{x* ∈ X*} { ‖x − x*‖ : f(x) ≤ f(x^{(0)}) }.   (3)
Let us assume that h_k(x_k) ≡ 0, X_k = ℝ^N, ∀ k for now, and assume that g(·) has Lipschitz
continuous gradient:

    ‖∇g(x) − ∇g(z)‖ ≤ L‖x − z‖,   ∀ x, z ∈ X.   (4)

Also assume that g(·, x_{−k}) has Lipschitz continuous gradient with respect to each x_k, i.e.,

    ‖∇_k g(x_k, x_{−k}) − ∇_k g(v_k, x_{−k})‖ ≤ L_k‖x_k − v_k‖,   ∀ x, v ∈ X, ∀ k.   (5)

Let L_max := max_k L_k and L_min := min_k L_k. It is known that the cyclic BCPG has the following
iteration complexity [4, 7]:¹

    Δ_BCD^{(r)} := f(x^{(r)}) − f* ≤ C·L_max·(1 + K·L²/L_min²)·R_0² · (1/r),   ∀ r ≥ 1,   (6)

where C > 0 is some constant independent of the problem dimension. Similar bounds are provided
for cyclic BCD in [7, Theorem 6.1]. In contrast, it is well known that when applying the classical
gradient descent (GD) method to problem (1) with the constant stepsize 1/L, we have the following
rate estimate [11, Corollary 2.1.2]:

    Δ_GD^{(r)} := f(x^{(r)}) − f(x*) ≤ 2‖x^{(0)} − x*‖²·L / (r + 4) ≤ 2R_0²·L / (r + 4),   ∀ r ≥ 1, ∀ x* ∈ X*.   (7)

Note that unlike (6), here the constant in front of the 1/(r + 4) term is independent of the problem
dimension. In fact, the ratio of the bound given in (6) and (7) is

    (C·L_max/L)·(1 + K·L²/L_min²) · (r + 4)/r,

which is at least in the order of K. For big data related problems with over millions of variables, a
multiplicative constant in the order of K can be a serious issue. In a recent work by Saha and Tewari
[18], the authors show that for a LASSO problem with special data matrix, the rate of cyclic BCD
(with special initialization) is indeed K-independent. Unfortunately, such a result has not yet been
extended to any other convex problems. An open question posed by a few authors [4, 3, 18] is: is
such a K-factor gap intrinsic to the cyclic BCD or merely an artifact of the existing analysis?

¹ Note that the assumptions made in [4] and [7] are slightly different, but the rates derived in both cases have
similar dependency on the problem dimension K.
2 Improved Bounds of Cyclic BCPG for Nonsmooth Quadratic Problem
In this section, we consider the following nonsmooth quadratic problem:

    min f(x) := (1/2)·‖ Σ_{k=1}^{K} A_k x_k − b ‖² + Σ_{k=1}^{K} h_k(x_k),   s.t. x_k ∈ X_k, ∀ k,   (8)

where A_k ∈ ℝ^{M×N}; b ∈ ℝ^M; x_k ∈ ℝ^N is the kth block coordinate; h_k(·) is the same as in
(1). Define A := [A_1, . . . , A_K] ∈ ℝ^{M×KN}. For simplicity of presentation, we have assumed that
all the blocks have the same size. Problem (8) includes, for example, LASSO and group LASSO as special cases.
We consider the following cyclic BCPG algorithm.

Algorithm 2: The Cyclic Block Coordinate Proximal Gradient (BCPG)
At each iteration r + 1, update the variable blocks by:

    x_k^{(r+1)} = arg min_{x_k ∈ X_k}  g(w_k^{(r+1)}) + ⟨∇_k g(w_k^{(r+1)}), x_k − x_k^{(r)}⟩ + (P_k/2)·‖x_k − x_k^{(r)}‖² + h_k(x_k).   (9)

Here P_k is the inverse of the stepsize for x_k, which satisfies

    P_k ≥ λ_max(A_k^T A_k) = L_k,   ∀ k.   (10)

Define P_max := max_k P_k and P_min := min_k P_k. Note that for the least squares problem (smooth
quadratic minimization, i.e. h_k ≡ 0, ∀ k), BCPG reduces to the widely used BCGD method.
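To illustrate, here is a minimal Python sketch (ours, not the authors' code) of Algorithm 2 for problem (8) with h_k = λ‖·‖₁, for which the subproblem (9) is solved in closed form by soft-thresholding; P_k is set to λ_max(A_k^T A_k) as in (10):

```python
import numpy as np

def cyclic_bcpg_l1(A_blocks, b, lam, T):
    """Cyclic BCPG for 0.5*||sum_k A_k x_k - b||^2 + lam * sum_k ||x_k||_1."""
    x = [np.zeros(Ak.shape[1]) for Ak in A_blocks]
    P = [np.linalg.norm(Ak, 2) ** 2 for Ak in A_blocks]  # P_k = lambda_max(A_k^T A_k)
    r = sum(Ak @ xk for Ak, xk in zip(A_blocks, x)) - b  # running residual A x - b
    for _ in range(T):
        for k, Ak in enumerate(A_blocks):
            g = Ak.T @ r                                  # block gradient at w_k^{(r+1)}
            z = x[k] - g / P[k]                           # gradient step, stepsize 1/P_k
            new = np.sign(z) * np.maximum(np.abs(z) - lam / P[k], 0.0)  # prox of h_k
            r += Ak @ (new - x[k])                        # keep the residual current
            x[k] = new
    return x

# usage on a small random instance with K = 4 blocks
rng = np.random.default_rng(0)
A_blocks = [rng.standard_normal((40, 5)) for _ in range(4)]
b = rng.standard_normal(40)
x = cyclic_bcpg_l1(A_blocks, b, lam=0.1, T=200)
```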
The optimality condition for the kth subproblem is given by

    ⟨∇_k g(w_k^{(r+1)}) + P_k·(x_k^{(r+1)} − x_k^{(r)}), x_k − x_k^{(r+1)}⟩ + h_k(x_k) − h_k(x_k^{(r+1)}) ≥ 0,   ∀ x_k ∈ X_k.   (11)
In what follows we show that the cyclic BCPG for problem (8) achieves a complexity bound that
depends on K only through a log²(NK) factor and, apart from such a log factor, is at least K times better than those
known in the literature. Our analysis consists of the following three main steps:
1. Estimate the descent of the objective after each BCPG iteration;
2. Estimate the cost yet to be minimized (cost-to-go) after each BCPG iteration;
3. Combine the above two estimates to obtain the final bound.
First we show that the BCPG achieves sufficient descent.
Lemma 2.1. We have the following estimate of the descent when using the BCPG:

    f(x^{(r)}) − f(x^{(r+1)}) ≥ Σ_{k=1}^{K} (P_k/2)·‖x_k^{(r+1)} − x_k^{(r)}‖².   (12)

Proof. We have the following series of inequalities:

    f(x^{(r)}) − f(x^{(r+1)})
      = Σ_{k=1}^{K} [ f(w_k^{(r+1)}) − f(w_{k+1}^{(r+1)}) ]
      ≥ Σ_{k=1}^{K} [ f(w_k^{(r+1)}) − ( g(w_k^{(r+1)}) + ⟨∇_k g(w_k^{(r+1)}), x_k^{(r+1)} − x_k^{(r)}⟩
            + (P_k/2)·‖x_k^{(r+1)} − x_k^{(r)}‖² + h_k(x_k^{(r+1)}) + Σ_{j<k} h_j(x_j^{(r+1)}) + Σ_{j>k} h_j(x_j^{(r)}) ) ]
      = Σ_{k=1}^{K} [ h_k(x_k^{(r)}) − h_k(x_k^{(r+1)}) − ⟨∇_k g(w_k^{(r+1)}), x_k^{(r+1)} − x_k^{(r)}⟩ − (P_k/2)·‖x_k^{(r+1)} − x_k^{(r)}‖² ]
      ≥ Σ_{k=1}^{K} (P_k/2)·‖x_k^{(r+1)} − x_k^{(r)}‖²,

where the first inequality uses the descent lemma together with P_k ≥ L_k, and the second inequality uses the optimality condition (11) with x_k = x_k^{(r)}.   Q.E.D.
To proceed, let us introduce two matrices P̃ and Ã given below, which have dimensions K × K and
MK × NK, respectively:

    P̃ := Diag(P_1, P_2, . . . , P_K),   Ã := BlkDiag(A_1, A_2, . . . , A_K),

that is, P̃ is the diagonal matrix with diagonal entries P_1, . . . , P_K, and Ã is the block-diagonal
matrix whose kth diagonal block is A_k.
By utilizing the definition of P_k in (10) we have the following inequalities (the second inequality
comes from [12, Lemma 1]):

    P̃ ⊗ I_N ⪰ Ã^T Ã,   K·Ã^T Ã ⪰ A^T A,   (13)

where I_N is the N × N identity matrix and the notation ⊗ denotes the Kronecker product.
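The two matrix inequalities in (13), as reconstructed here, can be sanity-checked numerically; the following snippet (ours, purely illustrative) verifies that both gaps are positive semidefinite on a random instance:

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, N = 4, 10, 3
A_blocks = [rng.standard_normal((M, N)) for _ in range(K)]
A = np.hstack(A_blocks)
AtA_blk = np.zeros((K * N, K * N))                 # \tilde{A}^T \tilde{A}
for k, Ak in enumerate(A_blocks):
    AtA_blk[k * N:(k + 1) * N, k * N:(k + 1) * N] = Ak.T @ Ak
Pk = [np.linalg.norm(Ak, 2) ** 2 for Ak in A_blocks]   # P_k = lambda_max(A_k^T A_k)
P_kron = np.kron(np.diag(Pk), np.eye(N))               # \tilde{P} (x) I_N
# both minimum eigenvalues should be >= -1e-9 (PSD up to round-off)
print(np.linalg.eigvalsh(P_kron - AtA_blk).min())      # P~ (x) I_N  >=  A~^T A~
print(np.linalg.eigvalsh(K * AtA_blk - A.T @ A).min()) # K A~^T A~   >=  A^T A
```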
Next let us estimate the cost-to-go.
Lemma 2.2. We have the following estimate of the optimality gap when using the BCPG:

    Δ^{(r+1)} := f(x^{(r+1)}) − f(x*) ≤ R_0·log(2NK)·√(L²/P_min + P_max) · ‖(P̃ ⊗ I_N)^{1/2}·(x^{(r+1)} − x^{(r)})‖.   (14)
Our third step combines the previous two steps and characterizes the iteration complexity. This is
the main result of this section.
Theorem 2.1. The iteration complexity of using BCPG to solve (8) is given below.
1. When the stepsizes are chosen conservatively as P_k = L, ∀ k, we have

    Δ^{(r+1)} ≤ 3·max{ Δ^{(0)}, 4·log²(2NK)·L·R_0² } / (r + 1).   (15)

2. When the stepsizes are chosen as P_k = λ_max(A_k^T A_k) = L_k, ∀ k, we have

    Δ^{(r+1)} ≤ 3·max{ Δ^{(0)}, 2·log²(2NK)·(L_max + L²/L_min)·R_0² } / (r + 1).   (16)

In particular, if the problem is smooth and unconstrained, i.e., when h ≡ 0 and X_k = ℝ^N, ∀ k,
then we have

    Δ^{(r+1)} ≤ 3·max{ L, 2·log²(2NK)·(L_max + L²/L_min) }·R_0² / (r + 1).   (17)
We comment on the bounds derived in the above theorem. The bound for BCPG with the uniform
"conservative" stepsize 1/L has the same order as that of the GD method, except for the log²(2NK) factor
(cf. (7)). In [4, Corollary 3.2], it is shown that BCGD with the same "conservative" stepsize
achieves a sublinear rate with a constant of 4L(1 + K)R_0², which is about K/(3·log²(2NK)) times
worse than our bound. Further, our bound has the same dependency on L (i.e., 12L vs. L/2) as
the one derived in [18] for BCPG with a "conservative" stepsize applied to an ℓ₁-penalized quadratic
problem with a special data matrix, but our bound holds true for a much larger class of problems (i.e.,
all quadratic nonsmooth problems of the form (8)). However, in practice such a conservative stepsize
is slow (compared with BCPG with P_k = L_k for all k), hence it is rarely used.
The rest of the bounds derived in Theorem 2.1 are again at least K/log²(2NK) times better than the
existing bounds for cyclic BCPG. For example, when the problem is smooth and unconstrained, the
ratio between our bound (17) and the bound (6) is

    6·log²(2NK)·(L²/L_min + L_max)·R_0² / [ C·L_max·(1 + K·L²/L_min²)·R_0² ]
      ≤ 6·log²(2NK)·(1 + L²/(L_min·L_max)) / [ C·(1 + K·L²/L_min²) ] = O(log²(2NK)/K),   (18)

where in the last inequality we have used the fact that L_max/L_min ≥ 1.
For unconstrained smooth problems, let us compare the bound derived in the second part of Theorem 2.1 (stepsize P_k = L_k, ∀ k) with that of the GD (7). If L = K·L_k for all k (problem badly
conditioned), our bound is about K·log²(2NK) times worse than that of the GD. This indicates a
counter-intuitive phenomenon: by choosing the conservative stepsize P_k = L, ∀ k, the iteration complexity of BCGD is K times better than with the more aggressive stepsize P_k = L_k, ∀ k. It
also indicates that the factor L/L_min may hide an additional factor of K.
3 Iteration Complexity for General Convex Problems
In this section, we consider improved iteration complexity bounds of BCD for general unconstrained
smooth convex problems. We prove a general iteration complexity result, which includes a result of
Beck et al. [4] as a special case. Our analysis for the general case also applies to smooth quadratic
problems, but is very different from the analysis in the previous sections for quadratic problems. For
simplicity, we only consider the case N = 1 (scalar blocks); the generalization to the case N > 1 is
left as future work.
Let us assume that the smooth objective g has second order derivatives H_ij(x) := ∂²g(x)/(∂x_i ∂x_j).
When each block is just a coordinate, we assume |H_ij(x)| ≤ L_ij, ∀ i, j. Then L_i = L_ii and
L_ij ≤ √(L_i·L_j). For unconstrained smooth convex problems with scalar block variables, the BCPG
iteration reduces to the following coordinate gradient descent (CGD) iteration:

    x^{(r)} = w_1^{(r)} →(d_1) w_2^{(r)} →(d_2) w_3^{(r)} →(d_3) · · · →(d_K) w_{K+1}^{(r)} = x^{(r+1)},   (19)

where d_k = ∇_k g(w_k^{(r)}) and w_k^{(r)} →(d_k) w_{k+1}^{(r)} means that w_{k+1}^{(r)} is a linear combination of w_k^{(r)} and
d_k·e_k (e_k is the k-th block unit vector).
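A direct transcription of (19) in Python (an illustration of ours), exercised on g(x) = (1/2)·‖Ax − b‖², for which the per-coordinate Lipschitz constants are L_k = (A^T A)_kk:

```python
import numpy as np

def cgd(grad_k, x0, P, T):
    """Iteration (19): within one pass, coordinate k moves by -d_k / P_k,
    where d_k is the k-th partial derivative at the *current* point w_k."""
    x = np.array(x0, dtype=float)
    for _ in range(T):
        for k in range(len(x)):
            x[k] -= grad_k(x, k) / P[k]
    return x

# usage on g(x) = 0.5*||Ax - b||^2, whose Hessian entries are H_ij = (A^T A)_ij
rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 5)), rng.standard_normal(30)
Lk = (A * A).sum(axis=0)                       # L_k = (A^T A)_kk = ||A[:, k]||^2
x = cgd(lambda x, k: A[:, k] @ (A @ x - b), np.zeros(5), Lk, 100)
print(np.linalg.norm(A.T @ (A @ x - b)))       # gradient norm; shrinks as T grows
```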
In the following theorem, we provide an iteration complexity bound for the general convex problem.
The proof framework follows the standard three-step approach that combines a sufficient descent and
a cost-to-go estimate; nevertheless, the analysis of the sufficient descent is very different from the
methods used in the previous sections. The intuition is that CGD can be viewed as an inexact
gradient descent method, thus the amount of descent can be bounded in terms of the norm of the full
gradient. It would be difficult to further tighten this bound if the goal is to obtain a sufficient descent
based on the norm of the full gradient. Having established the sufficient descent in terms of the
full gradient ∇g(x^{(r)}), we can easily prove the iteration complexity result, following the standard
analysis of GD (see, e.g. [11, Theorem 2.1.13]).
Theorem 3.1. For CGD with P_k ≥ L_max, ∀ k, we have

    g(x^{(r)}) − g(x*) ≤ 2·( P_max + min{K·L², (Σ_k L_k)²}/P_min )·R_0²/r,   ∀ r ≥ 1.   (20)
Proof. Since w_{k+1}^r and w_k^r only differ in the k-th block, and ∇_k g is Lipschitz continuous with
Lipschitz constant L_k, we have²

    g(w_{k+1}^r) ≤ g(w_k^r) + ⟨∇_k g(w_k^r), w_{k+1}^r − w_k^r⟩ + (L_k/2)·‖w_{k+1}^r − w_k^r‖²
               = g(w_k^r) − ((2P_k − L_k)/(2P_k²))·‖∇_k g(w_k^r)‖²
               ≤ g(w_k^r) − (1/(2P_k))·‖∇_k g(w_k^r)‖²,   (21)

where the last inequality is due to P_k ≥ L_k.

² A stronger bound is g(w_{k+1}^r) ≤ g(w_k^r) − (1/(2P̃_k))·‖∇_k g(w_k^r)‖², where P̃_k = P_k²/(2P_k − L_k) ≤ P_k, but since
P_k ≤ 2P_k − L_k ≤ 2P_k, the improvement ratio of using this stronger bound is no more than a factor of 2.
The amount of decrease can be estimated as

    g(x^r) − g(x^{r+1}) = Σ_{k=1}^{K} [ g(w_k^r) − g(w_{k+1}^r) ] ≥ Σ_{k=1}^{K} (1/(2P_k))·‖∇_k g(w_k^r)‖².   (22)

Since

    w_k^r = x^r − ( d_1/P_1, . . . , d_{k−1}/P_{k−1}, 0, . . . , 0 )^T,
by the mean-value theorem, there must exist ξ_k such that

    ∇_k g(x^r) − ∇_k g(w_k^r) = ∇(∇_k g)(ξ_k) · (x^r − w_k^r)
      = ( H_{k1}(ξ_k), . . . , H_{k,k−1}(ξ_k), 0, . . . , 0 ) · ( d_1/P_1, . . . , d_{k−1}/P_{k−1}, 0, . . . , 0 )^T,   (23)

where H_ij(x) = ∂²g(x)/(∂x_i ∂x_j) is the second order derivative of g. Then

    ∇_k g(x^r) = ∇_k g(x^r) − ∇_k g(w_k^r) + ∇_k g(w_k^r)
      = ( H_{k1}(ξ_k)/√P_1, . . . , H_{k,k−1}(ξ_k)/√P_{k−1}, √P_k, 0, . . . , 0 ) · ( d_1/√P_1, . . . , d_K/√P_K )^T
      = v_k^T d,   (24)

where we have defined

    d := ( d_1/√P_1, . . . , d_K/√P_K )^T,
    v_k := ( H_{k1}(ξ_k)/√P_1, . . . , H_{k,k−1}(ξ_k)/√P_{k−1}, √P_k, 0, . . . , 0 )^T.   (25)
Let

    V := [ v_1^T; v_2^T; · · · ; v_K^T ]
       = [ √P_1,             0,                0,      . . .,  0
           H_21(ξ_2)/√P_1,   √P_2,             0,      . . .,  0
           H_31(ξ_3)/√P_1,   H_32(ξ_3)/√P_2,   √P_3,   . . .,  0
           . . .
           H_K1(ξ_K)/√P_1,   H_K2(ξ_K)/√P_2,   . . .,  H_{K,K−1}(ξ_K)/√P_{K−1},  √P_K ].   (26)
Therefore, we have

    ‖∇g(x^r)‖² = Σ_k (∇_k g(x^r))² = Σ_k (v_k^T d)²   [by (24)]
               = ‖Vd‖² ≤ ‖V‖²·‖d‖² = ‖V‖² · Σ_k (1/P_k)·‖∇_k g(w_k^r)‖².

Combining with (22), we get

    g(x^r) − g(x^{r+1}) ≥ Σ_k (1/(2P_k))·‖∇_k g(w_k^r)‖² ≥ (1/(2‖V‖²))·‖∇g(x^r)‖².   (27)
Let D := Diag(P_1, . . . , P_K) and let H(ξ) be defined as the strictly lower triangular matrix

    H(ξ) := [ 0,           0,           0,      . . .,  0
              H_21(ξ_2),   0,           0,      . . .,  0
              H_31(ξ_3),   H_32(ξ_3),   0,      . . .,  0
              . . .
              H_K1(ξ_K),   H_K2(ξ_K),   . . .,  H_{K,K−1}(ξ_K),  0 ].   (28)

Then V = D^{1/2} + H(ξ)·D^{−1/2}, which implies

    ‖V‖² = ‖D^{1/2} + H(ξ)·D^{−1/2}‖² ≤ 2·(‖D^{1/2}‖² + ‖H(ξ)·D^{−1/2}‖²) ≤ 2·( P_max + ‖H(ξ)‖²/P_min ).
Plugging this into (27), we obtain

    g(x^{(r)}) − g(x^{(r+1)}) ≥ (1/2) · 1/( P_max + ‖H(ξ)‖²/P_min ) · ‖∇g(x^{(r)})‖².   (29)

From the fact that H_kj(ξ_k) is a scalar bounded above by |H_kj(ξ_k)| ≤ L_kj ≤ √(L_k·L_j), we have

    ‖H‖² ≤ ‖H‖_F² = Σ_{j<k} |H_kj(ξ_k)|² ≤ Σ_{j<k} L_k·L_j ≤ (Σ_k L_k)².   (30)

We provide a second bound on ‖H‖ below. Let H_k denote the k-th row of H; then ‖H_k‖ ≤ L.
Therefore, we have

    ‖H‖² ≤ ‖H‖_F² = Σ_k ‖H_k‖² ≤ Σ_k L² = K·L².

Combining this bound and (30), we obtain ‖H‖² ≤ min{K·L², (Σ_k L_k)²} =: Λ².
Denote α := 1 / ( 2·(P_max + Λ²/P_min) ); then (29) becomes

    g(x^{(r)}) − g(x^{(r+1)}) ≥ α·‖∇g(x^{(r)})‖²,   ∀ r.   (31)

This relation also implies g(x^{(r)}) ≤ g(x^{(0)}), thus by the definition of R_0 in (3) we have ‖x^{(r)} − x*‖ ≤ R_0. By the convexity of g and the Cauchy-Schwartz inequality, we have

    g(x^{(r)}) − g(x*) ≤ ⟨∇g(x^{(r)}), x^{(r)} − x*⟩ ≤ ‖∇g(x^{(r)})‖·R_0.

Combining with (31), we obtain

    g(x^{(r)}) − g(x^{(r+1)}) ≥ (α/R_0²)·( g(x^{(r)}) − g(x*) )².

Let Δ^{(r)} = g(x^{(r)}) − g(x*); we obtain

    Δ^{(r)} − Δ^{(r+1)} ≥ (α/R_0²)·(Δ^{(r)})².

Then we have

    1/Δ^{(r+1)} ≥ 1/Δ^{(r)} + (α/R_0²)·Δ^{(r)}/Δ^{(r+1)} ≥ 1/Δ^{(r)} + α/R_0².

Summing these inequalities, we get

    1/Δ^{(r+1)} ≥ 1/Δ^{(0)} + (α/R_0²)·(r + 1) ≥ (α/R_0²)·(r + 1),

which leads to

    Δ^{(r+1)} = g(x^{(r+1)}) − g(x*) ≤ (1/α)·R_0²/(r + 1) = 2·( P_max + Λ²/P_min )·R_0²/(r + 1),

where Λ² = min{K·L², (Σ_k L_k)²}. This completes the proof.   Q.E.D.
Let us compare this bound with the bound derived in [4, Theorem 3.1] (replacing the denominator
r + 8/K by r), which is

    g(x^r) − g(x*) ≤ 4·( P_max + (P_max/P_min)·K·L²/P_min )·R²/r.   (32)

In our new bound, besides reducing the coefficient from 4 to 2 and removing the factor P_max/P_min, we
improve K·L² to min{K·L², (Σ_k L_k)²}. Neither of the two bounds K·L² and (Σ_k L_k)² implies
the other: when L = L_k, ∀ k, the new bound (Σ_k L_k)² is K times larger; when L = K·L_k, ∀ k, or
L = L_1 > L_2 = · · · = L_K = 0, the new bound is K times smaller. In fact, when L = K·L_k, ∀ k,
our new bound is K times better than the bound in [4] for either P_k = L_k or P_k = L. For example,
when P_k = L, ∀ k, the bound in [4] becomes O(KL/r), while our bound is O(L/r), which matches GD
(listed in Table 1 below). Another advantage of the new bound (Σ_k L_k)² is that it does not increase
if we add an artificial block x_{K+1} and perform CGD for the function g̃(x, x_{K+1}) = g(x); in contrast,
the existing bound K·L² will increase to (K + 1)·L², even though the algorithm does not change at
all.
We have demonstrated that our bound can match GD in some cases, but can possibly be K times
worse than GD. An interesting question is: for general convex problems, can we obtain an O(L/r)
bound for cyclic BCGD, matching the bound of GD? Removing the K factor in (32) would lead to an
O(L/r) bound for the conservative stepsize P_k = L no matter how large L_k and L are. We conjecture that
an O(L/r) bound for cyclic BCGD cannot be achieved for general convex problems. That being said,
we point out that the iteration complexity of cyclic BCGD may depend on other intrinsic parameters
of the problem such as {L_k}_k and, possibly, third order derivatives of g. Thus the question of finding
the best iteration complexity bound of the form O(h(K)·L/r), where h(K) is a function of K, may
not be the right question to ask for BCD type algorithms.
4 Conclusion
In this paper, we provide new analyses and improved complexity bounds for cyclic BCD-type methods. For convex quadratic problems, we show that the bounds are O(L/r), which is independent of
K (except for a mild log²(2K) factor) and is about L_max/L + L/L_min times worse than those
for GD/PG. By a simple example we show that it is not possible to obtain an iteration complexity
of O(L/(Kr)) for cyclic BCPG. For illustration, the main results of this paper in several simple settings are summarized in the table below. Note that different ratios of L over L_k can lead to quite
different comparisons.
Table 1: Comparison of Various Iteration Complexity Results

method               | Diagonal Hessian: L_i = L, P_i = L | Full Hessian: L_i = L/K, large stepsize P_i = L/K | Full Hessian: L_i = L/K, small stepsize P_i = L
---------------------|------------------------------------|---------------------------------------------------|------------------------------------------------
GD                   | L/r                                | N/A                                               | L/r
Random BCGD          | L/r                                | L/(Kr)                                            | L/r
Cyclic BCGD [4]      | KL/r                               | K²L/r                                             | KL/r
Cyclic CGD, Cor 3.1  | KL/r                               | KL/r                                              | L/r
Cyclic BCGD (QP)     | log²(2K)·L/r                       | log²(2K)·KL/r                                     | log²(2K)·L/r
References
[1] J. R. Angelos, C. C. Cowen, and S. K. Narayan. Triangular truncation and finding the norm of a Hadamard multiplier. Linear Algebra and its Applications, 170:117-135, 1992.
[2] A. Beck. On the convergence of alternating minimization with applications to iteratively reweighted least squares and decomposition schemes. SIAM Journal on Optimization, 25(1):185-209, 2015.
[3] A. Beck, E. Pauwels, and S. Sabach. The cyclic block coordinate gradient method for convex optimization problems. 2015. Preprint, available on arXiv:1502.03716v1.
[4] A. Beck and L. Tetruashvili. On the convergence of block coordinate descent type methods. SIAM Journal on Optimization, 23(4):2037-2060, 2013.
[5] D. P. Bertsekas. Nonlinear Programming, 2nd ed. Athena Scientific, Belmont, MA, 1999.
[6] L. Grippo and M. Sciandrone. On the convergence of the block nonlinear Gauss-Seidel method under convex constraints. Operations Research Letters, 26:127-136, 2000.
[7] M. Hong, X. Wang, M. Razaviyayn, and Z.-Q. Luo. Iteration complexity analysis of block coordinate descent methods. 2013. Preprint, available online arXiv:1310.6957.
[8] Z. Lu and L. Xiao. On the complexity analysis of randomized block-coordinate descent methods. 2013. Accepted by Mathematical Programming.
[9] Z. Lu and L. Xiao. Randomized block coordinate non-monotone gradient method for a class of nonlinear programming. 2013. Preprint.
[10] Z.-Q. Luo and P. Tseng. On the convergence of the coordinate descent method for convex differentiable minimization. Journal of Optimization Theory and Applications, 72(1):7-35, 1992.
[11] Y. Nesterov. Introductory lectures on convex optimization: A basic course. Springer, 2004.
[12] Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341-362, 2012.
[13] J. Nutini, M. Schmidt, I. H. Laradji, M. Friedlander, and H. Koepke. Coordinate descent converges faster with the Gauss-Southwell rule than random selection. In the Proceedings of the 30th International Conference on Machine Learning (ICML), 2015.
[14] M. J. D. Powell. On search directions for minimization algorithms. Mathematical Programming, 4:193-201, 1973.
[15] M. Razaviyayn, M. Hong, and Z.-Q. Luo. A unified convergence analysis of block successive minimization methods for nonsmooth optimization. SIAM Journal on Optimization, 23(2):1126-1153, 2013.
[16] M. Razaviyayn, M. Hong, Z.-Q. Luo, and J. S. Pang. Parallel successive convex approximation for nonsmooth nonconvex optimization. In the Proceedings of Neural Information Processing Systems (NIPS), 2014.
[17] P. Richtárik and M. Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, 144:1-38, 2014.
[18] A. Saha and A. Tewari. On the nonasymptotic convergence of cyclic coordinate descent method. SIAM Journal on Optimization, 23(1):576-601, 2013.
[19] P. Tseng. Convergence of a block coordinate descent method for nondifferentiable minimization. Journal of Optimization Theory and Applications, 103(9):475-494, 2001.
[20] Y. Xu and W. Yin. A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion. SIAM Journal on Imaging Sciences, 6(3):1758-1789, 2013.
5,557 | 6,029 | Cornering Stationary and Restless Mixing Bandits
with Remix-UCB
Liva Ralaivola
Q ARMA, LIF, CNRS
Aix Marseille University
F-13289 Marseille cedex 9, France
[email protected]
Julien Audiffren
CMLA
ENS Cachan, Paris Saclay University
94235 Cachan France
[email protected]
Abstract
We study the restless bandit problem where arms are associated with stationary
φ-mixing processes and where rewards are therefore dependent: the question that
arises from this setting is that of carefully recovering some independence by "ignoring" the values of some rewards. As we shall see, the bandit problem we tackle
requires us to address the exploration/exploitation/independence trade-off, which
we do by considering the idea of a waiting arm in the new Remix-UCB algorithm, a generalization of Improved-UCB for the problem at hand, that we
introduce. We provide a regret analysis for this bandit strategy; two noticeable
features of Remix-UCB are that i) it reduces to the regular Improved-UCB
when the φ-mixing coefficients are all 0, i.e. when the i.i.d. scenario is recovered,
and ii) when φ(n) = O(n^{−γ}), it is able to ensure a controlled regret of order
Θ̃( Δ*^{(γ−2)/γ} · log^{1/γ} T ), where Δ* encodes the distance between the best arm
and the best suboptimal arm, even in the case when γ < 1, i.e. the case when the
φ-mixing coefficients are not summable.
1 Introduction
Bandit with mixing arms. The bandit problem consists in an agent who has to choose at each step
between K arms. A stochastic process is associated to each arm, and pulling an arm produces a
reward which is the realization of the corresponding stochastic process. The objective of the agent
is to maximize its long term reward. In the abundant bandit literature, it is often assumed that the
stochastic process associated to each arm is a sequence of independently and identically distributed
(i.i.d) random variables (see, e.g. [12]). In that case, the challenge the agent has to address is the
well-known exploration/exploitation problem: she has to simultaneously make sure that she collects
information from all arms to try to identify the most rewarding ones (this is exploration) and to
maximize the rewards along the sequence of pulls she performs (this is exploitation). Many algorithms have been proposed to solve this trade-off between exploration and exploitation [2, 3, 6, 12].
We propose to go a step further than the i.i.d setting and to work in the situation where the process
associated with each arm is a stationary φ-mixing process: the rewards are thus dependent on one
another, with a strength of dependence that weakens over time. From an application point of view,
this is a reasonable dependence structure: if a user clicks on some ad (a typical use of bandit algorithms) at some point in time, it is very likely that her choice will have an influence on what she will
click in the close future, while it may have a (lot) weaker impact on what ad she will choose to view
in a more distant future. As it shall appear in the sequel, working with such dependent observations
poses the question of how informative are some of the rewards with respect to the value of an arm
since, because of the dependencies and the strong correlation between close-by (in time) rewards,
they might not reflect the true ?value? of the arms. However, as the dependencies weaken over time,
some kind of independence might be recovered if some rewards are ignored, in some sense. This
actually requires us to deal with a new tradeoff, the exploration/exploitation/independence tradeoff,
where the usual exploration/exploitation compromise has to be balanced with the need for some
independence. Dealing with this new tradeoff is the pivotal feature of our work.
Non i.i.d bandit. A closely related setup that addresses the bandit problem with dependent rewards
is when they are distributed according to Markov processes, such as Markov chains and Markov
decision processes (MDPs) [16, 22], where the dependences between rewards are of bounded range,
which is what distinguishes those works from ours. Contributions in this area study two settings:
the rested case, where the process attached to an arm evolves only when the arm is pulled, and the
restless case, where all processes simultaneously evolve at each time step. In the present work, we
will focus on the restless setting. The adversarial bandit setup (see e.g. [1, 4, 19]) can be seen as
a non i.i.d setup as the rewards chosen by the adversary might depend on the agent's past actions.
However, even if the algorithms developed for this framework can be used in our setting, they might
perform very poorly as they are not designed to take advantage of any mixing structure. Finally, we
may also mention the bandit scenario where the dependencies are between the arms instead of being
within-arm time-dependent (e.g., [17]); this is orthogonal to what we propose to study here.
Mixing Processes. Mixing process theory is hardly new. One of the seminal works on the study of
mixing processes was done by Bernstein [5] who introduced the well-known block method, central
to prove results on mixing processes. In statistical machine learning, one of the first papers on
estimators for mixing processes is [23]. More recent works include the contributions of Mohri and
Rostamizadeh [14, 15], which address the problem of stability bounds and Rademacher stability for
φ- and β-mixing processes; Kulkarni et al [11] establish the consistency of regularized boosting
algorithms learning from β-mixing processes, Steinwart et al [21] prove the consistency of support
vector machines learning from α-mixing processes, and Steinwart and Christmann [20] establish a
general oracle inequality for generic regularized learning algorithms and α-mixing observations. As
far as we know, it is the first time that mixing processes are studied in a multi-arm bandit framework.
Contribution. Our main result states that a strategy based on the improved Upper Confidence
Bound (or Improved-UCB, in the sequel) proposed by Auer and Ortner [2] allows us to achieve a
controlled regret in the restless mixing scenario. Namely, our algorithm, Remix-UCB (which stands
for Restless Mixing UCB), achieves a regret of the form Θ̃(Δ_*^{(θ−2)/θ} log^{1/θ} T), where Δ_* encodes
the distance between the best arm and the best suboptimal arm, θ encodes the rate of decrease
of the φ coefficients, i.e. φ(n) = O(n^{−θ}), and Θ̃ is an O-like notation (that neglects logarithmic
dependencies, see Section 2.2). It is worth noticing that all the results we give hold for θ < 1, i.e.
when the dependencies are no longer summable. When the mixing coefficients at hand are all zero,
i.e. in the i.i.d case, the regret of our algorithm naturally reduces to that of the classical Improved-UCB.
Remix-UCB uses the assumption of known (convergence rates of) φ-mixing coefficients, which
is a classical standpoint that has been used by most of the papers studying the behavior of machine
learning algorithms in the case of mixing processes (see e.g. [9, 14, 15, 18, 21, 23]). The estimation
of the mixing coefficients poses a learning problem on its own (see e.g. [13] for the estimation of
β-mixing coefficients) and is beyond the scope of this paper.
Structure of the paper. Section 2 defines our setup: φ-mixing processes are recalled, together with
a relevant concentration inequality for such processes [10, 15]; the notion of regret we focus on is
given. Section 3 is devoted to the presentation of our algorithm, Remix-UCB, and to the statement
of our main result regarding its regret. Finally, Section 4 discusses the obtained results.
2 Overview of the Problem
2.1 Concentration of Stationary φ-mixing Processes
Let (Ω, F, P) be a probability space. We recall the notions of stationarity and φ-mixing processes.
Definition 1 (Stationarity). A sequence of random variables X = {X_t}_{t∈Z} is stationary if, for any
t, m ≥ 0, s ≥ 0, (X_t, ..., X_{t+m}) and (X_{t+s}, ..., X_{t+m+s}) are identically distributed.
Definition 2 (φ-mixing process). Let X = {X_t}_{t∈Z} be a stationary sequence of random variables.
For any i, j ∈ Z ∪ {−∞, +∞}, let σ_i^j denote the σ-algebra generated by {X_t : i ≤ t ≤ j}. Then,
for any positive n, the φ-mixing coefficient φ(n) of the stochastic process X is defined as

    φ(n) = sup_{t, A ∈ σ_{t+n}^{+∞}, B ∈ σ_{−∞}^{t}, P(B)>0} |P[A|B] − P[A]|.    (1)

X is φ-mixing if φ(n) → 0. X is algebraically mixing if ∃φ_0 > 0, θ > 0 so that φ(n) = φ_0 n^{−θ}.
As we recall later, concentration inequalities are the pivotal tools to devise multi-armed bandit
strategies. Hoeffding's inequality [7, 8] is, for instance, at the root of a number of UCB-based methods.
This inequality is yet devoted to characterizing the deviation of the sum of independent variables from
its expected value and cannot be used in the framework we are investigating. In the case of stationary
φ-mixing distributions, there however is the following concentration inequality, due to [10] and [15].
Theorem 1 ([10, 15]). Let Φ_m : U^m → R be a function defined over a countable space U, and X
be a stationary φ-mixing process. If Φ_m is ℓ-Lipschitz wrt the Hamming metric for some ℓ > 0, then

    ∀ε > 0,  P_X[ |Φ_m(X) − EΦ_m(X)| > ε ] ≤ 2 exp( −ε² / (2mℓ²Δ_m²) ),    (2)

where Δ_m = 1 + 2 Σ_{τ=1}^{m} φ(τ) and Φ_m(X) = Φ_m(X_0, ..., X_m).
Here, we do not have to use this concentration inequality in its full generality as we will restrict
to the situation where Φ_m is the mean of its arguments, i.e. Φ_m(X_{t_1}, ..., X_{t_m}) = (1/m) Σ_{i=1}^{m} X_{t_i},
which is obviously 1/m-Lipschitz provided that the X_t's have range [0; 1] (which will be one of
our working assumptions). If, with a slight abuse of notation, Δ_m is now used to denote

    Δ_m(t) = 1 + 2 Σ_{i=2}^{m} φ(t_i − t_1),    (3)

for an increasing sequence t = (t_i)_{i=1}^{m} of time steps, then the concentration inequality that will
serve our purpose is given in the next corollary.
Corollary 1 ([10, 15]). Let X be a stationary φ-mixing process. The following holds: for all ε > 0
and all m-sequence t = (t_i)_{i=1}^{m} with t_1 < ... < t_m,

    P_{{X_t}_{t∈t}} [ |(1/m) Σ_{i=1}^{m} X_{t_i} − EX_1| > ε ] ≤ 2 exp( −mε² / (2Δ_m²(t)) ).    (4)
(Thanks to the stationarity of {X_t}_{t∈Z} and the linearity of the expectation, E Σ_{i=1}^{m} X_{t_i} = m EX_1.)
Remark 3. According to Kontorovich's paper [10], the function Δ_m should be
max_j { 1 + 2 Σ_{i=j+1}^{m} φ(t_i − t_j) }. However, when the time lag between two consecutive
time steps t_i and t_{i+1} is non-decreasing, which will be imposed by the Remix-UCB algorithm
(see below), and the mixing coefficients are decreasing, which is a natural assumption that simply
says that the amount of dependence between X_t and X_{t'} reduces when |t − t'| increases, then Δ_m
reduces to the more compact expression given by (3).
Note that when there is independence, then φ(τ) = 0 for all τ, Δ_m = 1 and, as a consequence,
Equation (4) reduces to Hoeffding's inequality: the precise values of the time instants in t do not
impact the value of the bound and the length m of t is the central parameter that matters. This is in
clear contrast with what happens in the dependent setting, where the bound on the deviation
of Σ_{i=1}^{m} X_{t_i}/m from its expectation directly depends on the timepoints t_i through Δ_m. For two
sequences t = (t_i)_{i=1}^{m} and t' = (t'_i)_{i=1}^{m} of m timepoints, Σ_{i=1}^{m} X_{t_i}/m may be more sharply concentrated around EX_1 than Σ_{i=1}^{m} X_{t'_i}/m provided Δ_m(t) < Δ_m(t'), which can be a consequence
of a more favorable spacing of the points in t than in t'.
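To make the role of the spacing concrete, here is a small numerical sketch (our own illustration, not code from the paper): it evaluates Δ_m(t) and the confidence width implied by (4) for consecutive pulls versus pulls spread as n^{1/θ}, under the assumption φ(n) = n^{−θ} with θ = 1/4 (the example reused in Section 3.1). All function and variable names are ours.

import numpy as np

def Delta(t, theta):
    """Delta_m(t) = 1 + 2 * sum_{i>=2} phi(t_i - t_1), as in (3)."""
    t = np.asarray(t, dtype=float)
    return 1.0 + 2.0 * np.sum((t[1:] - t[0]) ** (-theta))

def width(t, theta, conf=0.05):
    """eps such that P(|empirical mean - EX_1| > eps) <= conf, solved from (4)."""
    return Delta(t, theta) * np.sqrt(2.0 * np.log(2.0 / conf) / len(t))

consecutive = np.arange(1, 101)        # pulls at times 1, 2, ..., 100
spread = np.arange(1, 101) ** 4        # pulls at times n^4, i.e. n^{1/theta}
for name, sched in (("consecutive", consecutive), ("spread", spread)):
    print(f"{name:>11}: Delta = {Delta(sched, 0.25):6.1f}, "
          f"width = {width(sched, 0.25):.2f}")

With the same number of pulls, the spread schedule yields a much smaller Δ_m(t) and thus a tighter interval, which is the effect exploited throughout the paper.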
2.2 Problem: Minimize the Expected Regret
We may now define the multi-armed bandit problem we consider and the regret we want to control.
Restless φ-mixing Bandits. We study the problem of sampling from a K-armed φ-mixing bandit.
In our setting, pulling arm k at time t provides the agent with a realization of the random variable X_t^k,
where the family (X_t^k)_{t∈Z} satisfies the following assumptions: (A) ∀k, (X_t^k)_{t∈Z} is a stationary φ-mixing process with decreasing mixing coefficients φ_k, and (B) ∀k, X_1^k takes its values in a discrete
finite set (by stationarity, the same holds for any X_t^k, with t ≠ 1) included in [0; 1].
Regret. The regret we want to bound is the classical pseudo-regret which, after T pulls, is given by

    R(T) = T μ_* − E [ Σ_{t=1}^{T} μ_{I_t} ],    (5)

where μ_k = EX_1^k, μ_* = max_k μ_k, and I_t is the index of the arm selected at time t. We want to
devise a strategy capable of selecting, at each time t, the arm I_t so that the obtained regret is minimal.
Bottleneck. The setting we assume entails the possibility of long-term dependencies between the
rewards output by the arms. Hence, as evoked earlier, in order to choose which arm to pull, the agent
is forced to address the exploration/exploitation/independence trade-off where independence may
be partially recovered by taking advantage of the observation regarding spacings of timepoints that
induce sharper concentration of the empirical rewards than others. As emphasized later, targeting
good spacing in the bandit framework translates into the idea of ignoring the rewards provided by
some pulls to compute the empirical averages: this idea is carried by the concept of a waiting arm,
which is formally defined later on. The questions raised by the waiting arm that we address with
the Remix-UCB algorithm are a) how often should the waiting arm be pulled so the concentration
of the empirical means is high enough to be relied on (so the usual exploration/exploitation tradeoff
can be tackled) and b) from the regret standpoint, how hindering is it to pull the waiting arm?
O and Θ̃ analysis. In the analysis of Remix-UCB that we provide, just as is the case for most, if not
all, analyses that exist for bandit algorithms, we will focus on the order of the regret and we will not
be concerned about the precise constants involved in the derived results. We will therefore naturally
heavily rely on the usual O notation and on the Θ̃ notation, that bears the following meaning.
Definition 4 (Θ̃ notation). For any two functions f, g from R to R, we say that f = Θ̃(g) if there
exist α, β > 0 so that |f| log^α |f| ≥ |g|, and |g| log^β |g| ≥ |f|.
3 Remix-UCB: a UCB Strategy for Restless Mixing Bandits
This section contains our main contribution: the Remix-UCB algorithm. From now on, we use
a ∨ b (resp. a ∧ b) for the maximum (resp. minimum) of two elements a and b. We consider that
the processes attached to the arms are algebraically mixing and for arm k, the exponent is θ_k > 0:
there exists φ_{k,0} such that φ_k(t) = φ_{k,0} t^{−θ_k} (this assumption is not very restrictive, as
rates such as t^{−θ_k} are appropriate and natural to capture and characterize the decreasing behavior of the
convergent sequence (φ_k(t))_t). Also, we will sometimes say that arm k is faster (resp. slower) than
arm k' for k ≠ k', to convey the fact that θ_k > θ_{k'} (resp. θ_k < θ_{k'}).
For any k and any increasing sequence τ = (τ(n))_{n=1}^{t} of t timepoints, the empirical reward μ̂_k^τ of
k given τ is μ̂_k^τ = (1/t) Σ_{n=1}^{t} X_{τ(n)}^k. The subscripted notation τ_k = (τ_k(n))_{1≤n≤t} is used to denote
the sequence of timepoints at which arm k was selected. Finally, we define Δ_k^{τ_k} in a similar way as
in (3), the difference with the former notation being the subscript k, as

    Δ_k^{τ_k} = 1 + 2 Σ_{n=1}^{t} φ_k(τ_k(n) − τ_k(1)).    (6)
We feel it is important to discuss when Improved-UCB may be robust to the mixing process scenario.
3.1 Robustness of Improved-UCB to Restless φ-Mixing Bandits
We will not recall the Improved-UCB algorithm [2] in its entirety as it will turn out to be a special
case of our Remix-UCB algorithm, but it is instructive to identify its distinctive features that make
it a relevant base algorithm for the handling of mixing processes. First, it is essential to keep in mind
that Improved-UCB is designed for the i.i.d case and that it achieves an optimal O(log T ) regret.
Second, it is an algorithm that works in successive rounds/epochs, at the end of each of which a
number of arms are eliminated because they are identified (with high probability) as being the least
promising ones, from a regret point of view. More precisely, at each round, the same number of
consecutive pulls is planned for each arm: this number is induced by Hoeffding's inequality [8] and
devised in such a way that all remaining arms share the same confidence interval for their respective
expected gains, the μ_k = EX_1^k, for k in the set of remaining arms at the current round. From a
technical standpoint, this is what makes it possible to draw conclusions on whether an arm is useless
(i.e. eliminated) or not. It is enlightening to understand what are the favorable and unfavorable
setups for Improved-UCB to keep working when facing restless mixing bandits. The following
Proposition depicts the favorable case.
Proposition 5. If Σ_t φ_k(t) < +∞, ∀k, then the classical Improved-UCB run on the restless
φ-mixing bandit preserves its O(log T) regret.
Proof. Straightforward. Given the assumption on the mixing coefficients, there exists M > 0 such that
max_{k∈{1,...,K}} Σ_{t≥0} φ_k(t) < M. Therefore, from Theorem 1, for any arm k, and any sequence τ
of |τ| consecutive timepoints, P(|μ_k − μ̂_k^τ| > ε) ≤ 2 exp( −|τ|ε² / (2(1 + 2M)²) ), which is akin to Hoeffding's inequality up to the multiplicative (1 + 2M)² constant in the exponential. This, and the lines
to prove the O(log T) regret of Improved-UCB [2], directly give the desired result.
In the general case where Σ_t φ_k(t) < +∞ does not hold for every k, nothing ensures that
Improved-UCB keeps working, the idea of consecutive pulls being the essential culprit. To
illustrate the problem, suppose that ∀k, φ_k(n) = n^{−1/4}. Then, after a sequence τ = (t_1 + 1, t_1 +
2, ..., t_1 + t) of t consecutive time instances where k was selected, simple calculations give that
Δ_k^τ = O(t^{3/4}) and the concentration inequality from Corollary 1 for μ̂_k^τ reads as

    P(|μ_k − μ̂_k^τ| > ε) ≤ 2 exp( −Cε² t^{−1/2} ),    (7)

where C is some strictly positive constant. The quality of the confidence interval that can be derived
from this concentration inequality degrades when additional pulls are performed, which counters
the usual nature of concentration inequalities and prevents the obtention of a reasonable regret for
Improved-UCB. This is a direct consequence of the dependency of the φ-mixing variables. Indeed, if φ(n) decreases slowly, taking the average over multiple consecutive pulls may move the
estimator away from the mean value of the stationary process.
Another way of understanding the difference between the i.i.d. case and the restless mixing case is
to look at the sizes of the confidence intervals around the true value of an arm when the time t to the
next pull increases. Given Corollary 1, Improved-UCB run in the restless mixing scenario would
advocate a pulling strategy based on the lengths ε_k of the confidence intervals given by

    ∀k,  ε_k(t) = |τ_k|^{−1/2} √( 2 (Δ_k^{τ_k} + 2φ_k(t − τ_k(1)))² log(t) ),    (8)

where t is the overall time index. This shows that working in the i.i.d. case or in the mixing case
can imply two different behaviors for the lengths of the confidence interval: in the i.i.d. scenario,
ε_k has the same form as the classical UCB term (as φ_k = 0 and Δ_k^{τ_k} = 1) and is an increasing
function of t, while in the φ-mixing scenario the behavior may be non-monotonic, with a decreasing
confidence interval up to some point after which the confidence interval becomes increasingly larger.
As the purpose of exploration is to tighten the confidence interval as much as possible, the mixing
framework points to carefully designed strategies. For instance, when an arm is slow, it is beneficial
to wait between two successive pulls of this arm.
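The following short sketch (ours; the parameter values are assumptions, with φ(n) = n^{−3/4} and a single past significant pull) illustrates this non-monotonic behavior of (8) numerically: the interval first shrinks as the dependence on the past pull fades, then widens again through the log(t) term.

import numpy as np

theta = 0.75                      # phi(n) = n^(-theta), a non-summable case
phi = lambda n: n ** (-theta)
pulls = np.array([1.0])           # a single significant pull, at time 1
delta_k = 1.0 + 2.0 * np.sum(phi(pulls[1:] - pulls[0]))  # = 1 here

def eps_k(t):
    """Length of the interval (8) at overall time t, for the fixed history."""
    m = len(pulls)
    return np.sqrt(2.0 * (delta_k + 2.0 * phi(t - pulls[0])) ** 2 * np.log(t) / m)

ts = np.arange(2, 10000)
widths = np.array([eps_k(t) for t in ts])
print("narrowest interval at t =", ts[np.argmin(widths)])  # strictly inside the horizon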
By alternating the pulls of the different arms, it is possible to wait up to K units of time between
two consecutive pulls of the same arm. However, this is not sufficient to recover enough independence
between the two observed values. For instance, in the case described in (7), after a sequence τ =
(t_1, t_1 + K, ..., t_1 + tK), simple calculations give that Δ_k^τ = O((Kt)^{3/4}) and the concentration
inequality from Corollary 1 for μ̂_k^τ reads as P(|μ_k − μ̂_k^τ| > ε) ≤ 2 exp( −CK^{−3/2} ε² t^{−1/2} ), which
entails the same problem.
The problem exhibited above is that if the decrease of the φ_k is too slow, pulling an arm in the
traditional way, with consecutive pulls, and updating the value of the empirical estimator may lower
the certainty with which the estimation of the expected gain is performed. To solve this problem
and reduce the confidence intervals that are computed for each arm, a better independence between
Algorithm 1 Remix-UCB, with parameters K, (φ_i)_{i=1,...,K}, T, G defined in (11)
  B_0 ← {1, ..., K};  θ ← 1 ∧ min_{i∈B_0} θ_i;  μ̂_i ← 0, n_i ← 0 for i = 1, ..., K;  i_* ← 1
  for s = 1, ..., ⌊G^{−1}(T)⌋ do
    Select arm: If |B_s| > 1, then until total time T_s = ⌈G(s)⌉ pull each arm i ∈ B_s at the times τ_i(·)
    defined in (10). If no arm is ready to be pulled, pull the waiting arm i_* instead.
    Update:
    1. Update the empirical mean μ̂_i and the number of pulls n_i for each arm i ∈ B_s.
    2. Obtain B_{s+1} by eliminating from B_s each arm i such that

       μ̂_i + √( 2 (1 + 2 Σ_{j=1}^{n_i} φ_i(τ_i(j)))² log(T 2^{−2s}) / n_i )
           < max_{k∈B_s} [ μ̂_k − √( 2 (1 + 2 Σ_{j=1}^{n_k} φ_k(τ_k(j)))² log(T 2^{−2s}) / n_k ) ].

    3. Update

       θ ← 1 ∧ min_{i∈B_{s+1}} θ_i   and   i_* ← argmax_{i∈B_{s+1}} { μ̂_i + √( 2 (1 + 2 Σ_{j=1}^{n_i} φ_i(τ_i(j)))² log(T 2^{−2s}) / n_i ) }.
  end for
the values observed from a given arm is required. This can only be achieved by waiting for the
time to pass by. Since an arm must be pulled at each time t, simulating the time passing by may be
implemented by the idea of pulling an arm but not updating the empirical mean μ̂_k of this arm with
the observed reward. At the same time, it is important to note that even if we do not update the
empirical mean of the arm, the resort to the waiting arm may impact the regret. It is therefore crucial
to ensure that we pull the best possible arm to limit the resulting regret, whence the use of the arm with the
best optimistic value as the waiting arm. Note that this arm may change over time. For
the rest of the paper, τ will only refer to significant pulls of an arm, that is, pulls that lead to an
update of the empirical value of the arm.
3.2 Algorithm and Regret bound
We may now introduce Remix-UCB, depicted in Algorithm 1. As Improved-UCB, Remix-UCB
works in epochs and eliminates, at each epoch, the significantly suboptimal arms.
High-Level View. Let (ε_s)_{s∈N} be a decreasing sequence of R_+^* and (δ_s)_{s∈N} ∈ R_+^N. The main idea
promoted by Remix-UCB is to divide the time available into epochs 1, ..., s_max (the outer loop of
the algorithm), such that at the end of each epoch s, for all the remaining arms k the following holds:
P(μ̂_k^{τ_k} ≥ μ_k + ε_s) ∨ P(μ̂_k^{τ_k} ≤ μ_k − ε_s) ≤ δ_s, where τ_k identifies the time instants up to current
time t when arm k was selected. Using (4), this means that, for all k, with high probability:

    |μ̂_k^{τ_k} − μ_k| ≤ n_k^{−1/2} √( 2 (Δ_k^{τ_k})² log(1/δ_s) ).    (9)

Thus, at the end of epoch s we have, with high probability, a uniform control of the uncertainty
with which the empirical rewards μ̂_k^{τ_k} approximate their corresponding rewards μ_k. Based on
this, the algorithm eliminates the arms that appear significantly suboptimal (step 2 of the update
of Remix-UCB). Just as in Improved-UCB, the process is re-iterated with parameters ε_s and δ_s
adjusted as δ_s = 1/(T ε_s²) and ε_s = 1/2^s, where T is the time budget; the modifications of the ε_s
and δ_s values make it possible to gain additional information, through new pulls, on the quality of
the remaining arms, so arms associated with close-by rewards can be distinguished by the algorithm.
Policy for pulling arms at epoch s. The objective of the policy is to obtain a uniform control of the
uncertainty/confidence intervals (9) of all the remaining arms. For some arm k and fixed time budget
T, such a policy could be obtained as the solution of

    min_{κ_s, (t_i)_{i=1}^{κ_s}} t_{κ_s}   such that   (Δ^{τ_s})² / (n_{s−1} + κ_s) < ξ,

where the times of pulls t_i must be increasing and greater than t_0, the last element of τ_{s−1},
τ_s = τ_{s−1} ∪ (t_1, ..., t_{κ_s}), and n_{s−1} (the number of times this arm has already been pulled),
ξ and τ_{s−1} are given. This conveys our aim to reach the targeted confidence interval as fast and
efficiently as possible. However, this problem does not have a closed-form solution and, even if it
could be solved efficiently, we are more interested in assessing whether it is possible to devise
relevant sequences of timepoints that induce a controlled regret, even if they do not solve the
optimization problem. To this end, we only focus on
the best sampling rate of the arms, which is an approximation of the previous minimization problem:
for each k, we search for sampling schemes of the form τ_k(n) = t_n = O(n^κ) for κ ≥ 1. For the
case where the φ_k are not summable (θ_k ≤ 1), we have the following result.
Proposition 6. Let θ_k ∈ (0; 1] (recall that φ_k(n) = n^{−θ_k}). The optimal sampling rate κ_k for arm
k is τ_k(n) = Θ̃(n^{1/θ_k}).
Proof. The idea of the proof is that if the sampling is too frequent (i.e. κ close to 1), then the
dependency between the values of the arm reduces the information obtained by taking the average.
In other words, Σ_n φ_k(τ_k(n)) increases too quickly. On the other hand, if the sampling is too scarce
(i.e. κ is very large), the information obtained at each pull is important, but the total amount of pulls
in a given time T is approximately T^{1/κ} and thus is too low. The optimal solution to this trade-off
is to take κ = 1/θ, which directly comes from the fact that this is the point where Σ_n φ_k(τ_k(n))
becomes logarithmic. The complete proof is available in the supplementary material.
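The trade-off in this proof is easy to check numerically. The sketch below (our code, not the paper's; it sums φ_k(τ_k(n)) rather than φ_k(τ_k(n) − τ_k(1)), which has the same order) compares three sampling exponents κ when φ_k(n) = n^{−θ} with θ = 1/2: κ = 1/θ is the point where the sum becomes logarithmic.

import math

def mixing_mass(theta, kappa, N=100000):
    """sum_n phi(tau(n)) for the schedule tau(n) = ceil(n^kappa)."""
    return sum(math.ceil(n ** kappa) ** (-theta) for n in range(1, N))

theta = 0.5
for kappa in (1.0, 1.0 / theta, 4.0):
    print(f"kappa = {kappa:.1f}: sum of phi over 1e5 pulls = "
          f"{mixing_mass(theta, kappa):.1f}")
# kappa = 1 keeps strong dependence (the sum grows like sqrt(N)); kappa = 1/theta = 2
# gives a ~log N sum; kappa = 4 over-waits: only N pulls fit in a horizon of N^4 steps.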
If θ_k < 1 for all k, this result means that the best policy (with a sampling scheme of the form
O(n^κ)) should update the empirical means associated with each arm k at a rate O(n^{1/θ_k}); contrary
to the i.i.d case, it is therefore not relevant to try and update the empirical rewards at each time step.
There henceforth must be gaps between updates of the means: this is precisely the role of the waiting
arm to make these gaps possible. As seen in the depiction of Remix-UCB, when pulled, the waiting
arm provides a reward that will count for the cumulative gains of the agent and help her control her
regret, but that will not be used to update any empirical mean.
As for a precise pulling strategy to implement given Proposition 6, it must be understood that it
is the slowest arm that determines the best uniform control possible, since it is the one which will
be selected the least number of times: it is unnecessary to pull the fastest arms more often than
the slowest arm. Therefore, if i_1, ..., i_{k_s} are the k_s remaining arms at epoch s, and θ = 1 ∧
min_{i∈{i_1,...,i_{k_s}}} θ_i,¹ then an arm selection strategy based on the rate of the slowest arm suggests to
pull arm i_m and update μ̂_{i_m}^{τ_{i_m}} for the n-th time at time instants

    τ_{i_m}(n) = (τ_{i_1}(n − 1) + k_s) ∨ ⌈n^{1/θ}⌉   if m = 1,
    τ_{i_m}(n) = τ_{i_1}(n) + m − 1                  otherwise,    (10)

(i.e. all arms are pulled at the same O(n^{1/θ}) frequency) and to pull the waiting arm while waiting.
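A literal transcription of the schedule (10) is given below (our code; the initial value τ_{i_1}(0) = 0 is our assumption). It shows that all remaining arms are pulled in consecutive blocks whose pace is set by ⌈n^{1/θ}⌉; every time step not covered by the schedule goes to the waiting arm.

import math

def schedule(n_pulls, k_s, theta):
    """Times of the n-th significant pull of arms i_1 .. i_{k_s}, as in (10)."""
    tau = [[0] * (n_pulls + 1) for _ in range(k_s)]
    for n in range(1, n_pulls + 1):
        tau[0][n] = max(tau[0][n - 1] + k_s, math.ceil(n ** (1.0 / theta)))
        for m in range(1, k_s):
            tau[m][n] = tau[0][n] + m
    return [row[1:] for row in tau]

for arm, times in enumerate(schedule(6, 3, 0.5)):
    print(f"arm {arm + 1}:", times)
# every time step not listed above is spent on the waiting arm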
Time budget per epoch. In the Remix-UCB algorithm, the function G defines the size of the
rounds. The definition of G is rather technical: we have G(s) = max_{k∈B_s} G_k(s), where

    G_k(s) = inf { t ∈ N_+ : 2 (Δ_k^{τ_k})² log(1/δ_s) ≤ t^θ ε_s² },    (11)

and where the τ_k(n) are defined above. In other words, G_k encodes the minimum amount of time
necessary to reach the aimed length of confidence interval by following the aforementioned policy.
But the most interesting property of G is that G(s) = Θ̃((ε_s^{−2} log(1/δ_s))^{1/θ}). This is the key element
which will be used in the proof of the regret bound, which can be found in Theorem 2 below.
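The next sketch (ours, with a coarse doubling search and our reading of the stopping rule) evaluates (11) directly, under the schedule τ_k(j) = ⌈j^{1/θ}⌉; it makes the Θ̃((ε_s^{−2} log(1/δ_s))^{1/θ}) growth of the epoch budgets tangible.

import math

def G_k(s, theta, T):
    eps2 = 4.0 ** -s                    # eps_s^2 with eps_s = 2^{-s}
    log_term = math.log(T * eps2)       # log(1/delta_s), delta_s = 1/(T eps_s^2)
    n = 1
    while True:                         # double the number of significant pulls
        tau = [math.ceil(j ** (1.0 / theta)) for j in range(1, n + 1)]
        delta = 1.0 + 2.0 * sum((x - tau[0]) ** -theta for x in tau[1:])
        if 2.0 * delta ** 2 * log_term <= n * eps2:   # n plays the role of t^theta
            return tau[-1]              # time of the n-th pull = budget needed
        n *= 2

print([G_k(s, theta=0.5, T=10 ** 4) for s in range(1, 4)])  # budgets blow up fast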
Putting it all together. At epoch s, the Remix-UCB algorithm starts by selecting the best empirical
arm and flags it as the waiting arm. It then determines the speed θ of the slowest arm, after which it
computes a time budget T_s = G(s). Then, until this time horizon is reached, it pulls arms following
the policy described above. Finally, after the time budget is reached, the algorithm eliminates the
arms whose empirical mean is significantly lower than the best available empirical mean.
Note that when all the φ_k are summable, we have θ = 1, and thus the algorithm never pulls the
waiting arm: Remix-UCB mainly differs from Improved-UCB by its strategy of alternate pulls.
The result below provides an upper bound for the regret of the Remix-UCB algorithm:
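For readers who prefer code, here is a compact, runnable transcription of this control flow (ours, with simplifications flagged in the comments: rewards are simulated i.i.d. Bernoulli only to exercise the loop, the schedule (10) is restarted at each epoch, and G(s) is replaced by the crude closed-form budget of the previous sketch).

import math, random

def remix_ucb(mus, thetas, T, seed=0):
    rng = random.Random(seed)
    K = len(mus)
    B = list(range(K))                  # B_s: remaining arms
    n = [0] * K                         # significant pulls per arm
    mu_hat = [0.0] * K                  # empirical means over significant pulls
    first = [0] * K                     # time of each arm's first significant pull
    Delta = [1.0] * K                   # Delta_k^{tau_k}, maintained incrementally
    star, t, s, regret = 0, 0, 1, 0.0

    def width(i):                       # radius used in the elimination test
        return math.sqrt(2.0 * Delta[i] ** 2
                         * math.log(max(T * 4.0 ** -s, 2.0)) / n[i])

    while t < T:
        theta = min(1.0, min(thetas[i] for i in B))
        budget = int((4.0 ** s * math.log(max(T * 4.0 ** -s, 2.0))) ** (1.0 / theta))
        start, horizon, m = t, min(T, t + max(budget, len(B))), 0
        while t < horizon:
            m += 1                      # m-th significant round of this epoch
            due = start + max(m * len(B), math.ceil(m ** (1.0 / theta)))
            while t < min(due, horizon) - len(B):     # not ready: waiting arm
                t += 1
                regret += max(mus) - mus[star]
            for i in B:                               # one significant pull each
                if t >= horizon:
                    break
                t += 1
                regret += max(mus) - mus[i]
                x = 1.0 if rng.random() < mus[i] else 0.0
                n[i] += 1
                mu_hat[i] += (x - mu_hat[i]) / n[i]
                first[i] = first[i] or t
                if n[i] > 1:
                    Delta[i] += 2.0 * (t - first[i]) ** -thetas[i]
        if len(B) > 1:                  # eliminate arms whose UCB < best LCB
            best_lcb = max(mu_hat[i] - width(i) for i in B)
            B = [i for i in B if mu_hat[i] + width(i) >= best_lcb]
            star = max(B, key=lambda i: mu_hat[i] + width(i))
        s += 1
    return regret

print(remix_ucb(mus=[0.9, 0.8, 0.5], thetas=[0.5, 0.7, 1.0], T=20000))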
Theorem 2. For all arms k, let 1 ≥ θ_k > 0 and φ_k(n) = n^{−θ_k}. Let θ = min_{k∈{1,...,K}} θ_k and
Δ_* = min_{k∈{1,...,K}} {Δ_k > 0}. If θ ≤ 1, the regret of Remix-UCB is bounded in order by

    Θ̃( Δ_*^{(θ−2)/θ} log(T)^{1/θ} ).    (12)

¹ Since 1/θ encodes the rate of sampling, it cannot be greater than 1.
Proof. The proof follows the same lines as the proof of the upper bound on the regret of the
Improved-UCB algorithm. The important modifications are the sizes of the blocks, which depend,
in the mixing case, on the φ-mixing coefficients and might grow arbitrarily large, and the waiting arm,
which does not exist in the i.i.d. setting. The dominant term in the regret mentioned in Theorem 2
is related to the pulls of the waiting arm. Indeed, the waiting arm is pulled with an always increasing frequency, but the quality of the waiting arm tends to increase over time, as the arms with the
smallest values are eliminated. The complete proof is available in the supplementary material.
4 Discussion and Particular Cases
We here discuss Theorem 2 and some of its variations for special cases of φ-mixing processes.
First, in the i.i.d case, the regret of Improved-UCB is upper bounded by O(Δ_*^{−1} log(T)) [2].
Observe that (12) comes down to this bound when θ tends to 1. Also, note that it is an upper bound
on the regret in the algebraically mixing case. It reflects the fact that in this particular case, it is
possible to ignore the dependency of the mixing process. It also implies that, even if θ < 1, i.e.
even if the dependency cannot be ignored, by properly using the φ-mixing property of the different
stationary processes, it is possible to obtain an upper bound of polylogarithmic order.
Another question is to see what happens when θ_k = 1, which is an important threshold in our study.
Indeed, if θ_k = 1 the φ_k are not summable, but from Proposition 6 we have that τ_k(n) ∈ O(n), i.e.
the arms should be sampled as often as possible. Theorem 2 states that the regret is upper bounded
in this case by Θ̃(Δ_*^{−1} log T). However, it is not possible to know if this bound is comparable to
that of the i.i.d case due to the Θ̃. Still, from the proof of Theorem 2 we get the following result:
Corollary 2. For all arms k, let 1 ≥ θ_k > 0 and φ_k(n) = n^{−θ_k}. Let θ = min_{k∈{1,...,K}} θ_k. Then,
if θ = 1, the regret of Algorithm 1 is upper bounded in order by

    O( Δ_*^{−1} G_*(log(T)) ),    (13)

where Δ_* = min_{k∈{1,...,K}} {Δ_k > 0} and G_* is the solution of G_*^{−1}(x) = x/(log(x))².
Although we do not have an explicit formula for the regret in the case θ = 1, it is interesting to note
that (13) is strictly negligible with respect to (12) for all θ < 1, but strictly dominates O(Δ_*^{−1} log(T)).
This comes from the fact that, while in the case θ = 1 the waiting arm is no longer used, the time budget
necessary to complete step s is still higher than in the i.i.d case.
When φ(n) decreases at a logarithmic speed (φ(n) ∝ 1/log(n)^γ for some γ > 0), it is still possible
to apply the same reasoning as the one developed in this paper. But in this case, Remix-UCB
will only achieve a regret of Θ̃( exp((T/Δ_*)^{1/γ}) ), which is no longer logarithmic in T. In other
words, if the φ-mixing coefficients decrease too slowly, the information given by the concentration
inequality in Theorem 1 is not sufficient to deduce interesting information about the mean value of
the arms. In this case, the successive values of the φ-mixing processes are too dependent, and the
randomness in the sequence of values is almost negligible; an adversarial bandit algorithm such as
Exp4 [4] may give better results than Remix-UCB.
5 Conclusion
We have studied an extension of the multi-armed bandit problem to the stationary φ-mixing framework in the restless case, by providing a functional algorithm and an upper bound on the regret in a
general framework. Future work might include a study of a lower bound for the regret in the mixing
process case: our first findings on the issue are that the analysis of the worst-case scenario in the
mixing framework bears significant challenges. Another interesting point would be the study of the
more difficult case of β-mixing processes. A rather different, but very interesting question that we
may address in the future is the possibility to exploit a possible structure of the correlation between
rewards over time. For instance, in the case where the correlation of an arm with the close past is
much higher than the correlation with the distant past, it might be interesting to see if the analysis
done in [16] can be extended to exploit this correlation structure.
Acknowledgments. This work is partially supported by the ANR-funded project GRETA (Greediness: theory and algorithms; ANR-12-BS02-004-01) and the ND project.
References
[1] Audibert JY, Bubeck S (2009) Minimax policies for adversarial and stochastic bandits. In: Annual Conference on Learning Theory
[2] Auer P, Ortner R (2010) UCB revisited: Improved regret bounds for the stochastic multi-armed bandit problem. Periodica Mathematica Hungarica 61:55–65
[3] Auer P, Cesa-Bianchi N, Fischer P (2002) Finite-time analysis of the multi-armed bandit problem. Machine Learning Journal 47(2-3):235–256
[4] Auer P, Cesa-Bianchi N, Freund Y, Schapire RE (2002) The nonstochastic multiarmed bandit problem. SIAM Journal on Computing 32(1):48–77
[5] Bernstein S (1927) Sur l'extension du théorème limite du calcul des probabilités aux sommes de quantités dépendantes. Mathematische Annalen 97(1):1–59
[6] Bubeck S, Cesa-Bianchi N (2012) Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems, Foundations and Trends in Machine Learning, vol 5. NOW
[7] Hoeffding W (1948) A class of statistics with asymptotically normal distribution. Annals of Mathematical Statistics 19(3):293–325
[8] Hoeffding W (1963) Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association 58(301):13–30, DOI 10.2307/2282952
[9] Karandikar RL, Vidyasagar M (2002) Rates of uniform convergence of empirical means with mixing processes. Statistics & Probability Letters 58(3):297–307
[10] Kontorovich L, Ramanan K (2008) Concentration inequalities for dependent random variables via the martingale method. The Annals of Probability 36(6):2126–2158
[11] Kulkarni S, Lozano A, Schapire RE (2005) Convergence and consistency of regularized boosting algorithms with stationary β-mixing observations. In: Advances in Neural Information Processing Systems, pp 819–826
[12] Lai TL, Robbins H (1985) Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics 6:4–22
[13] McDonald D, Shalizi C, Schervish M (2011) Estimating beta-mixing coefficients. arXiv preprint arXiv:1103.0941
[14] Mohri M, Rostamizadeh A (2009) Rademacher complexity bounds for non-i.i.d. processes. In: Koller D, Schuurmans D, Bengio Y, Bottou L (eds) Advances in Neural Information Processing Systems 21, pp 1097–1104
[15] Mohri M, Rostamizadeh A (2010) Stability bounds for stationary φ-mixing and β-mixing processes. Journal of Machine Learning Research 11:789–814
[16] Ortner R, Ryabko D, Auer P, Munos R (2012) Regret bounds for restless Markov bandits. In: Proceedings of the Int. Conf. on Algorithmic Learning Theory, pp 214–228
[17] Pandey S, Chakrabarti D, Agarwal D (2007) Multi-armed bandit problems with dependent arms. In: Proceedings of the 24th International Conference on Machine Learning, ACM, pp 721–728
[18] Ralaivola L, Szafranski M, Stempfel G (2010) Chromatic PAC-Bayes bounds for non-iid data: Applications to ranking and stationary β-mixing processes. The Journal of Machine Learning Research 11:1927–1956
[19] Seldin Y, Slivkins A (2014) One practical algorithm for both stochastic and adversarial bandits. In: Proceedings of the 31st International Conference on Machine Learning (ICML-14), pp 1287–1295
[20] Steinwart I, Christmann A (2009) Fast learning from non-iid observations. In: Advances in Neural Information Processing Systems, pp 1768–1776
[21] Steinwart I, Hush D, Scovel C (2009) Learning from dependent observations. Journal of Multivariate Analysis 100(1):175–194
[22] Tekin C, Liu M (2012) Online learning of rested and restless bandits. IEEE Transactions on Information Theory 58(8):5588–5611
[23] Yu B (1994) Rates of convergence for empirical processes of stationary mixing sequences. Annals of Probability 22(1):94–116
Word Space
Hinrich Schütze
Center for the Study of Language and Information
Ventura Hall
Stanford, CA 94305-4115
Abstract
Representations for semantic information about words are necessary for many applications of neural networks in natural language
processing. This paper describes an efficient, corpus-based method
for inducing distributed semantic representations for a large number of words (50,000) from lexical cooccurrence statistics by means
of a large-scale linear regression. The representations are successfully applied to word sense disambiguation using a nearest neighbor
method.
1 Introduction
Many tasks in natural language processing require access to semantic information
about lexical items and text segments. For example, a system processing the sound
sequence: /rE.k~maisbi:tJ/ needs to know the topic of the discourse in order to decide
which of the plausible hypotheses for analysis is the right one: e.g. "wreck a nice
beach" or "recognize speech" . Similarly, a mail filtering program has to know the
topical significance of words to do its job properly.
Traditional semantic representations are ill-suited for artificial neural networks since
they presume a varying number of elements in representations for different words
which is incompatible with a fixed input window. Their localist nature also poses
problems because semantic similarity (for example between dog and cat) may be
hidden in inheritance hierarchies and complicated feature structures. Neural networks perform best when similarity of targets corresponds to similarity of inputs;
traditional symbolic representations do not have this property. Microfeatures have
been widely used to overcome these problems. However, microfeature representations have to be encoded by hand and don't scale up to large vocabularies.
This paper presents an efficient method for deriving vector representations for words
from lexical cooccurrence counts in a large text corpus. Proximity of vectors in the
space (measured by the normalized correlation coefficient) corresponds to semantic
similarity. Lexical cooccurrence can be easily measured. However, for a vocabulary
of 50,000 words, there are 2,500,000,000 possible cooccurrence counts to keep track
of. While many of these are zero, the number of non-zero counts is still huge. On
the other hand, in any document collection most of these counts are small and
therefore unreliable. Therefore, letter fourgrams are used here to bootstrap the
representations. Cooccurrence statistics are collected for 5,000 selected fourgrams.
Since each of the 5000 fourgrams is frequent, counts are more reliable than cooccurrence counts for rare words. The 5000-by-5000 matrix used for this purpose is
manageable. A vector for a lexical item is computed as the sum of fourgram vectors
that occur close to it in the text. This process of confusion yields representations
of words that are fine-grained enough to reflect semantic differences between the
various case and inflectional forms a word may have in the corpus.
The paper is organized as follows. Section 2 discusses related work. Section 3
describes the derivation of the vector representations. Section 4 performs an evaluation. The final section concludes.
2 Related Work
Two kinds of semantic representations commonly used in connectionism are microfeatures (e.g. Waltz and Pollack 1985, McClelland and Kawamoto 1986) and localist schemes in which there is a separate node for each word (e.g. Cottrell 1989).
Neither approach scales up well enough in its original form to be applicable to large
vocabularies and a wide variety of topics. Gallant (1991), Gallant et al. (1992)
present a less labor-intensive method based on microfeatures, but the features for
core stems still have to be encoded by hand for each new document collection. The
derivation of the Word Space presented here is fully automatic. It also uses feature vectors to represent words, but the features cannot be interpreted on their
own. Vector similarity is the only information present in Word Space: semantically
related words are close, unrelated words are distant. The emphasis on semantic similarity rather than decomposition into interpretable features is similar to
Kawamoto (1988). Scholtes (1991) uses a two-dimensional Kohonen map to represent semantic similarity. While a Kohonen map can deal with non-linearities
(in contrast to the singular value decomposition used below), a space of much
higher dimensionality is likely to capture more of the complexity of semantic relatedness present in natural language. Scholtes' idea to use n-grams to reduce
the number of initial features for the semantic representations is extended here by
looking at n-gram cooccurrence statistics rather than occurrence in documents (cf.
(Kimbrell 1988) for the use of n-grams in information retrieval).
An important goal of many schemes of semantic representation is to find a limited
number of semantic classes (e.g. classical thesauri such as Roget's, Crouch 1990,
Brown et al. 1990). Instead, a multidimensional space is constructed here, in which
each word has its own individual representation. Any clustering into classes introduces artificial boundaries that cut off words from part of their semantic neighborhood.
[Figure 1: A line from the New York Times ("governor quits knights of columbus over bishop's abortion gag rule") with the selected fourgrams that occur in it: GOVE, _QUI, VERN, QUIT, ERNO, RNOR, NIGH, HTS, OLUM, LUMB, SHOP, HOP, ABOR, BORT, ORTI, RTIO, RUL, RULE, ULE_.]
In large classes, there will be members "from opposite sides of the class" that
are only distantly related. So any class size is problematic, since words are either
separated from close neighbors or lumped together with distant terms. Conversely,
a multidimensional space does not make such an arbitrary classification necessary.
3 Derivation of the Vector Representations
Fourgram selection. There are about 600,000 possible fourgrams if the empty
space, numbers and non-alphanumeric characters are included as "special letters".
Of these, 95,000 occurred in 5 months of the New York Times. They were reduced
to 5000 by first deleting all rare ones (frequency less than 1000) and then redundant
and uninformative fourgrams as described below.
If there is a group of fourgrams that occurs in only one word, all but one is deleted.
For instance, the fourgrams BAGH, AGHD, GHDA, HDAD tend to occur together in
Baghdad, so three of them will be deleted. The rationale for this move is that
cooccurrence information about one of the fourgrams can be fully derived from
each of the others, so that an index in the matrix would be wasted if more than
one of them was included. The relative frequency of one fourgram occurring after
another was calculated with fivegrams. For instance, the relative frequency of AGHD
following BAGH is the frequency of the fivegram BAGHD divided by the frequency of
the fourgram BAGH.
Most fourgrams occur predominantly in three or four stems or words. Uninformative fourgrams are sequences such as RETI or TION that are part of so many
different words (resigned, residents, retirements, resisted, ...; abortion, desperation, construction, detention, ...) that knowledge about cooccurrence with them
carries almost no semantic information. Such fourgrams are therefore useless and
are deleted. Again, fivegrams were used to identify fourgrams that occurred frequently in many stems.
A set of 6290 fourgrams remained after these deletions. To reduce it to the required
size of 5000, the most frequent 300 and the least frequent 990 were also deleted.
Figure 1 shows a line from the New York Times and which of the 5000 selected
fourgrams occurred in it.
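A rough sketch of this selection pipeline is given below (our reconstruction: the 1000-count threshold and the removal of the most frequent fourgrams follow the text, while the 0.95 redundancy threshold and all helper names are our assumptions; the stem-based filter for uninformative fourgrams is omitted).

from collections import Counter

def select_fourgrams(text, min_count=1000, keep=5000):
    fours = Counter(text[i:i + 4] for i in range(len(text) - 3))
    fives = Counter(text[i:i + 5] for i in range(len(text) - 4))
    frequent = {g for g, c in fours.items() if c >= min_count}
    redundant = set()
    for five, c in fives.items():
        head, tail = five[:4], five[1:]
        # if `head` is almost always followed by the same letter, its successor
        # fourgram carries nearly the same cooccurrence information: drop it
        if head in frequent and tail in frequent and c / fours[head] > 0.95:
            redundant.add(tail)
    kept = sorted(frequent - redundant, key=fours.get, reverse=True)
    return kept[300:300 + keep]      # drop the most frequent ones, keep 5000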
Computation of fourgram vectors. The computation of word vectors described below depends on fourgram vectors that accurately reflect semantic similarity in the sense of being used to describe the same contents. Consequently, one
needs to be able to compare the sets of contexts two fourgrams occur in. For this
purpose, a collocation matrix for fourgrams was collected such that the entry a_{i,j}
counts the number of times that fourgram i occurs at most 200 fourgrams to the
left of fourgram j. Two columns in this matrix are similar if the contexts the corresponding fourgrams are used in are similar. The counts were determined using five
months of the New York Times (June - October 1990). The resulting collocation
matrix is dense: only 2% of entries are zeros, because almost any two fourgrams
cooccur. Only 10% of entries are smaller than 10, so that culling small counts
would not increase the sparseness of the matrix. Consequently, any computation
that employs the fourgram vectors directly would be inefficient. For this reason, a
singular value decomposition was performed and 97 singular values extracted (cf.
Deerwester et al. 1990) using an algorithm from SVDPACK (Berry 1992). Each
fourgram can then be represented by a vector of 97 real values. Since the singular
value decomposition finds the best least-square approximation of the original space
in 97 dimensions, two fourgram vectors will be similar if their original vectors in
the collocation matrix are similar. The reduced fourgram vectors can be efficiently
used for confusion as described in the following section.
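The following sketch (ours) shows the shape of this computation; the 200-fourgram window and the 97 singular values follow the text, but numpy's dense SVD merely stands in for the Lanczos-based SVDPACK routine used on the real 5000-by-5000 matrix.

import numpy as np

def fourgram_vectors(gram_stream, grams, window=200, rank=97):
    index = {g: i for i, g in enumerate(grams)}
    ids = [index[g] for g in gram_stream if g in index]
    A = np.zeros((len(grams), len(grams)))
    for pos, j in enumerate(ids):
        for i in ids[max(0, pos - window):pos]:   # a_{i,j}: i occurs at most
            A[i, j] += 1                          # `window` fourgrams left of j
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:rank].T * S[:rank]   # row g: reduced vector for fourgram g's column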
Computation of word vectors. We can think of fourgrams as highly ambiguous
terms. Therefore, they are inadequate if used directly as input to a neural net. We
have to get back from fourgrams to words. For the experiment reported here,
cooccurrence information was used for a second time to achieve this goal: in this
case cooccurrence of a target word with any of the 5000 fourgrams. For each of
the selected words (see below), a context vector was computed for every position
at which it occurred in the text. A context vector was defined as the sum of all
defined fourgram vectors in a window of 1001 fourgrams centered around the target
word. The context vectors were then normalized and summed. This sum of vectors
is the vector representation of the target word. It is the confusion of all its uses
in the corpus. More formally, if C(w) is the set of positions in the corpus at which
w occurs and if φ(f) is the vector representation for fourgram f, then the vector
representation r(w) of w is defined as (the dot stands for normalization):

    r(w) = Σ_{i∈C(w)} ( Σ_{f close to i} φ(f) )˙
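A small sketch of the confusion step and of neighbor lookup by normalized correlation follows (ours; the 1001-fourgram window follows the text, helper names are assumptions).

import numpy as np

def word_vector(occurrences, gram_ids, F, half_window=500):
    """r(w): sum over C(w) of the normalized window sums of fourgram vectors."""
    r = np.zeros(F.shape[1])
    for pos in occurrences:                       # pos ranges over C(w)
        ctx = gram_ids[max(0, pos - half_window):pos + half_window + 1]
        v = F[ctx].sum(axis=0)                    # sum of fourgram vectors
        r += v / (np.linalg.norm(v) or 1.0)       # the dot: normalization
    return r

def nearest_neighbors(words, R, query, k=10):
    """Rank words by normalized correlation (cosine) with the query word."""
    V = R / np.linalg.norm(R, axis=1, keepdims=True)
    sims = V @ V[words.index(query)]
    return [words[i] for i in np.argsort(-sims)[1:k + 1]]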
The treatment of words is case-sensitive. The following terminology will be used:
a surface form is the string of characters as it occurs in the text; a lemma is either
lower case or upper case: all letters are lower case with the possible exception of
the first; word is used as a case-insensitive term. So every word has exactly two
lemmas. A lemma of length n has up to 2n surface forms. Almost every lower case
lemma can be realized as an upper case surface form. But upper case lemmas are
hardly ever realized as lower case surface forms.
The confusion vectors were computed for all 54366 lemmas that occurred at least
10 times in 18 months of the New York Times News Service (May 1989 - October
1990, about 50 million words). Table 1 lists the percentage of lower case and upper
case lemmas, and the distribution of lemmas with respect to words.
lemmas                    number   percent
  lower case              32549    60%
  upper case              21817    40%
  total                   54366    100%

words                     number   percent
  lower case lemma only   23766    52%
  upper case lemma only   13034    29%
  both lemmas             8783     19%
  total                   45583    100%

Table 1: The distribution of lower and upper case in words and lemmas.
word             nearest neighbors
burglar          burglars thief rob mugging stray robbing lookout chase C)ate thieves
disable          deter intercept repel halting surveillance shield maneuvers
disenchantment   disenchanted sentiment resentment grudging mindful unenthusiastic
domestically     domestic auto/-s importers/-ed threefold inventories drastically cars
Dour             melodies/-dic Jazzie danceable reggae synthesizers Soul funk tunes
grunts           heap into ragged goose neatly pulls buzzing rake odd rough
kid              dad kidding mom ok buddies Mom Oh Hey hey mama
S.O.B.           Confessions Jill Julie biography Judith Novak Lois Learned Pulitzer
Ste.             dry oyster whisky hot filling rolls lean float bottle ice
workforce        jobs employ/-s/-ed/-ing attrition workers clerical labor hourly
keeping          hoping bring wiping could some would other here rest have

Table 2: Ten random and one selected word and their nearest neighbors.
4 Evaluation
Table 2 shows a random sample of 10 words and their ten nearest neighbors in Word
Space (or less depending on how many would fit in the table). The neighbors are
listed in order of proximity to the head word. burglar, disenchantment, kid, and
workforce are closely related to almost all of their nearest neighbors. The same is
true for disable, domestically, and Dour, if we regard as the goal to come up with
a characterization of semantic similarity in a corpus (as opposed to the language
in general). In the New York Times, the military use of disable dominates, Iraq's
military, oil pipelines and ships are disabled. Similarly, domestic usually refers to
the domestic market, and only one person named Dour occurs in the newspaper: the
Senegalese jazz musician Youssou N'Dour. So these three cases can also be counted
as successes. The topic/ content of grunts is moderately well characterized by other
objects like goose and rake that one would also expect on a farm. Finally, little
useful information can be extracted for S.O.B. and Ste. S.O.B. mainly occurs in
articles about the bestseller "Confessions of an S.O.B." Since it is not used literally,
its semantics don't come out very well. The neighbors of Ste are for the most part
words associated with water, because the name of the river "Ste.-Marguerite" in
Quebec (popular for salmon fishing) is the most frequent context for Ste. Since
the significance of Ste depends heavily on the name it occurs in, its usefulness a.s a.
contributor of semantic informa.tion is limited, so its poor characterization should
probably not be seen as problematic. The word keeping has been added to the table
to show that the vector representations of words that can be used in a wide variety
of contexts are not. interesting.
Table 3 shows that it is important for many words to make a distinction between
lower case and upper case and between different inflections. The normalized correlation coefficient between the two case/inflectional forms of the word is indicated
in each example.
word              nearest neighbors
pinch (.41)       outs pitch Cone hitting Cary strikeout Whitehurst Teufel Dykstra mound
Pinch             unsalted grated cloves pepper teaspoons coarsely parsley Combine cumin
kappa (.49)       casein protein/-s synthesize liposomes recombinant enzymes amino dna
Kappa             Phi Wesleyan graduate cum dean graduating nyu Amherst College Yale
roe (.54)         cod squid fish salmon flounder lobster haddock lobsters crab chilled
Roe               Wade v overturn/-ing uphold/-ing abortion Reproductive overrule
completion (.73)  complete/-d/-s/-ing complex phase/-s uncompleted incomplete
completions       touchdown/-s interception/-s td yardage yarder tds fumble sacked
ok (.60)          d me I m wouldn t crazy you ain anymore
oks               approve/-s/-d/-ing Senate Waxman bill appropriations omnibus
triad (.52)       warhead/-s ballistic missile/-s ss bombers intercontinental silos
triads            Triads Organized Interpol Cosa Crips gangs trafficking smuggling

Table 3: Words for which case or inflection matter.
word        senses                              % correct
                                                1    2    3    sum
capital/s   goods/seat of government            96   92        95
interest/s  special attention/financial         94   92        93
motion/s    movement/proposal                   92   91        92
plant/s     factory/living being                94   88        92
ruling      decision/to exert control           90   91        90
space       area, volume/outer space            89   90        90
suit/s      legal action/garments               94   95        95
tank/s      combat vehicle/receptacle           97   85        95
train/s     railroad cars/to teach              94   69        89
vessel/s    ship/blood vessel/hollow utensil    93   91        86
overall                                                        92

Table 4: Ten disambiguation experiments using the vector representations.
Word sense disambiguation. Word sense disambiguation is a task that many
semantic phenomena bear on and therefore well suited to evaluate the quality of
semantic representations. One can use the vector representations for disambiguation in the following way. The context vector of the occurrence of an ambiguous
word is defined as the sum of all word vectors occurring in a window around it.
The set of context vectors of the word in the training set can be clustered. The
clustering programs used were AutoClass (Cheeseman et al. 1988) and Buckshot
(Cutting et al. 1992). The clusters found (between 2 and 13) were assigned senses
by inspecting a few of its members (10-20). An occurrence of an ambiguous word
in the test set was then disambiguated by assigning the sense of the training cluster
that was closest to its context vector. Note that this method is unsupervised in
that the structure of the "sense space" is analyzed automatically by clustering. See
Schütze (1992) for a more detailed description.
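A minimal sketch of this procedure, substituting k-means for AutoClass/Buckshot and assuming a window of ±25 words (both are our assumptions, not choices stated here):

```python
import numpy as np
from sklearn.cluster import KMeans

def disambiguate(train_contexts, test_contexts, word_vecs, n_senses, window=25):
    """Cluster training context vectors; each cluster would be hand-labeled
    with a sense by inspecting members, and each test occurrence is assigned
    the sense of the nearest training cluster."""
    def context_vector(tokens, pos):
        lo, hi = max(0, pos - window), pos + window + 1
        return np.sum([word_vecs[t] for t in tokens[lo:hi] if t in word_vecs],
                      axis=0)

    X_train = np.array([context_vector(toks, p) for toks, p in train_contexts])
    km = KMeans(n_clusters=n_senses).fit(X_train)
    X_test = np.array([context_vector(toks, p) for toks, p in test_contexts])
    return km.predict(X_test)   # cluster index stands in for the assigned sense
```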
Table 4 lists the results for ten disambiguation experiments that were performed
using the above algorithm. Each line shows the ambiguous word, its major senses,
and the success rate of disambiguation for the individual senses and all major senses
together. Training and test sets were taken from the New York Times newswire
and were disjoint for each word. These disambiguation results are among the best
reported in the literature (e.g. Yarowsky 1992). Apparently, the vector representations respect fine sense distinctions.
An interesting question is to what degree the vector representations are distributed.
Using the algorithm for disambiguation described above, a set of contexts of suit
was clustered and applied to a test text. When the first 30 dimensions were used
for clustering the training set, the error rate was 9% in the test set. When only the
odd dimensions were used (1,3,5,...,27,29) the error was 14%. With only the even
dimensions (2,4,6,...,28,30), 13% of occurrences in the test set were misclassified.
This graceful degradation indicates that the vector representations are distributed.
5 Discussion and Conclusion
The linear dimensionality reduction performed here could be a useful preprocessing
step for other applications as well. Each of the fourgram features carries a small
amount of information. Neglecting individual features degrades performance, but
there are so many that they cannot be used directly as input to a neural network.
The word sense disambiguation results suggest that no information is lost when
only axes of variation extracted by the singular value decomposition are considered
instead of the original 5000-dimensional fourgram vectors. Schütze (Forthcoming)
uses the same methodology for the derivation of syntactic representations for words
(so that verbs and nouns occupy different regions in syntactic word space). Problems
in pattern recognition often have the same characteristics: uniform distribution of
information over all input features or pixels and a high-dimensional input space
that causes problems in training if the features are used directly. A singular value
decomposition could be a useful preprocessing step for data of this nature that
makes neural nets applicable to high-dimensional problems for which training would
otherwise be slow if possible at all.
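As an illustration of that preprocessing step (our sketch; the original work used SVDPACK on the sparse fourgram matrix, and the choice k = 97 below is an arbitrary illustrative rank):

```python
import numpy as np

def reduce_fourgram_features(C, k=97):
    """C: words-by-fourgrams cooccurrence count matrix (e.g. 5000 columns).
    Returns rank-k word vectors from a truncated SVD, as a stand-in for the
    paper's preprocessing."""
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return U[:, :k] * s[:k]   # project each word onto the top-k axes of variation
```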
This paper presents Word Space, a new approach to representing semantic information about words, derived from lexical cooccurrence statistics. In contrast to
microfeature representations, these semantic representations can be summed for a
given context to compute a representation of the topic of a text segment. It was
shown that semantically related words are close in Word Space and that the vector representations can be used for word sense disambiguation. Word Space could
therefore be a promising input representation for applications of neural nets in natural language processing such as information filtering or language modeling in speech
recognition.
Acknowledgements
I'm indebted to Mike Berry for SVDPACK, to NASA and RIACS for AutoClass
and to the San Diego Supercomputer Center for computing resources. Thanks to
Martin Kay, Julian Kupiec, Jan Pedersen, Martin Roscheisen, and Andreas Weigend
for help and discussions.
References
Berry, M. W. 1992. Large-scale sparse singular value computations. The International Journal of Supercomputer Applications 6(1):13-49.
Brown, P. F., V. J. D. Pietra, P. V. deSouza, J. C. Lai, and R. L. Mercer. 1990. Class-based n-gram models of natural language. Manuscript, IBM.
Cheeseman, P., J. Kelly, M. Self, J. Stutz, W. Taylor, and D. Freeman. 1988. AutoClass: A Bayesian classification system. In Proceedings of the Fifth International Conference on Machine Learning.
Cottrell, G. W. 1989. A Connectionist Approach to Word Sense Disambiguation. London: Pitman.
Crouch, C. J. 1990. An approach to the automatic construction of global thesauri. Information Processing & Management 26(5):629-640.
Cutting, D., D. Karger, J. Pedersen, and J. Tukey. 1992. Scatter-gather: A cluster-based approach to browsing large document collections. In Proceedings of SIGIR '92.
Deerwester, S., S. T. Dumais, G. W. Furnas, T. K. Landauer, and R. Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science 41(6):391-407.
Gallant, S. I. 1991. A practical approach for representing context and for performing word sense disambiguation using neural networks. Neural Computation 3(3):293-309.
Gallant, S. I., W. R. Caid, J. Carleton, R. Hecht-Nielsen, K. P. Qing, and D. Sudbeck. 1992. HNC's MatchPlus system. In Proceedings of TREC.
Kawamoto, A. H. 1988. Distributed representations of ambiguous words and their resolution in a connectionist network. In S. L. Small, G. W. Cottrell, and M. K. Tanenhaus (Eds.), Lexical Ambiguity Resolution: Perspectives from Psycholinguistics, Neuropsychology, and Artificial Intelligence. San Mateo, CA: Morgan Kaufmann.
Kimbrell, R. E. 1988. Searching for text? Send an N-gram! Byte Magazine May:297-312.
McClelland, J. L., and A. H. Kawamoto. 1986. Mechanisms of sentence processing: Assigning roles to constituents of sentences. In J. L. McClelland, D. E. Rumelhart, and the PDP Research Group (Eds.), Parallel Distributed Processing. Explorations in the Microstructure of Cognition. Volume 2: Psychological and Biological Models, 272-325. Cambridge, MA: The MIT Press.
Scholtes, J. C. 1991. Unsupervised learning and the information retrieval problem. In Proceedings of the International Joint Conference on Neural Networks.
Schütze, H. 1992. Dimensions of meaning. In Proceedings of Supercomputing '92.
Schütze, H. Forthcoming. Sublexical tagging. In Proceedings of the IEEE International Conference on Neural Networks.
Waltz, D. L., and J. B. Pollack. 1985. A strongly interactive model of natural language interpretation. Cognitive Science 9:51-74.
Yarowsky, D. 1992. Word-sense disambiguation using statistical models of Roget's categories trained on large corpora. In Proceedings of Coling-92.
5,559 | 6,030 | Fighting Bandits with a New Kind of Smoothness
Jacob Abernethy
University of Michigan
[email protected]
Chansoo Lee
University of Michigan
[email protected]
Ambuj Tewari
University of Michigan
[email protected]
Abstract
We provide a new analysis framework for the adversarial multi-armed bandit
problem. Using the notion of convex smoothing, we define a novel family of
algorithms with minimax optimal regret guarantees. First, we show that regularization
via the Tsallis entropy, which includes EXP3 as a special case, matches
the $O(\sqrt{NT})$ minimax regret with a smaller constant factor. Second, we show
that a wide class of perturbation methods achieve a near-optimal regret as low
as $O(\sqrt{NT \log N})$, as long as the perturbation distribution has a bounded hazard function. For example, the Gumbel, Weibull, Frechet, Pareto, and Gamma
distributions all satisfy this key property and lead to near-optimal algorithms.
1 Introduction
The classic multi-armed bandit (MAB) problem, generally attributed to the early work of Robbins
(1952), poses a generic online decision scenario in which an agent must make a sequence of choices
from a fixed set of options. After each decision is made, the agent receives some feedback in the
form of a loss (or gain) associated with her choice, but no information is provided on the outcomes
of alternative options. The agent's goal is to minimize the total loss over time, and the agent is thus
faced with the balancing act of both experimenting with the menu of choices while also utilizing
the data gathered in the process to improve her decisions. The MAB framework is not only mathematically elegant, but useful for a wide range of applications including medical experiments design
(Gittins, 1996), automated poker playing strategies (Van den Broeck et al., 2009), and hyperparameter tuning (Pacula et al., 2012).
Early MAB results relied on stochastic assumptions (e.g., IID) on the loss sequence (Auer et al.,
2002; Gittins et al., 2011; Lai and Robbins, 1985). As researchers began to establish non-stochastic,
worst-case guarantees for sequential decision problems such as prediction with expert advice (Littlestone and Warmuth, 1994), a natural question arose as to whether similar guarantees were possible
for the bandit setting. The pioneering work of Auer, Cesa-Bianchi, Freund, and Schapire (2003) answered this in the affirmative by showing that their algorithm EXP3 possesses nearly-optimal regret
bounds with matching lower bounds. Attention later turned to the bandit version of online linear
optimization, and several associated guarantees were published the following decade (Abernethy
et al., 2012; Dani and Hayes, 2006; Dani et al., 2008; Flaxman et al., 2005; McMahan and Blum,
2004).
Nearly all proposed methods have relied on a particular algorithmic blueprint; they reduce the bandit problem to the full-information setting, while using randomization to make decisions and to
estimate the losses. A well-studied family of algorithms for the full-information setting is Follow
the Regularized Leader (FTRL), which optimizes the objective function of the following form:
$$\arg\min_{x \in \mathcal{K}} \; L^\top x + R(x) \qquad (1)$$
where $\mathcal{K}$ is the decision set, $L$ is (an estimate of) the cumulative loss vector, and $R$ is a regularizer,
a convex function with suitable curvature to stabilize the objective. The choice of regularizer $R$ is
critical to the algorithm's performance. For example, the EXP3 algorithm (Auer, 2003) regularizes
with the entropy function and achieves a nearly optimal regret bound when $\mathcal{K}$ is the probability simplex. For a general convex set, however, other regularizers such as self-concordant barrier functions
(Abernethy et al., 2012) have tighter regret bounds.
Another class of algorithms for the full information setting is Follow the Perturbed Leader (FTPL)
(Kalai and Vempala, 2005), whose foundations date back to the earliest work in adversarial online
learning (Hannan, 1957). Here we choose a distribution $\mathcal{D}$ on $\mathbb{R}^N$, sample a random vector $Z \sim \mathcal{D}$,
and solve the following linear optimization problem
$$\arg\min_{x \in \mathcal{K}} \; (L + Z)^\top x. \qquad (2)$$
FTPL is computationally simpler than FTRL due to the linearity of the objective, but it is analytically
much more complex due to the randomness. For every different choice of $\mathcal{D}$, an entirely new set of
techniques had to be developed (Devroye et al., 2013; Van Erven et al., 2014). Rakhlin et al. (2012)
and Abernethy et al. (2014) made some progress towards unifying the analysis framework. Their
techniques, however, are limited to the full-information setting.
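For concreteness, one FTPL decision per Equation (2) can be sketched as follows (our illustration; `sample_noise` and `argmin_linear` are placeholders for the chosen distribution $\mathcal{D}$ and the linear oracle over $\mathcal{K}$):

```python
import numpy as np

def ftpl_step(L, sample_noise, argmin_linear):
    """One FTPL decision (Equation 2): perturb the cumulative losses with a
    fresh draw Z ~ D, then call the linear optimization oracle over K."""
    Z = sample_noise(L.shape[0])
    return argmin_linear(L + Z)

# When K is the probability simplex, the oracle is just a coordinate argmin:
# ftpl_step(L, lambda n: np.random.gumbel(size=n), np.argmin)
```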
In this paper, we propose a new analysis framework for the multi-armed bandit problem that unifies
the regularization and perturbation algorithms. The key element is a new kind of smoothness property, which we call differential consistency. It allows us to generate a wide class of both optimal and
near-optimal algorithms for the adversarial multi-armed bandit problem. We summarize our main
results:
1. We show that regularization via the Tsallis entropy leads to the state-of-the-art adversarial MAB
algorithm, matching the minimax regret rate of Audibert and Bubeck (2009) with a tighter constant. Interestingly, our algorithm fully generalizes EXP3.
2. We show that a wide array of well-studied noise distributions lead to near-optimal regret bounds
(matching those of EXP3). Furthermore, our analysis reveals a strikingly simple and appealing
sufficient condition for achieving $O(\sqrt{T})$ regret: the hazard rate function of the noise distribution
must be bounded by a constant. We conjecture that this requirement is in fact both necessary and
sufficient.
2 Gradient-Based Prediction Algorithms for the Multi-Armed Bandit
Let us now introduce the adversarial multi-armed bandit problem. On each round $t = 1, \dots, T$,
a learner must choose a distribution $p_t \in \Delta_N$ over the set of $N$ available actions. The adversary
(Nature) chooses a vector $g_t \in [-1, 0]^N$ of losses, the learner samples $i_t \sim p_t$, and plays action $i_t$.
After selecting this action, the learner observes only the value $g_{t,i_t}$, and receives no information as
to the values $g_{t,j}$ for $j \neq i_t$. This limited information feedback is what makes the bandit problem
much more challenging than the full-information setting in which the entire $g_t$ is observed.
The learner's goal is to minimize the regret. Regret is defined to be the difference in the realized
loss and the loss of the best fixed action in hindsight:
$$\mathrm{Regret}_T := \max_{i \in [N]} \sum_{t=1}^T \big(g_{t,i} - g_{t,i_t}\big). \qquad (3)$$
To be precise, we consider the expected regret, where the expectation is taken with respect to the
learner's randomization.
Loss vs. Gain Note: We use the term "loss" to refer to $g$, although the maximization in (3) would
imply that $g$ should be thought of as a "gain" instead. We use the former term, however, as we
impose the assumption that $g_t \in [-1, 0]^N$ throughout the paper.
2.1 The Gradient-Based Algorithmic Template
Our results focus on a particular algorithmic template described in Framework 1, which is a slight
variation of the Gradient Based Prediction Algorithm (GBPA) of Abernethy et al. (2014). Note that
the algorithm (i) maintains an unbiased estimate $\hat G_t$ of the cumulative losses, (ii) updates $\hat G_t$ by
adding a single round estimate $\hat g_t$ that has only one non-zero coordinate, and (iii) uses the gradient
of a convex function $\tilde\Phi$ as the sampling distribution $p_t$. The choice of $\tilde\Phi$ is flexible, but $\tilde\Phi$ must be a
differentiable convex function and its derivatives must always be a probability distribution.
Framework 1 may appear restrictive but it has served as the basis for much of the published work on
adversarial MAB algorithms (Auer et al., 2003; Kujala and Elomaa, 2005; Neu and Bartók, 2013).
First, the GBPA framework essentially encompasses all FTRL and FTPL algorithms (Abernethy
et al., 2014), which are the core techniques not only for the full information settings, but also for
the bandit settings. Second, the estimation scheme ensures that $\hat G_t$ remains an unbiased estimate of
$G_t$. Although there is some flexibility, any unbiased estimation scheme would require some kind
of inverse-probability scaling; information theory tells us that unbiased estimates of a quantity
that is observed with only probability $p$ must necessarily involve fluctuations that scale as $O(1/p)$.
Framework 1: Gradient-Based Prediction Alg. (GBPA) Template for Multi-Armed Bandit
GBPA($\tilde\Phi$): $\tilde\Phi$ is a differentiable convex function such that $\nabla\tilde\Phi \in \Delta_N$ and $\nabla_i\tilde\Phi > 0$ for all $i$.
  Initialize $\hat G_0 = 0$
  for $t = 1$ to $T$ do
    Nature: A loss vector $g_t \in [-1, 0]^N$ is chosen by the Adversary
    Sampling: Learner chooses $i_t$ according to the distribution $p(\hat G_{t-1}) = \nabla\tilde\Phi(\hat G_{t-1})$
    Cost: Learner "gains" loss $g_{t,i_t}$
    Estimation: Learner "guesses" $\hat g_t := \dfrac{g_{t,i_t}}{\nabla_{i_t}\tilde\Phi(\hat G_{t-1})}\, e_{i_t}$
    Update: $\hat G_t = \hat G_{t-1} + \hat g_t$
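A direct transcription of Framework 1 into code might look as follows (a sketch of ours; `grad_potential` plays the role of $\nabla\tilde\Phi$ and `draw_loss` stands in for the adversary producing $g_t \in [-1,0]^N$):

```python
import numpy as np

def gbpa(grad_potential, draw_loss, T, N):
    """GBPA template: grad_potential maps a cumulative loss estimate G_hat
    to a full-support probability vector over the N arms."""
    G_hat = np.zeros(N)
    total_loss = 0.0
    for t in range(T):
        p = grad_potential(G_hat)           # sampling distribution p(G_hat)
        i = np.random.choice(N, p=p)        # play arm i_t
        g = draw_loss(t)                    # full loss vector; only g[i] is seen
        total_loss += g[i]
        G_hat[i] += g[i] / p[i]             # one-coordinate unbiased estimate
    return total_loss
```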
Lemma 2.1. Define $\Phi(G) \equiv \max_i G_i$, so that we can write the expected regret of GBPA($\tilde\Phi$) as
$\mathbb{E}\,\mathrm{Regret}_T = \Phi(G_T) - \sum_{t=1}^T \langle \nabla\tilde\Phi(\hat G_{t-1}),\, g_t \rangle$.
Then, the expected regret of the GBPA($\tilde\Phi$) can be written as:
$$\mathbb{E}\,\mathrm{Regret}_T \;\le\; \underbrace{\tilde\Phi(0) - \Phi(0)}_{\text{overestimation penalty}} \;+\; \mathbb{E}_{i_1,\dots,i_T}\Bigg[\underbrace{\Phi(\hat G_T) - \tilde\Phi(\hat G_T)}_{\text{underestimation penalty}} \;+\; \sum_{t=1}^T \underbrace{\mathbb{E}_{i_t}\big[D_{\tilde\Phi}(\hat G_t, \hat G_{t-1}) \,\big|\, \hat G_{t-1}\big]}_{\text{divergence penalty}}\Bigg], \qquad (4)$$
where the expectations are over the sampling of $i_t$.
Proof. Let $\tilde\Phi$ be a valid convex function for the GBPA. Consider GBPA($\tilde\Phi$) being run on the loss
sequence $g_1, \dots, g_T$. The algorithm produces a sequence of estimated losses $\hat g_1, \dots, \hat g_T$. Now
consider GBPA-NE($\tilde\Phi$), which is GBPA($\tilde\Phi$) run with full information on the deterministic loss
sequence $\hat g_1, \dots, \hat g_T$ (there is no estimation step, and the learner updates $\hat G_t$ directly). The regret of
this run can be written as
$$\Phi(\hat G_T) - \sum_{t=1}^T \langle \nabla\tilde\Phi(\hat G_{t-1}),\, \hat g_t \rangle,$$
and $\Phi(G_T) \le \mathbb{E}\,\Phi(\hat G_T)$ by the convexity of $\Phi$. Hence, it suffices to show that the GBPA-NE($\tilde\Phi$) has
regret at most the right-hand side of Equation 4, which is a fairly well-known result in the online learning
literature; see, for example, (Cesa-Bianchi and Lugosi, 2006, Theorem 11.6) or (Abernethy et al.,
2014, Section 2). For completeness, we included the full proof in Appendix A.
2.2 A New Kind of Smoothness
What has emerged as a guiding principle throughout machine learning is that enforcing stability of
an algorithm can often lead immediately to performance guarantees; that is, small modifications of
the input data should not dramatically alter the output. In the context of GBPA, algorithmic stability
is guaranteed as long as the derivative $\nabla\tilde\Phi(\cdot)$ is Lipschitz. Abernethy et al. (2014) explored a set of
conditions on $\nabla^2\tilde\Phi(\cdot)$ that lead to optimal regret guarantees for the full-information setting. Indeed,
this work discussed different settings where the regret depends on an upper bound on either the
nuclear norm or the operator norm of this Hessian.
In short, regret in the full information setting relies on the smoothness of the choice of $\tilde\Phi$. In the
bandit setting, however, merely a uniform bound on the magnitude of $\nabla^2\tilde\Phi$ is insufficient to guarantee low regret; the regret (Lemma 2.1) involves terms of the form $D_{\tilde\Phi}(\hat G_{t-1} + \hat g_t, \hat G_{t-1})$, where
the incremental quantity $\hat g_t$ can scale as large as the inverse of the smallest probability of $p(\hat G_{t-1})$.
What is needed is a stronger notion of smoothness that bounds $\nabla^2\tilde\Phi$ in correspondence with $\nabla\tilde\Phi$,
and we propose the following definition:
Definition 2.2 (Differential Consistency). For constants $\gamma, C > 0$, we say that a convex function
$\tilde\Phi(\cdot)$ is $(\gamma, C)$-differentially-consistent if for all $G \in (-\infty, 0]^N$,
$$\nabla^2_{ii}\tilde\Phi(G) \;\le\; C\big(\nabla_i\tilde\Phi(G)\big)^{\gamma}.$$
We now prove a useful bound that emerges from differential consistency, and in the following two
sections we shall show how this leads to regret guarantees.
Theorem 2.3. Suppose $\tilde\Phi$ is $(\gamma, C)$-differentially-consistent for constants $C, \gamma > 0$. Then the divergence penalty at time $t$ in Lemma 2.1 can be upper bounded as:
$$\mathbb{E}_{i_t}\big[D_{\tilde\Phi}(\hat G_t, \hat G_{t-1}) \,\big|\, \hat G_{t-1}\big] \;\le\; C \sum_{i=1}^N \big(\nabla_i\tilde\Phi(\hat G_{t-1})\big)^{\gamma - 1}.$$
Proof. For the sake of clarity, we drop the subscripts; we use $\hat G$ to denote the cumulative estimate
$\hat G_{t-1}$, $\hat g$ to denote the marginal estimate $\hat g_t = \hat G_t - \hat G_{t-1}$, and $g$ to denote the true loss $g_t$.
Note that by the definition of Algorithm 1, $\hat g$ is a sparse vector with one non-zero and non-positive
coordinate $\hat g_{i_t} = g_{t,i_t}/\nabla_{i_t}\tilde\Phi(\hat G)$. Plus, it is conditionally independent given $\hat G$. For a fixed $i_t$, let
$$h(r) := D_{\tilde\Phi}\big(\hat G + r\,\hat g/\|\hat g\|,\; \hat G\big),$$
so that $h''(r) = (\hat g/\|\hat g\|)^\top \nabla^2\tilde\Phi\big(\hat G + r\,\hat g/\|\hat g\|\big)(\hat g/\|\hat g\|) = e_{i_t}^\top \nabla^2\tilde\Phi\big(\hat G - r e_{i_t}\big) e_{i_t}$. Now we can
write
$$\begin{aligned}
\mathbb{E}_{i_t}\big[D_{\tilde\Phi}(\hat G + \hat g, \hat G)\,\big|\,\hat G\big]
&= \sum_{i=1}^N \mathbb{P}[i_t = i] \int_0^{\|\hat g\|}\!\!\int_0^s h''(r)\, dr\, ds \\
&= \sum_{i=1}^N \nabla_i\tilde\Phi(\hat G) \int_0^{\|\hat g\|}\!\!\int_0^s e_i^\top \nabla^2\tilde\Phi(\hat G - r e_i)\, e_i \, dr\, ds \\
&\le \sum_{i=1}^N \nabla_i\tilde\Phi(\hat G) \int_0^{\|\hat g\|}\!\!\int_0^s C\big(\nabla_i\tilde\Phi(\hat G - r e_i)\big)^{\gamma} \, dr\, ds \\
&\le \sum_{i=1}^N \nabla_i\tilde\Phi(\hat G) \int_0^{\|\hat g\|}\!\!\int_0^s C\big(\nabla_i\tilde\Phi(\hat G)\big)^{\gamma} \, dr\, ds \\
&= \frac{C}{2} \sum_{i=1}^N \big(\nabla_i\tilde\Phi(\hat G)\big)^{\gamma - 1} g_i^2 \;\le\; C \sum_{i=1}^N \big(\nabla_i\tilde\Phi(\hat G)\big)^{\gamma - 1}.
\end{aligned}$$
The first inequality is by the supposition, and the second inequality is due to the convexity of $\tilde\Phi$,
which guarantees that $\nabla_i\tilde\Phi$ is an increasing function in the $i$-th coordinate. Interestingly, this part
of the proof critically depends on the fact that we are in the "loss" setting where $g$ is always
non-positive.
3 A Minimax Bandit Algorithm via Tsallis Smoothing
The design of a multi-armed bandit algorithm in the adversarial setting proved to be a challenging
task. Ignoring the dependence on $N$ for the moment, we note that the initial published work on
EXP3 provided only an $O(T^{2/3})$ guarantee (Auer et al., 1995), and it was not until the final version
of this work (Auer et al., 2003) that the authors obtained the optimal $O(\sqrt{T})$ rate. For the more
general setting of online linear optimization, several sub-optimal rates were achieved (Dani and
Hayes, 2006; Flaxman et al., 2005; McMahan and Blum, 2004) before the desired $\sqrt{T}$ rate was obtained
(Abernethy et al., 2012; Dani et al., 2008).
We can view EXP3 as an instance of GBPA where the potential function $\tilde\Phi(\cdot)$ is the Fenchel conjugate of the Shannon entropy. For any $p \in \Delta_N$, the (negative) Shannon entropy is defined as
$H(p) := \sum_i p_i \log p_i$, and its Fenchel conjugate is $H^*(G) = \sup_{p \in \Delta_N}\{\langle p, G\rangle - \eta^{-1} H(p)\}$. In
fact, we have a closed-form expression for the supremum: $H^*(G) = \eta^{-1} \log\big(\sum_i \exp(\eta G_i)\big)$. By
inspecting the gradient of the above expression, it is easy to see that EXP3 chooses the distribution
$p_t = \nabla H^*(G)$ every round.
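As a quick illustration (our own sketch, not from the paper), this gradient is just a softmax of the scaled cumulative loss estimates:

```python
import numpy as np

def exp3_distribution(G, eta):
    """p_t = grad H*(G): a softmax of the (scaled) cumulative loss estimates."""
    w = np.exp(eta * (G - G.max()))   # subtract the max for numerical stability
    return w / w.sum()
```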
The tighter EXP3 bound given by Auer et al. (2003) scaled according to $O(\sqrt{TN\log N})$, and the
authors provided a matching lower bound of the form $\Omega(\sqrt{TN})$. It remained an open question for
some time whether there exists a minimax optimal algorithm that does not contain the log term, until Audibert and Bubeck (2009) proposed the Implicitly Normalized Forecaster (INF). The INF is
implicitly defined via a specially-designed potential function with certain properties. It was not immediately clear from this result how to define a minimax-optimal algorithm using the now-standard
tools of regularization and Bregman divergence.
More recently, Audibert et al. (2011) improved upon Audibert and Bubeck (2009), extending the
results to the combinatorial setting, and they also discovered that INF can be interpreted in terms
of Bregman divergences. We give here a reformulation of INF that leads to a very simple analysis
in terms of our notion of differential consistency. Our reformulation can be viewed as a variation
of EXP3, where the key modification is to replace the Shannon entropy function with the Tsallis
entropy¹ for parameter $0 < \alpha < 1$:
$$S_\alpha(p) = \frac{1}{1-\alpha}\Big(1 - \sum_i p_i^\alpha\Big).$$
This particular function, proposed by Tsallis (1988), possesses a number of natural properties. The
Tsallis entropy is in fact a generalization of the Shannon entropy, as one obtains the latter as a special
case of the former asymptotically. That is, it is easy to prove the following uniform convergence:
$$S_\alpha(\cdot) \to H(\cdot) \quad \text{as } \alpha \to 1.$$
¹More precisely, the function we give here is the negative Tsallis entropy according to its original definition.
We emphasize again that one can easily show that the Tsallis-smoothing bandit algorithm is indeed
identical to INF using the appropriate parameter mapping, although our analysis is simpler due to
the notion of differential consistency (Definition 2.2).
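To make the Tsallis-smoothed sampling step concrete: the gradient $p(G) = \nabla\tilde\Phi(G)$ solves a one-dimensional normalization problem. The sketch below is ours — the stationarity condition follows from the definition of $S_\alpha$, but the bisection solver itself is not part of the paper.

```python
import numpy as np

def tsallis_distribution(G, eta, alpha=0.5):
    """p(G) for the Tsallis-smoothed potential: the KKT condition gives
    p_i = (eta*alpha / ((1-alpha)*(lam - G_i)))**(1/(1-alpha)), where the
    multiplier lam > max_i G_i is chosen so that sum(p) = 1."""
    c = eta * alpha / (1.0 - alpha)
    def mass(lam):
        return np.sum((c / (lam - G)) ** (1.0 / (1.0 - alpha)))
    lo = G.max() + 1e-12
    hi = G.max() + c * len(G) ** (1.0 - alpha)   # mass(hi) <= 1 by construction
    while mass(hi) > 1.0:                        # safety doubling, rarely needed
        hi = G.max() + 2.0 * (hi - G.max())
    for _ in range(100):                         # bisection on lam
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mass(mid) > 1.0 else (lo, mid)
    p = (c / (0.5 * (lo + hi) - G)) ** (1.0 / (1.0 - alpha))
    return p / p.sum()                           # guard tiny numeric drift
```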
Theorem 3.1. Let $\tilde\Phi(G) = \max_{p \in \Delta_N}\{\langle p, G\rangle - \eta S_\alpha(p)\}$. Then the GBPA($\tilde\Phi$) has regret at most
$$\mathbb{E}\,\mathrm{Regret} \;\le\; \eta\,\frac{N^{1-\alpha} - 1}{1 - \alpha} \;+\; \frac{N^\alpha T}{\alpha\eta}. \qquad (5)$$
Before proving the theorem, we note that it immediately recovers the EXP3 upper bound as the special
case $\alpha \to 1$. An easy application of L'Hôpital's rule shows that as $\alpha \to 1$, $\frac{N^{1-\alpha}-1}{1-\alpha} \to \log N$ and
$N^\alpha/\alpha \to N$. Choosing $\eta = \sqrt{NT/\log N}$, we see that the right-hand side of (5) tends to
$2\sqrt{TN\log N}$. However, the choice $\alpha \to 1$ is clearly not the optimal choice, as we show in the
following statement, which directly follows from the theorem once we see that $N^{1-\alpha} - 1 < N^{1-\alpha}$.
Corollary 3.2. For any $\alpha \in (0,1)$, if we choose $\eta = \sqrt{\dfrac{(1-\alpha)T}{\alpha N^{1-2\alpha}}}$, then we have
$$\mathbb{E}\,\mathrm{Regret} \;\le\; 2\sqrt{\frac{NT}{\alpha(1-\alpha)}}.$$
In particular, the choice of $\alpha = \tfrac12$ gives a regret of no more than $4\sqrt{NT}$.
Proof of Theorem 3.1. We will bound each penalty term in Lemma 2.1. Since $S_\alpha$ is non-positive,
the underestimation penalty is upper bounded by 0 and the overestimation penalty is at most
$\eta(-\min S_\alpha)$. The minimum of $S_\alpha$ occurs at $(1/N, \dots, 1/N)$. Hence,
$$(\text{overestimation penalty}) \;\le\; -\frac{\eta}{1-\alpha}\Big(1 - \sum_{i=1}^N \Big(\frac{1}{N}\Big)^{\alpha}\Big) \;=\; \eta\,\frac{N^{1-\alpha} - 1}{1-\alpha}. \qquad (6)$$
Now it remains to upper bound the divergence penalty with $(\alpha\eta)^{-1} N^\alpha T$. We observe that straightforward calculus gives $\nabla^2(\eta S_\alpha)(p) = \eta\alpha\,\mathrm{diag}\big(p_1^{\alpha-2}, \dots, p_N^{\alpha-2}\big)$. Let $I_{\Delta_N}(\cdot)$ be the indicator function
of $\Delta_N$; that is, $I_{\Delta_N}(x) = 0$ for $x \in \Delta_N$ and $I_{\Delta_N}(x) = \infty$ for $x \notin \Delta_N$. It is clear that $\tilde\Phi(\cdot)$ is
the dual of the function $\eta S_\alpha(\cdot) + I_{\Delta_N}(\cdot)$, and moreover we observe that $\nabla^2(\eta S_\alpha)(p)$ is a sub-hessian of
$\eta S_\alpha(\cdot) + I_{\Delta_N}(\cdot)$ at $p(G)$, following the setup of Penot (1994). Taking advantage of Proposition 3.2
in the latter reference, we conclude that $\nabla^{-2}(\eta S_\alpha)(p(G))$ is a super-hessian of $\tilde\Phi = (\eta S_\alpha)^*$ at $G$. Hence,
$$\nabla^2\tilde\Phi(G) \;\preceq\; (\alpha\eta)^{-1}\,\mathrm{diag}\big(p_1^{2-\alpha}(G), \dots, p_N^{2-\alpha}(G)\big)$$
for any $G$. What we have stated, indeed, is that $\tilde\Phi$ is $(2-\alpha, (\alpha\eta)^{-1})$-differentially-consistent, and
thus applying Theorem 2.3 gives
$$D_{\tilde\Phi}(\hat G_t, \hat G_{t-1}) \;\le\; (\alpha\eta)^{-1} \sum_{i=1}^N p_i(\hat G_{t-1})^{1-\alpha}.$$
Noting that the $\frac{1}{1-\alpha}$-norm and the $\frac{1}{\alpha}$-norm are dual to each other, we can apply Hölder's inequality
to any probability distribution $p_1, \dots, p_N$ to obtain
$$\sum_{i=1}^N p_i^{1-\alpha} \;=\; \sum_{i=1}^N p_i^{1-\alpha}\cdot 1 \;\le\; \Big(\sum_{i=1}^N p_i\Big)^{1-\alpha}\Big(\sum_{i=1}^N 1^{1/\alpha}\Big)^{\alpha} \;=\; (1)^{1-\alpha}\, N^{\alpha} \;=\; N^{\alpha}.$$
So, the divergence penalty is at most $(\alpha\eta)^{-1} N^{\alpha}$, which completes the proof. $\square$
4 Near-Optimal Bandit Algorithms via Stochastic Smoothing
Let $\mathcal{D}$ be a continuous distribution over an unbounded support with probability density function $f$
and cumulative density function $F$. Consider the GBPA($\tilde\Phi(G; \mathcal{D})$) where
$$\tilde\Phi(G; \mathcal{D}) = \mathbb{E}_{Z_1,\dots,Z_N \overset{\mathrm{iid}}{\sim} \mathcal{D}}\Big[\max_i \{G_i + Z_i\}\Big],$$
which is a stochastic smoothing of the $(\max_i G_i)$ function. Since the max function is convex, $\tilde\Phi$ is also
convex. By Bertsekas (1973), we can swap the order of differentiation and expectation:
$$\nabla\tilde\Phi(G; \mathcal{D}) = \mathbb{E}_{Z_1,\dots,Z_N \overset{\mathrm{iid}}{\sim} \mathcal{D}}\big[e_{i^*}\big], \quad \text{where } i^* = \arg\max_{i=1,\dots,N}\{G_i + Z_i\}. \qquad (7)$$
Even if the function is not differentiable everywhere, the swapping is still possible with any subgradient as long as they are bounded. Hence, the ties between coordinates (which happen with
probability zero anyways) can be resolved in an arbitrary manner. It is clear that $\nabla\tilde\Phi$ is in the
probability simplex, and note that
$$\frac{\partial\tilde\Phi}{\partial G_i} = \mathbb{E}_{Z_1,\dots,Z_N}\,\mathbf{1}\{G_i + Z_i > G_j + Z_j,\ \forall j \neq i\}
= \mathbb{E}_{\tilde G_{j^*}}\big[\mathbb{P}_{Z_i}[Z_i > \tilde G_{j^*} - G_i]\big] = \mathbb{E}_{\tilde G_{j^*}}\big[1 - F(\tilde G_{j^*} - G_i)\big] \qquad (8)$$
where $\tilde G_{j^*} = \max_{j \neq i} G_j + Z_j$. The unbounded support condition guarantees that this partial
derivative is non-zero for all $i$ given any $G$. So, $\tilde\Phi(G; \mathcal{D})$ satisfies the requirements of Algorithm 1.
4.1 Connection to Follow the Perturbed Leader
There is a straightforward way to efficiently implement the sampling step of the bandit GBPA (Algorithm 1) with a stochastically smoothed function. Instead of evaluating the expectation of Equation 7, we simply take a random sample. In fact, this is equivalent to the Follow the Perturbed Leader
algorithm (FTPL) (Kalai and Vempala, 2005) for bandit settings. On the other hand, implementing
the estimation step is hard because generally there is no closed-form expression for $\nabla\tilde\Phi$.
To address this issue, Neu and Bartók (2013) proposed Geometric Resampling (GR). GR uses an
iterative resampling process to estimate $\nabla_i\tilde\Phi$. This process gives an unbiased estimate when allowed
to run for an unbounded number of iterations. Even when we truncate the resampling process after
$M$ iterations, the extra regret due to the estimation bias is at most $\frac{NT}{eM}$ (an additive term). Since the
lower bound for the multi-armed bandit problem is $\Omega(\sqrt{NT})$, any choice of $M = \Omega(\sqrt{NT})$ does
not affect the asymptotic regret of the algorithm. In summary, all our GBPA regret bounds in this
section hold for the corresponding FTPL algorithm with an extra additive $\frac{NT}{eM}$ term in the bound.
Despite the fact that perturbation-based algorithms provide a natural randomized decision strategy,
they have seen little application, mostly because they are hard to analyze. But one should expect
general results to be within reach: the EXP3 algorithm, for example, can be viewed through the
lens of perturbations, where the noise is distributed according to the Gumbel distribution. Indeed,
an early result of Kujala and Elomaa (2005) showed that a near-optimal MAB strategy comes about
through the use of exponentially-distributed noise, and the same perturbation strategy has more
recently been utilized in the work of Neu and Bartók (2013) and Kocák et al. (2014). However,
a more general understanding of perturbation methods has remained elusive. For example, would
Gaussian noise be sufficient for a guarantee? What about, say, the Weibull distribution?
4.2 Hazard Rate Analysis
In this section, we show that the performance of the GBPA($\tilde\Phi(G; \mathcal{D})$) can be characterized by the
hazard function of the smoothing distribution $\mathcal{D}$. The hazard rate is a standard tool in survival
analysis to describe failures due to aging; for example, an increasing hazard rate models units that
deteriorate with age while a decreasing hazard rate models units that improve with age (a counterintuitive but not illogical possibility). To the best of our knowledge, the connection between hazard
rates and the design of adversarial bandit algorithms has not been made before.
Definition 4.1 (Hazard rate function). The hazard rate function of a distribution $\mathcal{D}$ is
$$h_{\mathcal{D}}(x) := \frac{f(x)}{1 - F(x)}.$$
For the rest of the section, we assume that $\mathcal{D}$ is unbounded in the direction of $+\infty$, so that the hazard
function is well-defined everywhere. This assumption is for the clarity of presentation and can be
easily removed (Appendix B).
Theorem 4.2. The regret of the GBPA on $\tilde\Phi(G) = \mathbb{E}_{Z_1,\dots,Z_N \sim \mathcal{D}}\,\max_i\{G_i + \eta Z_i\}$ is at most:
$$\frac{N (\sup h_{\mathcal{D}})}{\eta}\, T \;+\; \eta\,\mathbb{E}_{Z_1,\dots,Z_N \sim \mathcal{D}}\Big[\max_i Z_i\Big].$$
Proof. We analyze each penalty term in Lemma 2.1. Due to the convexity of $\Phi$, the underestimation
penalty is non-positive. The overestimation penalty is clearly at most $\mathbb{E}_{Z_1,\dots,Z_N \sim \mathcal{D}}[\max_i Z_i]$, and
Lemma 4.3 proves the $N(\sup h_{\mathcal{D}})$ upper bound on the divergence penalty.
It remains to provide the tuning parameter $\eta$. Suppose we scale the perturbation $Z$ by $\eta > 0$, i.e., we
add $\eta Z_i$ to each coordinate. It is easy to see that $\mathbb{E}[\max_{i=1,\dots,N} \eta Z_i] = \eta\,\mathbb{E}[\max_{i=1,\dots,N} Z_i]$. For the
divergence penalty, let $F_\eta$ be the CDF of the scaled random variable. Observe that $F_\eta(t) = F(t/\eta)$
and thus $f_\eta(t) = \frac{1}{\eta} f(t/\eta)$. Hence, the hazard rate scales by $1/\eta$, which completes the proof.
Lemma 4.3. The divergence penalty of the GBPA with $\tilde\Phi(G) = \mathbb{E}_{Z \sim \mathcal{D}}\,\max_i\{G_i + Z_i\}$ is at most
$N(\sup h_{\mathcal{D}})$ each round.
Proof. Recall the gradient expression in Equation 8. The $i$-th diagonal entry of the Hessian is:
$$\begin{aligned}
\nabla^2_{ii}\tilde\Phi(G) &= \frac{\partial}{\partial G_i}\,\mathbb{E}_{\tilde G_{j^*}}\big[1 - F(\tilde G_{j^*} - G_i)\big]
= \mathbb{E}_{\tilde G_{j^*}}\Big[\frac{\partial}{\partial G_i}\big(1 - F(\tilde G_{j^*} - G_i)\big)\Big]
= \mathbb{E}_{\tilde G_{j^*}}\big[f(\tilde G_{j^*} - G_i)\big] \\
&= \mathbb{E}_{\tilde G_{j^*}}\big[h(\tilde G_{j^*} - G_i)\big(1 - F(\tilde G_{j^*} - G_i)\big)\big] \qquad (9) \\
&\le (\sup h)\,\mathbb{E}_{\tilde G_{j^*}}\big[1 - F(\tilde G_{j^*} - G_i)\big] = (\sup h)\,\nabla_i\tilde\Phi(G),
\end{aligned}$$
where $\tilde G_{j^*} = \max_{j \neq i}\{G_j + Z_j\}$, which is a random variable independent of $Z_i$. We now apply
Theorem 2.3 with $\gamma = 1$ and $C = (\sup h)$ to complete the proof.
Distribution              sup_x h_D(x)    E[max_{i=1..N} Z_i]                               Params for O(√(TN log N))
Gumbel(μ = 1, β = 1)      1 as x → ∞      log N + γ₀                                        N/A
Frechet (α > 1)           at most 2α      N^{1/α} Γ(1 − 1/α)                                α = log N
Weibull* (λ = 1, k ≤ 1)   k at x = 0      O((1/k)! (log N)^{1/k})                           k = 1 (Exponential)
Pareto* (x_m = 1, α)      α at x = 0      αN^{1/α}/(α − 1)                                  α = log N
Gamma(α ≥ 1, β)           β as x → ∞      (log N + (α−1) log log N − log Γ(α) + γ₀)/β       α = β = 1 (Exponential)

Table 1: Distributions that give an O(√(TN log N))-regret FTPL algorithm. The parameterization follows Wikipedia pages for easy lookup. We denote the Euler constant (≈ 0.58) by γ₀. Distributions
marked with (*) need to be slightly modified using the conditioning trick explained in Appendix B.2.
The maximum of the Frechet hazard function has to be computed numerically (Elsayed, 2012, p. 47)
but elementary calculations show that it is bounded by 2α (Appendix D).
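The bounded-hazard condition is easy to check numerically; below is a rough sketch of ours using SciPy (the grid endpoints and the particular distributions are illustrative choices):

```python
import numpy as np
from scipy import stats

def sup_hazard(dist, grid):
    """Scan h(x) = f(x) / (1 - F(x)) on a grid; sf(x) is SciPy's 1 - F(x)."""
    return np.nanmax(dist.pdf(grid) / dist.sf(grid))

grid = np.linspace(0.01, 20, 4000)
print(sup_hazard(stats.gumbel_r(), grid))    # close to 1 (approached as x grows)
print(sup_hazard(stats.expon(), grid))       # Weibull with k = 1: exactly 1
print(sup_hazard(stats.gamma(a=2.0), grid))  # approaches the rate beta = 1
```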
Corollary 4.4. The Follow the Perturbed Leader algorithm with distributions in Table 1 (restricted to a
certain range of parameters), combined with Geometric Resampling (Section 4.1) with $M = \sqrt{NT}$,
has an expected regret of order $O(\sqrt{TN\log N})$.
Table 1 provides the two terms we need to bound. We derive the third column of the table in
Appendix C using Extreme Value Theory (Embrechts et al., 1997). Note that our analysis in the
proof of Lemma 4.3 is quite tight; the only place we have an inequality is when we upper bound the
hazard rate. It is thus reasonable to pose the following conjecture:
Conjecture 4.5. If a distribution $\mathcal{D}$ has a monotonically increasing hazard rate $h_{\mathcal{D}}(x)$ that does
not converge as $x \to +\infty$ (e.g., Gaussian), then there is a sequence of losses that will incur at least
a linear regret.
The intuition is that if the adversary keeps assigning a high loss to the $i$-th arm, then with high
probability $\tilde G_{j^*} - G_i$ will be large. So, the expectation in Equation 9 will be dominated by the hazard
function evaluated at large values of $\tilde G_{j^*} - G_i$.
Acknowledgments. J. Abernethy acknowledges the support of NSF under CAREER grant IIS-1453304. A. Tewari acknowledges the support of NSF under CAREER grant IIS-1452099.
References
J. Abernethy, E. Hazan, and A. Rakhlin. Interior-point methods for full-information and bandit online learning. IEEE Transactions on Information Theory, 58(7):4164-4175, 2012.
J. Abernethy, C. Lee, A. Sinha, and A. Tewari. Online linear optimization via smoothing. In COLT, pages 807-823, 2014.
J.-Y. Audibert and S. Bubeck. Minimax policies for adversarial and stochastic bandits. In COLT, pages 217-226, 2009.
J.-Y. Audibert, S. Bubeck, and G. Lugosi. Minimax policies for combinatorial prediction games. In COLT, 2011.
P. Auer. Using confidence bounds for exploitation-exploration trade-offs. The Journal of Machine Learning Research, 3:397-422, 2003.
P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. Gambling in a rigged casino: The adversarial multi-arm bandit problem. In FOCS, 1995.
P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235-256, 2002.
P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48-77, 2003. ISSN 0097-5397.
D. P. Bertsekas. Stochastic optimization problems with nondifferentiable cost functionals. Journal of Optimization Theory and Applications, 12(2):218-231, 1973. ISSN 0022-3239.
N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
V. Dani and T. P. Hayes. Robbing the bandit: less regret in online geometric optimization against an adaptive adversary. In SODA, pages 937-943, 2006.
V. Dani, T. Hayes, and S. Kakade. The price of bandit information for online optimization. In NIPS, 2008.
L. Devroye, G. Lugosi, and G. Neu. Prediction by random-walk perturbation. In Conference on Learning Theory, pages 460-473, 2013.
E. Elsayed. Reliability Engineering. Wiley Series in Systems Engineering and Management. Wiley, 2012. ISBN 9781118309544. URL https://books.google.com/books?id=NdjF5G6tfLQC.
P. Embrechts, C. Klüppelberg, and T. Mikosch. Modelling Extremal Events: For Insurance and Finance. Applications of Mathematics. Springer, 1997. ISBN 9783540609315. URL https://books.google.com/books?id=BXOI2pICfJUC.
A. D. Flaxman, A. T. Kalai, and H. B. McMahan. Online convex optimization in the bandit setting: gradient descent without a gradient. In SODA, pages 385-394, 2005. ISBN 0-89871-585-7.
J. Gittins. Quantitative methods in the planning of pharmaceutical research. Drug Information Journal, 30(2):479-487, 1996.
J. Gittins, K. Glazebrook, and R. Weber. Multi-armed Bandit Allocation Indices. John Wiley & Sons, 2011.
J. Hannan. Approximation to Bayes risk in repeated play. In M. Dresher, A. W. Tucker, and P. Wolfe, editors, Contributions to the Theory of Games, volume III, pages 97-139, 1957.
A. Kalai and S. Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291-307, 2005.
T. Kocák, G. Neu, M. Valko, and R. Munos. Efficient learning by implicit exploration in bandit problems with side observations. In NIPS, pages 613-621. Curran Associates, Inc., 2014.
J. Kujala and T. Elomaa. On following the perturbed leader in the bandit setting. In Algorithmic Learning Theory, pages 371-385. Springer, 2005.
T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4-22, 1985.
N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212-261, 1994. ISSN 0890-5401.
H. B. McMahan and A. Blum. Online geometric optimization in the bandit setting against an adaptive adversary. In COLT, pages 109-123, 2004.
G. Neu and G. Bartók. An efficient algorithm for learning with semi-bandit feedback. In Algorithmic Learning Theory, pages 234-248. Springer, 2013.
M. Pacula, J. Ansel, S. Amarasinghe, and U.-M. O'Reilly. Hyperparameter tuning in bandit-based adaptive operator selection. In Applications of Evolutionary Computation, pages 73-82. Springer, 2012.
J.-P. Penot. Sub-hessians, super-hessians and conjugation. Nonlinear Analysis: Theory, Methods & Applications, 23(6):689-702, 1994. URL http://www.sciencedirect.com/science/article/pii/0362546X94902127.
S. Rakhlin, O. Shamir, and K. Sridharan. Relax and randomize: From value to algorithms. In Advances in Neural Information Processing Systems, pages 2141-2149, 2012.
H. Robbins. Some aspects of the sequential design of experiments. Bull. Amer. Math. Soc., 58(5):527-535, 1952.
C. Tsallis. Possible generalization of Boltzmann-Gibbs statistics. Journal of Statistical Physics, 52(1-2):479-487, 1988.
G. Van den Broeck, K. Driessens, and J. Ramon. Monte-Carlo tree search in poker using expected reward distributions. In Advances in Machine Learning, pages 367-381. Springer, 2009.
T. Van Erven, W. Kotlowski, and M. K. Warmuth. Follow the leader with dropout perturbations. In COLT, 2014.
5,560 | 6,031 | Asynchronous stochastic convex optimization:
the noise is in the noise and SGD don?t care
Sorathan Chaturapruek1
John C. Duchi2
Chris R?e1
1
2
Departments of Computer Science, Electrical Engineering, and 2 Statistics
Stanford University
Stanford, CA 94305
{sorathan,jduchi,chrismre}@stanford.edu
Abstract
We show that asymptotically, completely asynchronous stochastic gradient procedures achieve optimal (even to constant factors) convergence rates for the solution
of convex optimization problems under nearly the same conditions required for
asymptotic optimality of standard stochastic gradient procedures. Roughly, the
noise inherent to the stochastic approximation scheme dominates any noise from
asynchrony. We also give empirical evidence demonstrating the strong performance of asynchronous, parallel stochastic optimization schemes, demonstrating
that the robustness inherent to stochastic approximation problems allows substantially faster parallel and asynchronous solution methods. In short, we show that
for many stochastic approximation problems, as Freddie Mercury sings in Queen's
Bohemian Rhapsody, "Nothing really matters."
1 Introduction
We study a natural asynchronous stochastic gradient method for the solution of minimization problems of the form
minimize f(x) := E_P[F(x; W)] = ∫_Ω F(x; ω) dP(ω),    (1)
where x ↦ F(x; ω) is convex for each ω ∈ Ω, P is a probability distribution on Ω, and the vector x ∈ R^d. Stochastic gradient techniques for the solution of problem (1) have a long history in
optimization, starting from the early work of Robbins and Monro [19] and continuing on through
Ermoliev [7], Polyak and Juditsky [16], and Nemirovski et al. [14]. The latter two show how certain
long stepsizes and averaging techniques yield more robust and asymptotically optimal optimization
schemes, and we show how their results extend to practical parallel and asynchronous settings.
We consider an extension of previous stochastic gradient methods to a natural family of asynchronous gradient methods [3], where multiple processors can draw samples from the distribution
P and asynchronously perform updates to a centralized (shared) decision vector x. Our iterative
scheme is based on the Hogwild! algorithm of Niu et al. [15], which is designed to asynchronously
solve certain stochastic optimization problems in multi-core environments, though our analysis and
iterations are different. In particular, we study the following procedure, where each processor runs
asynchronously and independently of the others, though they maintain a shared integer iteration
counter k; each processor P asynchronously performs the following:
(i) Processor P reads current problem data x
(ii) Processor P draws a random sample W ~ P, computes g = ∇F(x; W), and increments the centralized counter k
(iii) Processor P updates x ← x − γ_k g sequentially for each coordinate j = 1, 2, ..., d by incrementing [x]_j ← [x]_j − γ_k [g]_j, where the scalars γ_k are a non-increasing stepsize sequence.
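As a concrete illustration of steps (i)-(iii), the following sketch (not from the paper; the least-squares objective, stepsize constants, and thread count are illustrative choices) runs several Python threads that update a shared vector with no synchronization at all:

```python
import threading
import numpy as np

d, n_iters, m_workers = 10, 100_000, 4
rng = np.random.default_rng(0)
x_star = rng.normal(size=d)
x = np.zeros(d)    # shared decision vector, deliberately updated without locks
counter = [0]      # shared (racy) iteration counter k

def worker(seed):
    local_rng = np.random.default_rng(seed)
    while counter[0] < n_iters:
        a = local_rng.normal(size=d)             # draw W = (a, b) from P
        b = a @ x_star + local_rng.normal()
        x_read = x.copy()                        # (i) read current, possibly stale, x
        g = (a @ x_read - b) * a                 # (ii) g = grad F(x; W) for F = (1/2)(<a,x> - b)^2
        counter[0] += 1                          #      increment the shared counter k
        gamma = 0.5 * counter[0] ** -0.6         # gamma_k = alpha k^{-beta}, beta in (1/2, 1)
        for j in range(d):                       # (iii) coordinate-wise unsynchronized update
            x[j] -= gamma * g[j]

threads = [threading.Thread(target=worker, args=(s,)) for s in range(m_workers)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("||x - x*|| =", np.linalg.norm(x - x_star))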
Our main results show that because of the noise inherent to the sampling process for W, the errors introduced by asynchrony in iterations (i)-(iii) are asymptotically negligible: they do not matter. Even
more, we can efficiently construct an x from the asynchronous process possessing optimal convergence rate and asymptotic variance. This has consequences for solving stochastic optimization
problems on multi-core and multi-processor systems; we can leverage parallel computing without
performing any synchronization, so that given a machine with m processors, we can read data and
perform updates m times as quickly as with a single processor, and the error from reading stale
information on x becomes asymptotically negligible. In Section 2, we state our main convergence
theorems about the asynchronous iteration (i)-(iii) for solving problem (1). Our main result, Theorem 1, gives explicit conditions under which our results hold, and we give applications to specific
stochastic optimization problems as well as a general result for asynchronous solution of operator
equations. Roughly, all we require for our (optimal) convergence results is that the Hessian of f be
positive definite near x* = argmin_x f(x) and that the gradients ∇f(x) be smooth.
Several researchers have provided and analyzed asynchronous algorithms for optimization. Bertsekas and Tsitsiklis [3] provide a comprehensive study both of models of asynchronous computation
and analyses of asynchronous numerical algorithms. More recent work has studied asynchronous
gradient procedures, though it often imposes strong conditions on gradient sparsity, conditioning of
the Hessian of f , or allowable types of asynchrony; as we show, none are essential. Niu et al. [15]
propose Hogwild! and show that under sparsity and smoothness assumptions (essentially, that the gradients ∇F(x; W) have a vanishing fraction of non-zero entries, that f is strongly convex, and ∇F(x; ω) is Lipschitz for all ω), convergence guarantees similar to the synchronous case are
possible; Agarwal and Duchi [1] showed under restrictive ordering assumptions that some delayed
gradient calculations have negligible asymptotic effect; and Duchi et al. [4] extended Niu et al.'s
results to a dual averaging algorithm that works for non-smooth, non strongly-convex problems,
so long as certain gradient sparsity assumptions hold. Researchers have also investigated parallel
coordinate descent solvers; Richtárik and Takáč [18] and Liu et al. [13] show how certain "near-separability" properties of an objective function f govern the convergence rate of parallel coordinate
descent methods, the latter focusing on asynchronous schemes. As we show, large-scale stochastic
optimization renders many of these problem assumptions unnecessary.
In addition to theoretical results, in Section 3 we give empirical results on the power of parallelism
and asynchrony in the implementation of stochastic approximation procedures. Our experiments
demonstrate two results: first, even in non-asymptotic finite-sample settings, asynchrony introduces
little degradation in solution quality, regardless of data sparsity (a common assumption in previous
analyses); that is, asynchronously-constructed estimates are statistically efficient. Second, we show
that there is some subtlety in implementation of these procedures in real hardware; while increases in
parallelism lead to concomitant linear improvements in the speed with which we compute solutions
to problem (1), in some cases we require strategies to reduce hardware resource competition between
processors to achieve the full benefits of asynchrony.
Notation  A sequence of random variables or vectors X_n converges in distribution to Z, denoted X_n →_d Z, if E[f(X_n)] → E[f(Z)] for all bounded continuous functions f. We let X_n →_p Z denote convergence in probability, meaning that lim_n P(‖X_n − Z‖ > ε) = 0 for any ε > 0. The notation N(μ, Σ) denotes the multivariate Gaussian with mean μ and covariance Σ.
2 Main results
Our main results repose on a few standard assumptions often used for the analysis of stochastic optimization procedures, which we now detail, along with a few necessary definitions. We let k denote
the iteration counter used throughout the asynchronous gradient procedure. Given that we compute g = ∇F(x; W) with counter value k in the iterations (i)-(iii), we let x_k denote the (possibly inconsistent) particular x used to compute g, and likewise say that g = g_k, noting that the update to x is then performed using γ_k. In addition, throughout the paper, we assume there is some finite bound M < ∞ such that no processor reads information more than M steps out of date.
2.1 Asynchronous convex optimization
We now present our main theoretical results for solving the stochastic convex problem (1), giving the necessary assumptions on f and F(·; W) for our results. Our first assumption roughly states that f has a quadratic expansion near the (unique) optimal point x* and is smooth.
Assumption A. The function f has unique minimizer x* and is twice continuously differentiable in the neighborhood of x* with positive definite Hessian H = ∇²f(x*) ≻ 0, and there is a covariance matrix Σ ⪰ 0 such that
E[∇F(x*; W)∇F(x*; W)ᵀ] = Σ.
Additionally, there exists a constant C < ∞ such that the gradients ∇F(x; W) satisfy
E[‖∇F(x; W) − ∇F(x*; W)‖²] ≤ C‖x − x*‖²  for all x ∈ R^d.    (2)
Lastly, f has L-Lipschitz continuous gradient: ‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖ for all x, y ∈ R^d.
Assumption A guarantees the uniqueness of the vector x* minimizing f(x) over R^d and ensures that
f is well-behaved enough for our asynchronous iteration procedure to introduce negligible noise
over a non-asynchronous procedure. In addition to Assumption A, we make one of two additional
assumptions. In the first case, we assume that f is strongly convex:
Assumption B. The function f is λ-strongly convex over all of R^d for some λ > 0, that is,
f(y) ≥ f(x) + ⟨∇f(x), y − x⟩ + (λ/2)‖x − y‖²  for x, y ∈ R^d.    (3)
Our alternate assumption is a Lipschitz assumption on f itself, made by virtue of a second moment
bound on ∇F(x; W).
Assumption B′. There exists a constant G < ∞ such that for all x ∈ R^d,
E[‖∇F(x; W)‖²] ≤ G².    (4)
With our assumptions in place, we state our main theorem.
Theorem 1. Let the iterates x_k be generated by the asynchronous process (i), (ii), (iii) with stepsize choice γ_k = αk^{−β}, where β ∈ (1/2, 1) and α > 0. Let Assumption A and either of Assumptions B or B′ hold. Then
(1/√n) Σ_{k=1}^n (x_k − x*) →_d N(0, H⁻¹ΣH⁻¹) = N(0, (∇²f(x*))⁻¹Σ(∇²f(x*))⁻¹).
Before moving to example applications of Theorem 1, we note that its convergence guarantee is generally unimprovable even by numerical constants. Indeed, for classical statistical problems, the covariance H⁻¹ΣH⁻¹ is the inverse Fisher information, and by the Le Cam-Hájek local minimax theorems [9] and results on Bahadur efficiency [21, Chapter 8], this is the optimal covariance matrix, and the best possible rate is n^{−1/2}. As for function values, using the delta method [e.g. 10, Theorem 1.8.12], we can show the optimal convergence rate of 1/n on function values.
Corollary 1. Let the conditions of Theorem 1 hold. Then
n( f( (1/n) Σ_{k=1}^n x_k ) − f(x*) ) →_d (1/2) tr(H⁻¹Σ) χ²₁,
where χ²₁ denotes a chi-squared random variable with 1 degree of freedom, H = ∇²f(x*), and Σ = E[∇F(x*; W)∇F(x*; W)ᵀ].
2.2 Examples
We now give two classical statistical optimization problems to illustrate Theorem 1. We verify that
the conditions of Assumptions A and B or B′ are not overly restrictive.
Linear regression  Standard linear regression problems satisfy the conditions of Assumption B. In this case, the data is ω = (a, b) ∈ R^d × R and the objective is F(x; ω) = (1/2)(⟨a, x⟩ − b)². If we have the moment bounds E[‖a‖⁴] < ∞ and E[b²] < ∞ and H = E[aaᵀ] ≻ 0, we have ∇²f(x*) = H, and the assumptions of Theorem 1 are certainly satisfied. Standard modeling assumptions yield more concrete guarantees. For example, if b = ⟨a, x*⟩ + ε where ε is independent mean-zero noise with E[ε²] = σ², the minimizer of f(x) = E[F(x; W)] is x*, we have ⟨a, x*⟩ − b = −ε, and
E[∇F(x*; W)∇F(x*; W)ᵀ] = E[(⟨a, x*⟩ − b) aaᵀ (⟨a, x*⟩ − b)] = E[aaᵀ ε²] = σ²E[aaᵀ] = σ²H.
In particular, the asynchronous iterates satisfy
(1/√n) Σ_{k=1}^n (x_k − x*) →_d N(0, σ²H⁻¹) = N(0, σ²E[aaᵀ]⁻¹),
which is the (minimax optimal) asymptotic covariance of the ordinary least squares estimate of x*.
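As a numerical sanity check on this limit (my own sketch, not from the paper; sizes and stepsizes are illustrative), one can run sequential SGD with iterate averaging on a standard normal design, for which H = E[aaᵀ] = I, and compare the sample covariance of √n(x̄_n − x*) with σ²I:

```python
import numpy as np

d, n, reps, sigma = 5, 20_000, 100, 1.0
rng = np.random.default_rng(1)
x_star = rng.normal(size=d)
errors = []
for _ in range(reps):
    x, x_bar = np.zeros(d), np.zeros(d)
    for k in range(1, n + 1):
        a = rng.normal(size=d)
        b = a @ x_star + sigma * rng.normal()
        x -= 0.5 * k ** -0.6 * (a @ x - b) * a   # SGD step with gamma_k = 0.5 k^{-0.6}
        x_bar += (x - x_bar) / k                 # running average (1/n) sum_k x_k
    errors.append(np.sqrt(n) * (x_bar - x_star))
print(np.round(np.cov(np.array(errors).T), 2))   # approximately sigma^2 * I_5
```

The run is slow in pure Python but makes the asymptotic covariance visible at modest n.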
Logistic regression  As long as the data has finite second moment, logistic regression problems satisfy all the conditions of Assumption B′ in Theorem 1. We have ω = (a, b) ∈ R^d × {−1, 1} and instantaneous objective F(x; ω) = log(1 + exp(−b⟨a, x⟩)). For fixed ω, this function is Lipschitz continuous and has gradient and Hessian
∇F(x; ω) = −(1/(1 + exp(b⟨a, x⟩))) b a  and  ∇²F(x; ω) = (e^{b⟨a,x⟩}/(1 + e^{b⟨a,x⟩})²) aaᵀ,
where ∇F(x; ω) is Lipschitz continuous as ‖∇²F(x; (a, b))‖ ≤ (1/4)‖a‖². So long as E[‖a‖²] < ∞ and E[∇²F(x*; W)] ≻ 0 (i.e. E[aaᵀ] is positive definite), Theorem 1 applies to logistic regression.
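These expressions translate directly into code; the short sketch below (illustrative, not from the paper) implements the per-example gradient and Hessian and checks the gradient against a finite difference:

```python
import numpy as np

def grad_F(x, a, b):
    # gradient of F(x; (a, b)) = log(1 + exp(-b <a, x>))
    return -b * a / (1.0 + np.exp(b * (a @ x)))

def hess_F(x, a, b):
    # Hessian e^{b<a,x>} / (1 + e^{b<a,x>})^2 * a a^T (note b^2 = 1 for b in {-1, 1})
    z = np.exp(b * (a @ x))
    return z / (1.0 + z) ** 2 * np.outer(a, a)

rng = np.random.default_rng(2)
x, a, b = rng.normal(size=3), rng.normal(size=3), 1.0
F = lambda x: np.log1p(np.exp(-b * (a @ x)))
e0, eps = np.eye(3)[0], 1e-6
print(grad_F(x, a, b)[0], (F(x + eps * e0) - F(x - eps * e0)) / (2 * eps))
```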
2.3 Extension to nonlinear problems
We prove Theorem 1 by way of a more general result on finding the zeros of a residual operator R : R^d → R^d, where we only observe noisy views of R(x), and there is a unique x* such that R(x*) = 0. Such situations arise, for example, in the solution of stochastic monotone operator problems (cf. Juditsky, Nemirovski, and Tauvel [8]). In this more general setting, we consider the following asynchronous iterative process, which extends that for the convex case outlined previously.
Each processor P performs the following asynchronously and independently:
(i) Processor P reads current problem data x
(ii) Processor P receives vector g = R(x) + ξ, where ξ is a random (conditionally) mean-zero noise vector, and increments a centralized counter k
(iii) Processor P updates x ← x − γ_k g sequentially for each coordinate j = 1, 2, ..., d by incrementing [x]_j = [x]_j − γ_k [g]_j.
As in the convex case, we associate vectors x_k and g_k with the update performed using γ_k, and we let ξ_k denote the noise vector used to construct g_k. These iterates and assignment of indices imply that x_k has the form
x_k = − Σ_{i=1}^{k−1} γ_i E^{ki} g_i,    (5)
where E^{ki} ∈ {0, 1}^{d×d} is a diagonal matrix whose jth diagonal entry captures whether coordinate j of the ith gradient has been incorporated into iterate x_k.
We define an increasing sequence of σ-fields F_k by
F_k = σ( ξ_1, ..., ξ_k, E^{ij} : i ≤ k + 1, j ≤ i ),    (6)
that is, the noise variables ξ_k are adapted to the filtration F_k, and these σ-fields are the smallest containing both the noise and all index updates that have occurred and that will occur to compute x_{k+1}. Thus we have x_{k+1} ∈ F_k, and our mean-zero assumption on the noise ξ is
E[ξ_k | F_{k−1}] = 0.
We base our analysis on Polyak and Juditsky's study [16] of stochastic approximation procedures, so we enumerate a few more requirements, modeled on theirs, for our results on convergence of the asynchronous iterations for solving the nonlinear equality R(x*) = 0. We assume there is a Lyapunov function V satisfying V(x) ≥ λ‖x‖² for all x ∈ R^d, ‖∇V(x) − ∇V(y)‖ ≤ L‖x − y‖ for all x, y, that ∇V(0) = 0, and V(0) = 0. This implies
λ‖x‖² ≤ V(x) ≤ V(0) + ⟨∇V(0), x − 0⟩ + (L/2)‖x‖² = (L/2)‖x‖²    (7)
and ‖∇V(x)‖² ≤ L²‖x‖² ≤ (L²/λ)V(x). We make the following assumptions on the residual R.
Assumption C. There exist a matrix H ∈ R^{d×d} with H ≻ 0, a parameter 0 < ν ≤ 1, a constant C < ∞, and δ > 0 such that if x satisfies ‖x − x*‖ ≤ δ,
‖R(x) − H(x − x*)‖ ≤ C‖x − x*‖^{1+ν}.
Assumption C essentially requires that R is differentiable at x* with derivative matrix H ≻ 0. We also make a few assumptions on the noise process ξ; specifically, we assume ξ implicitly depends on x ∈ R^d (so that we may write ξ_k = ξ(x_k)), and that the following assumption holds.
Assumption D. The noise vector ξ(x) decomposes as ξ(x) = ξ(0) + ζ(x), where ξ(0) is a process satisfying E[ξ_k(0)ξ_k(0)ᵀ | F_{k−1}] →_p Σ ⪰ 0 for a matrix Σ ∈ R^{d×d}, sup_k E[‖ξ_k(0)‖² | F_{k−1}] < ∞ with probability 1, and E[‖ζ(x)‖² | F_{k−1}] ≤ C‖x − x*‖² for a constant C < ∞ and all x ∈ R^d.
As in the convex case, we make one of two additional assumptions, which should be compared with Assumptions B and B′. The first is that R gives globally strong information about x*.
Assumption E (Strongly convex residuals). There exists a constant λ₀ > 0 such that for all x ∈ R^d,
⟨∇V(x − x*), R(x)⟩ ≥ λ₀ V(x − x*).
Alternatively, we may make an assumption on the boundedness of R, which we shall see suffices for proving our main results.
Assumption E′ (Bounded residuals). There exist λ₀ > 0 and δ > 0 such that
inf_{0<‖x−x*‖≤δ} ⟨∇V(x − x*), R(x)⟩ / V(x − x*) ≥ λ₀  and  inf_{δ<‖x−x*‖} ⟨∇V(x − x*), R(x)⟩ > 0.
In addition there exists C < ∞ such that ‖R(x)‖ ≤ C and E[‖ξ_k‖² | F_{k−1}] ≤ C² for all k and x.
With these assumptions in place, we obtain the following more general version of Theorem 1; indeed, we show that Theorem 1 is a consequence of this result.
Theorem 2. Let V be a function satisfying inequality (7), and let Assumptions C and D hold. Let the stepsizes be γ_k = αk^{−β}, where 1/(1+ν) < β < 1. Let one of Assumptions E or E′ hold. Then
(1/√n) Σ_{k=1}^n (x_k − x*) →_d N(0, H⁻¹ΣH⁻¹).
We may compare this result to Polyak and Juditsky's Theorem 2 [16], which gives identical asymptotic convergence guarantees but with somewhat weaker conditions on the function V and stepsize sequence γ_k. Our stronger assumptions, however, allow our result to apply even in fully asynchronous settings.
2.4 Proof sketch
We provide rigorous proofs in the long version of this paper [5], providing an amputated sketch here. First, to show that Theorem 1 follows from Theorem 2, we set R(x) = ∇f(x) and V(x) = (1/2)‖x‖². We can then show that Assumption A, which guarantees a second-order Taylor expansion, implies Assumption C with ν = 1 and H = ∇²f(x*). Moreover, Assumption B (or B′) implies Assumption E (respectively, E′), while to see that Assumption D holds, we set ξ(0) = ∇F(x*; W), taking Σ = E[∇F(x*; W)∇F(x*; W)ᵀ] and ζ(x) = ∇F(x; W) − ∇F(x*; W), and applying inequality (2) of Assumption A to satisfy Assumption D with the vector ζ.
The proof of Theorem 2 is somewhat more involved. Roughly, we show the asymptotic equivalence of the sequence x_k from expression (5) to the easier to analyze sequence x̃_k = −Σ_{i=1}^{k−1} γ_i g_i. Asymptotically, we obtain E[‖x_k − x̃_k‖²] = O(γ_k²), while the iterates x̃_k, in spite of their incorrect gradient calculations, are close enough to a correct stochastic gradient iterate that they possess optimal asymptotic normality properties. This "close enough" follows by virtue of the squared error bounds for ζ in Assumption D, which guarantee that ξ_k essentially behaves like an i.i.d. sequence asymptotically (after application of the Robbins-Siegmund martingale convergence theorem [20]), which we then average to obtain a central limit theorem.
3 Experimental results
We provide empirical results studying the performance of asynchronous stochastic approximation
schemes on several simulated and real-world datasets. Our theoretical results suggest that asynchrony should introduce little degradation in solution quality, which we would like to verify; we
also investigate the engineering techniques necessary to truly leverage the power of asynchronous
stochastic procedures. In our experiments, we focus on linear and logistic regression, the examples given in Section 2.2; that is, we have data (a_i, b_i) ∈ R^d × R (for linear regression) or (a_i, b_i) ∈ R^d × {−1, 1} (for logistic regression), for i = 1, ..., N, and objectives
f(x) = (1/2N) Σ_{i=1}^N (⟨a_i, x⟩ − b_i)²  and  f(x) = (1/N) Σ_{i=1}^N log(1 + exp(−b_i⟨a_i, x⟩)).    (8)
We perform each of our experiments using a 48-core Intel Xeon machine with 1 terabyte of RAM,
and have put code and binaries to replicate our experiments on CodaLab [6]. The Xeon architecture
puts each core onto one of four sockets, where each socket has its own memory. To limit the impact
of communication overhead in our experiments, we limit all experiments to at most 12 cores, all
on the same socket. Within an experiment, based on the empirical expectations (8), we iterate in
epochs, meaning that our stochastic gradient procedure repeatedly loops through all examples, each
exactly once.¹ Within an epoch, we use a fixed stepsize γ, decreasing the stepsize by a factor of .9
between each epoch (this matches the experimental protocol of Niu et al. [15]). Within each epoch,
we choose examples in a randomly permuted order, where the order changes from epoch to epoch
(cf. [17]). To address issues of hardware resource contention (see Section 3.2 for more on this), in
some cases we use a mini-batching strategy. Abstractly, in the formulation of the basic problem (1),
this means that in each calculation of a stochastic gradient g we draw B ≥ 1 samples W₁, ..., W_B i.i.d. according to P, then set
g(x) = (1/B) Σ_{b=1}^B ∇F(x; W_b).    (9)
The mini-batching strategy (9) does not change the (asymptotic) convergence guarantees of asynchronous stochastic gradient descent, as the covariance matrix Σ = E[g(x*)g(x*)ᵀ] satisfies Σ = (1/B)E[∇F(x*; W)∇F(x*; W)ᵀ], while the total iteration count is reduced by a factor of B.
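In code, the mini-batched estimator (9) is a small wrapper around the single-sample gradient; a sketch for the least-squares case (the function name and sampler interface are my own):

```python
import numpy as np

def minibatch_grad(x, sample_W, B=10):
    # g(x) = (1/B) sum_{b=1}^B grad F(x; W_b), with W_b = (a, b) drawn i.i.d. from P
    g = np.zeros_like(x)
    for _ in range(B):
        a, b = sample_W()
        g += (a @ x - b) * a
    return g / B
```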
Lastly, we measure the performance of optimization schemes via speedup, defined as
speedup = (average epoch runtime on a single core using Hogwild!) / (average epoch runtime on m cores).    (10)
In our experiments, as increasing the number m of cores does not change the gap in optimality f(x_k) − f(x*) after each epoch, speedup is equivalent to the ratio of the time required to obtain an ε-accurate solution using a single processor/core to that required to obtain an ε-accurate solution using m processors/cores.
3.1 Efficiency and sparsity
For our first set of experiments, we study the effect that data sparsity has on the convergence behavior of asynchronous methods (sparsity has been an essential part of the analysis of many asynchronous and parallel optimization schemes [15, 4, 18], while our theoretical results suggest it should be unimportant) using the linear regression objective (8). We generate synthetic linear regression problems with N = 10⁶ examples in d = 10³ dimensions via the following procedure. Let ρ ∈ (0, 1] be the desired fraction of non-zero gradient entries, and let Π_ρ be a random projection operator that zeros out all but a fraction ρ of the elements of its argument, meaning that for a ∈ R^d, Π_ρ(a) uniformly at random chooses ρd elements of a, leaves them identical, and zeroes the remaining elements. We generate data for our linear regression by drawing a random vector u* ~ N(0, I), then constructing b_i = ⟨a_i, u*⟩ + ε_i, i = 1, ..., N, where ε_i ~ N(0, 1) i.i.d., a_i = Π_ρ(ã_i), ã_i ~ N(0, I) i.i.d., and Π_ρ(ã_i) denotes an independent random sparse projection of ã_i. To measure the optimality gap, we directly compute x* = (AᵀA)⁻¹Aᵀb, where A = [a₁ a₂ ··· a_N]ᵀ ∈ R^{N×d}.
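This generation procedure is straightforward to reproduce; a sketch with sizes shrunk from the paper's N = 10⁶, d = 10³ so the closed-form solve stays cheap:

```python
import numpy as np

N, d, rho = 10_000, 100, 0.2
rng = np.random.default_rng(3)

def project(a, rho, rng):
    # Pi_rho: keep a uniformly random fraction rho of the coordinates, zero the rest
    keep = rng.choice(a.size, size=int(rho * a.size), replace=False)
    out = np.zeros_like(a)
    out[keep] = a[keep]
    return out

u_star = rng.normal(size=d)
A = np.array([project(rng.normal(size=d), rho, rng) for _ in range(N)])
b = A @ u_star + rng.normal(size=N)
x_star = np.linalg.solve(A.T @ A, A.T @ b)   # x* = (A^T A)^{-1} A^T b
```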
In Figure 1, we plot the results of simulations using densities ρ ∈ {.005, .01, .2, 1} and mini-batch size B = 10, showing the gap f(x_k) − f(x*) as a function of the number of epochs for each of the given sparsity levels. We give results using 1, 2, 4, and 10 processor cores (increasing degrees of asynchrony), and from the plots, we see that regardless of the number of cores, the convergence
¹ Strictly speaking, this violates the stochastic gradient assumption, but it allows direct comparison with the original Hogwild! code and implementation [15].
behavior is nearly identical, with very minor degradations in performance for the sparsest data. (We plot the gaps f(x_k) − f(x*) on a logarithmic axis.) Moreover, as the data becomes denser, the more asynchronous methods (larger number of cores) achieve performance essentially identical to the fully synchronous method in terms of convergence versus number of epochs. In Figure 2, we plot the speedup achieved using different numbers of cores. We also include the speedup achieved using multiple cores with explicit synchronization (locking) of the updates, meaning that instead of allowing asynchronous updates, each of the cores globally locks the decision vector when it reads, unlocks and performs mini-batched gradient computations, and locks the vector again when it updates the vector. We can see that the performance curve is much worse than the without-locking performance curve across all densities. That the locking strategy also gains some speedup when the density is higher is likely due to longer computation of the gradients. However, the locking-strategy performance is still not competitive with that of the without-locking strategy.
[Figure 1 shows four log-scale panels, (a) ρ = .005, (b) ρ = .01, (c) ρ = .2, (d) ρ = 1, each plotting f(x_k) − f(x*) against epochs (0 to 20) for 1, 4, 8, and 10 cores.]
Figure 1. (Exponential backoff stepsizes) Optimality gaps for synthetic linear regression experiments showing effects of data sparsity and asynchrony on f(x_k) − f(x*). A fraction ρ of each vector a_i ∈ R^d is non-zero.
[Figure 2 shows four panels, (a) ρ = .005, (b) ρ = .01, (c) ρ = .2, (d) ρ = 1, each plotting speedup against the number of cores (2 to 10) for linear speedup, without locking, and with locking.]
Figure 2. (Exponential backoff stepsizes) Speedups for synthetic linear regression experiments showing effects of data sparsity on speedup (10). A fraction ρ of each vector a_i ∈ R^d is non-zero.
3.2 Hardware issues and cache locality
We detail a small set of experiments investigating hardware issues that arise even in implementation
of asynchronous gradient methods. The Intel x86 architecture (as with essentially every processor
architecture) organizes memory in a hierarchy, going from L1 to L3 (level 1 to level 3) caches of
increasing sizes. An important aspect of the speed of different optimization schemes is the relative
fraction of memory hits, meaning accesses to memory that is cached locally (in order of decreasing
speed, L1, L2, or L3 cache). In Table 1, we show the proportion of cache misses at each level of the
memory hierarchy for our synthetic regression experiment with fully dense data (ρ = 1) over the
execution of 20 epochs, averaged over 10 different experiments. We compare memory contention
when the batch size B used to compute the local asynchronous gradients (9) is 1 and 10. We see that the proportion of misses for the fastest two levels (1 and 2) of the cache for B = 1 increases significantly with the number of cores, while increasing the batch size to B = 10 substantially mitigates cache incoherency. In particular, we maintain (near) linear increases in iteration speed with little degradation in solution quality (the gap f(x̂) − f(x*) output by each of the procedures with and without batching is identical to within 10⁻³; cf. Figure 1(d)).
                          No batching (B = 1)              Batch size B = 10
Number of cores           1      4      8      10          1      4      8      10
fraction of L1 misses   0.0009 0.0017 0.0025 0.0026      0.0012 0.0011 0.0011 0.0011
fraction of L2 misses   0.5638 0.6594 0.7551 0.7762      0.5420 0.5467 0.5537 0.5621
fraction of L3 misses   0.6152 0.4528 0.3068 0.2841      0.5677 0.5895 0.5714 0.5578
epoch average time (s)  4.2101 1.6577 1.4052 1.3183      4.4286 1.1868 0.6971 0.6220
speedup                   1.00   2.54   3.00   3.19        1.00   3.73   6.35   7.12

Table 1. Memory traffic for batched updates (9) versus non-batched updates (B = 1) for a dense linear regression problem in d = 10³ dimensions with a sample of size N = 10⁶. Cache misses are substantially higher with B = 1.
3.3 Real datasets
We perform experiments using three different real-world datasets: the Reuters RCV1 corpus [11], the Higgs detection dataset [2], and the Forest Cover dataset [12]. Each represents a binary classification problem which we formulate using logistic regression. We briefly detail statistics for each:
(1) The Reuters RCV1 dataset consists of N ≈ 7.81 × 10⁵ data vectors (documents) a_i ∈ {0, 1}^d with d ≈ 5 × 10⁴ dimensions; each vector has sparsity approximately ρ = 3 × 10⁻³. Our task is to classify each document as being about corporate industrial topics (CCAT) or not.
(2) The Higgs detection dataset consists of N = 10⁶ data vectors ã_i ∈ R^{d₀}, with d₀ = 28. We quantize each coordinate into 5 bins containing equal fractions of the coordinate values and encode each vector ã_i as a vector a_i ∈ {0, 1}^{5d₀} whose non-zero entries correspond to quantiles into which coordinates fall. The task is to detect (simulated) emissions from a linear accelerator.
(3) The Forest Cover dataset consists of N ≈ 5.7 × 10⁵ data vectors a_i ∈ {−1, 1}^d with d = 54, and the task is to predict forest growth types.
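The quantile encoding used for the Higgs data can be reproduced along the following lines (a sketch; the helper name and the exact tie-breaking at bin edges are my own choices):

```python
import numpy as np

def quantile_one_hot(X, n_bins=5):
    # X: (N, d0) raw features -> (N, n_bins * d0) binary encoding, one bin active per coordinate
    N, d0 = X.shape
    out = np.zeros((N, n_bins * d0))
    for j in range(d0):
        # interior bin edges at equal-mass quantiles of column j
        edges = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
        bins = np.digitize(X[:, j], edges)      # values in {0, ..., n_bins - 1}
        out[np.arange(N), j * n_bins + bins] = 1.0
    return out
```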
[Figure 3 shows three log-scale panels, (a) RCV1 (ρ = .003), (b) Higgs (ρ = 1), and (c) Forest (ρ = 1), each plotting f(x_k) − f(x*) against epochs (0 to 20) for 1, 4, 8, and 10 cores.]
Figure 3. (Exponential backoff stepsizes) Optimality gaps f(x_k) − f(x*) on the (a) RCV1, (b) Higgs, and (c) Forest Cover datasets.
[Figure 4 shows three panels, (a) RCV1 (ρ = .003), (b) Higgs (ρ = 1), and (c) Forest (ρ = 1), each plotting speedup against the number of cores (2 to 10) for linear speedup and without locking.]
Figure 4. (Exponential backoff stepsizes) Logistic regression experiments showing speedup (10) on the (a) RCV1, (b) Higgs, and (c) Forest Cover datasets.
In Figure 3, we plot the gap f(x_k) − f(x*) as a function of epochs, giving standard error intervals over 10 runs for each experiment. There is essentially no degradation in objective value for the different numbers of processors, and in Figure 4, we plot the speedup achieved using 1, 4, 8, and 10 cores with batch sizes B = 10. Asynchronous gradient methods achieve speedups of between 6× and 8× on each of the datasets using 10 cores.
References
[1] A. Agarwal and J. C. Duchi. Distributed delayed stochastic optimization. In Advances in
Neural Information Processing Systems 24, 2011.
[2] P. Baldi, P. Sadowski, and D. Whiteson. Searching for exotic particles in high-energy physics
with deep learning. Nature Communications, 5, July 2014.
[3] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Prentice-Hall, Inc., 1989.
[4] J. C. Duchi, M. I. Jordan, and H. B. McMahan. Estimation, optimization, and parallelism when
data is sparse. In Advances in Neural Information Processing Systems 26, 2013.
[5] J. C. Duchi, S. Chaturapruek, and C. Ré. Asynchronous stochastic convex optimization.
arXiv:1508.00882 [math.OC], 2015.
[6] J. C. Duchi, S. Chaturapruek, and C. Ré. Asynchronous stochastic convex optimization, 2015.
URL https://www.codalab.org/worksheets/. Code for reproducing experiments.
[7] Y. M. Ermoliev. On the stochastic quasi-gradient method and stochastic quasi-Feyer sequences.
Kibernetika, 2:72-83, 1969.
[8] A. Juditsky, A. Nemirovski, and C. Tauvel. Solving variational inequalities with the stochastic
mirror-prox algorithm. Stochastic Systems, 1(1):17-58, 2011.
[9] L. Le Cam and G. L. Yang. Asymptotics in Statistics: Some Basic Concepts. Springer, 2000.
[10] E. L. Lehmann and G. Casella. Theory of Point Estimation, Second Edition. Springer, 1998.
[11] D. Lewis, Y. Yang, T. Rose, and F. Li. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361-397, 2004.
[12] M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
[13] J. Liu, S. J. Wright, C. Ré, V. Bittorf, and S. Sridhar. An asynchronous parallel stochastic
coordinate descent algorithm. In Proceedings of the 31st International Conference on Machine
Learning, 2014.
[14] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach
to stochastic programming. SIAM Journal on Optimization, 19(4):1574-1609, 2009.
[15] F. Niu, B. Recht, C. Ré, and S. Wright. Hogwild: a lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems 24, 2011.
[16] B. T. Polyak and A. B. Juditsky. Acceleration of stochastic approximation by averaging. SIAM
Journal on Control and Optimization, 30(4):838-855, 1992.
[17] B. Recht and C. Ré. Beneath the valley of the noncommutative arithmetic-geometric mean
inequality: conjectures, case-studies, and consequences. In Proceedings of the Twenty Fifth
Annual Conference on Computational Learning Theory, 2012.
[18] P. Richtárik and M. Takáč. Parallel coordinate descent methods for big data optimization. Mathematical Programming, page Online first, 2015. URL http://link.springer.com/article/10.1007/s10107-015-0901-6.
[19] H. Robbins and S. Monro. A stochastic approximation method. Annals of Mathematical
Statistics, 22:400-407, 1951.
[20] H. Robbins and D. Siegmund. A convergence theorem for non-negative almost supermartingales and some applications. In Optimizing Methods in Statistics, pages 233-257. Academic
Press, New York, 1971.
[21] A. W. van der Vaart. Asymptotic Statistics. Cambridge Series in Statistical and Probabilistic
Mathematics. Cambridge University Press, 1998. ISBN 0-521-49603-9.
The Pareto Regret Frontier for Bandits
Tor Lattimore
Department of Computing Science
University of Alberta, Canada
[email protected]
Abstract
Given a multi-armed bandit problem it may be desirable to achieve a smaller-than-usual worst-case regret for some special actions. I show that the price for such unbalanced worst-case regret guarantees is rather high. Specifically, if an algorithm enjoys a worst-case regret of B with respect to some action, then there must exist another action for which the worst-case regret is at least Ω(nK/B), where n is the horizon and K the number of actions. I also give upper bounds
in both the stochastic and adversarial settings showing that this result cannot be
improved. For the stochastic case the pareto regret frontier is characterised exactly
up to constant factors.
1 Introduction
The multi-armed bandit is the simplest class of problems that exhibit the exploration/exploitation
dilemma. In each time step the learner chooses one of K actions and receives a noisy reward signal
for the chosen action. A learner's performance is measured in terms of the regret, which is the
(expected) difference between the rewards it actually received and those it would have received (in
expectation) by choosing the optimal action.
Prior work on the regret criterion for finite-armed bandits has treated all actions uniformly and has
aimed for bounds on the regret that do not depend on which action turned out to be optimal. I
take a different approach and ask what can be achieved if some actions are given special treatment.
Focussing on worst-case bounds, I ask whether or not it is possible to achieve improved worst-case
regret for some actions, and what is the cost in terms of the regret for the remaining actions. Such
results may be useful in a variety of cases. For example, a company that is exploring some new
strategies might expect an especially small regret if its existing strategy turns out to be (nearly)
optimal.
This problem has previously been considered in the experts setting where the learner is allowed to observe the reward for all actions in every round, not only for the action actually chosen. The earliest work seems to be by Hutter and Poland [2005], where it is shown that the learner can assign a prior weight to each action and pays a worst-case regret of O(√(n log(1/ρ_i))) for expert i, where ρ_i is the prior belief in expert i and n is the horizon. The uniform regret is obtained by choosing ρ_i = 1/K, which leads to the well-known O(√(n log K)) bound achieved by the exponential weighting algorithm [Cesa-Bianchi, 2006]. The consequence of this is that an algorithm can enjoy a constant
regret with respect to a single action while suffering minimally on the remainder. The problem was
studied in more detail by Koolen [2013] where (remarkably) the author was able to exactly describe
the pareto regret frontier when K = 2.
Other related work (also in the experts setting) is where the objective is to obtain an improved regret
against a mixture of available experts/actions [Even-Dar et al., 2008, Kapralov and Panigrahy, 2011].
In a similar vein, Sani et al. [2014] showed that algorithms for prediction with expert advice can be
combined with minimal cost to obtain the best of both worlds. In the bandit setting I am only aware
of the work by Liu and Li [2015] who study the effect of the prior on the regret of Thompson
sampling in a special case. In contrast the lower bound given here applies to all algorithms in a
relatively standard setting.
The main contribution of this work is a characterisation of the pareto regret frontier (the set of
achievable worst-case regret bounds) for stochastic bandits.
Let μ_i ∈ R be the unknown mean of the ith arm and assume that sup_{i,j} μ_i − μ_j ≤ 1. In each time step the learner chooses an action I_t ∈ {1, ..., K} and receives reward g_{I_t,t} = μ_{I_t} + η_t, where η_t is a noise term that I assume to be sampled independently from a 1-subgaussian distribution that may depend on I_t. This model subsumes both Gaussian and Bernoulli (or bounded) rewards. Let π be a bandit strategy, which is a function from histories of observations to an action I_t. Then the n-step expected pseudo-regret with respect to the ith arm is
R^π_{μ,i} = nμ_i − E Σ_{t=1}^n μ_{I_t},
where the expectation is taken with respect to the randomness in the noise and the actions of the
policy. Throughout this work n will be fixed, so it is omitted from the notation. The worst-case expected pseudo-regret with respect to arm i is
R_i^π = sup_μ R^π_{μ,i}.    (1)
This means that R^π ∈ R^K is a vector of worst-case pseudo-regrets with respect to each of the arms. Let B ⊆ R^K be the set defined by
B = { B ∈ [0, n]^K : B_i ≥ min{ n, Σ_{j≠i} n/B_j } for all i }.    (2)
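Membership in B is a one-line check per coordinate; a sketch (the function name is my own):

```python
import numpy as np

def in_frontier_set(B, n):
    # B in calB iff 0 <= B_i <= n and B_i >= min(n, sum_{j != i} n / B_j) for all i
    B = np.asarray(B, dtype=float)
    total = np.sum(n / B)
    return bool(np.all((0 <= B) & (B <= n)) and np.all(B >= np.minimum(n, total - n / B)))
```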
The boundary of B is denoted by ∂B. The following theorem shows that ∂B describes the pareto regret frontier up to constant factors.
Theorem
There exist universal constants c₁ = 8 and c₂ = 252 such that:
Lower bound: for η_t ~ N(0, 1) and all strategies π we have c₁(R^π + K) ∈ B.
Upper bound: for all B ∈ B there exists a strategy π such that R_i^π ≤ c₂B_i for all i.
Observe that the lower bound relies on the assumption that the noise term be Gaussian, while the upper bound holds for subgaussian noise. The lower bound may be generalised to other noise models such as Bernoulli, but does not hold for all subgaussian noise models. For example, it does not hold if there is no noise (η_t = 0 almost surely).
The lower bound also applies to the adversarial framework where the rewards may be chosen arbitrarily. Although I was not able to derive a matching upper bound in this case, a simple modification of the Exp-γ algorithm [Bubeck and Cesa-Bianchi, 2012] leads to an algorithm with
R₁^π ≲ B₁  and  R_k^π ≲ (nK/B₁) log(nK/B₁²)  for all k ≥ 2,
where the regret is the adversarial version of the expected regret. Details are in the supplementary material.
The new results seem elegant, but disappointing. In the experts setting we have seen that the learner
can distribute a prior amongst the actions and obtain a bound on the regret depending in a natural
way on the prior weight of the optimal action. In contrast, in the bandit setting the learner pays
an enormously higher price to obtain a small regret with respect to even a single arm. In fact,
the learner must essentially choose a single arm to favour, after which the regret for the remaining
arms has very limited flexibility. Unlike in the experts setting, if even a single arm enjoys constant
worst-case regret, then the worst-case regret with respect to all other arms is necessarily linear.
2 Preliminaries
I use the same notation as Bubeck and Cesa-Bianchi [2012]. Define T_i(t) to be the number of times action i has been chosen after time step t and μ̂_{i,s} to be the empirical estimate of μ_i from the first s times action i was sampled. This means that μ̂_{i,T_i(t−1)} is the empirical estimate of μ_i at the start of the tth round. I use the convention that μ̂_{i,0} = 0. Since the noise model is 1-subgaussian we have
P{ ∃s ≤ t : μ̂_{i,s} − μ_i ≥ ε/s } ≤ exp(−ε²/(2t))  for all ε > 0.    (3)
This result is presumably well known, but a proof is included in the supplementary material for convenience. The optimal arm is i* = argmax_i μ_i with ties broken in some arbitrary way. The optimal reward is μ* = max_i μ_i. The gap between the mean rewards of the jth arm and the optimal arm is Δ_j = μ* − μ_j, and Δ_{ji} = μ_i − μ_j. The vector of worst-case regrets is R^π ∈ R^K and has been defined already in Eq. (1). I write R^π ≤ B ∈ R^K if R_i^π ≤ B_i for all i ∈ {1, ..., K}. For vector R^π and x ∈ R we have (R^π + x)_i = R_i^π + x.
3 Understanding the Frontier
Before proving the main theorem I briefly describe the features of the regret frontier. First notice that if B_i = √(n(K−1)) for all i, then
B_i = √(n(K−1)) = Σ_{j≠i} √(n/(K−1)) = Σ_{j≠i} n/B_j.
Thus B ∈ B as expected. This particular B is witnessed up to constant factors by MOSS [Audibert and Bubeck, 2009] and OC-UCB [Lattimore, 2015], but not UCB [Auer et al., 2002], which suffers R_i^{ucb} ∈ Θ(√(nK log n)).
Of course the uniform choice of B is not the only option. Suppose the first arm is special, so B₁ should be chosen especially small. Assume without loss of generality that B₁ ≤ B₂ ≤ ... ≤ B_K ≤ n. Then by the main theorem we have
B₁ ≥ Σ_{i=2}^K n/B_i ≥ Σ_{i=2}^k n/B_i ≥ (k − 1)n/B_k.
Therefore
B_k ≥ (k − 1)n/B₁.    (4)
This also proves the claim in the abstract, since it implies that B_K ≥ (K − 1)n/B₁. If B₁ is fixed, then choosing B_k = (k − 1)n/B₁ does not lie on the frontier because
Σ_{k=2}^K n/B_k = Σ_{k=2}^K B₁/(k − 1) ∈ Θ(B₁ log K).
However, if H = Σ_{k=2}^K 1/(k − 1) ∈ Θ(log K), then choosing B_k = (k − 1)nH/B₁ does lie on the frontier and is a factor of log K away from the lower bound given in Eq. (4). Therefore, up to a log K factor, points on the regret frontier are characterised entirely by a permutation determining the order of worst-case regrets and the smallest worst-case regret.
Perhaps the most natural choice of B (assuming again that B₁ ≤ ... ≤ B_K) is
B₁ = n^p  and  B_k = (k − 1)n^{1−p}H for k > 1.
For p = 1/2 this leads to a bound that is at most √K log K worse than that obtained by MOSS and OC-UCB, while being a factor of √K better for a select few.
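In code, this frontier point is easy to construct and can be validated with the membership check sketched earlier (again, the helper names are my own):

```python
import numpy as np

def frontier_point(K, n, p=0.5):
    # B_1 = n^p and B_k = (k - 1) n^{1 - p} H for k > 1, with H = sum_{k=2}^K 1/(k - 1)
    H = np.sum(1.0 / np.arange(1, K))
    B = np.empty(K)
    B[0] = n ** p
    B[1:] = np.arange(1, K) * n ** (1 - p) * H
    return np.minimum(B, n)   # clip to [0, n] as the set calB requires

B = frontier_point(10, 5000)
print(B, in_frontier_set(B, 5000))
```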
Assumptions
The assumption that Δ_i ∈ [0, 1] is used to avoid annoying boundary problems caused by the fact that time is discrete. This means that if Δ_i is extremely large, then even a single sample from this arm can cause a big regret bound. This assumption is already quite common; for example, a worst-case regret of Θ(√(Kn)) clearly does not hold if the gaps are permitted to be unbounded. Unfortunately there is no perfect resolution to this annoyance. Most elegant would be to allow time to be continuous with actions taken up to stopping times. Otherwise you have to deal with the discretisation/boundary problem with special cases, or make assumptions as I have done here.
4 Lower Bounds
Theorem 1. Assume η_t ~ N(0, 1) is sampled from a standard Gaussian. Let π be an arbitrary strategy; then 8(R^π + K) ∈ B.
Proof. Assume without loss of generality that R₁^π = min_i R_i^π (if this is not the case, then simply re-order the actions). If R₁^π > n/8, then the result is trivial. From now on assume R₁^π ≤ n/8. Let c = 4 and define
Δ_k = min{ 1/2, cR_k^π/n }.
Define K vectors μ¹, ..., μ^K ∈ R^K by
(μ^k)_j = 1/2 + ( 0 if j = 1;  Δ_k if j = k ≠ 1;  −Δ_j otherwise ).
Therefore the optimal action for the bandit with means μ^k is k. Let A = {k : R_k^π ≤ n/8} and A′ = {k : k ∉ A}, and assume k ∈ A. Then
R_k^π ≥(a) R^π_{μ^k,k} ≥(b) Δ_k E_{μ^k}[ Σ_{j≠k} T_j(n) ] =(c) Δ_k (n − E_{μ^k} T_k(n)) =(d) cR_k^π (n − E_{μ^k} T_k(n)) / n,
where (a) follows since R_k^π is the worst-case regret with respect to arm k, (b) since the gap between the means of the kth arm and any other arm is at least Δ_k (note that this is also true for k = 1 since Δ₁ = min_k Δ_k), (c) follows from the fact that Σ_i T_i(n) = n, and (d) from the definition of Δ_k. Therefore
n(1 − 1/c) ≤ E_{μ^k} T_k(n).    (5)
Therefore for k ≠ 1 with k ∈ A we have
n(1 − 1/c) ≤ E_{μ^k} T_k(n) ≤(a) E_{μ^1} T_k(n) + nΔ_k √(E_{μ^1} T_k(n))
  ≤(b) n − E_{μ^1} T_1(n) + nΔ_k √(E_{μ^1} T_k(n)) ≤(c) n/c + nΔ_k √(E_{μ^1} T_k(n)),
where (a) follows from standard entropy inequalities and a similar argument as used by Auer et al. [1995] (details in the supplementary material), (b) since k ≠ 1 and E_{μ^1} T_1(n) + E_{μ^1} T_k(n) ≤ n, and (c) by Eq. (5). Therefore
E_{μ^1} T_k(n) ≥ (1 − 2/c) / Δ_k²,
which implies that
R₁^π ≥ R^π_{μ^1,1} = Σ_{k=2}^K Δ_k E_{μ^1} T_k(n) ≥ Σ_{k∈A\{1}} (1 − 2/c)/Δ_k = (1/8) Σ_{k∈A\{1}} n/R_k^π.
Therefore for all i ∈ A we have
8R_i^π ≥ (R_i^π/R₁^π) Σ_{k∈A\{1}} n/R_k^π ≥ Σ_{k∈A\{i}} n/R_k^π.
Therefore
8R_i^π + 8K ≥ Σ_{k∈A\{i}} n/R_k^π + 8K ≥ Σ_{k∈A\{i}} n/R_k^π + Σ_{k∈A′\{i}} n/R_k^π = Σ_{k≠i} n/R_k^π,
which implies that 8(R^π + K) ∈ B as required.
5 Upper Bounds
I now show that the lower bound derived in the previous section is tight up to constant factors. The algorithm is a generalisation of MOSS [Audibert and Bubeck, 2009] with two modifications. First, the widths of the confidence bounds are biased in a non-uniform way, and second, the upper confidence bounds are shifted. The new algorithm is functionally identical to MOSS in the special case that B_i is uniform. Define log⁺(x) = max{0, log(x)}.
1: Input: n and B₁, ..., B_K
2: n_i = n²/B_i² for all i
3: for t ∈ 1, ..., n do
4:   I_t = argmax_i  μ̂_{i,T_i(t−1)} + √( (4/T_i(t−1)) log⁺(n_i/T_i(t−1)) ) − √(1/n_i)
5: end for
Algorithm 1: Unbalanced MOSS
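A direct translation of Algorithm 1 into Python (a sketch: the Gaussian N(μ_i, 1) reward model and playing each arm once first are my own implementation choices; the latter matches the convention μ̂_{i,0} = 0 with an infinite exploration bonus at T_i = 0):

```python
import numpy as np

def unbalanced_moss(n, B, mu, rng):
    K = len(B)
    n_arm = (n / np.asarray(B, dtype=float)) ** 2      # n_i = n^2 / B_i^2
    T = np.zeros(K)                                    # pull counts T_i(t - 1)
    S = np.zeros(K)                                    # reward sums
    for t in range(n):
        if t < K:
            i = t                                      # forced first pull of every arm
        else:
            bonus = np.sqrt(4.0 / T * np.maximum(0.0, np.log(n_arm / T)))
            i = int(np.argmax(S / T + bonus - np.sqrt(1.0 / n_arm)))
        S[i] += mu[i] + rng.normal()                   # 1-subgaussian (Gaussian) reward
        T[i] += 1
    return T                                           # pull counts T_i(n)

n = 5000
T = unbalanced_moss(n, [n ** (1 / 3), n ** (2 / 3)], np.array([0.0, -0.1]),
                    np.random.default_rng(4))
print(T)
```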
Theorem 2. Let B ∈ B. Then the strategy π given in Algorithm 1 satisfies R^π ≤ 252B.
Corollary 3. For all μ the following hold:
1. R^π_{μ,i*} ≤ 252B_{i*}.
2. R^π_{μ,i*} ≤ min_i (nΔ_i + 252B_i).
The second part of the corollary is useful when B_{i*} is large, but there exists an arm for which nΔ_i and B_i are both small. The proof of Theorem 2 requires a few lemmas. The first is a somewhat standard concentration inequality that follows from a combination of the peeling argument and Doob's maximal inequality. The proof may be found in the supplementary material.
Lemma 4. Let Z_i = max_{1≤s≤n} ( μ_i − μ̂_{i,s} − √((4/s) log⁺(n_i/s)) ). Then P{Z_i ≥ Δ} ≤ 20/(n_i Δ²) for all Δ > 0.
In the analysis of traditional bandit algorithms the gap Δ_{ji} measures how quickly the algorithm can detect the difference between arms i and j. By design, however, Algorithm 1 is negatively biasing its estimate of the empirical mean of arm i by √(1/n_i). This has the effect of shifting the gaps, which I denote by Δ̃_{ji} and define to be
Δ̃_{ji} = Δ_{ji} + √(1/n_j) − √(1/n_i) = μ_i − μ_j + √(1/n_j) − √(1/n_i).
Lemma 5. Define the stopping time τ_{ji} by
τ_{ji} = min{ s : μ̂_{j,s} + √((4/s) log⁺(n_j/s)) ≤ μ_j + Δ̃_{ji}/2 }.
If Z_i < Δ̃_{ji}/2, then T_j(n) ≤ τ_{ji}.
Proof. Let t be the first time step such that T_j(t − 1) = τ_{ji}. Then
μ̂_{j,T_j(t−1)} + √( (4/T_j(t−1)) log⁺(n_j/T_j(t−1)) ) − √(1/n_j)
  ≤ μ_j + Δ̃_{ji}/2 − √(1/n_j)
  = μ_j + Δ̃_{ji} − Δ̃_{ji}/2 − √(1/n_j)
  = μ_i − √(1/n_i) − Δ̃_{ji}/2
  < μ̂_{i,T_i(t−1)} + √( (4/T_i(t−1)) log⁺(n_i/T_i(t−1)) ) − √(1/n_i),
which implies that arm j will not be chosen at time step t, and so also not for any subsequent time steps by the same argument and induction. Therefore T_j(n) ≤ τ_{ji}.
Lemma 6. If Δ̃_{ji} > 0, then
E τ_{ji} ≤ 40/Δ̃²_{ji} + (64/Δ̃²_{ji}) ProductLog( n_j Δ̃²_{ji}/64 ).
Proof. Let s₀ be defined by
s₀ = ⌈ (64/Δ̃²_{ji}) ProductLog( n_j Δ̃²_{ji}/64 ) ⌉  ⟹  √( (4/s₀) log⁺(n_j/s₀) ) ≤ Δ̃_{ji}/4.
Therefore
E τ_{ji} = Σ_{s=1}^n P{τ_{ji} ≥ s} ≤ 1 + Σ_{s=1}^{n−1} P{ μ̂_{j,s} − μ_j ≥ Δ̃_{ji}/2 − √((4/s) log⁺(n_j/s)) }
  ≤ 1 + s₀ + Σ_{s=s₀+1}^{n−1} P{ μ̂_{j,s} − μ_j ≥ Δ̃_{ji}/4 } ≤ 1 + s₀ + Σ_{s=s₀+1}^{n−1} exp(−sΔ̃²_{ji}/32)
  ≤ 1 + s₀ + 32/Δ̃²_{ji} ≤ 40/Δ̃²_{ji} + (64/Δ̃²_{ji}) ProductLog( n_j Δ̃²_{ji}/64 ),
where the last inequality follows since Δ̃_{ji} ≤ 2.
Proof of Theorem 2. Let ε = 2√(1/n_i) and A = {j : Δ_{ji} > ε}. Then for j ∈ A we have Δ_{ji} ≤ 2Δ̃_{ji} and Δ̃_{ji} ≥ √(1/n_i) + √(1/n_j). Letting ε₀ = √(1/n_i), we have
R^π_{μ,i} = E[ Σ_{j=1}^K Δ_{ji} T_j(n) ] ≤ nε + E[ Σ_{j∈A} Δ_{ji} T_j(n) ]
  ≤(a) 2B_i + E[ Σ_{j∈A} Δ_{ji} τ_{ji} + n max{ Δ_{ji} : Z_i ≥ Δ̃_{ji}/2 } ]
  ≤(b) 2B_i + Σ_{j∈A} ( 80/Δ̃_{ji} + (128/Δ̃_{ji}) ProductLog( n_j Δ̃²_{ji}/64 ) ) + 4nE[ Z_i 1{Z_i ≥ ε₀} ]
  ≤(c) 2B_i + Σ_{j∈A} 90√(n_j) + 4nE[ Z_i 1{Z_i ≥ ε₀} ],
where (a) follows by using Lemma 5 to bound T_j(n) ≤ τ_{ji} when Z_i < Δ̃_{ji}/2; on the other hand, the total number of pulls for arms j for which Z_i ≥ Δ̃_{ji}/2 is at most n. (b) follows by bounding τ_{ji} in expectation using Lemma 6. (c) follows from basic calculus and because for j ∈ A we have Δ̃_{ji} ≥ √(1/n_j). All that remains is to bound the expectation:
4nE[ Z_i 1{Z_i ≥ ε₀} ] ≤ 4nε₀ P{Z_i ≥ ε₀} + 4n ∫_{ε₀}^∞ P{Z_i ≥ z} dz ≤ 160n/(n_i ε₀) = 160n/√n_i = 160B_i,
where I have used Lemma 4 and simple identities. Putting it together we obtain
R^π_{μ,i} ≤ 2B_i + Σ_{j∈A} 90√(n_j) + 160B_i ≤ 252B_i,
where I applied the assumption B ∈ B and so Σ_{j≠i} √(n_j) = Σ_{j≠i} n/B_j ≤ B_i.
The above proof may be simplified in the special case that B is uniform, where we recover the minimax regret of MOSS, but with perhaps a simpler proof than was given originally by Audibert and Bubeck [2009].
On Logarithmic Regret
In a recent technical report I demonstrated empirically that MOSS suffers sub-optimal problem-dependent regret in terms of the minimum gap [Lattimore, 2015]. Specifically, it can happen that
R^{moss}_{μ,i*} ∈ Ω( (K/Δ_min) log n ),    (6)
where Δ_min = min_{i:Δ_i>0} Δ_i. On the other hand, the order-optimal asymptotic regret can be significantly smaller. Specifically, UCB by Auer et al. [2002] satisfies
R^{ucb}_{μ,i*} ∈ O( Σ_{i:Δ_i>0} (1/Δ_i) log n ),    (7)
which for unequal gaps can be much smaller than Eq. (6) and is asymptotically order-optimal [Lai and Robbins, 1985]. The problem is that MOSS explores only enough to obtain minimax regret, but sometimes obtains minimax regret even when a more conservative algorithm would do better. It is worth remarking that this effect is harder to observe than one might think. The example given in the aforementioned technical report is carefully tuned to exploit this failing, but still requires n = 10⁹ and K = 10³ before significant problems arise. In all other experiments MOSS was performing admirably in comparison to UCB.
All these problems can be avoided by modifying UCB rather than MOSS. The cost is a factor of O(√log n). The algorithm is similar to Algorithm 1, but chooses the action that maximises the following index:
I_t = argmax_i  μ̂_{i,T_i(t−1)} + √( (2 + ε) log t / T_i(t−1) ) − √( log n / n_i ),
where ε > 0 is a fixed arbitrary constant.
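The change relative to Algorithm 1 is confined to the index; a sketch of the selection rule (the same forced initialisation as above is assumed, and the helper name is my own):

```python
import numpy as np

def unbalanced_ucb_index(S, T, t, n, n_arm, eps=0.1):
    # argmax_i  mu_hat_{i, T_i(t-1)} + sqrt((2 + eps) log t / T_i(t-1)) - sqrt(log n / n_i)
    bonus = np.sqrt((2.0 + eps) * np.log(t) / T)
    return int(np.argmax(S / T + bonus - np.sqrt(np.log(n) / n_arm)))
```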
Theorem 7. If π is the strategy of unbalanced UCB with n_i = n²/B_i² and B ∈ B, then the regret of unbalanced UCB satisfies:
1. (problem-independent regret) R^π_{μ,i*} ∈ O( B_{i*} √(log n) ).
2. (problem-dependent regret) Let A = { i : Δ_i ≤ 2√( (1/n_{i*}) log n ) }. Then
R^π_{μ,i*} ∈ O( B_{i*} √(log n) 1{A ≠ ∅} + Σ_{i∉A} (1/Δ_i) log n ).
The proof is deferred to the supplementary material. The indicator function in the problem-dependent bound vanishes for sufficiently large n provided n_{i*} ∈ ω(log n), which is equivalent to B_{i*} ∈ o(n/√log n). Thus for reasonable choices of B₁, ..., B_K the algorithm is going to enjoy the same asymptotic performance as UCB. Theorem 7 may be proven for any index-based algorithm for which it can be shown that
E T_i(n) ∈ O( (1/Δ_i²) log n ),
which includes (for example) KL-UCB [Cappé et al., 2013] and Thompson sampling (see the analysis by Agrawal and Goyal [2012a,b] and the original paper by Thompson [1933]), but not OC-UCB [Lattimore, 2015] or MOSS [Audibert and Bubeck, 2009].
Experimental Results
I compare MOSS and unbalanced MOSS in two simple simulated examples, both with horizon $n = 5000$. Each data point is an empirical average of $\sim\!10^4$ i.i.d. samples, so error bars are too small to see. Code/data is available in the supplementary material. The first experiment has $K = 2$ arms, $B_1 = n^{1/3}$ and $B_2 = n^{2/3}$. I plotted the results for $\mu = (0, -\Delta)$ for varying $\Delta$. As predicted, the new algorithm performs significantly better than MOSS for positive $\Delta$, and significantly worse otherwise (Fig. 1). The second experiment has $K = 10$ arms. This time $B_1 = \sqrt{n}$ and $B_k = (k-1)H\sqrt{n}$ with $H = \sum_{k=1}^{9} 1/k$. Results are shown for $\mu_k = \Delta\,\mathbb{1}\{k = i^*\}$ for $\Delta \in [0, 1/2]$ and $i^* \in \{1, \ldots, 10\}$. Again, the results agree with the theory. The unbalanced algorithm is superior to MOSS for $i^* \in \{1, 2\}$ and inferior otherwise (Fig. 2).
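As a hedged sketch of the first experiment's setup, the snippet below sweeps $\Delta$ using the `run_unbalanced_ucb` function from the earlier sketch (so it is not fully standalone); the MOSS baseline is omitted, and the $\Delta$ grid, seed, and repetition count are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Toy version of experiment 1: two arms, B_1 = n^{1/3}, B_2 = n^{2/3},
# mu = (0, -Delta). Reuses run_unbalanced_ucb from the previous sketch.
def experiment_one(deltas, n=5000, reps=100, seed=0):
    rng = np.random.default_rng(seed)
    B = np.array([n ** (1 / 3), n ** (2 / 3)])
    results = {}
    for delta in deltas:
        means = np.array([0.0, -delta])
        # average regret over independent repetitions (the paper uses ~10^4)
        results[delta] = np.mean(
            [run_unbalanced_ucb(means, B, n, rng) for _ in range(reps)]
        )
    return results

print(experiment_one(deltas=[-0.4, -0.2, 0.0, 0.2, 0.4]))
```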
[Figure 1 and Figure 2: average regret of MOSS and unbalanced MOSS (legend: MOSS, U. MOSS) plotted against $\Delta$ for the two experiments; the caption of Figure 2 reads "$\Delta = \mu + (i^* - 1)/2$".]
Sadly the experiments serve only to highlight the plight of the biased learner, which suffers significantly worse results than its unbiased counterpart for most actions.
6 Discussion
I have shown that the cost of favouritism for multi-armed bandit algorithms is rather serious. If
an algorithm exhibits a small worst-case regret for a specific action, then the worst-case regret of
the remaining actions is necessarily significantly larger than the well-known uniform worst-case bound of $\Theta(\sqrt{Kn})$. This unfortunate result is in stark contrast to the experts setting for which there
exist algorithms that suffer constant regret with respect to a single expert at almost no cost for the
remainder. Surprisingly, the best achievable (non-uniform) worst-case bounds are determined up to
a permutation almost entirely by the value of the smallest worst-case regret.
There are some interesting open questions. Most notably, in the adversarial setting I am not sure if
the upper or lower bound is tight (or neither). It would also be nice to know if the constant factors
can be determined exactly asymptotically, but so far this has not been done even in the uniform
case. For the stochastic setting it is natural to ask if the OC-UCB algorithm can also be modified.
Intuitively one would expect this to be possible, but it would require re-working the very long proof.
Acknowledgements
I am indebted to the very careful reviewers who made many suggestions for improving this paper.
Thank you!
References
Shipra Agrawal and Navin Goyal. Further optimal regret bounds for Thompson sampling. In Proceedings of International Conference on Artificial Intelligence and Statistics (AISTATS), 2012a.
Shipra Agrawal and Navin Goyal. Analysis of Thompson sampling for the multi-armed bandit problem. In Proceedings of Conference on Learning Theory (COLT), 2012b.
Jean-Yves Audibert and Sébastien Bubeck. Minimax policies for adversarial and stochastic bandits. In COLT, pages 217–226, 2009.
Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Foundations of Computer Science, 1995. Proceedings., 36th Annual Symposium on, pages 322–331. IEEE, 1995.
Peter Auer, Nicolò Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47:235–256, 2002.
Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret Analysis of Stochastic and Nonstochastic Multiarmed Bandit Problems. Foundations and Trends in Machine Learning. Now Publishers Incorporated, 2012. ISBN 9781601986269.
Olivier Cappé, Aurélien Garivier, Odalric-Ambrym Maillard, Rémi Munos, and Gilles Stoltz. Kullback–Leibler upper confidence bounds for optimal sequential allocation. The Annals of Statistics, 41(3):1516–1541, 2013.
Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
Eyal Even-Dar, Michael Kearns, Yishay Mansour, and Jennifer Wortman. Regret to the best vs. regret to the average. Machine Learning, 72(1-2):21–37, 2008.
Marcus Hutter and Jan Poland. Adaptive online prediction by following the perturbed leader. The Journal of Machine Learning Research, 6:639–660, 2005.
Michael Kapralov and Rina Panigrahy. Prediction strategies without loss. In Advances in Neural Information Processing Systems, pages 828–836, 2011.
Wouter M. Koolen. The Pareto regret frontier. In Advances in Neural Information Processing Systems, pages 863–871, 2013.
Tze Leung Lai and Herbert Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
Tor Lattimore. Optimally confident UCB: Improved regret for finite-armed bandits. Technical report, 2015. URL http://arxiv.org/abs/1507.07880.
Che-Yu Liu and Lihong Li. On the prior sensitivity of Thompson sampling. arXiv preprint arXiv:1506.03378, 2015.
Amir Sani, Gergely Neu, and Alessandro Lazaric. Exploiting easy data in online optimization. In Advances in Neural Information Processing Systems, pages 810–818, 2014.
William Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3/4):285–294, 1933.
Observations
Yifan Wu1
1
Andr?as Gy?orgy2
Dept. of Computing Science
University of Alberta
{ywu12,szepesva}@ualberta.ca
2
Csaba Szepesv?ari1
Dept. of Electrical and Electronic Engineering
Imperial College London
[email protected]
Abstract
We consider a sequential learning problem with Gaussian payoffs and side observations: after selecting an action i, the learner receives information about the
payoff of every action j in the form of Gaussian observations whose mean is the
same as the mean payoff, but the variance depends on the pair (i, j) (and may be
infinite). The setup allows a more refined information transfer from one action to
another than previous partial monitoring setups, including the recently introduced
graph-structured feedback case. For the first time in the literature, we provide
non-asymptotic problem-dependent lower bounds on the regret of any algorithm,
which recover existing asymptotic problem-dependent lower bounds and finitetime minimax lower bounds available in the literature. We also provide algorithms
that achieve the problem-dependent lower bound (up to some universal constant
factor) or the minimax lower bounds (up to logarithmic factors).
1 Introduction
Online learning in stochastic environments is a sequential decision problem where in each time step
a learner chooses an action from a given finite set, observes some random feedback and receives
a random payoff. Several feedback models have been considered in the literature: The simplest is
the full information case where the learner observes the payoff of all possible actions at the end
of every round. A popular setup is the case of bandit feedback, where the learner only observes
its own payoff and receives no information about the payoff of other actions [1]. Recently, several
papers considered a more refined setup, called graph-structured feedback, that interpolates between
the full-information and the bandit case: here the feedback structure is described by a (possibly
directed) graph, and choosing an action reveals the payoff of all actions that are connected to the
selected one, including the chosen action itself. This problem, motivated for example by social
networks, has been studied extensively in both the adversarial [2, 3, 4, 5] and the stochastic cases
[6, 7]. However, most algorithms presented heavily depend on the self-observability assumption,
that is, that the payoff of the selected action can be observed. Removing this self-loop assumption
leads to the so-called partial monitoring case [5]. In the absolutely general partial monitoring setup
the learner receives some general feedback that depends on its choice (and the environment), with
some arbitrary (but known) dependence [8, 9]. While the partial monitoring setup covers all other
problems, its analysis has concentrated on the finite case where both the set of actions and the set
of feedback signals are finite [8, 9], which is in contrast to the standard full information and bandit
settings where the feedback is typically assumed to be real-valued. To our knowledge there are only
a few exceptions to this case: in [5], graph-structured feedback is considered without the self-loop
assumption, while continuous action spaces are considered in [10] and [11] with special feedback
structure (linear and censored observations, resp.).
In this paper we consider a generalization of the graph-structured feedback model that can also be
viewed as a general partial monitoring model with real-valued feedback. We assume that upon selecting an action $i$ the learner can observe a random variable $X_{ij}$ for each action $j$ whose mean is the same as the payoff of $j$, but its variance $\sigma_{ij}^2$ depends on the pair $(i,j)$. For simplicity, throughout the paper we assume that all the payoffs and the $X_{ij}$ are Gaussian. While in the graph-structured feedback case one either has an observation on an action or not, and the observation always gives the same amount of information, our model is more refined: depending on the value of $\sigma_{ij}^2$, the information can be of different quality. For example, if $\sigma_{ij}^2 = \infty$, trying action $i$ gives no information about action $j$. In general, for any $\sigma_{ij}^2 < \infty$, the value of the information depends on the time horizon $T$ of the problem: when $\sigma_{ij}^2$ is large relative to $T$ (and the payoff differences of the actions), essentially no information is received, while a small variance results in useful observations.
After defining the problem formally in Section 2, we provide non-asymptotic problem-dependent
lower bounds in Section 3, which depend on the distribution of the observations through their mean
payoffs and variances. To our knowledge, these are the first such bounds presented for any stochastic partial monitoring problem beyond the full-information setting: previous work either presented
asymptotic problem-dependent lower bounds (e.g., [12, 7]), or finite-time minimax bounds (e.g.,
[9, 3, 5]). Our bounds can recover all previous bounds up to some universal constant factors not depending on the problem. In Section 4, we present two algorithms with finite-time performance
guarantees for the case of graph-structured feedback without the self-observability assumption.
While due to their complicated forms it is hard to compare our finite-time upper and lower bounds,
we show that our first algorithm achieves the asymptotic problem-dependent lower bound up to problem-independent multiplicative factors. Regarding the minimax regret, the hardness ($\tilde\Theta(T^{1/2})$ or $\tilde\Theta(T^{2/3})$ regret¹) of partial monitoring problems is characterized by their global/local observability property [9] or, in case of the graph-structured feedback model, by their strong/weak observability property [5]. In the same section we present another algorithm that achieves the minimax regret (up to logarithmic factors) under both strong and weak observability, and achieves an $O(\log^{3/2} T)$
problem-dependent regret. Earlier results for the stochastic graph-structured feedback problems
[6, 7] provided only asymptotic problem-dependent lower bounds and performance bounds that did
not match the asymptotic lower bounds or the minimax rate up to constant factors. A related combinatorial partial monitoring problem with linear feedback was considered in [10], where the presented algorithm was shown to satisfy both an $\tilde O(T^{2/3})$ minimax bound and a logarithmic problem-dependent bound. However, the dependence on the problem structure in that paper is not optimal, and, in particular, the paper does not achieve the $O(\sqrt{T})$ minimax bound for easy problems. Finally, we
draw conclusions and consider some interesting future directions in Section 5. Proofs can be found
in the long version of this paper [13].
2 Problem Formulation
Formally, we consider an online learning problem with Gaussian payoffs and side observations:
Suppose a learner has to choose from K actions in every round. When choosing an action, the
learner receives a random payoff and also some side observations corresponding to other actions.
More precisely, each action $i \in [K] = \{1, \ldots, K\}$ is associated with some parameter $\theta_i$, and the payoff $Y_{t,i}$ of action $i$ in round $t$ is a normally distributed random variable with mean $\theta_i$ and variance $\sigma_{ii}^2$, while the learner observes a $K$-dimensional Gaussian random vector $X_{t,i}$ whose $j$th coordinate is a normal random variable with mean $\theta_j$ and variance $\sigma_{ij}^2$ (we assume $0 < \sigma_{ij} \le \infty$), and the coordinates of $X_{t,i}$ are independent of each other. We assume the following: (i) the random variables $(X_t, Y_t)_t$ are independent for all $t$; (ii) the parameter vector $\theta$ is unknown to the learner but the variance matrix $\Sigma = (\sigma_{ij}^2)_{i,j \in [K]}$ is known in advance; (iii) $\theta \in [0, D]^K$ for some $D > 0$; (iv) $\min_{i \in [K]} \sigma_{ij} \le \sigma < \infty$ for all $j \in [K]$, that is, the expected payoff of each action can be observed.

The goal of the learner is to maximize its payoff or, in other words, minimize the expected regret
$$ R_T = T \max_{i \in [K]} \theta_i - \sum_{t=1}^{T} \mathbb{E}\left[Y_{t,i_t}\right], $$
where $i_t$ is the action selected by the learner in round $t$. Note that the problem encompasses several
common feedback models considered in online learning (modulo the Gaussian assumption), and
makes it possible to examine more delicate observation structures:
¹ Tilde denotes order up to logarithmic factors.
Full information: $\sigma_{ij} = \sigma_j < \infty$ for all $i, j \in [K]$.
Bandit: $\sigma_{ii} < \infty$ and $\sigma_{ij} = \infty$ for all $i \ne j \in [K]$.
Partial monitoring with feedback graphs [5]: Each action $i \in [K]$ is associated with an observation set $S_i \subseteq [K]$ such that $\sigma_{ij} = \sigma_j < \infty$ if $j \in S_i$ and $\sigma_{ij} = \infty$ otherwise.

We will talk about the uniform variance version of these problems when all the finite $\sigma_{ij}$ are equal to some $\sigma > 0$. Some interesting features of the problem can be seen when considering the generalized full information case, when all entries of $\Sigma$ are finite. In this case, the greedy algorithm, which estimates the payoff of each action by the average of the corresponding observed samples and selects the one with the highest average, achieves at most a constant regret for any time horizon $T$.² On the other hand, the constant can be quite large: in particular, when the variances of some observations are large relative to the gaps $d_j = \max_i \theta_i - \theta_j$, the situation is rather similar to a partial monitoring setup for a smaller, finite time horizon. In this paper we are going to analyze this problem and present algorithms and lower bounds that are able to "interpolate" between these cases and capture the characteristics of the different regimes.
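To make the observation model concrete, here is a small Python sketch of one round of the environment. This is a toy illustration of ours, not code from the paper; the specific variance matrix and the use of NaN for unobserved coordinates are assumptions of the example.

```python
import numpy as np

def play_round(theta, sigma, i, rng):
    """One round of the Gaussian side-observation model.

    theta : (K,) mean payoffs, theta[j] in [0, D]
    sigma : (K, K) observation standard deviations; sigma[i, j] = np.inf means
            that playing i reveals nothing about j
    i     : index of the chosen action
    Returns the payoff Y_{t,i} and the observation vector X_{t,i}.
    """
    K = len(theta)
    y = rng.normal(theta[i], sigma[i, i])   # payoff of the chosen action
    x = np.full(K, np.nan)
    for j in range(K):
        if np.isfinite(sigma[i, j]):        # only the observable coordinates
            x[j] = rng.normal(theta[j], sigma[i, j])
    return y, x

# Example: 3 actions; action 0 is "bandit-like", action 1 also observes action 2.
theta = np.array([0.3, 0.5, 0.2])
sigma = np.array([[1.0, np.inf, np.inf],
                  [np.inf, 1.0, 2.0],
                  [np.inf, np.inf, 1.0]])
rng = np.random.default_rng(1)
print(play_round(theta, sigma, 1, rng))
```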
2.1 Notation
Define $C_T^{\mathbb{N}} = \{c \in \mathbb{N}^K : c_i \ge 0,\ \sum_{i \in [K]} c_i = T\}$ and let $N(T) \in C_T^{\mathbb{N}}$ denote the numbers of plays over all actions taken by some algorithm in $T$ rounds. Also let $C_T^{\mathbb{R}} = \{c \in \mathbb{R}^K : c_i \ge 0,\ \sum_{i \in [K]} c_i = T\}$. We will consider environments with different expected payoff vectors $\theta \in \Theta$, but the variance matrix $\Sigma$ will be fixed. Therefore, an environment can be specified by $\theta$; oftentimes, we will explicitly denote the dependence of different quantities on $\theta$: the probability and expectation functionals under environment $\theta$ will be denoted by $\Pr(\cdot\,; \theta)$ and $\mathbb{E}[\cdot\,; \theta]$, respectively. Furthermore, let $i_j(\theta)$ be the $j$th best action (ties are broken arbitrarily, i.e., $\theta_{i_1} \ge \theta_{i_2} \ge \cdots \ge \theta_{i_K}$) and define $d_i(\theta) = \theta_{i_1(\theta)} - \theta_i$ for any $i \in [K]$. Then the expected regret under environment $\theta$ is $R_T(\theta) = \sum_{i \in [K]} \mathbb{E}[N_i(T); \theta]\, d_i(\theta)$. For any action $i \in [K]$, let $S_i = \{j \in [K] : \sigma_{ij} < \infty\}$ denote the set of actions whose parameter $\theta_j$ is observable by choosing action $i$. Throughout the paper, $\log$ denotes the natural logarithm and $\Delta_n$ denotes the $n$-dimensional simplex for any positive integer $n$.
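For instance, the regret decomposition above is trivial to evaluate from expected play counts; a minimal sketch (our illustration, with made-up numbers):

```python
import numpy as np

def expected_regret(theta, expected_counts):
    """R_T(theta) = sum_i E[N_i(T)] * d_i(theta), where d_i = max_j theta_j - theta_i."""
    gaps = theta.max() - theta          # the gaps d_i(theta)
    return float(np.dot(expected_counts, gaps))

print(expected_regret(np.array([0.5, 0.3, 0.2]),
                      np.array([80.0, 15.0, 5.0])))   # T = 100 plays in total
```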
3 Lower Bounds
The aim of this section is to derive generic, problem-dependent lower bounds to the regret, which
are also able to provide minimax lower bounds. The hardness in deriving such bounds is that for any
fixed $\theta$ and $\Sigma$, the dumb algorithm that always selects $i_1(\theta)$ achieves zero regret (obviously, the regret of this algorithm is linear for any $\theta'$ with $i_1(\theta) \ne i_1(\theta')$), so in general it is not possible to give a
lower bound for a single instance. When deriving asymptotic lower bounds, this is circumvented by
only considering consistent algorithms whose regret is sub-polynomial for any problem [12]. However, this asymptotic notion of consistency is not applicable to finite-horizon problems. Therefore,
following ideas of [14], for any problem we create a family of related problems (by perturbing the
mean payoffs) such that if the regret of an algorithm is ?too small? in one of the problems than it
will be ?large? in another one, while it still depends on the original problem parameters (note that
deriving minimax bounds usually only involves perturbing certain special ?worst-case? problems).
As a warm-up, and to show the reader what form of a lower bound can be expected, first we present
an asymptotic lower bound for the uniform-variance version of the problem of partial monitoring
with feedback graphs. The result presented below is an easy consequence of [12], hence its proof
is omitted. An algorithm is said to be consistent if $\sup_{\theta \in \Theta} R_T(\theta) = o(T^\alpha)$ for every $\alpha > 0$. Now assume for simplicity that there is a unique optimal action in environment $\theta$, that is, $\theta_{i_1(\theta)} > \theta_i$ for all $i \ne i_1$, and let
$$ C_\theta = \Bigg\{ c \in [0, \infty)^K \,:\, \sum_{i : j \in S_i} c_i \ge \frac{2\sigma^2}{d_j(\theta)^2} \ \text{ for all } j \ne i_1(\theta), \quad \sum_{i : i_1(\theta) \in S_i} c_i \ge \frac{2\sigma^2}{d_{i_2(\theta)}(\theta)^2} \Bigg\}. $$
² To see this, notice that the error of identifying the optimal action decays exponentially with the number of rounds.
Then, for any consistent algorithm and for any $\theta$ with $\theta_{i_1(\theta)} > \theta_{i_2(\theta)}$,
$$ \liminf_{T \to \infty} \frac{R_T(\theta)}{\log T} \ge \inf_{c \in C_\theta} \langle c, d(\theta) \rangle. \tag{1} $$
Note that the right hand side of (1) is 0 for any generalized full information problem (recall that
the expected regret is bounded by a constant for such problems), but it is a finite positive number
for other problems. Similar bounds have been provided in [6, 7] for graph-structured feedback with
self-observability (under non-Gaussian assumptions on the payoffs). In the following we derive
finite time lower bounds that are also able to replicate this result.
3.1 A General Finite Time Lower Bound
First we derive a general lower bound. For any $\theta, \theta' \in \Theta$ and $q \in \Delta_{|C_T^{\mathbb{N}}|}$, define $f(\theta, q, \theta')$ as
$$ f(\theta, q, \theta') = \inf_{q' \in \Delta_{|C_T^{\mathbb{N}}|}} \sum_{a \in C_T^{\mathbb{N}}} q'(a)\, \langle a, d(\theta') \rangle $$
such that
$$ \sum_{a \in C_T^{\mathbb{N}}} q(a) \log \frac{q(a)}{q'(a)} \;\le\; \sum_{i \in [K]} I_i(\theta, \theta') \sum_{a \in C_T^{\mathbb{N}}} q(a)\, a_i, $$
where $I_i(\theta, \theta')$ is the KL-divergence between $X_{t,i}(\theta)$ and $X_{t,i}(\theta')$, given by $I_i(\theta, \theta') = \mathrm{KL}(X_{t,i}(\theta); X_{t,i}(\theta')) = \sum_{j=1}^{K} (\theta_j - \theta_j')^2 / 2\sigma_{ij}^2$. Clearly, $f(\theta, q, \theta')$ is a lower bound on $R_T(\theta')$ for any algorithm for which the distribution of $N(T)$ is $q$. The intuition behind the allowed values of $q'$ is that we want $q'$ to be as similar to $q$ as the environments $\theta$ and $\theta'$ look to the algorithm (through the feedback $(X_{t,i_t})_t$). Now define
$$ g(\theta, c) = \inf_{q \in \Delta_{|C_T^{\mathbb{N}}|}} \sup_{\theta' \in \Theta} f(\theta, q, \theta'), \qquad \text{such that} \quad \sum_{a \in C_T^{\mathbb{N}}} q(a)\, a = c \in C_T^{\mathbb{R}}. $$
$g(\theta, c)$ is a lower bound on the worst-case regret of any algorithm with $\mathbb{E}[N(T); \theta] = c$. Finally, for any $x > 0$, define
$$ b(\theta, x) = \inf_{c \in C_{\theta,x}} \langle c, d(\theta) \rangle \qquad \text{where} \qquad C_{\theta,x} = \{ c \in C_T^{\mathbb{R}} : g(\theta, c) \le x \}. $$
Here $C_{\theta,x}$ contains all the possible values of $\mathbb{E}[N(T); \theta]$ that can be achieved by some algorithm whose lower bound $g$ on the worst-case regret is smaller than $x$. These definitions give rise to the following theorem:
Theorem 1. Given any $B > 0$, for any algorithm such that $\sup_{\theta' \in \Theta} R_T(\theta') \le B$, we have, for any environment $\theta \in \Theta$, $R_T(\theta) \ge b(\theta, B)$.
Remark 2. If $B$ is picked as the minimax value of the problem given the observation structure $\Sigma$, the theorem states that for any minimax optimal algorithm the expected regret for a certain $\theta$ is lower bounded by $b(\theta, B)$.
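The information term $I_i(\theta, \theta')$ above is just a sum of Gaussian KL divergences and is trivial to compute; a quick sketch (our illustration, with arbitrary example values):

```python
import numpy as np

def info_term(theta, theta_prime, sigma, i):
    """I_i(theta, theta') = sum_j (theta_j - theta'_j)^2 / (2 sigma_ij^2).
    Coordinates with sigma_ij = inf contribute zero: they carry no information."""
    diff2 = (np.asarray(theta) - np.asarray(theta_prime)) ** 2
    contrib = diff2 / (2.0 * np.asarray(sigma[i]) ** 2)   # x / inf == 0.0
    return float(np.sum(np.where(np.isfinite(sigma[i]), contrib, 0.0)))

sigma = np.array([[1.0, 2.0], [np.inf, 1.0]])
print(info_term([0.5, 0.2], [0.5, 0.4], sigma, 0))   # only action 2's gap matters
```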
3.2 A Relaxed Lower Bound
Now we introduce a relaxed but more interpretable version of the finite-time lower bound of Theorem 1, which can be shown to match the asymptotic lower bound (1). The idea of deriving the lower
bound is the following: instead of ensuring that the algorithm performs well in the most adversarial environment $\theta'$, we consider a set of "bad" environments and make sure that the algorithm performs well on them, where each "bad" environment $\theta'$ is the most adversarial one obtained by perturbing only one coordinate $\theta_i$ of $\theta$.

However, in order to get meaningful finite-time lower bounds, we need to perturb $\theta$ more carefully than in the case of asymptotic lower bounds. The reason for this is that for any sub-optimal action $i$, if $\theta_i$ is very close to $\theta_{i_1(\theta)}$, then $\mathbb{E}[N_i(T); \theta]$ is not necessarily small for a good algorithm for $\theta$. If it is small, one can increase $\theta_i$ to obtain an environment $\theta'$ where $i$ is the best action and the algorithm performs badly; otherwise, when $\mathbb{E}[N_i(T); \theta]$ is large, we need to decrease $\theta_i$ to make the algorithm perform badly in $\theta'$. Moreover, when perturbing $\theta_i$ to be better than $\theta_{i_1(\theta)}$, we cannot make $\theta_i' - \theta_{i_1(\theta)}$ arbitrarily small as in asymptotic lower-bound arguments, because when $\theta_i' - \theta_{i_1(\theta)}$ is small, a large $\mathbb{E}[N_{i_1(\theta)}(T); \theta']$, and not necessarily a large $\mathbb{E}[N_i(T); \theta']$, may also lead to low finite-time regret in $\theta'$.
3.2.1 Formulation
We start with defining a subset of $C_T^{\mathbb{R}}$ that contains the set of "reasonable" values for $\mathbb{E}[N(T); \theta]$. For any $\theta \in \Theta$ and $B > 0$, let
$$ C'_{\theta,B} = \left\{ c \in C_T^{\mathbb{R}} \,:\, \sum_{j=1}^{K} \frac{c_j}{\sigma_{ji}^2} \ge m_i(\theta, B) \ \text{ for all } i \in [K] \right\}, $$
where $m_i$, the minimum sample size required to distinguish between $\theta_i$ and its worst-case perturbation, is defined as follows. For $i \ne i_1$, if $\theta_{i_1} = D$,³ then $m_i(\theta, B) = 0$. Otherwise let
$$ m_{i,+}(\theta, B) = \max_{\epsilon \in (d_i(\theta),\, D - \theta_i]} \frac{1}{\epsilon^2} \log \frac{T \epsilon (\epsilon - d_i(\theta))}{8B}, \qquad m_{i,-}(\theta, B) = \max_{\epsilon \in (0,\, \theta_i]} \frac{1}{\epsilon^2} \log \frac{T \epsilon (\epsilon + d_i(\theta))}{8B}, $$
and let $\epsilon_{i,+}$ and $\epsilon_{i,-}$ denote the values of $\epsilon$ achieving the maximum in $m_{i,+}$ and $m_{i,-}$, respectively. Then, define
$$ m_i(\theta, B) = \begin{cases} m_{i,+}(\theta, B) & \text{if } d_i(\theta) \ge 4B/T; \\ \min\{ m_{i,+}(\theta, B),\, m_{i,-}(\theta, B) \} & \text{if } d_i(\theta) < 4B/T. \end{cases} $$
For $i = i_1$, set $m_{i_1}(\theta, B) = 0$ if $\theta_{i_2(\theta)} = 0$; else the definitions for $i \ne i_1$ change by replacing $d_i(\theta)$ with $d_{i_2(\theta)}(\theta)$ (and switching the $+$ and $-$ indices):
$$ m_{i_1(\theta),-}(\theta, B) = \max_{\epsilon \in (d_{i_2(\theta)}(\theta),\, \theta_{i_1(\theta)}]} \frac{1}{\epsilon^2} \log \frac{T \epsilon (\epsilon - d_{i_2(\theta)}(\theta))}{8B}, \qquad m_{i_1(\theta),+}(\theta, B) = \max_{\epsilon \in (0,\, D - \theta_{i_1(\theta)}]} \frac{1}{\epsilon^2} \log \frac{T \epsilon (\epsilon + d_{i_2(\theta)}(\theta))}{8B}, $$
where $\epsilon_{i_1(\theta),-}$ and $\epsilon_{i_1(\theta),+}$ are the maximizers in the above expressions. Then, define
$$ m_{i_1(\theta)}(\theta, B) = \begin{cases} m_{i_1(\theta),-}(\theta, B) & \text{if } d_{i_2(\theta)}(\theta) \ge 4B/T; \\ \min\{ m_{i_1(\theta),+}(\theta, B),\, m_{i_1(\theta),-}(\theta, B) \} & \text{if } d_{i_2(\theta)}(\theta) < 4B/T. \end{cases} $$
Note that $\epsilon_{i,+}$ and $\epsilon_{i,-}$ can be expressed in closed form using the Lambert $W : \mathbb{R} \to \mathbb{R}$ function satisfying $W(x) e^{W(x)} = x$: for any $i \ne i_1(\theta)$,
$$ \epsilon_{i,+} = \min\left\{ D - \theta_i,\; 8\sqrt{e}\, B\, e^{W\left( \frac{d_i(\theta) T}{16 \sqrt{e}\, B} \right)} \big/ T + d_i(\theta) \right\}, \qquad \epsilon_{i,-} = \min\left\{ \theta_i,\; 8\sqrt{e}\, B\, e^{W\left( \frac{d_i(\theta) T}{16 \sqrt{e}\, B} \right)} \big/ T - d_i(\theta) \right\}, \tag{2} $$
and similar results hold for $i = i_1$, as well.
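These closed forms are straightforward to evaluate numerically; below is a hedged sketch using SciPy's Lambert W (our illustration; the parameter values are arbitrary, not taken from the paper):

```python
import numpy as np
from scipy.special import lambertw

def eps_plus(theta_i, d_i, D, T, B):
    """epsilon_{i,+} from Eq. (2): the worst-case upward perturbation size."""
    w = lambertw(d_i * T / (16 * np.sqrt(np.e) * B)).real
    return min(D - theta_i, 8 * np.sqrt(np.e) * B * np.exp(w) / T + d_i)

def eps_minus(theta_i, d_i, T, B):
    """epsilon_{i,-} from Eq. (2): the worst-case downward perturbation size."""
    w = lambertw(d_i * T / (16 * np.sqrt(np.e) * B)).real
    return min(theta_i, 8 * np.sqrt(np.e) * B * np.exp(w) / T - d_i)

print(eps_plus(theta_i=0.3, d_i=0.2, D=1.0, T=10_000, B=100.0))
print(eps_minus(theta_i=0.3, d_i=0.2, T=10_000, B=100.0))
```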
Now we can give the main result of this section, a simplified version of Theorem 1:
Corollary 3. Given $B > 0$, for any algorithm such that $\sup_{\theta \in \Theta} R_T(\theta) \le B$, we have, for any environment $\theta \in \Theta$, $R_T(\theta) \ge b'(\theta, B) = \min_{c \in C'_{\theta,B}} \langle c, d(\theta) \rangle$.
Next we compare this bound to existing lower bounds.
3.2.2 Comparison to the Asymptotic Lower Bound (1)
Now we will show that our finite time lower bound in Corollary 3 matches the asymptotic lower bound in (1) up to some constants. Pick $B = \alpha T^\beta$ for some $\alpha > 0$ and $0 < \beta < 1$. For simplicity, we only consider $\theta$ which is "away from" the boundary of $\Theta$ (so that the minima in (2) are achieved by the second terms) and has a unique optimal action. Then, for $i \ne i_1(\theta)$, it is easy to show that $\epsilon_{i,+} = d_i(\theta)\big/\big(2W\!\big(d_i(\theta) T^{1-\beta}/(16\alpha\sqrt{e})\big)\big) + d_i(\theta)$ by (2) and $m_i(\theta, B) = \frac{1}{\epsilon_{i,+}^2} \log \frac{T \epsilon_{i,+} (\epsilon_{i,+} - d_i(\theta))}{8B}$ for large enough $T$. Then, using the fact that $\log x - \log\log x \le W(x) \le \log x$ for $x \ge e$, it follows that $\lim_{T \to \infty} m_i(\theta, B)/\log T = (1-\beta)/d_i^2(\theta)$, and similarly we can show that $\lim_{T \to \infty} m_{i_1(\theta)}(\theta, B)/\log T = (1-\beta)/d_{i_2(\theta)}^2(\theta)$. Thus, $C'_{\theta,B} \supseteq \frac{(1-\beta)}{2} \log T\; C_\theta$ under the assumptions of (1), as $T \to \infty$. This implies that Corollary 3 matches the asymptotic lower bound of (1) up to a factor of $(1-\beta)/2$.

³ Recall that $\theta_i \in [0, D]$.
3.2.3 Comparison to Minimax Bounds
Now we will show that our $\theta$-dependent finite-time lower bound reproduces the minimax regret bounds of [2] and [5], except for the generalized full information case.

The minimax bounds depend on the following notion of observability: An action $i$ is strongly observable if either $i \in S_i$ or $[K] \setminus \{i\} \subseteq \{j : i \in S_j\}$. $i$ is weakly observable if it is not strongly observable but there exists $j$ such that $i \in S_j$ (note that we already assumed the latter condition for all $i$). Let $\mathcal{W}(\Sigma)$ be the set of all weakly observable actions. $\Sigma$ is said to be strongly observable if $\mathcal{W}(\Sigma) = \emptyset$. $\Sigma$ is weakly observable if $\mathcal{W}(\Sigma) \ne \emptyset$.

Next we will define two key quantities introduced by [2] and [5] that characterize the hardness of a problem instance with feedback structure $\Sigma$: A set $A \subseteq [K]$ is called an independent set if for any $i \in A$, $S_i \cap A \subseteq \{i\}$. The independence number $\alpha(\Sigma)$ is defined as the cardinality of the largest independent set. For any pair of subsets $A, A' \subseteq [K]$, $A$ is said to be dominating $A'$ if for any $j \in A'$ there exists $i \in A$ such that $j \in S_i$. The weak domination number $\delta(\Sigma)$ is defined as the cardinality of the smallest set that dominates $\mathcal{W}(\Sigma)$.
Corollary 4. Assume that $\sigma_{ij} = \infty$ for some $i, j \in [K]$, that is, we are not in the generalized full information case. Then,

(i) if $\Sigma$ is strongly observable, with $B = \gamma \sigma \sqrt{\alpha(\Sigma) T}$ for some $\gamma > 0$, we have $\sup_{\theta \in \Theta} b'(\theta, B) \ge \frac{\sigma \sqrt{\alpha(\Sigma) T}}{64 e \gamma}$ for $T \ge 64 e^2 \gamma^2 \sigma^2 \alpha(\Sigma)^3 / D^2$.

(ii) If $\Sigma$ is weakly observable, with $B = \gamma (\delta(\Sigma) D)^{1/3} (\sigma T)^{2/3} \log^{-2/3} K$ for some $\gamma > 0$, we have $\sup_{\theta \in \Theta} b'(\theta, B) \ge \frac{(\delta(\Sigma) D)^{1/3} (\sigma T)^{2/3} \log^{-2/3} K}{51200 e^2 \gamma^2}$.

Remark 5. In Corollary 4, picking $\gamma = \frac{1}{8\sqrt{e}}$ for strongly observable $\Sigma$ and $\gamma = \frac{1}{73}$ for weakly observable $\Sigma$ gives formal minimax lower bounds: (i) If $\Sigma$ is strongly observable, for any algorithm we have $\sup_{\theta \in \Theta} R_T(\theta) \ge \frac{\sigma \sqrt{\alpha(\Sigma) T}}{8\sqrt{e}}$ for $T \ge e \sigma^2 \alpha(\Sigma)^3 / D^2$. (ii) If $\Sigma$ is weakly observable, for any algorithm we have $\sup_{\theta \in \Theta} R_T(\theta) \ge \frac{(\delta(\Sigma) D)^{1/3} (\sigma T)^{2/3}}{73 \log^{2/3} K}$.

4 Algorithms
In this section we present two algorithms and their finite-time analysis for the uniform variance version of our problem (where $\sigma_{ij}$ is either $\sigma$ or $\infty$). The upper bound for the first algorithm matches the asymptotic lower bound in (1) up to constants. The second algorithm achieves the minimax lower bounds of Corollary 4 up to logarithmic factors, as well as $O(\log^{3/2} T)$ problem-dependent regret. In the problem-dependent upper bounds of both algorithms, we assume that the optimal action is unique, that is, $d_{i_2(\theta)}(\theta) > 0$.

4.1 An Asymptotically Optimal Algorithm

Let $c(\theta) = \operatorname{argmin}_{c \in C_\theta} \langle c, d(\theta) \rangle$; note that increasing $c_{i_1(\theta)}(\theta)$ does not change the value of $\langle c, d(\theta) \rangle$ (since $d_{i_1(\theta)}(\theta) = 0$), so we take the minimum value of $c_{i_1(\theta)}(\theta)$ in this definition. Let $n_i(t) = \sum_{s=1}^{t-1} \mathbb{I}\{i \in S_{i_s}\}$ be the number of observations for action $i$ before round $t$ and $\hat\theta_{t,i}$ be the empirical estimate of $\theta_i$ based on the first $n_i(t)$ observations. Let $N_i(t) = \sum_{s=1}^{t-1} \mathbb{I}\{i_s = i\}$ be the number of plays of action $i$ before round $t$. Note that this definition of $N_i(t)$ is different from that in the previous sections since it excludes round $t$.
Algorithm 1
1: Inputs: $\sigma$, $\alpha$, $\psi : \mathbb{N} \to [0, \infty)$.
2: For $t = 1, \ldots, K$, observe each action $i$ at least once by playing $i_t$ such that $t \in S_{i_t}$.
3: Set exploration count $n_e(K+1) = 0$.
4: for $t = K+1, K+2, \ldots$ do
5:   if $\frac{N(t)}{4\alpha \log t} \in C_{\hat\theta_t}$ then
6:     Play $i_t = i_1(\hat\theta_t)$.
7:     Set $n_e(t+1) = n_e(t)$.
8:   else
9:     if $\min_{i \in [K]} n_i(t) < \psi(n_e(t))/K$ then
10:      Play $i_t$ such that $\operatorname{argmin}_{i \in [K]} n_i(t) \in S_{i_t}$.
11:    else
12:      Play $i_t$ such that $N_{i_t}(t) < c_{i_t}(\hat\theta_t)\, 4\alpha \log t$.
13:    end if
14:    Set $n_e(t+1) = n_e(t) + 1$.
15:  end if
16: end for
Our first algorithm is presented in Algorithm 1. The main idea, coming from [15], is that by forcing exploration over all actions, the solution $c(\theta)$ of the linear program can be well approximated while paying a constant price. This solves the main difficulty that, without getting enough observations on each action, we may not have good enough estimates for $d(\theta)$ and $c(\theta)$. One advantage of our algorithm compared to that of [15] is that we use a nondecreasing, sublinear exploration schedule $\psi(n)$ ($\psi : \mathbb{N} \to [0, \infty)$) instead of a constant rate $\psi(n) = \epsilon n$. This resolves the problem that, to achieve asymptotically optimal performance, some parameter of the algorithm needs to be chosen according to $d_{\min}(\theta)$ as in [15]. The expected regret of Algorithm 1 is upper bounded as follows:
Theorem 6. For any $\theta \in \Theta$, $\epsilon > 0$, $\alpha > 2$ and any non-decreasing $\psi(n)$ that satisfies $0 \le \psi(n) \le n/2$ and $\psi(m+n) \le \psi(m) + \psi(n)$ for $m, n \in \mathbb{N}$,
$$
\begin{aligned}
R_T(\theta) \le{} & \big(2K + 2 + 4K/(\alpha - 2)\big)\, d_{\max}(\theta) + 4K d_{\max}(\theta) \sum_{s=0}^{T} \exp\left(-\frac{\psi(s)\epsilon^2}{2K\sigma^2}\right) \\
& + 2 d_{\max}(\theta)\, \psi\Big(4\alpha \log T \sum_{i \in [K]} c_i(\theta, \epsilon) + K\Big) + 4\alpha \log T \sum_{i \in [K]} c_i(\theta, \epsilon)\, d_i(\theta),
\end{aligned}
$$
where $c_i(\theta, \epsilon) = \sup\{c_i(\theta') : |\theta_j' - \theta_j| \le \epsilon \text{ for all } j \in [K]\}$.
Further specifying $\psi(n)$ and using the continuity of $c(\cdot)$ around $\theta$, it immediately follows that Algorithm 1 achieves asymptotically optimal performance:

Corollary 7. Suppose the conditions of Theorem 6 hold. Assume, furthermore, that $\psi(n)$ satisfies $\psi(n) = o(n)$ and $\sum_{s=0}^{\infty} \exp\left(-\frac{\psi(s)\epsilon^2}{2K\sigma^2}\right) < \infty$ for any $\epsilon > 0$. Then for any $\theta$ such that $c(\theta)$ is unique,
$$ \limsup_{T \to \infty} R_T(\theta)/\log T \le 4\alpha \inf_{c \in C_\theta} \langle c, d(\theta) \rangle. $$
Note that any $\psi(n) = a n^b$ with $a \in (0, \tfrac12]$, $b \in (0, 1)$ satisfies the requirements in Theorem 6 and Corollary 7. Also note that the algorithms presented in [6, 7] do not achieve this asymptotic bound.
4.2 A Minimax Optimal Algorithm
Next we present an algorithm achieving the minimax bounds. For any $A, A' \subseteq [K]$, let $c(A, A') = \operatorname{argmax}_{c \in \Delta_{|A|}} \min_{i \in A'} \sum_{j : i \in S_j} c_j$ (ties are broken arbitrarily) and $m(A, A') = \min_{i \in A'} \sum_{j : i \in S_j} c_j(A, A')$. For any $A \subseteq [K]$ with $|A| \ge 2$, let $A^S = \{i \in A : \forall j \in A,\ i \in S_j\}$ and $A^W = A \setminus A^S$. Furthermore, let $g_{r,i}(\delta) = \sigma \sqrt{\frac{2 \log(8 K^2 r^3 / \delta)}{n_i(r)}}$, where $n_i(r) = \sum_{s=1}^{r-1} i_{s,i}$, and let $\hat\theta_{r,i}$ be the empirical estimate of $\theta_i$ based on the first $n_i(r)$ observations (i.e., the average of the samples).

The algorithm is presented in Algorithm 2. It follows a successive elimination process: it explores all possibly optimal actions (called "good actions" later) based on some confidence intervals until only one action remains. While doing exploration, the algorithm first tries to explore the good actions by only using good ones. However, due to weak observability, some good actions might have to be explored by actions that have already been eliminated. To control this exploration-exploitation trade-off, we use a sublinear function $\phi$ to control the exploration of weakly observable actions.

In the following we present high-probability bounds on the performance of the algorithm, so, with a slight abuse of notation, $R_T(\theta)$ will denote the regret without expectation in the rest of this section.
Algorithm 2
1: Inputs: $\sigma$, $\delta$.
2: Set $t_1 = 0$, $A_1 = [K]$.
3: for $r = 1, 2, \ldots$ do
4:   Let $\eta_r = \min_{1 \le s \le r,\, A^W_s \ne \emptyset} m([K], A^W_s)$ and $\phi(r) = (\sigma \eta_r t_r / D)^{2/3}$. (Define $\eta_r = 1$ if $A^W_s = \emptyset$ for all $1 \le s \le r$.)
5:   if $A^W_r \ne \emptyset$ and $\min_{i \in A^W_r} n_i(r) < \min_{i \in A^S_r} n_i(r)$ and $\min_{i \in A^W_r} n_i(r) < \phi(r)$ then
6:     Set $c_r = c([K], A^W_r)$.
7:   else
8:     Set $c_r = c(A_r, A^S_r)$.
9:   end if
10:  Play $i_r = \lceil c_r \cdot \|c_r\|_0 \rceil$ and set $t_{r+1} \leftarrow t_r + \|i_r\|_1$.
11:  $A_{r+1} \leftarrow \{ i \in A_r : \hat\theta_{r+1,i} + g_{r+1,i}(\delta) \ge \max_{j \in A_r} \hat\theta_{r+1,j} - g_{r+1,j}(\delta) \}$.
12:  if $|A_{r+1}| = 1$ then
13:    Play the only action in the remaining rounds.
14:  end if
15: end for
Theorem 8. For any $\delta \in (0, 1)$ and any $\theta \in \Theta$,
$$ R_T(\theta) \le (\delta(\Sigma) D)^{1/3} (\sigma T)^{2/3} \cdot 7\sqrt{6 \log(2KT/\delta)} + 125 \sigma^2 K^3 / D + 13 K^3 D $$
with probability at least $1 - \delta$ if $\Sigma$ is weakly observable, while
$$ R_T(\theta) \le 2KD + 80 \sigma \sqrt{\alpha(\Sigma) T \cdot 6 \log K \log \frac{2KT}{\delta}} $$
with probability at least $1 - \delta$ if $\Sigma$ is strongly observable.

Theorem 9 (Problem-dependent upper bound). For any $\delta \in (0, 1)$ and any $\theta \in \Theta$ such that the optimal action is unique, with probability at least $1 - \delta$,
$$
\begin{aligned}
R_T(\theta) \le{} & \frac{1603\, \delta(\Sigma) D \sigma^2}{d_{\min}^2(\theta)} \left(\log(2KT/\delta)\right)^{3/2} + 14 K^3 D + 125 \sigma^2 K^3 / D \\
& + 15 \left(\delta(\Sigma) D \sigma^2\right)^{1/3} \left(125 \sigma^2 / D^2 + 10\right) K^2 \left(\log(2KT/\delta)\right)^{1/2}.
\end{aligned}
$$
Remark 10. Picking $\delta = 1/T$ gives an $O(\log^{3/2} T)$ upper bound on the expected regret.
Remark 11. Note that Algorithm 2 is similar to the UCB-LP algorithm of [7], which admits a better problem-dependent upper bound (although it does not achieve it with optimal problem-dependent constants), but it does not achieve the minimax bound even under strong observability.
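The elimination rule of step 11 in Algorithm 2 is a standard confidence-interval test; a minimal sketch of that single step follows (our illustration only; the constant inside the logarithm comes from the definition of $g_{r,i}$ above, and the example numbers are made up):

```python
import numpy as np

def g(n_obs, r, K, sigma, delta):
    """Confidence radius g_{r,i}(delta) = sigma * sqrt(2 log(8 K^2 r^3 / delta) / n_i(r))."""
    return sigma * np.sqrt(2.0 * np.log(8 * K**2 * r**3 / delta) / n_obs)

def eliminate(active, theta_hat, n_obs, r, K, sigma, delta):
    """Keep the actions whose upper confidence bound reaches the best lower bound."""
    rad = g(n_obs, r, K, sigma, delta)
    lower_best = max(theta_hat[i] - rad[i] for i in active)
    return {i for i in active if theta_hat[i] + rad[i] >= lower_best}

active = {0, 1, 2}
theta_hat = np.array([0.9, 0.5, 0.1])
n_obs = np.array([40.0, 40.0, 40.0])
print(eliminate(active, theta_hat, n_obs, r=5, K=3, sigma=1.0, delta=0.05))
```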
5 Conclusions and Open Problems
We considered a novel partial-monitoring setup with Gaussian side observations, which generalizes
the recently introduced setting of graph-structured feedback, allowing finer quantification of the
observed information from one action to another. We provided non-asymptotic problem-dependent
lower bounds that imply existing asymptotic problem-dependent and non-asymptotic minimax lower
bounds (up to some constant factors) beyond the full information case. We also provided an algorithm that achieves the asymptotic problem-dependent lower bound (up to some universal constants)
and another algorithm that achieves the minimax bounds under both weak and strong observability.
However, we think this is just the beginning. For example, we currently have no algorithm that
achieves both the problem dependent and the minimax lower bounds at the same time. Also, our
upper bounds only correspond to the graph-structured feedback case. It is of great interest to go
beyond the weak/strong observability in characterizing the hardness of the problem, and provide
algorithms that can adapt to any correspondence between the mean payoffs and the variances (the
hardness is that one needs to identify suboptimal actions with good information/cost trade-off).
Acknowledgments This work was supported by the Alberta Innovates Technology Futures
through the Alberta Ingenuity Centre for Machine Learning (AICML) and NSERC. During this
work, A. György was with the Department of Computing Science, University of Alberta.
References
[1] Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.
[2] Shie Mannor and Ohad Shamir. From bandits to experts: on the value of side-observations. In Advances in Neural Information Processing Systems 24 (NIPS), pages 684–692, 2011.
[3] Noga Alon, Nicolò Cesa-Bianchi, Claudio Gentile, and Yishay Mansour. From bandits to experts: A tale of domination and independence. In Advances in Neural Information Processing Systems 26 (NIPS), pages 1610–1618, 2013.
[4] Tomáš Kočák, Gergely Neu, Michal Valko, and Rémi Munos. Efficient learning by implicit exploration in bandit problems with side observations. In Advances in Neural Information Processing Systems 27 (NIPS), pages 613–621, 2014.
[5] Noga Alon, Nicolò Cesa-Bianchi, Ofer Dekel, and Tomer Koren. Online learning with feedback graphs: beyond bandits. In Proceedings of The 28th Conference on Learning Theory (COLT), pages 23–35, 2015.
[6] Stéphane Caron, Branislav Kveton, Marc Lelarge, and Smriti Bhagat. Leveraging side observations in stochastic bandits. In Proceedings of the 28th Conference on Uncertainty in Artificial Intelligence (UAI), pages 142–151, 2012.
[7] Swapna Buccapatnam, Atilla Eryilmaz, and Ness B. Shroff. Stochastic bandits with side observations on networks. SIGMETRICS Perform. Eval. Rev., 42(1):289–300, June 2014.
[8] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, Cambridge, 2006.
[9] Gábor Bartók, Dean P. Foster, Dávid Pál, Alexander Rakhlin, and Csaba Szepesvári. Partial monitoring – classification, regret bounds, and algorithms. Mathematics of Operations Research, 39:967–997, 2014.
[10] Tian Lin, Bruno Abrahao, Robert Kleinberg, John Lui, and Wei Chen. Combinatorial partial monitoring game with linear feedback and its applications. In Proceedings of the 31st International Conference on Machine Learning (ICML), pages 901–909, 2014.
[11] Tor Lattimore, András György, and Csaba Szepesvári. On learning the optimal waiting time. In Peter Auer, Alexander Clark, Thomas Zeugmann, and Sandra Zilles, editors, Algorithmic Learning Theory, volume 8776 of Lecture Notes in Computer Science, pages 200–214. Springer International Publishing, 2014.
[12] Todd L. Graves and Tze Leung Lai. Asymptotically efficient adaptive choice of control laws in controlled Markov chains. SIAM Journal on Control and Optimization, 35(3):715–743, 1997.
[13] Yifan Wu, András György, and Csaba Szepesvári. Online learning with Gaussian payoffs and side observations. arXiv preprint arXiv:1510.08108, 2015.
[14] Lihong Li, Rémi Munos, and Csaba Szepesvári. Toward minimax off-policy value estimation. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics (AISTATS), pages 608–616, 2015.
[15] Stefan Magureanu, Richard Combes, and Alexandre Proutiere. Lipschitz bandits: Regret lower bounds and optimal algorithms. In Proceedings of The 27th Conference on Learning Theory (COLT), pages 975–999, 2014.
[16] Emilie Kaufmann, Olivier Cappé, and Aurélien Garivier. On the complexity of best arm identification in multi-armed bandit models. The Journal of Machine Learning Research, 2015. (to appear).
[17] Richard Combes and Alexandre Proutiere. Unimodal bandits: Regret lower bounds and optimal algorithms. In Proceedings of the 31st International Conference on Machine Learning (ICML), pages 521–529, 2014.
Fast Rates for Exp-concave Empirical Risk Minimization

Kfir Y. Levy (Technion, Haifa 32000, Israel; [email protected])
Tomer Koren (Technion, Haifa 32000, Israel; [email protected])
Abstract
We consider Empirical Risk Minimization (ERM) in the context of stochastic optimization with exp-concave and smooth losses, a general optimization framework that captures several important learning problems including linear and logistic regression, learning SVMs with the squared hinge-loss, portfolio selection
and more. In this setting, we establish the first evidence that ERM is able to attain fast generalization rates, and show that the expected loss of the ERM solution
in d dimensions converges to the optimal expected loss in a rate of d/n. This
rate matches existing lower bounds up to constants and improves by a log n factor
upon the state-of-the-art, which is only known to be attained by an online-to-batch
conversion of computationally expensive online algorithms.
1 Introduction
Statistical learning and stochastic optimization with exp-concave loss functions captures several
fundamental problems in statistical machine learning, which include linear regression, logistic regression, learning support-vector machines (SVMs) with the squared hinge loss, and portfolio selection, amongst others. Exp-concave functions constitute a rich class of convex functions, which is
substantially richer than its more familiar subclass of strongly convex functions.
Similarly to their strongly-convex counterparts, it is well-known that exp-concave loss functions
are amenable to fast generalization rates. Specifically, a standard online-to-batch conversion [6]
of either the Online Newton Step algorithm [8] or exponential weighting schemes [5, 8] in $d$ dimensions gives rise to a convergence rate of $d/n$, as opposed to the standard $1/\sqrt{n}$ rate of generic (Lipschitz) stochastic convex optimization. Unfortunately, the latter online methods are highly inefficient computationally-wise; e.g., the runtime complexity of the Online Newton Step algorithm scales as $d^4$ with the dimension of the problem, even in very simple optimization scenarios [13].
An alternative and widely-used learning paradigm is that of Empirical Risk Minimization (ERM),
which is often regarded as the strategy of choice due to its generality and its statistical efficiency. In
this scheme, a sample of training instances is drawn from the underlying data distribution, and the
minimizer of the sample average (or the regularized sample average) is computed. As opposed to
methods based on online-to-batch conversions, the ERM approach enables the use of any optimization procedure of choice and does not restrict one to use a specific online algorithm. Furthermore,
the ERM solution often enjoys several distribution-dependent generalization bounds in conjunction,
and thus is able to obliviously adapt to the properties of the underlying data distribution.
In the context of exp-concave functions, however, nothing is known about the generalization abilities of ERM besides the standard $1/\sqrt{n}$ convergence rate that applies to any convex losses. Surprisingly, it appears that even in the specific and extensively-studied case of linear regression with the squared loss, the state of affairs remains unsettled: this important case was recently addressed by Shamir
[19], who proved an $\Omega(d/n)$ lower bound on the convergence rate of any algorithm, and conjectured
that the rate of an ERM approach should match this lower bound.
In this paper, we explore the convergence rate of ERM for stochastic exp-concave optimization.
We show that when the exp-concave loss functions are also smooth, a slightly-regularized ERM
approach yields a convergence rate of O(d/n), which matches the lower bound of Shamir [19] up
to constants. In fact, our result shows for ERM a generalization rate tighter than the state-of-the-art
obtained by the Online Newton Step algorithm, improving upon the latter by a log n factor. Even in
the specific case of linear regression with the squared loss, our result improves by a log(n/d) factor
upon the best known fast rates provided by the Vovk-Azoury-Warmuth algorithm [3, 22].
Our results open an avenue for potential improvements to the runtime complexity of exp-concave
stochastic optimization, by permitting the use of accelerated methods for large-scale regularized
loss minimization. The latter has been the topic of an extensive research effort in recent years, and
numerous highly-efficient methods have been developed; see, e.g., Johnson and Zhang [10], ShalevShwartz and Zhang [16, 17] and the references therein.
On the technical side, our convergence analysis relies on stability arguments introduced by Bousquet
and Elisseeff [4]. We prove that the expected loss of the regularized ERM solution does not change
significantly when a single instance, picked uniformly at random from the training sample, is discarded. Then, the technique of Bousquet and Elisseeff [4] allows us to translate this average stability
property into a generalization guarantee. We remark that in all previous stability analyses that we
are aware of, stability was shown to hold uniformly over all discarded training instances, either with
probability one [4, 16] or in expectation [20]; in contrast, in the case of exp-concave functions it is
crucial to look at the average stability.
In order to bound the average stability of ERM, we make use of a localized notion of strong convexity, defined with respect to a local norm at a certain point in the optimization domain. Roughly
speaking, we show that when looking at the right norm, which is determined by the local properties
of the empirical risk at the right point, the minimizer of the empirical risk becomes stable. This
part of our analysis is inspired by recent analysis techniques of regularization-based online learning
algorithms [1], that use local norms to study the regret performance of online linear optimization
algorithms.
1.1 Related Work
The study of exp-concave loss functions was initiated in the online learning community by Kivinen
and Warmuth [12], who considered the problem of prediction with expert advice with exp-concave
losses. Later, Hazan et al. [8] considered a more general framework that allows for a continuous
decision set, and proposed the Online Newton Step (ONS) algorithm that attains a regret bound that
grows logarithmically with the number of optimization rounds. Mahdavi et al. [15] considered the
ONS algorithm in the statistical setting, and showed how it can be used to establish generalization
bounds that hold with high probability, while still keeping the fast 1/n rate.
Fast convergence rates in stochastic optimization are known to be achievable under various conditions. Bousquet and Elisseeff [4] and Shalev-Shwartz et al. [18] have shown, via a uniform stability
argument, that ERM guarantees a convergence rate of 1/n for strongly convex functions. Sridharan
et al. [21] proved a similar result, albeit using the notion of localized Rademacher complexity. For
the case of smooth and non-negative losses, Srebro et al. [20] established a 1/n rate in low-noise
conditions, i.e., when the expected loss of the best hypothesis is of order 1/n. For further discussion
of fast rates in stochastic optimization and learning, see [20] and the references therein.
2 Setup and Main Results
We consider the problem of minimizing a stochastic objective
$$ F(w) = \mathbb{E}[f(w, Z)] \tag{1} $$
over a closed and convex domain $\mathcal{W} \subseteq \mathbb{R}^d$ in $d$-dimensional Euclidean space. Here, the expectation is taken with respect to a random variable $Z$ distributed according to an unknown distribution over a parameter space $\mathcal{Z}$. Given a budget of $n$ samples $z_1, \ldots, z_n$ of the random variable $Z$, we are required to produce an estimate $\hat{w} \in \mathcal{W}$ whose expected excess loss, defined by $\mathbb{E}[F(\hat{w})] - \min_{w \in \mathcal{W}} F(w)$, is small. (Here, the expectation is with respect to the randomization of the training set $z_1, \ldots, z_n$ used to produce $\hat{w}$.)
We make the following assumptions over the loss function f. First, we assume that for any fixed parameter z ∈ Z, the function f(·, z) is α-exp-concave over the domain W for some α > 0, namely, that the function exp(−αf(·, z)) is concave over W. We will also assume that f(·, z) is β-smooth over W with respect to the Euclidean norm ‖·‖_2, which means that its gradient is β-Lipschitz with respect to the same norm:
∀ w, w′ ∈ W,   ‖∇f(w, z) − ∇f(w′, z)‖_2 ≤ β‖w − w′‖_2.   (2)
In particular, this property implies that f(·, z) is differentiable. For simplicity, and without loss of generality, we assume β ≥ 1. Finally, we assume that f(·, z) is bounded over W, in the sense that |f(w, z) − f(w′, z)| ≤ C for all w, w′ ∈ W for some C > 0.
In this paper, we analyze a regularized Empirical Risk Minimization (ERM) procedure for optimizing the stochastic objective in Eq. (1), that based on the sample z_1, ..., z_n computes
ŵ = arg min_{w∈W} F̂(w),   (3)
where
F̂(w) = (1/n) ∑_{i=1}^n f(w, z_i) + (1/n) R(w).   (4)
The function R : W → R serves as a regularizer, which is assumed to be 1-strongly-convex with respect to the Euclidean norm; for instance, one can simply choose R(w) = ½‖w‖_2². The strong convexity of R implies in particular that F̂ is also strongly convex, which ensures that the optimizer ŵ is unique. For our bounds, we will assume that |R(w) − R(w′)| ≤ B for all w, w′ ∈ W for some constant B > 0.
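To make the O(1/n) regularization concrete, the following Python sketch (our illustration, not code from the paper) solves (3)-(4) for the squared loss f(w; x, y) = ½(w·x − y)² with R(w) = ½‖w‖_2²; multiplying the averaged objective by n shows the minimizer solves (XᵀX + I)w = Xᵀy, so the effective ridge term is a constant rather than the usual O(√n) (projection onto a norm-bounded W is omitted in this sketch):

import numpy as np

def regularized_erm_squared_loss(X, y):
    # Minimize (1/n) sum_i 0.5*(w.x_i - y_i)^2 + (1/n)*0.5*||w||_2^2.
    # Scaling the objective by n gives 0.5*||Xw - y||^2 + 0.5*||w||^2,
    # whose minimizer solves (X^T X + I) w = X^T y -- an O(1) ridge term,
    # i.e. O(1/n) regularization in the averaged objective.
    n, d = X.shape
    return np.linalg.solve(X.T @ X + np.eye(d), X.T @ y)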
Our main result, which we now present, establishes a fast 1/n convergence rate for the expected excess loss of the ERM estimate ŵ given in Eq. (3).
Theorem 1. Let f : W × Z → R be a loss function defined over a closed and convex domain W ⊆ R^d, which is α-exp-concave, β-smooth and C-bounded with respect to its first argument. Let R : W → R be a 1-strongly-convex and B-bounded regularization function. Then, for the regularized ERM estimate ŵ defined in Eqs. (3) and (4) based on an i.i.d. sample z_1, ..., z_n, the expected excess loss is bounded as
E[F(ŵ)] − min_{w∈W} F(w) ≤ 24βd/(αn) + 100Cd/n + B/n = O(d/n).
In other words, the theorem states that for ensuring an expected excess loss of at most ε, a sample of size n = O(d/ε) suffices. This result improves upon the best known fast convergence rates for exp-concave functions by an O(log n) factor, and matches the lower bound of Shamir [19] for the
special case where the loss function is the squared loss. For this particular case, our result affirms
the conjecture of Shamir [19] regarding the sample complexity of ERM for the squared loss; see
Section 2.1 below for details.
It is important to note that Theorem 1 establishes a fast convergence rate with respect to the actual expected loss F itself, and not for a regularized version thereof (and in particular, not with respect to the expectation of F̂). Notably, the magnitude of the regularization we use is only O(1/n), as opposed to the O(1/√n) regularization used in standard regularized loss minimization methods (that can only give rise to a traditional O(1/√n) rate).
2.1 Results for the Squared Loss
In this section we focus on the important special case where the loss function f is the squared loss, namely, f(w; x, y) = ½(w·x − y)² where x ∈ R^d is an instance vector and y ∈ R is a target value. This case, that was extensively studied in the past, was recently addressed by Shamir [19] who gave lower bounds on the sample complexity of any learning algorithm under mild assumptions.
Shamir [19] analyzed learning with the squared loss in a setting where the domain is W = {w ∈ R^d : ‖w‖_2 ≤ B} for some constant B > 0, and the parameters distribution is supported over {x ∈ R^d : ‖x‖_2 ≤ 1} × {y ∈ R : |y| ≤ B}. It is not hard to verify that in this setup, for the squared loss we can take β = 1, α = 1/(4B²) and C = 2B². Furthermore, if we choose the standard regularizer R(w) = ½‖w‖_2², we have |R(w) − R(w′)| ≤ ½B² for all w, w′ ∈ W. As a consequence, Theorem 1 implies that the expected excess loss of the regularized ERM estimator ŵ we defined in Eq. (3) is bounded by O(B²d/n).
On the other hand, standard uniform convergence results for generalized linear functions [e.g., 11] show that, under the same conditions, ERM also enjoys an upper bound of O(B²/√n) over its expected excess risk. Overall, we conclude:
Corollary 2. For the squared loss f(w; x, y) = ½(w·x − y)² over the domain W = {w ∈ R^d : ‖w‖_2 ≤ B} with Z = {x ∈ R^d : ‖x‖_2 ≤ 1} × {y ∈ R : |y| ≤ B}, the regularized ERM estimator ŵ defined in Eqs. (3) and (4) based on an i.i.d. sample of n instances has
E[F(ŵ)] − min_{w∈W} F(w) = O(min{B²d/n, B²/√n}).
This result slightly improves, by a log(n/d) factor, upon the bound conjectured by Shamir [19] for
the ERM estimator, and matches the lower bound proved therein up to constants.1 Previous fast-rate
results for ERM that we are aware of either included excess log factors [2] or were proven under
additional distributional assumptions [14, 9]; see also the discussion in [19]. We remark that Shamir
conjectures this bound for ERM without any regularization. For the specific case of the squared loss,
it is indeed possible to obtain the same rates without regularizing; we defer details to the full version
of the paper. However, in practice, regularization has several additional benefits: it renders the ERM
optimization problem well-posed (i.e., ensures that the underlying matrix that needs to be inverted
is well-conditioned), and guarantees it has a unique minimizer.
3 Proof of Theorem 1
Our proof of Theorem 1 proceeds as follows. First, we relate the expected excess risk of the ERM estimator ŵ to its average leave-one-out stability [4]. Then, we bound this stability in terms of certain local properties of the empirical risk at the point ŵ. To introduce the average stability notion we study, we first define for each i = 1, ..., n the following empirical leave-one-out risk:
F̂_i(w) = (1/n) ∑_{j≠i} f(w, z_j) + (1/n) R(w)   (i = 1, ..., n).
Namely, F̂_i is the regularized empirical risk corresponding to the sample obtained by discarding the instance z_i. Then, for each i we let ŵ_i = arg min_{w∈W} F̂_i(w) be the ERM estimator corresponding to F̂_i. The average leave-one-out stability of ŵ is then defined as the quantity
(1/n) ∑_{i=1}^n (f(ŵ_i, z_i) − f(ŵ, z_i)).
Intuitively, the average leave-one-out stability serves as an unbiased estimator of the amount of change in the expected loss of the ERM estimator when one of the instances z_1, ..., z_n, chosen uniformly at random, is removed from the training sample. We note that looking at the average is crucial for us, and the stronger condition of (expected) uniform stability does not hold for exp-concave functions. For further discussion of the various stability notions, refer to Bousquet and Elisseeff [4].
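As an aside, this quantity is directly computable: the following sketch (our illustration, assuming the squared-loss solver from Section 2 is in scope) estimates the average leave-one-out stability by re-solving the regularized ERM with each instance removed (F̂_i has the same minimizer as ∑_{j≠i} f(·, z_j) + R(·), so the same solver applies to the reduced sample):

import numpy as np

def average_loo_stability(X, y, solve=regularized_erm_squared_loss):
    # (1/n) * sum_i [ f(w_i, z_i) - f(w, z_i) ] for the squared loss,
    # where w is the ERM on the full sample and w_i drops instance i.
    n = X.shape[0]
    w = solve(X, y)
    total = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        w_i = solve(X[mask], y[mask])
        total += 0.5 * (X[i] @ w_i - y[i]) ** 2 - 0.5 * (X[i] @ w - y[i]) ** 2
    return total / n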
Our main step in proving Theorem 1 involves bounding the average leave-one-out stability of ŵ defined in Eq. (3), which is the purpose of the next theorem.
Theorem 3 (average leave-one-out stability). For any z_1, ..., z_n ∈ Z and for ŵ_1, ..., ŵ_n and ŵ as defined above, we have
(1/n) ∑_{i=1}^n (f(ŵ_i, z_i) − f(ŵ, z_i)) ≤ 24βd/(αn) + 100Cd/n.
1 We remark that Shamir's result assumes two different bounds over the magnitude of the predictors w and the target values y, while here we assume both are bounded by the same constant B. We did not attempt to capture this refined dependence on the two different parameters.
Before proving this theorem, we first show how it can be used to obtain our main theorem. The proof follows arguments similar to those of Bousquet and Elisseeff [4] and Shalev-Shwartz et al. [18].
Proof of Theorem 1. To obtain the stated result, it is enough to upper bound the expected excess loss of ŵ_n, which is the minimizer of the regularized empirical risk over the i.i.d. sample {z_1, ..., z_{n−1}}. To this end, fix an arbitrary w* ∈ W. We first write
F(w*) + (1/n)R(w*) = E[F̂(w*)] ≥ E[F̂(ŵ)],
which holds true since ŵ is the minimizer of F̂ over W. Hence,
E[F(ŵ_n)] − F(w*) ≤ E[F(ŵ_n) − F̂(ŵ)] + (1/n)R(w*).   (5)
Next, notice that the random variables ŵ_1, ..., ŵ_n have exactly the same distribution: each is the output of regularized ERM on an i.i.d. sample of n − 1 examples. Also, notice that ŵ_i, which is the minimizer of the sample obtained by discarding the i'th example, is independent of z_i. Thus, we have
E[F(ŵ_n)] = (1/n) ∑_{i=1}^n E[F(ŵ_i)] = (1/n) ∑_{i=1}^n E[f(ŵ_i, z_i)].
Furthermore, we can write
E[F̂(ŵ)] = (1/n) ∑_{i=1}^n E[f(ŵ, z_i)] + (1/n) E[R(ŵ)].
Plugging these expressions into Eq. (5) gives a bound over the expected excess loss of ŵ_n in terms of the average stability:
E[F(ŵ_n)] − F(w*) ≤ (1/n) ∑_{i=1}^n E[f(ŵ_i, z_i) − f(ŵ, z_i)] + (1/n) E[R(w*) − R(ŵ)].
Using Theorem 3 for bounding the average stability term on the right-hand side, and our assumption that sup_{w,w′∈W} |R(w) − R(w′)| ≤ B to bound the second term, we obtain the stated bound over the expected excess loss of ŵ_n.
The remainder of the section is devoted to the proof of Theorem 3. Before we begin with the proof
of the theorem itself, we first present a useful tool for analyzing the stability of minimizers of convex
functions, which we later apply to the empirical leave-one-out risks.
3.1 Local Strong Convexity and Stability
Our stability analysis for exp-concave functions is inspired by recent analysis techniques of regularization-based online learning algorithms, that make use of strong convexity with respect to local norms [1]. The crucial strong-convexity property is summarized in the following definition.
Definition 4 (Local strong convexity). We say that a function g : K → R is locally σ-strongly-convex over a domain K ⊆ R^d at x with respect to a norm ‖·‖, if
∀ y ∈ K,   g(y) ≥ g(x) + ∇g(x)·(y − x) + (σ/2)‖y − x‖².
In words, a function is locally strongly-convex at x if it can be lower bounded (globally over its entire domain) by a quadratic tangent to the function at x; the nature of the quadratic term in this lower bound is determined by a choice of a local norm, which is typically adapted to the local properties of the function at the point x.
With the above definition, we can now prove the following stability result for optima of convex functions, that underlies our stability analysis for exp-concave functions.
Lemma 5. Let g_1, g_2 : K → R be two convex functions defined over a closed and convex domain K ⊆ R^d, and let x_1 ∈ arg min_{x∈K} g_1(x) and x_2 ∈ arg min_{x∈K} g_2(x). Assume that g_2 is locally σ-strongly-convex at x_1 with respect to a norm ‖·‖. Then, for h = g_2 − g_1 we have
‖x_2 − x_1‖ ≤ (2/σ)‖∇h(x_1)‖*.
Furthermore, if h is convex then
0 ≤ h(x_1) − h(x_2) ≤ (2/σ)(‖∇h(x_1)‖*)².
Proof. The local strong convexity of g_2 at x_1 implies
∇g_2(x_1)·(x_1 − x_2) ≥ g_2(x_1) − g_2(x_2) + (σ/2)‖x_2 − x_1‖².
Notice that g_2(x_1) − g_2(x_2) ≥ 0, since x_2 is a minimizer of g_2. Also, since x_1 is a minimizer of g_1, first-order optimality conditions imply that ∇g_1(x_1)·(x_1 − x_2) ≤ 0, whence
∇g_2(x_1)·(x_1 − x_2) = ∇g_1(x_1)·(x_1 − x_2) + ∇h(x_1)·(x_1 − x_2) ≤ ∇h(x_1)·(x_1 − x_2).
Combining the observations yields
(σ/2)‖x_2 − x_1‖² ≤ ∇h(x_1)·(x_1 − x_2) ≤ ‖∇h(x_1)‖* · ‖x_1 − x_2‖,
where we have used Hölder's inequality in the last inequality. This gives the first claim of the lemma. To obtain the second claim, we first observe that
g_1(x_2) + h(x_2) ≤ g_1(x_1) + h(x_1) ≤ g_1(x_2) + h(x_1),
where we used the fact that x_2 is the minimizer of g_2 = g_1 + h for the first inequality, and the fact that x_1 is the minimizer of g_1 for the second. This establishes the lower bound 0 ≤ h(x_1) − h(x_2). For the upper bound, we use the assumed convexity of h to write
h(x_1) − h(x_2) ≤ ∇h(x_1)·(x_1 − x_2) ≤ ‖∇h(x_1)‖* · ‖x_1 − x_2‖ ≤ (2/σ)(‖∇h(x_1)‖*)²,
where the second inequality follows from Hölder's inequality, and the final one from the first claim of the lemma.
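A quick numerical sanity check of Lemma 5 (our illustration, using two 1-strongly-convex quadratics and the Euclidean norm, which is its own dual):

import numpy as np

rng = np.random.default_rng(0)
d = 5
b1, b2 = rng.normal(size=d), rng.normal(size=d)
# g1(x) = 0.5*||x - b1||^2, g2(x) = 0.5*||x - b2||^2, so sigma = 1
x1, x2 = b1, b2                      # the two minimizers
grad_h_x1 = b1 - b2                  # h = g2 - g1, grad h(x1) = (x1-b2)-(x1-b1)
assert np.linalg.norm(x2 - x1) <= 2.0 * np.linalg.norm(grad_h_x1) + 1e-9
h = lambda x: 0.5 * ((x - b2) @ (x - b2) - (x - b1) @ (x - b1))  # affine, hence convex
assert 0.0 <= h(x1) - h(x2) <= 2.0 * np.linalg.norm(grad_h_x1) ** 2 + 1e-9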
3.2 Average Stability Analysis
With Lemma 5 at hand, we now turn to prove Theorem 3. First, a few definitions are needed. For brevity, we henceforth denote f_i(·) = f(·, z_i) for all i. We let h_i = ∇f_i(ŵ) be the gradient of f_i at the point ŵ defined in Eq. (3), and let H = (1/γ)I_d + ∑_{i=1}^n h_i h_iᵀ and H_i = (1/γ)I_d + ∑_{j≠i} h_j h_jᵀ for all i, where γ = ½ min{1/(4C), α}. Finally, we will use ‖·‖_M to denote the norm induced by a positive definite matrix M, i.e., ‖x‖_M = √(xᵀMx). In this case, the dual norm ‖x‖*_M induced by M simply equals ‖x‖_{M⁻¹} = √(xᵀM⁻¹x).
In order to obtain an upper bound over the average stability, we first bound each of the individual stability expressions f_i(ŵ_i) − f_i(ŵ) in terms of a certain norm of the gradient h_i of the corresponding function f_i. As the proof below reveals, this norm is the local norm at ŵ with respect to which the leave-one-out risk F̂_i is locally strongly convex.
Lemma 6. For all i = 1, ..., n it holds that
f_i(ŵ_i) − f_i(ŵ) ≤ (6β/γ)(‖h_i‖*_{H_i})².
Notice that the expression on the right-hand side might be quite large for a particular function f_i; indeed, uniform stability does not hold in our case. However, as we show below, the average of these expressions is small. The proof of Lemma 6 relies on Lemma 5 above and the following property of exp-concave functions, established by Hazan et al. [8].
Lemma 7 (Hazan et al. [8], Lemma 3). Let f : K → R be an α-exp-concave function over a convex domain K ⊆ R^d such that |f(x) − f(y)| ≤ C for any x, y ∈ K. Then for any γ ≤ ½ min{1/(4C), α} it holds that
∀ x, y ∈ K,   f(y) ≥ f(x) + ∇f(x)·(y − x) + (γ/2)(∇f(x)·(y − x))².   (6)
Proof of Lemma 6. We apply Lemma 5 with g_1 = F̂ and g_2 = F̂_i (so that h = −(1/n)f_i). We should first verify that F̂_i is indeed (γ/n)-strongly-convex at ŵ with respect to the norm ‖·‖_{H_i}. Since each f_j is α-exp-concave, Lemma 7 shows that for all w ∈ W,
f_j(w) ≥ f_j(ŵ) + ∇f_j(ŵ)·(w − ŵ) + (γ/2)(h_j·(w − ŵ))²,   (7)
with our choice of γ = ½ min{1/(4C), α}. Also, the strong convexity of the regularizer R implies that
R(w) ≥ R(ŵ) + ∇R(ŵ)·(w − ŵ) + ½‖w − ŵ‖_2².   (8)
Summing Eq. (7) over all j ≠ i with Eq. (8) and dividing through by n gives
F̂_i(w) ≥ F̂_i(ŵ) + ∇F̂_i(ŵ)·(w − ŵ) + (γ/2n) ∑_{j≠i} (h_j·(w − ŵ))² + (1/2n)‖w − ŵ‖_2²
       = F̂_i(ŵ) + ∇F̂_i(ŵ)·(w − ŵ) + (γ/2n)‖w − ŵ‖²_{H_i},
which establishes the strong convexity. Now, applying Lemma 5 gives
‖ŵ_i − ŵ‖_{H_i} ≤ (2n/γ)‖∇h(ŵ)‖*_{H_i} = (2/γ)‖h_i‖*_{H_i}.   (9)
On the other hand, since f_i is convex, we have
f_i(ŵ_i) − f_i(ŵ) ≤ ∇f_i(ŵ_i)·(ŵ_i − ŵ) = ∇f_i(ŵ)·(ŵ_i − ŵ) + (∇f_i(ŵ_i) − ∇f_i(ŵ))·(ŵ_i − ŵ).   (10)
The first term can be bounded using Hölder's inequality and Eq. (9) as
∇f_i(ŵ)·(ŵ_i − ŵ) = h_i·(ŵ_i − ŵ) ≤ ‖h_i‖*_{H_i} ‖ŵ_i − ŵ‖_{H_i} ≤ (2/γ)(‖h_i‖*_{H_i})².
Also, since f_i is β-smooth (with respect to the Euclidean norm), we can bound the second term in Eq. (10) as follows:
(∇f_i(ŵ_i) − ∇f_i(ŵ))·(ŵ_i − ŵ) ≤ ‖∇f_i(ŵ_i) − ∇f_i(ŵ)‖_2 ‖ŵ_i − ŵ‖_2 ≤ β‖ŵ_i − ŵ‖_2²,
and since H_i ⪰ (1/γ)I_d, we can further bound using Eq. (9),
‖ŵ_i − ŵ‖_2² ≤ γ‖ŵ_i − ŵ‖²_{H_i} ≤ (4/γ)(‖h_i‖*_{H_i})².
Combining the bounds (and simplifying using our assumption β ≥ 1) gives the lemma.
Next, we bound a sum involving the local-norm terms introduced in Lemma 6.
Lemma 8. Let I = {i ∈ [n] : (‖h_i‖*_H)² > ½}. Then |I| ≤ 2d, and we have
∑_{i∉I} (‖h_i‖*_{H_i})² ≤ 2d.
Proof. Denote a_i = h_iᵀH⁻¹h_i for all i = 1, ..., n. First, we claim that a_i > 0 for all i, and ∑_i a_i ≤ d. The fact that a_i > 0 is evident from H⁻¹ being positive-definite. For the sum of the a_i's, we write:
∑_{i=1}^n a_i = ∑_{i=1}^n h_iᵀH⁻¹h_i = ∑_{i=1}^n tr(H⁻¹h_i h_iᵀ) ≤ tr(H⁻¹H) = tr(I_d) = d,   (11)
where we have used the linearity of the trace, and the fact that H ⪰ ∑_{i=1}^n h_i h_iᵀ.
Now, our claim that |I| ≤ 2d is evident: if (‖h_i‖*_H)² > ½ for more than 2d terms, then the sum ∑_{i∈I} a_i = ∑_{i∈I} h_iᵀH⁻¹h_i must be larger than d, which is a contradiction to Eq. (11). To prove our second claim, we first write H_i = H − h_i h_iᵀ and use the Sherman-Morrison identity [e.g., 7] to obtain
H_i⁻¹ = (H − h_i h_iᵀ)⁻¹ = H⁻¹ + (H⁻¹h_i h_iᵀH⁻¹)/(1 − h_iᵀH⁻¹h_i)
for all i ∉ I. Note that for i ∉ I we have h_iᵀH⁻¹h_i < 1, so that the identity applies and the inverse on the right-hand side is well defined. We therefore have:
(‖h_i‖*_{H_i})² = h_iᵀH_i⁻¹h_i = h_iᵀH⁻¹h_i + (h_iᵀH⁻¹h_i)²/(1 − h_iᵀH⁻¹h_i) = a_i + a_i²/(1 − a_i) ≤ 2a_i,
where the inequality follows from the fact that 1 − a_i ≥ a_i for i ∉ I. Summing this inequality over i ∉ I and recalling that the a_i's are nonnegative, we obtain
∑_{i∉I} (‖h_i‖*_{H_i})² ≤ 2 ∑_{i∉I} a_i ≤ 2 ∑_{i=1}^n a_i ≤ 2d,
which concludes the proof.
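The two facts driving this proof are easy to verify numerically. The following sketch (our illustration, under random gradients h_i) checks the trace bound (11) and the Sherman-Morrison identity:

import numpy as np

rng = np.random.default_rng(1)
n, d, gamma = 50, 3, 0.1
h = rng.normal(size=(n, d))
H = np.eye(d) / gamma + h.T @ h           # H = (1/gamma) I_d + sum_i h_i h_i^T
Hinv = np.linalg.inv(H)
a = np.einsum('ij,jk,ik->i', h, Hinv, h)  # a_i = h_i^T H^{-1} h_i
assert a.sum() <= d + 1e-9                # Eq. (11)

i = 0                                     # Sherman-Morrison for H_i = H - h_i h_i^T
Hi_inv = Hinv + np.outer(Hinv @ h[i], h[i] @ Hinv) / (1.0 - a[i])
assert np.allclose(Hi_inv, np.linalg.inv(H - np.outer(h[i], h[i])))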
Theorem 3 is now obtained as an immediate consequence of our lemmas above.
Proof of Theorem 3. As a consequence of Lemmas 6 and 8, we have
(1/n) ∑_{i∈I} (f_i(ŵ_i) − f_i(ŵ)) ≤ C|I|/n ≤ 2Cd/n,
and
(1/n) ∑_{i∉I} (f_i(ŵ_i) − f_i(ŵ)) ≤ (6β/(γn)) ∑_{i∉I} (‖h_i‖*_{H_i})² ≤ 12βd/(γn).
Summing the inequalities and using 1/γ = 2 max{4C, 1/α} ≤ 2(4C + 1/α) gives the result.
4 Conclusions and Open Problems
We have proved the first fast convergence rate for a regularized ERM procedure for exp-concave
loss functions. Our bounds match the existing lower bounds in the specific case of the squared loss
up to constants, and improve by a logarithmic factor upon the best known upper bounds achieved by
online methods.
Our stability analysis required us to assume smoothness of the loss functions, in addition to their
exp-concavity. We note, however, that the Online Newton Step algorithm of Hazan et al. [8] for
online exp-concave optimization does not require such an assumption. Even though most of the
popular exp-concave loss functions are also smooth, it would be interesting to understand whether
smoothness is indeed required for the convergence of the ERM estimator we study in the present
paper, or whether it is simply a limitation of our analysis.
Another interesting issue left open in our work is how to obtain bounds on the excess risk of ERM that hold with high probability, and not only in expectation. Since the excess risk is non-negative, one can always apply Markov's inequality to obtain a bound that holds with probability 1 − δ but scales linearly with 1/δ. Also, using standard concentration inequalities (or success amplification techniques), we may also obtain high probability bounds that scale with √(log(1/δ)/n), losing the fast 1/n rate. We leave the problem of obtaining bounds that depend both linearly on 1/n and logarithmically on 1/δ for future work.
References
[1] J. D. Abernethy, E. Hazan, and A. Rakhlin. Interior-point methods for full-information and bandit online learning. Information Theory, IEEE Transactions on, 58(7):4164–4175, 2012.
[2] M. Anthony and P. L. Bartlett. Neural network learning: Theoretical foundations. Cambridge University Press, 2009.
[3] K. S. Azoury and M. K. Warmuth. Relative loss bounds for on-line density estimation with the exponential family of distributions. Machine Learning, 43(3):211–246, 2001.
[4] O. Bousquet and A. Elisseeff. Stability and generalization. The Journal of Machine Learning Research, 2:499–526, 2002.
[5] N. Cesa-Bianchi and G. Lugosi. Prediction, learning, and games. Cambridge University Press, 2006.
[6] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. IEEE Transactions on Information Theory, 50(9):2050–2057, 2004.
[7] G. H. Golub and C. F. Van Loan. Matrix computations, volume 3. JHU Press, 2012.
[8] E. Hazan, A. Agarwal, and S. Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2-3):169–192, 2007.
[9] D. Hsu, S. M. Kakade, and T. Zhang. Random design analysis of ridge regression. Foundations of Computational Mathematics, 14(3):569–600, 2014.
[10] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315–323, 2013.
[11] S. M. Kakade, K. Sridharan, and A. Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. In Advances in Neural Information Processing Systems, pages 793–800, 2009.
[12] J. Kivinen and M. K. Warmuth. Averaging expert predictions. In Computational Learning Theory, pages 153–167. Springer, 1999.
[13] T. Koren. Open problem: Fast stochastic exp-concave optimization. In Conference on Learning Theory, pages 1073–1075, 2013.
[14] G. Lecué and S. Mendelson. Performance of empirical risk minimization in linear aggregation. arXiv preprint arXiv:1402.5763, 2014.
[15] M. Mahdavi, L. Zhang, and R. Jin. Lower and upper bounds on the generalization of stochastic exponentially concave optimization. In Proceedings of The 28th Conference on Learning Theory, 2015.
[16] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss. The Journal of Machine Learning Research, 14(1):567–599, 2013.
[17] S. Shalev-Shwartz and T. Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. Mathematical Programming, pages 1–41, 2014.
[18] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Learnability, stability and uniform convergence. The Journal of Machine Learning Research, 11:2635–2670, 2010.
[19] O. Shamir. The sample complexity of learning linear predictors with the squared loss. arXiv preprint arXiv:1406.5143, 2014.
[20] N. Srebro, K. Sridharan, and A. Tewari. Smoothness, low noise and fast rates. In Advances in Neural Information Processing Systems, pages 2199–2207, 2010.
[21] K. Sridharan, S. Shalev-Shwartz, and N. Srebro. Fast rates for regularized objectives. In Advances in Neural Information Processing Systems, pages 1545–1552, 2009.
[22] V. Vovk. Competitive on-line statistics. International Statistical Review, 69(2):213–248, 2001.
5,564 | 6,035 | Adaptive Low-Complexity Sequential Inference for
Dirichlet Process Mixture Models
Theodoros Tsiligkaridis, Keith W. Forsythe
Massachusetts Institute of Technology, Lincoln Laboratory
Lexington, MA 02421 USA
[email protected], [email protected]
Abstract
We develop a sequential low-complexity inference procedure for Dirichlet process mixtures of Gaussians for online clustering and parameter estimation when
the number of clusters is unknown a priori. We present an easily computable, closed form parametric expression for the conditional likelihood, in which hyperparameters are recursively updated as a function of the streaming data assuming conjugate priors. Motivated by large-sample asymptotics, we propose a novel adaptive low-complexity design for the Dirichlet process concentration parameter and show that the number of classes grows at most at a logarithmic rate. We
further prove that in the large-sample limit, the conditional likelihood and data
predictive distribution become asymptotically Gaussian. We demonstrate through
experiments on synthetic and real data sets that our approach is superior to other
online state-of-the-art methods.
1 Introduction
Dirichlet process mixture models (DPMM) have been widely used for clustering data Neal (1992);
Rasmussen (2000). Traditional finite mixture models often suffer from overfitting or underfitting
of data due to possible mismatch between the model complexity and amount of data. Thus, model
selection or model averaging is required to find the correct number of clusters or the model with
the appropriate complexity. This requires significant computation for high-dimensional data sets or
large samples. Bayesian nonparametric models are alternative approaches to parametric modeling, an example being DPMM's, which can automatically infer the number of clusters from the data via Bayesian inference techniques.
The use of Markov chain Monte Carlo (MCMC) methods for Dirichlet process mixtures has made
inference tractable Neal (2000). However, these methods can exhibit slow convergence and their
convergence can be tough to detect. Alternatives include variational methods Blei & Jordan (2006),
which are deterministic algorithms that convert inference to optimization. These approaches can
take a significant computational effort even for moderate sized data sets. For large-scale data sets
and low-latency applications with streaming data, there is a need for inference algorithms that are
much faster and do not require multiple passes through the data. In this work, we focus on low-complexity algorithms that adapt to each sample as they arrive, making them highly scalable. An online algorithm for learning DPMM's based on a sequential variational approximation (SVA) was proposed in Lin (2013), and the authors in Wang & Dunson (2011) recently proposed a sequential maximum a-posteriori (MAP) estimator for the class labels given streaming data. The algorithm is
called sequential updating and greedy search (SUGS) and each iteration is composed of a greedy
selection step and a posterior update step.
The choice of concentration parameter α is critical for DPMM's as it controls the number of clusters Antoniak (1974). While most fast DPMM algorithms use a fixed α Fearnhead (2004); Daume (2007); Kurihara et al. (2006), imposing a prior distribution on α and sampling from it provides more flexibility, but this approach still heavily relies on experimentation and prior knowledge. Thus, many fast inference methods for Dirichlet process mixture models have been proposed that can adapt α to the data, including the works Escobar & West (1995), where learning of α is incorporated in the Gibbs sampling analysis, and Blei & Jordan (2006), where a Gamma prior is used in a conjugate manner directly in the variational inference algorithm. Wang & Dunson (2011) also account for model uncertainty on the concentration parameter α in a Bayesian manner directly in the sequential inference procedure. This approach can be computationally expensive, as discretization of the domain of α is needed, and its stability highly depends on the initial distribution on α and on the range of values of α. To the best of our knowledge, we are the first to analytically study the evolution and stability of the adapted sequence of α's in the online learning setting.
In this paper, we propose an adaptive non-Bayesian approach for adapting α motivated by large-sample asymptotics, and call the resulting algorithm ASUGS (Adaptive SUGS). While the basic idea behind ASUGS is directly related to the greedy approach of SUGS, the main contribution is a novel low-complexity stable method for choosing the concentration parameter adaptively as new data arrive, which greatly improves the clustering performance. We derive an upper bound on the number of classes, logarithmic in the number of samples, and further prove that the sequence of concentration parameters that results from this adaptive design is almost bounded. We finally prove that the conditional likelihood, which is the primary tool used for Bayesian-based online clustering, is asymptotically Gaussian in the large-sample limit, implying that the clustering part of ASUGS asymptotically behaves as a Gaussian classifier. Experiments show that our method outperforms other state-of-the-art methods for online learning of DPMM's.
The paper is organized as follows. In Section 2, we review the sequential inference framework for DPMM's that we will build upon, introduce notation and propose our adaptive modification. In Section 3, the probabilistic data model is given and sequential inference steps are shown. Section 4 contains the growth rate analysis of the number of classes and the adaptively-designed concentration parameters, and Section 5 contains the Gaussian large-sample approximation to the conditional likelihood. Experimental results are shown in Section 6 and we conclude in Section 7.
2 Sequential Inference Framework for DPMM
Here, we review the SUGS framework of Wang & Dunson (2011) for online clustering. Here, the
nonparametric nature of the Dirichlet process manifests itself as modeling mixture models with countably infinite components. Let the observations be given by y_i ∈ R^d, and γ_i denote the class label of the ith observation (a latent variable). We define the available information at time i as y^(i) = {y_1, ..., y_i} and γ^(i−1) = {γ_1, ..., γ_{i−1}}. The online sequential updating and greedy search (SUGS) algorithm is summarized next for completeness. Set γ_1 = 1 and calculate π(θ_1 | y_1, γ_1). For i ≥ 2,
1. Choose the best class label for y_i: γ_i ∈ arg max_{1≤h≤k_{i−1}+1} P(γ_i = h | y^(i), γ^(i−1)).
2. Update the posterior distribution using y_i, γ_i: π(θ_{γ_i} | y^(i), γ^(i)) ∝ f(y_i | θ_{γ_i}) π(θ_{γ_i} | y^(i−1), γ^(i−1)),
where θ_h are the parameters of class h, f(y_i | θ_h) is the observation density conditioned on class h and k_{i−1} is the number of classes created at time i − 1. The algorithm sequentially allocates observations y_i to classes based on maximizing the conditional posterior probability.
To calculate the posterior probability P(γ_i = h | y^(i), γ^(i−1)), define the variables:
L_{i,h}(y_i) := P(y_i | γ_i = h, y^(i−1), γ^(i−1)),   π_{i,h}(α) := P(γ_i = h | α, y^(i−1), γ^(i−1)).
From Bayes' rule, P(γ_i = h | y^(i), γ^(i−1)) ∝ L_{i,h}(y_i) π_{i,h}(α) for h = 1, ..., k_{i−1} + 1. Here, α is considered fixed at this iteration, and is not updated in a fully Bayesian manner.
According to the Dirichlet process prediction, the predictive probability of assigning observation y_i to a class h is:
π_{i,h}(α) = m_{i−1}(h)/(i − 1 + α) for h = 1, ..., k_{i−1}, and π_{i,k_{i−1}+1}(α) = α/(i − 1 + α),   (1)
Algorithm 1 Adaptive Sequential Updating and Greedy Search (ASUGS)
Input: streaming data {y_i}_{i=1}^∞, rate parameter λ > 0.
Set γ_1 = 1 and k_1 = 1. Calculate π(θ_1 | y_1, γ_1).
for i ≥ 2 do
  (a) Update concentration parameter: α_{i−1} = k_{i−1}/(λ + log(i − 1)).
  (b) Choose best label for y_i: γ_i ~ {q_h^(i)} = L_{i,h}(y_i) π_{i,h}(α_{i−1}) / ∑_{h′} L_{i,h′}(y_i) π_{i,h′}(α_{i−1}).
  (c) Update posterior distribution: π(θ_{γ_i} | y^(i), γ^(i)) ∝ f(y_i | θ_{γ_i}) π(θ_{γ_i} | y^(i−1), γ^(i−1)).
end for
where m_{i−1}(h) = ∑_{l=1}^{i−1} 1(γ_l = h) counts the number of observations labeled as class h at time i − 1, and α > 0 is the concentration parameter.
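For concreteness, a small helper (our sketch, not code from the paper) computing the Dirichlet-process predictive probabilities (1) from the current class counts:

import numpy as np

def dp_predictive(counts, alpha):
    # pi_{i,h}(alpha) for h = 1..k and the innovation class k+1,
    # where counts[h-1] = m_{i-1}(h) and sum(counts) = i - 1.
    i_minus_1 = counts.sum()
    return np.append(counts, alpha) / (i_minus_1 + alpha)  # sums to 1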
2.1 Adaptation of Concentration Parameter α
It is well known that the concentration parameter α has a strong influence on the growth of the number of classes Antoniak (1974). Our experiments show that in this sequential framework, the choice of α is even more critical. Choosing a fixed α as in the online SVA algorithm of Lin (2013) requires cross-validation, which is computationally prohibitive for large-scale data sets. Furthermore, in the streaming data setting where no estimate on the data complexity exists, it is impractical to perform cross-validation. Although the parameter α is handled from a fully Bayesian treatment in Wang & Dunson (2011), a pre-specified grid of possible values α can take, say {α_l}_{l=1}^L, along with the prior distribution over them, needs to be chosen in advance. Storage and updating of a matrix of size (k_{i−1} + 1) × L and further marginalization is needed to compute P(γ_i = h | y^(i), γ^(i−1)) at each iteration i. Thus, we propose an alternative data-driven method for choosing α that works well in practice, is simple to compute and has theoretical guarantees.
The idea is to start with a prior distribution on α that favors small α and shape it into a posterior distribution using the data. Define p_i(α) = p(α | y^(i), γ^(i)) as the posterior distribution formed at time i, which will be used in ASUGS at time i + 1. Let p_1(α) ≡ p_1(α | y^(1), γ^(1)) denote the prior for α, e.g., an exponential distribution p_1(α) = λe^{−λα}. The dependence on y^(i) and γ^(i) is trivial only at this first step. Then, by Bayes' rule, p_i(α) ∝ p(y_i, γ_i | y^(i−1), γ^(i−1), α) p(α | y^(i−1), γ^(i−1)) ∝ p_{i−1}(α) π_{i,γ_i}(α), where π_{i,γ_i}(α) is given in (1). Once this update is made after the selection of γ_i, the α to be used in the next selection step is the mean of the distribution p_i(α), i.e., α_i = E[α | y^(i), γ^(i)]. As will be shown in Section 4, the distribution p_i(α) can be approximated by a Gamma distribution with shape parameter k_i and rate parameter λ + log i. Under this approximation, we have α_i = k_i/(λ + log i), only requiring storage and update of one scalar parameter k_i at each iteration i.
The ASUGS algorithm is summarized in Algorithm 1. The selection step may be implemented by sampling the probability mass function {q_h^(i)}. The posterior update step can be efficiently performed by updating the hyperparameters as a function of the streaming data for the case of conjugate distributions. Section 3 derives these updates for the case of multivariate Gaussian observations and conjugate priors for the parameters.
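A minimal skeleton of the ASUGS loop (our paraphrase of Algorithm 1, not the authors' code; `likelihoods`, `update_class` and `new_class` are hypothetical placeholder callables for the conditional likelihoods and posterior updates of Section 3, and a hard argmax is used in place of sampling):

import numpy as np

def asugs(stream, likelihoods, update_class, new_class, lam=1.0):
    classes, counts = [new_class()], None
    for i, y in enumerate(stream, start=1):
        if i == 1:
            update_class(classes[0], y)
            counts = np.array([1.0])
            continue
        alpha = len(classes) / (lam + np.log(i - 1))        # step (a)
        prior = np.append(counts, alpha) / (i - 1 + alpha)  # Eq. (1)
        L = likelihoods(y, classes + [new_class()])         # length k+1
        h = int(np.argmax(prior * L))                       # step (b)
        if h == len(classes):                               # innovation
            classes.append(new_class())
            counts = np.append(counts, 0.0)
        update_class(classes[h], y)                         # step (c)
        counts[h] += 1.0
    return classes, counts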
3 Sequential Inference under Unknown Mean & Unknown Covariance
We consider the general case of an unknown mean and covariance for each class. The probabilistic model for the parameters of each class is given as:
y_i | μ, T ~ N(· | μ, T),   μ | T ~ N(· | μ_0, c_0 T),   T ~ W(· | δ_0, V_0)   (2)
where N(· | μ, T) denotes the multivariate normal distribution with mean μ and precision matrix T, and W(· | δ, V) is the Wishart distribution with 2δ degrees of freedom and scale matrix V. The parameters θ = (μ, T) ∈ R^d × S_{++}^d follow a normal-Wishart joint distribution. The model (2) leads to closed-form expressions for the L_{i,h}(y_i)'s due to conjugacy Tzikas et al. (2008).
To calculate the class posteriors, the conditional likelihoods of y_i given assignment to class h and the previous class assignments need to be calculated first. The conditional likelihood of y_i given assignment to class h and the history (y^(i−1), γ^(i−1)) is given by:
L_{i,h}(y_i) = ∫ f(y_i | θ_h) π(θ_h | y^(i−1), γ^(i−1)) dθ_h.   (3)
Due to the conjugacy of the distributions, the posterior π(θ_h | y^(i−1), γ^(i−1)) always has the form:
π(θ_h | y^(i−1), γ^(i−1)) = N(μ_h | μ_h^(i−1), c_h^(i−1) T_h) W(T_h | δ_h^(i−1), V_h^(i−1))
where μ_h^(i−1), c_h^(i−1), δ_h^(i−1), V_h^(i−1) are hyperparameters that can be recursively computed as new samples come in. The form of this recursive computation of the hyperparameters is derived in Appendix A. For ease of interpretation and numerical stability, we define Σ_h^(i) := (V_h^(i))⁻¹ / (2δ_h^(i)) as the inverse of the mean of the Wishart distribution W(· | δ_h^(i), V_h^(i)). The matrix Σ_h^(i) has the natural interpretation as the covariance matrix of class h at iteration i. Once the γ_i-th component is chosen, the parameter updates for the γ_i-th class become:
μ_{γ_i}^(i) = y_i / (1 + c_{γ_i}^(i−1)) + (c_{γ_i}^(i−1) / (1 + c_{γ_i}^(i−1))) μ_{γ_i}^(i−1)   (4)
c_{γ_i}^(i) = c_{γ_i}^(i−1) + 1   (5)
δ_{γ_i}^(i) = δ_{γ_i}^(i−1) + 1/2   (6)
Σ_{γ_i}^(i) = (2δ_{γ_i}^(i−1) / (1 + 2δ_{γ_i}^(i−1))) Σ_{γ_i}^(i−1) + (1 / (1 + 2δ_{γ_i}^(i−1))) (c_{γ_i}^(i−1) / (1 + c_{γ_i}^(i−1))) (y_i − μ_{γ_i}^(i−1))(y_i − μ_{γ_i}^(i−1))ᵀ   (7)
If the starting matrix Σ_h^(0) is positive definite, then all the matrices {Σ_h^(i)} will remain positive definite.
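A direct transcription of the updates (4)-(7) in Python (our sketch; the class state is represented here as a simple dict with keys mu, c, delta and Sigma):

import numpy as np

def update_normal_wishart(state, y):
    # Apply Eqs. (4)-(7) to the chosen class after observing y.
    mu, c, delta, Sigma = state['mu'], state['c'], state['delta'], state['Sigma']
    r = c / (1.0 + c)
    diff = y - mu
    state['mu'] = (y + c * mu) / (1.0 + c)                               # (4)
    state['c'] = c + 1.0                                                 # (5)
    state['delta'] = delta + 0.5                                         # (6)
    state['Sigma'] = (2.0 * delta * Sigma
                      + r * np.outer(diff, diff)) / (1.0 + 2.0 * delta)  # (7)
    return state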
Let us return to the calculation of the conditional likelihood (3). By iterated integration, it follows that:
L_{i,h}(y_i) ∝ (r_h^(i−1) / (2δ_h^(i−1)))^{d/2} Γ_d(δ_h^(i−1)) det(Σ_h^(i−1))^{−1/2} [1 + (r_h^(i−1) / (2δ_h^(i−1))) (y_i − μ_h^(i−1))ᵀ(Σ_h^(i−1))⁻¹(y_i − μ_h^(i−1))]^{−(δ_h^(i−1) + 1/2)}   (8)
where Γ_d(a) := Γ(a + 1/2)/Γ(a + (1−d)/2) and r_h^(i−1) := c_h^(i−1)/(1 + c_h^(i−1)). A detailed mathematical derivation of this conditional likelihood is included in Appendix B. We remark that for the new class h = k_{i−1} + 1, L_{i,k_{i−1}+1} has the form (8) with the initial choice of hyperparameters r^(0), δ^(0), μ^(0), Σ^(0).
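The conditional likelihood (8) is a multivariate Student-type density and is best evaluated in the log-domain for numerical stability. A sketch (our illustration; written only up to the proportionality in (8), and assuming δ > (d−1)/2 so the Gamma-function arguments are positive):

import numpy as np
from scipy.special import gammaln

def log_cond_likelihood(y, mu, c, delta, Sigma):
    # log L_{i,h}(y) up to an additive constant, following Eq. (8).
    d = y.size
    r = c / (1.0 + c)
    quad = (y - mu) @ np.linalg.solve(Sigma, y - mu)
    return (0.5 * d * np.log(r / (2.0 * delta))
            + gammaln(delta + 0.5) - gammaln(delta + 0.5 * (1.0 - d))
            - 0.5 * np.linalg.slogdet(Sigma)[1]
            - (delta + 0.5) * np.log1p(r * quad / (2.0 * delta)))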
4 Growth Rate Analysis of Number of Classes & Stability
In this section, we derive a model for the posterior distribution p_n(α) using large-sample approximations, which will allow us to derive growth rates on the number of classes and the sequence of concentration parameters, showing that the number of classes grows as E[k_n] = O(log^{1+ε} n) for arbitrarily small ε > 0 under certain mild conditions.
The probability density of the α parameter is updated at the jth step in the following fashion:
p_{j+1}(α) ∝ p_j(α) × { α/(j + α) if the innovation class is chosen; 1/(j + α) otherwise, }
where only the α-dependent factors in the update are shown. The α-independent factors are absorbed by the normalization to a probability density. Choosing the innovation class pushes mass toward infinity while choosing any other class pushes mass toward zero. Thus there is a possibility that the innovation probability grows in an undesired manner. We assess the growth of the number of innovations r_n := k_n − 1 under simple assumptions on some likelihood functions that appear naturally in the ASUGS algorithm.
Assuming that the initial distribution of α is p_1(α) = λe^{−λα}, the distribution used at step n + 1 is proportional to α^{r_n} Π_{j=1}^{n−1} (1 + α/j)^{−1} e^{−λα}. We make use of the limiting relation:
Theorem 1. The following asymptotic behavior holds: lim_{n→∞} [log Π_{j=1}^{n−1} (1 + α/j)] / (α log n) = 1.
Proof. See Appendix C.
Using Theorem 1, a large-sample model for p_n(α) is α^{r_n} e^{−(λ + log n)α}, suitably normalized. Recognizing this as the Gamma distribution with shape parameter r_n + 1 and rate parameter λ + log n, its mean is given by α_n = (r_n + 1)/(λ + log n). We use the mean in this form to choose class membership in Alg. 1. This asymptotic approximation leads to a very simple scalar update of the concentration parameter; there is no need for discretization for tracking the evolution of continuous probability distributions on α. In our experiments, this approximation is very accurate.
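The quality of this Gamma approximation is easy to probe numerically (an illustration of ours, not an experiment from the paper): compare the exact unnormalized posterior α^r Π_{j=1}^{n−1}(1 + α/j)^{−1} e^{−λα} on a grid against the Gamma(r + 1, λ + log n) model:

import numpy as np

lam, n, r = 1.0, 1000, 5
alpha = np.linspace(1e-3, 5.0, 2000)
j = np.arange(1, n)
log_exact = (r * np.log(alpha) - lam * alpha
             - np.log1p(alpha[:, None] / j).sum(axis=1))
log_gamma = r * np.log(alpha) - (lam + np.log(n)) * alpha

def normalize(lp):
    p = np.exp(lp - lp.max())
    return p / np.trapz(p, alpha)

print(np.max(np.abs(normalize(log_exact) - normalize(log_gamma))))  # small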
Recall that the innovation class is labeled K_+ = k_{n−1} + 1 at the nth step. The modeled updates randomly select a previous class or an innovation (new class) by sampling from the probability distribution {q_k^(n) = P(γ_n = k | y^(n), γ^(n−1))}_{k=1}^{K_+}. Note that n − 1 = ∑_{k≠K_+} m_n(k), where m_n(k) represents the number of members in class k at time n.
We assume the data follows the Gaussian mixture distribution:
p_T(y) := ∑_{h=1}^K π_h N(y | μ_h, Σ_h)   (9)
where π_h are the prior probabilities, and μ_h, Σ_h are the parameters of the Gaussian clusters.
Define the mixture-model probability density function, which plays the role of the predictive distribution:
L̄_{n,K_+}(y) := ∑_{k≠K_+} (m_{n−1}(k)/(n−1)) L_{n,k}(y),   (10)
so that the probabilities of choosing a previous class or an innovation (using Eq. (1)) are proportional to ∑_{k≠K_+} (m_{n−1}(k)/(n−1+α_{n−1})) L_{n,k}(y_n) = ((n−1)/(n−1+α_{n−1})) L̄_{n,K_+}(y_n) and (α_{n−1}/(n−1+α_{n−1})) L_{n,K_+}(y_n), respectively. If ε_{n−1} denotes the innovation probability at step n, then we have
(ε_{n−1}, 1 − ε_{n−1}) = η_{n−1} ( α_{n−1} L_{n,K_+}(y_n)/(n−1+α_{n−1}),  (n−1) L̄_{n,K_+}(y_n)/(n−1+α_{n−1}) )   (11)
for some positive proportionality factor η_{n−1}.
Define the likelihood ratio (LR) at the beginning of stage n as¹:
l_n(y) := L_{n,K_+}(y) / L̄_{n,K_+}(y).   (12)
Conceptually, the mixture (10) represents a modeled distribution fitting the currently observed data. If all "modes" of the data have been observed, it is reasonable to expect that L̄_{n,K_+} is a good model for future observations. The LR l_n(y_n) is not large when the future observations are well-modeled by (10). In fact, we expect L̄_{n,K_+} → p_T as n → ∞, as discussed in Section 5.
Lemma 1. The following bound holds: ε_{n−1} = l_n(y_n)α_{n−1} / (n − 1 + l_n(y_n)α_{n−1}) ≤ min{ l_n(y_n)α_{n−1}/(n−1), 1 }.
Proof. The result follows directly from (11) after a simple calculation.
The innovation random variable r_n is described by the random process associated with the probabilities of transition
P(r_{n+1} = k | r_n) = { ε_n if k = r_n + 1; 1 − ε_n if k = r_n. }   (13)
¹ Here, L_0(·) := L_{n,K_+}(·) is independent of n and only depends on the initial choice of hyperparameters as discussed in Sec. 3.
The expectation of r_n is majorized by the expectation of a similar random process, r̄_n, based on the transition probability ε̄_n := min((r_n + 1)/a_n, 1) instead of ε_n, as Appendix D shows, where the random sequence {a_n} is given by a_n = l_{n+1}(y_{n+1})⁻¹ n(λ + log n). The latter can be described as a modification of a Polya urn process with selection probability ε̄_n. The asymptotic behavior of r_n and related variables is described in the following theorem.
Theorem 2. Let ε_n be a sequence of real-valued random variables, 0 ≤ ε_n ≤ 1, satisfying ε_n ≤ (r_n + 1)/a_n for n ≥ N, where a_n = l_{n+1}(y_{n+1})⁻¹ n(λ + log n), and where the nonnegative, integer-valued random variables r_n evolve according to (13). Assume the following for n ≥ N:
1. l_n(y_n) ≤ ζ   (a.s.)
2. D(p_T ‖ L̄_{n,K_+}) ≤ δ   (a.s.)
where D(p ‖ q) is the Kullback-Leibler divergence between distributions p(·) and q(·). Then, as n → ∞,
r_n = O_P(log^{1+ζ√(δ/2)} n),   α_n = O_P(log^{ζ√(δ/2)} n).   (14)
Proof. See Appendix E.
Theorem 2 bounds the growth rate of the mean of the number of class innovations and the concentration parameter α_n in terms of the sample size n and parameter λ. The bounded LR and bounded KL divergence conditions of Thm. 2 manifest themselves in the rate exponents of (14). The experiments section shows that both of the conditions of Thm. 2 hold for all iterations n ≥ N for some N ∈ ℕ. In fact, assuming the correct clustering, the mixture distribution L̄_{n,k_{n−1}+1} converges to the true mixture distribution p_T, implying that the number of class innovations grows at most as O(log^{1+ε} n) and the sequence of concentration parameters is O(log^ε n), where ε > 0 can be arbitrarily small.
5 Asymptotic Normality of Conditional Likelihood
In this section, we derive an asymptotic expression for the conditional likelihood (8) in order to gain
insight into the steady-state of the algorithm.
We let π_h denote the true prior probability of class h. Using the bounds of the Gamma function in Theorem 1.6 from Batir (2008), it follows that lim_{a→∞} Γ_d(a) / (e^{−d/2}(a − 1/2)^{d/2}) = 1. Under normal convergence conditions of the algorithm (with the pruning and merging steps included), all classes h = 1, ..., K will be correctly identified and populated with approximately n_{i−1}(h) ≈ π_h(i−1) observations at time i − 1. Thus, the conditional class prior for each class h converges to π_h as i → ∞, in virtue of (14):
π_{i,h}(α_{i−1}) = n_{i−1}(h)/(i − 1 + α_{i−1}) = π_h / (1 + O_P(log^{ζ√(δ/2)}(i−1))/(i−1)) → π_h.
According to (5), we expect r_h^(i−1) → 1 as i → ∞ since c_h^(i−1) ≈ π_h(i−1). Also, we expect 2δ_h^(i−1) ≈ π_h(i−1) as i → ∞ according to (6). Also, from before, Γ_d(δ_h^(i−1)) ≈ e^{−d/2}(δ_h^(i−1) − 1/2)^{d/2} ≈ e^{−d/2}(π_h(i−1)/2 − 1/2)^{d/2}. The parameter updates (4)-(7) imply μ_h^(i) → μ_h and Σ_h^(i) → Σ_h as i → ∞. This follows from the strong law of large numbers, as the updates are recursive implementations of the sample mean and sample covariance matrix. Thus, the large-sample approximation to the conditional likelihood becomes:
lim_{i→∞} L_{i,h}(y_i) ∝ lim_{i→∞} [1 + (r_h^(i−1)/(2δ_h^(i−1))) (y_i − μ_h^(i−1))ᵀ(Σ_h^(i−1))⁻¹(y_i − μ_h^(i−1))]^{−(δ_h^(i−1)+1/2)} / lim_{i→∞} det(Σ_h^(i−1))^{1/2}
= e^{−½(y_i − μ_h)ᵀΣ_h⁻¹(y_i − μ_h)} / √(det Σ_h),   (15)
where we used lim_{u→∞}(1 + c/u)^u = e^c. The conditional likelihood (15) corresponds to the multivariate Gaussian distribution with mean μ_h and covariance matrix Σ_h. A similar asymptotic normality result was recently obtained in Tsiligkaridis & Forsythe (2015) for Gaussian observations with a von Mises prior. The asymptotics m_{n−1}(h)/(n−1) → π_h, μ_h^(n) → μ_h, Σ_h^(n) → Σ_h, L_{n,h}(y) → N(y | μ_h, Σ_h) as n → ∞ imply that the mixture distribution L̄_{n,K_+} in (10) converges to the true Gaussian mixture distribution p_T of (9). Thus, for any small δ, we expect D(p_T ‖ L̄_{n,K_+}) ≤ δ for all n ≥ N, validating the assumption of Theorem 2.
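As a quick check of (15) (our illustration), the Student-type kernel of Eq. (8) collapses onto the Gaussian kernel as the effective counts grow, since c ≈ n_h and 2δ ≈ n_h for a class that has absorbed n_h samples:

import numpy as np

d = 2
mu, Sigma = np.zeros(d), np.eye(d)
y = np.array([0.7, -0.3])
quad = (y - mu) @ np.linalg.solve(Sigma, y - mu)
log_gauss_kernel = -0.5 * quad                   # Gaussian exponent, up to constants

for n_h in [10, 100, 10000]:                     # samples seen by class h
    c, delta = float(n_h), n_h / 2.0             # c ~ n_h, 2*delta ~ n_h
    r = c / (1.0 + c)
    log_student = -(delta + 0.5) * np.log1p(r * quad / (2.0 * delta))
    print(n_h, log_student, log_gauss_kernel)    # converges as n_h grows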
6 Experiments
We apply the ASUGS learning algorithm to a synthetic 16-class example and to a real data set, to
verify the stability and accuracy of our method. The experiments show the value of adaptation of
the Dirichlet concentration parameter for online clustering and parameter estimation.
Since it is possible that multiple clusters are similar and classes might be created due to outliers, or
due to the particular ordering of the streaming data sequence, we add the pruning and merging step
in the ASUGS algorithm as done in Lin (2013). We compare ASUGS and ASUGS-PM with SUGS,
SUGS-PM, SVA and SVA-PM proposed in Lin (2013), since it was shown in Lin (2013) that SVA
and SVA-PM outperform the block-based methods that perform iterative updates over the entire data
set including Collapsed Gibbs Sampling, MCMC with Split-Merge and Truncation-Free Variational
Inference.
6.1 Synthetic Data Set
We consider learning the parameters of a 16-class Gaussian mixture, each with equal variance of σ² = 0.025. The training set was made up of 500 i.i.d. samples, and the test set was made up of 1000 i.i.d. samples. The clustering results are shown in Fig. 1(a), showing that the ASUGS-based approaches are more stable than SVA-based algorithms. ASUGS-PM performs best and identifies the correct number of clusters, and their parameters. Fig. 1(b) shows the data log-likelihood on the test set (averaged over 100 Monte Carlo trials), and the mean and variance of the number of classes at each iteration. The ASUGS-based approaches achieve a higher log-likelihood than SVA-based approaches asymptotically. Fig. 2 provides some numerical verification for the assumptions of Theorem 2. As expected, the predictive likelihood L̄_{i,K_+} (10) converges to the true mixture distribution p_T (9), and the likelihood ratio l_i(y_i) is bounded after enough samples are processed.
Figure 1: (a) Clustering performance of SVA, SVA-PM, ASUGS and ASUGS-PM on synthetic data
set. ASUGS-PM identifies the 16 clusters correctly. (b) Joint log-likelihood on synthetic data, mean
and variance of number of classes as a function of iteration. The likelihood values were evaluated on
a held-out set of 1000 samples. ASUGS-PM achieves the highest log-likelihood and has the lowest
asymptotic variance on the number of classes.
6.2 Real Data Set
We applied the online nonparametric Bayesian methods for clustering image data. We used the MNIST data set, which consists of 60,000 training samples and 10,000 test samples.
Figure 2: Likelihood ratio l_i(y_i) = L_{i,K_+}(y_i) / L̄_{i,K_+}(y_i) (left) and L₂-distance between L̄_{i,K_+}(·) and the true mixture distribution p_T (right) for the synthetic example (see Section 6.1).
Each sample is a 28 × 28 image of a handwritten digit (total of 784 dimensions), and we perform PCA preprocessing to reduce dimensionality to d = 50 dimensions as in Kurihara et al. (2006).
We use only a random 1.667% subset, consisting of 1000 random samples for training. This training set contains data from all 10 digits with an approximately uniform proportion. Fig. 3 shows the predictive log-likelihood over the test set, and the mean images for clusters obtained using ASUGS-PM and SVA-PM, respectively. We note that ASUGS-PM achieves higher log-likelihood values and finds all digits correctly using only 23 clusters, while SVA-PM finds some digits using 56 clusters.
Figure 3: Predictive log-likelihood (a) on test set, mean images for clusters found using ASUGS-PM
(b) and SVA-PM (c) on MNIST data set.
6.3 Discussion
Although both SVA and ASUGS methods have similar computational complexity and use decisions and information obtained from processing previous samples in order to decide on class innovations, the mechanics of these methods are quite different. ASUGS uses an adaptive α motivated by asymptotic theory, while SVA uses a fixed α. Furthermore, SVA updates the parameters of all the components at each iteration (in a weighted fashion) while ASUGS only updates the parameters of the most-likely cluster, thus minimizing leakage to unrelated components. The λ parameter of ASUGS does not affect performance as much as the threshold parameter of SVA does, which often leads to instability requiring lots of pruning and merging steps and increasing latency. This is critical for large data sets or streaming applications, because cross-validation would be required to set it appropriately. We observe higher log-likelihoods and better numerical stability for ASUGS-based methods in comparison to SVA. The mathematical formulation of ASUGS allows for theoretical guarantees (Theorem 2), and an asymptotically normal predictive distribution.
7 Conclusion
We developed a fast online clustering and parameter estimation algorithm for Dirichlet process mixtures of Gaussians, capable of learning in a single data pass. Motivated by large-sample asymptotics,
we proposed a novel low-complexity data-driven adaptive design for the concentration parameter
and showed it leads to logarithmic growth rates on the number of classes. Through experiments on
synthetic and real data sets, we show our method achieves better performance and is as fast as other
state-of-the-art online learning DPMM methods.
References
Antoniak, C. E. Mixtures of Dirichlet Processes with Applications to Bayesian Nonparametric Problems. The Annals of Statistics, 2(6):1152–1174, 1974.
Batir, N. Inequalities for the Gamma Function. Archiv der Mathematik, 91(6):554–563, 2008.
Blei, D. M. and Jordan, M. I. Variational Inference for Dirichlet Process Mixtures. Bayesian Analysis, 1(1):121–144, 2006.
Daume, H. Fast Search for Dirichlet Process Mixture Models. In Conference on Artificial Intelligence and Statistics, 2007.
Escobar, M. D. and West, M. Bayesian Density Estimation and Inference using Mixtures. Journal of the American Statistical Association, 90(430):577–588, June 1995.
Fearnhead, P. Particle Filters for Mixture Models with an Unknown Number of Components. Statistics and Computing, 14:11–21, 2004.
Kurihara, K., Welling, M., and Vlassis, N. Accelerated Variational Dirichlet Mixture Models. In Advances in Neural Information Processing Systems (NIPS), 2006.
Lin, Dahua. Online learning of nonparametric mixture models via sequential variational approximation. In Burges, C.J.C., Bottou, L., Welling, M., Ghahramani, Z., and Weinberger, K.Q. (eds.), Advances in Neural Information Processing Systems 26, pp. 395–403. Curran Associates, Inc., 2013.
Neal, R. M. Bayesian Mixture Modeling. In Proceedings of the Workshop on Maximum Entropy and Bayesian Methods of Statistical Analysis, volume 11, pp. 197–211, 1992.
Neal, R. M. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9(2):249–265, June 2000.
Rasmussen, C. E. The infinite Gaussian mixture model. In Advances in Neural Information Processing Systems 12, pp. 554–560. MIT Press, 2000.
Tsiligkaridis, T. and Forsythe, K. W. A Sequential Bayesian Inference Framework for Blind Frequency Offset Estimation. In Proceedings of IEEE International Workshop on Machine Learning for Signal Processing, Boston, MA, September 2015.
Tzikas, D. G., Likas, A. C., and Galatsanos, N. P. The Variational Approximation for Bayesian Inference. IEEE Signal Processing Magazine, pp. 131–146, November 2008.
Wang, L. and Dunson, D. B. Fast Bayesian Inference in Dirichlet Process Mixture Models. Journal of Computational and Graphical Statistics, 20(1):196–216, 2011.
Eli Gutin
Operations Research Center, MIT
Cambridge, MA 02142
[email protected]
Vivek F. Farias
MIT Sloan School of Management
Cambridge, MA 02142
[email protected]
Abstract
Starting with the Thomspon sampling algorithm, recent years have seen a resurgence of interest in Bayesian algorithms for the Multi-armed Bandit (MAB) problem. These algorithms seek to exploit prior information on arm biases and while
several have been shown to be regret optimal, their design has not emerged from a
principled approach. In contrast, if one cared about Bayesian regret discounted over
an infinite horizon at a fixed, pre-specified rate, the celebrated Gittins index theorem
offers an optimal algorithm. Unfortunately, the Gittins analysis does not appear to
carry over to minimizing Bayesian regret over all sufficiently large horizons and
computing a Gittins index is onerous relative to essentially any incumbent index
scheme for the Bayesian MAB problem.
The present paper proposes a sequence of "optimistic" approximations to the
Gittins index. We show that the use of these approximations in concert with
the use of an increasing discount factor appears to offer a compelling alternative
to state-of-the-art index schemes proposed for the Bayesian MAB problem in
recent years by offering substantially improved performance with little to no
additional computational overhead. In addition, we prove that the simplest of these
approximations yields frequentist regret that matches the Lai-Robbins lower bound,
including achieving matching constants.
1 Introduction
The multi-armed bandit (MAB) problem is perhaps the simplest example of a learning problem
that exposes the tension between exploration and exploitation. Recent years have seen a resurgence
of interest in Bayesian MAB problems wherein we are endowed with a prior on arm rewards, and
a number of policies that exploit this prior have been proposed and/or analyzed. These include
Thompson Sampling [20], Bayes-UCB [12], KL-UCB [9], and Information Directed Sampling [19].
The ultimate motivation for these algorithms appears to be two-fold: superior empirical performance
and light computational burden. The strongest performance results available for these algorithms
establish regret lower bounds that match the Lai-Robbins lower bound [15]. Even among this set of
recently proposed algorithms, there is a wide spread in empirically observed performance.
Interestingly, the design of the index policies referenced above has been somewhat ad-hoc as opposed
to having emerged from a principled analysis of the underlying Markov Decision process. Now if in
contrast to requiring ?small? regret for all sufficiently large time horizons, we cared about minimizing
Bayesian regret over an infinite horizon, discounted at a fixed, pre-specified rate (or equivalently,
maximizing discounted infinite horizon rewards), the celebrated Gittins index theorem provides an
optimal, efficient solution. Importing this celebrated result to the fundamental problem of designing
algorithms that achieve low regret (either frequentist or Bayesian) simultaneously over all sufficiently
large time horizons runs into two substantial challenges:
High-Dimensional State Space: Even minor "tweaks" to the discounted infinite horizon objective
render the corresponding Markov Decision problem for the Bayesian MAB problem intractable. For
instance, it is known that a Gittins-like index strategy is sub-optimal for a fixed horizon [5], let alone
the problem of minimizing regret over all sufficiently large horizons.
Computational Burden: Even in the context of the discounted infinite horizon problem, the
computational burden of calculating a Gittins index is substantially larger than that required for any
of the index schemes for the multi-armed bandit discussed thus far.
The present paper attempts to make progress on these challenges. Specifically, we make the following
contributions:
• We propose a class of "optimistic" approximations to the Gittins index that can be computed
with significantly less effort. In fact, the computation of the simplest of these approximations
is no more burdensome than the computation of indices for the Bayes UCB algorithm, and
several orders of magnitude faster than the nearest competitor, IDS.
• We establish that an arm selection rule that is greedy with respect to the simplest of these
optimistic approximations achieves optimal regret in the sense of meeting the Lai-Robbins
lower bound (including matching constants), provided the discount factor is increased at a
certain rate.
• We show empirically that even the simplest optimistic approximation to the Gittins index
proposed here outperforms the state-of-the-art incumbent schemes discussed in this introduction
by a non-trivial margin. We view this as our primary contribution: the Bayesian MAB
problem is fundamental, making the performance improvements we demonstrate important.
Literature review Thompson Sampling [20] was proposed as a heuristic to the MAB problem
in 1933, but was largely ignored until the last decade. An empirical study by Chapelle and Li [7]
highlighted Thompson Sampling's superior performance and led to a series of strong theoretical
guarantees for the algorithm being proved in [2, 3, 12] (for specific cases when Gaussian and Beta
priors are used). Recently, these proofs were generalized to the 1D exponential family of distributions
in [13]. A few decades after Thompson Sampling was introduced, Gittins [10] showed that an index
policy was optimal for the infinite horizon discounted MAB problem. Several different proofs for the
optimality of Gittins index, were shown in [21, 22, 23, 6]. Inspired by this breakthrough, Lai and
Robbins [15, 14], while ignoring the original MDP formulation, proved an asymptotic lower bound
on achievable (non-discounted) regret and suggested policies that attained it.
Simple and efficient UCB algorithms were later developed by Agrawal and Auer et al. [1, 4], with
finite time regret bounds. These were followed by the KL-UCB [9] and Bayes UCB [12] algorithms.
The Bayes UCB paper drew attention to how well Bayesian algorithms performed in the frequentist
setting. In that paper, the authors also demonstrated that a policy using indices similar to Gittins? had
the lowest regret. The use of Bayesian techniques for bandits was explored further in [19] where
the authors propose Information Directed Sampling, an algorithm that exploits complex information
structures arising from the prior. There is also a very recent paper, [16], which also focuses on regret
minimization using approximated Gittins Indices. However, in that paper, the time horizon is assumed
to be known and fixed, which is different from the focus in this paper on finding a policy that has low
regret over all sufficiently long horizons.
2 Model and Preliminaries
We consider a multi-armed bandit problem with a finite set of arms A = {1, . . . , A}. Arm i ∈ A, if
pulled at time t, generates a stochastic reward X_{i,N_i(t)}, where N_i(t) denotes the cumulative number
of pulls of arm i up to and including time t. (X_{i,s}, s ∈ ℕ) is an i.i.d. sequence of random variables,
each distributed according to p_{θ_i}(·), where θ_i ∈ Θ is a parameter. Denote by θ the tuple of all
the θ_i. The expected reward from the ith arm is denoted by μ_i(θ_i) := E[X_{i,1} | θ_i]. We denote by
μ*(θ) the maximum expected reward across arms, μ*(θ) := max_i μ_i(θ_i), and let i* be an optimal
arm. The present paper will focus on the Bayesian setting, and so we suppose that each θ_i is an
independent draw from some prior distribution q over Θ. All random variables are defined on a
common probability space (Ω, F, P). We define a policy, π := (π_t, t ∈ ℕ), to be a stochastic process
taking values in A. We require that π be adapted to the filtration F_t generated by the history of arm
pulls and their corresponding rewards up to and including time t − 1.
Over time, the agent accumulates rewards, and we denote by

V(π, T, θ) := E[ ∑_{t=1}^{T} X_{π_t, N_{π_t}(t)} | θ ]

the reward accumulated up to time T when using policy π. We write V(π, T) := E[V(π, T, θ)]. The
regret of a policy over T time periods, for a specific realization θ ∈ Θ^A, is the expected shortfall
against always pulling the optimal arm, namely

Regret(π, T, θ) := T μ*(θ) − V(π, T, θ).
In a seminal paper, [15], Lai and Robbins established a lower bound on achievable regret. They
considered the class of policies under which, for any choice of θ and positive constant a, any policy
in the class achieves o(n^a) regret. They showed that for any policy π in this class, and any θ with a
unique maximum, we must have

lim inf_{T→∞} Regret(π, T, θ) / log T ≥ ∑_{i≠i*} (μ*(θ) − μ_i(θ_i)) / d_KL(p_{θ_i}, p_{θ_{i*}})    (1)

where d_KL is the Kullback-Leibler divergence. The Bayes risk (or Bayesian regret) is simply the
expected regret over draws of θ according to the prior q:

Regret(π, T) := T E[μ*(θ)] − V(π, T).
In yet another landmark paper, [15] showed that for a restricted class of priors q, a similar class of
algorithms to those found to be regret optimal in [14] was also Bayes optimal. Interestingly, however,
this class of algorithms ignores information about the prior altogether. A number of algorithms that
do exploit prior information have in recent years received a good deal of attention; these include
Thompson sampling [20], Bayes-UCB [12], KL-UCB [9], and Information Directed Sampling [19].
The Bayesian setting endows us with the structure of a (high dimensional) Markov Decision process.
An alternative objective to minimizing Bayes risk is the maximization of the cumulative reward
discounted over an infinite horizon. Specifically, for any positive discount factor γ < 1, define

V_γ(π) := E_q[ ∑_{t=1}^{∞} γ^{t−1} X_{π_t, N_{π_t}(t)} ].
The celebrated Gittins index theorem provides an optimal, efficient solution to this problem that we
will describe in greater detail shortly; unfortunately, as alluded to earlier, even a minor "tweak" to the
objective above, such as maximizing cumulative expected reward over a finite horizon, renders the
Gittins index sub-optimal [17].
As a final point of notation, every scheme we consider will maintain a posterior on the mean of an
arm at every point in time. We denote by q_{i,s} the posterior on the mean of the ith arm after s − 1
pulls of that arm; q_{i,1} := q. Since our prior on θ_i will frequently be conjugate to the distribution of
the reward X_i, q_{i,s} will permit a succinct description via a sufficient statistic we will denote by y_{i,s};
denote the set of all such sufficient statistics by Y. We will thus use q_{i,s} and y_{i,s} interchangeably and
refer to the latter as the "state" of the ith arm after s − 1 pulls.
3 Gittins Indices and Optimistic Approximations
One way to compute the Gittins index is via the so-called retirement value formulation [23]. The
Gittins index for arm i in state y is the value of λ that solves

λ/(1 − γ) = sup_{τ>1} E[ ∑_{t=1}^{τ−1} γ^{t−1} X_{i,t} + γ^{τ−1} λ/(1 − γ) | y_{i,1} = y ].    (2)

We denote this quantity by v_γ(y). If one thought of the notion of retiring as receiving a deterministic
reward λ in every period, then the value of λ that solves the above equation could be interpreted
as the per-period retirement reward that makes us indifferent between retiring immediately and the
option of continuing to play arm i with the potential of retiring at some future time. The Gittins index
policy can thus succinctly be stated as follows: at time t, play an arm in the set arg max_i v_γ(y_{i,N_i(t)}).
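For small state spaces, the fixed point in (2) can in fact be computed directly by truncating the stopping problem. The following is a minimal sketch (ours, not from the paper) for a Bernoulli arm with a Beta prior; the truncation depth N, the tolerance, and all function names are our own illustrative choices.

```python
# Sketch (ours, not from the paper): approximating the Gittins index of
# equation (2) for a Bernoulli arm with a Beta(a0, b0) prior, by truncating
# the stopping problem at depth N (forced retirement) and bisecting on
# lambda. States of the backward induction are Beta posteriors (a, b).
from functools import lru_cache


def gittins_index_beta(a0, b0, gamma, N=50, tol=1e-6):
    def continue_minus_retire(lam):
        retire = lam / (1.0 - gamma)

        @lru_cache(maxsize=None)
        def V(a, b):
            # Optimal value with the option to retire, truncated at depth N.
            if a + b - a0 - b0 >= N:
                return retire
            p = a / (a + b)  # posterior mean of the Bernoulli parameter
            cont = p * (1.0 + gamma * V(a + 1, b)) + (1.0 - p) * gamma * V(a, b + 1)
            return max(retire, cont)

        p0 = a0 / (a0 + b0)
        cont0 = p0 * (1.0 + gamma * V(a0 + 1, b0)) + (1.0 - p0) * gamma * V(a0, b0 + 1)
        return cont0 - retire  # > 0 iff playing once more beats retiring

    lo, hi = a0 / (a0 + b0), 1.0  # the index lies between the mean and 1
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if continue_minus_retire(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)


if __name__ == "__main__":
    print(gittins_index_beta(1, 1, gamma=0.9))  # flat prior
```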
Ignoring computational considerations, we cannot hope for a scheme such as the one above to achieve
acceptable regret or Bayes risk. Specifically, denoting the Gittins policy by π^{G,γ}, we have
Lemma 3.1. There exists an instance of the multi-armed bandit problem with |A| = 2 for which

Regret(π^{G,γ}, T) = Ω(T)

for any γ ∈ (0, 1).
The above result is expected. If the posterior means on the two arms are sufficiently far apart, the
Gittins index policy will pick the arm with the larger posterior mean. The threshold beyond which
the Gittins policy "exploits" depends on the discount factor, and with a fixed discount factor there is a
positive probability that the superior arm is never explored sufficiently so as to establish that it is, in
fact, the superior arm. Fixing this issue then requires that the discount factor employed increase over time.
Consider then employing discount factors that increase at roughly the rate 1 − 1/t; specifically,
consider setting

γ_t = 1 − 1/2^{⌊log₂ t⌋+1},

and consider using the policy that at time t picks an arm from the set arg max_i v_{γ_t}(y_{i,N_i(t)}). Denote
this policy by π^D. The following proposition shows that this "doubling" policy achieves Bayes risk
that is within a factor of log T of the optimal Bayes risk. Specifically, we have:

Proposition 3.1. Regret(π^D, T) = O(log³ T)
where the constant in the big-Oh term depends on the prior q and A. The proof of this simple result
(Appendix A.1) relies on showing that the finite horizon regret achieved by using a Gittins index with
an appropriate fixed discount factor is within a constant factor of the optimal finite horizon regret.
The second ingredient is a doubling trick.
While increasing discount factors do not appear to get us to the optimal Bayes risk (the achievable
lower bound being log² T; see [14]), we conjecture that this is in fact a deficiency in our analysis
for Proposition 3.1. In any case, the policy π^D is not the primary subject of the paper but merely a
motivation for the discount factor schedule proposed. Putting aside this issue, one is still left with the
computational burden associated with π^D, which is clearly onerous relative to any of the incumbent
index rules discussed in the introduction.
3.1 Optimistic Approximations to the Gittins Index
The retirement value formulation makes clear that computing a Gittins index is equivalent to solving
a discounted, infinite horizon stopping problem. Since the state space Y associated with this problem
is typically at least countable, solving this stopping problem, although not necessarily intractable, is a
non-trivial computational task. Consider the following alternative stopping problem that requires as
input the parameter λ (which has the same interpretation as it did before) and K, an integer limiting
the number of steps that we need to look ahead. For an arm in state y (recall that the state specifies
sufficient statistics for the current prior on the arm reward), let R(y) be a random variable drawn from
the prior on expected arm reward specified by y. Define the retirement value R_{λ,K}(s, y) according to

R_{λ,K}(s, y) = { λ,              if s < K + 1
               { max(λ, R(y)),   otherwise.
For a given K, the Optimistic Gittins Index for arm i in state y is now defined as the value of λ that
solves

λ/(1 − γ) = sup_{1<τ≤K+1} E[ ∑_{s=1}^{τ−1} γ^{s−1} X_{i,s} + γ^{τ−1} R_{λ,K}(τ, y_{i,τ})/(1 − γ) | y_{i,1} = y ].    (3)

We denote the solution to this equation by v_γ^K(y). The problem above admits a simple, attractive
interpretation: nature reveals the true mean reward for the arm at time K + 1 should we choose to
not retire prior to that time, which enables the decision maker to then instantaneously decide whether
to retire at time K + 1 or else, never retire. In this manner one is better off than in the stopping
problem inherent to the definition of the Gittins index, so that we use the moniker optimistic. Since
we need to look ahead at most K steps in solving the stopping problem implicit in the definition
above, the computational burden in index computation is limited. The following Lemma formalizes
this intuition.

Lemma 3.2. For all discount factors γ and states y ∈ Y, we have

v_γ^K(y) ≥ v_γ(y)    for all K.
Proof. See Appendix A.2.
It is instructive to consider the simplest version of the approximation proposed here, namely the case
where K = 1. There, equation (3) simplifies to

λ = μ̂(y) + γ E[(λ − R(y))⁺]    (4)

where μ̂(y) := E[R(y)] is the mean reward under the prior given by y. The equation for λ above can
also be viewed as an upper confidence bound on an arm's expected reward. Solving equation (4) is
often simple in practice, and we list a few examples to illustrate this:
Example 3.1 (Beta). In this case y is the pair (a, b), which specifies a Beta prior distribution. The
1-step Optimistic Gittins Index is the value of λ that solves

λ = a/(a+b) + γ E[(λ − Beta(a, b))⁺] = (a/(a+b)) (1 − γ F_{a+1,b}(λ)) + γ λ F_{a,b}(λ)

where F_{a,b} is the CDF of a Beta distribution with parameters a and b.
Example 3.2 (Gaussian). Here y = (μ, σ²), which specifies a Gaussian prior, and the corresponding
equation is

λ = μ + γ E[(λ − N(μ, σ²))⁺] = μ + γ[ (λ − μ) Φ((λ − μ)/σ) + σ φ((λ − μ)/σ) ]

where φ and Φ denote the standard Gaussian PDF and CDF, respectively.
Notice that in both the Beta and Gaussian examples, the equations for λ are in terms of distribution
functions. It is therefore straightforward to compute a derivative of these equations (in terms of the
density and CDF of the prior), which makes finding a solution, using a method such as
Newton-Raphson, simple and efficient.
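As an illustration (ours, not from the original text), the following sketch solves equation (4) numerically for the two examples above; it assumes SciPy is available and uses bracketed root-finding (scipy.optimize.brentq) in place of Newton-Raphson purely for brevity.

```python
# Sketch: solving the 1-step Optimistic Gittins Index equation (4),
#     lambda = mu_hat(y) + gamma * E[(lambda - R(y))^+],
# for Beta and Gaussian priors. Assumes SciPy; uses bisection (brentq)
# rather than Newton-Raphson only to keep the example short.
from scipy.optimize import brentq
from scipy.stats import beta as beta_dist, norm


def ogi_beta(a, b, gamma):
    """1-step OGI for a Beta(a, b) prior on a Bernoulli arm (Example 3.1)."""
    mu = a / (a + b)

    def fixed_point(lam):
        # E[(lam - X)^+] = lam * F_{a,b}(lam) - mu * F_{a+1,b}(lam)
        e_plus = lam * beta_dist.cdf(lam, a, b) - mu * beta_dist.cdf(lam, a + 1, b)
        return mu + gamma * e_plus - lam

    # The index lies in [mu, 1]: the residual is >= 0 at lam = mu and
    # <= 0 at lam = 1, so bracketed root-finding is safe.
    return brentq(fixed_point, mu, 1.0)


def ogi_gaussian(mu, sigma, gamma):
    """1-step OGI for an N(mu, sigma^2) prior on the arm mean (Example 3.2)."""

    def fixed_point(lam):
        z = (lam - mu) / sigma
        # E[(lam - X)^+] = (lam - mu) * Phi(z) + sigma * phi(z)
        e_plus = (lam - mu) * norm.cdf(z) + sigma * norm.pdf(z)
        return mu + gamma * e_plus - lam

    # A generous upper bracket: the index exceeds mu by O(sigma / (1 - gamma)).
    upper = mu + 10.0 * sigma / (1.0 - gamma)
    return brentq(fixed_point, mu, upper)


if __name__ == "__main__":
    print(ogi_beta(1, 1, gamma=1 - 1 / 100))   # flat prior, t = 100
    print(ogi_gaussian(0.0, 1.0, gamma=0.99))
```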
We summarize the Optimistic Gittins Index (OGI) algorithm succinctly as follows. Assume the state
of arm i at time t is given by y_{i,t}, and let γ_t = 1 − 1/t. Play an arm

i ∈ arg max_i v_{γ_t}^K(y_{i,t}),

and update the posterior on the arm based on the observed reward.
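For completeness, here is a self-contained simulation sketch (ours, not from the paper) of the full OGI policy on a Bernoulli bandit with flat Beta(1, 1) priors and K = 1; the arm means, horizon, and tuning parameter below are illustrative placeholders, not values prescribed by the paper.

```python
# Sketch: the OGI policy on a Bernoulli bandit with flat Beta(1, 1) priors,
# K = 1 and discount gamma_t = 1 - 1/(t + alpha). Arm means and horizon are
# illustrative placeholders.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import beta as beta_dist


def ogi_beta(a, b, gamma):
    # Same 1-step index computation as in the previous sketch.
    mu = a / (a + b)
    f = lambda lam: mu + gamma * (lam * beta_dist.cdf(lam, a, b)
                                  - mu * beta_dist.cdf(lam, a + 1, b)) - lam
    return brentq(f, mu, 1.0)


def run_ogi(theta, T, alpha=0.0, rng=None):
    """Play arg-max of the 1-step OGI for T rounds; return realized regret."""
    rng = np.random.default_rng(rng)
    n_arms = len(theta)
    a = np.ones(n_arms)  # Beta posterior parameters (flat prior)
    b = np.ones(n_arms)
    regret = 0.0
    for t in range(1, T + 1):
        gamma_t = 1.0 - 1.0 / (t + alpha)  # tuning parameter alpha (Sec. 4.1)
        indices = [ogi_beta(a[i], b[i], gamma_t) for i in range(n_arms)]
        i = int(np.argmax(indices))
        reward = rng.binomial(1, theta[i])
        a[i] += reward
        b[i] += 1 - reward
        regret += max(theta) - theta[i]
    return regret


if __name__ == "__main__":
    print(run_ogi(theta=[0.3, 0.5, 0.7], T=1000, alpha=100, rng=0))
```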
4 Analysis
We establish a regret bound for Optimistic Gittins Indices when the algorithm is given the parameter
K = 1, the prior distribution q is uniform and arm rewards are Bernoulli. The result shows that
the algorithm, in that case, meets the Lai-Robbins lower bound and is thus asymptotically optimal,
in both a frequentist and Bayesian sense. After stating the main theorem, we briefly discuss two
generalizations to the algorithm.
In the sequel, whenever x, y ∈ (0, 1), we will simplify notation and let d(x, y) :=
d_KL(Ber(x), Ber(y)). Also, we will refer to the Optimistic Gittins Index policy simply as π^{OG},
with the understanding that this refers to the case when K, the "look-ahead" parameter, equals 1 and
a flat Beta prior is used. Moreover, we will denote the Optimistic Gittins Index of the ith arm as
v_{i,t} := v_{1−1/t}^1(y_{i,t}). Now we state the main result:

Theorem 1. Let ε > 0. For the multi-armed bandit problem with Bernoulli rewards and any
parameter vector θ ∈ [0, 1]^A, there exists T* = T*(ε, θ) and C = C(ε, θ) such that for all T ≥ T*,

Regret(π^{OG}, T, θ) ≤ ∑_{i=1,...,A; i≠i*} (1 + ε)²(θ* − θ_i)/d(θ_i, θ*) · log T + C(ε, θ)    (5)

where C(ε, θ) is a constant that is only determined by θ and the parameter ε.
Proof. Because we prove frequentist regret, the first few steps of the proof will be similar to those of
UCB and Thompson Sampling.
Assume w.l.o.g. that arm 1 is uniquely optimal, and therefore θ* = θ₁. Fix an arbitrary suboptimal
arm, which for convenience we will say is arm 2. Let j_t and k_t denote the number of pulls of arms
1 and 2, respectively, by (but not including) time t. Finally, we let s_t and s'_t be the corresponding
integer rewards accumulated from arms 1 and 2, respectively. That is,

s_t = ∑_{s=1}^{j_t} X_{1,s},    s'_t = ∑_{s=1}^{k_t} X_{2,s}.

Therefore, by definition, j₁ = k₁ = s₁ = s'₁ = 0. Let η₁, η₂, η₃ ∈ (θ₂, θ₁) be chosen such that
η₁ < η₂ < η₃, d(η₁, η₃) = d(θ₂, θ₁)/(1 + ε) and d(η₂, η₃) = d(η₁, η₃)/(1 + ε). Next, we define
L(T) := log T / d(η₂, η₃).
We upper bound the expected number of pulls of the second arm as follows:

E[k_T] ≤ L(T) + ∑_{t=⌊L(T)⌋+1}^{T} P(π_t^{OG} = 2, k_t ≥ L(T))
       ≤ L(T) + ∑_{t=1}^{T} P(v_{1,t} < η₃) + ∑_{t=1}^{T} P(π_t^{OG} = 2, v_{1,t} ≥ η₃, k_t ≥ L(T))
       ≤ L(T) + ∑_{t=1}^{T} P(v_{1,t} < η₃) + ∑_{t=1}^{T} P(π_t^{OG} = 2, v_{2,t} ≥ η₃, k_t ≥ L(T))
       ≤ (1 + ε)² log T / d(θ₂, θ₁) + A + B,    (6)

where A := ∑_{t=1}^{T} P(v_{1,t} < η₃) and B := ∑_{t=1}^{T} P(π_t^{OG} = 2, v_{2,t} ≥ η₃, k_t ≥ L(T)).
All that remains is to show that terms A and B are bounded by constants. These bounds are given in
Lemmas 4.1 and 4.2, whose proofs we describe at a high level, with the details in the Appendix.
Lemma 4.1 (Bound on term A). For any η < θ₁, the following bound holds for some constant
C₁ = C₁(η, θ₁):

∑_{t=1}^{∞} P(v_{1,t} < η) ≤ C₁.
Proof outline. The goal is to bound P(v_{1,t} < η) by an expression that decays fast enough in t
so that the series converges. To prove this, we shall express the event {v_{1,t} < η} in the form
{W_t < 1/t} for some sequence of random variables W_t. It turns out that for large enough t,
P(W_t < 1/t) ≤ P(c U^{1/(1+h)} < 1/t), where U is a uniform random variable and c, h > 0, and
therefore P(v_{1,t} < η) = O(1/t^{1+h}). The full proof is in Appendix A.4.
We remark that the core technique in the proof of Lemma 4.1 is the use of the Beta CDF. As such,
our analysis can, in some sense, improve the result for Bayes UCB. In the main theorem of [12], the
authors state that the quantile in their algorithm is required to be 1 − 1/(t log^c T) for some parameter
c ≥ 5; however, they show simulations with the quantile 1 − 1/t and suggest that, in practice, it
should be used instead. By utilizing techniques in our analysis, it is possible to prove that the use of
1 − 1/t as a discount factor in Bayes UCB would lead to the same optimal regret bound. Therefore
the "scaling" by log^c T is unnecessary.
Lemma 4.2 (Bound on term B). There exists T* = T*(ε, θ) sufficiently large and a constant
C₂ = C₂(ε, θ₁, θ₂) so that for any T ≥ T*, we have

∑_{t=1}^{T} P(π_t^{OG} = 2, v_{2,t} ≥ η₃, k_t ≥ L(T)) ≤ C₂.

Proof outline. This relies on a concentration of measure result and the assumption that the 2nd arm
was sampled at least L(T) times. The full proof is given in Appendix A.5.
Lemmas 4.1 and 4.2, together with (6), imply that

E[k_T] ≤ (1 + ε)² log T / d(θ₂, θ₁) + C₁ + C₂,

from which the regret bound follows.
4.1 Generalizations and a tuning parameter
There is an argument in Agrawal and Goyal [2] which shows that any algorithm optimal for the
Bernoulli bandit problem can be modified to yield an algorithm that has O(log T) regret with
general bounded stochastic rewards. Therefore Optimistic Gittins Indices is an effective and practical
alternative to policies such as Thompson Sampling and UCB. We also suspect that the proof of
Theorem 1 can be generalized to all lookahead values (K > 1) and to a general exponential family of
distributions.
Another important observation is that the discount factor for Optimistic Gittins Indices does not
have to be exactly 1 − 1/t. In fact, a tuning parameter α > 0 can be added to make the discount
factor γ_t = 1 − 1/(t + α) instead. An inspection of the proofs of Lemmas 4.1 and 4.2 shows
that the result in Theorem 1 would still hold were one to use such a tuning parameter. In practice,
performance is remarkably robust to our choice of K and α.
5 Experiments
Our goal is to benchmark Optimistic Gittins Indices (OGI) against state-of-the-art algorithms in the
Bayesian setting. Specifically, we compare ourselves against Thompson Sampling, Bayes UCB, and
IDS. Each of these algorithms has in turn been shown to substantially dominate other extant schemes.
We consider the OGI algorithm for two values of the lookahead parameter K (1 and 3), and, in
one experiment included for completeness, the case of exact Gittins indices (K = ∞). We used a
common discount factor schedule in all experiments, setting γ_t = 1 − 1/(100 + t). The choice of
α = 100 is second order and our conclusions remain unchanged (and actually appear to improve in
an absolute sense) with other choices (we show this in a second set of experiments).
A major consideration in running the experiments is that the CPU time required to execute IDS
(the closest competitor), based on the currently suggested implementation, is orders of magnitude
greater than that of the index schemes or Thompson Sampling. The main bottleneck is that IDS uses
numerical integration, requiring the calculation of a CDF over, at least, hundreds of iterations. By
contrast, the version of OGI with K = 1 uses 10 iterations of the Newton-Raphson method. In the
remainder of this section, we discuss the results.
Gaussian This experiment (Table 1) replicates one in [19]. Here the arms generate Gaussian
rewards X_{i,t} ∼ N(θ_i, 1), where each θ_i is independently drawn from a standard Gaussian distribution.
We simulate 1000 independent trials with 10 arms and 1000 time periods. The implementation of
OGI in this experiment uses K = 1. It is difficult to compute exact Gittins indices in this setting, but
a classical approximation for Gaussian bandits does exist; see [18], Chapter 6.1.3. We term the use of
that approximation "OGI(1) Approx". In addition to regret, we show the average CPU time taken, in
seconds, to execute each trial.
Algorithm       OGI(1)   OGI(1) Approx.   IDS     TS      Bayes UCB
Mean Regret     49.19    47.64            55.83   67.40   60.30
S.D.            51.07    50.59            65.88   47.38   45.35
1st quartile    17.49    16.88            18.61   37.46   31.41
Median          41.72    40.99            40.79   63.06   57.71
3rd quartile    73.24    72.26            78.76   94.52   86.40
CPU time (s)    0.02     0.01             11.18   0.01    0.02

Table 1: Gaussian experiment. OGI(1) denotes OGI with K = 1, while OGI(1) Approx. uses the
approximation to the Gaussian Gittins index from [18].
The key feature of the results here is that OGI offers an approximately 10% improvement in regret
over its nearest competitor IDS, and larger improvements (20% and 40%, respectively) over Bayes
UCB and Thompson Sampling. The best performing policy is OGI with the specialized Gaussian
approximation, since it gives a closer approximation to the Gittins index. At the same time, OGI
is essentially as fast as Thompson Sampling, and three orders of magnitude faster than its nearest
competitor (in terms of regret).
Bernoulli In this experiment regret is simulated over 1000 periods, with 10 arms each having a
uniformly distributed Bernoulli parameter, over 1000 independent trials (Table 2). We use the same
setup as in [19] for consistency.
Algorithm       OGI(1)   OGI(3)   OGI(∞)      IDS     TS      Bayes UCB
Mean Regret     18.12    18.00    17.52       19.03   27.39   22.71
1st quartile    6.26     5.60     4.45        5.85    14.62   10.09
Median          15.08    14.84    12.06       14.06   23.53   18.52
3rd quartile    27.63    27.74    24.93       26.48   36.11   30.58
CPU time (s)    0.19     0.89     (?) hours   8.11    0.01    0.05

Table 2: Bernoulli experiment. OGI(K) denotes the OGI algorithm with a K-step approximation and
tuning parameter α = 100. OGI(∞) is the algorithm that uses exact Gittins indices.
Each version of OGI outperforms the other algorithms, and the one that uses (actual) Gittins indices
has the lowest mean regret. Perhaps unsurprisingly, when OGI looks ahead 3 steps it performs
marginally better than with a single step. Nevertheless, looking ahead 1 step is a reasonably close
approximation to the Gittins index in the Bernoulli problem. In fact, the approximation error when
using an optimistic 1-step approximation is around 15%, and if K is increased to 3, the error drops to
around 4%.
(a) Gaussian experiment  (b) Bernoulli experiment

Figure 1: Bayesian regret. In the legend, OGI(K)-α is the format used to indicate the parameters K
and α. The OGI Approx policy uses the approximation to the Gittins index from [18].
Longer Horizon and Robustness For this experiment, we simulate the earlier Bernoulli and
Gaussian bandit setups with a longer horizon of 5000 steps and with 3 arms. The arms' parameters
are drawn at random in the same manner as in the previous two experiments, and regret is averaged
over 100,000 independent trials. Results are shown in Figures 1a and 1b. In the Bernoulli experiment
of this section, due to the computational cost, we are only able to simulate OGI with K = 1. In
addition, to show robustness with respect to the choice of tuning parameter α, we show results for
α = 50, 100, 150. The message here is essentially the same as in the earlier experiments: the OGI
scheme offers a non-trivial performance improvement at a tiny fraction of the computational effort
required by its nearest competitor. We omit Thompson Sampling and Bayes UCB from the plots in
order to more clearly see the difference between OGI and IDS. The complete graphs can be found in
Appendix A.6.
References
[1] Agrawal, R. Sample mean based index policies with O(log n) regret for the multi-armed bandit
problem. Advances in Applied Probability (1995), 1054-1078.
[2] Agrawal, S., and Goyal, N. Analysis of Thompson Sampling for the multi-armed bandit
problem. In Proceedings of The 25th Conference on Learning Theory, pp. 39.1-39.26.
[3] Agrawal, S., and Goyal, N. Further optimal regret bounds for Thompson Sampling. In
Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics
(2013), pp. 99-107.
[4] Auer, P., Cesa-Bianchi, N., and Fischer, P. Finite-time analysis of the multiarmed bandit
problem. Machine Learning 47, 2-3 (2002), 235-256.
[5] Berry, D. A., and Fristedt, B. Bandit Problems: Sequential Allocation of Experiments
(Monographs on Statistics and Applied Probability). Springer, 1985.
[6] Bertsimas, D., and Niño-Mora, J. Conservation laws, extended polymatroids and multiarmed
bandit problems; a polyhedral approach to indexable systems. Mathematics of Operations
Research 21, 2 (1996), 257-306.
[7] Chapelle, O., and Li, L. An empirical evaluation of Thompson Sampling. In Advances in
Neural Information Processing Systems (2011), pp. 2249-2257.
[8] Cover, T. M., and Thomas, J. A. Elements of Information Theory. John Wiley & Sons, 2012.
[9] Garivier, A. The KL-UCB algorithm for bounded stochastic bandits and beyond. In COLT
(2011).
[10] Gittins, J. C. Bandit processes and dynamic allocation indices. Journal of the Royal Statistical
Society. Series B (Methodological) (1979), 148-177.
[11] Jogdeo, K., and Samuels, S. M. Monotone convergence of binomial probabilities and a
generalization of Ramanujan's equation. The Annals of Mathematical Statistics (1968), 1191-1195.
[12] Kaufmann, E., Korda, N., and Munos, R. Thompson Sampling: an asymptotically optimal
finite-time analysis. In Algorithmic Learning Theory (2012), Springer, pp. 199-213.
[13] Korda, N., Kaufmann, E., and Munos, R. Thompson Sampling for 1-dimensional exponential
family bandits. In Advances in Neural Information Processing Systems (2013), pp. 1448-1456.
[14] Lai, T. L. Adaptive treatment allocation and the multi-armed bandit problem. The Annals of
Statistics (1987), 1091-1114.
[15] Lai, T. L., and Robbins, H. Asymptotically efficient adaptive allocation rules. Advances in
Applied Mathematics 6, 1 (1985), 4-22.
[16] Lattimore, T. Regret analysis of the finite-horizon Gittins index strategy for multi-armed
bandits. In Proceedings of The 29th Conference on Learning Theory (2016), pp. 1-32.
[17] Niño-Mora, J. Computing a classic index for finite-horizon bandits. INFORMS Journal on
Computing 23, 2 (2011), 254-267.
[18] Powell, W. B., and Ryzhov, I. O. Optimal Learning, vol. 841. John Wiley & Sons, 2012.
[19] Russo, D., and Van Roy, B. Learning to optimize via information-directed sampling. In
Advances in Neural Information Processing Systems (2014), pp. 1583-1591.
[20] Thompson, W. R. On the likelihood that one unknown probability exceeds another in view of
the evidence of two samples. Biometrika (1933), 285-294.
[21] Tsitsiklis, J. N. A short proof of the Gittins index theorem. The Annals of Applied Probability
(1994), 194-199.
[22] Weber, R. On the Gittins index for multi-armed bandits. The Annals of Applied Probability
2, 4 (1992), 1024-1033.
[23] Whittle, P. Multi-armed bandits and the Gittins index. Journal of the Royal Statistical Society.
Series B (Methodological) (1980), 143-149.
with Non-uniform Sampling
Peng Xu* Jiyan Yang* Farbod Roosta-Khorasani† Christopher Ré* Michael W. Mahoney†
* Stanford University
† University of California at Berkeley
[email protected] [email protected] [email protected]
[email protected] [email protected]
Abstract
We consider the problem of finding the minimizer of a convex function F : ℝ^d → ℝ
of the form F(w) := ∑_{i=1}^n f_i(w) + R(w), where a low-rank factorization of
∇²f_i(w) is readily available. We consider the regime where n ≫ d. We propose
randomized Newton-type algorithms that exploit non-uniform sub-sampling of
{∇²f_i(w)}_{i=1}^n, as well as inexact updates, as means to reduce the computational
complexity, and are applicable to a wide range of problems in machine learning.
Two non-uniform sampling distributions based on block norm squares and block
partial leverage scores are considered. Under certain assumptions, we show that
our algorithms inherit a linear-quadratic convergence rate in w and achieve a lower
computational complexity compared to similar existing methods. In addition, we
show that our algorithms exhibit more robustness and better dependence on problem-specific
quantities, such as the condition number. We empirically demonstrate that
our methods are at least twice as fast as Newton's method on several real datasets.
1 Introduction
Many machine learning applications involve finding the minimizer of optimization problems of the
form

min_{w∈C} F(w) := ∑_{i=1}^n f_i(w) + R(w)    (1)

where f_i(w) is a smooth convex function, R(w) is a regularizer, and C ⊆ ℝ^d is a convex constraint
set (e.g., an ℓ₁ ball). Examples include sparse least squares [21], generalized linear models (GLMs) [8],
and metric learning problems [12].
First-order optimization algorithms have been the workhorse of machine learning applications, and
there is a plethora of such methods [3, 17] for solving (1). However, for ill-conditioned problems,
it is often the case that first-order methods return a solution far from w*, albeit with a low objective
value. Most second-order algorithms, in contrast, prove to be more robust to such adversarial
effects. This is so since, using the curvature information, second-order methods properly rescale
the gradient so that it is a more appropriate direction to follow. For example, take the canonical
second-order method, i.e., Newton's method, which, in the unconstrained case, has updates of the
form w_{t+1} = w_t − [H(w_t)]⁻¹g(w_t) (here, g(w_t) and H(w_t) denote the gradient and the Hessian
of F at w_t, respectively). Classical results indicate that under certain assumptions, Newton's method
can achieve a locally super-linear convergence rate, which can be shown to be problem independent!
Nevertheless, the cost of forming and inverting the Hessian is a major drawback to using Newton's
method in practice. In this regard, there has been a long line of work aimed at providing sufficient
second-order information more efficiently, e.g., the classical BFGS algorithm and its limited-memory
version [14, 17].
As the mere evaluation of H(w) grows linearly in n, a natural idea is to uniformly sub-sample
{∇²f_i(w)}_{i=1}^n as a way to reduce the cost of such an evaluation [7, 19, 20]. However, in the presence
of high non-uniformity among {∇²f_i(w)}_{i=1}^n, the sampling size required to sufficiently capture the
curvature information of the Hessian can be very large. In such situations, non-uniform sampling can
indeed be a much better alternative and is addressed in this work in detail.
In this work, we propose novel, robust and highly efficient non-uniformly sub-sampled Newton
methods (SSN) for a large sub-class of problem (1), in which the Hessian of F(w) in (1) can be
written as

H(w) = ∑_{i=1}^n A_i(w)ᵀA_i(w) + Q(w),

where A_i(w) ∈ ℝ^{k_i×d}, i = 1, 2, . . . , n, are readily available and Q(w) is some positive
semi-definite matrix. This situation arises very frequently in machine learning problems. For
example, take any problem where f_i(w) = ℓ(x_iᵀw), ℓ(·) is any convex loss function, and the x_i's
are data points. In such situations, A_i(w) is simply √(ℓ''(x_iᵀw)) x_iᵀ. Under this setting, non-uniformly
sub-sampling the Hessians now boils down to building an appropriate non-uniform
distribution to sub-sample the most "relevant" terms among {A_i(w)}_{i=1}^n. The approximate
Hessian, denoted by H̃(w_t), is then used to update the current iterate as
w_{t+1} = w_t − [H̃(w_t)]⁻¹g(w_t). Furthermore, in order to improve upon the overall efficiency of
our SSN algorithms, we will allow for the linear system in the sub-problem to be solved inexactly,
i.e., using only a few iterations of an iterative solver such as Conjugate Gradient (CG). Such inexact
updates, used in many second-order optimization algorithms, have been well studied in [4, 5].
As we shall see (in Section 4), our algorithms converge much faster than other competing methods
for a variety of problems. In particular, on several machine learning datasets, our methods are at least
twice as fast as Newton's method in finding a high-precision solution while other methods converge
slowly. Indeed, this phenomenon is well supported by our theoretical findings: the complexity of
our algorithms has a lower dependence on the problem condition number and is immune to any
non-uniformity among {A_i(w)}_{i=1}^n that may cause a factor of n in the complexity (Table 1).
In the following we present details of our main contributions and connections to other prior work.
Readers interested in more details should see the technical report version of this conference paper [23]
for proofs of our main results, additional theoretical results, as well as a more detailed empirical
evaluation.
1.1 Contributions and related work
Recently, within the context of randomized second-order methods, many algorithms have been
proposed that aim at reducing the computational costs of pure Newton's method. Among
them, algorithms that employ uniform sub-sampling constitute a popular line of work [4, 7, 16, 22].
In particular, [19, 20] consider a more general class of problems and, under a variety of conditions,
thoroughly study the local and global convergence properties of sub-sampled Newton methods where
the gradient and/or the Hessian are uniformly sub-sampled. Our work here, however, is more closely
related to a recent work [18] (Newton Sketch), which considers a similar class of problems and
proposes sketching the Hessian using random sub-Gaussian matrices or randomized orthonormal
systems. Furthermore, [1] proposes a stochastic algorithm (LiSSA) that, for solving the sub-problems,
employs some unbiased estimators of the inverse of the Hessian.
In light of these prior works, our contributions can be summarized as follows.
• For the class of problems considered here, unlike the uniform sampling used in [4, 7, 19, 20], we
employ two non-uniform sampling schemes based on block norm squares and a new, and more
general, notion of leverage scores named block partial leverage scores (Definition 1). It can be
shown that in the case of extreme non-uniformity among {A_i(w)}_{i=1}^n, uniform sampling might
require Ω(n) samples to capture the Hessian information appropriately. However, we show that our
non-uniform sampling schemes result in sample sizes completely independent of n and immune to
such non-uniformity.
• Within the context of globally convergent randomized second-order algorithms, [4, 20] incorporate
inexact updates where the sub-problems are solved only approximately. We extend the study of
inexactness to our local convergence analysis.
• We provide a general structural result (Lemma 2) showing that, as in [7, 18, 19], our main algorithm
exhibits a linear-quadratic solution error recursion. However, we show that by using our non-uniform
sampling strategies, the factors appearing in such an error recursion enjoy a much better
dependence on problem-specific quantities, such as the condition number (Table 2). For
example, using block partial leverage score sampling, the factor for the linear term of the error
recursion (5) is of order O(√κ), as opposed to O(κ) for uniform sampling.
• We demonstrate that to achieve a locally problem-independent linear convergence rate, i.e.,
‖w_{t+1} − w*‖ ≤ ρ‖w_t − w*‖ for some fixed ρ < 1, our algorithms achieve a lower per-iteration
complexity compared to [1, 18, 20] (Table 1). In particular, unlike Newton Sketch [18], which employs random
Table 1: Complexity per iteration of different methods to obtain a problem-independent local linear
convergence rate. The quantities κ, κ̂, and κ̄ are the local condition numbers, defined in (6), satisfying
κ ≤ κ̂ ≤ κ̄, at the optimum w*. A is defined in Assumption A.3 and sr(A) is the stable rank of A,
satisfying sr(A) ≤ d. Here we assume k_i = 1, C = ℝ^d, R(w) = 0, and CG is used for solving the
sub-problems in our algorithms.

NAME                     COMPLEXITY PER ITERATION               REFERENCE
Newton-CG method         Õ(nnz(A)·√κ)                           [17]
SSN (leverage scores)    Õ(nnz(A)·log n + d²κ^{3/2})            This paper
SSN (row norm squares)   Õ(nnz(A) + sr(A)·d·κ^{5/2})            This paper
Newton Sketch (SRHT)     Õ(nd(log n)⁴ + d²(log n)⁴·κ^{3/2})     [18]
SSN (uniform)            Õ(nnz(A) + d·κ̂·κ^{3/2})                [20]
LiSSA                    Õ(nnz(A) + d·κ̂·κ̄²)                     [1]
projections and fails to preserve the sparsity of {A_i(w)}_{i=1}^n, our methods indeed take advantage
of such sparsity. Also, in the presence of high non-uniformity among {A_i(w)}_{i=1}^n, the factors κ̂
and κ̄ (see Definition (6)), which appear in SSN (uniform) [19] and LiSSA [1], can potentially be as
large as Ω(nκ); see Section 3.5 for detailed discussions.
• We numerically demonstrate the effectiveness and robustness of our algorithms in recovering the
minimizer of ridge logistic regression on several real datasets (Figures 1 and 2). In particular, our
algorithms are at least twice as fast as Newton's method in finding a high-precision solution while
other methods converge slowly.
1.2 Notation and assumptions
Given a function F, the gradient, the exact Hessian, and the approximate Hessian are denoted by g, H,
and H̃, respectively. The iteration counter is denoted by a subscript, e.g., w_t. Unless stated otherwise,
‖·‖ denotes the Euclidean norm for vectors and the spectral norm for matrices. The Frobenius norm
of a matrix is written as ‖·‖_F. By a matrix A having n blocks, we mean that A has a block structure
and can be viewed as A = [A₁ᵀ ··· A_nᵀ]ᵀ, for appropriately sized blocks A_i. The tangent cone of the
constraint set C at the optimum w* is denoted by K and defined as K = {Δ | w* + tΔ ∈ C for some
t > 0}. Given a symmetric matrix A, the K-restricted minimum and maximum eigenvalues of A are
defined, respectively, as λ_min^K(A) = min_{x∈K\{0}} xᵀAx/xᵀx and λ_max^K(A) = max_{x∈K\{0}} xᵀAx/xᵀx.
The stable rank of a matrix A is defined as sr(A) = ‖A‖_F²/‖A‖₂². We use nnz(A) to denote the
number of non-zero elements in A.
Throughout the paper, we make use of the following assumptions:
A.1 Lipschitz Continuity: F(w) is convex and twice differentiable with an L-Lipschitz Hessian, i.e.,
‖H(u) − H(v)‖ ≤ L‖u − v‖, ∀u, v ∈ C.
A.2 Local Regularity: F(w) is locally strongly convex and smooth, i.e., μ = λ_min^K(H(w*)) > 0
and ν = λ_max^K(H(w*)) < ∞. Here we define the local condition number of the problem as
κ := ν/μ.
A.3 Hessian Decomposition: For each f_i(w) in (1), define ∇²f_i(w) := H_i(w) := A_i(w)ᵀA_i(w).
For simplicity, we assume k₁ = ··· = k_n = k and that k is independent of d. Furthermore, we
assume that given w, computing A_i(w), H_i(w), and g(w) takes O(d), O(d²), and O(nnz(A))
time, respectively. We call the matrix A(w) = [A₁ᵀ, . . . , A_nᵀ]ᵀ ∈ ℝ^{nk×d} the augmented
matrix of {A_i(w)}. Note that H(w) = A(w)ᵀA(w) + Q(w).
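As a concrete illustration of Assumption A.3 (ours, not from the original text), the sketch below forms the augmented matrix A(w) for ℓ₂-regularized logistic regression, where f_i(w) = log(1 + exp(−y_i x_iᵀw)) with labels y_i ∈ {−1, +1}, so that A_i(w) = √(ℓ''(y_i x_iᵀw)) x_iᵀ and Q(w) = λI; the function names and the regularizer λ are our own choices.

```python
# Sketch (ours): Hessian factorization H(w) = A(w)^T A(w) + Q(w) for
# l2-regularized logistic regression, a concrete instance of Assumption A.3.
# Here f_i(w) = log(1 + exp(-y_i x_i^T w)) with y_i in {-1, +1}, so
# A_i(w) = sqrt(l''(y_i x_i^T w)) * x_i^T (y_i^2 = 1 is absorbed), Q = lam*I.
import numpy as np


def augmented_matrix(X, y, w, lam):
    """Return A(w) (one k_i = 1 block per data point) and Q(w) = lam * I."""
    margins = y * (X @ w)                 # y_i * x_i^T w
    s = 1.0 / (1.0 + np.exp(-margins))    # sigmoid of the margin
    weights = np.sqrt(s * (1.0 - s))      # sqrt of l''(margin)
    A = weights[:, None] * X              # row i is A_i(w)
    Q = lam * np.eye(X.shape[1])
    return A, Q


def full_hessian(X, y, w, lam):
    A, Q = augmented_matrix(X, y, w, lam)
    return A.T @ A + Q
```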
2 Main Algorithm: SSN with Non-uniform Sampling
Our proposed SSN method with non-uniform sampling is given in Algorithm 1. The core of our
algorithm is the choice of a sampling scheme S that, at every iteration, constructs a non-uniform
sampling distribution {p_i}_{i=1}^n over {A_i(w_t)}_{i=1}^n and then samples from {A_i(w_t)}_{i=1}^n to form the
approximate Hessian, H̃(w_t). The sampling sizes s needed for different sampling distributions will be
discussed in Section 3.2. Since H(w) = ∑_{i=1}^n A_i(w)ᵀA_i(w) + Q(w), the Hessian approximation
essentially boils down to a matrix approximation problem. Here, we generalize two popular
non-uniform sampling strategies, i.e., leverage score sampling and row norm squares sampling, which
are commonly used in the field of randomized linear algebra, particularly for matrix approximation
problems [10, 15]. With an approximate Hessian constructed via non-uniform sampling, we may
choose an appropriate solver A to solve the sub-problem in Step 11 of Algorithm 1. Below we
elaborate on the construction of the two non-uniform sampling schemes.
Block Norm Squares Sampling  This is done by constructing a sampling distribution based on the Frobenius norm of the blocks Aᵢ, i.e., pᵢ = ‖Aᵢ‖_F²/‖A‖_F², i = 1, …, n. This is an extension of row norm squares sampling, in which the intuition is to capture the importance of the blocks based on the "magnitudes" of the sub-Hessians [10].
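To make this construction concrete, here is a minimal Python sketch of the distribution; the function name and the dense (n, k, d) block layout are our own illustrative choices, not part of the paper's implementation.

```python
import numpy as np

def block_norm_squares_distribution(blocks):
    """p_i = ||A_i||_F^2 / ||A||_F^2 for blocks of shape (n, k, d)."""
    sq_norms = np.sum(blocks ** 2, axis=(1, 2))  # squared Frobenius norm per block
    return sq_norms / sq_norms.sum()
```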
Block Partial Leverage Scores Sampling  Recall that the standard leverage scores of a matrix A are defined as the diagonal elements of the "hat" matrix A(AᵀA)⁻¹Aᵀ [15], which prove to be very useful in matrix approximation algorithms. However, in contrast to the standard case, there are two major differences in our task. First, blocks, not rows, are being sampled. Second, an additional matrix Q is involved in the target matrix, i.e., H. In light of this, we introduce a new and more general notion of leverage scores, called block partial leverage scores.

Definition 1 (Block Partial Leverage Scores). Given a matrix A ∈ R^{kn×d}, viewed as having n blocks of size k × d, and an SPSD matrix Q ∈ R^{d×d}, let {τᵢ}ᵢ₌₁^{kn+d} be the (standard) leverage scores of the augmented matrix [A; Q^{1/2}]. The block partial leverage score for the i-th block is defined as τᵢ^Q(A) = Σ_{j=k(i−1)+1}^{ki} τⱼ.

Note that for k = 1 and Q = 0, the block partial leverage score is simply the standard leverage score. The sampling distribution is defined as pᵢ = τᵢ^Q(A) / (Σⱼ₌₁ⁿ τⱼ^Q(A)), i = 1, …, n.
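A naive reference computation of these scores (our own sketch; a production implementation would use the fast approximation of [6] rather than a dense SVD) forms the augmented matrix [A; Q^{1/2}] explicitly:

```python
import numpy as np
from scipy.linalg import sqrtm

def block_partial_leverage_scores(A, Q, k):
    """tau_i^Q(A) of Definition 1 for A of shape (n*k, d) and SPSD Q of shape (d, d)."""
    n = A.shape[0] // k
    aug = np.vstack([A, np.real(sqrtm(Q))])     # augmented matrix [A; Q^{1/2}]
    U, _, _ = np.linalg.svd(aug, full_matrices=False)
    tau = np.sum(U ** 2, axis=1)                # standard leverage scores (row norms of U)
    return tau[: n * k].reshape(n, k).sum(axis=1)

# Sampling distribution: p = scores / scores.sum()
```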
Algorithm 1 Sub-sampled Newton method with Non-uniform Sampling
 1: Input: Initialization point w₀, number of iterations T, sampling scheme S, and solver A.
 2: Output: w_T
 3: for t = 0, …, T − 1 do
 4:   Construct the non-uniform sampling distribution {pᵢ}ᵢ₌₁ⁿ as described in Section 2.
 5:   for i = 1, …, n do
 6:     qᵢ = min{s · pᵢ, 1}, where s is the sampling size.
 7:     Ãᵢ(w_t) = Aᵢ(w_t)/√qᵢ with probability qᵢ, and Ãᵢ(w_t) = 0 with probability 1 − qᵢ.
 8:   end for
 9:   H̃(w_t) = Σᵢ₌₁ⁿ Ãᵢᵀ(w_t)Ãᵢ(w_t) + Q(w_t).
10:   Compute g(w_t).
11:   Use solver A to solve the sub-problem inexactly:
        w_{t+1} ≈ argmin_{w∈C} { (1/2)·⟨w − w_t, H̃(w_t)(w − w_t)⟩ + ⟨g(w_t), w − w_t⟩ }.    (2)
12: end for
13: return w_T
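For concreteness, a simplified Python rendering of one iteration of Algorithm 1 in the unconstrained case (C = R^d, with CG as the solver A) might look as follows; the callback names hess_blocks and grad are our own conventions, and this is an illustrative sketch rather than the authors' implementation.

```python
import numpy as np
from scipy.sparse.linalg import cg

def ssn_step(w, hess_blocks, Q, grad, probs, s, rng=None):
    """One sub-sampled Newton step with non-uniform sampling (Steps 4-11).

    hess_blocks(w): blocks A_i(w), array of shape (n, k, d)
    grad(w):        full gradient g(w)
    probs:          sampling distribution {p_i}; s: sampling size
    """
    rng = np.random.default_rng() if rng is None else rng
    blocks = hess_blocks(w)
    n, _, d = blocks.shape
    q = np.minimum(s * probs, 1.0)
    keep = rng.random(n) < q                       # keep block i with probability q_i
    A_tilde = (blocks[keep] / np.sqrt(q[keep])[:, None, None]).reshape(-1, d)
    H_tilde = A_tilde.T @ A_tilde + Q              # approximate Hessian (Step 9)
    g = grad(w)
    v, info = cg(H_tilde, -g)                      # inexact Newton direction (Step 11)
    return w + v
```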
3  Theoretical Results
In this section we provide a detailed complexity analysis of our algorithm.¹ Different choices of the sampling scheme S and the sub-problem solver A lead to different complexities in SSN. More precisely, the total complexity is characterized by the following four factors: (i) the total number of iterations T, determined by the convergence rate, which is affected by the choice of S and A (see Lemma 2 in Section 3.1); (ii) the time t_grad it takes to compute the full gradient g(w_t) (Step 10 in Algorithm 1); (iii) the time t_const to construct the sampling distribution {pᵢ}ᵢ₌₁ⁿ and sample s terms at each iteration (Steps 4–8 in Algorithm 1), which is determined by S (see Section 3.2 for details); and (iv) the time t_solve needed to (implicitly) form H̃ and (inexactly) solve the sub-problem at each iteration (Steps 9 and 11 in Algorithm 1), which is affected by the choices of both S (manifested in the sampling size s) and A (see Sections 3.2 and 3.3 for details). With these, the total complexity can be expressed as

    T · (t_grad + t_const + t_solve).    (3)

¹ In this work, we only focus on local convergence guarantees for Algorithm 1. To ensure global convergence, one can incorporate an existing globally convergent method, e.g. [20], as an initial phase and switch to Algorithm 1 once the iterate is "close enough" to the optimum; see Lemma 2.
Below we study these contributing factors. Moreover, the per-iteration complexity of our algorithm for achieving a problem-independent linear convergence rate is presented in Section 3.4, and a comparison to other related work is discussed in Section 3.5.
3.1  Local linear-quadratic error recursion
Before diving into the details of the complexity analysis, we state a structural lemma that characterizes the local convergence rate of our main algorithm, i.e., Algorithm 1. As discussed earlier, there are two layers of approximation in Algorithm 1, i.e., approximation of the Hessian by sub-sampling and inexactness of solving (2). For the first layer, we require the approximate Hessian to satisfy one of the following two conditions (in Section 3.2 we shall see that our construction of the approximate Hessian via non-uniform sampling can achieve these conditions with a sampling size independent of n):

    ‖H̃(w_t) − H(w_t)‖ ≤ ε‖H(w_t)‖,    (C1)

or

    |xᵀ(H̃(w_t) − H(w_t))y| ≤ ε · √(xᵀH(w_t)x) · √(yᵀH(w_t)y),  ∀x, y ∈ K.    (C2)
Note that (C1) and (C2) are two commonly seen guarantees for matrix approximation problems. In particular, (C2) is stronger in the sense that the spectrum of the approximated matrix H(w_t) is well preserved. Below in Lemma 2, we shall see that such a stronger condition ensures a better dependence on the condition number in the convergence rate. For the second layer of approximation, we require the solver to produce an ε₀-approximate solution w_{t+1} satisfying

    ‖w_{t+1} − w*_{t+1}‖ ≤ ε₀ · ‖w_t − w*_{t+1}‖,    (4)

where w*_{t+1} is the exact optimal solution to (2). Note that (4) implies an ε₀-relative error approximation to the exact update direction, i.e., ‖v − v*‖ ≤ ε₀‖v*‖, where v = w_{t+1} − w_t and v* = w*_{t+1} − w_t.
Lemma 2 (Structural Result). Let ε ∈ (0, 1/2) and ε₀ be given, and let {w_t}_{t=1}^T be a sequence generated by (2) which satisfies (4). Also assume that the initial point w₀ satisfies ‖w₀ − w*‖ ≤ α/(4L). Under Assumptions A.1 and A.2, the solution error satisfies the following recursion:

    ‖w_{t+1} − w*‖ ≤ (1 + ε₀)·Cq·‖w_t − w*‖² + (ε₀ + (1 + ε₀)·Cl)·‖w_t − w*‖,    (5)

where Cq and Cl are specified as below:
• Cq = 2L/((1 − 2ε)α) and Cl = 4εκ/(1 − 2ε), if condition (C1) is met;
• Cq = 2L/((1 − ε)α) and Cl = 3ε√κ/(1 − ε), if condition (C2) is met.
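One can get a feel for the recursion (5) by iterating it numerically: once the linear coefficient ε₀ + (1 + ε₀)·Cl is below 1 and the starting error is small enough that the quadratic term is dominated, the bound contracts geometrically. The constants in the following sketch are purely illustrative, not taken from the paper.

```python
def iterate_recursion(err, Cq, Cl, eps0, steps=15):
    """Iterate ||w_{t+1} - w*|| <= (1+eps0)*Cq*err^2 + (eps0 + (1+eps0)*Cl)*err."""
    for t in range(steps):
        err = (1 + eps0) * Cq * err ** 2 + (eps0 + (1 + eps0) * Cl) * err
        print(f"t = {t + 1:2d}, error bound = {err:.3e}")

iterate_recursion(err=1e-2, Cq=10.0, Cl=0.25, eps0=0.05)  # linear rate about 0.32
```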
3.2  Complexities related to the choice of sampling scheme S
The following lemma gives the complexity of constructing the sampling distributions used in this paper. Here, we adopt the fast approximation algorithm for standard leverage scores [6] to obtain an efficient approximation to our block partial leverage scores.

Lemma 3 (Construction Complexity). Under Assumption A.3, it takes t_const = O(nnz(A)) time to construct the block norm squares sampling distribution, and it takes t_const = O(nnz(A) log n) time to construct, with high probability, a distribution that is a constant factor approximation to the block partial leverage scores.
The following theorem indicates that if the blocks of the augmented matrix of {Aᵢ(w)} (see Assumption A.3) are sampled based on block norm squares or block partial leverage scores with a large enough sampling size, then (C1) or (C2) holds, respectively, with high probability.

Theorem 4 (Sufficient Sample Size). Given any ε, δ ∈ (0, 1), the following statements hold:
(i) Let rᵢ = ‖Aᵢ‖_F², i = 1, …, n, set pᵢ = rᵢ/(Σⱼ₌₁ⁿ rⱼ), and construct H̃ as in Steps 5–9 of Algorithm 1. If s ≥ 4·sr(A)·log(min{4·sr(A), d}/δ)/ε², then with probability at least 1 − δ, (C1) holds.
(ii) Let {τ̄ᵢ^Q(A)}ᵢ₌₁ⁿ be overestimates of the block partial leverage scores, i.e., τ̄ᵢ^Q(A) ≥ τᵢ^Q(A) for i = 1, …, n, and set pᵢ = τ̄ᵢ^Q(A)/(Σⱼ₌₁ⁿ τ̄ⱼ^Q(A)), i = 1, …, n. Construct H̃ as in Steps 5–9 of Algorithm 1. If s ≥ 4·(Σᵢ₌₁ⁿ τ̄ᵢ^Q(A))·log(4d/δ)/ε², then with probability at least 1 − δ, (C2) holds.

Remarks: Part (i) of Theorem 4 is an extension of [10] to our particular augmented matrix setting. Also, since for the exact block partial leverage scores we have Σᵢ₌₁ⁿ τᵢ^Q(A) ≤ d, part (ii) of Theorem 4 implies that, using exact scores, fewer than O(d log d/ε²) blocks are needed for (C2) to hold.
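Both sample-size bounds of Theorem 4 are directly computable; a small helper might look as follows (our own naming: eps is the approximation parameter and delta the failure probability).

```python
import numpy as np

def sample_size_block_norm(stable_rank, d, eps, delta):
    """Sufficient s for (C1): 4*sr(A)*log(min(4*sr(A), d)/delta) / eps^2."""
    return int(np.ceil(4 * stable_rank *
                       np.log(min(4 * stable_rank, d) / delta) / eps ** 2))

def sample_size_leverage(scores, d, eps, delta):
    """Sufficient s for (C2): 4*(sum of tau_i^Q(A))*log(4*d/delta) / eps^2."""
    return int(np.ceil(4 * np.sum(scores) * np.log(4 * d / delta) / eps ** 2))
```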
3.3  Complexities related to the choice of solver A
We now discuss how t_solve in (3) is affected by the choice of the solver A in Algorithm 1. The approximate Hessian H̃(w_t) is of the form ÃᵀÃ + Q, where Ã ∈ R^{sk×d}. As a result, the complexity for solving the sub-problem (2) essentially depends on the choice of A, the constraint set C, s, and d, i.e., t_solve = T̃(A, C, s, d). For example, when the problem is unconstrained (C = R^d), CG takes t_solve = O(sd·√κ_t·log(1/ε₀)) to return a solution of approximation quality ε₀ in (4), where κ_t = λ_max(H̃(w_t))/λ_min(H̃(w_t)).
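Note that CG never needs H̃ explicitly: each Hessian-vector product ÃᵀÃv + Qv can be applied through Ã. A matrix-free sketch along these lines (our own illustration, not the authors' code):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def solve_subproblem(A_tilde, Q, g):
    """Solve H_tilde v = -g with H_tilde = A_tilde^T A_tilde + Q, never formed."""
    d = A_tilde.shape[1]
    H_op = LinearOperator((d, d),
                          matvec=lambda v: A_tilde.T @ (A_tilde @ v) + Q @ v)
    v, info = cg(H_op, -g)
    return v
```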
3.4  Total complexity per iteration
Lemma 2 implies that, by choosing appropriate values for ε and ε₀, SSN inherits a local constant linear convergence rate, i.e., ‖w_{t+1} − w*‖ ≤ ρ‖w_t − w*‖ with ρ < 1. The following corollary gives the total complexity per iteration of Algorithm 1 to obtain a locally linear rate.

Corollary 5. Suppose C = R^d and CG is used to solve the sub-problem (2). Then under Assumption A.3, to obtain a constant local linear convergence rate with a constant probability, the complexity per iteration of Algorithm 1 using the block partial leverage scores sampling and the block norm squares sampling is Õ(nnz(A) log n + d²·κ^(3/2)) and Õ(nnz(A) + sr(A)·d·κ^(5/2)), respectively.²
3.5  Comparison with existing similar methods
As discussed above, the sampling scheme S plays a crucial role in the overall complexity of SSN. We first compare our proposed non-uniform sampling schemes with the uniform alternative [20] in terms of the complexities t_const and t_solve, as well as the quality of the local linear-quadratic error recursion (5), measured by Cq and Cl. Table 2 gives a summary of this comparison where, for simplicity, we assume that k = 1, C = R^d, and a direct solver is used for the linear system sub-problem (2). Also, throughout this subsection, for randomized algorithms we choose parameters such that the failure probability is a constant. One advantage of uniform sampling is its simplicity of construction. However, as shown in Section 3.2, it takes only nearly input-sparsity time to construct the proposed non-uniform sampling distributions. In addition, when the rows of A are very non-uniform, i.e., maxᵢ‖Aᵢ‖ ≈ ‖A‖, the uniform scheme requires Ω(n) samples to achieve (C1). It can also be seen that, for a given ε, row norm squares sampling requires the smallest sampling size, yielding the smallest t_solve in Table 2. More importantly, although either (C1) or (C2) is sufficient to give (5), having (C2), as in SSN with leverage score sampling, yields constants Cq and Cl with a much better dependence on the local condition number κ than the other methods. This fact can drastically improve the performance of SSN for ill-conditioned problems; see Figure 1 in Section 4.
Table 2: Comparison between the standard Newton's method and sub-sampled Newton methods (SSN) with different sampling schemes. Cq and Cl are the constants appearing in (5), A is the augmented matrix of {Aᵢ(w)} with stable rank sr(A), κ = β/α is the local condition number, and κ̄ = L/α. Here, we assume that k = 1, C = R^d, and a direct solver is used in Algorithm 1.

NAME                     tconst             tsolve = s·d²                       Cq          Cl
Newton's method          0                  O(n·d²)                             κ̄           0
SSN (leverage scores)    O(nnz(A) log n)    Õ((Σᵢ τᵢ^Q(A))·d²/ε²)               κ̄/(1−ε)     ε√κ/(1−ε)
SSN (row norm squares)   O(nnz(A))          Õ(sr(A)·d²/ε²)                      κ̄/(1−2ε)    εκ/(1−2ε)
SSN (uniform) [20]       O(1)               Õ(n·(maxᵢ‖Aᵢ‖²/‖A‖²)·d²/ε²)         κ̄/(1−2ε)    εκ/(1−2ε)
Next, recall that in Table 1 we summarize the per-iteration complexity needed by our algorithm and other similar methods [20, 1, 18] to achieve a given local linear convergence rate. Here we provide more details. First, the definitions of the various notions of condition number used in Table 1 are given below. For any given w ∈ R^d, define

    κ(w) = λ_max(Σᵢ₌₁ⁿ Hᵢ(w)) / λ_min(Σᵢ₌₁ⁿ Hᵢ(w)),
    κ̃(w) = n · maxᵢ λ_max(Hᵢ(w)) / λ_min(Σᵢ₌₁ⁿ Hᵢ(w)),
    κ̂(w) = maxᵢ λ_max(Hᵢ(w)) / minᵢ λ_min(Hᵢ(w)),    (6)

assuming that the denominators are non-zero. It is easy to see that κ(w) ≤ κ̃(w) ≤ κ̂(w).

² In this paper, Õ(·) hides logarithmic factors of d, κ and 1/δ.
However, the degree of the discrepancy among these inequalities depends on the properties of the Hᵢ(w)'s. Roughly speaking, when all Hᵢ(w)'s are "similar", one has λ_max(Σᵢ₌₁ⁿ Hᵢ(w)) ≈ Σᵢ₌₁ⁿ λ_max(Hᵢ(w)) ≈ n · maxᵢ λ_max(Hᵢ(w)), and thus κ(w) ≈ κ̃(w) ≈ κ̂(w). However, in many real applications such uniformity simply doesn't exist. For example, it is not hard to design a matrix A with non-uniform rows such that for H = AᵀA, κ̃ and κ̂ are larger than κ by a factor of n. This implies that although SSN with leverage score sampling has a quadratic dependence on d, its dependence on the condition number is significantly better than that of all other methods such as SSN (uniform) and LiSSA. Moreover, compared to Newton's method, all these stochastic variants replace the coefficient of the leading term, i.e., O(nd), with lower-order terms that depend only on d and condition numbers (assuming nnz(A) ≈ nd). Therefore, one should expect these algorithms to perform well when n ≫ d and the problem is moderately conditioned.
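For small problems, the three quantities in (6) can be evaluated directly from the blocks, which also makes it easy to verify the ordering κ ≤ κ̃ ≤ κ̂ on real data. The sketch below is our own brute-force illustration, assuming each Hᵢ(w) is nonsingular so that κ̂ is finite.

```python
import numpy as np

def condition_numbers(blocks):
    """kappa, kappa_tilde, kappa_hat of (6); blocks has shape (n, k, d)."""
    n = blocks.shape[0]
    H_blocks = np.einsum('ikd,ike->ide', blocks, blocks)   # H_i = A_i^T A_i
    eig_i = np.array([np.linalg.eigvalsh(H) for H in H_blocks])
    eig = np.linalg.eigvalsh(H_blocks.sum(axis=0))
    kappa = eig[-1] / eig[0]
    kappa_tilde = n * eig_i[:, -1].max() / eig[0]
    kappa_hat = eig_i[:, -1].max() / eig_i[:, 0].min()     # needs nonsingular H_i
    return kappa, kappa_tilde, kappa_hat
```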
4  Numerical Experiments
We consider an estimation problem in GLMs with a Gaussian prior. Assume X ∈ R^{n×d} and Y ∈ 𝒴ⁿ are the data matrix and response vector. The problem of minimizing the negative log-likelihood with a ridge penalty can be written as

    min_{w∈R^d}  Σᵢ₌₁ⁿ ψ(xᵢᵀw, yᵢ) + λ‖w‖₂²,

where ψ : R × 𝒴 → R is a convex cumulant generating function and λ ≥ 0 is the ridge penalty parameter. In this case, the Hessian is H(w) = Σᵢ₌₁ⁿ ψ''(xᵢᵀw, yᵢ)·xᵢxᵢᵀ + λI := XᵀD²(w)X + λI, where xᵢᵀ is the i-th row of X and D(w) is a diagonal matrix with diagonal entries [D(w)]ᵢᵢ = √(ψ''(xᵢᵀw, yᵢ)). The augmented matrix of {Aᵢ(w)} can be written as A(w) = DX ∈ R^{n×d}, where Aᵢ(w) = [D(w)]ᵢᵢ·xᵢᵀ.
For our numerical simulations, we consider a very popular instance of GLMs, namely logistic regression, where ψ(u, y) = log(1 + exp(−uy)) and 𝒴 = {±1}. Table 3 summarizes the datasets used in our experiments.
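In code, the pieces above are only a few lines for logistic regression. The following sketch is our own reference implementation (with names of our choosing), following the paper's convention H(w) = XᵀD²(w)X + λI.

```python
import numpy as np

def logistic_parts(X, w, lam):
    """D(w), A(w) = D(w) X and H(w) = X^T D^2(w) X + lam * I for the logistic loss.

    psi''(x_i^T w, y_i) = sigma(x_i^T w) * (1 - sigma(x_i^T w)) for y_i in {-1, +1}.
    """
    sig = 1.0 / (1.0 + np.exp(-(X @ w)))
    d_diag = np.sqrt(sig * (1.0 - sig))          # [D(w)]_ii
    A = d_diag[:, None] * X                      # row i is A_i(w) = [D(w)]_ii x_i^T
    H = A.T @ A + lam * np.eye(X.shape[1])
    return d_diag, A, H
```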
Table 3: Datasets used in ridge logistic regression. Here, κ and κ̂ are the local condition numbers of the ridge logistic regression problem with λ = 0.01, as defined in (6).

DATASET         n        d    κ      κ̂
CT slices [9]   53,500   385  368    47,078
Forest [2]      581,012  55   221    322,370
Adult [13]      32,561   123  182    69,359
Buzz [11]       59,535   78   37     384,580
We compare the performance of the following five algorithms: (i) Newton: the standard Newton's method; (ii) Uniform: SSN with uniform sampling; (iii) PLevSS: SSN with partial leverage scores sampling; (iv) RNormSS: SSN with block (row) norm squares sampling; and (v) LBFGS-k: the standard L-BFGS method [14] with history size k.

All algorithms are initialized with the zero vector.³ We also use CG to solve the sub-problem approximately, to within 10⁻⁶ relative residual error. In order to compute the relative error ‖w_t − w*‖/‖w*‖, an estimate of w* is obtained by running the standard Newton's method for a sufficiently long time. Note that in SSN with partial leverage score sampling we recompute the leverage scores every 10 iterations. Roughly speaking, these "stale" leverage scores can be viewed as approximate leverage scores for the current iteration, with an approximation quality that can be upper bounded by the change of the Hessian, a quantity that is often small in practice. Reusing the leverage scores thus allows us to further drive down the running time.
We first investigate the effect of the condition number, controlled by varying λ, on the performance of the different methods; the results are depicted in Figure 1. It can be seen that in well-conditioned cases, all sampling schemes work equally well. However, as the condition number worsens, the performance of uniform sampling deteriorates, while non-uniform sampling, in particular leverage score sampling, shows a great degree of robustness to such ill-conditioning effects. The experiments shown in Figure 1 are consistent with the theoretical results of Table 2, showing that the theory presented here can indeed be a reliable guide to practice.

³ Theoretically, the suitable initial point for all the algorithms is one from which the standard Newton's method converges with a unit step size. Here, w₀ = 0 happens to be one such good starting point.
[Figure 1 appears here: three panels plotted against log(λ), comparing Newton, Uniform, PLevSS, and RNormSS (plus LBFGS-50 in panel (c)): (a) condition number, (b) best sampling size, (c) running time (s).]
Figure 1: Ridge logistic regression on Adult with different λ's: (a) local condition number κ, (b) sample size for the different SSN methods giving the best overall running time, (c) running time for the different methods to achieve 10⁻⁸ relative error.
Next, we compare the performance of the various methods as measured by the relative error of the solution vs. running time; the results are shown in Figure 2.⁴ It can be seen that, in most cases, SSN with non-uniform sampling schemes outperforms the other algorithms, especially Newton's method. In particular, the uniform sampling scheme performs poorly, e.g., in Figure 2(b), when the problem exhibits a high non-uniformity among data points, which is reflected in the difference between κ and κ̂ shown in Table 3.
[Figure 2 appears here: four panels of relative solution error ‖w − w*‖₂/‖w*‖₂ vs. time (s), on a log scale down to 10⁻¹⁵, for ridge logistic regression with λ = 0.01 on (a) CT Slice, (b) Forest, (c) Adult, and (d) Buzz, comparing Newton, Uniform, PLevSS, RNormSS, LBFGS-100, and LBFGS-50. The legends report the per-method sampling sizes (Uniform ranging from 7700 to 39000; PLevSS and RNormSS ranging from 1560 to 3850 across the datasets).]
Figure 2: Iterate relative solution error vs. time (s) for various methods on four datasets with ridge penalty parameter λ = 0.01. The values in brackets denote the sample size used for each method.⁴

⁴ For each sub-sampled Newton method, the sampling size is determined by choosing the best value from {10d, 20d, 30d, …, 100d, 200d, 300d, …, 1000d}, in the sense that the objective value drops to 1/3 of the initial function value first.
We would like to remind the reader that, for the locally strongly convex problems considered here, one can provably show that the behavior of the relative error in the loss function, i.e., (F(w_k) − F(w*))/|F(w*)|, follows the same pattern as that of the solution error, i.e., ‖w_k − w*‖/‖w*‖; see [23] for details. As a result, our algorithms remain effective for cases where the primary goal is to reduce the loss (as opposed to the solution error).
5  Conclusions
In this paper, we propose non-uniformly sub-sampled Newton methods with inexact update for a class
of constrained problems. We show that our algorithms have a better dependence on the condition
number and enjoy a lower per-iteration complexity, compared to other similar existing methods.
Theoretical advantages are numerically demonstrated.
Acknowledgments. We would like to thank the Army Research Office and the Defense Advanced
Research Projects Agency as well as Intel, Toshiba and the Moore Foundation for support along
with DARPA through MEMEX (FA8750-14-2-0240), SIMPLEX (N66001-15-C-4043), and XDATA
(FA8750-12-2-0335) programs, and the Office of Naval Research (N000141410102, N000141210041
and N000141310129). Any opinions, findings, and conclusions or recommendations expressed in
this material are those of the authors and do not necessarily reflect the views of DARPA, ONR, or the
U.S. government.
References
[1] Naman Agarwal, Brian Bullins, and Elad Hazan. Second order stochastic optimization in linear time. arXiv
preprint arXiv:1602.03943, 2016.
[2] Jock A. Blackard and Denis J. Dean. Comparative accuracies of artificial neural networks and discriminant analysis in predicting forest cover types from cartographic variables. Computers and Electronics in Agriculture, 24(3):131–151, 1999.
[3] Sébastien Bubeck. Theory of convex optimization for machine learning. arXiv preprint arXiv:1405.4980, 2014.
[4] Richard H. Byrd, Gillian M. Chin, Will Neveitt, and Jorge Nocedal. On the use of stochastic Hessian information in optimization methods for machine learning. SIAM Journal on Optimization, 21(3):977–995, 2011.
[5] Ron S. Dembo, Stanley C. Eisenstat, and Trond Steihaug. Inexact Newton methods. SIAM Journal on Numerical Analysis, 19(2):400–408, 1982.
[6] Petros Drineas, Malik Magdon-Ismail, Michael W. Mahoney, and David P. Woodruff. Fast approximation of matrix coherence and statistical leverage. The Journal of Machine Learning Research, 13(1):3475–3506, 2012.
[7] Murat A. Erdogdu and Andrea Montanari. Convergence rates of sub-sampled Newton methods. In Advances in Neural Information Processing Systems, pages 3034–3042, 2015.
[8] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. The Elements of Statistical Learning, volume 1. Springer Series in Statistics. Springer, Berlin, 2001.
[9] Franz Graf, Hans-Peter Kriegel, Matthias Schubert, Sebastian Pölsterl, and Alexander Cavallaro. 2D image registration in CT images using radial image descriptors. In Medical Image Computing and Computer-Assisted Intervention (MICCAI 2011), pages 607–614. Springer, 2011.
[10] John T. Holodnak and Ilse C. F. Ipsen. Randomized approximation of the Gram matrix: Exact computation and probabilistic bounds. SIAM Journal on Matrix Analysis and Applications, 36(1):110–137, 2015.
[11] François Kawala, Ahlame Douzal-Chouakria, Eric Gaussier, and Eustache Diemert. Prédictions d'activité dans les réseaux sociaux en ligne. In 4ième conférence sur les modèles et l'analyse des réseaux: Approches mathématiques et informatiques, page 16, 2013.
[12] Brian Kulis. Metric learning: A survey. Foundations and Trends in Machine Learning, 5(4):287–364, 2012.
[13] M. Lichman. UCI machine learning repository, 2013.
[14] Dong C. Liu and Jorge Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming, 45:503–528, 1989.
[15] Michael W. Mahoney. Randomized Algorithms for Matrices and Data. Foundations and Trends in Machine Learning. NOW Publishers, Boston, 2011. Also available at arXiv:1104.5557v2.
[16] James Martens. Deep learning via Hessian-free optimization. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 735–742, 2010.
[17] Jorge Nocedal and Stephen Wright. Numerical Optimization. Springer Science & Business Media, 2006.
[18] Mert Pilanci and Martin J. Wainwright. Newton sketch: A linear-time optimization algorithm with linear-quadratic convergence. arXiv preprint arXiv:1505.02250, 2015.
[19] Farbod Roosta-Khorasani and Michael W. Mahoney. Sub-sampled Newton methods I: Globally convergent algorithms. arXiv preprint arXiv:1601.04737, 2016.
[20] Farbod Roosta-Khorasani and Michael W. Mahoney. Sub-sampled Newton methods II: Local convergence rates. arXiv preprint arXiv:1601.04738, 2016.
[21] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), pages 267–288, 1996.
[22] Oriol Vinyals and Daniel Povey. Krylov subspace descent for deep learning. arXiv preprint arXiv:1111.4259, 2011.
[23] Peng Xu, Jiyan Yang, Farbod Roosta-Khorasani, Christopher Ré, and Michael W. Mahoney. Sub-sampled Newton methods with non-uniform sampling. arXiv preprint arXiv:1607.00559, 2016.
5,567 | 6,038 | Budgeted stream-based active learning
via adaptive submodular maximization
Kaito Fujii
Kyoto University
JST, ERATO, Kawarabayashi Large Graph Project
[email protected]
Hisashi Kashima
Kyoto University
[email protected]
Abstract
Active learning enables us to reduce the annotation cost by adaptively selecting
unlabeled instances to be labeled. For pool-based active learning, several effective methods with theoretical guarantees have been developed through maximizing some utility function satisfying adaptive submodularity. In contrast, there have
been few methods for stream-based active learning based on adaptive submodularity. In this paper, we propose a new class of utility functions, policy-adaptive
submodular functions, which includes many existing adaptive submodular functions appearing in real-world problems. We provide a general framework based on policy-adaptive submodularity that makes it possible to convert existing pool-based methods to stream-based methods, and we give theoretical guarantees on their performance. In addition, we empirically demonstrate their effectiveness by comparing with existing heuristics on common benchmark datasets.
1  Introduction
Active learning is a problem setting for sequentially selecting unlabeled instances to be labeled, and
it has been studied with much practical interest as an efficient way to reduce the annotation cost. One
of the most popular settings of active learning is the pool-based one, in which the learner is given
the entire set of unlabeled instances in advance, and iteratively selects an instance to be labeled next.
The stream-based setting, which we deal with in this paper, is another important setting of active
learning, in which the entire set of unlabeled instances are hidden initially, and presented one by one
to the learner. This setting also has many real world applications, for example, sentiment analysis of
web stream data [26], spam filtering [25], part-of-speech tagging [10], and video surveillance [23].
Adaptive submodularity [19] is an adaptive extension of submodularity, a natural diminishing return
condition. It provides a framework for designing effective algorithms for several adaptive problems
including pool-based active learning. For instance, the ones for noiseless active learning [19, 21]
and the ones for noisy active learning [20, 9, 8] have been developed in recent years. Not only do they have strong theoretical guarantees on their performance, but they also perform well in practice compared with existing widely-used heuristics.
In spite of its considerable success in the pool-based setting, little is known about benefits of adaptive
submodularity in the stream-based setting. This paper answers the question: is it possible to construct algorithms for stream-based active learning based on adaptive submodularity? We propose a
general framework for creating stream-based algorithms from existing pool-based algorithms.
In this paper, we tackle the problem of stream-based active learning with a limited budget for making
queries. The goal is collecting an informative set of labeled instances from a data stream of a
certain length. The stream-based active learning problem has been typically studied in two settings:
the stream setting and the secretary setting, which correspond to memory constraints and timing
constraints respectively; we treat both in this paper.
We formalize these problems as the adaptive stochastic maximization problem in the stream or secretary setting. For solving this problem, we propose a new class of stochastic utility functions:
policy-adaptive submodular functions, which is another adaptive extension of submodularity. We
prove this class includes many existing adaptive submodular functions used in various applications.
Assuming the objective function satisfies policy-adaptive submodularity, we propose simple methods for each problem, and give theoretical guarantees on their performance in comparison to the
optimal pool-based method. Experiments conducted on benchmark datasets show the effectiveness
of our methods compared with several heuristics. Due to our framework, many algorithms developed
in the pool-based setting can be converted to the stream-based setting.
In summary, our main contributions are the following:
• We provide a general framework that captures budgeted stream-based active learning and other applications.
• We propose a new class of stochastic utility functions, policy-adaptive submodular functions, which is a subclass of the adaptive submodular functions, and prove that this class includes many existing adaptive submodular functions in real world problems.
• We propose two simple algorithms, AdaptiveStream and AdaptiveSecretary, and give theoretical performance guarantees for them.
2  Problem Settings
In this section, we first describe the general framework, then illustrate applications including stream-based active learning.
2.1  Adaptive Stochastic Maximization in the Stream and Secretary Settings
Here we specify the problem statement. This problem is a generalization of budgeted stream-based active learning and other applications.
Let V = {v₁, …, vₙ} denote the entire set of n items, where each item vᵢ is in a particular state out of the set Y of possible states. Denote by φ : V → Y a realization of the states of the items. Let Φ be a random realization, and Yᵢ a random variable representing the state of each item vᵢ for i = 1, …, n, i.e., Yᵢ = Φ(vᵢ). Assume that Φ is generated from a known prior distribution p(φ). Suppose the state Yᵢ is revealed when vᵢ is selected. Let ψ_A : A → Y denote the partial realization obtained after the states of items A ⊆ V are observed. Note that a partial realization ψ_A can be regarded as the set of observations {(s, ψ_A(s)) | s ∈ A} ⊆ V × Y.

We are given a set function¹ f : 2^{V×Y} → R≥0 that defines the utility of the observations made when some items are selected. Consider iteratively selecting an item, observing its state, and aiming to make observations of high utility value. A policy π is a decision tree that represents a strategy for adaptively selecting items; formally, it is a partial mapping that determines the item to be selected next from the observations made so far. Given a budget k ∈ Z>0, the goal is to construct a policy π maximizing E_Φ[f(ψ(π, Φ))] subject to |ψ(π, φ)| ≤ k for all φ, where ψ(π, φ) denotes the observations obtained by executing policy π under realization φ.
This problem has been studied mainly in the pool-based setting, where we are given the entire set V from the beginning and adaptively observe the states of items in any order. In this paper we tackle the stream-based setting, where the items are hidden initially and arrive one by one. The stream-based setting arises in two kinds of scenarios: one is the stream setting², in which we can postpone deciding whether or not to select an item by keeping it in a limited amount of memory, and at any time observe the state of the stored items. The other is the secretary setting, in which we must decide
¹ In the original definition of stochastic utility functions [19], the objective value depends not only on the partial realization ψ, but also on the realization φ. However, given such an f : 2^V × Y^V → R≥0, we can redefine f̂ : 2^{V×Y} → R≥0 as f̂(ψ_A) = E_Φ[f(A, Φ) | Φ ∼ p(φ|ψ_A)], and this does not critically change the overall discussion in our problem settings. Thus, for notational convenience, we use the simpler definition.
² In this paper, the "stream-based setting" and the "stream setting" are distinguished.
immediately whether or not to select an item at each arrival. In both settings we assume the items arrive in a random order. The comparison of policies for the pool-based and stream-based settings is indicated in Figure 1.

[Figure 1 appears here: two policy trees over items v₁, …, v₇ whose branches are labeled by the observed states +1/−1; (a) a policy tree for the pool-based setting, (b) a policy tree for the stream-based setting.]
Figure 1: Examples of a pool-based policy and a stream-based policy in the case of Y = {+1, −1}. (a) A pool-based policy can select items in an arbitrary order. (b) A stream-based policy must select items under memory or timing constraints, taking account of only the items that have arrived so far.
2.2  Budgeted Stream-based Active Learning
We consider a problem setting called Bayesian active learning. Here V represents the set of instances, Y₁, …, Yₙ the initially unknown labels of the instances, and Y the set of possible labels. Let H denote the set of candidates for the randomly generated true hypothesis H, and let p_H denote a prior probability over H. When observations of the labels are noiseless, every hypothesis h ∈ H represents a particular realization, i.e., h corresponds to some φ ∈ Y^V. When observations are noisy, the probability distribution P[Y₁, …, Yₙ | H = h] of the labels is not necessarily deterministic for each h ∈ H. In both cases, we can iteratively select an instance and query its label to the annotation oracle. The objective is to determine the true hypothesis or one whose prediction error is small. Both the pool-based and stream-based settings have been extensively studied. The stream-based setting contains the stream and secretary settings, both of which have many real world applications.

A common approach for devising a pool-based algorithm is to design some utility function that represents the informativeness of a set of labeled instances, and to greedily select the instance maximizing this utility in expectation. We introduce the utility into stream-based active learning, and aim to collect k labeled instances of high utility, where k ∈ Z>0 is the budget on the number of queries. While most of the theoretical results for stream-based active learning are obtained assuming the data stream is infinite, we assume the length of the total data stream is given in advance.
2.3  Other Applications
We give a brief sketch of two examples that can be formalized as the adaptive stochastic maximization problem in the secretary setting. Both are variations for streaming data of the problems first
proposed by Golovin and Krause [19].
One is adaptive viral marketing whose aim is spreading information about a new product through
social networks. In this problem we adaptively select k people to whom a free promotional sample
of the product is offered so as to let them recommend the product to their friends. We cannot know whether a person will recommend the product before actually offering them a sample. The objective is maximizing
the number of people that information of the product reaches. There arise some situations where
people come sequentially, and at each arrival we must decide whether or not to offer a sample to
them.
Another is adaptive sensor placement. We want to adaptively place k unreliable sensors to cover the
information obtained by them. The informativeness of each sensor is unknown before its deployment. We can consider cases where the timing of placing sensors at each location is restricted
for some reasons such as transportation cost.
3  Policy-Adaptive Submodularity
In this section, we discuss conditions satisfied by the utility functions of adaptive stochastic maximization problems.

Submodularity [17] is known as a natural diminishing return condition satisfied by various set functions appearing in a lot of applications, and adaptive submodularity was proposed by Golovin and Krause [19] as an adaptive extension of submodularity. Adaptive submodularity is defined as the diminishing return property about the expected marginal gain of a single item, i.e., Δ(s|ψ_A) ≥ Δ(s|ψ_B) for any partial realizations ψ_A ⊆ ψ_B and any item s ∈ V \ B, where

    Δ(s|ψ) = E_Φ[ f(ψ ∪ {(s, Φ(s))}) − f(ψ) | Φ ∼ p(φ|ψ) ].

Similarly, adaptive monotonicity, an adaptive analog of monotonicity, is defined as Δ(s|ψ_A) ≥ 0 for any partial realization ψ_A and item s ∈ V. It is known that many utility functions used in the above applications satisfy adaptive submodularity and adaptive monotonicity. In the pool-based setting, greedily selecting the item of the maximal expected marginal gain yields a (1 − 1/e)-approximation if the objective function is adaptive submodular and adaptive monotone [19].
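When the prior is given explicitly as a finite list of realizations with probabilities, Δ(s|ψ) can be computed exactly by conditioning the prior on the observations made so far. The following sketch is our own illustrative code (f is any user-supplied utility on sets of (item, state) pairs, and ψ is assumed to have positive probability under the prior).

```python
def expected_marginal_gain(f, s, psi, realizations, prior):
    """Delta(s | psi) for a finite prior.

    psi:          dict item -> observed state
    realizations: list of dicts item -> state (the support of the prior)
    prior:        matching list of probabilities
    """
    consistent = [(phi, p) for phi, p in zip(realizations, prior)
                  if all(phi[v] == y for v, y in psi.items())]
    z = sum(p for _, p in consistent)                # posterior normalizer
    base = f(frozenset(psi.items()))
    return sum((p / z) * (f(frozenset(psi.items()) | {(s, phi[s])}) - base)
               for phi, p in consistent)
```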
Here we propose a new class of stochastic utility functions, policy-adaptive submodular functions. Let range(π) denote the set containing all items that π selects for some φ, and we define policy-adaptive submodularity as the diminishing return property about the expected marginal gain of any policy, as follows.

Definition 3.1 (Policy-adaptive submodularity). A set function f : 2^{V×Y} → R≥0 is policy-adaptive submodular with respect to a prior distribution p(φ), or (f, p) is policy-adaptive submodular, if Δ(π|ψ_A) ≥ Δ(π|ψ_B) holds for any partial realizations ψ_A ⊆ ψ_B and any policy π with range(π) ⊆ V \ B, where

    Δ(π|ψ) = E_Φ[ f(ψ ∪ ψ(π, Φ)) − f(ψ) | Φ ∼ p(φ|ψ) ].

Since a single item can be regarded as a policy selecting only one item, policy-adaptive submodularity is a stricter condition than adaptive submodularity.
Policy-adaptive submodularity is also a natural extension of submodularity. The submodularity of a set function f : 2^V → R≥0 is defined as the condition that f(A ∪ {s}) − f(A) ≥ f(B ∪ {s}) − f(B) for any A ⊆ B ⊆ V and s ∈ V \ B, which is equivalent to the condition that f(A ∪ P) − f(A) ≥ f(B ∪ P) − f(B) for any A ⊆ B ⊆ V and P ⊆ V \ B. The adaptive extensions of these two conditions are adaptive submodularity and policy-adaptive submodularity, respectively. Nevertheless, there is a counterexample to the equivalence of adaptive submodularity and policy-adaptive submodularity, which is given in the supplementary materials.

Surprisingly, many existing adaptive submodular functions in applications also satisfy policy-adaptive submodularity. In active learning, the objective functions of generalized binary search [12, 19], EC² [20], ALuMA [21], and the maximum Gibbs error criterion [9, 8] are not only adaptive submodular, but policy-adaptive submodular. In other applications, including influence maximization and sensor placement, it is often assumed that the variables Y₁, …, Yₙ are independent, and policy-adaptive submodularity always holds in this case. The proofs of these propositions are given in the supplementary materials.
To give the theoretical guarantees for the algorithms introduced in the next section, we assume
not only the adaptive submodularity and the adaptive monotonicity, but also the policy-adaptive
submodularity. However, our theoretical analyses can still be applied to many applications.
4  Algorithms
In this section we describe our proposed algorithms for each of the stream and secretary settings, and state the theoretical guarantees on their performance. The full versions of the pseudocodes are given in the supplementary materials.
Algorithm 1 AdaptiveStream algorithm & AdaptiveSecretary algorithm
Input: A set function f : 2^{V×Y} → R≥0 and a prior distribution p(φ) such that (f, p) is policy-adaptive submodular and adaptive monotone. The number of items in the entire stream n ∈ Z>0. A budget k ∈ Z>0. A randomly permuted stream of the items, denoted by (s₁, …, sₙ).
Output: Observations ψ_k ⊆ V × Y such that |ψ_k| ≤ k.
1: Let ψ₀ := ∅.
2: for each segment S_l = {sᵢ | (l − 1)n/k < i ≤ ln/k} do
3:   Select an item s out of S_l, either by selecting the item of the largest expected marginal gain (AdaptiveStream) or by applying the classical secretary algorithm (AdaptiveSecretary).
4:   Observe the state y of item s and let ψ_l := ψ_{l−1} ∪ {(s, y)}.
5: return ψ_k as the solution
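The AdaptiveStream branch is only a few lines in Python. The sketch below is our own rendering (assuming k divides n, and that gain and observe are supplied by the application); it keeps a single running best item per segment, which is the O(1)-space implementation discussed in Section 4.1.

```python
def adaptive_stream(stream, n, k, gain, observe):
    """gain(s, psi): expected marginal gain; observe(s): reveals the state of s."""
    psi = {}
    best, best_gain = None, float('-inf')
    for i, s in enumerate(stream, start=1):
        g = gain(s, psi)
        if g > best_gain:
            best, best_gain = s, g
        if i % (n // k) == 0:                 # segment boundary: commit to the best
            psi[best] = observe(best)
            best, best_gain = None, float('-inf')
    return psi
```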
4.1  Algorithm for the Stream Setting
The main idea of our proposed method is simple: divide the entire stream into k segments and select the best item from each one. For simplicity, we consider the case where n is an integer multiple of k; if it is not, we can add k⌈n/k⌉ − n dummy items with no benefit and prove the same guarantee. Our algorithm first divides the item sequence s₁, …, sₙ into S_l = {sᵢ | (l − 1)n/k < i ≤ ln/k} for l = 1, …, k. In each segment, the algorithm selects the item of the largest expected marginal gain, that is, argmax{Δ(s|ψ_{l−1}) | s ∈ S_l}, where ψ_{l−1} is the partial realization obtained before the l-th segment. This can be implemented with only O(1) space by storing only the item of the maximal expected marginal gain so far in the current segment. We provide a theoretical guarantee on the performance of this algorithm by utilizing the policy-adaptive submodularity of the objective function.

Theorem 4.1. Suppose f : 2^{V×Y} → R≥0 is policy-adaptive submodular and adaptive monotone w.r.t. a prior p(φ). Assume the items come sequentially in a random order. For any policy π such that |ψ(π, φ)| ≤ k holds for all φ, AdaptiveStream selects k items using O(1) space and achieves at least 0.16 times the expected total gain of π in expectation.
4.2  Algorithm for the Secretary Setting
Though our proposed algorithm for the secretary setting is similar in approach to the one for the stream setting, it is impossible to select the item of the maximal expected marginal gain from each segment in the secretary setting. We therefore use the classical secretary algorithm [13] as a subroutine to obtain the maximal item at least with some constant probability. The classical secretary algorithm lets the first ⌊n/(ek)⌋ items pass and then selects the first item whose value is larger than those of all items so far; the probability that this subroutine selects the item of the largest expected marginal gain is at least 1/e in each segment. This algorithm can be viewed as an adaptive version of the algorithm for the monotone submodular secretary problem [3]. We give a guarantee similar to the one for the stream setting; a sketch of the per-segment subroutine follows the theorem.
Theorem 4.2. Suppose f : 2^{V×Y} → R≥0 is policy-adaptive submodular and adaptive monotone w.r.t. a prior p(φ). Assume the items come sequentially in a random order. For any policy π such that |ψ(π, φ)| ≤ k holds for all φ, AdaptiveSecretary selects at most k items and achieves at least 0.08 times the expected total gain of π in expectation.
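For reference, the per-segment subroutine is the classical 1/e-rule; a sketch over one segment of m = n/k items (our own code; value(s) stands for the expected marginal gain of s given the current observations, which is computable at arrival):

```python
import math

def secretary_select(segment, value):
    """Let the first floor(m/e) items pass, then pick the first one beating them."""
    m = len(segment)
    r = math.floor(m / math.e)
    threshold = max((value(s) for s in segment[:r]), default=float('-inf'))
    for s in segment[r:]:
        if value(s) > threshold:
            return s
    return segment[-1]   # no item beat the threshold: take the last one
```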
5  Overview of Theoretical Analysis
In this section we briefly describe the proofs of Theorems 4.1 and 4.2, and compare our techniques with the previous work. The full proofs are given in the supplementary materials.

The methods used in the proofs of both theorems are almost the same. They consist of two steps: in the first step, we bound the expected marginal gain of each item, and in the second step, we sum the one-step marginal gains and derive the overall bound for the algorithms. Though our techniques used in the second step are taken from the previous work [3], the first step contains several novel techniques.
Let Δᵢ be the expected marginal gain of the item picked from the i-th segment Sᵢ. First we bound it from below by the difference between the optimal pool-based policy π*_T for selecting k items from T and the policy π^σ_{i−1} that encodes the algorithm up to the (i−1)-th step under a permutation σ in which the items arrive. For the non-adaptive setting, the items in the optimal set are distributed among the segments uniformly at random, so we can evaluate Δᵢ by considering whether Sᵢ contains an item included in the optimal set [3]. On the other hand, in the adaptive setting, it is difficult to consider how π*_T is distributed over the unarrived items, because the policy is closely related not only to the contained items but also to the order of the items. We therefore compare Δᵢ and the marginal gain of π*_T directly. With the adaptive monotonicity, we obtain

    Δᵢ ≥ (1 − exp(−k/(k − i + 1))) · (f_avg(π*_T) − f_avg(π^σ_{i−1})) / k,

where f_avg(π) = E_Φ[f(ψ(π, Φ))].
Next we bound f_avg(π*_T) using the optimal pool-based policy π*_V that selects k items from V. For the non-adaptive setting, one can apply a widely-used lemma proved by Feige, Mirrokni, and Vondrák [15], which provides a bound on the expected value of a randomly deleted subset. To extend this lemma to the adaptive setting, we define a partially deleted policy tree, the grafted policy, and prove an adaptive version of the lemma using the policy-adaptive submodularity. From this lemma we can obtain the bound E_σ[f_avg(π*_T)] ≥ (k − i + 1)·f_avg(π*_V)/k. We also provide an example showing that adaptive submodularity is not enough to prove this lemma.

Summing the bounds on the one-step expected marginal gains up to the l-th step (l is specified in the full proof so as to optimize the resulting guarantees), we can conclude that our proposed algorithms achieve a constant factor approximation in comparison to the optimal pool-based policy. Though AdaptiveSecretary is the adaptive version of an existing algorithm, our resulting constant factor is a little worse than the original (1 − 1/e)/7 due to the above new analyses.
6  Experiments
6.1  Experimental Setting
We conducted experiments on budgeted active learning in the following three settings: the pool-based, stream, and secretary settings. For each setting, we compare two methods: one based on policy-adaptive submodularity, and the other based on uncertainty sampling as a baseline. Uncertainty sampling is the most widely-used approach in applications. Selecting random instances, which we call random, is also implemented as another baseline that can be used in every setting.
We select ALuMA [21] out of several pool-based methods based on adaptive submodularity, and convert it to the stream and secretary settings with AdaptiveStream and AdaptiveSecretary; we call the resulting methods stream submodular and secretary submodular, respectively. For comparison, we also implement the original pool-based method, which we call pool submodular. Though ALuMA is designed for the noiseless case, there is a modification that makes its hypothesis sampling more noise-tolerant [7], which we employ. The number of hypotheses sampled at each time is set to N = 1000 in all settings.
For the pool-based setting, uncertainty sampling is widely known as a generic and easy-to-implement heuristic in many applications. It selects the most uncertain instance, i.e., the instance that is closest to the current linear separator. In contrast, there is no standard heuristic for the stream and secretary settings. We apply the same conversion to the pool-based uncertainty sampling method as in AdaptiveStream and AdaptiveSecretary: in the stream setting, we select the most uncertain instance from the segment at each step, and in the secretary setting, we run the classical secretary algorithm to select the most uncertain instance at least with probability 1/e. An approach similar to the stream variant is used in some applications [26]. In every setting, we first randomly select 10 instances for the initial training of a classifier and after that select k − 10 instances with each method. We use the linear SVM trained with the instances labeled so far to judge the uncertainty. We call these methods pool uncertainty, stream uncertainty, and secretary uncertainty, respectively, and use them as baselines.
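As an illustration of the stream uncertainty baseline, the following sketch (our own code using scikit-learn's LinearSVC; the helper names are illustrative, k is assumed to divide n, and the initial labeled set is assumed to contain both classes) queries, in each segment, the instance closest to the current separator:

```python
import numpy as np
from sklearn.svm import LinearSVC

def stream_uncertainty(stream, n, k, oracle, X_init, y_init):
    """oracle(x) returns the true label of instance x (a feature vector)."""
    X_lab, y_lab = list(X_init), list(y_init)
    clf = LinearSVC().fit(np.asarray(X_lab), np.asarray(y_lab))
    best, best_score = None, float('-inf')
    for i, x in enumerate(stream, start=1):
        score = -abs(clf.decision_function(x.reshape(1, -1))[0])  # most uncertain
        if score > best_score:
            best, best_score = x, score
        if i % (n // k) == 0:               # segment boundary: query and retrain
            X_lab.append(best)
            y_lab.append(oracle(best))
            clf = LinearSVC().fit(np.asarray(X_lab), np.asarray(y_lab))
            best, best_score = None, float('-inf')
    return X_lab, y_lab
```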
We conducted experiments on two benchmark datasets, WDBC³ and MNIST⁴. The WDBC dataset contains 569 instances, each of which consists of 32-dimensional features of cells and their diagnosis
³ https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+(Diagnostic)
⁴ http://yann.lecun.com/exdb/mnist/
[Figure 2 appears here: (a) WDBC dataset, error rate vs. budget k ∈ {30, 40, 50}; (b) MNIST dataset, error rate vs. budget k ∈ {30, 40, 50}; (c) WDBC dataset, error rate vs. number of labels obtained (10 to 50); (d) MNIST dataset, error rate vs. number of labels obtained. Methods: random, pool uncertainty, pool submodular, stream uncertainty, stream submodular, secretary uncertainty, secretary submodular.]
Figure 2: Experimental results
results. From the MNIST dataset, the dataset of handwritten digits, we extract 14780 images of the
two classes, 0 and 1, so as to consider the binary classification problem, and apply PCA to reduce
its dimensions from 784 to 10. We standardize both datasets so that the values of each feature have
zero mean and unit variance.
We evaluate the performance over 100 trials, where each time the order in which the instances arrive is generated randomly. For all the methods, we calculate the error rate by training a linear SVM with the obtained labeled instances and testing on the entire dataset.
6.2  Experimental Results
Figures 2(a) and 2(b) illustrate the average error rate achieved by each method with budgets k = 30, 40, 50. Our methods stream submodular and secretary submodular outperform not only random, but also stream uncertainty and secretary uncertainty, respectively; i.e., the methods based on policy-adaptive submodularity perform better than the methods based on uncertainty sampling in each of the stream and secretary settings. Moreover, we can observe from the error bars representing the standard deviation that our methods are more stable than the other methods.

Figures 2(c) and 2(d) show how the error rate decreases as labels are queried in the case of k = 50. On both datasets, we observe that the performance of stream submodular is competitive with pool submodular.
7  Related Work
Stream-based active learning. A great deal of work has been devoted to devising algorithms for stream-based active learning (also known as selective sampling), from both the theoretical and practical sides. On the theoretical side, several bounds on the label complexity have been provided [16, 2, 4], but their interest lies in guarantees relative to passive learning, not relative to the optimal algorithm. On the practical side, it has been applied to many real world problems such as sentiment analysis of web stream data [26], spam filtering [25], part-of-speech tagging [10], and video surveillance [23], but there is no definitive widely-used heuristic.
Of particular relevance to our work is the one presented by Sabato and Hess [24]. They devised
general methods for constructing stream-based algorithms satisfying a budget based on pool-based
algorithms, but their theoretical guarantees bound the length of the stream needed to emulate
the pool-based algorithm, which differs substantially from our work. Das et al. [11] designed an
algorithm for adaptively collecting water samples, referring to the submodular secretary problem,
but they focused on applications to marine ecosystem monitoring and did not give any theoretical
analysis of its performance.
Adaptive submodular maximization. The framework of adaptive submodularity, an adaptive counterpart of submodularity, was established by Golovin and Krause [19]. It provides the simple
greedy algorithm with near-optimal guarantees in several adaptive real-world problems. In particular, it has achieved remarkable success in pool-based active learning. For the noiseless case, Golovin
and Krause [19] described the generalized binary search algorithm [12] as the greedy algorithm
for some adaptive submodular function, and improved its approximation factor. Golovin et al. [20]
provided an algorithm for Bayesian active learning with noisy observations by reducing it to the
equivalence class determination problem. On the other hand, there have been several studies on
adaptive submodular maximization in other settings, for example, selecting multiple instances at the
same time before observing their states [7], guessing an unknown prior distribution in the bandit
setting [18], and maximizing non-monotone adaptive submodular functions [22].
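For reference, the adaptive greedy policy of Golovin and Krause [19] can be sketched as follows; here `expected_gain` averages the marginal gain of an item over realizations consistent with the observations so far, and the function and state-sampling interfaces are our assumptions, not code from any of the cited papers:

```python
def adaptive_greedy(items, f, sample_states, observe, k, n_samples=100):
    """Pick k items, each maximizing the expected marginal gain of f
    conditioned on the partial realization observed so far."""
    selected, observations = [], {}

    def expected_gain(item):
        gains = []
        for _ in range(n_samples):
            states = sample_states(observations)        # draw a full realization
            base = f(selected, states)
            gains.append(f(selected + [item], states) - base)
        return sum(gains) / len(gains)

    for _ in range(k):
        best = max((i for i in items if i not in selected), key=expected_gain)
        selected.append(best)
        observations[best] = observe(best)              # reveal the item's state
    return selected
```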
Submodular maximization in the stream and secretary settings. Submodular maximization in
the stream setting, called streaming submodular maximization, has been studied under several constraints. Badanidiyuru et al. [1] provided a (1/2 − ε)-approximation algorithm that can be executed
in O(k log k) space under a cardinality constraint. For more general constraints, including matching
and multiple matroid constraints, Chakrabarti and Kale [5] proposed constant-factor approximation
algorithms. Chekuri et al. [6] devised algorithms for non-monotone submodular functions. A sketch of the single-threshold variant of [1] is given below.
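As an illustration, here is a minimal sketch of the sieve-streaming idea of Badanidiyuru et al. [1] for a cardinality constraint, in the simplified variant where an estimate `v` of the optimal value is assumed to be known (the full algorithm maintains a geometric grid of such estimates in parallel):

```python
def sieve_stream(stream, f, k, v):
    """Keep an element if its marginal gain clears the threshold
    (v/2 - f(S)) / (k - |S|); uses O(k) memory for one guess v of OPT."""
    S = []
    for e in stream:
        if len(S) >= k:
            break
        gain = f(S + [e]) - f(S)
        if gain >= (v / 2 - f(S)) / (k - len(S)):
            S.append(e)
    return S
```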
On the other hand, much effort has also been devoted to submodular maximization in the secretary setting, called the submodular secretary problem, under various constraints. Bateni et al. [3] first specified the
problem and provided algorithms for both monotone and non-monotone submodular secretary
problems under several constraints, one of which our methods are based on. Feldman et al. [14]
improved the constant factors of the theoretical guarantees for monotone cases.
8
Concluding Remarks
In this paper, we investigated stream-based active learning with a budget constraint from the viewpoint of
adaptive submodular maximization. To tackle this problem, we introduced the adaptive stochastic
maximization problem in the stream and secretary settings, which can formalize stream-based active
learning. We provided a new class of objective functions, policy-adaptive submodular functions, and
showed that this class contains many utility functions that have been used in pool-based active learning
and other applications. AdaptiveStream and AdaptiveSecretary, which we proposed in this paper, are simple algorithms guaranteed to be constant-factor competitive with the optimal pool-based
policy. We empirically demonstrated their performance by applying our algorithms to the budgeted
stream-based active learning problem, and our experimental results indicate their effectiveness compared to the existing methods.
There are two natural directions for future work. One is exploring the possibilities of the concept of
policy-adaptive submodularity. Studying the nature of this class may yield theoretical insights for other problems. Another is further developing the practical aspects of our results. In
real-world problems it sometimes happens that the items do not arrive in a random order. For example,
in sequential adaptive sensor placement [11], the order of items is restricted by some transportation
constraint. In this setting our guarantees do not hold and another algorithm is needed. In contrast to
the non-adaptive setting, even in the stream setting, it seems much more difficult to design a constant-factor
approximation algorithm, because the full information of each item is revealed only when
its state is observed, and memory is not as powerful as in the non-adaptive setting.
Acknowledgments
The second author is supported by Grant-in-Aid for Scientific Research on Innovative Areas, Exploration of nanostructure-property relationships for materials innovation.
References
[1] A. Badanidiyuru, B. Mirzasoleiman, A. Karbasi, and A. Krause. Streaming submodular maximization: Massive data summarization on the fly. Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pp. 671–680, 2014.
[2] M.-F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. Proceedings of the 23rd International Conference on Machine Learning (ICML), pp. 65–72, 2006.
[3] M. Bateni, M. Hajiaghayi, and M. Zadimoghaddam. Submodular secretary problem and extensions. ACM Transactions on Algorithms (TALG), 9(4):32, 2013.
[4] A. Beygelzimer, S. Dasgupta, and J. Langford. Importance-weighted active learning. Proceedings of the 26th International Conference on Machine Learning (ICML), pp. 49–56, 2009.
[5] A. Chakrabarti and S. Kale. Submodular maximization meets streaming: Matchings, matroids, and more. Mathematical Programming Series B, 154(1), pp. 225–247, 2015.
[6] C. Chekuri, S. Gupta, and K. Quanrud. Streaming algorithms for submodular function maximization. Automata, Languages, and Programming (ICALP), pp. 318–330, 2015.
[7] Y. Chen and A. Krause. Near-optimal batch mode active learning and adaptive submodular optimization. Proceedings of the 30th International Conference on Machine Learning (ICML), pp. 160–168, 2013.
[8] N. V. Cuong, W. S. Lee, and N. Ye. Near-optimal adaptive pool-based active learning with general loss. Uncertainty in Artificial Intelligence (UAI), 2014.
[9] N. V. Cuong, W. S. Lee, N. Ye, K. M. A. Chai, and H. L. Chieu. Active learning for probabilistic hypotheses using the maximum Gibbs error criterion. Advances in Neural Information Processing Systems (NIPS), pp. 1457–1465, 2013.
[10] I. Dagan and S. Engelson. Committee-based sampling for training probabilistic classifiers. Proceedings of the 12th International Conference on Machine Learning (ICML), pp. 150–157, 1995.
[11] J. Das, F. Py, J. B. J. Harvey, J. P. Ryan, A. Gellene, R. Graham, D. A. Caron, K. Rajan, and G. S. Sukhatme. Data-driven robotic sampling for marine ecosystem monitoring. The International Journal of Robotics Research, 34(12), pp. 1435–1452, 2015.
[12] S. Dasgupta. Analysis of a greedy active learning strategy. Advances in Neural Information Processing Systems (NIPS), pp. 337–344, 2004.
[13] E. B. Dynkin. The optimum choice of the instant for stopping a Markov process. Soviet Math. Dokl, 4, pp. 627–629, 1963.
[14] M. Feldman, J. S. Naor, and R. Schwartz. Improved competitive ratios for submodular secretary problems. Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX-RANDOM), pp. 218–229, 2011.
[15] U. Feige, V. Mirrokni, and J. Vondrák. Maximizing non-monotone submodular functions. SIAM Journal on Computing, 40(4), pp. 1133–1153, 2011.
[16] Y. Freund, H. S. Seung, E. Shamir, and N. Tishby. Selective sampling using the query by committee algorithm. Machine Learning, 28, pp. 133–168, 1997.
[17] S. Fujishige. Submodular Functions and Optimization, Second Edition. Annals of Discrete Mathematics, Vol. 58, Elsevier, 2005.
[18] V. Gabillon, B. Kveton, Z. Wen, B. Eriksson, and S. Muthukrishnan. Adaptive submodular maximization in bandit setting. Advances in Neural Information Processing Systems (NIPS), pp. 2697–2705, 2013.
[19] D. Golovin and A. Krause. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. Journal of Artificial Intelligence Research (JAIR), 42, pp. 427–486, 2011.
[20] D. Golovin, A. Krause, and D. Ray. Near-optimal Bayesian active learning with noisy observations. Advances in Neural Information Processing Systems (NIPS), pp. 766–774, 2010.
[21] A. Gonen, S. Sabato, and S. Shalev-Shwartz. Efficient active learning of halfspaces: An aggressive approach. The Journal of Machine Learning Research (JMLR), 14(1), pp. 2583–2615, 2013.
[22] A. Gotovos, A. Karbasi, and A. Krause. Non-monotone adaptive submodular maximization. Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI), pp. 1996–2003, 2015.
[23] C. C. Loy, T. M. Hospedales, T. Xiang, and S. Gong. Stream-based joint exploration-exploitation active learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
[24] S. Sabato and T. Hess. Interactive algorithms: From pool to stream. In Proceedings of the 29th Annual Conference on Learning Theory (COLT), pp. 1419–1439, 2016.
[25] D. Sculley. Online active learning methods for fast label-efficient spam filtering. Proceedings of Fourth Conference on Email and Anti-Spam (CEAS), 2007.
[26] J. Smailović, M. Grčar, N. Lavrač, and M. Žnidaršič. Stream-based active learning for sentiment analysis in the financial domain. Information Sciences, 285, pp. 181–203, 2014.
improved:3 though:4 marketing:1 chekuri:2 until:2 langford:2 sketch:1 hand:3 web:2 defines:1 mode:1 indicated:1 scientific:1 ye:2 concept:1 true:2 counterpart:1 iteratively:3 deal:1 erato:1 criterion:2 generalized:2 arrived:1 exdb:1 demonstrate:1 dedicated:1 passive:1 balcan:1 image:1 novel:1 common:2 viral:1 permuted:1 empirically:2 overview:1 function1:1 jp:2 analog:1 he:1 extend:1 ecosystem:2 hospedales:1 caron:1 counterexample:1 gibbs:2 queried:1 hess:2 feldman:2 rd:1 approx:1 similarly:1 mathematics:1 submodular:56 language:1 add:1 closest:1 recent:1 showed:1 optimizing:1 zadimoghaddam:1 driven:1 scenario:1 certain:1 harvey:1 binary:3 success:2 yi:3 determine:1 v3:3 full:4 multiple:3 kyoto:4 determination:1 offer:1 devised:2 prediction:1 breast:1 noiseless:4 expectation:2 vision:1 sometimes:1 achieved:1 cell:1 robotics:1 addition:1 want:1 krause:9 sabato:3 archive:1 probably:1 subject:1 fujishige:1 effectiveness:3 integer:1 call:4 near:4 revealed:2 recommends:1 enough:1 easy:1 reduce:3 idea:1 whether:4 pca:1 utility:15 effort:1 sentiment:3 speech:2 remark:1 amount:2 extensively:1 ph:1 http:2 sl:4 outperform:1 diagnostic:1 dummy:1 diagnosis:1 discrete:1 dasgupta:2 vol:1 rajan:1 ist:1 nevertheless:1 deleted:2 budgeted:6 v1:4 graph:1 monotone:12 convert:2 year:1 uncertainty:17 powerful:1 fourth:1 arrive:5 place:1 almost:1 decide:2 yann:1 vn:1 decision:1 graham:1 bound:8 guaranteed:1 oracle:1 annual:1 placement:3 constraint:11 encodes:1 aspect:4 innovative:1 concluding:1 developing:1 feige:2 making:1 s1:2 modification:1 happens:1 restricted:2 karbasi:2 taken:1 ln:2 discus:1 committee:2 needed:2 know:1 studying:1 apply:3 observe:6 v2:3 generic:1 appearing:2 distinguished:1 kashima:2 batch:1 original:3 denotes:1 running:1 instant:1 classical:4 objective:8 question:1 v5:3 strategy:2 mirrokni:2 guessing:1 whom:1 reason:1 water:1 assuming:2 length:3 relationship:1 ratio:1 innovation:1 loy:1 difficult:2 executed:1 grafted:1 statement:1 design:1 policy:47 unknown:3 perform:2 summarization:1 conversion:1 observation:10 datasets:6 markov:1 benchmark:3 anti:1 situation:1 y1:3 arbitrary:1 introduced:2 specified:2 established:1 barcelona:1 nip:5 bar:1 dokl:1 below:1 pattern:1 gonen:1 including:4 memory:4 video:2 natural:4 representing:2 brief:1 extract:1 sn:2 prior:7 discovery:1 xiang:1 wisconsin:1 freund:1 loss:1 permutation:1 icalp:1 filtering:3 remarkable:1 offered:1 informativeness:2 storing:1 cancer:1 summary:1 surprisingly:1 supported:1 keeping:1 free:1 cuong:2 dagan:1 taking:1 matroids:2 benefit:2 distributed:2 streambased:2 dimension:1 world:7 author:1 made:2 adaptive:90 spam:4 far:5 social:1 transaction:1 vondr:2 unreliable:1 monotonicity:5 ml:2 active:42 sequentially:4 tolerant:1 uai:1 summing:1 robotic:1 assumed:1 conclude:1 shwartz:1 search:2 nature:1 golovin:7 investigated:1 necessarily:1 separator:1 constructing:2 domain:1 da:2 did:1 main:2 bounding:1 noise:1 arise:1 arrival:2 definitive:1 edition:1 aid:1 candidate:1 lie:1 jmlr:1 theorem:4 svm:2 gupta:1 consist:1 mnist:4 sequential:1 importance:1 budget:9 nk:1 chen:1 wdbc:3 contained:1 v6:3 partially:1 chieu:1 corresponds:1 satisfies:1 determines:1 acm:2 lth:2 goal:2 viewed:1 sculley:1 considerable:1 change:1 included:1 infinite:1 specifically:1 uniformly:1 reducing:1 talg:1 lemma:6 called:3 total:3 pas:1 experimental:4 formally:1 select:13 people:3 arises:1 relevance:1 evaluate:2 |
5,568 | 6,039 | Sequential Neural Models with Stochastic Layers
Marco Fraccaro†
Søren Kaae Sønderby‡
Ulrich Paquet*
Ole Winther†‡
† Technical University of Denmark
‡ University of Copenhagen
* Google DeepMind
Abstract
How can we efficiently propagate uncertainty in a latent state representation with
recurrent neural networks? This paper introduces stochastic recurrent neural
networks which glue a deterministic recurrent neural network and a state space
model together to form a stochastic and sequential neural generative model. The
clear separation of deterministic and stochastic layers allows a structured variational
inference network to track the factorization of the model's posterior distribution.
By retaining both the nonlinear recursive structure of a recurrent neural network
and averaging over the uncertainty in a latent path, like a state space model, we
improve the state of the art results on the Blizzard and TIMIT speech modeling data
sets by a large margin, while achieving comparable performances to competing
methods on polyphonic music modeling.
1
Introduction
Recurrent neural networks (RNNs) are able to represent long-term dependencies in sequential data,
by adapting and propagating a deterministic hidden (or latent) state [5, 16]. There is recent evidence
that when complex sequences such as speech and music are modeled, the performances of RNNs can
be dramatically improved when uncertainty is included in their hidden states [3, 4, 7, 11, 12, 15]. In
this paper we add a new direction to the explorer's map of treating the hidden RNN states as uncertain
paths, by including the world of state space models (SSMs) as an RNN layer. By cleanly delineating
a SSM layer, certain independence properties of variables arise, which are beneficial for making
efficient posterior inferences. The result is a generative model for sequential data, with a matching
inference network that has its roots in variational auto-encoders (VAEs).
SSMs can be viewed as a probabilistic extension of RNNs, where the hidden states are assumed to
be random variables. Although SSMs have an illustrious history [24], their stochasticity has limited
their widespread use in the deep learning community, as inference can only be exact for two relatively
simple classes of SSMs, namely hidden Markov models and linear Gaussian models, neither of
which are well-suited to modeling long-term dependencies and complex probability distributions
over high-dimensional sequences. On the other hand, modern RNNs rely on gated nonlinearities
such as long short-term memory (LSTM) [16] cells or gated recurrent units (GRUs) [6], that let the
deterministic hidden state of the RNN act as an internal memory for the model. This internal memory
seems fundamental to capturing complex relationships in the data through a statistical model.
This paper introduces the stochastic recurrent neural network (SRNN) in Section 3. SRNNs combine
the gated activation mechanism of RNNs with the stochastic states of SSMs, and are formed by
stacking a RNN and a nonlinear SSM. The state transitions of the SSM are nonlinear and are
parameterized by a neural network that also depends on the corresponding RNN hidden state. The
SSM can therefore utilize long-term information captured by the RNN.
We use recent advances in variational inference to efficiently approximate the intractable posterior
distribution over the latent states with an inference network [19, 23]. The form of our variational
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
[Figure 1: Graphical models to generate x_{1:T} with (a) a recurrent neural network (RNN) and (b) a state space
model (SSM). Diamond-shaped units are used for deterministic states, while circles are used for
stochastic ones. For sequence generation, like in a language model, one can use u_t = x_{t-1}.]
approximation is inspired by the independence properties of the true posterior distribution over the
latent states of the model, and allows us to improve inference by conveniently using the information
coming from the whole sequence at each time step. The posterior distribution over the latent states of
the SRNN is highly non-stationary while we are learning the parameters of the model. To further
improve the variational approximation, we show that we can construct the inference network so that
it only needs to learn how to compute the mean of the variational approximation at each time step
given the mean of the predictive prior distribution.
In Section 4 we test the performances of SRNN on speech and polyphonic music modeling tasks.
SRNN improves the state of the art results on the Blizzard and TIMIT speech data sets by a large
margin, and performs comparably to competing models on polyphonic music modeling. Finally,
other models that extend RNNs by adding stochastic units will be reviewed and compared to SRNN
in Section 5.
2
Recurrent Neural Networks and State Space Models
Recurrent neural networks and state space models are widely used to model temporal sequences
of vectors x_{1:T} = (x_1, x_2, . . . , x_T) that possibly depend on inputs u_{1:T} = (u_1, u_2, . . . , u_T). Both
models rest on the assumption that the sequence x_{1:t} of observations up to time t can be summarized
by a latent state d_t or z_t, which is deterministically determined (d_t in a RNN) or treated as a random
variable which is averaged away (z_t in a SSM). The difference in treatment of the latent state has
traditionally led to vastly different models: RNNs recursively compute d_t = f(d_{t-1}, u_t) using a
parameterized nonlinear function f, like a LSTM cell or a GRU. The RNN observation probabilities
p(x_t | d_t) are equally modeled with nonlinear functions. SSMs, like linear Gaussian or hidden Markov
models, explicitly model uncertainty in the latent process through z_{1:T}. Parameter inference in a
SSM requires z_{1:T} to be averaged out, and hence p(z_t | z_{t-1}, u_t) and p(x_t | z_t) are often restricted
to the exponential family of distributions to make many existing approximate inference algorithms
applicable. On the other hand, averaging a function over the deterministic path d_{1:T} in a RNN is a
trivial operation. The striking similarity in factorization between these models is illustrated in Figures
1a and 1b.
Can we combine the best of both worlds, and make the stochastic state transitions of SSMs nonlinear
whilst keeping the gated activation mechanism of RNNs? Below, we show that a more expressive
model can be created by stacking a SSM on top of a RNN, and that by keeping them layered, the
functional form of the true posterior distribution over z1:T guides the design of a backward-recursive
structured variational approximation.
3
Stochastic Recurrent Neural Networks
We define a SRNN as a generative model p_θ by temporally interlocking a SSM with a RNN, as
illustrated in Figure 2a. The joint probability of a single sequence and its latent states, assuming
knowledge of the starting states z_0 = 0 and d_0 = 0, and inputs u_{1:T}, factorizes as
[Figure 2: (a) The SRNN as a generative model p_θ for a sequence x_{1:T}. (b) Posterior inference of z_{1:T} and d_{1:T}
is done through an inference network q_φ, which uses a backward-recurrent state a_t to approximate
the nonlinear dependence of z_t on future observations x_{t:T} and states d_{t:T}; see Equation (7).]
    p_θ(x_{1:T}, z_{1:T}, d_{1:T} | u_{1:T}, z_0, d_0) = p_{θ_x}(x_{1:T} | z_{1:T}, d_{1:T}) p_{θ_z}(z_{1:T} | d_{1:T}, z_0) p_{θ_d}(d_{1:T} | u_{1:T}, d_0)
                                                = ∏_{t=1}^{T} p_{θ_x}(x_t | z_t, d_t) p_{θ_z}(z_t | z_{t-1}, d_t) p_{θ_d}(d_t | d_{t-1}, u_t).    (1)
The SSM and RNN are further tied with skip-connections from d_t to x_t. The joint density in (1) is
parameterized by θ = {θ_x, θ_z, θ_d}, which will be adapted together with parameters φ of a so-called
"inference network" q_φ to best model N independently observed data sequences {x^i_{1:T_i}}_{i=1}^N that are
described by the log marginal likelihood or evidence

    L(θ) = log p_θ({x^i_{1:T_i}} | {u^i_{1:T_i}, z^i_0, d^i_0}_{i=1}^N) = Σ_i log p_θ(x^i_{1:T_i} | u^i_{1:T_i}, z^i_0, d^i_0) = Σ_i L_i(θ).    (2)
Throughout the paper, we omit superscript i when only one sequence is referred to, or when it is
clear from the context. In each log likelihood term L_i(θ) in (2), the latent states z_{1:T} and d_{1:T}
were averaged out of (1). Integrating out d_{1:T} is done by simply substituting its deterministically
obtained value, but z_{1:T} requires more care, and we return to it in Section 3.2. Following Figure 2a,
the states d_{1:T} are determined from d_0 and u_{1:T} through the recursion d_t = f_{θ_d}(d_{t-1}, u_t). In our
implementation f_{θ_d} is a GRU network with parameters θ_d. For later convenience we denote the value
of d_{1:T}, as computed by application of f_{θ_d}, by d̃_{1:T}. Therefore p_θ(d_t | d_{t-1}, u_t) = δ(d_t − d̃_t), i.e.
d_{1:T} follows a delta distribution centered at d̃_{1:T}.
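As a minimal sketch of this deterministic recursion (assuming a standard GRU cell; the helper `gru_cell` below is illustrative, not the authors' implementation):

```python
import numpy as np

def gru_cell(d_prev, u_t, W):
    """One GRU step; W holds the update (z), reset (r) and candidate (h) weights."""
    x = np.concatenate([d_prev, u_t])
    z = 1 / (1 + np.exp(-(W["z"] @ x + W["bz"])))          # update gate
    r = 1 / (1 + np.exp(-(W["r"] @ x + W["br"])))          # reset gate
    h = np.tanh(W["h"] @ np.concatenate([r * d_prev, u_t]) + W["bh"])
    return (1 - z) * d_prev + z * h

def deterministic_states(u_seq, W, d0):
    """Compute the deterministic states from d_0 and u_{1:T} via d_t = f(d_{t-1}, u_t)."""
    d, out = d0, []
    for u_t in u_seq:
        d = gru_cell(d, u_t, W)
        out.append(d)
    return out
```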
Unlike the VRNN [7], z_t directly depends on z_{t-1}, as it does in a SSM, via p_{θ_z}(z_t | z_{t-1}, d_t). This
split makes a clear separation between the deterministic and stochastic parts of p_θ; the RNN remains
entirely deterministic and its recurrent units do not depend on noisy samples of z_t, while the prior
over z_{1:T} follows the Markov structure of SSMs. The split allows us to later mimic the structure of
(p)
(p)
distribution p?z (zt |zt?1 , dt ) = N (zt ; ?t , vt ) be a Gaussian with a diagonal covariance matrix,
whose mean and log-variance are parameterized by neural networks that depend on zt?1 and dt ,
(p)
?t
(p)
(p)
= NN1 (zt?1 , dt ) ,
log vt
(p)
= NN2 (zt?1 , dt ) ,
(p)
NN1
(3)
(p)
NN2 ,
where NN denotes a neural network. Parameters ?z denote all weights of
and
which
are two-layer feed-forward networks in our implementation. Similarly, the parameters of the emission
distribution p?x (xt |zt , dt ) depend on zt and dt through a similar neural network that is parameterized
by ?x .
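A minimal sketch of this parameterization, together with the reparameterized sampling used later (the two-layer MLP and its parameter layout below are illustrative assumptions):

```python
import numpy as np

def mlp2(x, W1, b1, W2, b2):
    """Two-layer feed-forward network with tanh hidden units."""
    return W2 @ np.tanh(W1 @ x + b1) + b2

def prior_transition(z_prev, d_t, params):
    """Mean and log-variance of p(z_t | z_{t-1}, d_t), Eq. (3)."""
    h = np.concatenate([z_prev, d_t])
    mu_p = mlp2(h, *params["nn1_p"])       # mu_t^(p)
    log_v_p = mlp2(h, *params["nn2_p"])    # log v_t^(p)
    return mu_p, log_v_p

def sample_z(mu, log_v, rng):
    """Draw z ~ N(mu, diag(exp(log_v))) with the reparameterization trick."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_v) * eps
```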
3.1
Variational inference for the SRNN
The stochastic variables z_{1:T} of the nonlinear SSM cannot be analytically integrated out to obtain
L(θ) in (2). Instead of maximizing L with respect to θ, we maximize a variational evidence lower
bound (ELBO) F(θ, φ) = Σ_i F_i(θ, φ) ≤ L(θ) with respect to both θ and the variational parameters
φ [17]. The ELBO is a sum of lower bounds F_i(θ, φ) ≤ L_i(θ), one for each sequence i,

    F_i(θ, φ) = ∫∫ q_φ(z_{1:T}, d_{1:T} | x_{1:T}, A) log [ p_θ(x_{1:T}, z_{1:T}, d_{1:T} | A) / q_φ(z_{1:T}, d_{1:T} | x_{1:T}, A) ] dz_{1:T} dd_{1:T},    (4)

where A = {u_{1:T}, z_0, d_0} is a notational shorthand. Each sequence's approximation q_φ shares
parameters φ with all others, to form the auto-encoding variational Bayes inference network or
variational auto-encoder (VAE) [19, 23] shown in Figure 2b. Maximizing F(θ, φ), which we
call "training" the neural network architecture with parameters θ and φ, is done by stochastic
gradient ascent, and in doing so, both the posterior and its approximation q_φ change simultaneously.
All the intractable expectations in (4) would typically be approximated by sampling, using the
reparameterization trick [19, 23] or control variates [22] to obtain low-variance estimators of its
gradients. We use the reparameterization trick in our implementation. Iteratively maximizing F over
θ and φ separately would yield an expectation maximization-type algorithm, which has formed a
backbone of statistical modeling for many decades [8]. The tightness of the bound depends on how
well we can approximate the i = 1, . . . , N factors p_θ(z^i_{1:T_i}, d^i_{1:T_i} | x^i_{1:T_i}, A^i) that constitute the true
posterior over all latent variables with their corresponding factors q_φ(z^i_{1:T_i}, d^i_{1:T_i} | x^i_{1:T_i}, A^i). In what
follows, we show how q_φ could be judiciously structured to match the posterior factors.
We add initial structure to q_φ by noticing that the prior p_{θ_d}(d_{1:T} | u_{1:T}, d_0) in the generative model
is a delta function over d̃_{1:T}, and so is the posterior p_θ(d_{1:T} | x_{1:T}, u_{1:T}, d_0). Consequently, we let
the inference network use exactly the same deterministic state setting d̃_{1:T} as that of the generative
model, and we decompose it as

    q_φ(z_{1:T}, d_{1:T} | x_{1:T}, u_{1:T}, z_0, d_0) = q_φ(z_{1:T} | d_{1:T}, x_{1:T}, z_0) q(d_{1:T} | x_{1:T}, u_{1:T}, d_0),    (5)

with q(d_{1:T} | x_{1:T}, u_{1:T}, d_0) = p_{θ_d}(d_{1:T} | u_{1:T}, d_0).
This choice exactly approximates one delta function by itself, and simplifies the ELBO by letting
them cancel out. By further taking the outer average in (4), one obtains

    F_i(θ, φ) = E_{q_φ}[ log p_θ(x_{1:T} | z_{1:T}, d̃_{1:T}) ] − KL( q_φ(z_{1:T} | d̃_{1:T}, x_{1:T}, z_0) ‖ p_θ(z_{1:T} | d̃_{1:T}, z_0) ),    (6)

which still depends on θ_d, u_{1:T} and d_0 via d̃_{1:T}. The first term is an expected log likelihood
under q_φ(z_{1:T} | d̃_{1:T}, x_{1:T}, z_0), while KL denotes the Kullback-Leibler divergence between two
distributions. Having stated the second factor in (5), we now turn our attention to parameterizing the
first factor in (5) to resemble its posterior equivalent, by exploiting the temporal structure of p_θ.
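Since both the prior and the approximation are diagonal Gaussians at each step, the KL term in (6), and later in (8), has a closed form. A sketch:

```python
import numpy as np

def kl_diag_gaussians(mu_q, log_v_q, mu_p, log_v_p):
    """KL( N(mu_q, diag(exp(log_v_q))) || N(mu_p, diag(exp(log_v_p))) ),
    summed over dimensions."""
    return 0.5 * np.sum(
        log_v_p - log_v_q
        + (np.exp(log_v_q) + (mu_q - mu_p) ** 2) / np.exp(log_v_p)
        - 1.0
    )
```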
3.2
Exploiting the temporal structure
The true posterior distribution of the stochastic states z_{1:T}, given both the data and the deterministic
states d_{1:T}, factorizes as p_θ(z_{1:T} | d_{1:T}, x_{1:T}, u_{1:T}, z_0) = ∏_t p_θ(z_t | z_{t-1}, d_{t:T}, x_{t:T}). This can be
verified by considering the conditional independence properties of the graphical model in Figure 2a
using d-separation [13]. This shows that, knowing z_{t-1}, the posterior distribution of z_t does not
depend on the past outputs and deterministic states, but only on the present and future ones; this was
also noted in [20]. Instead of factorizing q_φ as a mean-field approximation across time steps, we keep
the structured form of the posterior factors, including z_t's dependence on z_{t-1}, in the variational
approximation

    q_φ(z_{1:T} | d_{1:T}, x_{1:T}, z_0) = ∏_t q_φ(z_t | z_{t-1}, d_{t:T}, x_{t:T}) = ∏_t q_{φ_z}(z_t | z_{t-1}, a_t = g_{φ_a}(a_{t+1}, [d_t, x_t])),    (7)
where [d_t, x_t] is the concatenation of the vectors d_t and x_t. The graphical model for the inference
network is shown in Figure 2b. Apart from the direct dependence of the posterior approximation at
time t on both d_{t:T} and x_{t:T}, the distribution also depends on d_{1:t-1} and x_{1:t-1} through z_{t-1}. We
mimic each posterior factor's nonlinear long-term dependence on d_{t:T} and x_{t:T} through a backward-recurrent
function g_{φ_a}, shown in (7), which we will return to in greater detail in Section 3.3. The
inference network in Figure 2b is therefore parameterized by φ = {φ_z, φ_a} and θ_d.
In (7) all time steps are taken into account when constructing the variational approximation at time
t; this can therefore be seen as a smoothing problem. In our experiments we also consider filtering,
where only the information up to time t is used to define q_φ(z_t | z_{t-1}, d_t, x_t). As the parameters φ
are shared across time steps, we can easily handle sequences of variable length in both cases.
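The backward recurrence that summarizes the future for the smoothing posterior can be sketched as follows, reusing the illustrative `gru_cell` helper from the earlier sketch, run in reverse time:

```python
import numpy as np

def backward_states(d_seq, x_seq, W, aT1):
    """Compute a_t = g(a_{t+1}, [d_t, x_t]) for t = T, ..., 1, with a_{T+1} = aT1 = 0."""
    a, out = aT1, []
    for d_t, x_t in zip(reversed(d_seq), reversed(x_seq)):
        a = gru_cell(a, np.concatenate([d_t, x_t]), W)
        out.append(a)
    return out[::-1]   # a_1, ..., a_T in forward order
```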
As both the generative model and inference network factorize over time steps in (1) and (7), the
ELBO in (6) separates as a sum over the time steps

    F_i(θ, φ) = Σ_t E_{q*_φ(z_{t-1})}[ E_{q_φ(z_t | z_{t-1}, d̃_{t:T}, x_{t:T})}[ log p_θ(x_t | z_t, d̃_t) ]
                − KL( q_φ(z_t | z_{t-1}, d̃_{t:T}, x_{t:T}) ‖ p_θ(z_t | z_{t-1}, d̃_t) ) ],    (8)
where q*_φ(z_{t-1}) denotes the marginal distribution of z_{t-1} in the variational approximation to the
posterior q_φ(z_{1:t-1} | d̃_{1:T}, x_{1:T}, z_0), given by

    q*_φ(z_{t-1}) = ∫ q_φ(z_{1:t-1} | d̃_{1:T}, x_{1:T}, z_0) dz_{1:t-2} = E_{q*_φ(z_{t-2})}[ q_φ(z_{t-1} | z_{t-2}, d̃_{t-1:T}, x_{t-1:T}) ].    (9)
We can interpret (9) as having a VAE at each time step t, with the VAE being conditioned on the past
through the stochastic variable z_{t-1}. To compute (8), the dependence on z_{t-1} needs to be integrated
out, using our posterior knowledge at time t − 1 which is given by q*_φ(z_{t-1}). We approximate the
outer expectation in (8) using a Monte Carlo estimate, as samples from q*_φ(z_{t-1}) can be efficiently
obtained by ancestral sampling. The sequential formulation of the inference model in (7) allows
such samples to be drawn and reused: given a sample z^{(s)}_{t-2} from q*_φ(z_{t-2}), a sample z^{(s)}_{t-1} from
q_φ(z_{t-1} | z^{(s)}_{t-2}, d̃_{t-1:T}, x_{t-1:T}) will be distributed according to q*_φ(z_{t-1}).
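Putting the pieces together, a single-sample Monte Carlo estimate of (8) can be sketched as follows, reusing the illustrative helpers from the earlier sketches; the posterior networks `nn1_q`/`nn2_q` mirror Eq. (10) below, and `log_lik` (the emission term log p(x_t | z_t, d_t)) is an assumed helper, not the authors' code:

```python
def elbo_one_sample(x_seq, d_seq, a_seq, params, rng):
    """One-sample estimate of F_i in Eq. (8), with ancestral sampling of z_{t-1}."""
    z = params["z0"]
    elbo = 0.0
    for x_t, d_t, a_t in zip(x_seq, d_seq, a_seq):
        mu_p, log_v_p = prior_transition(z, d_t, params)    # Eq. (3)
        h = np.concatenate([z, a_t])
        mu_q = mlp2(h, *params["nn1_q"])                    # Eq. (10)
        log_v_q = mlp2(h, *params["nn2_q"])
        z = sample_z(mu_q, log_v_q, rng)                    # reparameterized draw
        elbo += log_lik(x_t, z, d_t, params)                # log p(x_t | z_t, d_t)
        elbo -= kl_diag_gaussians(mu_q, log_v_q, mu_p, log_v_p)
    return elbo
```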
3.3
Parameterization of the inference network
The variational distribution q_φ(z_t | z_{t-1}, d_{t:T}, x_{t:T}) needs to approximate the dependence of the
true posterior p_θ(z_t | z_{t-1}, d_{t:T}, x_{t:T}) on d_{t:T} and x_{t:T}, and as alluded to in (7), this is done by
running a RNN with inputs d̃_{t:T} and x_{t:T} backwards in time. Specifically, we initialize the hidden
state of the backward-recursive RNN in Figure 2b as a_{T+1} = 0, and recursively compute
a_t = g_{φ_a}(a_{t+1}, [d̃_t, x_t]). The function g_{φ_a} represents a recurrent neural network with, for example,
LSTM or GRU units. Each sequence's variational approximation factorizes over time with
q_φ(z_{1:T} | d_{1:T}, x_{1:T}, z_0) = ∏_t q_{φ_z}(z_t | z_{t-1}, a_t), as shown in (7). We let q_{φ_z}(z_t | z_{t-1}, a_t) be a Gaussian
with diagonal covariance, whose mean and log-variance are parameterized with φ_z as

    μ_t^{(q)} = NN_1^{(q)}(z_{t-1}, a_t),    log v_t^{(q)} = NN_2^{(q)}(z_{t-1}, a_t).    (10)
Instead of smoothing, we can also do filtering by using a neural network to approximate the dependence of the true posterior p_θ(z_t | z_{t-1}, d_t, x_t) on d_t and x_t, through for instance a_t = NN^{(a)}(d_t, x_t).
Improving the posterior approximation. In our experiments we found that during training, the parameterization introduced in (10) can lead to small values of the KL term
KL( q_φ(z_t | z_{t-1}, a_t) ‖ p_θ(z_t | z_{t-1}, d̃_t) ) in the ELBO in (8). This happens when g_{φ_a} in the inference
network does not rely on the information propagated back from future outputs in a_t, but is mostly
using the hidden state d̃_t to imitate the behavior of the prior. The inference network could therefore
get stuck by trying to optimize the ELBO through sampling from the prior of the model, making
the variational approximation to the posterior useless. To overcome this issue, we directly include
some knowledge of the predictive prior dynamics in the parameterization of the inference network,
using our approximation of the posterior distribution q*_φ(z_{t-1}) over the previous latent states. In the
spirit of sequential Monte Carlo methods [10], we improve the parameterization of q_φ(z_t | z_{t-1}, a_t)
by using q*_φ(z_{t-1}) from (9). As we are constructing the variational distribution sequentially, we
approximate the predictive prior mean, i.e. our "best guess" on the prior dynamics of z_t, as

    μ̂_t^{(p)} = ∫ NN_1^{(p)}(z_{t-1}, d_t) p(z_{t-1} | x_{1:T}) dz_{t-1} ≈ ∫ NN_1^{(p)}(z_{t-1}, d_t) q*_φ(z_{t-1}) dz_{t-1},    (11)

where we used the parameterization of the prior distribution in (3). We estimate the integral required
to compute μ̂_t^{(p)} by reusing the samples that were needed for the Monte Carlo estimate of the ELBO
in (8). This predictive prior mean can then be used in the parameterization of the mean of the
variational approximation q_φ(z_t | z_{t-1}, a_t),

    μ_t^{(q)} = μ̂_t^{(p)} + NN_1^{(q)}(z_{t-1}, a_t),    (12)

and we refer to this parameterization as Res_q in the results in Section 4. Rather than directly learning
μ_t^{(q)}, we learn the residual between μ̂_t^{(p)} and μ_t^{(q)}. It is straightforward to show that with this
parameterization the KL term in (8) will not depend on μ̂_t^{(p)}, but only on NN_1^{(q)}(z_{t-1}, a_t).
Learning the residual improves inference, making it seemingly easier for the inference network to track changes
in the generative model while the model is trained, as it will only have to learn how to "correct" the predictive
prior dynamics by using the information coming from d̃_{t:T} and x_{t:T}. We did not see any improvement in results
by parameterizing log v_t^{(q)} in a similar way. The inference procedure of SRNN with the Res_q
parameterization for one sequence is summarized in Algorithm 1.

Algorithm 1 Inference of SRNN with Res_q parameterization from (12).
1: inputs: d̃_{1:T} and a_{1:T}
2: initialize z_0
3: for t = 1 to T do
4:   μ̂_t^{(p)} = NN_1^{(p)}(z_{t-1}, d̃_t)
5:   μ_t^{(q)} = μ̂_t^{(p)} + NN_1^{(q)}(z_{t-1}, a_t)
6:   log v_t^{(q)} = NN_2^{(q)}(z_{t-1}, a_t)
7:   z_t ∼ N(z_t; μ_t^{(q)}, v_t^{(q)})
8: end for
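A direct translation of Algorithm 1 into Python, reusing the illustrative helpers from the earlier sketches (again a sketch, not the released code):

```python
def srnn_inference(d_seq, a_seq, params, rng):
    """Algorithm 1: sample z_{1:T} with the Res_q parameterization of Eq. (12)."""
    z = params["z0"]
    z_samples = []
    for d_t, a_t in zip(d_seq, a_seq):
        mu_hat_p, _ = prior_transition(z, d_t, params)      # line 4
        h = np.concatenate([z, a_t])
        mu_q = mu_hat_p + mlp2(h, *params["nn1_q"])         # line 5 (residual)
        log_v_q = mlp2(h, *params["nn2_q"])                 # line 6
        z = sample_z(mu_q, log_v_q, rng)                    # line 7
        z_samples.append(z)
    return z_samples
```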
4
Results
In this section the SRNN is evaluated on the modeling of speech and polyphonic music data, as they
have shown to be difficult to model without a good representation of the uncertainty in the latent
states [3, 7, 11, 12, 15]. We test SRNN on the Blizzard [18] and TIMIT raw audio data sets (Table 1)
used in [7]. The preprocessing of the data sets and the testing performance measures are identical
to those reported in [7]. Blizzard is a dataset of 300 hours of English, spoken by a single female
speaker. TIMIT is a dataset of 6300 English sentences read by 630 speakers. As done in [7], for
Blizzard we report the average log-likelihood for half-second sequences and for TIMIT we report
the average log likelihood per sequence for the test set sequences. Note that the sequences in the
TIMIT test set are on average 3.1s long, and therefore 6 times longer than those in Blizzard. For
the raw audio datasets we use a fully factorized Gaussian output distribution. Additionally, we test
SRNN for modeling sequences of polyphonic music (Table 2), using the four data sets of MIDI
songs introduced in [4]. Each data set contains more than 7 hours of polyphonic music of varying
complexity: folk tunes (Nottingham data set), the four-part chorales by J. S. Bach (JSB chorales),
orchestral music (MuseData) and classical piano music (Piano-midi.de). For polyphonic music we
use a Bernoulli output distribution to model the binary sequences of piano notes. In our experiments
we set u_t = x_{t-1}, but u_t could also be used to represent additional input information to the model.
All models where implemented using Theano [2], Lasagne [9] and Parmesan1 . Training using a
NVIDIA Titan X GPU took around 1.5 hours for TIMIT, 18 hours for Blizzard, less than 15 minutes
for the JSB chorales and Piano-midi.de data sets, and around 30 minutes for the Nottingham and
MuseData data sets. To reduce the computational requirements we use only 1 sample to approximate
all the intractable expectations in the ELBO (notice that the KL term can be computed analytically).
Further implementation and experimental details can be found in the Supplementary Material.
Blizzard and TIMIT. Table 1 compares the average log-likelihood per test sequence of SRNN to
the results from [7]. For RNNs and VRNNs the authors of [7] test two different output distributions,
namely a Gaussian distribution (Gauss) and a Gaussian Mixture Model (GMM). VRNN-I differs
from the VRNN in that the prior over the latent variables is independent across time steps, and it is
therefore similar to STORN [3]. For SRNN we compare the smoothing and filtering performance
(denoted as smooth and filt in Table 1), both with the residual term from (12) and without it (10)
(denoted as Resq if present). We prefer to only report the more conservative evidence lower bound
for SRNN, as the approximation of the log-likelihood using standard importance sampling is known
to be difficult to compute accurately in the sequential setting [10]. We see from Table 1 that SRNN
outperforms all the competing methods for speech modeling. As the test sequences in TIMIT are
on average more than 6 times longer than the ones for Blizzard, the results obtained with SRNN for
1
github.com/casperkaae/parmesan. The code for SRNN is available at github.com/marcofraccaro/srnn.
Table 1: Average log-likelihood per sequence on the test sets. For TIMIT the average test set
length is 3.1s, while the Blizzard sequences are all 0.5s long. The non-SRNN results are
reported as in [7]. Smooth: g_{φ_a} is a GRU running backwards; filt: g_{φ_a} is a feed-forward
network; Res_q: parameterization with residual in (12).

Models               Blizzard            TIMIT
SRNN (smooth+Res_q)  ≥ 11991             ≥ 60550
SRNN (smooth)        ≥ 10991             ≥ 59269
SRNN (filt+Res_q)    ≥ 10572             ≥ 52126
SRNN (filt)          ≥ 10846             ≥ 50524
VRNN-GMM             ≥ 9107, ≈ 9392      ≥ 28982, ≈ 29604
VRNN-Gauss           ≥ 9223, ≈ 9516      ≥ 28805, ≈ 30235
VRNN-I-Gauss         ≥ 8933, ≈ 9188      ≥ 28340, ≈ 29639
RNN-GMM              7413                26643
RNN-Gauss            3539                −1900

[Figure 3: Visualization of the average KL term and reconstructions of the output mean and log-variance
for two examples from the Blizzard test set.]
Models               Nottingham   JSB chorales   MuseData   Piano-midi.de
SRNN (smooth+Res_q)  ≥ −2.94      ≥ −4.74        ≥ −6.28    ≥ −8.20
TSBN                 ≥ −3.67      ≥ −7.48        ≥ −6.81    ≥ −7.98
NASMC                ≈ −2.72      ≈ −3.99        ≈ −6.89    ≈ −7.61
STORN                ≈ −2.85      ≈ −6.91        ≈ −6.16    ≈ −7.13
RNN-NADE             ≈ −2.31      ≈ −5.19        ≈ −5.60    ≈ −7.05
RNN                  ≈ −4.46      ≈ −8.71        ≈ −8.13    ≈ −8.37

Table 2: Average log-likelihood on the test sets. The TSBN results are from [12], NASMC from [15],
STORN from [3], RNN-NADE and RNN from [4].
TIMIT are in line with those obtained for Blizzard. The VRNN, which performs well when the voice
of the single speaker from Blizzard is modeled, seems to encounter difficulties when modeling the
630 speakers in the TIMIT data set. As expected, for SRNN the variational approximation that is
obtained when future information is also used (smoothing) is better than the one obtained by filtering.
Learning the residual between the prior mean and the mean of the variational approximation, given in
(12), further improves the performance in 3 out of 4 cases.
In the first two lines of Figure 3 we plot two raw signals from the Blizzard test set and the average
KL term between the variational approximation and the prior distribution. We see that the KL
term increases whenever there is a transition in the raw audio signal, meaning that the inference
network is using the information coming from the output symbols to improve inference. Finally, the
reconstructions of the output mean and log-variance in the last two lines of Figure 3 look consistent
with the original signal.
Polyphonic music. Table 2 compares the average log-likelihood on the test sets obtained with
SRNN and the models introduced in [3, 4, 12, 15]. As done for the speech data, we prefer to report the
more conservative estimate of the ELBO in Table 2, rather than approximating the log-likelihood with
importance sampling as some of the other methods do. We see that SRNN performs comparably to
other state of the art methods in all four data sets. We report the results using smoothing and learning
the residual between the mean of the predictive prior and the mean of the variational approximation,
but the performances using filtering and directly learning the mean of the variational approximation
are now similar. We believe that this is due to the small amount of data and the fact that modeling
MIDI music is much simpler than modeling raw speech signals.
5
Related work
A number of works have extended RNNs with stochastic units to model motion capture, speech
and music data [3, 7, 11, 12, 15]. The performances of these models are highly dependent on how
the dependence among stochastic units is modeled over time, on the type of interaction between
stochastic units and deterministic ones, and on the procedure that is used to evaluate the typically
intractable log likelihood. Figure 4 highlights how SRNN differs from some of these works.
In STORN [3] (Figure 4a) and DRAW [14] the stochastic units at each time step have an isotropic
Gaussian prior and are independent between time steps. The stochastic units are used as an input
to the deterministic units in a RNN. As in our work, the reparameterization trick [19, 23] is used to
optimize an ELBO.
[Figure 4: Generative models of x_{1:T} that are related to SRNN: (a) STORN, (b) VRNN, (c) Deep Kalman Filter.
For sequence modeling it is typical to set u_t = x_{t-1}.]
The authors of the VRNN [7] (Figure 4b) note that it is beneficial to add information coming from the past
states to the prior over latent variables z_t. The VRNN lets the prior p_{θ_z}(z_t | d_t) over the stochastic units
depend on the deterministic units d_t, which in turn depend on both the deterministic and the stochastic units at
the previous time step through the recursion d_t = f(d_{t-1}, z_{t-1}, u_t). The SRNN differs by clearly separating
the deterministic and stochastic part, as shown in Figure 2a. The separation of deterministic and stochastic
units allows us to improve the posterior approximation by doing smoothing, as the stochastic units
still depend on each other when we condition on d_{1:T}. Because the inference and generative
networks in the VRNN share the deterministic units, the variational approximation would not improve
by making it dependent on the future through a_t, when calculated with a backward GRU, as we
do in our model. Unlike STORN, DRAW and VRNN, the SRNN separates the "noisy" stochastic
units from the deterministic ones, forming an entire layer of interconnected stochastic units. We
found in practice that this gave better performance and was easier to train. The works by [1, 20]
(Figure 4c) show that it is possible to improve inference in SSMs by using ideas from VAEs, similar
to what is done in the stochastic part (the top layer) of SRNN. Towards the periphery of related
works, [15] approximates the log likelihood of a SSM with sequential Monte Carlo, by learning
flexible proposal distributions parameterized by deep networks, while [12] uses a recurrent model
with discrete stochastic units that is optimized using the NVIL algorithm [21].
6
Conclusion
This work has shown how to extend the modeling capabilities of recurrent neural networks by
combining them with nonlinear state space models. Inspired by the independence properties of the
intractable true posterior distribution over the latent states, we designed an inference network in a
principled way. The variational approximation for the stochastic layer was improved by using the
information coming from the whole sequence and by using the Res_q parameterization to help the
inference network to track the non-stationary posterior. SRNN achieves state of the art performances
on the Blizzard and TIMIT speech data sets, and performs comparably to competing methods for
polyphonic music modeling.
Acknowledgements
We thank Casper Kaae Sønderby and Lars Maaløe for many fruitful discussions, and NVIDIA
Corporation for the donation of TITAN X and Tesla K40 GPUs. Marco Fraccaro is supported by
Microsoft Research through its PhD Scholarship Programme.
References
[1] E. Archer, I. M. Park, L. Buesing, J. Cunningham, and L. Paninski. Black box variational inference for state space models. arXiv:1511.07367, 2015.
[2] F. Bastien, P. Lamblin, R. Pascanu, J. Bergstra, I. Goodfellow, A. Bergeron, N. Bouchard, D. Warde-Farley, and Y. Bengio. Theano: new features and speed improvements. arXiv:1211.5590, 2012.
[3] J. Bayer and C. Osendorfer. Learning stochastic recurrent networks. arXiv:1411.7610, 2014.
[4] N. Boulanger-Lewandowski, Y. Bengio, and P. Vincent. Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription. arXiv:1206.6392, 2012.
[5] K. Cho, B. van Merriënboer, Ç. Gülçehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In EMNLP, pages 1724–1734, 2014.
[6] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv:1412.3555, 2014.
[7] J. Chung, K. Kastner, L. Dinh, K. Goel, A. C. Courville, and Y. Bengio. A recurrent latent variable model for sequential data. In NIPS, pages 2962–2970, 2015.
[8] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1), 1977.
[9] S. Dieleman, J. Schlüter, C. Raffel, E. Olson, S. K. Sønderby, D. Nouri, E. Battenberg, and A. van den Oord. Lasagne: First release, 2015.
[10] A. Doucet, N. de Freitas, and N. Gordon. An introduction to sequential Monte Carlo methods. In Sequential Monte Carlo Methods in Practice, Statistics for Engineering and Information Science. 2001.
[11] O. Fabius and J. R. van Amersfoort. Variational recurrent auto-encoders. arXiv:1412.6581, 2014.
[12] Z. Gan, C. Li, R. Henao, D. E. Carlson, and L. Carin. Deep temporal sigmoid belief networks for sequence modeling. In NIPS, pages 2458–2466, 2015.
[13] D. Geiger, T. Verma, and J. Pearl. Identifying independence in Bayesian networks. Networks, 20:507–534, 1990.
[14] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. DRAW: A recurrent neural network for image generation. In ICML, 2015.
[15] S. Gu, Z. Ghahramani, and R. E. Turner. Neural adaptive sequential Monte Carlo. In NIPS, pages 2611–2619, 2015.
[16] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, Nov. 1997.
[17] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[18] S. King and V. Karaiskos. The Blizzard challenge 2013. In The Ninth Annual Blizzard Challenge, 2013.
[19] D. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.
[20] R. G. Krishnan, U. Shalit, and D. Sontag. Deep Kalman filters. arXiv:1511.05121, 2015.
[21] A. Mnih and K. Gregor. Neural variational inference and learning in belief networks. arXiv:1402.0030, 2014.
[22] J. W. Paisley, D. M. Blei, and M. I. Jordan. Variational Bayesian inference with stochastic search. In ICML, 2012.
[23] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, pages 1278–1286, 2014.
[24] S. Roweis and Z. Ghahramani. A unifying review of linear Gaussian models. Neural Computation, 11(2):305–45, 1999.
5,569 | 604 | Metamorphosis Networks:
An Alternative to Constructive Methods
Brian V. Bonnlander
Michael C. Mozer
Department of Computer Science &
Institute of Cognitive Science
University of Colorado
Boulder, CO 80309-0430
Abstract
Given a set of training examples, determining the appropriate number of free parameters is a challenging problem. Constructive
learning algorithms attempt to solve this problem automatically by
adding hidden units, and therefore free parameters, during learning. We explore an alternative class of algorithms, called metamorphosis algorithms, in which the number of units is fixed, but
the number of free parameters gradually increases during learning.
The architecture we investigate is composed of RBF units on a lattice, which imposes flexible constraints on the parameters of the
network. Virtues of this approach include variable subset selection, robust parameter selection, multiresolution processing, and
interpolation of sparse training data.
1
INTRODUCTION
Generalization performance on a fixed-size training set is closely related to the
number of free parameters in a network. Selecting either too many or too few
parameters can lead to poor generalization. Geman et al. (1992) refer to this
problem as the bias/variance dilemma: introducing too many free parameters incurs
high variance in the set of possible solutions, and restricting the network to too few
free parameters incurs high bias in the set of possible solutions.
Constructive learning algorithms (e.g., Fahlman & Lebiere, 1990; Platt, 1991) have
[Figure 1 diagram: input layer, RBF unit layer, output layer]
Figure 1: Architecture of an RBF network.
been proposed as a way of automatically selecting the number of free parameters in
the network during learning. In these approaches, the learning algorithm gradually
increases the number of free parameters by adding hidden units to the network.
The algorithm stops adding hidden units when some validation criterion indicates
that network performance is good enough.
We explore an alternative class of algorithms, called metamorphosis algorithms, for which the number of units is fixed, but heavy initial constraints are placed on
the unit response properties. During learning, the constraints are gradually relaxed,
increasing the flexibility of the network. Within this general framework, we develop a learning algorithm that builds the virtues of recursive partitioning strategies
(Breiman et al., 1984; Friedman, 1991) into a Radial Basis Function (RBF) network architecture. We argue that this framework offers two primary advantages
over constructive RBF networks: for problems with low input variable interaction,
it can find solutions with far fewer free parameters, and it is less susceptible to noise
in the training data. Other virtues include multiresolution processing and built-in
interpolation of sparse training data.
Section 2 introduces notation for RBF networks and reviews the advantages of
using these networks in constructive learning. Section 3 describes the idea behind
metamorphosis algorithms and how they can be combined with RBF networks.
Section 4 describes the advantages of this class of algorithm. The final section
suggests directions for further research.
2
RBF NETWORKS
RBF networks have been used successfully for learning difficult input-output mappings such as phoneme recognition (Wettschereck & Dietterich, 1991), digit classification (Nowlan, 1990), and time series prediction (Moody & Darken, 1989; Platt,
1991). The basic architecture is shown in Figure 1. The response properties of each
RBF unit are determined by a set of parameter values, which we'll call a pset. The pset for unit $i$, denoted $r_i$, includes: the center location of the RBF unit in the input space, $\mu_i$; the width of the unit, $\sigma_i$; and the strength of the connection(s) from the RBF unit to the output unit(s), $h_i$.
One reason why RBF networks work well with constructive algorithms is because
the hidden units have the property of noninterference: the nature of their activation
functions, typically Gaussian, allows new RBF units to be added without changing
the global input-output mapping already learned by the network.
However, the advantages of constructive learning with RBF networks diminish for
problems with high-dimensional input spaces (Hartman & Keeler, 1991). For these
problems, a large number of RBF units are needed to cover the input space, even
when the number of input dimensions relevant for the problem is small. The relevant input dimensions can be different for different parts of the input space, which
limits the usefulness of a global estimation of input dimension relevance, as in Poggio and Girosi (1990). Metamorphosis algorithms, on the other hand, allow RBF
networks to solve problems such as these without introducing a large number of free
parameters.
3
METAMORPHOSIS ALGORITHMS
Metamorphosis networks contrast with constructive learning algorithms in that the
number of units in the network remains fixed, but degrees of freedom are gradually
added during learning. While metamorphosis networks have not been explored in
the context of supervised learning, there is at least one instance of a metamorphosis
network in unsupervised learning: a Kohonen net. Units in a Kohonen net are
arranged on a lattice; updating the weights of a unit causes weight updates of the
unit's neighbors. Units nearby on the lattice are thereby forced to have similar
responses, reducing the effective number of free parameters in the network. In one
variant of Kohonen net learning, the neighborhood of each unit gradually shrinks,
increasing the degrees of freedom in the network.
3.1
MRBF NETWORKS
We have applied the concept of metamorphosis algorithms to ordinary RBF networks in supervised learning, yielding MRBF networks. Units are arranged on an
n-dimensional lattice, where $n$ is picked ahead of time and is unrelated to the dimensionality of the input space. The response of RBF unit $i$ is constrained by deriving its pset, $r_i$, from a collection of underlying psets, each denoted $u_j$, that also reside on the lattice. The elements of $u_j$ correspond to those of $r_i$: $u_j = (\mu_j, \sigma_j, h_j)$.
Due to the orderly arrangement of the $u_j$, the lattice is divided into nonoverlapping hyperrectangular regions that are bounded by $2^n$ of the $u_j$. Consequently, each $r_i$ is enclosed by $2^n$ underlying psets. The pset $r_i$ can then be derived by linear interpolation of the enclosing underlying psets $u_j$, as shown in Figure 2 for a one-dimensional lattice.
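To make the interpolation concrete, the following sketch derives a constrained pset by linear interpolation between its two enclosing underlying psets on a one-dimensional lattice, as in Figure 2. The function and variable names here are illustrative assumptions, not code from the paper.

```python
import numpy as np

def interpolate_pset(lattice_pos, u_left, u_right, pos_left, pos_right):
    """Linearly interpolate a pset r_i = (mu, sigma, h) between the two
    enclosing underlying psets on a 1-D lattice.

    u_left, u_right: dicts with keys 'mu', 'sigma', 'h' (underlying psets)
    lattice_pos: lattice coordinate of RBF unit i, with
                 pos_left <= lattice_pos <= pos_right.
    """
    t = (lattice_pos - pos_left) / (pos_right - pos_left)  # in [0, 1]
    return {k: (1.0 - t) * u_left[k] + t * u_right[k]
            for k in ("mu", "sigma", "h")}

# Example: four RBF units constrained by two underlying psets (cf. Figure 2).
u1 = {"mu": np.array([0.0]), "sigma": 0.5, "h": 1.0}
u2 = {"mu": np.array([1.0]), "sigma": 0.2, "h": -0.5}
psets = [interpolate_pset(i, u1, u2, 0, 3) for i in range(4)]
```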
Learning in MRBF networks proceeds by minimizing an error function $E$ in the $u_j$ components via gradient descent:
$$\frac{\partial E}{\partial \mu_{jk}} \;=\; \sum_{i \in \mathrm{NEIGH}_j} \frac{\partial E}{\partial \mu_{ik}}\,\frac{\partial \mu_{ik}}{\partial \mu_{jk}},$$
where $\mathrm{NEIGH}_j$ is the set of RBF units whose values are affected by underlying pset $j$, and $k$ indexes the input units of the network. The update expression is similar for $\sigma_j$ and $h_j$.
Figure 2: Constrained RBF units. (a) Four RBF units with psets $r_1$-$r_4$ are arranged on a one-dimensional lattice, enclosed by underlying psets $u_1$ and $u_2$. (b) An input space representation of the constrained RBF units. RBF center locations, widths, and heights are linearly interpolated.
To better condition the search space, instead of optimizing the $\sigma_i$ directly, we follow Nowlan and Hinton's (1991) suggestion of computing each RBF unit width according to the transformation $\sigma_i = \exp(\gamma_i/2)$ and searching for the optimum value of $\gamma_i$. This forces RBF widths to remain positive and makes it difficult for a width to approach zero.
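As a hedged sketch of the learning rule above, the gradient with respect to an underlying pset accumulates the gradients of its neighboring RBF units, weighted by the (constant) interpolation coefficients; the $\exp(\gamma/2)$ reparameterization of the widths is shown as well. Variable names and the error-gradient interface are illustrative assumptions.

```python
import numpy as np

def underlying_grad(dE_dmu, neigh, coeff):
    """Chain-rule gradient for one underlying pset component.

    dE_dmu[i]: gradient of the error E w.r.t. the center of RBF unit i
    neigh:     indices of RBF units whose psets depend on this underlying pset
    coeff[i]:  interpolation weight d(mu_i)/d(mu_j), a constant in [0, 1]
    """
    return sum(coeff[i] * dE_dmu[i] for i in neigh)

def sigma_from_gamma(gamma):
    # Optimize gamma; recover sigma = exp(gamma / 2), which is always > 0.
    return np.exp(gamma / 2.0)
```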
When a local optimum is reached, either learning is stopped or additional underlying
psets are placed on the lattice in a process called metamorphosis.
3.2
METAMORPHOSIS
Metamorphosis is the process that gradually adds new degrees of freedom to the
network during learning. For the MRBF network explored in this paper, introducing new free parameters corresponds to placing additional underlying psets on the lattice. The new psets split one hyperrectangular region (an n-dimensional sublattice bounded by $2^n$ underlying psets) into two nonoverlapping hyperrectangular regions. To achieve this, $2^{n-1}$ additional underlying psets, which we call the split group, are required (Figure 3). The splitting process implements a recursive partitioning strategy similar to the strategies employed in the CART (Breiman et al., 1984) and MARS (Friedman, 1991) statistical learning algorithms.
Many possible rules for region splitting exist. In the simulations presented later,
we consider every possible region and every possible split of the region into two
subregions. For each split group k, we compute the tension of the split, defined as
$$\mathrm{tension}(k) \;=\; \sum_{j \in \mathrm{SPLIT}_k} \left\lVert \frac{\partial E}{\partial u_j} \right\rVert^2.$$
We then select the split group that has the greatest tension. This heuristic is based
on the assumption that the error gradient at the point in weight space where a split
would take place reflects the long-term benefit of that split.
It may appear that this splitting process is computationally expensive, but it can be implemented quite efficiently; the cost of computing all possible splits and choosing the best one is linear in the number of RBF units on the lattice.
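The rule above admits a direct implementation: compute the tension of every candidate split group from the current error gradients and take the argmax. The sketch below assumes the gradients with respect to the underlying psets are already available as flat arrays; the names are illustrative, not the authors' code.

```python
import numpy as np

def tension(split_group, dE_du):
    """Sum of squared gradient norms over the psets in one split group.

    dE_du[j]: flattened gradient of E w.r.t. underlying pset j.
    """
    return sum(np.dot(dE_du[j], dE_du[j]) for j in split_group)

def best_split(candidate_groups, dE_du):
    """Greedy rule: split the region whose split group has greatest tension."""
    return max(candidate_groups, key=lambda g: tension(g, dE_du))
```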
[Figure 3 legend: RBF unit pset; split group pset; underlying pset; lattice region boundary; derivative of RBF pset; derivative of split group pset]
Figure 3: Computing the tension of a split group. Arrows are meant to represent
derivatives of corresponding pset components.
4
VIRTUES OF METAMORPHOSIS NETS
4.1
VARIABLE SUBSET SELECTION
One advantage of MRBF networks is that they can perform variable subset selection;
that is, they can select a subset of input dimensions more relevant to the problem
and ignore the other input dimensions. This is also a property of other recursive
partitioning algorithms such as CART and MARS. In MRBF networks, however,
region splitting occurs on a lattice structure, rather than in the input space. Consequently, the learning algorithm can orient a small number of regions to fit data
that is not aligned with the lattice to begin with. CART and MARS have to create
many regions to fit this kind of data (Friedman, 1991).
To see if this style of learning algorithm could learn to solve a difficult problem, we
trained an MRBF network on the Mackey-Glass chaotic time series. Figure 4(a)
compares normalized RMS error on the test set with Platt's (1991) RAN algorithm
as the number of parameters increases during learning. Although RAN eventually
finds a superior solution, the MRBF network requires a much smaller number of
free parameters to find a reasonably accurate solution. This result agrees with the
idea that ordinary RBF networks must use many free parameters to cover an input
space with RBF units, whereas MRBF networks may use far fewer by concentrating
resources on only the most relevant input dimensions.
4.2
ROBUST PARAMETER SELECTION
In RBF networks, the local response of a hidden unit makes it difficult for back propagation to move RBF centers far from where they are originally placed. Consequently, the choice of initial RBF center locations is critical for constructive algorithms.
[Figure 4 plots: (a) normalized RMS test error vs. degrees of freedom (0-900) for RAN and MRBF; (b) degrees of freedom (20-70) vs. noise level (0-0.7) for RAN and MRBF]
Figure 4: (a) Comparison on the Mackey-Glass chaotic time series. The curves for RAN and MRBF represent an average over ten and three simulation runs, respectively. The simulations used 300 training patterns and 500 test patterns as described in (Platt 1991). Simulation parameters for RAN match those reported in (Platt 1991) with $\epsilon = 0.02$. (b) Gaussian noise was added to the function $y = \sin 8\pi x$, $0 < x < 1$, where the task was to predict $y$ given $x$. The horizontal axis represents the standard deviation of the Gaussian distribution. For both algorithms, 20 simulations were run at each noise level. The number of degrees of freedom (DOF) needed to achieve a fixed error level was averaged.
Poor choices could result in the allocation of more RBF units than are
necessary. One apparent weakness of the RAN algorithm is that it chooses RBF
center locations based on individual examples, which makes it susceptible to noise.
Metamorphosis in MRBF networks, on the other hand, is based on the more global
measure of tension.
Figure 4(b) shows the average number of degrees of freedom allocated by RAN
and an MRBF network on a simple, one-dimensional function approximation task.
Gaussian noise was added to the target output values in the training and test sets.
As the amount of noise increases, the average number of free parameters allocated
by RAN also increases, whereas for the MRBF network, the average remains low.
One interesting property of RAN is that allocating many extra RBF units does not
necessarily hurt generalization performance. This is true when RAN starts with
wide RBF units and decreases the widths of candidate RBF units slowly. The main
disadvantage to this approach is wasted computational resources.
4.3
MULTIRESOLUTION PROCESSING
Our approach has the property of initially finding solutions sensitive to coarse problem features and using these solutions to find refinements more sensitive to finer
features (Figure 5). This idea of multiresolution processing has been studied in the
context of computer vision relaxation algorithms and is a property of algorithms
proposed by other authors (e.g. Moody, 1989, Platt, 1991).
[Figure 5 panels: (a) two underlying psets; (b) three underlying psets; (c) five underlying psets]
Figure 5: Example of multiresolution processing. The figure shows performance on a
two-dimensional classification task, where the goal is to classify all inputs inside the
U-shape as belonging to the same category. An MRBF network is constrained using
a one-dimensional lattice. Circles represent RBF widths, and squares represent the
height of each RBF.
4.4
INTERPOLATION OF SPARSE TRAINING DATA
For a problem with sparse training data, it is often necessary to make assumptions
about the appropriate response at points in the input space far away from the
training data. Like nearest-neighbor algorithms, MRBF networks have such an
assumption built in. The constrained RBF units in the network serve to interpolate
the values of underlying psets (Figure 6). Although ordinary RBF networks can,
in principle, interpolate between sparse data points, the local response of an RBF
unit makes it difficult to find this sort of solution by back propagation.
[Figure 6 plots: training data, MRBF network output, 1-nearest-neighbor assumption, plain RBF network output]
Figure 6: Assumptions made for sparse training data on a task with a one-dimensional input space and one-dimensional output space. Target output values
are marked with an 'x'. Like nearest-neighbor algorithms, the assumption made by
MRBF networks causes network response to interpolate between sparse data points.
This assumption is not built into ordinary RBF networks.
5
DIRECTIONS FOR FURTHER RESEARCH
In our simulations to date, we have not observed astonishingly better generalization
performance with metamorphosis nets than with alternative approaches, such as
Platt's RAN algorithm. Nonetheless, we believe the approach worthy of further
exploration. We've examined but one type of metamorphosis net and in only a few
domains. The sorts of investigations we are considering next include: substituting
finite-element basis functions for RBFs, implementing a "soft" version of the RBF
pset constraint using regularization techniques, and using a supervised learning
algorithm similar to Kohonen networks, where updating the weights of a unit causes
weight updates of the unit's neighbors.
Acknowledgements
This research was supported by NSF PYI award IRI-9058450 and grant 90-21 from the
James S. McDonnell Foundation. We thank John Platt for providing the Mackey-Glass
time series data, and Chris Williams, Paul Smolensky, and the members of the Boulder
Connectionist Research Group for helpful discussions.
References
L. Breiman, J. Friedman, R. A. Olshen & C. J. Stone. (1984) Classification and Regression Trees. Belmont, CA: Wadsworth.
S. E. Fahlman & C. Lebiere. (1990) The cascade-correlation learning architecture. In D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 2, 524-532. San Mateo, CA: Morgan Kaufmann.
J. Friedman. (1991) Multivariate Adaptive Regression Splines. Annals of Statistics 19:1-141.
S. Geman, E. Bienenstock & R. Doursat. (1992) Neural networks and the bias/variance dilemma. Neural Computation 4(1):1-58.
E. Hartman & J. D. Keeler. (1991) Predicting the future: advantages of semilocal units. Neural Computation 3(4):566-578.
T. Kohonen. (1982) Self-organized formation of topologically correct feature maps. Biological Cybernetics 43:59-69.
J. Moody & C. Darken. (1989) Fast learning in networks of locally-tuned processing units. Neural Computation 1(2):281-294.
J. Moody. (1989) Fast learning in multi-resolution hierarchies. In D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 1, 29-39. San Mateo, CA: Morgan Kaufmann.
S. J. Nowlan. (1990) Maximum likelihood competition in RBF networks. Tech. Rep. CRG-TR-90-2, Department of Computer Science, University of Toronto, Toronto, Canada.
S. J. Nowlan & G. Hinton. (1991) Adaptive soft weight-tying using Gaussian mixtures. In Moody, Hanson, & Lippmann (eds.), Advances in Neural Information Processing Systems 4, 993-1000. San Mateo, CA: Morgan Kaufmann.
J. Platt. (1991) A resource-allocating network for function interpolation. Neural Computation 3(2):213-225.
T. Poggio & F. Girosi. (1990) Regularization algorithms for learning that are equivalent to multilayer networks. Science 247:978-982.
D. Wettschereck & T. Dietterich. (1991) Improving the performance of radial basis function networks by learning center locations. In Moody, Hanson, & Lippmann (eds.), Advances in Neural Information Processing Systems 4, 1133-1140. San Mateo, CA: Morgan Kaufmann.
| 604 |@word [vw bag-of-words token:count features omitted] |
5,570 | 6,040 | Stochastic Gradient Methods for Distributionally
Robust Optimization with f-divergences
Hongseok Namkoong
Stanford University
[email protected]
John C. Duchi
Stanford University
[email protected]
Abstract
We develop efficient solution methods for a robust empirical risk minimization
problem designed to give calibrated confidence intervals on performance and
provide optimal tradeoffs between bias and variance. Our methods apply to distributionally robust optimization problems proposed by Ben-Tal et al., which put
more weight on observations inducing high loss via a worst-case approach over a
non-parametric uncertainty set on the underlying data distribution. Our algorithm
solves the resulting minimax problems with nearly the same computational cost
of stochastic gradient descent through the use of several carefully designed data
structures. For a sample of size n, the per-iteration cost of our method scales as
O(log n), which allows us to give optimality certificates that distributionally robust
optimization provides at little extra cost compared to empirical risk minimization
and stochastic gradient methods.
1
Introduction
In statistical learning or other data-based decision-making problems, it is desirable to give solutions
that come with guarantees on performance, at least to some specified confidence level. For tasks
such as driving or medical diagnosis where safety and reliability are crucial, confidence levels
have additional importance. Classical techniques in machine learning and statistics, including
regularization, stability, concentration inequalities, and generalization guarantees [6, 25] provide
such guarantees, though often a more fine-tuned certificate?one with calibrated confidence?is
desirable. In this paper, we leverage techniques from the robust optimization literature [e.g. 2],
building an uncertainty set around the empirical distribution of the data and studying worst case
performance in this uncertainty set. Recent work [15, 13] shows how this approach can give (i)
calibrated statistical optimality certificates for stochastic optimization problems, (ii) performs a
natural type of regularization based on the variance of the objective and (iii) achieves fast rates of
convergence under more general conditions than empirical risk minimization by trading off bias
(approximation error) and variance (estimation error) optimally. In this paper, we propose efficient
algorithms for such distributionally robust optimization problems.
We now provide our formal setting. Let $\mathcal{X} \subset \mathbb{R}^d$ be a compact convex set, and for a convex function $f : \mathbb{R}_+ \to \mathbb{R}$ with $f(1) = 0$, define the $f$-divergence between distributions $P$ and $Q$ by $D_f(P\|Q) = \int f(\frac{dP}{dQ})\,dQ$. Letting $\mathcal{P}_{\rho,n} := \{p \in \mathbb{R}^n : p^\top \mathbf{1} = 1,\ p \geq 0,\ D_f(p\,\|\,\mathbf{1}/n) \leq \rho/n\}$ be an uncertainty set around the uniform distribution $\mathbf{1}/n$, we develop methods for solving the robust empirical risk minimization problem
$$\underset{x \in \mathcal{X}}{\text{minimize}} \;\; \sup_{p \in \mathcal{P}_{\rho,n}} \sum_{i=1}^n p_i \ell_i(x). \qquad (1)$$
In problem (1), the functions $\ell_i : \mathcal{X} \to \mathbb{R}_+$ are convex and subdifferentiable, and we consider the situation in which $\ell_i(x) = \ell(x; \xi_i)$ for $\xi_i \stackrel{\mathrm{iid}}{\sim} P_0$. We let $\ell(x) = [\ell_1(x) \cdots \ell_n(x)]^\top \in \mathbb{R}^n$ denote the vector of convex losses, so the robust objective (1) is $\sup_{p \in \mathcal{P}_{\rho,n}} p^\top \ell(x)$.
A number of authors show how the robust formulation (1) provides guarantees. Duchi et al. [15] show that the objective (1) is a convex approximation to regularizing the empirical risk by variance,
$$\sup_{p \in \mathcal{P}_{\rho,n}} \sum_{i=1}^n p_i \ell_i(x) \;=\; \frac{1}{n}\sum_{i=1}^n \ell_i(x) + \sqrt{\frac{\rho}{n}\,\mathrm{Var}_{P_0}(\ell(x;\xi))} + o_{P_0}(n^{-\frac{1}{2}}) \qquad (2)$$
uniformly in $x \in \mathcal{X}$. Since the right hand side naturally trades off good loss performance (approximation error) and minimizing variance (estimation error), which is usually non-convex, the robust formulation (1) provides a convex regularization for the standard empirical risk minimization (ERM) problem. This trading between bias and variance leads to certificates on the optimal value $\inf_{x\in\mathcal{X}} \mathbb{E}_{P_0}[\ell(x;\xi)]$ so that under suitable conditions, we have
$$\lim_{n\to\infty} \mathbb{P}\Big( \inf_{x\in\mathcal{X}} \mathbb{E}_{P_0}[\ell(x;\xi)] \leq u_n \Big) = \mathbb{P}(W \leq \sqrt{\rho}) \quad \text{for } W \sim \mathsf{N}(0,1) \qquad (3)$$
where $u_n = \inf_{x\in\mathcal{X}} \sup_{p\in\mathcal{P}_{\rho,n}} p^\top \ell(x)$ is the optimal robust objective. Duchi and Namkoong [13] provide finite sample guarantees for the special case that $f(t) = \frac{1}{2}(t-1)^2$, making the expansion (2) more explicit and providing a number of consequences for estimation and optimization based on this expansion (including fast rates for risk minimization). A special case of their results [13, Sec. 3.1] is as follows. Let $\hat{x}_{\mathrm{rob}} \in \mathrm{argmin}_{x\in\mathcal{X}} \sup_{p\in\mathcal{P}_{\rho,n}} p^\top \ell(x)$, let $\mathrm{VC}(\mathcal{F})$ denote the VC-(subgraph)-dimension of the class of functions $\mathcal{F} := \{\ell(x;\cdot) \mid x \in \mathcal{X}\}$, assume that $M \geq \ell(x;\xi)$ for all $x\in\mathcal{X}$, $\xi\in\Xi$, and for some fixed $\delta > 0$, define $\tau = \log\frac{1}{\delta} + 10\,\mathrm{VC}(\mathcal{F})\log\mathrm{VC}(\mathcal{F})$. Then, with probability at least $1-\delta$,
$$\mathbb{E}_{P_0}[\ell(\hat{x}_{\mathrm{rob}};\xi)] \;\leq\; u_n + O(1)\frac{M\tau}{n} \;\leq\; \inf_{x\in\mathcal{X}}\Bigg\{ \mathbb{E}_{P_0}[\ell(x;\xi)] + 2\sqrt{\frac{2\tau\,\mathrm{Var}_{\hat{P}_n}(\ell(x;\xi))}{n}} \Bigg\} + O(1)\frac{M\tau}{n}. \qquad (4)$$
For large $n$, evaluating the objective (1) may be expensive; with fixed $p = \mathbf{1}/n$, this has motivated an extensive literature in stochastic and online optimization [27, 23, 19, 16, 18]. The problem (1) does not admit quite such a straightforward approach. A first idea, common in the robust optimization literature [3], is to obtain a problem that may be written as a sum of individual terms by taking the dual of the inner supremum, yielding the convex problem
$$\inf_{x\in\mathcal{X}} \sup_{p\in\mathcal{P}_{\rho,n}} p^\top \ell(x) \;=\; \inf_{x\in\mathcal{X},\,\lambda\geq 0,\,\eta\in\mathbb{R}} \Bigg\{ \frac{\lambda}{n}\sum_{i=1}^n f^*\Big(\frac{\ell_i(x)-\eta}{\lambda}\Big) + \frac{\lambda\rho}{n} + \eta \Bigg\}. \qquad (5)$$
Here $f^*(s) = \sup_{t\geq 0}\{st - f(t)\}$ is the Fenchel conjugate of the convex function $f$. While the above dual reformulation is jointly convex in $(x, \lambda, \eta)$, canonical stochastic gradient descent (SGD) procedures [23] generally fail because the variance of the objective (and its subgradients) explodes as $\lambda \to 0$. (This is not just a theoretical issue: in extensive simulations that we omit because they are a bit boring, SGD and other heuristic approaches that impose shrinking bounds of the form $\lambda_t \geq c_t > 0$ at each iteration $t$ all fail to optimize the objective (5).)
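To see what the dual (5) looks like concretely, the sketch below evaluates it for the $\chi^2$ case $f(t) = \frac{1}{2}(t-1)^2$, whose conjugate on $t \geq 0$ works out to $f^*(s) = \frac{1}{2}[s+1]_+^2 - \frac{1}{2}$. This is an illustration of the formula, not the authors' implementation; minimizing over $(\lambda, \eta)$ is done here by a crude grid, and, as the text warns, a naive SGD over this objective behaves poorly as $\lambda \to 0$.

```python
import numpy as np

def f_conj_chi2(s):
    # Conjugate of f(t) = 0.5 * (t - 1)^2 restricted to t >= 0:
    # f*(s) = 0.5 * max(s + 1, 0)^2 - 0.5
    return 0.5 * np.maximum(s + 1.0, 0.0) ** 2 - 0.5

def dual_objective(losses, lam, eta, rho):
    """Right-hand side of (5) for a fixed x (losses = [l_1(x), ..., l_n(x)])."""
    n = len(losses)
    return lam / n * np.sum(f_conj_chi2((losses - eta) / lam)) + lam * rho / n + eta

# Example: grid search over (lam, eta) for one loss vector.
losses = np.random.rand(100)
vals = [(dual_objective(losses, lam, eta, rho=1.0), lam, eta)
        for lam in np.logspace(-3, 1, 50) for eta in np.linspace(-1, 1, 50)]
robust_value = min(vals)[0]  # approximates sup_p p^T l(x)
```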
Instead, we view the robust ERM problem (1) as a game between the $x$ (minimizing) player and $p$ (maximizing) player. Each player performs a variant of mirror descent (ascent), and we show how such an approach yields strong convergence guarantees, as well as good empirical performance. In particular, we show (for many suitable divergences $f$) that if $\ell_i$ is $L$-Lipschitz and $\mathcal{X}$ has radius bounded by $R$, then our procedure requires at most $O(\frac{R^2L^2+\rho}{\epsilon^2})$ iterations to achieve an $\epsilon$-accurate solution to problem (1), which is comparable to the number of iterations required by SGD [23]. Our solution strategy builds off of similar algorithms due to Nemirovski et al. [23, Sec. 3] and Ben-Tal et al. [4], and more directly procedures developed by Clarkson et al. [10] for solving two-player convex games. Most directly relevant to our approach is that of Shalev-Shwartz and Wexler [26], which solves problem (1) under the assumption that $\mathcal{P}_{\rho,n} = \{p\in\mathbb{R}^n_+ : p^\top\mathbf{1} = 1\}$ and that there is some $x$ with perfect loss performance, that is, $\sum_{i=1}^n \ell_i(x) = 0$. We generalize these approaches to more challenging $f$-divergence-constrained problems, and, for the $\chi^2$ divergence with $f(t) = \frac{1}{2}(t-1)^2$, develop efficient data structures that give a total run-time for solving problem (1) to $\epsilon$-accuracy scaling as $O((\mathrm{Cost}(\mathrm{grad}) + \log n)\frac{R^2L^2+\rho}{\epsilon^2})$. Here $\mathrm{Cost}(\mathrm{grad})$ is the cost to compute the gradient of a single term $\nabla\ell_i(x)$ and perform a mirror descent step with $x$. Using SGD to solve the empirical minimization problem to $\epsilon$-accuracy has run-time $O(\mathrm{Cost}(\mathrm{grad})\frac{R^2L^2}{\epsilon^2})$, so we see that we can achieve the guarantees (3)-(4) offered by the robust formulation (1) at little additional computational cost.
The remainder of the paper is organized as follows. We present our abstract algorithm in Section 2 and give guarantees on its performance in Section 3. In Section 4, we give efficient computational schemes for the case that $f(t) = \frac{1}{2}(t-1)^2$, presenting experiments in Section 5.
2
A bandit mirror descent algorithm for the minimax problem
Under the conditions that $\ell$ is convex and $\mathcal{X}$ is compact, standard results [7] show that there exists a saddle point $(x^*, p^*) \in \mathcal{X}\times\mathcal{P}_{\rho,n}$ for the robust problem (1) satisfying
$$\sup\{p^\top \ell(x^*) \mid p\in\mathcal{P}_{\rho,n}\} \;\leq\; p^{*\top}\ell(x^*) \;\leq\; \inf\{p^{*\top}\ell(x) \mid x\in\mathcal{X}\}.$$
We now describe a procedure for finding this saddle point by alternating a linear bandit-convex optimization procedure [8] for $p$ and a stochastic mirror descent procedure for $x$. Our approach builds off of Nemirovski et al.'s [23] development of mirror descent for two-player stochastic games.
To describe our algorithm, we require a few standard tools. Let $\|\cdot\|_x$ denote a norm on the space $\mathcal{X}$ with dual norm $\|y\|_{x,*} = \sup\{\langle x,y\rangle : \|x\|_x \leq 1\}$, and let $\psi_x$ be a differentiable strongly convex function on $\mathcal{X}$, meaning $\psi_x(x+\Delta) \geq \psi_x(x) + \nabla\psi_x(x)^\top\Delta + \frac{1}{2}\|\Delta\|_x^2$ for all $\Delta$. Let $\psi_p$ be a differentiable strictly convex function on $\mathcal{P}_{\rho,n}$. For a differentiable convex function $h$, we define the Bregman divergence $B_h(x,y) = h(x) - h(y) - \langle\nabla h(y), x-y\rangle \geq 0$. The Fenchel conjugate $\psi_p^*$ of $\psi_p$ is
$$\psi_p^*(s) := \sup_p\{\langle s,p\rangle - \psi_p(p)\} \quad \text{and} \quad \nabla\psi_p^*(s) = \underset{p}{\mathrm{argmax}}\,\{\langle s,p\rangle - \psi_p(p)\}.$$
($\psi_p^*$ is differentiable because $\psi_p$ is strongly convex [20, Chapter X].) We let $g_i(x) \in \partial\ell_i(x)$ be a particular subgradient selection.
With this notation in place, we now give our algorithm, which alternates between gradient ascent steps on $p$ and subgradient descent steps on $x$. Roughly, we would like to alternate gradient ascent steps for $p$, $p^{t+1} \leftarrow p^t + \eta_p \ell(x^t)$, and descent steps $x^{t+1} \leftarrow x^t - \eta_x g_i(x^t)$ for $x$, where $i$ is a random index drawn according to $p^t$. This procedure is inefficient, requiring time of order $n\,\mathrm{Cost}(\mathrm{grad})$ in each iteration, so we use stochastic estimates of the loss vector $\ell(x^t)$ developed in the linear bandit literature [8] and variants of mirror descent to implement our algorithm.
Algorithm 1 Two-player Bandit Mirror Descent
1: Input: stepsizes $\eta_x, \eta_p > 0$; initialize $x^1 \in \mathcal{X}$, $p^1 = \mathbf{1}/n$
2: for $t = 1, 2, \ldots, T$ do
3:   Sample $I_t \sim p^t$, that is, set $I_t = i$ with probability $p_{t,i}$
4:   Compute estimated losses for $i \in [n]$: $\hat{\ell}_{t,i}(x) = \frac{\ell_i(x)}{p_{t,i}}\mathbf{1}\{I_t = i\}$
5:   Update $p$: $w^{t+1} \leftarrow \nabla\psi_p^*(\nabla\psi_p(p^t) + \eta_p\hat{\ell}_t(x^t))$, $\;p^{t+1} \leftarrow \mathrm{argmin}_{p\in\mathcal{P}_{\rho,n}} B_{\psi_p}(p, w^{t+1})$
6:   Update $x$: $y^{t+1} \leftarrow \nabla\psi_x^*(\nabla\psi_x(x^t) - \eta_x g_{I_t}(x^t))$, $\;x^{t+1} \leftarrow \mathrm{argmin}_{x\in\mathcal{X}} B_{\psi_x}(x, y^{t+1})$
7: end for
We specialize this general algorithm for specific choices of the divergence $f$ and the functions $\psi_x$ and $\psi_p$ presently, first briefly discussing the algorithm. Note that in Step 5, the updates for $p$ depend only on a single index $I_t \in \{1,\ldots,n\}$ (the vector $\hat{\ell}_t(x^t)$ is 1-sparse), which, as long as the updates for $p$ are efficiently computable, can yield substantial performance benefits.
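A minimal sketch of Algorithm 1 for the $\chi^2$ case, taking $\psi_x$ and $\psi_p$ Euclidean so both mirror steps reduce to gradient steps. The projection onto $\mathcal{P}_{\rho,n}$ is implemented here as a simple approximate two-stage projection (onto the simplex, then shrinkage toward uniform) rather than the exact $O(\log n)$ routine of Section 4; that simplification, the subgradient oracle, and all names are assumptions for illustration only.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex (Duchi et al. [14])."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    k = np.arange(1, len(v) + 1)
    r = k[u - (css - 1.0) / k > 0][-1]
    theta = (css[r - 1] - 1.0) / r
    return np.maximum(v - theta, 0.0)

def project_uncertainty(v, rho):
    """Approximate projection onto {p >= 0, 1^T p = 1, ||p - 1/n||^2 <= 2 rho/n^2}
    (the chi^2 ball): project to the simplex, then shrink toward uniform."""
    n = len(v)
    p = project_simplex(v)
    dev = p - 1.0 / n
    r2, cap = float(np.dot(dev, dev)), 2.0 * rho / n ** 2
    if r2 > cap:
        p = 1.0 / n + dev * np.sqrt(cap / r2)   # stays feasible for the simplex
    return p

def bandit_mirror_descent(losses, grads, x0, rho, eta_x, eta_p, T, rng=None):
    """losses(x) -> vector of l_i(x); grads(x, i) -> subgradient of l_i at x."""
    rng = rng or np.random.default_rng(0)
    n = len(losses(x0))
    x, p = x0.copy(), np.full(n, 1.0 / n)
    for _ in range(T):
        i = rng.choice(n, p=p)              # sample I_t ~ p_t
        ell_hat = np.zeros(n)
        ell_hat[i] = losses(x)[i] / p[i]    # 1-sparse importance estimate
        p = project_uncertainty(p + eta_p * ell_hat, rho)   # ascent on p
        x = x - eta_x * grads(x, i)         # descent on x (X = R^d here)
    return x, p
```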
3
Regret bounds
With our algorithm described, we now describe its convergence properties, specializing later to
specific families of f -divergences. We begin with the following result on pseudo-regret, which (with
minor modifications) is known [23, 10, 26]. We provide a proof for completeness in Appendix A.1.
Lemma 1. Let the sequences $x^t$ and $p^t$ be generated by Algorithm 1. Define $\bar{x}_T := \frac{1}{T}\sum_{t=1}^T x^t$ and $\bar{p}_T := \frac{1}{T}\sum_{t=1}^T p^t$. Then for the saddle point $(x^*, p^*)$ we have
$$T\,\mathbb{E}\big[p^{*\top}\ell(\bar{x}_T) - \bar{p}_T^\top\ell(x^*)\big] \;\leq\; \underbrace{\frac{1}{\eta_x}B_{\psi_x}(x^*, x^1) + \frac{\eta_x}{2}\sum_{t=1}^T \mathbb{E}\big[\|g_{I_t}(x^t)\|_{x,*}^2\big]}_{T_1:\ \text{ERM regret}} \;+\; \underbrace{\sum_{t=1}^T \mathbb{E}\big[\hat{\ell}_t(x^t)^\top(p^* - p^t)\big]}_{T_2:\ \text{robust regret}}$$
where the expectation is taken over the random draws $I_t \sim p^t$. Moreover, $\mathbb{E}[\hat{\ell}_t(x^t)^\top(p - p^t)] = \mathbb{E}[\ell(x^t)^\top(p - p^t)]$ for any vector $p$.
In the lemma, $T_1$ is the standard regret when applying mirror descent to the ERM problem. In particular, if $B_{\psi_x}(x^*, x^1) \leq R^2$ and $\ell_i(x)$ is $L$-Lipschitz, then choosing $\eta_x = \frac{R}{L}\sqrt{2/T}$ yields $T_1 \leq RL\sqrt{2T}$. Because it is (relatively) easy to bound the term $T_1$, the remainder of our arguments focus on bounding the second term $T_2$, which is the regret that comes as a consequence of the random sampling for the loss vector $\hat{\ell}_t$. This regret depends strongly on the distance-generating function $\psi_p$. To the end of bounding $T_2$, we use the following bound for the pseudo-regret of $p$, which is standard [9, Chapter 11], [8, Thm 5.3]. For completeness we outline the proof in Appendix A.2.
Lemma 2. For any $p \in \mathcal{P}_{\rho,n}$, Algorithm 1 satisfies
$$\sum_{t=1}^T \hat{\ell}_t(x^t)^\top(p - p^t) \;\leq\; \frac{B_{\psi_p}(p, p^1)}{\eta_p} + \frac{1}{\eta_p}\sum_{t=1}^T B_{\psi_p^*}\big(\nabla\psi_p(p^t) + \eta_p\hat{\ell}_t(x^t),\ \nabla\psi_p(p^t)\big). \qquad (6)$$
Lemma 2 shows that controlling the Bregman divergences $B_{\psi_p}$ and $B_{\psi_p^*}$ is sufficient to bound $T_2$ in the basic regret bound of Lemma 1.
Now, we narrow our focus slightly to a specialized, but broad, family of divergences for which we can give more explicit results. For $k \in \mathbb{R}$, the Cressie-Read divergence [12] of order $k$ is
$$f_k(t) = \frac{t^k - kt + k - 1}{k(k-1)}, \qquad (7)$$
where $f_k(t) = +\infty$ for $t < 0$, and for $k \in \{0, 1\}$ we define $f_k$ by its limits as $k \to 0$ or $1$ (we have $f_1(t) = t\log t - t + 1$ and $f_0(t) = -\log t + t - 1$). Inspecting expression (6), we might hope that careful choices of $\psi_p$ could yield regret bounds that grow slowly with $T$ and have small dependence on the sample size $n$. Indeed, this is the case, as we show in the sequel: for each divergence $f_k$, we may carefully choose $\psi_p$ to achieve small regret. To prove our bounds, however, it is crucial that the importance sampling estimator $\hat{\ell}_t$ has small variance, which in turn necessitates that $p_{t,i}$ is not too small. Generally, this means that in the update (Alg. 1, Line 5) to construct $p^{t+1}$, we choose $\psi_p(p)$ to grow quickly as $p_i \to 0$ (e.g. $|\frac{\partial}{\partial p_i}\psi_p(p)| \to \infty$), but there is a tradeoff in that this may cause large Bregman divergence terms (6). In the coming sections, we explore this tradeoff for various $k$, providing regret bounds for each of the Cressie-Read divergences (7).
To control the $B_{\psi_p^*}$ terms in the bound (6), we use the curvature of $\psi_p$ (dually, smoothness of $\psi_p^*$) to show that $B_{\psi_p^*}(u,v) \leq \sum_i (u_i - v_i)^2$. For this approximation to hold, we shift our loss functions based on the $f$-divergence. When $k \geq 2$, we assume that $\ell(x) \in [0,1]^n$. If $k < 2$, we instead apply Algorithm 1 with shifted losses $\ell'(x) = \ell(x) - \mathbf{1}$, so that $\ell'(x) \in [-1,0]^n$. We call the method with $\ell'$ Algorithm 1', noting that $\hat{\ell}_{t,i}(x^t) = \frac{\ell_i(x^t)-1}{p_{t,i}}\mathbf{1}\{I_t = i\}$ in this case.
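For reference, a direct implementation of the Cressie-Read family (7), including the $k \to 0$ and $k \to 1$ limits; this is a straightforward transcription of the definition, with the $0\log 0 := 0$ convention as an explicit assumption.

```python
import numpy as np

def cressie_read(t, k):
    """f_k(t) of (7); +inf for t < 0, with the k in {0, 1} limits."""
    t = np.asarray(t, dtype=float)
    tp = np.where(t >= 0, t, 0.0)          # evaluate only on t >= 0
    if k == 1:
        safe = np.where(tp > 0, tp, 1.0)
        out = tp * np.log(safe) - tp + 1.0  # t log t - t + 1, with 0 log 0 := 0
    elif k == 0:
        out = np.where(tp > 0, -np.log(np.where(tp > 0, tp, 1.0)) + tp - 1.0,
                       np.inf)              # -log t + t - 1
    else:
        # (t^k - k t + k - 1) / (k (k - 1)); for k < 0 this is +inf at t = 0
        out = (tp ** k - k * tp + k - 1.0) / (k * (k - 1.0))
    return np.where(t < 0, np.inf, out)
```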
3.1
Power divergences when $k \notin \{0,1\}$
For our first results, we prove a generic regret bound for Algorithm 1 when $k \notin \{0,1\}$ by taking the distance-generating function $\psi_p(p) = \frac{1}{k(k-1)}\sum_{i=1}^n p_i^k$, which is differentiable and strictly convex on $\mathbb{R}^n_+$. Before proceeding further, we first note that for $p \in \mathcal{P}_{\rho,n}$ and $p^1 = \mathbf{1}/n$, we have
$$B_{\psi_p}(p, p^1) = \psi_p(p) - \psi_p(p^1) - \nabla\psi_p(p^1)^\top(p - p^1) = \frac{n^{-k}}{k(k-1)}\sum_{i=1}^n \big((np_i)^k - k\,np_i + k - 1\big) = n^{1-k}D_f(p\,\|\,\mathbf{1}/n) \leq n^{-k}\rho \qquad (8)$$
bounding the first term in expression (6). From Lemma 2, it remains to bound the Bregman divergence terms $B_{\psi_p^*}$. Using smoothness of $\psi_p^*$ in the positive orthant, we obtain the following bound.
Theorem 1. Assume that $\ell(x) \in [0,1]^n$. For any real-valued $k \geq 2$ and any $p \in \mathcal{P}_{\rho,n}$, Algorithm 1 satisfies
$$\sum_{t=1}^T \mathbb{E}[\ell(x^t)^\top(p - p^t)] = \sum_{t=1}^T \mathbb{E}[\hat{\ell}_t(x^t)^\top(p - p^t)] \;\leq\; \frac{n^{-k}\rho}{\eta_p} + \frac{\eta_p}{2}\sum_{t=1}^T \mathbb{E}\Bigg[\sum_{i:\,p_{t,i}>0} p_{t,i}^{1-k}\Bigg]. \qquad (9)$$
For $k \leq 2$ with $k \notin \{0,1\}$, an identical bound holds for Algorithm 1' with $\ell'(x) = \ell(x) - \mathbf{1}$.
See Appendix A.3 for the proof. We now use Theorem 1 to obtain concrete convergence guarantees for Cressie-Read divergences with parameter $k < 1$, giving sublinear (in $T$) regret bounds independent of $n$. In the corollary, whose proof we provide in Appendix A.4, we let $C_{k,\rho} = (1-k)(1-k\rho)$, which is positive for $k < 0$.
Corollary 1. For $k \in (-\infty, 0)$ and $\eta_p = C_{k,\rho}^{(k-1)/2}\,n^{-k}\sqrt{2\rho/T}$, Algorithm 1' with $\ell'(x) = \ell(x) - \mathbf{1} \in [-1,0]^n$ achieves the regret bound
$$\sum_{t=1}^T \mathbb{E}[\ell(x^t)^\top(p - p^t)] = \sum_{t=1}^T \mathbb{E}[\hat{\ell}_t(x^t)^\top(p - p^t)] \;\leq\; \sqrt{2\,C_{k,\rho}^{1-k}\,\rho\,T}.$$
For $k \in (0,1)$ and $\eta_p = n^{-k}\sqrt{2\rho/T}$, Algorithm 1' with $\ell'(x) = \ell(x) - \mathbf{1} \in [-1,0]^n$ achieves the regret bound
$$\sum_{t=1}^T \mathbb{E}[\ell(x^t)^\top(p - p^t)] = \sum_{t=1}^T \mathbb{E}[\hat{\ell}_t(x^t)^\top(p - p^t)] \;\leq\; \sqrt{2\rho T}.$$
It is worth noting that despite the robustification, the above regret is independent of $n$. In the special case that $k \in (0,1)$, Theorem 1 is the regret bound for the implicitly normalized forecaster of Audibert and Bubeck [1] (cf. [8, Ch 5.4]).
3.2
Regret bounds using the KL divergences ($k = 1$ and $k = 0$)
The choice $f_1(t) = t\log t - t + 1$ yields $D_f(P\|Q) = D_{\mathrm{kl}}(P\|Q)$, and in this case, we take $\psi_p(p) = \sum_{i=1}^n p_i\log p_i$, which means that Algorithm 1 performs entropic gradient ascent. To control the divergence $B_{\psi_p^*}$, we use the rescaled losses $\ell'(x) = \ell(x) - \mathbf{1}$ (as we have $k < 2$). Then we have the following bound, whose proof we provide in Appendix A.5.
Theorem 2. Algorithm 1' with loss $\ell'(x) = \ell(x) - \mathbf{1}$ yields
$$\sum_{t=1}^T \mathbb{E}[\ell(x^t)^\top(p - p^t)] = \sum_{t=1}^T \mathbb{E}[\hat{\ell}_t(x^t)^\top(p - p^t)] \;\leq\; \frac{\rho}{n\eta_p} + \frac{\eta_p}{2}nT. \qquad (10)$$
In particular, when $\eta_p = \frac{1}{n}\sqrt{\frac{2\rho}{T}}$, we have $\sum_{t=1}^T \mathbb{E}[\ell(x^t)^\top(p - p^t)] \leq \sqrt{2\rho T}$.
Using $k = 0$, so that $f_0(t) = -\log t + t - 1$, we obtain $D_f(P\|Q) = D_{\mathrm{kl}}(Q\|P)$, which results in a robustification technique identical to Owen's original empirical likelihood [24]. We again use the rescaled losses $\ell'(x) = \ell(x) - \mathbf{1}$, but in this scenario we use the proximal function $\psi_p(p) = -\sum_{i=1}^n \log p_i$ in Algorithm 1'. Then we have the following regret bound (see Appendix A.6).
Theorem 3. Algorithm 1' with loss $\ell'(x) = \ell(x) - \mathbf{1}$ yields
$$\sum_{t=1}^T \mathbb{E}[\ell(x^t)^\top(p - p^t)] = \sum_{t=1}^T \mathbb{E}[\hat{\ell}_t(x^t)^\top(p - p^t)] \;\leq\; \frac{\rho}{\eta_p} + \frac{\eta_p}{2}T.$$
In particular, when $\eta_p = \sqrt{\frac{2\rho}{T}}$, we have $\sum_{t=1}^T \mathbb{E}[\ell(x^t)^\top(p - p^t)] \leq \sqrt{2\rho T}$.
In both of these cases, the expected pseudo-regret of our robust gradient procedure is independent of $n$ and grows as $\sqrt{T}$, which is essentially identical to that achieved by pure online gradient methods.
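For the KL case ($k = 1$), the unconstrained part of the $p$-update in Algorithm 1 has a closed form: with $\psi_p(p) = \sum_i p_i\log p_i$ we have $\nabla\psi_p(p) = \mathbf{1} + \log p$, so the mirror step is multiplicative. The sketch below shows only that multiplicative step plus normalization; the exact Bregman projection onto the KL ball $\{D_{\mathrm{kl}}(p\|\mathbf{1}/n) \leq \rho/n\}$ is more involved (see the paper's Appendix B) and is deliberately omitted here, which is an assumption of this illustration.

```python
import numpy as np

def entropic_p_step(p, ell_hat, eta_p):
    """Unprojected mirror-ascent step under psi_p(p) = sum_i p_i log p_i."""
    w = p * np.exp(eta_p * ell_hat)   # only the sampled coordinate changes
    return w / w.sum()                # normalization; KL-ball projection omitted
```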
3.3
Power divergences ($k > 1$)
Corollary 1 provides convergence guarantees for power divergences $f_k$ with $k < 1$, but says nothing about the case that $k > 1$; the choice $\psi_p(p) = \frac{1}{k(k-1)}\sum_{i=1}^n p_i^k$ allows the individual probabilities $p_{t,i}$ to be too small, which can cause excess variance of $\hat{\ell}$. To remedy this, we regularize the robust problem (1) by re-defining our robust empirical distributions set, taking
$$\mathcal{P}_{\rho,n,\delta} := \Bigg\{ p \in \mathbb{R}^n_+ \;\Bigg|\; p \geq \frac{\delta}{n}\mathbf{1},\; \frac{1}{n}\sum_{i=1}^n f(np_i) \leq \rho \Bigg\},$$
where we no longer constrain the weights $p$ to satisfy $\mathbf{1}^\top p = 1$. Nonetheless, it is still possible to show that the guarantees (2) and (3) hold with $\mathcal{P}_{\rho,n,\delta}$ replacing $\mathcal{P}_{\rho,n}$. Indeed, we may give bounds for the pseudo-regret of the regularized problem with $\mathcal{P}_{\rho,n,\delta}$, where we apply Algorithm 1 with a slightly modified sampling strategy, drawing indices $i$ according to the normalized distribution $p^t/\sum_{i=1}^n p_{t,i}$ and appropriately normalizing the loss estimate via
$$\hat{\ell}_{t,i}(x^t) = \Bigg(\sum_{j=1}^n p_{t,j}\Bigg)\frac{\ell_i(x^t)}{p_{t,i}}\mathbf{1}\{I_t = i\}.$$
This vector is still unbiased for $\ell(x^t)$. Define the constant $C_k := \max\{t : f_k(t) \leq t\} \vee \rho < \infty$ (so $C_2 = 2 + \sqrt{3}$). With our choice $\psi_p(p) = \frac{1}{k(k-1)}\sum_{i=1}^n p_i^k$ and for $\delta > 0$, we obtain the following result, whose proof we provide in Appendix A.7.
Theorem 4. For $k \in [2,\infty)$ and any $p \in \mathcal{P}_{\rho,n,\delta}$, Algorithm 1 with $\eta_p = n^{-k}\sqrt{\rho\,\delta^{k-1}/(4C_k^3 T)}$ yields
$$\sum_{t=1}^T \mathbb{E}[\ell(x^t)^\top(p - p^t)] = \sum_{t=1}^T \mathbb{E}[\hat{\ell}_t(x^t)^\top(p - p^t)] \;\leq\; 2C_k\sqrt{\rho\,C_k\,\delta^{1-k}\,T}.$$
For $k \in (1,2)$, assume that $\ell(x) \in [-1,0]^n$. Then, Algorithm 1 gives identical bounds.
4
Efficient updates when $k = 2$
The previous section shows that Algorithm 1 with careful choice of $\psi_p$ yields sublinear regret bounds. The projection step $p^{t+1} = \mathrm{argmin}_{p\in\mathcal{P}_{\rho,n,\delta}} B_{\psi_p}(p, w^{t+1})$, however, can still take time linear in $n$ despite the sparsity of $\hat{\ell}(x^t)$ (see Appendix B for concrete updates for each of our cases). In this section, we show how to compute the bandit mirror descent update in Alg. 1, line 5, in $O(\log n)$ time for $f_2(t) = \frac{1}{2}(t-1)^2$ and $\psi_p(p) = \frac{1}{2}\sum_{i=1}^n p_i^2$. Building off of Duchi et al. [14], we use carefully designed balanced binary search trees (BSTs) to this end.
The Lagrangian for the update $p^{t+1} = \mathrm{argmin}_{p\in\mathcal{P}_{\rho,n,\delta}} B_{\psi_p}(p, w^{t+1})$ (suppressing $t$) is
$$L(p, \lambda, \nu) = B_{\psi_p}(p, w) - \lambda\Bigg(\frac{\rho}{n^2} - \frac{1}{n^2}\sum_{i=1}^n f(np_i)\Bigg) - \nu^\top\Big(p - \frac{\delta}{n}\mathbf{1}\Big)$$
where $\lambda \geq 0$, $\nu \in \mathbb{R}^n_+$. The KKT conditions imply $(1+\lambda)p = w + \frac{\lambda}{n}\mathbf{1} + \nu$, and strict complementarity yields
$$p(\lambda) = \frac{1}{1+\lambda}\,w + \frac{\lambda}{(1+\lambda)n}\,\mathbf{1} + \frac{1}{1+\lambda}\,\nu, \qquad (11)$$
where $p(\lambda) = \mathrm{argmin}_{p\in\mathcal{P}_{\rho,n,\delta}}\,\inf_{\nu\in\mathbb{R}^n_+} L(p, \lambda, \nu)$. Substituting this into the Lagrangian, we obtain the concave dual objective
$$g(\lambda) := \sup_{\nu}\,\inf_{p} L(p, \lambda, \nu) = B_{\psi_p}(p(\lambda), w) - \lambda\Bigg(\frac{\rho}{n^2} - \frac{1}{n^2}\sum_{i=1}^n f_k(np_i(\lambda))\Bigg).$$
We can run a bisection search on the monotone derivative $g'(\lambda)$ to find $\lambda$ such that $g'(\lambda) = 0$. After algebraic manipulations, we have that
$$\frac{\partial}{\partial\lambda} g(\lambda) = g_1(\lambda)\sum_{i\in I(\lambda)} w_i^2 + g_2(\lambda)\sum_{i\in I(\lambda)} w_i + g_3(\lambda)\,|I(\lambda)| + \frac{(1-\delta)^2}{2n} - \frac{\rho}{n^2},$$
where $I(\lambda) := \{1 \leq i \leq n : w_i \geq \frac{\delta}{n} + \frac{\lambda(\delta-1)}{n}\}$ and (see expression (18) in Appendix B.4)
$$g_1(\lambda) = \frac{1}{(1+\lambda)^2}, \qquad g_2(\lambda) = \frac{2}{n(1+\lambda)^2}, \qquad g_3(\lambda) = \frac{1}{n^2(1+\lambda)^2} - \frac{(1-\delta)^2}{2n}.$$
To see that we can solve for $\lambda^*$ that achieves $|g'(\lambda^*)| \leq \epsilon$ in $O(\log n + \log\frac{1}{\epsilon})$ time, it suffices to evaluate $\sum_{i\in I(\lambda)} w_i^q$ for $q = 0, 1, 2$ in time $O(\log n)$. To this end, we store the $w$'s in a balanced search tree (e.g., red-black tree) keyed on the weights up to a multiplicative and an additive constant. A key ingredient in our implementation is that the BST stores in each node the sum of the appropriate powers of values in the left and right subtree [14]. See Appendix C for detailed pseudocode for all operations required in Algorithm 1: each subroutine (sampling $I_t \sim p^t$, updating $w$, computing $\lambda^*$, and updating $p(\lambda^*)$) requires time $O(\log n)$ using standard BST operations.
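The following is a small illustration of the data-structure idea (not the authors' Appendix C code): a randomized BST (treap) whose nodes carry subtree aggregates (count, $\sum w$, $\sum w^2$), so the sums over $I(\lambda) = \{i : w_i \geq \text{threshold}\}$ needed by $g'(\lambda)$ come from a single $O(\log n)$-expected descent, and bisection over $\lambda$ then only calls this query. Deletion (for the per-iteration change of a single $w_i$) follows the same standard split/merge pattern and is omitted; all names here are assumptions for the sketch.

```python
import random

class Node:
    __slots__ = ("key", "prio", "left", "right", "cnt", "s1", "s2")
    def __init__(self, key):
        self.key, self.prio = key, random.random()
        self.left = self.right = None
        self.cnt, self.s1, self.s2 = 1, key, key * key

def _pull(t):
    t.cnt, t.s1, t.s2 = 1, t.key, t.key * t.key
    for c in (t.left, t.right):
        if c is not None:
            t.cnt += c.cnt; t.s1 += c.s1; t.s2 += c.s2

def _split(t, key):            # keys < key go left, keys >= key go right
    if t is None:
        return None, None
    if t.key < key:
        l, r = _split(t.right, key); t.right = l; _pull(t); return t, r
    l, r = _split(t.left, key); t.left = r; _pull(t); return l, t

def _merge(a, b):
    if a is None or b is None:
        return a if b is None else b
    if a.prio > b.prio:
        a.right = _merge(a.right, b); _pull(a); return a
    b.left = _merge(a, b.left); _pull(b); return b

def insert(root, key):
    l, r = _split(root, key)
    return _merge(_merge(l, Node(key)), r)

def suffix_sums(root, thresh):
    """(|I|, sum w_i, sum w_i^2) over keys >= thresh, O(log n) expected."""
    cnt, s1, s2 = 0, 0.0, 0.0
    t = root
    while t is not None:
        if t.key >= thresh:                 # node and right subtree qualify
            cnt += 1; s1 += t.key; s2 += t.key * t.key
            if t.right is not None:
                cnt += t.right.cnt; s1 += t.right.s1; s2 += t.right.s2
            t = t.left
        else:
            t = t.right
    return cnt, s1, s2

def bisect_root(gprime, lo, hi, eps=1e-9):
    """Bisection for g'(lam) = 0, assuming g' is nonincreasing on [lo, hi]."""
    while hi - lo > eps:
        mid = 0.5 * (lo + hi)
        if gprime(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```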
5
Experiments
In this section, we present experimental results demonstrating the efficiency of our algorithm. We first
compare our method with existing algorithms for solving the robust problem (1) on a synthetic dataset,
then investigate the robust formulation on real datasets to show how the calibrated confidence
guarantees behave in practice, especially in comparison to the ERM. We experiment on natural high
dimensional datasets as well as those with many training examples.
Our implementation uses the efficient updates outlined in Section 4. Throughout our experiments,
we use the best tuned step sizes for all methods. For the first two experiments, we set $\rho = \chi^2_{1,0.9}$ so that the resulting robust objective (1) will be a calibrated 95% upper confidence bound on the optimal population risk. For our last experiment, the asymptotic regime (3) fails to hold due to the high dimensional nature of the problem, so we choose $\rho = 50$ (somewhat arbitrarily, but other $\rho$ give similar behavior). We take $\mathcal{X} = \{x \in \mathbb{R}^d : \|x\|_2 \leq R\}$ for our experiments.
For the experiment with synthetic data, we compare our algorithm against two benchmark methods
for solving the robust problem (1). The first is the interior point method for the dual reformulation (5)
using the Gurobi solver [17]. The second is using gradient descent, viewing the robust formulation (1)
as a minimization problem with the objective $x \mapsto \sup_{p\in\mathcal{P}_{\rho,n,\delta}} p^\top\ell(x)$. To efficiently compute the gradient, we bisect over the dual form (5) with respect to $\lambda \geq 0$, $\eta$. We use the best step sizes for
both our proposed bandit-based algorithm and gradient descent.
To generate the data, we choose a true classifier $x^* \in \mathbb{R}^d$ and sample the feature vectors $a_i \stackrel{\mathrm{iid}}{\sim} \mathsf{N}(0, I)$ for $i \in [n]$. We set the labels to be $b_i = \mathrm{sign}(a_i^\top x^*)$ and flip them with probability 10%. We use the hinge loss $\ell_i(x) = [1 - b_i a_i^\top x]_+$ with $n = 2000$, $d = 500$ and $R = 10$ in our experiment.
In Figure 1a, we plot the log optimality ratio (log of current objective value over optimal value)
with respect to the runtime for the three algorithms. While the interior point method (IPM) obtains
accurate solutions, it scales relatively poorly in n and d (the initial flat region in the plot is due to
pre-computations for factorizing within the solver). Gradient descent performs quite well in this
moderate sized example, although each iteration takes time $\Omega(n)$.
We also perform experiments on two datasets with larger n: the Adult dataset [22] and the Reuters
RCV1 Corpus [21]. The Adult dataset has n = 32,561 training and 16,281 test examples with
123-dimensional features. We use the binary logistic loss $\ell_i(x) = \log(1 + \exp(-b_i a_i^\top x))$ to classify whether the income level is greater than $50K. For the Reuters RCV1 Corpus, our task is to classify
whether a document belongs to the Corporate category. With d = 47,236 features, we randomly
split the 804,410 examples into 723,969 training (90% of data) and 80,441 (10% of data) test
examples. We use the hinge loss and solve the binary classification problem for the document type.
To test the efficiency of our method in large scale settings, we plot the log ratio $\log\frac{R_n(x)}{R_n(x^*)}$, where $R_n(x) = \sup_{p\in\mathcal{P}_{\rho,n,\delta}} p^\top\ell(x)$, versus CPU time for our algorithm and gradient descent in Figure 1b.
As is somewhat typical of stochastic gradient-based methods, our bandit-based optimization algorithm
quickly obtains a solution with small optimality gap (about 2% relative error), while the gradient
descent method eventually achieves better loss.
In Figures 2a-2d, we plot the loss value and the classification error compared with applying pure
stochastic gradient descent to the standard empirical loss, plotting the confidence bound for the robust
(a) Synthetic Data (n = 2000, d = 500)
(b) Reuters Corpus ($n = 7.2\times 10^5$, $d \approx 5\times 10^4$)
Figure 1: Comparison of Solvers
(a) Adult: Logistic Loss
(b) Adult: Classification Error
(c) Reuters: Hinge Loss
(d) Reuters: Classification Error
Figure 2: Comparison with ERM
method as well. As the theory suggests [15, 13], the robust objective provides upper confidence
bounds on the true risk (approximated by the average loss on the test sample).
Acknowledgments
JCD and HN were partially supported by the SAIL-Toyota Center for AI Research and the National
Science Foundation award NSF-CAREER-1553086. HN was also partially supported by the Samsung
Fellowship.
References
[1] J.-Y. Audibert and S. Bubeck. Regret bounds and minimax policies under partial monitoring. In Journal of Machine Learning Research, pages 2635-2686, 2010.
[2] A. Ben-Tal, L. E. Ghaoui, and A. Nemirovski. Robust Optimization. Princeton University Press, 2009.
[3] A. Ben-Tal, D. den Hertog, A. D. Waegenaere, B. Melenberg, and G. Rennen. Robust solutions of optimization problems affected by uncertain probabilities. Management Science, 59(2):341-357, 2013.
[4] A. Ben-Tal, E. Hazan, T. Koren, and S. Mannor. Oracle-based robust optimization via online learning. Operations Research, 63(3):628-638, 2015.
[5] J. Borwein, A. J. Guirao, P. Hájek, and J. Vanderwerff. Uniformly convex functions on Banach spaces. Proceedings of the American Mathematical Society, 137(3):1081-1091, 2009.
[6] S. Boucheron, O. Bousquet, and G. Lugosi. Theory of classification: a survey of some recent advances. ESAIM: Probability and Statistics, 9:323-375, 2005.
[7] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[8] S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1-122, 2012.
[9] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[10] K. Clarkson, E. Hazan, and D. Woodruff. Sublinear optimization for machine learning. Journal of the Association for Computing Machinery, 59(5), 2012.
[11] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms. MIT Press, 2001.
[12] N. Cressie and T. R. Read. Multinomial goodness-of-fit tests. Journal of the Royal Statistical Society, Series B (Methodological), pages 440-464, 1984.
[13] J. C. Duchi and H. Namkoong. Statistics of robust optimization: A generalized empirical likelihood approach. arXiv:1610.02581 [stat.ML], 2016. URL https://arxiv.org/abs/1610.02581.
[14] J. C. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the l1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, 2008.
[15] J. C. Duchi, P. W. Glynn, and H. Namkoong. Statistics of robust optimization: A generalized empirical likelihood approach. arXiv:1610.03425 [stat.ML], 2016. URL https://arxiv.org/abs/1610.03425.
[16] S. Ghadimi and G. Lan. Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization, I: a generic algorithmic framework. SIAM Journal on Optimization, 22(4):1469-1492, 2012.
[17] I. Gurobi Optimization. Gurobi optimizer reference manual, 2015. URL http://www.gurobi.com.
[18] E. Hazan. The convex optimization approach to regret minimization. In Optimization for Machine Learning, chapter 10. MIT Press, 2012.
[19] E. Hazan and S. Kale. An optimal algorithm for stochastic strongly convex optimization. In Proceedings of the Twenty Fourth Annual Conference on Computational Learning Theory, 2011.
[20] J. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms I & II. Springer, New York, 1993.
[21] D. Lewis, Y. Yang, T. Rose, and F. Li. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361-397, 2004.
[22] M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
[23] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574-1609, 2009.
[24] A. B. Owen. Empirical Likelihood. CRC Press, 2001.
[25] S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
[26] S. Shalev-Shwartz and Y. Wexler. Minimizing the maximal loss: How and why? In Proceedings of the 32nd International Conference on Machine Learning, 2016.
[27] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the Twentieth International Conference on Machine Learning, 2003.
| 6040 |@word [vw bag-of-words token:count features omitted] |
5,571 | 6,041 | Optimal Tagging with Markov Chain Optimization
Nir Rosenfeld
School of Computer Science and Engineering
Hebrew University of Jerusalem
[email protected]
Amir Globerson
The Blavatnik School of Computer Science
Tel Aviv University
[email protected]
Abstract
Many information systems use tags and keywords to describe and annotate content.
These allow for efficient organization and categorization of items, as well as
facilitate relevant search queries. As such, the selected set of tags for an item can
have a considerable effect on the volume of traffic that eventually reaches an item.
In tagging systems where tags are exclusively chosen by an item's owner, who in
turn is interested in maximizing traffic, a principled approach for assigning tags
can prove valuable. In this paper we introduce the problem of optimal tagging,
where the task is to choose a subset of tags for a new item such that the probability
of browsing users reaching that item is maximized.
We formulate the problem by modeling traffic using a Markov chain, and asking
how transitions in this chain should be modified to maximize traffic into a certain
state of interest. The resulting optimization problem involves maximizing a certain
function over subsets, under a cardinality constraint.
We show that the optimization problem is NP-hard, but has a (1 − 1/e)-approximation
via a simple greedy algorithm due to monotonicity and submodularity. Furthermore,
the structure of the problem allows for an efficient computation of the greedy step.
To demonstrate the effectiveness of our method, we perform experiments on three
tagging datasets, and show that the greedy algorithm outperforms other baselines.
1
Introduction
To allow for efficient navigation and search, modern information systems rely on the usage of nonhierarchical tags, keywords, or labels to describe items and content. These tags are then used either
explicitly by users when searching for content, or implicitly by the system to recommend related
items or to augment search results.
Many online systems where users can create or upload content support tagging. Examples of such
systems are media-sharing platforms, social bookmarking websites, and consumer to consumer
auctioning services. While in some systems any user can tag any item, in many ad-hoc systems tags
are exclusively set by the item's owner alone. She, in turn, is free to select any set of tags or keywords
which she believes best describe the item. Typically, the only concrete limitation is on the number
of tags, words, or characters used. Tags are often chosen on a basis of their ability to best describe,
classify, or categorize items and content. By choosing relevant tags, users aid in creating a more
organized information system. However, content owners may have their own individual objective,
such as maximizing the exposure of their items to other browsing users. This is true for many artists,
artisans, content creators, and merchants whose services and items are provided online.
This suggests that choosing tags should in fact be done strategically. For instance, for a user uploading
a new song, tagging it as "Rock" may be informative, but will probably only contribute marginally to
the song's traffic, as the competition for popularity under this tag can be fierce. On the other hand,
choosing a unique or obscure tag may be appealing, but will not help much either. Strategic tagging
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
or keyword selection is clearly exhibited in search systems, where keywords are explicitly used for
filtering and ordering search results or ad placements, and users have a clear incentive of maximizing
an item?s exposure. Nonetheless, the selection process is typically heuristic.
Recent years have seen an abundance of work on methods for user-specific tag recommendations
[8, 10, 5]. Such methods aim to support collaborative tagging systems, where any user can tag any
item in the repository. In contrast, in this paper we take a complementary perspective and focus on
taxonomic tagging systems, where only the owner of an item can determine its tags. We formalize
the task of optimal tagging and suggest an efficient, provably-approximate algorithm. While the
problem is shown to be NP-hard, we prove that the objective is in fact monotone and submodular,
which suggests a straightforward greedy (1 − 1/e)-approximation algorithm [13]. We also show how
the greedy step, which consists of solving a set of linear equations, can be greatly simplified. This
results in a significant improvement of runtime.
We begin by modeling a user browsing a tagged information system as a random walk. Items and
tags act as states in a Markov chain, whose transition probabilities describe the probability of users
transitioning between items and tags. Given a new item, our task is to choose a subset of k tags for
this item. When an item is tagged, positive probabilities are assigned to transitioning both from the
tag to the item and from the item to the tag. Our objective is to choose the subset of k tags which will
maximize traffic to that item, namely the probability of a random walk reaching the item. Intuitively,
tagging an item causes probability to flow from the tag to the item, on account of other items with
this tag. Our goal is to take as much probability mass as possible from the system as a whole.
Our method shares some similarities with other PageRank (PR, [2]) based methods, which optimize
measures based on the stationary distribution [14, 4, 6, 15]. Here we argue that our approach, which
focuses on maximizing the probability of a random walk reaching a new item?s state, is better suited
to the task of optimal tagging. First, an item?s popularity should only increase when assigned a new
tag. Since tagging an item creates bidirectional links, its stationary probability may undesirably
decrease. Hence, optimizing the PR of an item will lead to an undesired non-monotone objective [1].
Second, PR considers a single Markov chain for all items, and is thus not item-centric. In contrast,
our method considers a unique instance of the transition system for every item we consider. While an
item-specific Personalized PR based objective can be constructed, it would consider random walks
from a given state, not to it. Third, a stationary distribution does not always exist, and hence may
require modifications of the Markov chain. Finally, optimizing PR is known to be hard. While some
approximations exist, our method provides superior guarantees and potentially better runtime [16].
Although the Markov chain model we propose for optimal tagging is bipartite, our results apply to
general Markov chains. We therefore first formulate a general problem in Sec. 3, where the task is
to choose k states to link a new state to such that the probability of reaching that state is maximal.
Then, in Sec. 4 we prove that this problem is NP-hard by a reduction from vertex cover. In Sec. 5 we
prove that for a general Markov chain the optimal objective is both monotonically non-decreasing
and submodular. Based on this, in Sec. 6 we suggest a basic greedy (1 − 1/e)-approximation algorithm,
and describe a method for significantly improving its runtime. In Sec. 7 we revisit the optimal tagging
problem and show how to construct a bipartite Markov chain for a given tag-supporting information
system. In Sec. 8 we present experimental results on three tagging datasets (musical artists in Last.fm,
bookmarks in Delicious, and movies in Movielens) and show that our algorithm outperforms other
baselines. Concluding remarks are given in Sec. 9.
2
Related Work
One of the main roles of tags is to aid in the categorization and classification of content. An active
line of research in tagging systems focuses on the task of tag recommendations, where the goal is
to predict the tags a given user may assign an item. This task is applicable in collaborative tagging
systems and folksonomies, where any user can tag any item. Methods for this task are based on
random walks [8, 10] and tensor factorization [5]. While the goal in tag recommendation is also to
output a set of tags, our task is very different in nature. Tag recommendation is a prediction task for
item-user pairs, is based on ground-truth evaluation, and is applied in collaborative tagging systems.
In contrast, ours is an item-centric optimization task for tag-based taxonomies, and is counterfactual
in nature, as the selection of tags is assumed to affect future outcomes.
A line of work similar to ours is optimizing the PageRank of web pages in different settings. In [4]
the authors consider the problem of computing the maximal and minimal PageRank value for a set of
"fragile" links. The authors of [1] analyze the effects of additional outgoing links on the PageRank
value. Perhaps the works most closely related to ours are [16, 14], where an approximation algorithm
is given for the problem of maximizing the PageRank value by adding at most k incoming links. The
authors prove that the probability of reaching a web page is submodular and monotone in a fashion
similar to ours (but with a different parameterization), and use it as a proxy for PageRank.
Our framework uses absorbing Markov chains, whose relation to submodular optimization has been
explored in [6] for opinion maximization and in [12] for computing centrality measures in networks.
Following the classic work of Nemhauser [13], submodular optimization is now a very active line of
research. Many interesting optimization problems across diverse domains have been shown to be
submodular. Examples include sensor placement [11] and social influence maximization [9].
3
Problem Formulation
Before presenting our approach to optimal tagging, we first describe a general combinatorial optimization task over Markov chains, of which optimal tagging is a special case. Consider a Markov
chain over n + 1 states. Assume there is a state σ to which we would like to add k new incoming
transitions, where w.l.o.g. σ = n + 1. In the tagging problem, σ will be an item (e.g., song or product)
and the incoming transitions will be from possible tags for the item, or from related items.
The optimization problem is then to choose a subset S ⊆ [n] of k states so as to maximize the
probability of visiting σ at some point in time. Formally, let X_t ∈ [n + 1] be the random variable
corresponding to the state of the Markov chain at time t. Then the optimal tagging problem is:
$$\max_{S \subseteq [n],\, |S| \le k} P_S[X_t = \sigma \text{ for some } t \ge 0] \qquad (1)$$
At first glance, it is not clear how to compute the objective function in Eq. (1). However, with a slight
modification of the Markov chain, the objective function can be expressed as a simple function of the
Markov chain parameters, as explained next.
In general, σ may have outgoing edges, and random walks reaching σ may continue to other states
afterward. Nonetheless, as we are only interested in the probability of reaching σ, the states visited
after σ have no effect on our objective. Hence, the edges out of σ can be safely replaced with a single
self-edge without affecting the probability of reaching σ. This essentially makes σ an absorbing
state, and our task becomes to maximize the probability of the Markov chain being absorbed in σ. In
the remainder of the paper we consider this equivalent formulation.
When the Markov chain includes other absorbing states, optimizing over S can be intuitively thought
of as trying to transfer as much probability mass from the competing absorbing states to σ, under
a budget on the number of states that can be connected to σ.¹ As we discuss in Section 7, having
competing absorbing states arises naturally in optimal tagging.
To fully specify the problem, we need the Markov chain parameters. Denote the initial distribution
by π. For the transition probabilities, each node i will have two sets of transitions: one when it is
allowed to transition to σ (i.e., i ∈ S) and one when no transition is allowed. Using two distinct sets
is necessary since in both cases outgoing probabilities must sum to one. We use q_{ij} to denote the
transition probability from state i to j when transition from i to σ is not allowed, and q⁺_{ij} when it is.
We also denote the corresponding transition matrices by Q and Q⁺.
It is natural to assume that when adding a link from i to σ, transition into σ will become more likely,
and transition to other states can only be less likely. Thus, we add the assumptions that:
$$\forall i:\ 0 = q_{i\sigma} \le q^+_{i\sigma}, \qquad \forall i,\ \forall j \ne \sigma:\ q^+_{ij} \le q_{ij} \qquad (2)$$
Given a subset S of states from which transitions to σ are allowed, we construct a new transition
matrix, taking corresponding rows from Q and Q⁺. We denote this matrix by ρ(S), with
$$\rho_{ij}(S) = \begin{cases} q^+_{ij} & i \in S \\ q_{ij} & i \notin S \end{cases} \qquad (3)$$
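To make Eq. (3) concrete, here is a minimal NumPy sketch (the helper name `build_rho` is ours, not the paper's) that assembles ρ(S) by taking the rows of Q⁺ for states in S and the rows of Q otherwise:

```python
import numpy as np

def build_rho(Q, Q_plus, S):
    """Assemble rho(S) from Eq. (3): rows indexed by S come from Q+,
    all remaining rows come from Q."""
    rho = Q.copy()
    idx = list(S)           # 0-based indices of states linked to sigma
    rho[idx, :] = Q_plus[idx, :]
    return rho
```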
1
In an ergodic chain with one absorbing state, all walks reach σ w.p. 1, and the problem becomes trivial.
4
NP-Hardness
We now show that for a general Markov chain, the optimal tagging problem in Eq. (1) is NP-hard
by a reduction from vertex cover. Given an undirected graph G = (V, E) with n nodes as input to
the vertex cover problem, we construct an instance of optimal tagging such that there exists a vertex
cover S ⊆ V of size at most k iff the probability of reaching σ reaches some threshold.
To construct the absorbing Markov chain, we create a transient state i for every node i ∈ V, and add
two absorbing states σ and λ. We set the initial distribution to be uniform, and for some 0 < ε < 1
set the transitions for transient states i as follows:
$$q^+_{ij} = \begin{cases} 1 & j = \sigma \\ 0 & j \ne \sigma \end{cases}, \qquad q_{ij} = \begin{cases} \varepsilon & j = \lambda \\ \frac{1-\varepsilon}{\deg(i)} & \text{otherwise} \end{cases} \qquad (4)$$
Let U ⊆ V be of size k, and S(U) the set of states corresponding to the nodes in U. We claim that U is
a vertex cover in G iff the probability of reaching σ when S(U) is chosen is 1 − ε(n−k)/n.
Assume U is a vertex cover. For every i ∈ S(U), a walk starting in i will reach σ with probability
1 in one step. For every i ∉ S(U), with probability ε a walk will reach λ in one step, and with
probability 1 − ε it will visit one of its neighbors j. Since U is a vertex cover, it will then reach σ
in one step with probability 1. Hence, in total it will reach σ with probability 1 − ε. Overall, the
probability of reaching σ is (k + (n−k)(1−ε))/n = 1 − ε(n−k)/n as needed. Note that this is the maximal
possible probability of reaching σ for any subset of V of size k.
Assume now that U is not a vertex cover; then there exists an edge (i, j) ∈ E such that both i ∉ S(U)
and j ∉ S(U). A walk starting in i will reach λ in one step with probability ε, and in two steps (via
j) with probability ε · q_{ij} > 0. Hence, it will reach σ with probability strictly smaller than 1 − ε, and
the overall probability of reaching σ will be strictly smaller than 1 − ε(n−k)/n.
5
Proof of Monotonicity and Submodularity
Denote by P_S[A] the probability of event A when transitions from S to σ are allowed. We define:
$$c_i^{(k)}(S) = P_S[X_t = \sigma \text{ for some } t \le k \mid X_0 = i] \qquad (5)$$
$$c_i(S) = P_S[X_t = \sigma \text{ for some } t \mid X_0 = i] = \lim_{k \to \infty} c_i^{(k)}(S) \qquad (6)$$
Using c(S) = (c_1(S), . . . , c_n(S)), the objective in Eq. (1) now becomes:
$$\max_{S \subseteq [n],\, |S| \le k} f(S), \qquad f(S) = \langle \pi, c(S) \rangle = P_S[X_t = \sigma \text{ for some } t] \qquad (7)$$
We now prove that f (S) is both monotonically non-decreasing and submodular.
5.1
Monotonicity
When a link is created from i to σ, the probability of reaching σ directly from i increases. However,
due to the renormalization constraints, the probability of reaching σ via longer paths may decrease.
Trying to prove that f is monotone for every random walk and using additive closure is bound to fail.
Nonetheless, our proof of monotonicity shows that the overall probability cannot decrease.

Theorem 5.1. For every k ≥ 0 and i ∈ [n], c_i^{(k)} is non-decreasing. Namely, for all S ⊆ [n] and
z ∈ [n] \ S, it holds that c_i^{(k)}(S) ≤ c_i^{(k)}(S ∪ {z}).

Proof. We prove by induction on k. For k = 0, as π is independent of S and z, we have:
$$c_i^{(0)}(S) = \mathbb{1}_{\{i=\sigma\}} = c_i^{(0)}(S \cup \{z\})$$
Assume now that the claim holds for some k ≥ 0. For any T ⊆ [n], it holds that:
$$c_i^{(k+1)}(T) = \sum_{j=1}^{n} \rho_{ij}(T)\, c_j^{(k)}(T) + \rho_{i\sigma}(T)\, \mathbb{1}_{\{i \in T\}} \qquad (8)$$
We separate into cases. When i ≠ z, we have:
$$i \in S:\quad c_i^{(k+1)}(S) = \sum_{j=1}^{n} q^+_{ij}\, c_j^{(k)}(S) + q^+_{i\sigma} \le \sum_{j=1}^{n} q^+_{ij}\, c_j^{(k)}(S \cup z) + q^+_{i\sigma} = c_i^{(k+1)}(S \cup z) \qquad (9)$$
$$i \notin S:\quad c_i^{(k+1)}(S) = \sum_{j=1}^{n} q_{ij}\, c_j^{(k)}(S) \le \sum_{j=1}^{n} q_{ij}\, c_j^{(k)}(S \cup z) = c_i^{(k+1)}(S \cup z) \qquad (10)$$
using the inductive assumption and Eq. (8). When i = z, we have:
$$c_i^{(k+1)}(S) \le \sum_{j=1}^{n} q_{ij}\, c_j^{(k)}(S \cup z) = \sum_{j=1}^{n} q^+_{ij}\, c_j^{(k)}(S \cup z) + \sum_{j=1}^{n} (q_{ij} - q^+_{ij})\, c_j^{(k)}(S \cup z)$$
$$\le \sum_{j=1}^{n} q^+_{ij}\, c_j^{(k)}(S \cup z) + \sum_{j=1}^{n} (q_{ij} - q^+_{ij}) = \sum_{j=1}^{n} q^+_{ij}\, c_j^{(k)}(S \cup z) + q^+_{z\sigma} = c_i^{(k+1)}(S \cup z)$$
due to $q_{ij} \ge q^+_{ij}$, $c \le 1$, $\sum_{j=1}^{n} q_{ij} = 1$, and $\sum_{j=1}^{n} q^+_{ij} = 1 - q^+_{i\sigma}$.
Corollary 5.2. ∀i ∈ [n], c_i(S) is non-decreasing, hence f(S) = ⟨π, c(S)⟩ is non-decreasing.
5.2
Submodularity
Submodularity captures the principle of diminishing returns. A function f(S) is submodular if:
$$\forall X \subseteq Y \subseteq [n],\ z \notin Y, \qquad f(X \cup \{z\}) - f(X) \ge f(Y \cup \{z\}) - f(Y)$$
In what follows we will use the following equivalent definition:
$$\forall S \subseteq [n],\ z_1, z_2 \in [n] \setminus S, \qquad f(S \cup \{z_1\}) + f(S \cup \{z_2\}) \ge f(S \cup \{z_1, z_2\}) + f(S) \qquad (11)$$
Using this formulation, we now show that f (S) as defined in Eq. (7) is submodular.
Theorem 5.3. For every k ≥ 0 and i ∈ [n], c_i^{(k)}(S) is a submodular function.

Proof. We prove by induction on k. For k = 0, once again π is independent of S and hence c_i^{(0)} is
modular. Assume now that the claim holds for some k ≥ 0. For brevity we define:
$$c_i^{(k)} = c_i^{(k)}(S), \quad c_{i,1}^{(k)} = c_i^{(k)}(S \cup \{z_1\}), \quad c_{i,2}^{(k)} = c_i^{(k)}(S \cup \{z_2\}), \quad c_{i,12}^{(k)} = c_i^{(k)}(S \cup \{z_1, z_2\})$$
We would like to show that $c_{i,1}^{(k+1)} + c_{i,2}^{(k+1)} \ge c_{i,12}^{(k+1)} + c_i^{(k+1)}$. For every j ∈ [n], we will prove that:
$$\rho_{ij}(S \cup \{z_1\})\, c_{j,1}^{(k)} + \rho_{ij}(S \cup \{z_2\})\, c_{j,2}^{(k)} \ge \rho_{ij}(S \cup \{z_1, z_2\})\, c_{j,12}^{(k)} + \rho_{ij}(S)\, c_j^{(k)} \qquad (12)$$
By summing over all j ∈ [n] and adding $\rho_{i\sigma}\, \mathbb{1}_{\{i \in T\}}$ we get Eq. (8) and conclude our proof.
We separate into different cases for i. If i ∈ S, then we have ρ_{ij}(S ∪ {z₁, z₂}) = ρ_{ij}(S ∪ {z₁}) =
ρ_{ij}(S ∪ {z₂}) = ρ_{ij}(S) = q⁺_{ij}. Similarly, if i ∉ S ∪ {z₁, z₂}, then all terms now equal q_{ij}. Eq. (12)
then follows from the inductive assumption.
Assume i = z₁ (and analogously for i = z₂). From the assumption in Eq. (2) we can write
q_{ij} = (1 + α) q⁺_{ij} for some α ≥ 0. Then Eq. (12) becomes:
$$q^+_{ij}\, c_{j,1}^{(k)} + (1 + \alpha)\, q^+_{ij}\, c_{j,2}^{(k)} \ge q^+_{ij}\, c_{j,12}^{(k)} + (1 + \alpha)\, q^+_{ij}\, c_j^{(k)} \qquad (13)$$
Divide by q⁺_{ij} > 0 if needed and reorder to get:
$$c_{j,1}^{(k)} + c_{j,2}^{(k)} - c_{j,12}^{(k)} - c_j^{(k)} + \alpha\,(c_{j,2}^{(k)} - c_j^{(k)}) \ge 0 \qquad (14)$$
This indeed holds since the first term is non-negative from the inductive assumption, and the second
term is non-negative because of monotonicity and α ≥ 0.
Corollary 5.4. ∀i ∈ [n], c_i(S) is submodular, hence f(S) = ⟨π, c(S)⟩ is submodular.
Algorithm 1
1: function SimpleGreedyTagOpt(Q, Q⁺, π, k)    ▷ See supp. for efficient implementation
2:     Initialize S = ∅
3:     for i ← 1 to k do
4:         for z ∈ [n] \ S do
5:             c = (I − A(S ∪ {z})) \ b(S ∪ {z})    ▷ A, b are set by Q, Q⁺ using Eqs. (3), (15)
6:             v(z) = ⟨π, c⟩
7:         S ← S ∪ argmax_z v(z)
8:     Return S
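A direct, unoptimized NumPy rendering of Algorithm 1 might look as follows. The names (`f_of_S`, `greedy_tag_opt`, `make_A_b`) are ours, and each step solves the linear system of Eq. (16) from scratch rather than using the rank-1 updates discussed later in this section:

```python
import numpy as np

def f_of_S(A, b, pi):
    # f(S) = <pi, c> where (I - A(S)) c = b(S), as in Eq. (16).
    c = np.linalg.solve(np.eye(A.shape[0]) - A, b)
    return float(pi @ c)

def greedy_tag_opt(make_A_b, pi, n, k):
    """Greedy (1 - 1/e)-approximation of Eq. (1).

    make_A_b(S) -> (A, b): caller-supplied function returning the
    transient block A(S) and column b(S), assembled from Q and Q+
    via Eqs. (3) and (15).
    """
    S = set()
    for _ in range(k):
        best_z, best_val = None, -np.inf
        for z in range(n):
            if z in S:
                continue
            A, b = make_A_b(S | {z})
            val = f_of_S(A, b, pi)
            if val > best_val:
                best_z, best_val = z, val
        S.add(best_z)
    return S
```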
6
Optimization
Maximizing submodular functions is hard in general. However, a classic result by Nemhauser [13]
shows that a non-decreasing submodular set function, such as our f (S), can be efficiently optimized
via a simple greedy algorithm, with a guaranteed (1 − 1/e)-approximation of the optimum. The greedy
algorithm initializes S = ∅, and then sequentially adds elements to S. For a given S, the algorithm
iterates over all z ∈ [n] \ S and computes f(S ∪ {z}). Then, it adds the highest scoring z to S, and
continues to the next step. We now discuss its implementation for our problem.
Computing f(S) for a given S reduces to solving a set of linear equations. For transient states
{1, . . . , n − r} and absorbing states {n − r + 1, . . . , n + 1 = σ}, the transition matrix ρ(S) becomes:
$$\rho(S) = \begin{pmatrix} A(S) & B(S) \\ 0 & I \end{pmatrix} \qquad (15)$$
where A(S) are the transition probabilities between transient states, B(S) are the transition probabilities from transient states to absorbing states, and I is the identity matrix. When clear from context
we will drop the dependence of A, B on S. Note that ρ(S) has at least one absorbing state (namely
σ). We denote by b the column of B corresponding to state σ (i.e., B's rightmost column).
We would like to calculate f(S). By Eq. (6), the probability of reaching σ given an initial state i is:
$$c_i(S) = \sum_{t=0}^{\infty} \sum_{j \in [n-r]} P_S[X_t = \sigma \mid X_{t-1} = j]\; P_S[X_{t-1} = j \mid X_0 = i] = \left( \sum_{t=0}^{\infty} A^t b \right)_i$$
The above series has a closed form solution:
$$\sum_{t=0}^{\infty} A^t = (I - A)^{-1} \quad \Rightarrow \quad c = (I - A)^{-1} b$$
Thus, c(S) is the solution of the set of linear equations, which readily gives us f(S):
$$f(S) = \langle \pi, c \rangle \quad \text{s.t.} \quad (I - A(S))\, c = b(S) \qquad (16)$$
The greedy algorithm can thus be implemented by sequentially considering candidate sets S of
increasing size, and for each z calculating f(S ∪ {z}) by solving a set of linear equations (see
Algorithm 1). Though parallelizable, this naïve implementation may be costly as it requires solving
O(n²) sets of n − r linear equations, one for every addition of z to S. Fast submodular solvers [7]
can reduce the number of f(S) evaluations by an order of magnitude. In addition, we now show how
a significant speedup in computing f(S) itself can be achieved using certain properties of f(S).
A standard method for solving the set of linear equations (I − A)c = b is to first compute an LUP
decomposition for (I − A), namely find lower and upper triangular matrices L, U and a permutation
matrix P such that LU = P(I − A). Then, it suffices to solve Ly = Pb and Uc = y. Since L and U
are triangular, solving these equations can be performed efficiently. The costly operation is computing
the decomposition in the first place.
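For instance, the factor-once, solve-cheaply pattern can be sketched with SciPy's LU routines; the small random matrix below is only a stand-in for A(S):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
A = 0.9 * rng.random((5, 5)) / 5     # stand-in sub-stochastic block A(S)
b = rng.random(5)                    # stand-in column b(S)

lu, piv = lu_factor(np.eye(5) - A)   # costly step: one LUP decomposition
c = lu_solve((lu, piv), b)           # cheap triangular solves Ly = Pb, Uc = y
```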
Recall that ρ(S) is composed of rows from Q⁺ corresponding to S and rows from Q corresponding to
[n] \ S. This means that ρ(S) and ρ(S ∪ {z}) differ only in one row, or equivalently, that ρ(S ∪ {z})
can be obtained from ρ(S) by adding a rank-1 matrix. Given an LUP decomposition of ρ(S), we can
efficiently compute f(S ∪ {z}) (and the corresponding decomposition) using efficient rank-1-update
techniques such as Bartels-Golub-Reid [17], which are especially efficient for sparse matrices. As a
result, it suffices to compute only a single LUP decomposition once at the beginning, and perform
cheap updates at every step. We give an efficient implementation in the supp. material.
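A simpler dense alternative to Bartels-Golub-Reid, built on the same rank-1 observation, is the Sherman-Morrison identity. The sketch below (our own naming, not the supplementary implementation) maintains an explicit inverse of M = I − A and refreshes it in O(n²) when one row of A changes by a vector d:

```python
import numpy as np

def solve_after_row_update(M_inv, i, d, b):
    """Solve (M - e_i d^T) c = b given M_inv = M^{-1}.

    Changing row i of A by +d changes M = I - A by -e_i d^T, so
    Sherman-Morrison gives the new inverse in O(n^2). Returns the
    solution c together with the updated inverse (denominator
    assumed nonzero, i.e., the updated matrix stays invertible).
    """
    u = M_inv[:, i]              # M^{-1} e_i
    w = d @ M_inv                # d^T M^{-1}
    denom = 1.0 - w[i]           # 1 - d^T M^{-1} e_i
    M_new_inv = M_inv + np.outer(u, w) / denom
    return M_new_inv @ b, M_new_inv
```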
7
Optimal Tagging
In this section we return to the task of optimal tagging and show how the Markov chain optimization
framework described above can be applied. We use a random surfer model, where a browsing user
hops between items and tags in a bipartite Markov chain. In its explicit form, our model captures the
activity of browsing users who, when viewing an item, are presented with the item's tags and may
choose to click on them (and similarly when viewing tags).
In reality, many systems also include direct links between related items, often in the form of a ranked
list of item recommendations. The relatedness of two items is in many cases, at least to some extent,
based on their mutual tags. Our model captures this notion of similarity by indirect transitions via
tag states. This allows us to encode tags as variables in the objective. Furthermore, adding direct
transitions between items is straightforward as our results apply to general Markov chains. Note that
in contrast to models for tag recommendation, we do not need to explicitly model users, as our setup
defines only one distinct optimization task per item.
In what follows we formalize the above notions. Consider a system of m items Ω = {ω₁, . . . , ω_m}
and n tags T = {τ₁, . . . , τ_n}. Each item ω_i has a set of tags T_i ⊆ T, and each tag τ_j has a set of
items Ω_j ⊆ Ω. The items and tags constitute the states of a bipartite Markov chain, where users hop
between items and tags. Specifically, the transition matrix ρ can have non-zero entries ρ_{ij} and ρ_{ji} for
items ω_i tagged by τ_j. To model the fact that browsing users eventually leave the system, we add a
global absorbing state λ and add transition probabilities ρ_{iλ} = ε_i > 0 for all items ω_i. For simplicity
we assume that ε_i = ε for all i, and that π can be non-zero only for tag states.
In our setting, when a new item σ is uploaded, its owner may choose a set S ⊆ T of at most k tags
for σ. Her goal is to choose S such that the probability of an arbitrary browsing user reaching (or
equivalently, being absorbed in) σ while browsing the system is maximal. As in the general case, the
choice of S affects the transition matrix ρ(S). Denote by P_{ij} the transition probability from item ω_i
to tag τ_j, by R_{ji}(S) the transition probability from τ_j to ω_i under S, and let r_j(S) = R_{jσ}(S). Using
Eq. (15), ρ(S) can be written as:
$$\rho(S) = \begin{pmatrix} A & B \\ 0 & I_2 \end{pmatrix}, \quad A = \begin{pmatrix} 0 & R(S) \\ P & 0 \end{pmatrix}, \quad B = \begin{pmatrix} 0 & r(S) \\ \varepsilon \mathbf{1} & 0 \end{pmatrix}, \quad I_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$
where 0 and 1 are appropriately sized vectors or matrices. Since we are only interested in selecting
tags, we may consider a chain that includes only the tag states, with the item states marginalized out.
The transition matrix between tags is given by ρ₂(S) = R(S)P. The transition probabilities from
tags to σ remain r(S). Our objective of maximizing the probability of reaching σ under S is then:
$$f(S) = \langle \pi, c \rangle \quad \text{s.t.} \quad (I - R(S)P)\, c = r(S) \qquad (17)$$
which is a special case of the general objective presented in Eq. (16), and hence can be optimized
efficiently. In the supplementary material we prove that this special case is still NP-hard.
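A compact sketch of evaluating Eq. (17) on the marginalized tag-only chain follows; we assume R(S) and r(S) have already been formed for the chosen tag set, and the function name is ours:

```python
import numpy as np

def f_bipartite(R_S, P, r_S, pi):
    """Eq. (17): f(S) = <pi, c> with (I - R(S) P) c = r(S).

    R_S: (n_tags, n_items) transitions from tags to items under S.
    P:   (n_items, n_tags) transitions from items to tags.
    r_S: (n_tags,) transitions from tags directly to the new item sigma.
    pi:  (n_tags,) initial distribution over tag states.
    """
    n = R_S.shape[0]
    c = np.linalg.solve(np.eye(n) - R_S @ P, r_S)
    return float(pi @ c)
```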
8
Experiments
To demonstrate the effectiveness of our approach, we perform experiments on optimal tagging in data
collected from Last.fm, Delicious, and Movielens by the HetRec 2011 workshop [3]. The datasets
include all items (between 10,197 and 59,226) and tags (between 11,946 and 53,388) reached by
crawling a set of about 2,000 users in each system, as well as some metadata.
For each dataset, we first created a bipartite graph of items and tags. Next, we generated 100 different
instances of our problem per dataset by expanding each of the 100 highest-degree tags and creating a
Markov chain for their items and their tags. We discarded nodes with less than 10 edges.
To create an interesting tag selection setup, for each item in each instance we augmented its true
tags with up to 100 similar tags (based on [18]). These served as the set of candidate tags for which
[Figure 1, three panels (Last.fm, Delicious, Movielens): Pr(σ) plotted against k ∈ {1, . . . , 25} for Greedy, PageRank, BiFolkRank*, High degree, Low degree, True tags, One step, and Random.]
Figure 1: The probability of reaching a focal item σ under a budget of k tags for various methods.
transitions to the item were allowed. We focused on items which were ranked first in at least 10 of
their 100 candidate tags, giving a total of 18,167 focal items for comparison. For each such item, our
task was to choose the k tags which maximize the probability of reaching the focal item.
Transition probabilities from tags to items were set to be proportional to the item weights - number of
listens for artists in Last.fm, tag counts for bookmarks in Delicious, and averaged ratings for movies
in Movielens. As the datasets do not include explicit weights for tags, we used uniform transition
probabilities from items to tags. The initial distribution was set to be uniform over the set of candidate
tags, and the transition probability from items to λ was set to ε = 0.1.
We compared the performance of our greedy algorithm with several baselines. Random-walk
based methods included PageRank and an adaptation² of BiFolkRank [10], a state-of-the-art tag
recommendation method that operates on item-tag relations. Heuristics included choosing tags with
highest and lowest degree, true labels (for relevant k-s) sorted by weight, and random. To measure
the added value of long random walks, we also display the probability of reaching σ in one step.
Results for all three datasets are provided in Fig. 1, which shows the average probability of reaching
the focal item for values of k ∈ {1, . . . , 25}. As can be seen, the greedy method clearly outperforms
other baselines. Considering paths of all lengths improves results by a considerable 20-30% for
k = 1, and roughly 5% for k = 25. An interesting observation is that the performance of the true
tags is rather poor. A plausible explanation for this is that the data we use are taken from collaborative
tagging systems, where items can be tagged by any user. In such systems, tags typically play a
categorical or hierarchical role, and as such are probably not optimal for promoting item popularity.
The supplementary material includes an interesting case analysis.
9
Conclusions
In this paper we introduced the problem of optimal tagging, along with the general problem of
optimizing probability mass in Markov chains by adding links. We proved that the problem is NP-hard, but can be (1 − 1/e)-approximated due to the submodularity and monotonicity of the objective.
Our efficient greedy algorithm can be used in practice for choosing optimal tags or keywords in
various domains. Our experimental results show that simple heuristics and PageRank variants
underperform our disciplined approach, and naïvely selecting the true tags can be suboptimal.
In our work we assumed access to the transition probabilities between tags and items and vice versa.
While the transition probabilities for existing items can be easily estimated by a system?s operator,
estimating the probabilities from tags to new items is non-trivial. This is an interesting problem to
pursue. Even so, users do not typically have access to the information required for estimation. Our
results suggest that users can simply apply the greedy steps sequentially via trial-and-error [9].
Finally, since our task is of a counterfactual nature, it is hard to draw conclusions from the experiments
as to the effectiveness of our method in real settings. It would be interesting to test it in reality, and
compare it to strategies used by both lay users and experts. Especially interesting in this context are
competitive domains such as ad placements and viral marketing. We leave this for future research.
Acknowledgments: This work was supported by the ISF Centers of Excellence grant 2180/15, and by the Intel
Collaborative Research Institute for Computational Intelligence (ICRI-CI).
2
To apply the method to our setting, we used a uniform prior over user-tag relations.
References
[1] Konstantin Avrachenkov and Nelly Litvak. The effect of new links on Google PageRank. Stochastic Models, 22(2):319–331, 2006.
[2] Sergey Brin and Lawrence Page. Reprint of: The anatomy of a large-scale hypertextual web search engine. Computer Networks, 56(18):3825–3833, 2012.
[3] Iván Cantador, Peter Brusilovsky, and Tsvi Kuflik. 2nd workshop on information heterogeneity and fusion in recommender systems (HetRec 2011). In Proceedings of the 5th ACM Conference on Recommender Systems, RecSys 2011, New York, NY, USA, 2011. ACM.
[4] Balázs Csanád Csáji, Raphaël M. Jungers, and Vincent D. Blondel. PageRank optimization in polynomial time by stochastic shortest path reformulation. In Algorithmic Learning Theory, pages 89–103. Springer, 2010.
[5] Xiaomin Fang, Rong Pan, Guoxiang Cao, Xiuqiang He, and Wenyuan Dai. Personalized tag recommendation through nonlinear tensor factorization using Gaussian kernel. In Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
[6] Aristides Gionis, Evimaria Terzi, and Panayiotis Tsaparas. Opinion maximization in social networks. In SDM, pages 387–395. SIAM, 2013.
[7] Amit Goyal, Wei Lu, and Laks V. S. Lakshmanan. CELF++: Optimizing the greedy algorithm for influence maximization in social networks. In Proceedings of the 20th International Conference Companion on World Wide Web, pages 47–48. ACM, 2011.
[8] Andreas Hotho, Robert Jäschke, Christoph Schmitz, Gerd Stumme, and Klaus-Dieter Althoff. FolkRank: A ranking algorithm for folksonomies. In LWA, volume 1, pages 111–114, 2006.
[9] David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 137–146. ACM, 2003.
[10] Heung-Nam Kim and Abdulmotaleb El Saddik. Personalized PageRank vectors for tag recommendations: Inside FolkRank. In Proceedings of the Fifth ACM Conference on Recommender Systems, pages 45–52. ACM, 2011.
[11] Andreas Krause, Ajit Singh, and Carlos Guestrin. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. The Journal of Machine Learning Research, 9:235–284, 2008.
[12] Charalampos Mavroforakis, Michael Mathioudakis, and Aristides Gionis. Absorbing random-walk centrality: Theory and algorithms. arXiv preprint arXiv:1509.02533, 2015.
[13] George L. Nemhauser, Laurence A. Wolsey, and Marshall L. Fisher. An analysis of approximations for maximizing submodular set functions. Mathematical Programming, 14(1):265–294, 1978.
[14] Martin Olsen. Maximizing PageRank with new backlinks. In International Conference on Algorithms and Complexity, pages 37–48. Springer, 2010.
[15] Martin Olsen and Anastasios Viglas. On the approximability of the link building problem. Theoretical Computer Science, 518:96–116, 2014.
[16] Martin Olsen, Anastasios Viglas, and Ilia Zvedeniouk. A constant-factor approximation algorithm for the link building problem. In Combinatorial Optimization and Applications, pages 87–96. Springer, 2010.
[17] John Ker Reid. A sparsity-exploiting variant of the Bartels-Golub decomposition for linear programming bases. Mathematical Programming, 24(1):55–69, 1982.
[18] Börkur Sigurbjörnsson and Roelof Van Zwol. Flickr tag recommendation based on collective knowledge. In Proceedings of the 17th International Conference on World Wide Web, pages 327–336. ACM, 2008.
5,572 | 6,042 | Learning to Communicate with
Deep Multi-Agent Reinforcement Learning
Jakob N. Foerster1,*
[email protected]
Nando de Freitas1,2,3
[email protected]
Yannis M. Assael1,*
[email protected]
Shimon Whiteson1
[email protected]
1
2
University of Oxford, United Kingdom
Canadian Institute for Advanced Research, CIFAR NCAP Program
3
Google DeepMind
Abstract
We consider the problem of multiple agents sensing and acting in environments
with the goal of maximising their shared utility. In these environments, agents must
learn communication protocols in order to share information that is needed to solve
the tasks. By embracing deep neural networks, we are able to demonstrate end-to-end learning of protocols in complex environments inspired by communication
riddles and multi-agent computer vision problems with partial observability. We
propose two approaches for learning in these domains: Reinforced Inter-Agent
Learning (RIAL) and Differentiable Inter-Agent Learning (DIAL). The former uses
deep Q-learning, while the latter exploits the fact that, during learning, agents can
backpropagate error derivatives through (noisy) communication channels. Hence,
this approach uses centralised learning but decentralised execution. Our experiments introduce new environments for studying the learning of communication
protocols and present a set of engineering innovations that are essential for success
in these domains.
1
Introduction
How language and communication emerge among intelligent agents has long been a topic of intense
debate. Among the many unresolved questions are: Why does language use discrete structures?
What role does the environment play? What is innate and what is learned? And so on. Some of the
debates on these questions have been so fiery that in 1866 the French Academy of Sciences banned
publications about the origin of human language.
The rapid progress in recent years of machine learning, and deep learning in particular, opens the
door to a new perspective on this debate. How can agents use machine learning to automatically
discover the communication protocols they need to coordinate their behaviour? What, if anything,
can deep learning offer to such agents? What insights can we glean from the success or failure of
agents that learn to communicate?
In this paper, we take the first steps towards answering these questions. Our approach is programmatic:
first, we propose a set of multi-agent benchmark tasks that require communication; then, we formulate
several learning algorithms for these tasks; finally, we analyse how these algorithms learn, or fail to
learn, communication protocols for the agents.
* These authors contributed equally to this work.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
The tasks that we consider are fully cooperative, partially observable, sequential multi-agent decision
making problems. All the agents share the goal of maximising the same discounted sum of rewards.
While no agent can observe the underlying Markov state, each agent receives a private observation
correlated with that state. In addition to taking actions that affect the environment, each agent can
also communicate with its fellow agents via a discrete limited-bandwidth channel. Due to the partial
observability and limited channel capacity, the agents must discover a communication protocol that
enables them to coordinate their behaviour and solve the task.
We focus on settings with centralised learning but decentralised execution. In other words, communication between agents is not restricted during learning, which is performed by a centralised
algorithm; however, during execution of the learned policies, the agents can communicate only via the
limited-bandwidth channel. While not all real-world problems can be solved in this way, a great many
can, e.g., when training a group of robots on a simulator. Centralised planning and decentralised
execution is also a standard paradigm for multi-agent planning [1, 2].
To address this setting, we formulate two approaches. The first, reinforced inter-agent learning
(RIAL), uses deep Q-learning [3] with a recurrent network to address partial observability. In one
variant of this approach, which we refer to as independent Q-learning, the agents each learn their
own network parameters, treating the other agents as part of the environment. Another variant trains
a single network whose parameters are shared among all agents. Execution remains decentralised, at
which point they receive different observations leading to different behaviour.
The second approach, differentiable inter-agent learning (DIAL), is based on the insight that centralised learning affords more opportunities to improve learning than just parameter sharing. In
particular, while RIAL is end-to-end trainable within an agent, it is not end-to-end trainable across
agents, i.e., no gradients are passed between agents. The second approach allows real-valued messages to pass between agents during centralised learning, thereby treating communication actions as
bottleneck connections between agents. As a result, gradients can be pushed through the communication channel, yielding a system that is end-to-end trainable even across agents. During decentralised
execution, real-valued messages are discretised and mapped to the discrete set of communication
actions allowed by the task. Because DIAL passes gradients from agent to agent, it is an inherently
deep learning approach.
Experiments on two benchmark tasks, based on the MNIST dataset and a well known riddle, show,
not only can these methods solve these tasks, they often discover elegant communication protocols
along the way. To our knowledge, this is the first time that either differentiable communication or
reinforcement learning with deep neural networks has succeeded in learning communication protocols
in complex environments involving sequences and raw images. The results also show that deep
learning, by better exploiting the opportunities of centralised learning, is a uniquely powerful tool
for learning such protocols. Finally, this study advances several engineering innovations that are
essential for learning communication protocols in our proposed benchmarks.
2
Related Work
Research on communication spans many fields, e.g. linguistics, psychology, evolution and AI. In AI,
it is split along a few axes: a) predefined or learned communication protocols, b) planning or learning
methods, c) evolution or RL, and d) cooperative or competitive settings.
Given the topic of our paper, we focus on related work that deals with the cooperative learning of
communication protocols. Out of the plethora of work on multi-agent RL with communication,
e.g., [4?7], only a few fall into this category. Most assume a pre-defined communication protocol,
rather than trying to learn protocols. One exception is the work of Kasai et al. [7], in which
tabular Q-learning agents have to learn the content of a message to solve a predator-prey task with
communication. Another example of open-ended communication learning in a multi-agent task is
given in [8]. Here evolutionary methods are used for learning the protocols which are evaluated
on a similar predator-prey task. Their approach uses a fitness function that is carefully designed to
accelerate learning. In general, heuristics and handcrafted rules have prevailed widely in this line of
research. Moreover, typical tasks have been necessarily small so that global optimisation methods,
such as evolutionary algorithms, can be applied. The use of deep representations and gradient-based
optimisation as advocated in this paper is an important departure, essential for scalability and further
progress. A similar rationale is provided in [9], another example of making an RL problem end-to-end
differentiable.
Unlike the recent work in [10], we consider discrete communication channels. One of the key
components of our methods is the signal binarisation during the decentralised execution. This is
related to recent research on fitting neural networks in low-powered devices with memory and
computational limitations using binary weights, e.g. [11], and previous works on discovering binary
codes for documents [12].
3
Background
Deep Q-Networks (DQN). In a single-agent, fully-observable, RL setting [13], an agent observes the
current state s_t ∈ S at each discrete time step t, chooses an action u_t ∈ U according to a potentially
stochastic policy π, observes a reward signal r_t, and transitions to a new state s_{t+1}. Its objective
is to maximise an expectation over the discounted return, $R_t = r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \cdots$, where
r_t is the reward received at time t and γ ∈ [0, 1] is a discount factor. The Q-function of a policy π
is $Q^\pi(s, u) = \mathbb{E}[R_t \mid s_t = s, u_t = u]$. The optimal action-value function $Q^*(s, u) = \max_\pi Q^\pi(s, u)$
obeys the Bellman optimality equation $Q^*(s, u) = \mathbb{E}_{s'}[r + \gamma \max_{u'} Q^*(s', u') \mid s, u]$. Deep Q-learning [3] uses neural networks parameterised by θ to represent Q(s, u; θ). DQNs are optimised
by minimising $L_i(\theta_i) = \mathbb{E}_{s,u,r,s'}[(y_i^{DQN} - Q(s, u; \theta_i))^2]$ at each iteration i, with target $y_i^{DQN} = r + \gamma \max_{u'} Q(s', u'; \theta_i^-)$. Here, θ_i⁻ are the parameters of a target network that is frozen for a number
of iterations while updating the online network Q(s, u; θ_i). The action u is chosen from Q(s, u; θ_i)
by an action selector, which typically implements an ε-greedy policy that selects the action that
maximises the Q-value with a probability of 1 − ε and chooses randomly with a probability of ε.
DQN also uses experience replay: during learning, the agent builds a dataset of episodic experiences
and is then trained by sampling mini-batches of experiences.
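As a minimal illustration of this update (a NumPy sketch with our own names, not the authors' code), the DQN target and mean squared TD error over a mini-batch are:

```python
import numpy as np

def dqn_loss(q_online, q_target, batch, gamma=0.99):
    """Mean squared TD error for a batch of (s, u, r, s') transitions.

    q_online(states) -> (B, |U|) Q-values from the online network theta_i.
    q_target(states) -> (B, |U|) Q-values from the frozen target network theta_i^-.
    """
    s, u, r, s_next = batch
    y = r + gamma * q_target(s_next).max(axis=1)   # target y_i^DQN
    q_su = q_online(s)[np.arange(len(u)), u]       # Q(s, u; theta_i)
    return np.mean((y - q_su) ** 2)
```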
Independent DQN. DQN has been extended to cooperative multi-agent settings, in which each agent
a observes the global s_t, selects an individual action u^a_t, and receives a team reward, r_t, shared
among all agents. Tampuu et al. [14] address this setting with a framework that combines DQN
with independent Q-learning, in which each agent a independently and simultaneously learns its
own Q-function Q^a(s, u^a; θ^a_i). While independent Q-learning can in principle lead to convergence
problems (since one agent?s learning makes the environment appear non-stationary to other agents),
it has a strong empirical track record [15, 16], and was successfully applied to two-player pong.
Deep Recurrent Q-Networks. Both DQN and independent DQN assume full observability, i.e., the
agent receives s_t as input. By contrast, in partially observable environments, s_t is hidden and the
agent receives only an observation o_t that is correlated with s_t, but in general does not disambiguate
it. Hausknecht and Stone [17] propose deep recurrent Q-networks (DRQN) to address single-agent,
partially observable settings. Instead of approximating Q(s, u) with a feed-forward network, they
approximate Q(o, u) with a recurrent neural network that can maintain an internal state and aggregate
observations over time. This can be modelled by adding an extra input h_{t−1} that represents the hidden
state of the network, yielding Q(o_t, h_{t−1}, u). For notational simplicity, we omit the dependence of Q
on θ.
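A minimal PyTorch sketch of such a recurrent Q-network follows; the layer choices and names are our own, as the paper does not prescribe this exact architecture:

```python
import torch
import torch.nn as nn

class DRQN(nn.Module):
    """Minimal recurrent Q-network: Q(o_t, h_{t-1}, u)."""

    def __init__(self, obs_dim, hidden_dim, n_actions):
        super().__init__()
        self.gru = nn.GRUCell(obs_dim, hidden_dim)
        self.q_head = nn.Linear(hidden_dim, n_actions)

    def forward(self, obs_t, h_prev):
        h_t = self.gru(obs_t, h_prev)   # aggregate observations over time
        return self.q_head(h_t), h_t    # Q-values for all actions, new hidden state
```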
4
Setting
In this work, we consider RL problems with both multiple agents and partial observability. All the
agents share the goal of maximising the same discounted sum of rewards R_t. While no agent can
observe the underlying Markov state s_t, each agent a receives a private observation o^a_t correlated with
s_t. In every time-step t, each agent selects an environment action u^a_t ∈ U that affects the environment,
and a communication action m^a_t ∈ M that is observed by other agents but has no direct impact on the
environment or reward. We are interested in such settings because it is only when multiple agents and
partial observability coexist that agents have the incentive to communicate. As no communication
protocol is given a priori, the agents must develop and agree upon such a protocol to solve the task.
Since protocols are mappings from action-observation histories to sequences of messages, the space
of protocols is extremely high-dimensional. Automatically discovering effective protocols in this
space remains an elusive challenge. In particular, the difficulty of exploring this space of protocols
is exacerbated by the need for agents to coordinate the sending and interpreting of messages. For
example, if one agent sends a useful message to another agent, it will only receive a positive reward
if the receiving agent correctly interprets and acts upon that message. If it does not, the sender will be
discouraged from sending that message again. Hence, positive rewards are sparse, arising only when
sending and interpreting are properly coordinated, which is hard to discover via random exploration.
We focus on settings where communication between agents is not restricted during centralised
learning, but during the decentralised execution of the learned policies, the agents can communicate
only via a limited-bandwidth channel.
5
Methods
In this section, we present two approaches for learning communication protocols.
5.1
Reinforced Inter-Agent Learning
The most straightforward approach, which we call reinforced inter-agent learning (RIAL), is to
combine DRQN with independent Q-learning for action and communication selection. Each agent's
Q-network represents Q^a(o^a_t, m^{a'}_{t−1}, h^a_{t−1}, u^a), which conditions on that agent's individual hidden
state h^a_{t−1} and observation o^a_t, as well as messages from other agents m^{a'}_{t−1}.
To avoid needing a network with |U||M| outputs, we split the network into Q^a_u and Q^a_m, the Q-values
for the environment and communication actions, respectively. Similarly to [18], the action selector
separately picks u^a_t and m^a_t from Q_u and Q_m, using an ε-greedy policy. Hence, the network requires
only |U| + |M| outputs and action selection requires maximising over U and then over M, but not
maximising over U × M.
Both Qu and Qm are trained using DQN with the following two modifications, which were found to be
essential for performance. First, we disable experience replay to account for the non-stationarity that
occurs when multiple agents learn concurrently, as it can render experience obsolete and misleading.
Second, to account for partial observability, we feed in the actions u and m taken by each agent
as inputs on the next time-step. Figure 1(a) shows how information flows between agents and the
environment, and how Q-values are processed by the action selector in order to produce the action
u^a_t and message m^a_t. Since this approach treats agents as independent networks, the learning phase is
not centralised, even though our problem setting allows it to be. Consequently, the agents are treated
exactly the same way during decentralised execution as during learning.
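The split action selector can be sketched as two independent ε-greedy draws over the Q_u and Q_m heads (hypothetical helper names; `rng` is an explicit NumPy generator):

```python
import numpy as np

def eps_greedy(q_values, eps, rng):
    # With probability eps pick uniformly at random, else the argmax action.
    if rng.random() < eps:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

def select_actions(q_u, q_m, eps, rng):
    """RIAL action selection: u^a_t from Q_u over U, m^a_t from Q_m over M."""
    return eps_greedy(q_u, eps, rng), eps_greedy(q_m, eps, rng)

# usage: u, m = select_actions(q_u, q_m, eps=0.05, rng=np.random.default_rng(0))
```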
(a) RIAL - RL based communication
(b) DIAL - Differentiable communication
Figure 1: The bottom and top rows represent the communication flow for agent a1 and agent a2,
respectively. In RIAL (a), all Q-values are fed to the action selector, which selects both environment
and communication actions. Gradients, shown in red, are computed using DQN for the selected
action and flow only through the Q-network of a single agent. In DIAL (b), the message m_t^a bypasses
the action selector and instead is processed by the DRU (Section 5.2) and passed as a continuous
value to the next C-network. Hence, gradients flow across agents, from the recipient to the sender.
For simplicity, at each time step only one agent is highlighted, while the other agent is greyed out.
Parameter Sharing. RIAL can be extended to take advantage of the opportunity for centralised
learning by sharing parameters among the agents. This variation learns only one network, which is
used by all agents. However, the agents can still behave differently because they receive different
observations and thus evolve different hidden states. In addition, each agent receives its own index
a as input, allowing it to specialise. The rich representations in deep Q-networks can facilitate
the learning of a common policy while also allowing for specialisation. Parameter sharing also
dramatically reduces the number of parameters that must be learned, thereby speeding up learning.
Under parameter sharing, the agents learn two Q-functions Q_u(o_t^a, m_{t−1}^{a′}, h_{t−1}^a, u_{t−1}^a, m_{t−1}^a, a, u_t^a)
and Q_m(o_t^a, m_{t−1}^{a′}, h_{t−1}^a, u_{t−1}^a, m_{t−1}^a, a, m_t^a), for environment and communication actions
respectively. During decentralised execution, each agent uses its
own copy of the learned network, evolving its own hidden state, selecting its own actions, and
communicating with other agents only through the communication channel.
5.2 Differentiable Inter-Agent Learning
While RIAL can share parameters among agents, it still does not take full advantage of centralised
learning. In particular, the agents do not give each other feedback about their communication actions.
Contrast this with human communication, which is rich with tight feedback loops. For example,
during face-to-face interaction, listeners send fast nonverbal cues to the speaker indicating the level
of understanding and interest. RIAL lacks this feedback mechanism, which is intuitively important
for learning communication protocols.
To address this limitation, we propose differentiable inter-agent learning (DIAL). The main insight
behind DIAL is that the combination of centralised learning and Q-networks makes it possible, not
only to share parameters but to push gradients from one agent to another through the communication
channel. Thus, while RIAL is end-to-end trainable within each agent, DIAL is end-to-end trainable
across agents. Letting gradients flow from one agent to another gives them richer feedback, reducing
the required amount of learning by trial and error, and easing the discovery of effective protocols.
DIAL works as follows: during centralised learning, communication actions are replaced with direct
connections between the output of one agent's network and the input of another's. Thus, while
the task restricts communication to discrete messages, during learning the agents are free to send
real-valued messages to each other. Since these messages function as any other network activation,
gradients can be passed back along the channel, allowing end-to-end backpropagation across agents.
In particular, the network, which we call a C-Net, outputs two distinct types of values, as shown in
Figure 1(b): (a) Q(·), the Q-values for the environment actions, which are fed to the action selector,
and (b) m_t^a, the real-valued vector message to other agents, which bypasses the action selector and
is instead processed by the discretise/regularise unit (DRU(m_t^a)). The DRU regularises it during
centralised learning, DRU(m_t^a) = Logistic(N(m_t^a, σ)), where σ is the standard deviation of the noise
added to the channel, and discretises it during decentralised execution, DRU(m_t^a) = 1{m_t^a > 0}.
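A minimal numpy sketch of the DRU as defined above (our own illustration; the `training` flag switches between the centralised-learning and decentralised-execution behaviours):

```python
import numpy as np

def dru(m, sigma, training, rng=None):
    """Discretise/regularise unit.

    Centralised learning:      DRU(m) = Logistic(N(m, sigma)).
    Decentralised execution:   DRU(m) = 1{m > 0}.
    """
    if training:
        rng = rng or np.random.default_rng()
        noisy = m + sigma * rng.standard_normal(np.shape(m))  # sample from N(m, sigma)
        return 1.0 / (1.0 + np.exp(-noisy))                   # logistic squashing
    return (np.asarray(m) > 0).astype(float)                  # hard 1-bit threshold

rng = np.random.default_rng(0)
soft = dru(np.array([-1.5, 0.3]), sigma=2.0, training=True, rng=rng)
hard = dru(np.array([-1.5, 0.3]), sigma=2.0, training=False)
```

During execution the same unit recovers the limited-bandwidth discrete channel, while during learning the noisy logistic keeps the message differentiable.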
Figure 1 shows how gradients flow differently in RIAL and DIAL. The gradient chains for Q_u in
RIAL and Q in DIAL are based on the DQN loss. However, in DIAL the gradient term for m is the
backpropagated error from the recipient of the message to the sender. Using this inter-agent gradient
for training provides a richer training signal than the DQN loss for Q_m in RIAL. While the DQN
error is nonzero only for the selected message, the incoming gradient is a |m|-dimensional vector
that can contain more information. It also allows the network to directly adjust messages in order to
minimise the downstream DQN loss, reducing the need for trial and error learning of good protocols.
While we limit our analysis to discrete messages, DIAL naturally handles continuous message spaces,
as they are used anyway during centralised learning. At the same time, DIAL can also scale to large
discrete message spaces, since it learns binary encodings instead of the one-hot encoding in RIAL,
|m| = O(log(|M|)). Further algorithmic details and pseudocode are in the supplementary material.
6 Experiments
In this section, we evaluate RIAL and DIAL with and without parameter sharing in two multi-agent
problems and compare them with a no-communication shared-parameter baseline (NoComm). Results
presented are the average performance across several runs; runs without parameter sharing (-NS) are
represented by dashed lines. Across plots, rewards are normalised by the highest average
reward achievable given access to the true state (Oracle). In our experiments, we use an ε-greedy
policy with ε = 0.05, the discount factor is γ = 1, and the target network is reset every 100 episodes.
To stabilise learning, we execute parallel episodes in batches of 32. The parameters are optimised
using RMSProp [19] with a learning rate of 5 × 10^−4. The architecture uses rectified linear units
(ReLU) and gated recurrent units (GRUs) [20], which have similar performance to long short-term
memory (LSTM) [21, 22]. Unless stated otherwise, we set the standard deviation of noise added to
the channel to σ = 2, which was found to be essential for good performance (source code is available
at https://github.com/iassael/learning-to-communicate).
6.1 Model Architecture
RIAL and DIAL share the same individual model architecture. For brevity, we describe only the
DIAL model here. As illustrated in Figure 2, each agent consists of a recurrent neural network
(RNN), unrolled for T time-steps, that maintains an internal state h, an input network for producing
a task embedding z, and an output network for the Q-values and the messages m. The input for
agent a is defined as a tuple of (o_t^a, m_{t−1}^{a′}, u_{t−1}^a, a). The inputs a and u_{t−1}^a are passed through
lookup tables, and m_{t−1}^{a′} through a 1-layer MLP, both producing embeddings of size 128.
o_t^a is processed through a task-specific network that produces an additional embedding of the same
size. The state embedding is produced by element-wise summation of these embeddings,
z_t^a = TaskMLP(o_t^a) + MLP[|M|, 128](m_{t−1}) + Lookup(u_{t−1}^a) + Lookup(a).
We found that performance and stability improved when a batch normalisation layer [23]
was used to preprocess m_{t−1}. z_t^a is processed through a 2-layer RNN with GRUs, h_{1,t}^a =
GRU[128, 128](z_t^a, h_{1,t−1}^a), which is used to approximate the agent's action-observation history.
Finally, the output h_{2,t}^a of the top GRU layer is passed through a 2-layer MLP: Q_t^a, m_t^a =
MLP[128, 128, (|U| + |M|)](h_{2,t}^a).

Figure 2: DIAL architecture.
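For concreteness, a PyTorch sketch of one such agent network (our own rendering of the description above; the layer sizes follow the text, but module and variable names are assumptions and details such as initialisation are omitted):

```python
import torch
import torch.nn as nn

class CNet(nn.Module):
    """One DIAL agent: input embeddings -> 2-layer GRU -> (Q-values, message)."""
    def __init__(self, obs_dim, n_agents, n_u, n_m, hidden=128):
        super().__init__()
        self.n_u, self.n_m = n_u, n_m
        self.task_mlp = nn.Linear(obs_dim, hidden)           # TaskMLP(o_t^a)
        self.msg_mlp = nn.Sequential(nn.BatchNorm1d(n_m),    # preprocess m_{t-1}
                                     nn.Linear(n_m, hidden))
        self.u_lookup = nn.Embedding(n_u, hidden)            # Lookup(u_{t-1}^a)
        self.a_lookup = nn.Embedding(n_agents, hidden)       # Lookup(a)
        self.gru = nn.GRU(hidden, hidden, num_layers=2, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_u + n_m))

    def forward(self, obs, m_prev, u_prev, agent_id, h_prev):
        # State embedding: element-wise sum of the four input embeddings.
        z = (self.task_mlp(obs) + self.msg_mlp(m_prev)
             + self.u_lookup(u_prev) + self.a_lookup(agent_id))
        out, h = self.gru(z.unsqueeze(1), h_prev)            # one unrolled time-step
        q, msg = self.head(out.squeeze(1)).split([self.n_u, self.n_m], dim=-1)
        return q, msg, h
```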
6.2 Switch Riddle
The first task is inspired by a well-known riddle described as follows: "One hundred prisoners have
been newly ushered into prison. The warden tells them that starting tomorrow, each of them will be
placed in an isolated cell, unable to communicate amongst each other. Each day, the warden will
choose one of the prisoners uniformly at random with replacement, and place him in a central
interrogation room containing only a light bulb with a toggle switch. The prisoner will be able to
observe the current state of the light bulb. If he wishes, he can toggle the light bulb. He also has the
option of announcing that he believes all prisoners have visited the interrogation room at some point
in time. If this announcement is true, then all prisoners are set free, but if it is false, all prisoners are
executed [...]" [24].

Figure 3: Switch: Every day one prisoner gets sent to the interrogation room, where he sees the
switch and chooses from 'On', 'Off', 'Tell' and 'None'.
Architecture. In our formalisation, at time-step t, agent a observes o_t^a ∈ {0, 1}, which indicates if
the agent is in the interrogation room. Since the switch has two positions, it can be modelled as a
1-bit message, m_t^a. If agent a is in the interrogation room, then its actions are u_t^a ∈ {'None', 'Tell'};
otherwise the only action is 'None'. The episode ends when an agent chooses 'Tell' or when the
maximum time-step, T, is reached. The reward r_t is 0 unless an agent chooses 'Tell', in which
case it is 1 if all agents have been to the interrogation room and −1 otherwise. Following the riddle
definition, in this experiment m_{t−1}^a is available only to the agent a in the interrogation room. Finally,
we set the time horizon T = 4n − 6 in order to keep the experiments computationally tractable.
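A minimal environment sketch of this formalisation (our own illustration; the bulb state itself is carried by the 1-bit message channel, so the environment only tracks who is in the room, who has visited, the horizon, and the 'Tell' reward):

```python
import random

class SwitchRiddle:
    """Toy switch-riddle environment matching the description above."""
    def __init__(self, n):
        self.n, self.T = n, 4 * n - 6

    def reset(self):
        self.t, self.visited = 0, set()
        self.in_room = random.randrange(self.n)
        return self.in_room                 # o_t^a = 1 only for this agent

    def step(self, action):                 # action of the agent in the room
        self.visited.add(self.in_room)
        self.t += 1
        if action == "Tell":
            return (1.0 if len(self.visited) == self.n else -1.0), True
        if self.t >= self.T:
            return 0.0, True                # horizon reached, no announcement
        self.in_room = random.randrange(self.n)
        return 0.0, False

env = SwitchRiddle(n=3)
agent_in_room = env.reset()
reward, done = env.step("None")
```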
Complexity. The switch riddle poses significant protocol learning challenges. At any time-step t,
there are |o|^t possible observation histories for a given agent, with |o| = 3: the agent either is not
in the interrogation room or receives one of two messages when it is. For each of these histories,
an agent can choose between 4 = |U||M| different options, so at time-step t, the single-agent policy
space is (|U||M|)^{|o|^t} = 4^{3^t}. The product of all policies for all time-steps defines the total
policy space for an agent: ∏_t 4^{3^t} = 4^{(3^{T+1}−3)/2}, where T is the final time-step. The size
of the multi-agent policy space grows exponentially in n, the number of agents: 4^{n(3^{T+1}−3)/2}.
We consider a setting where T is proportional to the number of agents, so the total policy space is
4^{n·3^{O(n)}}. For n = 4, the size is 4^{354288}. Our approach using DIAL is to model the switch
as a continuous message, which is binarised during decentralised execution.

Figure 4: Switch: (a-b) Performance of DIAL and RIAL, with and without (-NS) parameter sharing,
and the NoComm baseline, for n = 3 and n = 4 agents. (c) The decision tree extracted for n = 3 to
interpret the communication protocol discovered by DIAL.
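As a quick arithmetic check of the policy-space size quoted above (with T = 4n − 6):

```python
n = 4
T = 4 * n - 6                              # horizon used in the experiments
exponent = n * (3 ** (T + 1) - 3) // 2     # n * (3^(T+1) - 3) / 2
assert exponent == 354288                  # total policy space is 4**354288
```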
Experimental results. Figure 4(a) shows our results for n = 3 agents. All four methods learn an
optimal policy in 5k episodes, substantially outperforming the NoComm baseline. DIAL with parameter sharing reaches optimal performance substantially faster than RIAL. Furthermore, parameter
sharing speeds up both methods. Figure 4(b) shows results for n = 4 agents. DIAL with parameter
sharing again outperforms all other methods. In this setting, RIAL without parameter sharing was
unable to beat the NoComm baseline. These results illustrate how difficult it is for agents to learn the
same protocol independently. Hence, parameter sharing can be crucial for learning to communicate.
DIAL-NS performs similarly to RIAL, indicating that the gradient provides a richer and more robust
source of information. We also analysed the communication protocol discovered by DIAL for n = 3
by sampling 1K episodes, for which Figure 4(c) shows a decision tree corresponding to an optimal
strategy. When a prisoner visits the interrogation room after day two, there are only two options:
either one or two prisoners may have visited the room before. If all three prisoners had already been,
the third prisoner would have finished the game. The two remaining options can be encoded via the
'On' and 'Off' positions respectively.
6.3 MNIST Games
In this section, we consider two tasks based on the well-known MNIST digit classification dataset [25].
Colour-Digit MNIST is a two-player game in which each agent observes the pixel values of a random
MNIST digit in red or green, while the colour label and digit value are hidden. The reward consists
of two components that are antisymmetric in the action, colour, and parity of the digits. As only one
bit of information can be sent, agents must agree to encode/decode either colour or parity, with parity
yielding greater rewards. The game has two steps; in the first step, both agents send a 1-bit message,
in the second step they select a binary action.

Multi-Step MNIST is a grayscale variant that requires agents to develop a communication protocol
that integrates information across 5 time-steps in order to guess each other's digits. At each step,
the agents exchange a 1-bit message and at the final step, t = 5, they are awarded r = 0.5 for each
correctly guessed digit. Further details on both tasks are provided in the supplementary material.

Figure 5: MNIST games architectures.
Architecture. The input processing network is a 2-layer MLP, TaskMLP[(|c| × 28 × 28), 128, 128](o_t^a).
Figure 5 depicts the generalised setting for both games. Our experimental evaluation showed improved
training time using batch normalisation after the first layer.
Figure 6: MNIST Games: (a,b) Performance of DIAL and RIAL, with and without (-NS) parameter
sharing, and NoComm, for both MNIST games. (c) Extracted coding scheme for multi-step MNIST.
Experimental results. Figures 6(a) and 6(b) show that DIAL substantially outperforms the other
methods on both games. Furthermore, parameter sharing is crucial for reaching the optimal protocol.
In multi-step MNIST, results were obtained with σ = 0.5. In this task, RIAL fails to learn, while in
colour-digit MNIST it fluctuates around local minima in the protocol space; the NoComm baseline
is stagnant at zero. DIAL's performance can be attributed to directly optimising the messages in
order to reduce the global DQN error, while RIAL must rely on trial and error. DIAL can also
optimise the message content with respect to rewards taking place many time-steps later, due to the
gradient passing between agents, leading to optimal performance in multi-step MNIST. To analyse
the protocol that DIAL learned, we sampled 1K episodes. Figure 6(c) illustrates the communication
bit sent at time-step t by agent 1, as a function of its input digit. Thus, each agent has learned a binary
encoding and decoding of the digits. These results illustrate that differentiable communication in
DIAL is essential to fully exploiting the power of centralised learning and thus is an important tool
for studying the learning of communication protocols.
6.4 Effect of Channel Noise
The question of why language evolved to be discrete has been studied for centuries, see e.g., the
overview in [26]. Since DIAL learns to communicate in a continuous channel, our results offer an
illuminating perspective on this topic. In particular, Figure 7 shows that, in the switch riddle, DIAL
without noise in the communication channel learns centred activations. By contrast, the presence
of noise forces messages into two different modes during learning. Similar observations have been
made in relation to adding noise when training document models [12] and performing classification
[11]. In our work, we found that adding noise was essential for successful training. More analysis on
this appears in the supplementary material.

Figure 7: DIAL's learned activations with and without noise in the DRU.
7 Conclusions
This paper advanced novel environments and successful techniques for learning communication
protocols. It presented a detailed comparative analysis covering important factors involved in the
learning of communication protocols with deep networks, including differentiable communication,
neural network architecture design, channel noise, tied parameters, and other methodological aspects.
This paper should be seen as a first attempt at learning communication and language with deep
learning approaches. The gargantuan task of understanding communication and language in their
full splendour, covering compositionality, concept lifting, conversational agents, and many other
important problems still lies ahead. We are however optimistic that the approaches proposed in this
paper can help tackle these challenges.
References
[1] F. A. Oliehoek, M. T. J. Spaan, and N. Vlassis. Optimal and approximate Q-value functions for decentralized POMDPs. JAIR, 32:289–353, 2008.
[2] L. Kraemer and B. Banerjee. Multi-agent reinforcement learning as a rehearsal for decentralized planning. Neurocomputing, 190:82–94, 2016.
[3] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[4] M. Tan. Multi-agent reinforcement learning: Independent vs. cooperative agents. In ICML, 1993.
[5] F. S. Melo, M. Spaan, and S. J. Witwicki. QueryPOMDP: POMDP-based communication in multiagent systems. In Multi-Agent Systems, pages 189–204. 2011.
[6] L. Panait and S. Luke. Cooperative multi-agent learning: The state of the art. Autonomous Agents and Multi-Agent Systems, 11(3):387–434, 2005.
[7] T. Kasai, H. Tenmoto, and A. Kamiya. Learning of communication codes in multi-agent reinforcement learning problem. In IEEE Soft Computing in Industrial Applications, pages 1–6, 2008.
[8] C. L. Giles and K. C. Jim. Learning communication for multi-agent systems. In Innovative Concepts for Agent-Based Systems, pages 377–390. Springer, 2002.
[9] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
[10] S. Sukhbaatar, A. Szlam, and R. Fergus. Learning multiagent communication with backpropagation. arXiv preprint arXiv:1605.07736, 2016.
[11] M. Courbariaux and Y. Bengio. BinaryNet: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.
[12] G. Hinton and R. Salakhutdinov. Discovering binary codes for documents by learning deep generative models. Topics in Cognitive Science, 3(1):74–91, 2011.
[13] R. S. Sutton and A. G. Barto. Introduction to Reinforcement Learning. MIT Press, 1998.
[14] A. Tampuu, T. Matiisen, D. Kodelja, I. Kuzovkin, K. Korjus, J. Aru, J. Aru, and R. Vicente. Multiagent cooperation and competition with deep reinforcement learning. arXiv preprint arXiv:1511.08779, 2015.
[15] Y. Shoham and K. Leyton-Brown. Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. Cambridge University Press, New York, 2009.
[16] E. Zawadzki, A. Lipson, and K. Leyton-Brown. Empirically evaluating multiagent learning algorithms. arXiv preprint arXiv:1401.8074, 2014.
[17] M. Hausknecht and P. Stone. Deep recurrent Q-learning for partially observable MDPs. arXiv preprint arXiv:1507.06527, 2015.
[18] K. Narasimhan, T. Kulkarni, and R. Barzilay. Language understanding for text-based games using deep reinforcement learning. arXiv preprint arXiv:1506.08941, 2015.
[19] T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4(2), 2012.
[20] K. Cho, B. van Merriënboer, D. Bahdanau, and Y. Bengio. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014.
[21] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[22] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
[23] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, pages 448–456, 2015.
[24] W. Wu. 100 prisoners and a lightbulb. Technical report, OCF, UC Berkeley, 2002.
[25] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[26] M. Studdert-Kennedy. How did language go discrete? In M. Tallerman, editor, Language Origins: Perspectives on Evolution, chapter 3. Oxford University Press, 2005.
Unified Methods for Exploiting
Piecewise Linear Structure in Convex Optimization
Tyler B. Johnson
University of Washington, Seattle
[email protected]
Carlos Guestrin
University of Washington, Seattle
[email protected]
Abstract
We develop methods for rapidly identifying important components of a convex
optimization problem for the purpose of achieving fast convergence times. By
considering a novel problem formulation, the minimization of a sum of piecewise
functions, we describe a principled and general mechanism for exploiting piecewise linear structure in convex optimization. This result leads to a theoretically
justified working set algorithm and a novel screening test, which generalize and
improve upon many prior results on exploiting structure in convex optimization.
In empirical comparisons, we study the scalability of our methods. We find that
screening scales surprisingly poorly with the size of the problem, while our working
set algorithm convincingly outperforms alternative approaches.
1 Introduction
Scalable optimization methods are critical for many machine learning applications. Due to tractable
properties of convexity, many optimization tasks are formulated as convex problems, many of which
exhibit useful structure at their solutions. For example, when training a support vector machine, the
optimal model is uninfluenced by easy-to-classify training instances. For sparse regression problems,
the optimal model makes predictions using a subset of features, ignoring its remaining inputs.
In these examples and others, the problem's 'structure' can be exploited to perform optimization
efficiently. Specifically, given the important components of a problem (for example the relevant
training examples or features) we could instead optimize a simpler objective that results in the same
solution. In practice, since the important components are unknown prior to optimization, we focus on
methods that rapidly discover the relevant components as progress is made toward convergence.
One principled method for exploiting structure in optimization is screening, a technique that identifies
components of a problem guaranteed to be irrelevant to the solution. First proposed by [1], screening
rules have been derived for many objectives in recent years. These approaches are specialized to
particular objectives, so screening tests do not readily translate between optimization tasks. Prior
works have separately considered screening irrelevant features [1–8], training examples [9, 10], or
constraints [11]. No screening test applies to all of these applications.
Working set algorithms are a second approach to exploiting structure in optimization. By minimizing
a sequence of simplified objectives, working set algorithms quickly converge to the problem?s global
solution. Perhaps the most prominent working set algorithms for machine learning are those of the
LIBLINEAR library [12]. As is common with working set approaches, there is little theoretical
understanding of these algorithms. Recently a working set algorithm with some theoretical guarantees
was proposed [11]. This work fundamentally relies on the objective being a constrained function,
however, making it unclear how to use this algorithm for other problems with structure.
The purpose of this work is to both unify and improve upon prior ideas for exploiting structure in
convex optimization. We begin by formalizing the concept of 'structure' using a novel problem
formulation: the minimization of a sum of many piecewise functions. Each piecewise function is
defined by multiple simpler subfunctions, at least one of which we assume to be linear. With this
formulation, exploiting structure amounts to selectively replacing piecewise terms in the objective
with corresponding linear subfunctions. The resulting objective can be considerably simpler to solve.
Using our piecewise formulation, we first present a general theoretical result on exploiting structure
in optimization. This result guarantees quantifiable progress toward a problem's global solution by
minimizing a simplified objective. We apply this result to derive a new working set algorithm that
compares favorably to [11] in that (i) our algorithm results from a minimax optimization of new
bounds, and (ii) our algorithm is not limited to constrained objectives. Later, we derive a state-of-the-art screening test by applying the same initial theoretical result. Compared to prior screening
tests, our screening result is more effective at simplifying the objective function. Moreover, unlike
previous screening results, our screening test applies to a broad class of objectives.
We include empirical evaluations that compare the scalability of screening and working set methods
on real-world problems. While many screening tests have been proposed for large-scale optimization,
we have not seen the scalability of screening studied in prior literature. Surprisingly, although our
screening test significantly improves upon many prior results, we find that screening scales poorly as
the size of the problem increases. In fact, in many cases, screening has negligible effect on overall
convergence times. In contrast, our working set algorithm improves convergence times considerably
in a number of cases. This result suggests that compared to screening, working set algorithms are
significantly more useful for scaling optimization to large problems.
2 Piecewise linear optimization framework
We consider optimization problems of the form

    minimize_{x ∈ R^n}  f(x) := ψ(x) + Σ_{i=1}^m φ_i(x),          (P)

where ψ is γ-strongly convex, and each φ_i is convex and piecewise; for each φ_i, we assume a function
π_i : R^n → {1, 2, . . . , p_i} and convex subfunctions φ_i^1, . . . , φ_i^{p_i} such that for all x ∈ R^n, we have

    φ_i(x) = φ_i^{π_i(x)}(x).

As will later become clear, we focus on instances of (P) for which many of the subfunctions φ_i^k are
linear. We denote by X_i^k the subset of R^n corresponding to the kth piecewise subdomain of φ_i:

    X_i^k := {x : π_i(x) = k}.
The purpose of this work is to develop efficient and principled methods for solving (P) by exploiting
the piecewise structure of f . Our approach is based on the following observation:
Proposition 2.1 (Exploiting piecewise structure at x*). Let x* be the minimizer of f. For each
i ∈ [m], assume knowledge of π_i(x*) and whether x* ∈ int(X_i^{π_i(x*)}). Define

    φ̄_i = φ_i^{π_i(x*)}   if x* ∈ int(X_i^{π_i(x*)}),
    φ̄_i = φ_i             otherwise,

where int(·) denotes the interior of a set. Then x* is also the solution to

    minimize_{x ∈ R^n}  f*(x) := ψ(x) + Σ_{i=1}^m φ̄_i(x).          (P*)

In words, Proposition 2.1 states that if x* does not lie on the boundary of the subdomain X_i^{π_i(x*)},
then replacing φ_i with the subfunction φ_i^{π_i(x*)} in f does not affect the minimizer of f.
Despite having identical solutions, solving (P*) can require far less computation than solving (P).
This is especially true when many φ̄_i are linear, since the sum of linear functions is also linear. More
formally, consider a set W* ⊆ [m] such that for all i ∉ W*, φ̄_i is linear, meaning φ̄_i(x) = ⟨ā_i, x⟩ + b̄_i
for some ā_i and b̄_i. Defining ā = Σ_{i∉W*} ā_i and b̄ = Σ_{i∉W*} b̄_i, then (P*) is equivalent to

    minimize_{x ∈ R^n}  f**(x) := ψ(x) + ⟨ā, x⟩ + b̄ + Σ_{i∈W*} φ̄_i(x).          (P**)

That is, (P) has been reduced from a problem with m piecewise functions to a problem of size |W*|.
Since often |W*| ≪ m, solving (P**) can be tremendously simpler than solving (P). This scenario is
quite common in machine learning applications. Some important examples include:
• Piecewise loss minimization: φ_i is a piecewise loss with at least one linear subfunction.
• Constrained optimization: φ_i takes value 0 for a subset of R^n and +∞ otherwise.
• Optimization with sparsity-inducing penalties: ℓ1-regularized regression, group lasso, fused
  lasso, etc., are instances of (P) via duality [13].
We include elaboration on these examples in Appendix A.
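To make the piecewise structure concrete, consider a hinge-loss term φ_i(x) = C(1 − b_i⟨a_i, x⟩)_+, an instance of the first category above: it has p_i = 2 subfunctions, both linear, and X_i^1 is the half-space {x : b_i⟨a_i, x⟩ ≥ 1}. A small numpy sketch (our own illustration):

```python
import numpy as np

C = 1.0

def pi_i(x, a_i, b_i):
    """Index of the active subdomain: 1 if the hinge is inactive, 2 otherwise."""
    return 1 if b_i * a_i.dot(x) >= 1 else 2

def phi_i(x, a_i, b_i):
    if pi_i(x, a_i, b_i) == 1:
        return 0.0                         # linear subfunction phi_i^1 (zero)
    return C * (1 - b_i * a_i.dot(x))      # linear subfunction phi_i^2

x = np.zeros(3)
print(phi_i(x, np.array([1.0, 0.0, 0.0]), +1))  # 1.0: hinge active at x = 0
```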
3 Theoretical results
We have seen that solving (P*) can be more efficient than solving (P). However, since W* is unknown
prior to optimization, solving (P*) is impractical. Instead, we can hope to design algorithms that
rapidly learn W*. In this section, we propose principled methods for achieving this goal.
3.1 A general mechanism for exploiting piecewise linear structure
In this section, we focus on the consequences of minimizing the function

    f′(x) := ψ(x) + Σ_{i=1}^m φ_i′(x),

where φ_i′ ∈ {φ_i} ∪ {φ_i^1, . . . , φ_i^{p_i}}. That is, φ_i′ is either the original piecewise function φ_i or one of
its subfunctions φ_i^k. With (P*) unknown, it is natural to consider this more general class of objectives
(in the case that φ_i′ = φ̄_i for all i, we see f′ is the objective function of (P*)). The goal of this section
is to establish choices of f′ such that by minimizing f′, we can make progress toward minimizing f.
We later introduce working set and screening methods based on this result.
To guide the choice of f′, we assume points x0 ∈ R^n, y0 ∈ dom(f), where x0 minimizes a
γ-strongly convex function f0 that lower bounds f. The point y0 represents an existing approximation
of x*, while x0 can be viewed as a second approximation related to a point in (P)'s dual space. Since
f0 lower bounds f and x0 minimizes f0, note that f0(x0) ≤ f0(x*) ≤ f(x*). Using this fact, we
quantify the suboptimality of x0 and y0 in terms of the suboptimality gap

    Δ0 := f(y0) − f0(x0) ≥ f(y0) − f(x*).          (1)

Importantly, we consider choices of f′ such that by minimizing f′, we can form points (x′, y′) that
improve upon the existing approximations (x0, y0) in terms of the suboptimality gap. Specifically,
we define x′ as the minimizer of f′, while y′ is a point on the segment [y0, x′] (to be defined precisely
later). Our result in this section applies to choices of f′ that satisfy three natural requirements:
R1. Tight in a neighborhood of y0: For a closed set S with y0 ∈ int(S), f′(x) = f(x) for all x ∈ S.
R2. Lower bound on f: For all x, we have f′(x) ≤ f(x).
R3. Upper bound on f0: For all x, we have f′(x) ≥ f0(x).
Each of these requirements serves a specific purpose. After solving x′ := argmin_x f′(x), R1 enables
a backtracking operation to obtain a point y′ such that f(y′) < f(y0) (assuming y0 ≠ x*). We
define y′ as the point on the segment (y0, x′] that is closest to x′ while remaining in the set S:

    β′ := max {β ∈ (0, 1] : βx′ + (1 − β)y0 ∈ S},    y′ := β′x′ + (1 − β′)y0.          (2)

Since (i) f′ is convex, (ii) x′ minimizes f′, and (iii) y0 ∈ int(S), it follows that f(y′) ≤ f(y0).
Applying R2 leads to the new suboptimality gap

    Δ′ := f(y′) − f′(x′) ≥ f(y′) − f(x*).          (3)

R2 is also a natural requirement since we are interested in the scenario that many φ_i′ are linear, in
which case (i) φ_i′ lower bounds φ_i as a result of convexity, and (ii) the resulting f′ likely can be
minimized efficiently. Finally, R3 is useful for ensuring f′(x′) ≥ f0(x′) ≥ f0(x0). It follows that
Δ′ ≤ Δ0. Moreover, this improvement in suboptimality gap can be quantified as follows:
Lemma 3.1 (Guaranteed suboptimality gap progress, proven in Appendix B). Consider points
x0 ∈ R^n, y0 ∈ dom(f) such that x0 minimizes a γ-strongly convex function f0 that lower bounds f.
For any function f′ that satisfies R1, R2, and R3, let x′ be the minimizer of f′, and define β′ and y′
via backtracking as in (2). Then defining suboptimality gaps Δ0 and Δ′ as in (1) and (3), we have

    Δ′ ≤ (1 − β′) [ Δ0 − ((1 + β′)/(2β′²)) γ min_{z ∉ int(S)} ‖z − (β′x0 + y0)/(1 + β′)‖²
                        − (γβ′/(2(1 + β′))) ‖x0 − y0‖² ].
The primary significance of Lemma 3.1 is the bound's relatively simple dependence on S. We next
design working set and screening methods that choose S to optimize this bound.
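For intuition, when S is a ball {x : ‖x − c‖ ≤ τ}, the backtracking coefficient β′ in (2) has a closed form as the larger root of a quadratic in β; a sketch (our own, assuming y0 ∈ int(S)):

```python
import numpy as np

def backtrack_beta(x_new, y0, c, tau):
    """Largest beta in (0, 1] with beta*x_new + (1 - beta)*y0 inside the ball
    S = {x : ||x - c|| <= tau}. Solves ||beta*d + e||^2 <= tau^2 for the largest
    beta, where d = x_new - y0 and e = y0 - c; y0 in int(S) makes it real."""
    d, e = x_new - y0, y0 - c
    a, b, c0 = d.dot(d), d.dot(e), e.dot(e) - tau ** 2
    if a == 0.0:
        return 1.0                                  # x_new == y0: no movement
    beta = (-b + np.sqrt(b ** 2 - a * c0)) / a      # larger quadratic root
    return float(min(beta, 1.0))
```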
Algorithm 1 PW-BLITZ
initialize y0 ∈ dom(f)
# Initialize x0 by minimizing a simple lower bound on f:
∀i ∈ [m], φ_{i,0}′(x) := φ_i(y0) + ⟨g_i, x − y0⟩, where g_i ∈ ∂φ_i(y0)
x0 ← argmin_x f_0′(x) := ψ(x) + Σ_{i=1}^m φ_{i,0}′(x)
for t = 1, . . . , T until x_T = y_T do
  # Form subproblem:
  Select ξ_t ∈ [0, 1/2]
  c_t ← ξ_t x_{t−1} + (1 − ξ_t)y_{t−1}
  Select threshold τ_t > ξ_t‖x_{t−1} − y_{t−1}‖
  S_t := {x : ‖x − c_t‖ ≤ τ_t}
  for i = 1, . . . , m do
    k ← π_i(y_{t−1})
    if (C1 and C2 and C3) then φ_{i,t}′ := φ_i^k else φ_{i,t}′ := φ_i
  # Solve subproblem:
  x_t ← argmin_x f_t′(x) := ψ(x) + Σ_{i=1}^m φ_{i,t}′(x)
  # Backtrack:
  α_t ← argmin_{α ∈ (0,1]} f(αx_t + (1 − α)y_{t−1})
  y_t ← α_t x_t + (1 − α_t)y_{t−1}
return y_T
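The following toy transcription of Algorithm 1 (our own sketch for a 1-D instance with hinge terms; C2 holds automatically for hinge pieces, C3 is omitted for brevity, and the subproblem and backtracking steps use crude grid searches) illustrates the overall control flow:

```python
import numpy as np

# Toy 1-D instance of (P): psi(x) = 0.5*(x - 3)^2 (gamma = 1) and
# phi_i(x) = max(0, a_i*x - b_i); both subfunctions of each phi_i are linear.
a = np.array([1.0, -1.0, 2.0])
b = np.array([1.0, 5.0, 10.0])
GRID = np.linspace(-20.0, 20.0, 40001)

def f(x):
    return 0.5 * (x - 3.0) ** 2 + np.maximum(0.0, a * x - b).sum()

def solve_subproblem(keep, y):
    """Minimize psi + kept phi_i + linear pieces phi_i^{pi_i(y)} of the rest."""
    def obj(x):
        val = 0.5 * (x - 3.0) ** 2
        for i in range(len(a)):
            if keep[i]:
                val += max(0.0, a[i] * x - b[i])   # keep phi_i piecewise
            elif a[i] * y - b[i] > 0:
                val += a[i] * x - b[i]             # active linear piece at y
            # else: active piece at y is the zero function
        return val
    return GRID[np.argmin([obj(x) for x in GRID])]

y = 0.0
x = solve_subproblem(np.zeros(len(a), dtype=bool), y)  # x0: all terms linearized
for t in range(1, 25):
    xi = 0.0                                   # extremal choice from Theorem 3.3
    tau = 0.5 * abs(x - y) + 1e-3
    c = xi * x + (1 - xi) * y                  # S_t = [c - tau, c + tau]
    kinks = b / a                              # boundary between each phi_i's pieces
    keep = np.abs(kinks - c) <= tau            # C1 fails: keep phi_i piecewise
    x = solve_subproblem(keep, y)
    alphas = np.linspace(1e-3, 1.0, 200)       # crude backtracking line search
    y = min((al * x + (1 - al) * y for al in alphas), key=f)
    if abs(x - y) < 1e-6:
        break
print(round(y, 3))                             # ~2.0, the minimizer of f
```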
3.2 Piecewise working set algorithm
Lemma 3.1 suggests an iterative algorithm that, at each iteration t, minimizes a modified objective
f_t′(x) := ψ(x) + Σ_{i=1}^m φ_{i,t}′(x), where φ_{i,t}′ ∈ {φ_i} ∪ {φ_i^1, . . . , φ_i^{p_i}}. To guide the choice of each
φ_{i,t}′, our algorithm considers previous iterates x_{t−1} and y_{t−1}, where x_{t−1} minimizes f_{t−1}′. For all
i ∈ [m], k = π_i(y_{t−1}), we define φ_{i,t}′ = φ_i^k if the following three conditions are satisfied:

C1. Tight in the neighborhood of y_{t−1}: We have S_t ⊆ X_i^k (implying φ_i(x) = φ_i^k(x) for all x ∈ S_t).
C2. Lower bound on φ_i: For all x, we have φ_i^k(x) ≤ φ_i(x).
C3. Upper bound on φ_{i,t−1}′ in the neighborhood of x_{t−1}: For all x ∈ R^n and g_i ∈ ∂φ_{i,t−1}′(x_{t−1}),
we have φ_i^k(x) ≥ φ_{i,t−1}′(x_{t−1}) + ⟨g_i, x − x_{t−1}⟩.
If any of the above conditions is unmet, then we let φ_{i,t}′ = φ_i. As detailed in Appendix C, this
choice of φ_{i,t}′ ensures f_t′ satisfies conditions analogous to R1, R2, and R3 for Lemma 3.1.
After determining f_t′, the algorithm proceeds by solving x_t ← argmin_x f_t′(x). We then set
y_t ← α_t x_t + (1 − α_t)y_{t−1}, where α_t is chosen via backtracking. Lemma 3.1 implies the suboptimality gap Δ_t := f(y_t) − f_t′(x_t) decreases with t until x_T = y_T, at which point Δ_T = 0 and
x_T and y_T solve (P). Defined in Algorithm 1, we call this algorithm 'PW-BLITZ' as it extends the
BLITZ algorithm for constrained problems from [11] to a broader class of piecewise objectives.
An important consideration of Algorithm 1 is the choice of S_t. If S_t is large, C1 is easily violated,
meaning φ_{i,t}′ = φ_i for many i. This implies f_t′ is difficult to minimize. In contrast, if S_t is small,
then φ_{i,t}′ is potentially linear for many i. In this case, f_t′ is simpler to minimize, but Δ_t may be large.
Interestingly, conditioned on oracle knowledge of β_t := max {β ∈ (0, 1] : βx_t + (1 − β)y_{t−1} ∈ S_t},
we can derive an optimal S_t according to Lemma 3.1 subject to a volume constraint vol(S_t) ≤ V:

    S_t* := argmax_{S : vol(S)≤V}  min_{z ∉ int(S)} ‖z − (β_t x_{t−1} + y_{t−1})/(1 + β_t)‖.

S_t* is a ball with center (β_t x_{t−1} + y_{t−1})/(1 + β_t). Of course, this result cannot be used in practice
directly, since β_t is unknown when choosing S_t. Motivated by this result, Algorithm 1 instead defines
S_t as a ball with radius τ_t and a similar center c_t := ξ_t x_{t−1} + (1 − ξ_t)y_{t−1} for some ξ_t ∈ [0, 1/2].
By choosing S_t in this manner, we can quantify the amount of progress Algorithm 1 makes at iteration
t. Our first theorem lower bounds the amount of progress during iteration t of Algorithm 1 for the
case in which ξ_t happens to be chosen optimally, that is, S_t is a ball with center (β_t x_{t−1} + y_{t−1})/(1 + β_t).

Theorem 3.2 (Convergence progress with optimal ξ_t). Let Δ_{t−1} and Δ_t be the suboptimality gaps
after iterations t − 1 and t of Algorithm 1, and suppose that ξ_t = β_t(1 + β_t)^{−1}. Then

    Δ_t ≤ Δ_{t−1} + (γ/2)τ_t² − (3/2)(γτ_t²Δ_{t−1}²)^{1/3}.

Since the optimal ξ_t is unknown when choosing S_t, our second theorem characterizes the worst-case
performance of extremal choices of ξ_t (the cases ξ_t = 0 and ξ_t = 1/2).

Theorem 3.3 (Convergence progress with suboptimal ξ_t). Let Δ_{t−1} and Δ_t be the suboptimality
gaps after iterations t − 1 and t of Algorithm 1, and suppose that ξ_t = 0. Then

    Δ_t ≤ Δ_{t−1} + (γ/2)τ_t² − (2γτ_t²Δ_{t−1})^{1/2}.

Alternatively, suppose that ξ_t = 1/2, and define d_t := ‖x_{t−1} − y_{t−1}‖. Then

    Δ_t ≤ Δ_{t−1} + (γ/2)(τ_t − d_t/2)² − (3/2)(γ(τ_t − d_t/2)²Δ_{t−1}²)^{1/3}.

These results are proven in Appendices D and E. Note that it is often desirable to choose τ_t such that
(γ/2)τ_t² is significantly less than Δ_{t−1}. (In the alternative case, the subproblem objective f_t′ may be no
simpler than f. One could choose τ_t such that Δ_t = 0, for example, but as we will see in §3.3, we are
only performing screening in this scenario.) Assuming (γ/2)τ_t² is small in relation to Δ_{t−1}, the ability to
choose ξ_t is advantageous in terms of worst-case bounds if one manages to select ξ_t ≈ β_t(1 + β_t)^{−1}.
At the same time, Theorem 3.3 suggests that Algorithm 1 is robust to the choice of ξ_t; the algorithm
makes progress toward convergence even with worst-case choices of this parameter.
Practical considerations We make several notes about using Algorithm 1 in practice. Since
subproblem solvers are iterative, it is important to only compute x_t approximately. In Appendix F,
we include a modified version of Lemma 3.1 that considers this case. This result suggests terminating
subproblem t when f_t′(x_t) − min_x f_t′(x) ≤ εΔ_{t−1} for some ε ∈ (0, 1). Here ε trades off the amount
of progress resulting from solving subproblem t with the time dedicated to solving this subproblem.
To choose ξ_t, we find it practical to initialize ξ_0 = 0 and let ξ_t = α_{t−1}(1 + α_{t−1})^{−1} for t > 0. This
roughly approximates the optimal choice ξ_t = β_t(1 + β_t)^{−1}, since α_t can be viewed as a worst-case
version of β_t, and α_t often changes gradually with t. Selecting τ_t is problem dependent. By letting
τ_t = ξ_t‖x_{t−1} − y_{t−1}‖ + cΔ_{t−1}^{1/2} for a small c > 0, Algorithm 1 converges linearly in t. It can also
be beneficial to choose τ_t in other ways, for example choosing τ_t so that subproblem t fits in memory.
It is also important to recognize the relative amount of time required for each stage of Algorithm 1.
When forming subproblem t, the time-consuming step is checking condition C1. In the most common
scenarios, that X_i^k is a half-space or ball, this condition is testable in O(n) time. However, for
arbitrary regions, this condition could be difficult to test. The time required for solving subproblem
t is clearly application dependent, but we note it can be helpful to select subproblem termination
criteria to balance time usage between stages of the algorithm. The backtracking stage is a 1D convex
problem that at most requires evaluating f a logarithmic number of times. Simpler backtracking
approaches are available for many objectives. It is also not necessary to perform exact backtracking.
Relation to BLITZ algorithm Algorithm 1 is related to the BLITZ algorithm [11]. BLITZ applies
only to constrained problems, however, while Algorithm 1 applies to a more general class of piecewise
objectives. In Appendix G, we elaborate on Algorithm 1's connection to BLITZ and other algorithms.
3.3 Piecewise screening test
Lemma 3.1 can also be used to simplify the objective f in such a way that the minimizer x* is
unchanged. Recall Lemma 3.1 assumes a function f′ and set S for which f′(x) = f(x) for all x ∈ S.
The idea of this section is to select the smallest region S such that in Lemma 3.1, Δ′ must equal 0
(according to the lemma). In this case, the minimizer of f′ is equal to the minimizer of f, even
though f′ is potentially much simpler to minimize. This results in the following screening test:
Theorem 3.4 (Piecewise screening test, proven in Appendix H). Consider any x0, y0 ∈ R^n such
that x0 minimizes a γ-strongly convex function f0 that lower bounds f. Define the suboptimality gap
Δ0 := f(y0) − f0(x0) as well as the point c0 := (x0 + y0)/2. Then for any i ∈ [m] and k = π_i(y0), if

    S := { x : ‖x − c0‖ ≤ √((1/γ)Δ0 − (1/4)‖x0 − y0‖²) } ⊆ int(X_i^k),

then x* ∈ int(X_i^k). This implies φ_i may be replaced with φ_i^k in (P) without affecting x*.
Theorem 3.4 applies to general X_i^k, and testing if S ⊆ int(X_i^k) may be difficult. Fortunately, X_i^k
often is (or is a superset of) a simple region that makes applying Theorem 3.4 simple.
Corollary 3.5 (Piecewise screening test for half-space X_i^k). Suppose that X_i^k ⊇ {x : ⟨a_i, x⟩ ≤ b_i}
for some a_i ∈ R^n, b_i ∈ R. Define x0, y0, Δ0, and c0 as in Theorem 3.4. Then x* ∈ int(X_i^k) if

    (b_i − ⟨a_i, c0⟩)/‖a_i‖ > √((1/γ)Δ0 − (1/4)‖x0 − y0‖²).

Corollary 3.6 (Piecewise screening test for ball X_i^k). Suppose that X_i^k ⊇ {x : ‖x − a_i‖ ≤ b_i} for
some a_i ∈ R^n, b_i ∈ R_{>0}. Define x0, y0, Δ0, and c0 as in Theorem 3.4. Then x* ∈ int(X_i^k) if

    b_i − ‖a_i − c0‖ > √((1/γ)Δ0 − (1/4)‖x0 − y0‖²).
Corollary 3.5 applies to piecewise loss minimization (for SVMs, discarding examples that are not
marginal support vectors), ℓ1-regularized learning (discarding irrelevant features), and optimization
with linear constraints (discarding superfluous constraints). Applications of Corollary 3.6 include
group lasso and many constrained objectives. In order to obtain the point x0, it is usually practical to
choose f0 as the sum of ψ and a first-order lower bound on Σ_{i=1}^m φ_i. In this case, computing x0 is as
simple as finding the conjugate of ψ. We illustrate this idea with an SVM example in Appendix I.
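As an illustration of Corollary 3.5 in the SVM setting (our own sketch; each hinge term's zero subfunction has subdomain {x : b_i⟨a_i, x⟩ ≥ 1}, a half-space, and γ = 1 because ψ(x) = ½‖x‖²; the caller supplies the gap Δ0 = f(y0) − f0(x0)):

```python
import numpy as np

def screen_svm(A, labels, x0, y0, delta0, gamma=1.0):
    """Corollary 3.5 specialized to hinge terms C*(1 - b_i<a_i, x>)_+.

    Example i is screened (guaranteed b_i<a_i, x*> > 1, so its loss term can be
    replaced by the zero function) when the ball S of Theorem 3.4 lies inside
    the half-space {x : b_i<a_i, x> >= 1}.
    """
    c0 = 0.5 * (x0 + y0)
    radius = np.sqrt(max(delta0 / gamma - 0.25 * np.sum((x0 - y0) ** 2), 0.0))
    margins = labels * (A @ c0)                      # b_i <a_i, c0>
    dist = (margins - 1.0) / np.linalg.norm(A, axis=1)
    return dist > radius                             # True -> discard example i

# toy usage with made-up points:
A = np.array([[2.0, 0.0], [0.1, 0.1]])
mask = screen_svm(A, np.array([1.0, -1.0]), np.ones(2), np.ones(2), delta0=0.5)
```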
Since Δ0 decreases over the course of an iterative algorithm, Theorem 3.4 is 'adaptive', meaning
it increases in effectiveness as progress is made toward convergence. In contrast, most screening
tests are 'nonadaptive'. Nonadaptive screening tests depend on knowledge of an exact solution to a
related problem, which is disadvantageous, since (i) solving a related problem exactly is generally
computationally expensive, and (ii) the screening test can only be applied prior to optimization.
Relation to existing screening tests Theorem 3.4 generalizes and improves upon many existing
screening tests. We summarize Theorem 3.4's relation to previous results below. Unlike Theorem 3.4,
existing tests typically apply to only one or two objectives. Elaboration is included in Appendix J.
• Adaptive tests for sparse optimization: Recently, [6], [7], and [8] considered adaptive screening
  tests for several sparse optimization problems, including ℓ1-regularized learning and group
  lasso. These tests rely on knowledge of primal and dual points (analogous to x0 and y0), but
  the tests are not as effective (nor as general) as Theorem 3.4.
• Adaptive tests for constrained optimization: [11] considered screening with primal-dual pairs
  for constrained optimization problems. The resulting test is a more general version (applying to
  more objectives) of [6], [7], and [8]. Thus, Theorem 3.4 improves upon [11] as well.
• Nonadaptive tests for degree 1 homogeneous loss minimization: [10] considered screening for
  ℓ2-regularized learning with hinge and ℓ1 loss functions. This is a special non-adaptive case of
  Theorem 3.4, which requires solving the problem with greater regularization prior to screening.
• Nonadaptive tests for sparse optimization: Some tests, such as [4] for the lasso, may screen
  components that Theorem 3.4 does not eliminate. In Appendix J, we show how Theorem 3.4
  can be modified to generalize [4], but this change increases the time needed for screening. In
  practice, we were unable to overcome this drawback to speed up iterative algorithms.
Relation to working set algorithm Theorem 3.4 is closely related to Algorithm 1. In particular,
our screening test can be viewed as a working set algorithm that converges in one iteration. In the
context of Algorithm 1, this amounts to choosing ξ1 = 1/2 and τ1 = √((1/γ)Δ0 − (1/4)‖x0 − y0‖²).
It is important to understand that it is usually not desirable that a working set algorithm converges in
one iteration. Since screening rules do not make errors, these methods simplify the objective by only
a modest amount. In many cases, screening may fail to simplify the objective in any meaningful way.
In the following section, we consider real-world scenarios to demonstrate these points.
Figure 1: Group lasso convergence comparison. While screening is marginally useful for the
problem with only 100 groups, screening becomes ineffective as m increases. The working set
algorithm convincingly outperforms dual coordinate descent in all cases.
4 Comparing the scalability of screening and working set methods
This section compares the scalability of our working set and screening approaches. We consider
two popular instances of (P): group lasso and linear SVMs. For each problem, we examine the
performance of our working set algorithm and screening rule as m increases. This is an important
comparison, as we have not seen such scalability experiments in prior works on screening.
We implemented dual coordinate ascent (DCA) to solve each instance of (P). DCA is known to be
simple and fast, and there are no parameters to tune. We compare DCA to three alternatives:
1. DCA + screening: After every five DCA epochs we apply screening. 'Piecewise screening'
   refers to Theorem 3.4. For group lasso, we also implement 'gap screening' [7].
2. DCA + working sets: Implementation of Algorithm 1. DCA is used to solve each subproblem.
3. DCA + working sets + screening: Algorithm 1 with Theorem 3.4 applied after each iteration.
Group lasso comparisons   We define the group lasso objective as
$$g_{\mathrm{GL}}(\omega) := \tfrac{1}{2}\|A\omega - b\|^2 + \lambda \sum_{i=1}^{m} \|\omega_{G_i}\|_2 .$$
$A \in \mathbb{R}^{n\times q}$ is a design matrix, and $b \in \mathbb{R}^n$ is a labels vector. $\lambda > 0$ is a regularization parameter, and
$G_1, \ldots, G_m$ are disjoint sets of feature indices such that $\cup_{i=1}^{m} G_i = [q]$. Denote a minimizer of $g_{\mathrm{GL}}$ by
$\omega^\star$. For large $\lambda$, groups of elements, $\omega^\star_{G_i}$, have value 0 for many $G_i$. While $g_{\mathrm{GL}}$ is not directly an
instance of (P), the dual of $g_{\mathrm{GL}}$ is strongly concave with $m$ constraints (and thus an instance of (P)).
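For reference, a small NumPy sketch of this objective and of the block soft-thresholding operator behind the group sparsity described above (names and signatures are our own, not from the experiments' code):

```python
import numpy as np

def group_lasso_objective(w, A, b, lam, groups):
    """g_GL(w) = 0.5 * ||A w - b||^2 + lam * sum_i ||w_{G_i}||_2,
    where groups is a list of index arrays G_1, ..., G_m."""
    resid = A @ w - b
    return 0.5 * resid @ resid + lam * sum(np.linalg.norm(w[G]) for G in groups)

def group_soft_threshold(w, tau, groups):
    """Proximal operator of tau * sum_i ||w_{G_i}||_2: shrinks each group and
    zeroes it out entirely when its norm falls below tau, which is how whole
    groups (here, whole trees) drop out of the optimal model."""
    out = w.copy()
    for G in groups:
        nrm = np.linalg.norm(w[G])
        out[G] = 0.0 if nrm <= tau else (1.0 - tau / nrm) * w[G]
    return out
```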
We consider an instance of $g_{\mathrm{GL}}$ to perform feature selection for an insurance claim prediction task.¹
Given n = 250,000 training instances, we learned an ensemble of 1600 decision trees. To make
predictions more efficiently, we use group lasso to reduce the number of trees in the model. The
resulting problem has m = 1600 groups and q = 28,733 features. To evaluate the dependence of
the algorithms on m, we form smaller problems by uniformly subsampling 100 and 400 groups. For
each problem we set $\lambda$ so that exactly 5% of groups have nonzero weight in the optimal model.
Figure 1 contains results of this experiment. Our metrics include the relative suboptimality of the
current iterate as well as the agreement of this iterate's nonzero groups with those of the optimal
solution in terms of precision (all algorithms had high recall). This second metric is arguably more
important, since the task is feature selection. Our results illustrate that while screening is marginally
helpful when m is small, our working set method is more effective when scaling to large problems.
¹ https://www.kaggle.com/c/ClaimPredictionChallenge
[Figure 2 residue: three panels, (a) m = 10^4, (b) m = 10^5, (c) m = 10^6; the top row plots relative suboptimality (f − f*)/f* against time (s) for DCA, DCA + piecewise screening, DCA + working sets, and DCA + working sets + piecewise screening; the bottom row shows heat maps of the fraction of examples screened against the number of epochs and C/C0.]
Figure 2: SVM convergence comparison. (above) Relative suboptimality vs. time. (below) Heat
map depicting fraction of examples screened by Theorem 3.4 when used in conjunction with dual
coordinate ascent. y-axis is the number of epochs completed; x-axis is the tuning parameter C.
C0 is the largest value of C for which each element of the dual solution takes value C. Darker
regions indicate more successful screening. The vertical line indicates the choice of C that minimizes
validation loss; this is also the choice of C for the above plots. As the number of examples increases,
screening becomes progressively less effective near the desirable choice of C.
SVM comparisons   We define the linear SVM objective as
$$f_{\mathrm{SVM}}(x) := \tfrac{1}{2}\|x\|^2 + C\sum_{i=1}^{m} (1 - b_i\langle a_i, x\rangle)_+ .$$
Here $C$ is a tuning parameter, while $a_i \in \mathbb{R}^n$, $b_i \in \{-1, +1\}$ represent the $i$th training instance. We
train an SVM model on the Higgs boson dataset.² This dataset was generated by a team of particle
physicists. The classification task is to determine whether an event corresponds to the Higgs boson.
In order to learn an accurate model, we performed feature engineering on this dataset, resulting in
8010 features. In this experiment, we consider subsets of examples with size m = 10⁴, 10⁵, and 10⁶.
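A minimal sketch of plain dual coordinate ascent for this objective (the baseline solver; the actual experiments use a tuned implementation in the spirit of LIBLINEAR [12]):

```python
import numpy as np

def svm_dca(A, b, C, epochs=10):
    """Dual coordinate ascent for f_SVM. Rows of A are the instances a_i,
    b holds labels in {-1, +1}. Maintains the primal iterate
    x = sum_i alpha_i * b_i * a_i in sync with the dual variables alpha."""
    m, n = A.shape
    alpha = np.zeros(m)                            # dual variables in [0, C]
    x = np.zeros(n)
    sqnorms = np.einsum('ij,ij->i', A, A)          # ||a_i||^2
    for _ in range(epochs):
        for i in np.random.permutation(m):
            if sqnorms[i] == 0.0:
                continue
            g = 1.0 - b[i] * (A[i] @ x)            # coordinate-wise dual gradient
            new = np.clip(alpha[i] + g / sqnorms[i], 0.0, C)
            x += (new - alpha[i]) * b[i] * A[i]    # keep primal in sync
            alpha[i] = new
    return x, alpha
```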
Results of this experiment are shown in Figure 2. For this problem, we plot the relative suboptimality
in terms of objective value. We also include a heat map that shows screening's effectiveness for
different values of C. Similar to the group lasso results, the utility of screening decreases as m
increases. Meanwhile, working sets significantly improve convergence times, regardless of m.
5 Discussion
Starting from a broadly applicable problem formulation, we have derived principled and unified
methods for exploiting piecewise structure in convex optimization. In particular, we have introduced
a versatile working set algorithm along with a theoretical understanding of the progress this algorithm
makes with each iteration. Using the same analysis, we have also proposed a screening rule that
improves upon many prior screening results as well as enables screening for many new objectives.
Our empirical results highlight a significant disadvantage of using screening: unless a good approximate solution is already known, screening is often ineffective. This is perhaps understandable, since
screening rules operate under the constraint that erroneous simplifications are forbidden. Working set
algorithms are not subject to this constraint. Instead, working set algorithms achieve fast convergence
times by aggressively simplifying the objective function, correcting for mistakes only as needed.
² https://archive.ics.uci.edu/ml/datasets/HIGGS
Acknowledgments
We thank Hyunsu Cho, Christopher Aicher, and Tianqi Chen for their helpful feedback as well as
assistance preparing datasets used in our experiments. This work is supported in part by PECASE
N00014-13-1-0023, NSF IIS-1258741, and the TerraSwarm Research Center 00008169.
References
[1] L. E. Ghaoui, V. Viallon, and T. Rabbani. Safe feature elimination for the lasso and sparse supervised learning problems. Pacific Journal of Optimization, 8(4):667–698, 2012.
[2] Z. J. Xiang and P. J. Ramadge. Fast lasso screening tests based on correlations. In IEEE International Conference on Acoustics, Speech, and Signal Processing, 2012.
[3] R. Tibshirani, J. Bien, J. Friedman, T. Hastie, N. Simon, J. Taylor, and R. J. Tibshirani. Strong rules for discarding predictors in lasso-type problems. Journal of the Royal Statistical Society, Series B, 74(2):245–266, 2012.
[4] J. Liu, Z. Zhao, J. Wang, and J. Ye. Safe screening with variational inequalities and its application to lasso. In International Conference on Machine Learning, 2014.
[5] J. Wang, P. Wonka, and J. Ye. Lasso screening rules via dual polytope projection. Journal of Machine Learning Research, 16(May):1063–1101, 2015.
[6] O. Fercoq, A. Gramfort, and J. Salmon. Mind the duality gap: safer rules for the lasso. In International Conference on Machine Learning, 2015.
[7] E. Ndiaye, O. Fercoq, A. Gramfort, and J. Salmon. GAP safe screening rules for sparse multi-task and multi-class models. In Advances in Neural Information Processing Systems 28, 2015.
[8] E. Ndiaye, O. Fercoq, A. Gramfort, and J. Salmon. Gap safe screening rules for sparse-group lasso. Technical Report arXiv:1602.06225, 2016.
[9] K. Ogawa, Y. Suzuki, and I. Takeuchi. Safe screening of non-support vectors in pathwise SVM computation. In International Conference on Machine Learning, 2013.
[10] J. Wang, P. Wonka, and J. Ye. Scaling SVM and least absolute deviations via exact data reduction. In International Conference on Machine Learning, 2014.
[11] T. B. Johnson and C. Guestrin. Blitz: a principled meta-algorithm for scaling sparse optimization. In International Conference on Machine Learning, 2015.
[12] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874, 2008.
[13] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Optimization with sparsity-inducing penalties. Foundations and Trends in Machine Learning, 4(1):1–106, 2012.
Minimizing Quadratic Functions in Constant Time
Kohei Hayashi
National Institute of Advanced Industrial Science and Technology
[email protected]
Yuichi Yoshida
National Institute of Informatics and Preferred Infrastructure, Inc.
[email protected]
Abstract
A sampling-based optimization method for quadratic functions is proposed.
Our method approximately solves the following $n$-dimensional quadratic minimization problem in constant time, which is independent of $n$: $z^\star = \min_{v\in\mathbb{R}^n} \langle v, Av\rangle + n\langle v, \mathrm{diag}(d)v\rangle + n\langle b, v\rangle$, where $A \in \mathbb{R}^{n\times n}$ is a matrix and
$d, b \in \mathbb{R}^n$ are vectors. Our theoretical analysis specifies the number of samples
$k(\epsilon, \delta)$ such that the approximated solution $z$ satisfies $|z - z^\star| = O(\epsilon n^2)$ with
probability $1 - \delta$. The empirical performance (accuracy and runtime) is positively
confirmed by numerical experiments.
1 Introduction
A quadratic function is one of the most important function classes in machine learning, statistics,
and data mining. Many fundamental problems such as linear regression, k-means clustering, principal component analysis, support vector machines, and kernel methods [14] can be formulated as a
minimization problem of a quadratic function.
In some applications, it is sufficient to compute the minimum value of a quadratic function rather
than its solution. For example, Yamada et al. [21] proposed an efficient method for estimating the
Pearson divergence, which provides useful information about data, such as the density ratio [18].
They formulated the estimation problem as the minimization of a squared loss and showed that the
Pearson divergence can be estimated from the minimum value. The least-squares mutual information [19] is another example that can be computed in a similar manner.
Despite its importance, the minimization of a quadratic function has the issue of scalability. Let
$n \in \mathbb{N}$ be the number of variables (the "dimension" of the problem). In general, such a minimization
problem can be solved by quadratic programming (QP), which requires $\mathrm{poly}(n)$ time. If the problem
is convex and there are no constraints, then the problem is reduced to solving a system of linear
equations, which requires $O(n^3)$ time. Both methods easily become infeasible, even for medium-scale problems, say, $n > 10000$.
Although several techniques have been proposed to accelerate quadratic function minimization, they
require at least linear time in n. This is problematic when handling problems with an ultrahigh
dimension, for which even linear time is slow or prohibitive. For example, stochastic gradient
descent (SGD) is an optimization method that is widely used for large-scale problems. A nice
property of this method is that, if the objective function is strongly convex, it outputs a point that
is sufficiently close to an optimal solution after a constant number of iterations [5]. Nevertheless,
in each iteration, we need at least O(n) time to access the variables. Another technique is low-rank approximation such as Nyström's method [20]. The underlying idea is the approximation
of the problem by using a low-rank matrix, and by doing so, we can drastically reduce the time
complexity. However, we still need to compute the matrix–vector product of size $n$, which requires
$O(n)$ time. Clarkson et al. [7] proposed sublinear-time algorithms for special cases of quadratic
function minimization. However, it is "sublinear" with respect to the number of pairwise interactions
of the variables, which is $O(n^2)$, and their algorithms require $O(n\log^c n)$ time for some $c \ge 1$.
Our contributions: Let $A \in \mathbb{R}^{n\times n}$ be a matrix and $d, b \in \mathbb{R}^n$ be vectors. Then, we consider the
following quadratic problem:
$$\min_{v\in\mathbb{R}^n}\; p_{n,A,d,b}(v), \quad \text{where } p_{n,A,d,b}(v) = \langle v, Av\rangle + n\langle v, \mathrm{diag}(d)v\rangle + n\langle b, v\rangle. \tag{1}$$
Here, $\langle\cdot,\cdot\rangle$ denotes the inner product and $\mathrm{diag}(d)$ denotes the matrix whose diagonal entries are
specified by d. Note that a constant term can be included in (1); however, it is irrelevant when
optimizing (1), and hence we ignore it.
Let $z^\star \in \mathbb{R}$ be the optimal value of (1) and let $\epsilon, \delta \in (0,1)$ be parameters. Then, the main goal of
this paper is the computation of $z$ with $|z - z^\star| = O(\epsilon n^2)$ with probability at least $1 - \delta$ in constant
time, that is, independent of $n$. Here, we assume the real RAM model [6], in which we can perform
basic algebraic operations on real numbers in one step. Moreover, we assume that we have query
accesses to $A$, $b$, and $d$, with which we can obtain an entry of them by specifying an index. We note
that $z^\star$ is typically $\Theta(n^2)$ because $\langle v, Av\rangle$ consists of $\Theta(n^2)$ terms, and $\langle v, \mathrm{diag}(d)v\rangle$ and $\langle b, v\rangle$
consist of $\Theta(n)$ terms. Hence, we can regard the error of $\Theta(\epsilon n^2)$ as an error of $\Theta(\epsilon)$ for each term,
which is reasonably small in typical situations.
Let $\cdot|_S$ be an operator that extracts a submatrix (or subvector) specified by an index set $S \subseteq \mathbb{N}$; then,
our algorithm is defined as follows, where the parameter $k := k(\epsilon, \delta)$ will be determined later.
Algorithm 1
Input: An integer $n \in \mathbb{N}$, query accesses to the matrix $A \in \mathbb{R}^{n\times n}$ and to the vectors $d, b \in \mathbb{R}^n$,
and $\epsilon, \delta > 0$
1: $S \leftarrow$ a sequence of $k = k(\epsilon, \delta)$ indices independently and uniformly sampled from
$\{1, 2, \ldots, n\}$.
2: return $\frac{n^2}{k^2}\min_{v\in\mathbb{R}^k} p_{k,A|_S,d|_S,b|_S}(v)$.
In other words, we sample a constant number of indices from the set {1, 2, . . . , n}, and then solve
the problem (1) restricted to these indices. Note that the number of queries and the time complexity
are $O(k^2)$ and $\mathrm{poly}(k)$, respectively. In order to analyze the difference between the optimal values
of $p_{n,A,d,b}$ and $p_{k,A|_S,d|_S,b|_S}$, we want to measure the "distances" between $A$ and $A|_S$, $d$ and $d|_S$,
and $b$ and $b|_S$, and want to show that they are small. To this end, we exploit graph limit theory, initiated by
Lovász and Szegedy [11] (refer to [10] for a book), in which we measure the distance between two
graphs on different numbers of vertices by considering continuous versions. Although the primary
interest of graph limit theory is graphs, we can extend the argument to analyze matrices and vectors.
Using synthetic and real settings, we demonstrate that our method is orders of magnitude faster than
standard polynomial-time algorithms and that the accuracy of our method is sufficiently high.
Related work: Several constant-time approximation algorithms are known for combinatorial optimization problems such as the max cut problem on dense graphs [8, 13], constraint satisfaction
problems [1, 22], and the vertex cover problem [15, 16, 25]. However, as far as we know, no such
algorithm is known for continuous optimization problems.
A related notion is property testing [9, 17], which aims to design constant-time algorithms that
distinguish inputs satisfying some predetermined property from inputs that are "far" from satisfying
it. Characterizations of constant-time testable properties are known for the properties of a dense
graph [2, 3] and the affine-invariant properties of a function on a finite field [23, 24].
2 Preliminaries
For an integer $n$, let $[n]$ denote the set $\{1, 2, \ldots, n\}$. The notation $a = b \pm c$ means that $b - c \le a \le b + c$. In this paper, we only consider functions and sets that are measurable.
Let $S = (x_1, \ldots, x_k)$ be a sequence of $k$ indices in $[n]$. For a vector $v \in \mathbb{R}^n$, we denote the
restriction of $v$ to $S$ by $v|_S \in \mathbb{R}^k$; that is, $(v|_S)_i = v_{x_i}$ for every $i \in [k]$. For the matrix $A \in \mathbb{R}^{n\times n}$,
we denote the restriction of $A$ to $S$ by $A|_S \in \mathbb{R}^{k\times k}$; that is, $(A|_S)_{ij} = A_{x_i x_j}$ for every $i, j \in [k]$.
2.1 Dikernels
Following [12], we call a (measurable) function $f : [0,1]^2 \to \mathbb{R}$ a dikernel. A dikernel is a generalization of a graphon [11], which is symmetric and whose range is bounded in $[0,1]$. We can regard a
dikernel as a matrix whose index is specified by a real value in $[0,1]$. We stress that the term dikernel
has nothing to do with kernel methods.
For two functions $f, g : [0,1] \to \mathbb{R}$, we define their inner product as $\langle f, g\rangle = \int_0^1 f(x)g(x)\,dx$. For a
dikernel $W : [0,1]^2 \to \mathbb{R}$ and a function $f : [0,1] \to \mathbb{R}$, we define a function $Wf : [0,1] \to \mathbb{R}$ as
$(Wf)(x) = \langle W(x,\cdot), f\rangle$.
Let $W : [0,1]^2 \to \mathbb{R}$ be a dikernel. The $L_p$ norm $\|W\|_p$ for $p \ge 1$ and the cut norm $\|W\|_\square$ of $W$ are defined as
$$\|W\|_p = \Bigl(\int_0^1\!\!\int_0^1 |W(x,y)|^p\,dx\,dy\Bigr)^{1/p} \quad\text{and}\quad \|W\|_\square = \sup_{S,T\subseteq[0,1]}\Bigl|\int_S\!\int_T W(x,y)\,dx\,dy\Bigr|,$$
respectively, where the supremum is over all pairs of subsets. We note that these norms satisfy the
triangle inequalities and $\|W\|_\square \le \|W\|_1$.
Let $\lambda$ be a Lebesgue measure. A map $\pi : [0,1] \to [0,1]$ is said to be measure-preserving, if
the pre-image $\pi^{-1}(X)$ is measurable for every measurable set $X$, and $\lambda(\pi^{-1}(X)) = \lambda(X)$. A
measure-preserving bijection is a measure-preserving map whose inverse map exists and is also
measurable (and then also measure-preserving). For a measure-preserving bijection $\pi : [0,1] \to [0,1]$ and a dikernel $W : [0,1]^2 \to \mathbb{R}$, we define the dikernel $\pi(W) : [0,1]^2 \to \mathbb{R}$ as $\pi(W)(x,y) = W(\pi(x), \pi(y))$.
2.2 Matrices and Dikernels
Let $W : [0,1]^2 \to \mathbb{R}$ be a dikernel and $S = (x_1, \ldots, x_k)$ be a sequence of elements in $[0,1]$. Then,
we define the matrix $W|_S \in \mathbb{R}^{k\times k}$ so that $(W|_S)_{ij} = W(x_i, x_j)$.
We can construct the dikernel $\widehat{A} : [0,1]^2 \to \mathbb{R}$ from the matrix $A \in \mathbb{R}^{n\times n}$ as follows. Let $I_1 = [0, \frac{1}{n}]$, $I_2 = (\frac{1}{n}, \frac{2}{n}]$, \ldots, $I_n = (\frac{n-1}{n}, 1]$. For $x \in [0,1]$, we define $i_n(x) \in [n]$ as the unique
integer such that $x \in I_{i_n(x)}$. Then, we define $\widehat{A}(x,y) = A_{i_n(x)\,i_n(y)}$. The main motivation for creating a
dikernel from a matrix is that, by doing so, we can define the distance between two matrices $A$ and
$B$ of different sizes via the cut norm, that is, $\|\widehat{A} - \widehat{B}\|_\square$.
We note that the distribution of $A|_S$, where $S$ is a sequence of $k$ indices that are uniformly and
independently sampled from $[n]$, exactly matches the distribution of $\widehat{A}|_S$, where $S$ is a sequence of
$k$ elements that are uniformly and independently sampled from $[0,1]$.
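A tiny sketch making this identification concrete (0-indexed, so $i_n(x)$ becomes the floor of $nx$):

```python
import numpy as np

def restrict(A, S):
    """A|_S: the k x k matrix with (A|_S)_{ij} = A[S[i], S[j]]."""
    return A[np.ix_(S, S)]

rng = np.random.default_rng(0)
n, k = 1000, 5
A = rng.standard_normal((n, n))
x = rng.random(k)                 # k uniform elements of [0, 1)
S = (n * x).astype(int)           # i_n(x): the equipartition cell containing x
print(restrict(A, S))             # distributed exactly as the dikernel A restricted to x
```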
3 Sampling Theorem and the Properties of the Cut Norm
In this section, we prove the following theorem, which states that, given a sequence of dikernels
$W^1, \ldots, W^T : [0,1]^2 \to [-L, L]$, we can obtain a good approximation to them by sampling a
sequence of a small number of elements in $[0,1]$. Formally, we prove the following:
Theorem 3.1. Let $W^1, \ldots, W^T : [0,1]^2 \to [-L, L]$ be dikernels. Let $S$ be a sequence of $k$
elements uniformly and independently sampled from $[0,1]$. Then, with a probability of at least
$1 - \exp(-\Omega(kT/\log_2 k))$, there exists a measure-preserving bijection $\pi : [0,1] \to [0,1]$ such that,
for any functions $f, g : [0,1] \to [-K, K]$ and $t \in [T]$, we have
$$|\langle f, W^t g\rangle - \langle f, \pi(\widehat{W^t|_S})g\rangle| = O\bigl(LK^2\sqrt{T/\log k}\bigr).$$
We start with the following lemma, which states that, if a dikernel $W : [0,1]^2 \to \mathbb{R}$ has a small cut
norm, then $\langle f, Wf\rangle$ is negligible no matter what $f$ is. Hence, we can focus on the cut norm when
proving Theorem 3.1.
Lemma 3.2. Let $\epsilon \ge 0$ and $W : [0,1]^2 \to \mathbb{R}$ be a dikernel with $\|W\|_\square \le \epsilon$. Then, for any functions
$f, g : [0,1] \to [-K, K]$, we have $|\langle f, Wg\rangle| \le \epsilon K^2$.
Proof. For $\tau \in \mathbb{R}$ and the function $h : [0,1] \to \mathbb{R}$, let $L_\tau(h) := \{x \in [0,1] \mid h(x) = \tau\}$ be the level
set of $h$ at $\tau$. For $f' = f/K$ and $g' = g/K$, we have
$$|\langle f, Wg\rangle| = K^2|\langle f', Wg'\rangle| = K^2\Bigl|\int_{-1}^{1}\!\!\int_{-1}^{1}\tau_1\tau_2\int_{L_{\tau_1}(f')}\int_{L_{\tau_2}(g')} W(x,y)\,dx\,dy\,d\tau_1\,d\tau_2\Bigr|$$
$$\le K^2\int_{-1}^{1}\!\!\int_{-1}^{1}|\tau_1||\tau_2|\Bigl|\int_{L_{\tau_1}(f')}\int_{L_{\tau_2}(g')} W(x,y)\,dx\,dy\Bigr|\,d\tau_1\,d\tau_2 \le \epsilon K^2\int_{-1}^{1}\!\!\int_{-1}^{1}|\tau_1||\tau_2|\,d\tau_1\,d\tau_2 = \epsilon K^2.$$
To introduce the next technical tool, we need several definitions. We say that the partition $\mathcal{Q}$ is a
refinement of the partition $\mathcal{P} = (V_1, \ldots, V_p)$ if $\mathcal{Q}$ is obtained by splitting each set $V_i$ into one or more
parts. The partition $\mathcal{P} = (V_1, \ldots, V_p)$ of the interval $[0,1]$ is called an equipartition if $\lambda(V_i) = 1/p$
for every $i \in [p]$. For the dikernel $W : [0,1]^2 \to \mathbb{R}$ and the equipartition $\mathcal{P} = (V_1, \ldots, V_p)$ of $[0,1]$,
we define $W_\mathcal{P} : [0,1]^2 \to \mathbb{R}$ as the function obtained by averaging each $V_i \times V_j$ for $i, j \in [p]$. More
formally, we define
$$W_\mathcal{P}(x,y) = \frac{1}{\lambda(V_i)\lambda(V_j)}\int_{V_i\times V_j} W(x',y')\,dx'\,dy' = p^2\int_{V_i\times V_j} W(x',y')\,dx'\,dy',$$
where $i$ and $j$ are unique indices such that $x \in V_i$ and $y \in V_j$, respectively.
The following lemma states that any function $W : [0,1]^2 \to \mathbb{R}$ can be well approximated by $W_\mathcal{P}$
for the equipartition $\mathcal{P}$ into a small number of parts.
Lemma 3.3 (Weak regularity lemma for functions on $[0,1]^2$ [8]). Let $\mathcal{P}$ be an equipartition of $[0,1]$
into $k$ sets. Then, for any dikernel $W : [0,1]^2 \to \mathbb{R}$ and $\epsilon > 0$, there exists a refinement $\mathcal{Q}$ of $\mathcal{P}$ with
$|\mathcal{Q}| \le k2^{C/\epsilon^2}$ for some constant $C > 0$ such that
$$\|W - W_\mathcal{Q}\|_\square \le \epsilon\|W\|_2.$$
Corollary 3.4. Let $W^1, \ldots, W^T : [0,1]^2 \to \mathbb{R}$ be dikernels. Then, for any $\epsilon > 0$, there exists an
equipartition $\mathcal{P}$ into $|\mathcal{P}| \le 2^{CT/\epsilon^2}$ parts for some constant $C > 0$ such that, for every $t \in [T]$,
$$\|W^t - W^t_\mathcal{P}\|_\square \le \epsilon\|W^t\|_2.$$
Proof. Let $\mathcal{P}^0$ be a trivial partition, that is, a partition consisting of a single part $[0,1]$. Then, for each
$t \in [T]$, we iteratively apply Lemma 3.3 with $\mathcal{P}^{t-1}$, $W^t$, and $\epsilon$, and we obtain the partition $\mathcal{P}^t$ into
at most $|\mathcal{P}^{t-1}|2^{C/\epsilon^2}$ parts such that $\|W^t - W^t_{\mathcal{P}^t}\|_\square \le \epsilon\|W^t\|_2$. Since $\mathcal{P}^t$ is a refinement of $\mathcal{P}^{t-1}$,
we have $\|W^i - W^i_{\mathcal{P}^t}\|_\square \le \|W^i - W^i_{\mathcal{P}^{t-1}}\|_\square$ for every $i \in [t-1]$. Then, $\mathcal{P}^T$ satisfies the desired
property with $|\mathcal{P}^T| \le (2^{C/\epsilon^2})^T = 2^{CT/\epsilon^2}$.
As long as $S$ is sufficiently large, $W$ and $\widehat{W|_S}$ are close in the cut norm:
Lemma 3.5 ((4.15) of [4]). Let $W : [0,1]^2 \to [-L, L]$ be a dikernel and $S$ be a sequence of $k$
elements uniformly and independently sampled from $[0,1]$. Then, we have
$$-\frac{2L}{k} \le \mathbb{E}_S\|\widehat{W|_S}\|_\square - \|W\|_\square < \frac{8L}{k^{1/4}}.$$
Finally, we need the following concentration inequality.
Lemma 3.6 (Azuma's inequality). Let $(\Omega, \mathcal{A}, P)$ be a probability space, $k$ be a positive integer, and
$C > 0$. Let $z = (z_1, \ldots, z_k)$, where $z_1, \ldots, z_k$ are independent random variables, and $z_i$ takes
values in some measure space $(\Omega_i, \mathcal{A}_i)$. Let $f : \Omega_1 \times \cdots \times \Omega_k \to \mathbb{R}$ be a function. Suppose that
$|f(x) - f(y)| \le C$ whenever $x$ and $y$ only differ in one coordinate. Then
$$\Pr\bigl[|f(z) - \mathbb{E}_z[f(z)]| > \lambda C\bigr] < 2e^{-\lambda^2/2k}.$$
Now we prove the counterpart of Theorem 3.1 for the cut norm.
Lemma 3.7. Let $W^1, \ldots, W^T : [0,1]^2 \to [-L, L]$ be dikernels. Let $S$ be a sequence of $k$
elements uniformly and independently sampled from $[0,1]$. Then, with a probability of at least
$1 - \exp(-\Omega(kT/\log_2 k))$, there exists a measure-preserving bijection $\pi : [0,1] \to [0,1]$ such that,
for every $t \in [T]$, we have
$$\|W^t - \pi(\widehat{W^t|_S})\|_\square = O\bigl(L\sqrt{T/\log_2 k}\bigr).$$
Proof. First, we bound the expectations and then prove their concentrations. We apply Corollary 3.4
to $W^1, \ldots, W^T$ and $\epsilon$, and let $\mathcal{P} = (V_1, \ldots, V_p)$ be the obtained partition with $p \le 2^{CT/\epsilon^2}$ parts
such that
$$\|W^t - W^t_\mathcal{P}\|_\square \le \epsilon L$$
for every $t \in [T]$. By Lemma 3.5, for every $t \in [T]$, we have
$$\mathbb{E}_S\|\widehat{W^t_\mathcal{P}|_S} - \widehat{W^t|_S}\|_\square = \mathbb{E}_S\|\widehat{(W^t_\mathcal{P} - W^t)|_S}\|_\square \le \epsilon L + \frac{8L}{k^{1/4}}.$$
Then, for any measure-preserving bijection $\pi : [0,1] \to [0,1]$ and $t \in [T]$, we have
$$\mathbb{E}_S\|W^t - \pi(\widehat{W^t|_S})\|_\square \le \|W^t - W^t_\mathcal{P}\|_\square + \mathbb{E}_S\|W^t_\mathcal{P} - \pi(\widehat{W^t_\mathcal{P}|_S})\|_\square + \mathbb{E}_S\|\pi(\widehat{W^t_\mathcal{P}|_S}) - \pi(\widehat{W^t|_S})\|_\square$$
$$\le 2\epsilon L + \frac{8L}{k^{1/4}} + \mathbb{E}_S\|W^t_\mathcal{P} - \pi(\widehat{W^t_\mathcal{P}|_S})\|_\square. \tag{2}$$
Thus, we are left with the problem of sampling from $\mathcal{P}$. Let $S = \{x_1, \ldots, x_k\}$ be a sequence of
independent random variables that are uniformly distributed in $[0,1]$, and let $Z_i$ be the number of
points $x_j$ that fall into the set $V_i$. It is easy to compute that
$$\mathbb{E}[Z_i] = \frac{k}{p} \quad\text{and}\quad \mathrm{Var}[Z_i] = \frac{k}{p} - \frac{k}{p^2} < \frac{k}{p}.$$
The partition $\mathcal{P}'$ of $[0,1]$ is constructed into the sets $V'_1, \ldots, V'_p$ such that $\lambda(V'_i) = Z_i/k$ and $\lambda(V_i \cap V'_i) = \min(1/p, Z_i/k)$. For each $t \in [T]$, we construct the dikernel $\overline{W}^t : [0,1]^2 \to \mathbb{R}$ such that the
value of $\overline{W}^t$ on $V'_i \times V'_j$ is the same as the value of $W^t_\mathcal{P}$ on $V_i \times V_j$. Then, $\overline{W}^t$ agrees with $W^t_\mathcal{P}$ on
the set $Q = \bigcup_{i,j\in[p]}(V_i \cap V'_i)\times(V_j \cap V'_j)$, and there exists a bijection $\pi$ such that $\pi(\widehat{W^t_\mathcal{P}|_S}) = \overline{W}^t$
for each $t \in [T]$. Then, for every $t \in [T]$, we have
$$\|W^t_\mathcal{P} - \pi(\widehat{W^t_\mathcal{P}|_S})\|_\square = \|W^t_\mathcal{P} - \overline{W}^t\|_\square \le \|W^t_\mathcal{P} - \overline{W}^t\|_1 \le 2L(1 - \lambda(Q)) = 2L\Bigl(1 - \Bigl(\sum_{i\in[p]}\min\Bigl(\frac{1}{p}, \frac{Z_i}{k}\Bigr)\Bigr)^{2}\Bigr)$$
$$\le 4L\Bigl(1 - \sum_{i\in[p]}\min\Bigl(\frac{1}{p}, \frac{Z_i}{k}\Bigr)\Bigr) = 2L\sum_{i\in[p]}\Bigl|\frac{1}{p} - \frac{Z_i}{k}\Bigr| \le 2L\sqrt{p}\,\Bigl(\sum_{i\in[p]}\Bigl(\frac{1}{p} - \frac{Z_i}{k}\Bigr)^{2}\Bigr)^{1/2}$$
(the middle equality holds because the positive and negative parts of $\sum_{i\in[p]}(1/p - Z_i/k) = 0$ cancel),
which we rewrite as
$$\|W^t_\mathcal{P} - \pi(\widehat{W^t_\mathcal{P}|_S})\|_\square^2 \le 4L^2 p\sum_{i\in[p]}\Bigl(\frac{1}{p} - \frac{Z_i}{k}\Bigr)^{2}.$$
The expectation of the right-hand side is $(4L^2p/k^2)\sum_{i\in[p]}\mathrm{Var}(Z_i) < 4L^2p/k$. By the Cauchy-Schwartz inequality, $\mathbb{E}\|W^t_\mathcal{P} - \pi(\widehat{W^t_\mathcal{P}|_S})\|_\square \le 2L\sqrt{p/k}$.
Inserting this into (2), we obtain
$$\mathbb{E}\|W^t - \pi(\widehat{W^t|_S})\|_\square \le 2\epsilon L + \frac{8L}{k^{1/4}} + 2L\sqrt{\frac{p}{k}} \le 2\epsilon L + \frac{8L}{k^{1/4}} + \frac{2L}{k^{1/2}}\sqrt{2^{CT/\epsilon^2}}.$$
Choosing $\epsilon = \sqrt{CT/\log_2 k^{1/4}} = \sqrt{4CT/\log_2 k}$, we obtain the upper bound
$$\mathbb{E}\|W^t - \pi(\widehat{W^t|_S})\|_\square \le 2L\sqrt{\frac{4CT}{\log_2 k}} + \frac{8L}{k^{1/4}} + \frac{2L}{k^{1/4}} = O\Bigl(L\sqrt{\frac{T}{\log_2 k}}\Bigr).$$
Observing that $\|W^t - \pi(\widehat{W^t|_S})\|_\square$ changes by at most $O(L/k)$ if one element in $S$ changes, we
apply Azuma's inequality with $\lambda = k\sqrt{T/\log_2 k}$ and the union bound to complete the proof.
The proof of Theorem 3.1 immediately follows from Lemmas 3.2 and 3.7.
4 Analysis of Algorithm 1
In this section, we analyze Algorithm 1. Because we want to use dikernels for the analysis, we
introduce a continuous version of $p_{n,A,d,b}$ (recall (1)). The real-valued function $P_{n,A,d,b}$ on the
functions $f : [0,1] \to \mathbb{R}$ is defined as
$$P_{n,A,d,b}(f) = \langle f, \widehat{A}f\rangle + \langle f^2, \widehat{d\mathbf{1}^\top}\mathbf{1}\rangle + \langle f, \widehat{b\mathbf{1}^\top}\mathbf{1}\rangle,$$
where $f^2 : [0,1] \to \mathbb{R}$ is a function such that $f^2(x) = f(x)^2$ for every $x \in [0,1]$ and $\mathbf{1} : [0,1] \to \mathbb{R}$
is the constant function that has a value of 1 everywhere. The following lemma states that the
minimizations of $p_{n,A,d,b}$ and $P_{n,A,d,b}$ are equivalent:
Lemma 4.1. Let $A \in \mathbb{R}^{n\times n}$ be a matrix and $d, b \in \mathbb{R}^n$ be vectors. Then, we have
$$\min_{v\in[-K,K]^n} p_{n,A,d,b}(v) = n^2\cdot\inf_{f:[0,1]\to[-K,K]} P_{n,A,d,b}(f)$$
for any $K > 0$.
Proof. First, we show that $n^2\cdot\inf_{f:[0,1]\to[-K,K]} P_{n,A,d,b}(f) \le \min_{v\in[-K,K]^n} p_{n,A,d,b}(v)$. Given
a vector $v \in [-K,K]^n$, we define $f : [0,1] \to [-K,K]$ as $f(x) = v_{i_n(x)}$. Then,
$$\langle f, \widehat{A}f\rangle = \sum_{i,j\in[n]}\int_{I_i}\int_{I_j} A_{ij}f(x)f(y)\,dx\,dy = \frac{1}{n^2}\sum_{i,j\in[n]} A_{ij}v_iv_j = \frac{1}{n^2}\langle v, Av\rangle,$$
$$\langle f^2, \widehat{d\mathbf{1}^\top}\mathbf{1}\rangle = \sum_{i,j\in[n]}\int_{I_i}\int_{I_j} d_i f(x)^2\,dx\,dy = \sum_{i\in[n]}\int_{I_i} d_i f(x)^2\,dx = \frac{1}{n}\sum_{i\in[n]} d_i v_i^2 = \frac{1}{n}\langle v, \mathrm{diag}(d)v\rangle,$$
$$\langle f, \widehat{b\mathbf{1}^\top}\mathbf{1}\rangle = \sum_{i,j\in[n]}\int_{I_i}\int_{I_j} b_i f(x)\,dx\,dy = \sum_{i\in[n]}\int_{I_i} b_i f(x)\,dx = \frac{1}{n}\sum_{i\in[n]} b_i v_i = \frac{1}{n}\langle v, b\rangle.$$
Then, we have $n^2 P_{n,A,d,b}(f) \le p_{n,A,d,b}(v)$.
Next, we show that $\min_{v\in[-K,K]^n} p_{n,A,d,b}(v) \le n^2\cdot\inf_{f:[0,1]\to[-K,K]} P_{n,A,d,b}(f)$. Let $f :
[0,1] \to [-K,K]$ be a measurable function. Then, for $x \in [0,1]$, we have
$$\frac{\partial P_{n,A,d,b}(f(x))}{\partial f(x)} = \sum_{i\in[n]}\int_{I_i} A_{i\,i_n(x)}f(y)\,dy + \sum_{j\in[n]}\int_{I_j} A_{i_n(x)\,j}f(y)\,dy + 2d_{i_n(x)}f(x) + b_{i_n(x)}.$$
Note that the form of this partial derivative only depends on $i_n(x)$; hence, in the optimal solution
$f^\star : [0,1] \to [-K,K]$, we can assume $f^\star(x) = f^\star(y)$ if $i_n(x) = i_n(y)$. In other words, $f^\star$
is constant on each of the intervals $I_1, \ldots, I_n$. For such $f^\star$, we define the vector $v \in \mathbb{R}^n$ as
$v_i = f^\star(x)$, where $x \in [0,1]$ is any element in $I_i$. Then, we have
$$\langle v, Av\rangle = \sum_{i,j\in[n]} A_{ij}v_iv_j = n^2\sum_{i,j\in[n]}\int_{I_i}\int_{I_j} A_{ij}f^\star(x)f^\star(y)\,dx\,dy = n^2\langle f^\star, \widehat{A}f^\star\rangle,$$
$$\langle v, \mathrm{diag}(d)v\rangle = \sum_{i\in[n]} d_iv_i^2 = n\sum_{i\in[n]}\int_{I_i} d_i f^\star(x)^2\,dx = n\langle (f^\star)^2, \widehat{d\mathbf{1}^\top}\mathbf{1}\rangle,$$
$$\langle v, b\rangle = \sum_{i\in[n]} b_iv_i = n\sum_{i\in[n]}\int_{I_i} b_i f^\star(x)\,dx = n\langle f^\star, \widehat{b\mathbf{1}^\top}\mathbf{1}\rangle.$$
Finally, we have $p_{n,A,d,b}(v) \le n^2 P_{n,A,d,b}(f^\star)$.
Now we show that Algorithm 1 well-approximates the optimal value of (1) in the following sense:
Theorem 4.2. Let $v^\star$ and $z^\star$ be an optimal solution and the optimal value, respectively, of problem (1). By choosing $k(\epsilon, \delta) = 2^{\Theta(1/\epsilon^2)} + \Theta(\log\frac{1}{\delta}\log\log\frac{1}{\delta})$, with a probability of at least
$1 - \delta$, a sequence $S$ of $k$ indices independently and uniformly sampled from $[n]$ satisfies the following: Let $\tilde v^\star$ and $\tilde z^\star$ be an optimal solution and the optimal value, respectively, of the problem
$\min_{v\in\mathbb{R}^k} p_{k,A|_S,d|_S,b|_S}(v)$. Then, we have
$$\frac{n^2}{k^2}\tilde z^\star = z^\star \pm \epsilon LK^2 n^2,$$
where $K = \max\{\max_{i\in[n]}|v^\star_i|, \max_{i\in[n]}|\tilde v^\star_i|\}$ and $L = \max\{\max_{i,j}|A_{ij}|, \max_i|d_i|, \max_i|b_i|\}$.
Proof. We instantiate Theorem 3.1 with $k = 2^{\Theta(1/\epsilon^2)} + \Theta(\log\frac{1}{\delta}\log\log\frac{1}{\delta})$ and the dikernels $\widehat{A}$,
$\widehat{d\mathbf{1}^\top}$, and $\widehat{b\mathbf{1}^\top}$. Then, with a probability of at least $1 - \delta$, there exists a measure-preserving bijection
$\pi : [0,1] \to [0,1]$ such that
$$\max\Bigl\{|\langle f, (\widehat{A} - \pi(\widehat{A|_S}))f\rangle|,\ |\langle f^2, (\widehat{d\mathbf{1}^\top} - \pi(\widehat{d\mathbf{1}^\top|_S}))\mathbf{1}\rangle|,\ |\langle f, (\widehat{b\mathbf{1}^\top} - \pi(\widehat{b\mathbf{1}^\top|_S}))\mathbf{1}\rangle|\Bigr\} \le \frac{\epsilon LK^2}{3}$$
for any function $f : [0,1] \to [-K, K]$. Then, we have
$$\tilde z^\star = \min_{v\in\mathbb{R}^k} p_{k,A|_S,d|_S,b|_S}(v) = \min_{v\in[-K,K]^k} p_{k,A|_S,d|_S,b|_S}(v) = k^2\cdot\inf_{f:[0,1]\to[-K,K]} P_{k,A|_S,d|_S,b|_S}(f) \quad\text{(by Lemma 4.1)}$$
$$= k^2\cdot\inf_{f:[0,1]\to[-K,K]}\Bigl[\langle f, \widehat{A}f\rangle + \langle f^2, \widehat{d\mathbf{1}^\top}\mathbf{1}\rangle + \langle f, \widehat{b\mathbf{1}^\top}\mathbf{1}\rangle + \langle f, (\pi(\widehat{A|_S}) - \widehat{A})f\rangle$$
$$\qquad\qquad + \langle f^2, (\pi(\widehat{d\mathbf{1}^\top|_S}) - \widehat{d\mathbf{1}^\top})\mathbf{1}\rangle + \langle f, (\pi(\widehat{b\mathbf{1}^\top|_S}) - \widehat{b\mathbf{1}^\top})\mathbf{1}\rangle\Bigr]$$
$$= k^2\cdot\inf_{f:[0,1]\to[-K,K]}\Bigl[\langle f, \widehat{A}f\rangle + \langle f^2, \widehat{d\mathbf{1}^\top}\mathbf{1}\rangle + \langle f, \widehat{b\mathbf{1}^\top}\mathbf{1}\rangle\Bigr] \pm \epsilon LK^2k^2$$
$$= \frac{k^2}{n^2}\cdot\min_{v\in[-K,K]^n} p_{n,A,d,b}(v) \pm \epsilon LK^2k^2 \quad\text{(by Lemma 4.1)}$$
$$= \frac{k^2}{n^2}\cdot\min_{v\in\mathbb{R}^n} p_{n,A,d,b}(v) \pm \epsilon LK^2k^2 = \frac{k^2}{n^2}z^\star \pm \epsilon LK^2k^2.$$
Rearranging the inequality, we obtain the desired result.
We can show that $K$ is bounded when $A$ is symmetric and full rank. To see this, we first note
that we can assume $A + n\,\mathrm{diag}(d)$ is positive-definite, as otherwise $p_{n,A,d,b}$ is not bounded and
the problem is uninteresting. Then, for any set $S \subseteq [n]$ of $k$ indices, $(A + n\,\mathrm{diag}(d))|_S$ is again
positive-definite because it is a principal submatrix. Hence, we have $v^\star = -(A + n\,\mathrm{diag}(d))^{-1}nb/2$
and $\tilde v^\star = -(A|_S + k\,\mathrm{diag}(d|_S))^{-1}kb|_S/2$, which means that $K$ is bounded.
5 Experiments
In this section, we demonstrate the effectiveness of our method by experiment.¹ All experiments
were conducted on an Amazon EC2 c3.8xlarge instance. Error bars indicate the standard deviations
over ten trials with different random seeds.
Numerical simulation   We investigated the actual relationships between $n$, $k$, and $\epsilon$. To this end,
we prepared synthetic data as follows. We randomly generated inputs as $A_{ij} \sim U_{[-1,1]}$, $d_i \sim U_{[0,1]}$,
and $b_i \sim U_{[-1,1]}$ for $i, j \in [n]$, where $U_{[a,b]}$ denotes the uniform distribution with the support $[a,b]$.
After that, we solved (1) by using Algorithm 1 and compared it with the exact solution obtained by
QP.² The results (Figure 1) show that the approximation errors were evenly controlled regardless of $n$,
which meets the error analysis (Theorem 4.2).
¹ The program codes are available at https://github.com/hayasick/CTOQ.
² We used GLPK (https://www.gnu.org/software/glpk/) for the QP solver.
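A usage sketch reproducing this synthetic setup (it assumes the `algorithm1` helper from the earlier sketch, and a dense solve in place of GLPK):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1000, 80
A = rng.uniform(-1, 1, (n, n))
d = rng.uniform(0, 1, n)
b = rng.uniform(-1, 1, n)
z_approx = algorithm1(lambda i, j: A[i, j], lambda i: d[i], lambda i: b[i],
                      n, k, rng)
# Exact optimum for comparison:
M = 0.5 * (A + A.T) + n * np.diag(d)
v_star = np.linalg.pinv(M) @ (-0.5 * n * b)
z_star = v_star @ A @ v_star + n * (v_star @ (d * v_star)) + n * (b @ v_star)
print(abs(z_approx - z_star) / n ** 2)   # the quantity plotted in Figure 1
```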
7
?
?
?
?
?
?
?
?
?
?
?
?
?
?
40
80
160
?
1000
?
?
10
?
20
2000
?
Table 1: Pearson divergence: runtime (second).
Nystr?om Proposed
?
500
|z ? z ?| n 2
n=200
0.10
0.05
0.00
0.10
0.05
0.00
0.10
0.05
0.00
0.10
0.05
0.00
k
20
40
80
160
20
40
80
160
n = 500
0.002
0.003
0.007
0.030
0.005
0.010
0.022
0.076
1000
0.002
0.003
0.007
0.030
0.012
0.022
0.049
0.116
2000
0.002
0.003
0.008
0.033
0.046
0.087
0.188
0.432
5000
0.002
0.003
0.008
0.035
0.274
0.513
0.942
1.972
k
Figure 1: Numerical simulation: absolute approximation error scaled by n2 .
Table 2: Pearson divergence: absolute approximation error.

            k     n = 500            1000               2000               5000
Proposed    20    0.0027 ± 0.0028    0.0012 ± 0.0012    0.0021 ± 0.0019    0.0016 ± 0.0022
            40    0.0018 ± 0.0023    0.0006 ± 0.0007    0.0012 ± 0.0011    0.0011 ± 0.0020
            80    0.0007 ± 0.0008    0.0004 ± 0.0003    0.0008 ± 0.0008    0.0007 ± 0.0017
            160   0.0003 ± 0.0003    0.0002 ± 0.0001    0.0003 ± 0.0003    0.0002 ± 0.0003
Nyström     20    0.3685 ± 0.9142    1.3006 ± 2.4504    3.1119 ± 6.1464    0.6989 ± 0.9644
            40    0.3549 ± 0.6191    0.4207 ± 0.7018    0.9838 ± 1.5422    0.3744 ± 0.6655
            80    0.0184 ± 0.0192    0.0398 ± 0.0472    0.2056 ± 0.2725    0.5705 ± 0.7918
            160   0.0143 ± 0.0209    0.0348 ± 0.0541    0.0585 ± 0.1112    0.0254 ± 0.0285
Application to kernel methods   Next, we considered the kernel approximation of the Pearson
divergence [21]. The problem is defined as follows. Suppose we have the two different data sets
$x = (x_1, \ldots, x_n) \in \mathbb{R}^n$ and $x' = (x'_1, \ldots, x'_{n'}) \in \mathbb{R}^{n'}$ where $n, n' \in \mathbb{N}$. Let $H \in \mathbb{R}^{n\times n}$
be a gram matrix such that
$$H_{l,m} = \frac{\alpha}{n}\sum_{i=1}^{n}\kappa(x_i, x_l)\kappa(x_i, x_m) + \frac{1-\alpha}{n'}\sum_{j=1}^{n'}\kappa(x'_j, x_l)\kappa(x'_j, x_m),$$
where $\kappa(\cdot,\cdot)$ is a kernel function and $\alpha \in (0,1)$ is a parameter. Also, let $h \in \mathbb{R}^n$ be a vector
such that $h_l = \frac{1}{n}\sum_{i=1}^{n}\kappa(x_i, x_l)$. Then, an estimator of the $\alpha$-relative Pearson divergence between
the distributions of $x$ and $x'$ is obtained by $-\frac{1}{2} - \min_{v\in\mathbb{R}^n}\bigl[\frac{1}{2}\langle v, Hv\rangle - \langle h, v\rangle + \frac{\lambda}{2}\langle v, v\rangle\bigr]$. Here,
$\lambda > 0$ is a regularization parameter. In this experiment, we used the Gaussian kernel $\kappa(x,y) =
\exp(-(x-y)^2/2\sigma^2)$ and set $n' = 200$ and $\alpha = 0.5$; $\sigma^2$ and $\lambda$ were chosen by 5-fold cross-validation
as suggested in [21]. We randomly generated the data sets as $x_i \sim \mathcal{N}(1, 0.5)$ for $i \in [n]$ and
$x'_j \sim \mathcal{N}(1.5, 0.5)$ for $j \in [n']$, where $\mathcal{N}(\mu, \sigma^2)$ denotes the Gaussian distribution with mean $\mu$ and
variance $\sigma^2$.
We encoded this problem into (1) by setting $A = \frac{1}{2}H$, $b = -h/n$, and $d = \frac{\lambda}{2n}\mathbf{1}_n$, where $\mathbf{1}_n$ denotes
the $n$-dimensional vector whose elements are all one. After that, given $k$, we computed the second
step of Algorithm 1 with the pseudoinverse of $A|_S + k\,\mathrm{diag}(d|_S)$. Absolute approximation errors and
runtimes were compared with Nyström's method, whose approximated rank was set to $k$. In terms of
accuracy, our method clearly outperformed Nyström's method (Table 2). In addition, the runtimes
of our method were nearly constant, whereas the runtimes of Nyström's method grew linearly in $k$
(Table 1).
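The following sketch puts the pieces together for this experiment (helper names are ours; the full-problem branch is the exact estimator, the subsampled branch is Algorithm 1 with a pseudoinverse, as described above):

```python
import numpy as np

def pe_divergence_estimate(x, xp, sigma2, lam, alpha=0.5, k=None,
                           rng=np.random.default_rng()):
    """alpha-relative Pearson divergence estimate; k=None solves the full
    n-dimensional problem, otherwise Algorithm 1 is applied with k samples."""
    n, np_ = len(x), len(xp)
    kern = lambda u, w: np.exp(-(u[:, None] - w[None, :]) ** 2 / (2 * sigma2))
    Kx, Kxp = kern(x, x), kern(xp, x)         # kappa(x_i,x_l), kappa(x'_j,x_l)
    H = (alpha / n) * Kx.T @ Kx + ((1 - alpha) / np_) * Kxp.T @ Kxp
    h = Kx.mean(axis=0)
    A, b, d = H / 2, -h / n, np.full(n, lam / (2 * n))   # encoding into (1)
    S = np.arange(n) if k is None else rng.integers(0, n, size=k)
    m = len(S)
    M = A[np.ix_(S, S)] + m * np.diag(d[S])   # A is symmetric here
    v = np.linalg.pinv(M) @ (-0.5 * m * b[S])
    z = v @ A[np.ix_(S, S)] @ v + m * (v @ (d[S] * v)) + m * (b[S] @ v)
    return -0.5 - (n ** 2 / m ** 2) * z
```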
6 Acknowledgments
We would like to thank Makoto Yamada for suggesting a motivating problem of our method. K. H. is
supported by MEXT KAKENHI 15K16055. Y. Y. is supported by MEXT Grant-in-Aid for Scientific
Research on Innovative Areas (No. 24106001), JST, CREST, Foundations of Innovative Algorithms
for Big Data, and JST, ERATO, Kawarabayashi Large Graph Project.
References
[1] N. Alon, W. F. de la Vega, R. Kannan, and M. Karpinski. Random sampling and approximation of MAX-CSP problems. In STOC, pages 232–239, 2002.
[2] N. Alon, E. Fischer, I. Newman, and A. Shapira. A combinatorial characterization of the testable graph properties: It's all about regularity. SIAM Journal on Computing, 39(1):143–167, 2009.
[3] C. Borgs, J. Chayes, L. Lovász, V. T. Sós, B. Szegedy, and K. Vesztergombi. Graph limits and parameter testing. In STOC, pages 261–270, 2006.
[4] C. Borgs, J. T. Chayes, L. Lovász, V. T. Sós, and K. Vesztergombi. Convergent sequences of dense graphs I: Subgraph frequencies, metric properties and testing. Advances in Mathematics, 219(6):1801–1851, 2008.
[5] L. Bottou. Stochastic learning. In Advanced Lectures on Machine Learning, pages 146–168. 2004.
[6] V. Brattka and P. Hertling. Feasible real random access machines. Journal of Complexity, 14(4):490–526, 1998.
[7] K. L. Clarkson, E. Hazan, and D. P. Woodruff. Sublinear optimization for machine learning. Journal of the ACM, 59(5):23:1–23:49, 2012.
[8] A. Frieze and R. Kannan. The regularity lemma and approximation schemes for dense problems. In FOCS, pages 12–20, 1996.
[9] O. Goldreich, S. Goldwasser, and D. Ron. Property testing and its connection to learning and approximation. Journal of the ACM, 45(4):653–750, 1998.
[10] L. Lovász. Large Networks and Graph Limits. American Mathematical Society, 2012.
[11] L. Lovász and B. Szegedy. Limits of dense graph sequences. Journal of Combinatorial Theory, Series B, 96(6):933–957, 2006.
[12] L. Lovász and K. Vesztergombi. Non-deterministic graph property testing. Combinatorics, Probability and Computing, 22(05):749–762, 2013.
[13] C. Mathieu and W. Schudy. Yet another algorithm for dense max cut: go greedy. In SODA, pages 176–182, 2008.
[14] K. P. Murphy. Machine Learning: A Probabilistic Perspective. The MIT Press, 2012.
[15] H. N. Nguyen and K. Onak. Constant-time approximation algorithms via local improvements. In FOCS, pages 327–336, 2008.
[16] K. Onak, D. Ron, M. Rosen, and R. Rubinfeld. A near-optimal sublinear-time algorithm for approximating the minimum vertex cover size. In SODA, pages 1123–1131, 2012.
[17] R. Rubinfeld and M. Sudan. Robust characterizations of polynomials with applications to program testing. SIAM Journal on Computing, 25(2):252–271, 1996.
[18] M. Sugiyama, T. Suzuki, and T. Kanamori. Density Ratio Estimation in Machine Learning. Cambridge University Press, 2012.
[19] T. Suzuki and M. Sugiyama. Least-squares independent component analysis. Neural Computation, 23(1):284–301, 2011.
[20] C. K. I. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In NIPS, 2001.
[21] M. Yamada, T. Suzuki, T. Kanamori, H. Hachiya, and M. Sugiyama. Relative density-ratio estimation for robust distribution comparison. In NIPS, 2011.
[22] Y. Yoshida. Optimal constant-time approximation algorithms and (unconditional) inapproximability results for every bounded-degree CSP. In STOC, pages 665–674, 2011.
[23] Y. Yoshida. A characterization of locally testable affine-invariant properties via decomposition theorems. In STOC, pages 154–163, 2014.
[24] Y. Yoshida. Gowers norm, function limits, and parameter estimation. In SODA, pages 1391–1406, 2016.
[25] Y. Yoshida, M. Yamamoto, and H. Ito. Improved constant-time approximation algorithms for maximum matchings and other optimization problems. SIAM Journal on Computing, 41(4):1074–1093, 2012.
| 6044 |@word trial:1 version:2 polynomial:2 norm:10 simulation:2 decomposition:1 sgd:1 nystr:7 series:1 nii:1 woodruff:1 ka:1 com:2 gmail:1 dx:5 yet:1 numerical:3 partition:8 predetermined:1 n0:4 greedy:1 prohibitive:1 instantiate:1 xk:3 yamada:3 infrastructure:1 provides:1 characterization:4 bijection:7 ron:2 org:1 mathematical:1 constructed:1 become:1 focs:2 consists:1 prove:4 introduce:2 manner:1 x0:4 pairwise:1 lov:6 xz:6 ekw:3 actual:1 considering:1 solver:1 spain:1 estimating:1 underlying:1 moreover:1 notation:1 bounded:5 project:1 what:1 onak:2 every:12 runtime:2 exactly:1 k2:11 scaled:1 schwartz:1 grant:1 positive:3 negligible:1 local:1 limit:6 despite:1 initiated:1 meet:1 approximately:1 ap:1 specifying:1 schudy:1 range:1 bi:9 unique:2 acknowledgment:1 testing:6 union:1 minv:6 definite:2 area:1 empirical:1 kohei:2 word:2 pre:1 shapira:1 close:2 operator:1 nb:2 glpk:2 restriction:2 measurable:6 map:3 equivalent:1 vxi:1 www:1 deterministic:1 yoshida:5 regardless:1 go:1 independently:7 convex:2 williams:1 amazon:1 splitting:1 immediately:1 prob2:1 estimator:1 proving:1 notion:1 coordinate:1 suppose:2 exact:1 programming:1 element:9 approximated:3 satisfying:2 cut:9 inserted:1 solved:2 hv:12 complexity:3 solving:1 rewrite:1 triangle:1 matchings:1 easily:1 accelerate:1 goldreich:1 kp:2 query:3 newman:1 avi:5 pearson:6 choosing:2 whose:6 encoded:1 widely:1 solve:1 valued:1 say:2 otherwise:1 statistic:1 gi:5 fischer:1 chayes:2 sequence:14 interaction:1 product:3 sudan:1 subgraph:1 scalability:1 regularity:3 r1:1 aiin:1 alon:2 ac:1 v10:1 ij:6 lowrank:1 p2:1 solves:1 indicate:1 differ:1 stochastic:2 vp0:1 jst:2 bin:1 require:2 generalization:1 preliminary:1 graphon:1 sufficiently:3 considered:1 exp:3 seed:1 hvi:1 estimation:4 outperformed:1 combinatorial:3 makoto:1 ain:1 agrees:1 tool:1 minimization:8 mit:1 clearly:1 gaussian:2 aim:1 csp:1 rather:1 pn:22 corollary:2 focus:1 kakenhi:1 improvement:1 rank:3 industrial:1 seeger:1 sense:1 typically:1 equipartition:5 i1:2 issue:1 special:1 mutual:1 logc:1 field:1 construct:2 sampling:5 runtimes:3 kw:24 nearly:1 rosen:1 randomly:2 frieze:1 national:2 divergence:6 murphy:1 consisting:1 lebesgue:1 n1:1 interest:1 mining:1 unconditional:1 kt:2 partial:1 vi0:4 yamamoto:1 desired:2 theoretical:1 instance:1 cover:2 deviation:1 subset:1 vertex:3 entry:2 uninteresting:1 uniform:1 conducted:1 motivating:1 synthetic:2 density:3 fundamental:1 ec2:1 pn0:1 siam:3 probabilistic:1 informatics:1 squared:1 again:1 book:1 creating:1 derivative:1 american:1 return:1 szegedy:3 suggesting:1 de:1 inc:1 matter:1 satisfy:1 combinatorics:1 vi:27 depends:1 later:1 doing:2 analyze:3 sup:1 start:1 hf:26 dikernels:8 observing:1 vin:1 hazan:1 contribution:1 minimize:1 square:2 om:7 accuracy:3 variance:1 vp:4 weak:1 confirmed:1 hachiya:1 whenever:1 definition:1 frequency:1 proof:7 di:7 sampled:7 kawarabayashi:1 recall:1 dt:2 improved:1 strongly:1 d:2 hand:1 o:2 scientific:1 vj0:2 counterpart:1 hence:5 regularization:1 din:1 symmetric:2 wp:4 iteratively:1 i2:1 erato:1 stress:1 complete:1 demonstrate:2 image:1 vega:1 qp:3 jp:1 wpi:2 nh:1 extend:1 approximates:1 x0n0:1 refer:1 cambridge:1 ai:2 mathematics:1 sugiyama:3 dikernel:17 access:4 showed:1 perspective:1 optimizing:1 inf:6 irrelevant:1 kwp:2 inequality:6 preserving:9 minimum:3 dxdy:7 ii:10 full:1 technical:1 faster:1 match:1 af:5 cross:1 long:1 wpt:5 controlled:1 regression:1 basic:1 expectation:2 metric:1 iteration:2 kernel:7 karpinski:1 addition:1 want:3 whereas:1 interval:2 asz:6 effectiveness:1 integer:4 call:1 near:1 
vesztergombi:3 easy:1 hb:1 xj:5 zi:8 reduce:1 idea:1 inner:2 goldwasser:1 clarkson:2 algebraic:1 useful:1 prepared:1 ten:1 locally:1 reduced:1 http:2 specifies:1 problematic:1 estimated:1 nevertheless:1 v1:4 ram:1 graph:13 inverse:1 everywhere:1 soda:3 x0j:1 k2c:1 dy:4 submatrix:2 bound:3 ct:7 gnu:1 distinguish:1 convergent:1 fold:1 quadratic:11 constraint:2 n3:1 software:1 nhb:2 speed:1 argument:1 min:7 innovative:2 rubinfeld:2 lp:1 lem:1 hl:2 restricted:1 invariant:2 pr:1 equation:1 yyoshida:1 hh:1 know:1 end:2 available:1 operation:1 apply:3 denotes:5 clustering:1 log2:7 exploit:1 testable:3 k1:2 approximating:1 society:1 objective:1 primary:1 concentration:2 diagonal:1 said:1 gradient:1 distance:3 thank:1 evenly:1 dxdyd:1 trivial:1 kannan:2 code:1 index:11 minn:1 relationship:1 ratio:3 minimizing:1 stoc:4 design:1 perform:1 upper:1 finite:1 descent:1 situation:1 grew:1 rn:17 bk:1 pair:1 subvector:1 specified:3 c3:1 z1:2 connection:1 barcelona:1 nip:3 bar:1 suggested:1 xm:2 azuma:2 program:2 max:5 vi2:2 satisfaction:1 advanced:2 scheme:1 github:1 technology:1 mathieu:1 lk:7 extract:1 nice:1 l2:2 ultrahigh:1 relative:2 loss:1 lecture:1 sublinear:4 var:2 validation:1 foundation:1 x01:1 degree:1 affine:2 sufficient:1 supported:2 infeasible:1 kanamori:2 drastically:1 side:1 aij:6 institute:2 fall:1 absolute:3 distributed:1 regard:2 dimension:2 axi:1 xn:1 xlarge:1 gram:1 kdiag:1 suzuki:3 refinement:3 nguyen:1 far:2 crest:1 ignore:1 preferred:1 supremum:1 pseudoinverse:1 b1:10 xi:5 yuichi:1 continuous:3 table:4 reasonably:1 zk:2 rearranging:1 robust:2 investigated:1 poly:2 bottou:1 diag:6 vj:9 pk:6 main:2 dense:6 linearly:1 motivation:1 big:1 n2:15 nothing:1 positively:1 x1:4 slow:1 aid:1 xl:3 ito:1 hw:1 rk:5 theorem:10 borgs:2 maxi:5 consist:1 exists:7 importance:1 magnitude:1 ez:1 inapproximability:1 hayashi:2 satisfies:3 acm:2 goal:1 formulated:2 feasible:1 change:2 included:1 typical:1 determined:1 uniformly:8 averaging:1 principal:2 lemma:16 called:1 e:6 la:1 formally:2 wq:1 support:2 mext:2 dx0:2 d1:10 handling:1 |
Learning shape correspondence with
anisotropic convolutional neural networks
Davide Boscaini¹, Jonathan Masci¹, Emanuele Rodolà¹, Michael Bronstein¹,²,³
¹ USI Lugano, Switzerland   ² Tel Aviv University, Israel   ³ Intel, Israel
[email protected]
Abstract
Convolutional neural networks have achieved extraordinary results in many computer vision and pattern recognition applications; however, their adoption in the
computer graphics and geometry processing communities is limited due to the
non-Euclidean structure of their data. In this paper, we propose Anisotropic Convolutional Neural Network (ACNN), a generalization of classical CNNs to non-Euclidean domains, where classical convolutions are replaced by projections over
a set of oriented anisotropic diffusion kernels. We use ACNNs to effectively learn
intrinsic dense correspondences between deformable shapes, a fundamental problem in geometry processing, arising in a wide variety of applications. We tested
ACNN's performance in challenging settings, achieving state-of-the-art results on
recent correspondence benchmarks.
1 Introduction
In geometry processing, computer graphics, and vision, finding intrinsic correspondence between
3D shapes affected by different transformations is one of the fundamental problems with a wide
spectrum of applications ranging from texture mapping to animation [25]. Of particular interest is
the setting in which the shapes are allowed to deform non-rigidly. Traditional hand-crafted correspondence approaches are divided into two main categories: point-wise correspondence methods
[17], which establish the matching between (a subset of) the points on two or more shapes by minimizing metric distortion, and soft correspondence methods [23], which establish a correspondence
among functions defined over the shapes, rather than the vertices themselves. Recently, the emergence of 3D sensing technology has brought the need to deal with acquisition artifacts, such as
missing parts, geometric, and topological noise, as well as matching 3D shapes in different representations, such as meshes and point clouds. With new and broader classes of artifacts, comes the
need of learning from data invariance that is otherwise impossible to model axiomatically.
In the past years, we have witnessed the emergence of learning-based approaches for 3D shape
analysis. The first attempts were aimed at learning local shape descriptors [15, 5, 27], and shape
correspondence [20]. The dramatic success of deep learning (in particular, convolutional neural
networks [8, 14]) in computer vision [13] has led to a recent keen interest in the geometry processing
and graphics communities to apply such methodologies to geometric problems [16, 24, 28, 4, 26].
Extrinsic deep learning. Many machine learning techniques successfully working on images were
tried "as is" on 3D geometric data, represented for this purpose in some way "digestible" by standard frameworks. Su et al. [24] used CNNs applied to range images obtained from multiple views
of 3D objects for retrieval and classification tasks. Wei et al. [26] used view-based representation
to find correspondence between non-rigid shapes. Wu et al. [28] used volumetric CNNs applied to
rasterized volumetric representation of 3D shapes. The main drawback of such approaches is their
treatment of geometric data as Euclidean structures. Such representations are not intrinsic, and vary
Figure 1: Illustration of the difference between extrinsic (left)
and intrinsic (right) deep learning
methods on geometric data. Intrinsic methods work on the manifold rather than its Euclidean realization and are isometry-invariant
by construction.
as the result of pose or deformation of the object. For instance, in Figure 1, the filter that responds
to features on a straight cylinder would not respond to a bent one. Achieving invariance to shape deformations, a common requirement in many applications, is extremely hard with the aforementioned
methods and requires complex models and huge training sets due to the large number of degrees of
freedom involved in describing non-rigid deformations.
Intrinsic deep learning approaches try to apply learning techniques to geometric data by generalizing the main ingredients such as convolutions to non-Euclidean domains. In an intrinsic representation, the filter is applied to some data on the surface itself, thus being invariant to deformations by
construction (see Figure 1). The first intrinsic convolutional neural network architecture (Geodesic
CNN) was presented in [16]. While producing impressive results on several shape correspondence
and retrieval benchmarks, GCNN has a number of significant drawbacks. First, the charting procedure is limited to meshes, and second, there is no guarantee that the chart is always topologically
meaningful. Another intrinsic CNN construction (Localized Spectral CNN) using an alternative
charting technique based on the windowed Fourier transform [22] was proposed in [4]. This method
is a generalization of a previous work [6] on spectral deep learning on graphs. One of the key advantages of LSCNN is that the same framework can be applied to different shape representations, in
particular, meshes and point clouds. A drawback of this approach is its memory and computation
requirements, as each window needs to be explicitly produced.
Contributions. We present Anisotropic Convolutional Neural Networks (ACNN), a method for
intrinsic deep learning on non-Euclidean domains. Though it is a generic framework that can be
used to handle different tasks, we focus here on learning correspondence between shapes. Our approach is related to two previous methods for deep learning on manifolds, GCNN [16] and ADD [5].
Compared to [5], where a learned spectral filter applied to the eigenvalues of anisotropic LaplaceBeltrami operator, we use anisotropic heat kernels as spatial weighting functions allowing to extract
a local intrinsic representation of a function defined on the manifold. Unlike ADD, our ACNN is a
convolutional neural network architecture. Compared to GCNN, our construction of the ?patch operator? is much simpler, does not depend on the injectivity radius of the manifold, and is not limited
to triangular meshes. Overall, ACNN combines all the best properties of the previous approaches
without inheriting their drawbacks. We show that the proposed framework outperforms GCNN,
ADD, and other state-of-the-art approaches on challenging correspondence benchmarks.
2 Background
We model a 3D shape as a two-dimensional compact Riemannian manifold (surface) $X$. Let $T_xX$
denote the tangent plane at $x$, modeling the surface locally as a Euclidean space. A Riemannian
metric is an inner product $\langle\cdot,\cdot\rangle_{T_xX} : T_xX \times T_xX \to \mathbb{R}$ on the tangent plane, depending smoothly
on $x$. Quantities which are expressible entirely in terms of the Riemannian metric, and therefore independent of the way the surface is embedded, are called intrinsic. Such quantities are invariant to
isometric (metric-preserving) deformations.
Heat diffusion on manifolds is governed by the heat equation, which has the most general form
$$f_t(x,t) = \mathrm{div}_X(D(x)\nabla_X f(x,t)), \tag{1}$$
with appropriate boundary conditions if necessary. Here $\nabla_X$ and $\mathrm{div}_X$ denote the intrinsic gradient
and divergence operators, and $f(x,t)$ is the temperature at point $x$ at time $t$. $D(x)$ is the thermal
conductivity tensor ($2\times 2$ matrix) applied to the intrinsic gradient in the tangent plane. This formulation allows modeling heat flow that is position- and direction-dependent (anisotropic). Andreux et
that at each point x the tangent vectors are expressed w.r.t. the orthogonal basis vm , vM of principal
curvature directions, used a thermal conductivity tensor of the form
?
?
D?? (x) = R? (x)
R>
(2)
? (x),
1
where the 2 ? 2 matrix R? (x) performs rotation of ? w.r.t. to the maximum curvature direction
vM (x), and ? > 0 is a parameter controlling the degree of anisotropy (? = 1 corresponds to the
classical isotropic case). We refer to the operator
Δ_{αθ} f(x) = −div_X (D_{αθ}(x) ∇_X f(x))

as the anisotropic Laplacian, and denote by {φ_{αθ,i}, λ_{αθ,i}}_{i≥0} its eigenfunctions and eigenvalues (computed, if applicable, with the appropriate boundary conditions) satisfying Δ_{αθ} φ_{αθ,i}(x) = λ_{αθ,i} φ_{αθ,i}(x).
Given some initial heat distribution f_0(x) = f(x, 0), the solution of the heat equation (1) at time t is obtained by applying the anisotropic heat operator H^t_{αθ} = e^{−t Δ_{αθ}} to f_0,

f(x, t) = H^t_{αθ} f_0(x) = ∫_X f_0(ξ) h_{αθt}(x, ξ) dξ,   (3)
where h_{αθt}(x, ξ) is the anisotropic heat kernel, and the above equation can be interpreted as a non-shift-invariant version of convolution. In the spectral domain, the heat kernel is expressed as

h_{αθt}(x, ξ) = Σ_{k≥0} e^{−t λ_{αθ,k}} φ_{αθ,k}(x) φ_{αθ,k}(ξ).   (4)
Appealing to the signal processing intuition, the eigenvalues play the role of 'frequencies', and e^{−tλ} acts as a low-pass filter (larger t, corresponding to longer diffusion, results in a filter with a narrower pass band). This construction was used in ADD [5] to generalize the OSD approach [15] using anisotropic heat kernels (considering the diagonal h_{αθt}(x, x) and learning a set of optimal task-specific spectral filters replacing the low-pass filters e^{−t λ_{αθ,k}}).
Discretization. In the discrete setting, the surface X is sampled at n points V = {x_1, . . . , x_n}. The points are connected by edges E and faces F, forming a manifold triangular mesh (V, E, F). To each triangle ijk ∈ F, we attach an orthonormal reference frame U_ijk = (û_M, û_m, n̂), where n̂ is the unit normal vector to the triangle and û_M, û_m ∈ R³ are the directions of principal curvature. The thermal conductivity tensor for the triangle ijk operating on tangent vectors is expressed w.r.t. U_ijk as the 3 × 3 matrix diag(α, 1, 0). [Inset figure: two triangles sharing edge ij, with edge vectors ê_hj, ê_kj, ê_hi, ê_ki, the angles α̂_ij and β̂_ij opposite the shared edge, and the rotated principal directions R_θ û_M, R_θ û_m.]
The discretization of the anisotropic Laplacian takes the form of an n × n sparse matrix L = −S⁻¹W. The mass matrix S is a diagonal matrix of area elements s_i = (1/3) Σ_{jk : ijk ∈ F} A_ijk, where A_ijk denotes the area of triangle ijk. The stiffness matrix W is composed of weights

w_ij = (1/2) ( ⟨ê_kj, ê_ki⟩_{H_α} / sin α̂_ij + ⟨ê_hj, ê_hi⟩_{H_α} / sin β̂_ij )   if (i, j) ∈ E,
w_ii = −Σ_{k≠i} w_ik,
w_ij = 0   otherwise,   (5)
where the notation is according to the inset figure, and the shear matrix H_α = R_θ U_ijk diag(α, 1, 0) U_ijk^⊤ R_θ^⊤ encodes the anisotropic scaling up to an orthogonal basis change. Here R_θ denotes the 3 × 3 rotation matrix rotating the basis vectors U_ijk on each triangle around the normal n̂ by angle θ.
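As an illustration of eqs. (2) and (5), the following sketch builds the shear matrix for one triangle and the weight for one edge. It is our own reading of the construction: the Rodrigues rotation about the normal and all variable names are assumptions, and a real implementation would assemble W as a sparse matrix over the whole mesh.

```python
import numpy as np

def shear_matrix(U, alpha, theta):
    """H_alpha = R_theta U diag(alpha, 1, 0) U^T R_theta^T for one triangle.
    U: 3x3 frame with columns (u_M, u_m, n); R_theta rotates about the normal n."""
    n = U[:, 2]
    K = np.array([[0, -n[2], n[1]],
                  [n[2], 0, -n[0]],
                  [-n[1], n[0], 0]])  # cross-product matrix of the unit normal
    # Rodrigues formula for rotation by theta about n.
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    return R @ U @ np.diag([alpha, 1.0, 0.0]) @ U.T @ R.T

def edge_weight(e_kj, e_ki, e_hj, e_hi, alpha_ij, beta_ij, H_k, H_h):
    """w_ij from eq. (5): H_alpha inner products of the edge vectors of the two
    triangles sharing edge ij, divided by the sines of the opposite angles."""
    return 0.5 * (e_kj @ H_k @ e_ki / np.sin(alpha_ij) +
                  e_hj @ H_h @ e_hi / np.sin(beta_ij))
```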
3
Intrinsic deep learning
This paper deals with the extension of the popular convolutional neural networks (CNN) [14] to
non-Euclidean domains. The key feature of CNNs is the convolutional layer, implementing the idea
of 'weight sharing', wherein a small set of templates (filters) is applied to different parts of the data.
In image analysis applications, the input into the CNN is a function representing pixel values given
on a Euclidean domain (plane); due to shift-invariance the convolution can be thought of as passing
a template across the plane and recording the correlation of the template with the function at that
location. One of the major problems in applying the same paradigm to non-Euclidean domains is
the lack of shift-invariance, the template now has to be location-dependent.
Among the recent attempts to develop intrinsic CNNs on non-Euclidean domain [6, 4, 16], the most
related to our work is GCNN [16]. The latter approach was introduced as a generalization of CNN
to triangular meshes based on geodesic local patches. The core of this method is the construction of
local geodesic polar coordinates using a procedure previously employed for intrinsic shape context
descriptors [12]. The patch operator (D(x)f)(θ, ρ) in GCNN maps the values of the function f around vertex x into the local polar coordinates (θ, ρ), leading to the definition of the geodesic
convolution
(f ⋆ a)(x) = max_{Δθ ∈ [0, 2π)} ∫ a(θ + Δθ, ρ) (D(x)f)(θ, ρ) dρ dθ,   (6)
which follows the idea of multiplication by a template, but is defined up to an arbitrary rotation Δθ ∈ [0, 2π) due to the ambiguity in the selection of the origin of the angular coordinate. The authors propose to take the maximum over all possible rotations of the template a(θ, ρ) to remove this ambiguity. Here, and in the following, f is some feature vector defined on the surface (e.g. texture, geometric descriptors, etc.).
There are several drawbacks to this construction. First, the charting method relies on a fast-marching-like procedure requiring a triangular mesh. While relatively insensitive to triangulation [12], it may
fail if the mesh is very irregular. Second, the radius of the geodesic patches must be sufficiently
small compared to the injectivity radius of the shape, otherwise the resulting patch is not guaranteed
to be a topological disk. In practice, this limits the size of the patches one can safely use, or requires
an adaptive radius selection mechanism.
4
Anisotropic convolutional neural networks
The key idea of the Anisotropic CNN presented in this paper is the construction of a patch operator
using anisotropic heat kernels. We interpret heat kernels as local weighting functions and construct
(D_α(x)f)(θ, t) = ( ∫_X h_{αθt}(x, ξ) f(ξ) dξ ) / ( ∫_X h_{αθt}(x, ξ) dξ ),   (7)

for some anisotropy level α > 1. This way, the values of f around point x are mapped to a local system of coordinates (θ, t) that behaves like a polar system (here t denotes the scale of the heat kernel and θ is its orientation). We define intrinsic convolution as
(f ⋆ a)(x) = ∫ a(θ, t) (D_α(x)f)(θ, t) dt dθ.   (8)
Note that unlike the arbitrarily oriented geodesic patches in GCNN, necessitating to take a maximum
over all the template rotations (6), in our construction it is natural to use the principal curvature
direction as the reference θ = 0.
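In the discrete setting, (7) and (8) reduce to matrix products against precomputed kernel matrices. A minimal NumPy sketch, under our own assumption of one dense n × n kernel per (orientation, scale) pair:

```python
import numpy as np

def patch_operator(H, f):
    """Discrete eq. (7) for one (theta, t): H[x, xi] = h_{alpha theta t}(x, xi).
    Returns the normalized heat-weighted average of f at every vertex."""
    return (H @ f) / H.sum(axis=1, keepdims=True)

def intrinsic_conv(kernels, f, a):
    """Discrete eq. (8). kernels[l][s] is the (n, n) kernel matrix for orientation l
    and scale s; f is an (n, 1) feature; a is the (L, S) learnable filter."""
    out = np.zeros_like(f)
    for l in range(a.shape[0]):
        for s in range(a.shape[1]):
            out += a[l, s] * patch_operator(kernels[l][s], f)
    return out
```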
Such an approach has a few major advantages compared to previous intrinsic CNN models. First,
being a spectral construction, our patch operator can be applied to any shape representation (like
LSCNN and unlike GCNN). Second, being defined in the spatial domain, the patches and the resulting filters have a clear geometric interpretation (unlike LSCNN). Third, our construction accounts
for local directional patterns (like GCNN and unlike LSCNN). Fourth, the heat kernels are always
well defined independently of the injectivity radius of the manifold (unlike GCNN). We summarize
the comparative advantages in Table 1.
ACNN architecture. Similarly to Euclidean CNNs, our ACNN consists of several layers that are
applied subsequently, i.e. the output of the previous layer is used as the input into the subsequent one.
Method    | Repr. | Input    | Generalizable | Filters  | Context | Directional | Task
OSD [15]  | Any   | Geometry | Yes           | Spectral | No      | No          | Descriptors
ADD [5]   | Any   | Geometry | Yes           | Spectral | No      | Yes         | Any
RF [20]   | Any   | Any      | Yes           | Spectral | No      | No          | Correspondence
GCNN [16] | Mesh  | Any      | Yes           | Spatial  | Yes     | Yes         | Any
SCNN [6]  | Any   | Any      | No            | Spectral | Yes     | No          | Any
LSCNN [4] | Any   | Any      | Yes           | Spectral | Yes     | No          | Any
ACNN      | Any   | Any      | Yes           | Spatial  | Yes     | Yes         | Any

Table 1: Comparison of different intrinsic learning models. Our ACNN model combines all the best properties of the other models. Note that OSD and ADD are local spectral descriptors operating with intrinsic geometric information of the shape and cannot be applied to arbitrary input, unlike the Random Forest (RF) and convolutional models.
ACNN, as any convolutional network, is applied in a point-wise manner on a function defined on
the manifolds, producing a point-wise output that is interpreted as soft correspondence, as described
below. Our intrinsic convolutional layer ICQ, with Q output maps, is defined as follows and replaces
the convolutional layer used in classical Euclidean CNNs with the construction (8). The ICQ layer
contains P × Q filters arranged in banks (P filters in each of Q banks); each bank corresponds to an output
dimension. The filters are applied to the input as follows,
f_q^out(x) = Σ_{p=1}^{P} (f_p^in ⋆ a_qp)(x),   q = 1, . . . , Q,   (9)
where a_qp(θ, t) are the learnable coefficients of the p-th filter in the q-th filter bank. A visualization
of such filters is available in the supplementary material.
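Assuming the patch-operator responses have been precomputed, eq. (9) is a single tensor contraction; this sketch (ours, not the authors' Theano implementation) shows the shape bookkeeping.

```python
import numpy as np

def icq_layer(patches, a):
    """Eq. (9): P input maps -> Q output maps through a P x Q filter bank.
    patches: (P, n, L, S) array of (D_alpha f_p^in)(theta, t) at every vertex;
    a:       (Q, P, L, S) learnable filter coefficients a_qp(theta, t)."""
    # Sum over input maps p and over the filter taps (theta, t) for each q.
    return np.einsum('qpls,pnls->nq', a, patches)   # (n, Q)
```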
Overall, the ACNN architecture, combining several layers of different types, acts as a non-linear parametric mapping of the form f_Θ(x) at each point x of the shape, where Θ denotes the set of
all learnable parameters of the network. The choice of the parameters is done by an optimization
process, minimizing a task-specific cost, and can thus be rather general. Here, we focus on learning
shape correspondence.
Learning correspondence Finding correspondence in a collection of shapes can be cast as a labelling problem, where one tries to label each vertex of a given query shape X with the index of a
corresponding point on some reference shape Y [20]. Let n and m denote the number of vertices in
X and Y, respectively. For a point x on a query shape, the output of ACNN f_Θ(x) is m-dimensional
and is interpreted as a probability distribution ('soft correspondence') on Y. The output of the
network at all the points of the query shape represents the probability of x being mapped to y.
Let us denote by y*(x) the ground-truth correspondence of x on the reference shape. We assume to be provided with examples of points from shapes across the collection and their ground-truth correspondence, T = {(x, y*(x))}. The optimal parameters of the network are found by minimizing the multinomial regression loss

ℓ_reg(Θ) = − Σ_{(x, y*(x)) ∈ T} log f_Θ(x, y*(x)).   (10)
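A small NumPy sketch of eq. (10) over a batch of training pairs; the epsilon guard is our addition for numerical safety.

```python
import numpy as np

def correspondence_loss(probs, gt):
    """Eq. (10): probs[i] is the m-dimensional soft correspondence f_Theta(x_i)
    (a distribution over reference vertices), gt[i] the index of y*(x_i)."""
    eps = 1e-12  # numerical guard, not part of the paper's loss
    return -np.log(probs[np.arange(len(gt)), gt] + eps).sum()
```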
5
Results
In this section, we evaluate the proposed ACNN method and compare it to state-of-the-art approaches. Anisotropic Laplacians were computed according to (5). Heat kernels were computed
in the frequency domain using all the eigenpairs. In all experiments, we used L = 16 orientations
and the anisotropy parameter α = 100. Neural networks were implemented in Theano [2]. The ADAM [11] stochastic optimization algorithm was used with an initial learning rate of 10^{-3}, β_1 = 0.9, and β_2 = 0.999. As the input to the networks, we used the local SHOT descriptor [21] with 544
dimensions and using default parameters. For all experiments, training was done by minimizing the
loss (10). For shapes with 6.9K vertices, Laplacian computation and eigendecomposition took 1 sec
and 4 seconds per angle, respectively, on a desktop workstation with 64GB of RAM and an i7-4820K
CPU. Forward propagation of the trained model takes approximately 0.5 sec to produce the dense
soft correspondence for all the vertices.
[Figure 2 plots: % correspondences (0 to 1) vs. geodesic error (0 to 0.2 of the geodesic diameter). Left panel legend: BIM, LSCNN, RF, ADD, GCNN, ACNN; middle and right panel legends: RF, PFM, ACNN.]
Figure 2: Performance of different correspondence methods, left to right: FAUST meshes, SHREC'16 Partial cuts and holes. Evaluation of the correspondence was done using the Princeton protocol.
Full mesh correspondence We used the FAUST humans dataset [3], containing 100 meshes of
10 scanned subjects, each in 10 different poses. The shapes in the collection manifest strong
non-isometric deformations. Vertex-wise groundtruth correspondence is known between all the
shapes. The zeroth FAUST shape containing 6890 vertices was used as reference; for each point
on the query shape, the output of the network represents the soft correspondence as a 6890-dimensional vector, which was then converted to a point correspondence with the technique explained in Section 4. The first 80 shapes were used for training and the remaining 20 for testing, following verbatim the settings of [16]. Batch normalization [9] allowed us to effectively train larger and deeper networks.
FC64+IC64+IC128+IC256+FC1024+FC512+Softmax. The soft correspondences produced by the
net were refined using functional map [18]. We refer to the supplementary material for the details.
We compare to Random Forests (RF) [20], Blended Intrinsic Maps (BIM) [10], Localized Spectral
CNN (LSCNN) [4], and Anisotropic Diffusion Descriptors (ADD) [5].
Figure 2 (left) shows the performance of different methods. The performance was evaluated using the Princeton protocol [10], plotting the percentage of matches that are at most r-geodesically
distant from the groundtruth correspondence on the reference shape. Two versions of the protocol consider intrinsically symmetric matches as correct (symmetric setting, solid curves) or wrong
(asymmetric, more challenging setting, dashed curves). Some methods based on intrinsic structures
(e.g. LSCNN or RF applied on WKS descriptors) are invariant under intrinsic symmetries and thus
cannot distinguish between symmetric points. The proposed ACNN method clearly outperforms
all the compared approaches and also perfectly distinguishes symmetric points. Figure 3 shows the
pointwise geodesic error of different correspondence methods (distance of the correspondence at a
point from the groundtruth). ACNN shows dramatically smaller distortions compared to other methods. Over 60% of matches are exact (zero geodesic error), while only a few points have geodesic
error larger than 10% of the geodesic diameter of the shape.¹ Please refer to the supplementary
material for an additional visualization of the quality of the correspondences obtained with ACNN
in terms of texture transfer.
Partial correspondence We used the recent very challenging SHREC'16 Partial Correspondence benchmark [7], consisting of nearly-isometrically deformed shapes from eight classes, with
different parts removed. Two types of partiality in the benchmark are cuts (removal of a few
large parts) and holes (removal of many small parts). In each class, the vertex-wise groundtruth
correspondence between the full shape and its partial versions is given. The dataset was split
into training and testing disjoint sets. For cuts, training was done on 15 shapes per class; for
holes, training was done on 10 shapes per class. We used the following ACNN architecture:
IC32+FC1024+DO(0.5)+FC2048+DO(0.5)+Softmax. The soft correspondences produced by the
net were refined using partial functional correspondence [19]. We refer to the supplementary material for the details.

¹ Per-subject leave-one-out produces comparable results with a mean accuracy of 59.6 ± 3.7%.

[Figure 3 images: pointwise error maps for Blended Intrinsic Maps, Geodesic CNN, and Anisotropic CNN, color scale 0 to 0.1.]
Figure 3: Pointwise geodesic error (in % of geodesic diameter) of different correspondence methods (top to bottom: Blended Intrinsic Maps, GCNN, ACNN) on the FAUST dataset. Error values are saturated at 10% of the geodesic diameter. Hot colors correspond to large errors.

The dropout regularization, with π_drop = 0.5, was crucial to avoid overfitting on
such a small training set. We compared ACNN to RF [20] and Partial Functional Maps (PFM) [19].
For the evaluation, we used the protocol of [7], which closely follows the Princeton benchmark.
Figure 2 (middle) compares the performance of different partial matching methods on the
SHREC'16 Partial (cuts) dataset. ACNN outperforms other approaches with a significant margin.
Figure 4 (top) shows examples of partial correspondence on the horse shape as well as the pointwise geodesic error. We observe that the proposed approach produces high-quality correspondences
even in such a challenging setting. Figure 2 (right) compares the performance of different partial
matching methods on the SHREC'16 Partial (holes) dataset. In this setting as well, ACNN outperforms other approaches with a significant margin. Figure 4 (bottom) shows examples of partial
correspondence on the dog shape as well as the pointwise geodesic error.
6
Conclusions
We presented Anisotropic CNN, a new framework generalizing convolutional neural networks to
non-Euclidean domains, allowing deep learning to be performed on geometric data. Our work follows
the very recent trend in bringing machine learning methods to computer graphics and geometry
processing applications, and is currently the most generic intrinsic CNN model. Our experiments
show that ACNN outperforms previously proposed intrinsic CNN models, as well as additional
state-of-the-art methods in the shape correspondence application in challenging settings. Being a
generic model, ACNN can be used for many other applications. The most promising future work
direction is applying ACNN to learning on graphs.
[Figure 4 images: ACNN and Random Forest partial correspondences with per-vertex error maps on a scale of 0 to 0.1.]
Figure 4: Examples of partial correspondence on the SHREC'16 Partial cuts (top) and holes (bottom) datasets. Rows 1 and 4: correspondence produced by ACNN. Corresponding points are shown in similar color. Reference shape is shown on the left. Rows 2, 5 and 3, 6: pointwise geodesic error (in % of geodesic diameter) of the ACNN and RF correspondence, respectively. Error values are saturated at 10% of the geodesic diameter. Hot colors correspond to large errors.
Acknowledgments
The authors wish to thank Matteo Sala for the textured models. This research was supported by
the ERC Starting Grant No. 307047 (COMET), a Google Faculty Research Award, and Nvidia
equipment grant.
References
[1] M. Andreux, E. Rodolà, M. Aubry, and D. Cremers. Anisotropic Laplace-Beltrami operators for shape
analysis. In Proc. NORDIA, 2014.
[2] J. Bergstra et al. Theano: a CPU and GPU math expression compiler. In Proc. SciPy, June 2010.
[3] F. Bogo, J. Romero, M. Loper, and M. J. Black. FAUST: Dataset and evaluation for 3D mesh registration.
In Proc. CVPR, 2014.
[4] D. Boscaini, J. Masci, S. Melzi, M. M. Bronstein, U. Castellani, and P. Vandergheynst. Learning class-specific descriptors for deformable shapes using localized spectral convolutional networks. Computer
Graphics Forum, 34(5):13?23, 2015.
[5] D. Boscaini, J. Masci, E. Rodolà, M. M. Bronstein, and D. Cremers. Anisotropic diffusion descriptors.
Computer Graphics Forum, 35(2), 2016.
[6] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun. Spectral networks and locally connected networks on
graphs. In Proc. ICLR, 2014.
[7] L. Cosmo, E. Rodolà, M. M. Bronstein, A. Torsello, D. Cremers, and Y. Sahillioğlu. SHREC'16: Partial
matching of deformable shapes. In Proc. 3DOR, 2016.
[8] K. Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4):193?202, 1980.
[9] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal
covariate shift. In Proc. ICML, pages 448?456, 2015.
[10] V. G. Kim, Y. Lipman, and T. Funkhouser. Blended intrinsic maps. TOG, 30(4):79, 2011.
[11] D. P. Kingma and J. Ba. ADAM: A method for stochastic optimization. In ICLR, 2015.
[12] I. Kokkinos, M. M. Bronstein, R. Litman, and A. M. Bronstein. Intrinsic shape context descriptors for
deformable shapes. In Proc. CVPR, 2012.
[13] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural
networks. In Proc. NIPS, 2012.
[14] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4):541?551, 1989.
[15] R. Litman and A. M. Bronstein. Learning spectral descriptors for deformable shape correspondence.
PAMI, 36(1):170?180, 2014.
[16] J. Masci, D. Boscaini, M. M. Bronstein, and P. Vandergheynst. Geodesic convolutional neural networks
on Riemannian manifolds. In Proc. 3DRR, 2015.
[17] F. Mémoli. Gromov-Wasserstein Distances and the Metric Approach to Object Matching. Foundations of
Computational Mathematics, pages 1?71, 2011.
[18] M. Ovsjanikov, M. Ben-Chen, J. Solomon, A. Butscher, and L. Guibas. Functional maps: a flexible
representation of maps between shapes. TOG, 31(4):1?11, 2012.
[19] E. Rodolà, L. Cosmo, M. M. Bronstein, A. Torsello, and D. Cremers. Partial functional correspondence.
Computer Graphics Forum, 2016.
[20] E. Rodolà, S. Rota Bulò, T. Windheuser, M. Vestner, and D. Cremers. Dense non-rigid shape correspondence using random forests. In Proc. CVPR, 2014.
[21] S. Salti, F. Tombari, and L. Di Stefano. SHOT: unique signatures of histograms for surface and texture
description. CVIU, 125:251?264, 2014.
[22] D. I Shuman, B. Ricaud, and P. Vandergheynst. Vertex-frequency analysis on graphs. arXiv:1307.5708,
2013.
[23] J. Solomon, A. Nguyen, A. Butscher, M. Ben-Chen, and L. Guibas. Soft maps between surfaces. Computer Graphics Forum, 31(5):1617?1626, 2012.
[24] H. Su, S. Maji, E. Kalogerakis, and E. Learned-Miller. Multi-view convolutional neural networks for 3D
shape recognition. In Proc. ICCV, 2015.
[25] O. van Kaick, H. Zhang, G. Hamarneh, and D. Cohen-Or. A survey on shape correspondence. Computer
Graphics Forum, 20:1?23, 2010.
[26] L. Wei, Q. Huang, D. Ceylan, E. Vouga, and H. Li. Dense human body correspondences using convolutional networks. In Proc. CVPR, 2016.
[27] T. Windheuser, M. Vestner, E. Rodolà, R. Triebel, and D. Cremers. Optimal intrinsic descriptors for
non-rigid shape analysis. In Proc. BMVC, 2014.
[28] Z. Wu, S. Song, A. Khosla, et al. 3D ShapeNets: A deep representation for volumetric shapes. In Proc.
CVPR, 2015.
Value Iteration Networks
Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and Pieter Abbeel
Dept. of Electrical Engineering and Computer Sciences, UC Berkeley
Abstract
We introduce the value iteration network (VIN): a fully differentiable neural network with a 'planning module' embedded within. VINs can learn to plan, and are
suitable for predicting outcomes that involve planning-based reasoning, such as
policies for reinforcement learning. Key to our approach is a novel differentiable
approximation of the value-iteration algorithm, which can be represented as a convolutional neural network, and trained end-to-end using standard backpropagation.
We evaluate VIN based policies on discrete and continuous path-planning domains,
and on a natural-language based search task. We show that by learning an explicit
planning computation, VIN policies generalize better to new, unseen domains.
1
Introduction
Over the last decade, deep convolutional neural networks (CNNs) have revolutionized supervised
learning for tasks such as object recognition, action recognition, and semantic segmentation [3, 15, 6,
19]. Recently, CNNs have been applied to reinforcement learning (RL) tasks with visual observations
such as Atari games [21], robotic manipulation [18], and imitation learning (IL) [9]. In these tasks, a
neural network (NN) is trained to represent a policy ? a mapping from an observation of the system?s
state to an action, with the goal of representing a control strategy that has good long-term behavior,
typically quantified as the minimization of a sequence of time-dependent costs.
The sequential nature of decision making in RL is inherently different than the one-step decisions
in supervised learning, and in general requires some form of planning [2]. However, most recent
deep RL works [21, 18, 9] employed NN architectures that are very similar to the standard networks
used in supervised learning tasks, which typically consist of CNNs for feature extraction, and fully
connected layers that map the features to a probability distribution over actions. Such networks are
inherently reactive, and in particular, lack explicit planning computation. The success of reactive
policies in sequential problems is due to the learning algorithm, which essentially trains a reactive
policy to select actions that have good long-term consequences in its training domain.
To understand why planning can nevertheless be an important ingredient in a policy, consider the
grid-world navigation task depicted in Figure 1 (left), in which the agent can observe a map of its
domain, and is required to navigate between some obstacles to a target position. One hopes that after
training a policy to solve several instances of this problem with different obstacle configurations, the
policy would generalize to solve a different, unseen domain, as in Figure 1 (right). However, as we
show in our experiments, while standard CNN-based networks can be easily trained to solve a set of
such maps, they do not generalize well to new tasks outside this set, because they do not understand
the goal-directed nature of the behavior. This observation suggests that the computation learned by
reactive policies is different from planning, which is required to solve a new task.¹
¹ In principle, with enough training data that covers all possible task configurations, and a rich enough policy
representation, a reactive policy can learn to map each task to its optimal policy. In practice, this is often
too expensive, and we offer a more data-efficient approach by exploiting a flexible prior about the planning
computation underlying the behavior.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
In this work, we propose a NN-based policy that can effectively learn to plan. Our model, termed a value-iteration network (VIN), has a differentiable 'planning program' embedded within the NN structure.

The key to our approach is an observation that the classic value-iteration (VI) planning algorithm [1, 2] may be represented by a specific type of CNN. By embedding such a VI network module inside a standard feed-forward classification network, we obtain a NN model that can learn the parameters of a planning computation that yields useful predictions. The VI block is differentiable, and the whole network can be trained using standard backpropagation. This makes our policy simple to train using standard RL and IL algorithms, and straightforward to integrate with NNs for perception and control.

[Figure 1: Two instances of a grid-world domain. The task is to move to the goal between the obstacles.]
Connections between planning algorithms and recurrent NNs were previously explored by Ilin
et al. [12]. Our work builds on related ideas, but results in a more broadly applicable policy
representation. Our approach is different from model-based RL [25, 4], which requires system
identification to map the observations to a dynamics model, which is then solved for a policy. In
many applications, including robotic manipulation and locomotion, accurate system identification
is difficult, and modelling errors can severely degrade the policy performance. In such domains, a
model-free approach is often preferred [18]. Since a VIN is just a NN policy, it can be trained model
free, without requiring explicit system identification. In addition, the effects of modelling errors in
VINs can be mitigated by training the network end-to-end, similarly to the methods in [13, 11].
We demonstrate the effectiveness of VINs within standard RL and IL algorithms in various problems, some of which require visual perception, continuous control, and natural-language-based decision making in the WebNav challenge [23]. After training, the policy learns to map an observation to a
planning computation relevant for the task, and generate action predictions based on the resulting
plan. As we demonstrate, this leads to policies that generalize better to new, unseen, task instances.
2
Background
In this section we provide background on planning, value iteration, CNNs, and policy representations
for RL and IL. In the sequel, we shall show that CNNs can implement a particular form of planning
computation similar to the value iteration algorithm, which can then be used as a policy for RL or IL.
Value Iteration: A standard model for sequential decision making and planning is the Markov decision process (MDP) [1, 2]. An MDP M consists of states s ∈ S, actions a ∈ A, a reward function R(s, a), and a transition kernel P(s'|s, a) that encodes the probability of the next state given the current state and action. A policy π(a|s) prescribes an action distribution for each state. The goal in an MDP is to find a policy that obtains high rewards in the long term. Formally, the value V^π(s) of a state under policy π is the expected discounted sum of rewards when starting from that state and executing policy π, V^π(s) := E^π[Σ_{t=0}^∞ γ^t r(s_t, a_t) | s_0 = s], where γ ∈ (0, 1) is a discount factor, and E^π denotes an expectation over trajectories of states and actions (s_0, a_0, s_1, a_1, . . .), in which actions are selected according to π, and states evolve according to the transition kernel P(s'|s, a). The optimal value function V*(s) := max_π V^π(s) is the maximal long-term return possible from a state. A policy π* is said to be optimal if V^{π*}(s) = V*(s) ∀s. A popular algorithm for calculating V* and π* is value iteration (VI):

V_{n+1}(s) = max_a Q_n(s, a)  ∀s,   where   Q_n(s, a) = R(s, a) + γ Σ_{s'} P(s'|s, a) V_n(s').   (1)

It is well known that the value function V_n in VI converges as n → ∞ to V*, from which an optimal policy may be derived as π*(s) = arg max_a Q_∞(s, a).
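For reference, a direct NumPy sketch of (1) on a small tabular MDP; the array shapes are our own conventions.

```python
import numpy as np

def value_iteration(R, P, gamma, K):
    """K sweeps of eq. (1). R: (S, A) rewards; P: (A, S, S) with P[a, s, s'];
    gamma: discount in (0, 1). Returns V_K and a greedy policy (assumes K >= 1)."""
    V = np.zeros(R.shape[0])
    for _ in range(K):
        Q = R + gamma * np.einsum('ast,t->sa', P, V)  # Q_n(s, a)
        V = Q.max(axis=1)                             # V_{n+1}(s)
    return V, Q.argmax(axis=1)
```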
Convolutional Neural Networks (CNNs) are NNs with a particular architecture that has proved useful for computer vision, among other domains [8, 16, 3, 15]. A CNN is comprised of stacked convolution and max-pooling layers. The input to each convolution layer is a 3-dimensional signal X, typically, an image with l channels, m horizontal pixels, and n vertical pixels, and its output h is an l'-channel convolution of the image with kernels W^1, . . . , W^{l'}:

h_{l',i',j'} = σ( Σ_{l,i,j} W^{l'}_{l,i,j} X_{l,i'−i,j'−j} ),

where σ is some scalar activation function. A max-pooling layer selects, for each channel l and pixel i, j in h, the maximum value among its neighbors N(i, j),

h^{maxpool}_{l,i,j} = max_{i',j' ∈ N(i,j)} h_{l,i',j'}.

Typically, the neighbors N(i, j) are chosen as a k × k image patch around pixel i, j. After max-pooling, the image is down-sampled by a constant factor d, commonly 2 or 4, resulting in an output signal with l' channels, m/d horizontal pixels, and n/d vertical pixels. CNNs are typically trained using stochastic gradient descent (SGD), with backpropagation for computing gradients.
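The two operations above can be written directly in NumPy. This reference sketch (ours) uses cross-correlation, the form in which 'convolution' is typically implemented in CNN libraries:

```python
import numpy as np

def conv_channel(X, W, sigma=np.tanh):
    """One output channel over the valid region:
    out[i, j] = sigma(sum_{l,di,dj} W[l, di, dj] * X[l, i+di, j+dj]).
    X: (l, m, n) input; W: (l, k, k) kernel."""
    l, m, n = X.shape
    k = W.shape[1]
    out = np.empty((m - k + 1, n - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(W * X[:, i:i + k, j:j + k])
    return sigma(out)

def max_pool(h, d=2):
    """Max over non-overlapping d x d patches in each channel (down-sample by d)."""
    l, m, n = h.shape
    h = h[:, :m - m % d, :n - n % d]
    return h.reshape(l, h.shape[1] // d, d, h.shape[2] // d, d).max(axis=(2, 4))
```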
Reinforcement Learning and Imitation Learning: In MDPs where the state space is very large or continuous, or when the MDP transitions or rewards are not known in advance, planning algorithms cannot be applied. In these cases, a policy can be learned from either expert supervision (IL) or by trial and error (RL). While the learning algorithms in both cases are different, the policy representations (which are the focus of this work) are similar. Additionally, most state-of-the-art algorithms such as [24, 21, 26, 18] are agnostic to the policy representation, and only require it to be differentiable, for performing gradient descent on some algorithm-specific loss function. Therefore, in this paper we do not commit to a specific learning algorithm, and only consider the policy.

Let φ(s) denote an observation for state s. The policy is specified as a parametrized function π_θ(a|φ(s)) mapping observations to a probability over actions, where θ are the policy parameters. For example, the policy could be represented as a neural network, with θ denoting the network weights. The goal is to tune the parameters such that the policy behaves well in the sense that π_θ(a|φ(s)) ≈ π*(a|φ(s)), where π* is the optimal policy for the MDP, as defined in Section 2.

In IL, a dataset of N state observations and corresponding optimal actions {(φ(s_i), a_i ∼ π*(φ(s_i)))}_{i=1,...,N} is generated by an expert. Learning a policy then becomes an instance of supervised learning [24, 9]. In RL, the optimal action is not available, but instead, the agent can act in the world and observe the rewards and state transitions its actions effect. RL algorithms such as in [27, 21, 26, 18] use these observations to improve the value of the policy.
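For concreteness, a minimal sketch (ours) of the IL objective, treating the expert dataset as a supervised classification problem:

```python
import numpy as np

def il_loss(pi_probs, expert_actions):
    """Negative log-likelihood of expert actions under the policy.
    pi_probs[i] is pi_theta(.|phi(s_i)) over |A| actions; expert_actions[i] = a_i."""
    idx = np.arange(len(expert_actions))
    return -np.log(pi_probs[idx, expert_actions]).mean()
```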
3
The Value Iteration Network Model
In this section we introduce a general policy representation that embeds an explicit planning module.
As stated earlier, the motivation for such a representation is that a natural solution to many tasks, such
as the path planning described above, involves planning on some model of the domain.
Let M denote the MDP of the domain for which we design our policy π. We assume that there is some unknown MDP M̄ such that the optimal plan in M̄ contains useful information about the optimal policy in the original task M. However, we emphasize that we do not assume to know M̄ in advance. Our idea is to equip the policy with the ability to learn and solve M̄, and to add the solution of M̄ as an element in the policy π. We hypothesize that this will lead to a policy that automatically learns a useful M̄ to plan on. We denote by s̄ ∈ S̄, ā ∈ Ā, R̄(s̄, ā), and P̄(s̄'|s̄, ā) the states, actions, rewards, and transitions in M̄. To facilitate a connection between M and M̄, we let R̄ and P̄ depend on the observation in M, namely, R̄ = f_R(φ(s)) and P̄ = f_P(φ(s)), and we will later learn the functions f_R and f_P as a part of the policy learning process.
For example, in the grid-world domain described above, we can let M̄ have the same state and action spaces as the true grid-world M. The reward function f_R can map an image of the domain to a high reward at the goal, and negative reward near an obstacle, while f_P can encode deterministic movements in the grid-world that do not depend on the observation. While these rewards and transitions are not necessarily the true rewards and transitions in the task, an optimal plan in M̄ will still follow a trajectory that avoids obstacles and reaches the goal, similarly to the optimal plan in M.
Once an MDP M̄ has been specified, any standard planning algorithm can be used to obtain the value function V̄*. In the next section, we shall show that using a particular implementation of VI for planning has the advantage of being differentiable, and simple to implement within a NN framework. In this section however, we focus on how to use the planning result V̄* within the NN policy π. Our approach is based on two important observations. The first is that the vector of values V̄*(s̄) ∀s̄ encodes all the information about the optimal plan in M̄. Thus, adding the vector V̄* as additional features to the policy π is sufficient for extracting information about the optimal plan in M̄. However, an additional property of V̄* is that the optimal decision π̄*(s̄) at a state s̄ can depend only on a subset of the values of V̄*, since π̄*(s̄) = arg max_ā [R̄(s̄, ā) + γ Σ_{s̄'} P̄(s̄'|s̄, ā) V̄*(s̄')]. Therefore, if the MDP has a local connectivity structure, such as in the grid-world example above, the set of states for which P̄(s̄'|s̄, ā) > 0 is a small subset of S̄.
In NN terminology, this is a form of attention [31], in the sense that for a given label prediction
(action), only a subset of the input features (value function) is relevant. Attention is known to improve
learning performance by reducing the effective number of network parameters during learning.
Therefore, the second element in our network is an attention module that outputs a vector of (attention
modulated) values ψ(s). Finally, the vector ψ(s) is added as additional features to a reactive policy π_re(a|φ(s), ψ(s)). The full network architecture is depicted in Figure 2 (left).
Returning to our grid-world example, at a particular state s, the reactive policy only needs to query
the values of the states neighboring s in order to select the correct action. Thus, the attention module
in this case could return a ψ(s) vector with a subset of V̄* for these neighboring states.
[Figure 2 schematic (right): the VI module takes a reward R̄ and transitions P̄, produces a Q̄ layer with one channel per action, max-pools it into a new value V̄ ('Prev. Value' to 'New Value'), and recurs K times.]
Figure 2: Planning-based NN models. Left: a general policy representation that adds value function features from a planner to a reactive policy. Right: VI module, a CNN representation of the VI algorithm.
Let θ denote all the parameters of the policy, namely, the parameters of f_R, f_P, and π_re, and note that ψ(s) is in fact a function of φ(s). Therefore, the policy can be written in the form π_θ(a|φ(s)), similarly to the standard policy form (cf. Section 2). If we could back-propagate through this function,
then potentially we could train the policy using standard RL and IL algorithms, just like any other
standard policy representation. While it is easy to design functions fR and fP that are differentiable
(and we provide several examples in our experiments), back-propagating the gradient through the
planning algorithm is not trivial. In the following, we propose a novel interpretation of an approximate
VI algorithm as a particular form of a CNN. This allows us to conveniently treat the planning module
as just another NN, and by back-propagating through it, we can train the whole policy end-to-end.
3.1
The VI Module
We now introduce the VI module, a NN that encodes a differentiable planning computation.
Our starting point is the VI algorithm (1). Our main observation is that each iteration of VI may
be seen as passing the previous value function Vn and reward function R through a convolution
layer and max-pooling layer. In this analogy, each channel in the convolution layer corresponds to
the Q-function for a specific action, and convolution kernel weights correspond to the discounted
transition probabilities. Thus by recurrently applying a convolution layer K times, K iterations of VI
are effectively performed.
Following this idea, we propose the VI network module, as depicted in Figure 2B. The input to the VI module is a 'reward image' R̄ of dimensions l × m × n, where here, for the purpose of clarity, we follow the CNN formulation and explicitly assume that the state space S̄ maps to a 2-dimensional grid. However, our approach can be extended to general discrete state spaces, for example, a graph, as we report in the WikiNav experiment in Section 4.4. The reward is fed into a convolutional layer Q̄ with Ā channels and a linear activation function, Q̄_{ā,i',j'} = Σ_{l,i,j} W^{ā}_{l,i,j} R̄_{l,i'−i,j'−j}. Each channel in this layer corresponds to Q̄(s̄, ā) for a particular action ā. This layer is then max-pooled along the actions channel to produce the next-iteration value function layer V̄, V̄_{i,j} = max_ā Q̄(ā, i, j). The next-iteration value function layer V̄ is then stacked with the reward R̄, and fed back into the convolutional layer and max-pooling layer K times, to perform K iterations of value iteration.
The VI module is simply a NN architecture that has the capability of performing an approximate VI
computation. Nevertheless, representing VI in this form makes learning the MDP parameters and
reward function natural: by backpropagating through the network, similarly to a standard CNN. VI
modules can also be composed hierarchically, by treating the value of one VI module as additional
input to another VI module. We further report on this idea in the supplementary material.
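A minimal sketch of the VI block's forward pass, using SciPy's 2-D correlation as the convolution primitive; the kernel names W_R and W_V and their shapes are our assumptions, and a trainable version would express the same recurrence in a framework such as Theano:

```python
import numpy as np
from scipy.signal import correlate

def vi_module(R_bar, W_R, W_V, K):
    """K rounds of: stack [R_bar; V], convolve into action channels, max over actions.
    R_bar: (l, m, n) reward image; W_R: (A, l, 3, 3); W_V: (A, 3, 3)."""
    A, l = W_R.shape[:2]
    V = np.zeros(R_bar.shape[1:])
    for _ in range(K):
        Q = np.stack([sum(correlate(R_bar[c], W_R[a, c], mode='same') for c in range(l))
                      + correlate(V, W_V[a], mode='same')
                      for a in range(A)])   # (A, m, n): one Q-channel per action
        V = Q.max(axis=0)                   # max-pool along the action channel
    return Q, V
```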
3.2
Value Iteration Networks
We now have all the ingredients for a differentiable planning-based policy, which we term a value
iteration network (VIN). The VIN is based on the general planning-based policy defined above, with
the VI module as the planning algorithm. In order to implement a VIN, one has to specify the state and action spaces for the planning module S̄ and Ā, the reward and transition functions f_R and f_P, and the attention function; we refer to this as the VIN design. For some tasks, as we show in our
experiments, it is relatively straightforward to select a suitable design, while other tasks may require
more thought. However, we emphasize an important point: the reward, transitions, and attention can
be defined by parametric functions, and trained with the whole policy.² Thus, a rough design can be
specified, and then fine-tuned by end-to-end training.
Once a VIN design is chosen, implementing the VIN is straightforward, as it is simply a form of a
CNN. The networks in our experiments all required only several lines of Theano [28] code. In the
next section, we evaluate VIN policies on various domains, showing that by learning to plan, they
achieve a better generalization capability.
4
Experiments
In this section we evaluate VINs as policy representations on various domains. Additional experiments
investigating RL and hierarchical VINs, as well as technical implementation details are discussed in
the supplementary material. Source code is available at https://github.com/avivt/VIN.
Our goal in these experiments is to investigate the following questions:
1. Can VINs effectively learn a planning computation using standard RL and IL algorithms?
2. Does the planning computation learned by VINs make them better than reactive policies at
generalizing to new domains?
An additional goal is to point out several ideas for designing VINs for various tasks. While this is not
an exhaustive list that fits all domains, we hope that it will motivate creative designs in future work.
4.1 Grid-World Domain
Our first experiment domain is a synthetic grid-world with randomly placed obstacles, in which the
observation includes the position of the agent, and also an image of the map of obstacles and goal
position. Figure 3 shows two random instances of such a grid-world of size 16 × 16. We conjecture
that by learning the optimal policy for several instances of this domain, a VIN policy would learn the
planning computation required to solve a new, unseen, task.
In such a simple domain, an optimal policy can easily be calculated using exact VI. Note, however,
that here we are interested in evaluating whether a NN policy, trained using RL or IL, can learn
to plan. In the following results, policies were trained using IL, by standard supervised learning
from demonstrations of the optimal policy. In the supplementary material, we report additional RL
experiments that show similar findings.
We design a VIN for this task following the guidelines described above, where the planning MDP M̄
is a grid-world, similar to the true MDP. The reward mapping fR is a CNN mapping the image input to
a reward map in the grid-world. Thus, fR should potentially learn to discriminate between obstacles,
non-obstacles and the goal, and assign a suitable reward to each. The transitions P̄ were defined as 3 × 3 convolution kernels in the VI block, exploiting the fact that transitions in the grid-world are local.³ The recurrence K was chosen in proportion to the grid-world size, to ensure that information
can flow from the goal state to any other state. For the attention module, we chose a trivial approach
that selects the Q̄ values in the VI block for the current state, i.e., ψ(s) = Q̄(s, ·). The final reactive policy is a fully connected network that maps ψ(s) to a probability over actions.
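The attention step described here is just an indexing operation; a sketch under our shape conventions:

```python
import numpy as np

def attention(Q_bar, pos):
    """psi(s) = Q_bar(s, .): the vector of action values at the agent's cell.
    Q_bar: (A, m, n) output of the VI block; pos: (i, j) grid position of s."""
    i, j = pos
    return Q_bar[:, i, j]   # (A,) feature vector fed to the reactive policy
```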
We compare VINs to the following NN reactive policies:
CNN network: We devised a CNN-based reactive policy inspired by the recent impressive results of
DQN [21], with 5 convolution layers, and a fully connected output. While the network in [21] was
trained to predict Q values, our network outputs a probability over actions. These terms are related,
since π*(s) = arg max_a Q(s, a). Fully Convolutional Network (FCN): The problem setting for
this domain is similar to semantic segmentation [19], in which each pixel in the image is assigned a
semantic label (the action in our case). We therefore devised an FCN inspired by a state-of-the-art
semantic segmentation algorithm [19], with 3 convolution layers, where the first layer has a filter that
spans the whole image, to properly convey information from the goal to every other state.
In Table 1 we present the average 0-1 prediction loss of each model, evaluated on a held-out test-set
of maps with random obstacles, goals, and initial states, for different problem sizes. In addition, for
each map, a full trajectory from the initial state was predicted, by iteratively rolling-out the next-states
² VINs are fundamentally different from inverse RL methods [22], where transitions are required to be known.
³ Note that the transitions defined this way do not depend on the state s̄. Interestingly, we shall see that the network learned to plan successful trajectories nevertheless, by appropriately shaping the reward.
Figure 3: Grid-world domains (best viewed in color). A,B: Two random instances of the 28 × 28 synthetic grid-world, with the VIN-predicted trajectories and ground-truth shortest paths between random start and goal positions. C: An image of the Mars domain, with points of elevation sharper than 10° colored in red. These points were calculated from a matching image of elevation data (not shown), and were not available to the learning algorithm. Note the difficulty of distinguishing
(not shown), and were not available to the learning algorithm. Note the difficulty of distinguishing
between obstacles and non-obstacles. D: The VIN-predicted (purple line with cross markers), and the
shortest-path ground truth (blue line) trajectories between between random start and goal positions.
Domain  | VIN Pred. loss | VIN Succ. rate | VIN Traj. diff. | CNN Pred. loss | CNN Succ. rate | CNN Traj. diff. | FCN Pred. loss | FCN Succ. rate | FCN Traj. diff.
8 × 8   | 0.004          | 99.6%          | 0.001           | 0.02           | 97.9%          | 0.006           | 0.01           | 97.3%          | 0.004
16 × 16 | 0.05           | 99.3%          | 0.089           | 0.10           | 87.6%          | 0.06            | 0.07           | 88.3%          | 0.05
28 × 28 | 0.11           | 97%            | 0.086           | 0.13           | 74.2%          | 0.078           | 0.09           | 76.6%          | 0.08

Table 1: Performance on grid-world domain. Top: comparison with reactive policies. For all domain sizes, VIN networks significantly outperform standard reactive networks. Note that the performance gap increases dramatically with problem size.
predicted by the network. A trajectory was said to succeed if it reached the goal without hitting
obstacles. For each trajectory that succeeded, we also measured its difference in length from the
optimal trajectory. The average difference and the average success rate are reported in Table 1.
Clearly, VIN policies generalize to domains outside the training set. A visualization of the reward
mapping fR (see supplementary material) shows that it is negative at obstacles, positive at the goal,
and a small negative constant otherwise. The resulting value function has a gradient pointing towards
a direction to the goal around obstacles, thus a useful planning computation was learned. VINs also
significantly outperform the reactive networks, and the performance gap increases dramatically with
the problem size. Importantly, note that the prediction loss for the reactive policies is comparable to
the VINs, although their success rate is significantly worse. This shows that this is not a standard
case of overfitting/underfitting of the reactive policies. Rather, VIN policies, by their VI structure,
focus prediction errors on less important parts of the trajectory, while reactive policies do not make
this distinction, and learn the easily predictable parts of the trajectory yet fail on the complete task.
The VINs have an effective depth of K, which is larger than the depth of the reactive policies. One
may wonder whether any deep enough network would learn to plan. In principle, a CNN or FCN of
depth K has the potential to perform the same computation as a VIN. However, it has much more
parameters, requiring much more training data. We evaluate this by untying the weights in the K
recurrent layers in the VIN. Our results, reported in the supplementary material, show that untying
the weights degrades performance, with a stronger effect for smaller sizes of training data.
4.2 Mars Rover Navigation
In this experiment we show that VINs can learn to plan from natural image input. We demonstrate
this on path-planning from overhead terrain images of a Mars landscape.
Each domain is represented by a 128 × 128 image patch, on which we defined a 16 × 16 grid-world, where each state was considered an obstacle if the terrain in its corresponding 8 × 8 image patch contained an elevation angle of 10 degrees or more, evaluated using an external elevation database.
An example of the domain and terrain image is depicted in Figure 3. The MDP for shortest-path
planning in this case is similar to the grid-world domain of Section 4.1, and the VIN design was
similar, only with a deeper CNN in the reward mapping fR for processing the image.
The policy was trained to predict the shortest-path directly from the terrain image. We emphasize that
the elevation data is not part of the input, and must be inferred (if needed) from the terrain image.
After training, VIN achieved a success rate of 84.8%. To put this rate in context, we compare with
the best performance achievable without access to the elevation data, which is 90.3%. To make
this comparison, we trained a CNN to classify whether an 8 × 8 patch is an obstacle or not. This
classifier was trained using the same image data as the VIN network, but its labels were the true
obstacle classifications from the elevation map (we reiterate that the VIN did not have access to
these ground-truth obstacle labels during training or testing). The success rate of planner that uses
the obstacle map generated by this classifier from the raw image is 90.3%, showing that obstacle
identification from the raw image is indeed challenging. Thus, the success rate of the VIN, which was
trained without any obstacle labels, and had to "figure out" the planning process, is quite remarkable.
4.3 Continuous Control
We now consider a 2D path planning domain with continuous states and continuous actions, which cannot be solved using VI, and therefore a VIN cannot be naively applied. Instead, we will construct the VIN to perform "high-level" planning on a discrete, coarse, grid-world representation of the continuous domain. We shall show that a VIN can learn such a "high-level" plan, and also exploit that plan within its "low-level" continuous control policy. Moreover, the VIN policy results in better generalization than a reactive policy.

Network | Train Error | Test Error
VIN     | 0.30        | 0.35
CNN     | 0.39        | 0.59

Figure 4: Continuous control domain. Top: average distance to goal on training and test domains for VIN and CNN policies. Bottom: trajectories predicted by VIN and CNN on test domains.

Consider the domain in Figure 4. A red-colored particle needs to be navigated to a green goal using horizontal and vertical forces. Gray-colored obstacles are randomly positioned in the domain,
and apply an elastic force and friction when contacted. This domain presents a non-trivial control problem, as the agent needs to both plan a feasible
trajectory between the obstacles (or use them to bounce off) and control the particle (which has mass and inertia) to follow it. The state observation consists of the particle's continuous position and velocity, and a static 16 × 16 downscaled image of the obstacles and goal position in the domain. In principle, such an observation is sufficient to devise a "rough plan" for the particle to follow.
As in our previous experiments, we investigate whether a policy trained on several instances of this
domain with different start state, goal, and obstacle positions, would generalize to an unseen domain.
For training we chose the guided policy search (GPS) algorithm with unknown dynamics [17], which
is suitable for learning policies for continuous dynamics with contacts, and we used the publicly
available GPS code [7], and Mujoco [29] for physical simulation. We generated 200 random training
instances, and evaluate our performance on 40 different test instances from the same distribution.
Our VIN design is similar to the grid-world cases, with some important modifications: the attention
module selects a 5 × 5 patch of the value V̄, centered around the current (discretized) position in the
map. The final reactive policy is a 3-layer fully connected network, with a 2-dimensional continuous
output for the controls. In addition, due to the limited number of training domains, we pre-trained the
VIN with transition weights that correspond to discounted grid-world transitions. This is a reasonable
prior for the weights in a 2-d task, and we emphasize that even with this initialization, the initial
value function is meaningless, since the reward map fR is not yet learned. We compare with a
CNN-based reactive policy inspired by the state-of-the-art results in [21, 20], with 2 CNN layers for
image processing, followed by a 3-layer fully connected network similar to the VIN reactive policy.
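A sketch of the attention step just described, assuming zero-padding at the map boundary (the boundary treatment is not specified in the text, so that choice is an assumption here):

```python
import numpy as np

def attend(value_map, pos, size=5):
    """Select a size x size patch of the value map centered on the particle's
    current (discretized) position (i, j)."""
    pad = size // 2
    padded = np.pad(value_map, pad, mode="constant")
    i, j = pos
    # padded[i + pad, j + pad] corresponds to value_map[i, j], so the
    # centered patch is padded[i : i + size, j : j + size].
    return padded[i:i + size, j:j + size]
```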
Figure 4 shows the performance of the trained policies, measured as the final distance to the target.
The VIN clearly outperforms the CNN on test domains. We also plot several trajectories of both
policies on test domains, showing that VIN learned a more sensible generalization of the task.
4.4 WebNav Challenge
In the previous experiments, the planning aspect of the task corresponded to 2D navigation. We now
consider a more general domain: WebNav [23], a language-based search task on a graph.
In WebNav [23], the agent needs to navigate the links of a website towards a goal web-page, specified
by a short 4-sentence query. At each state s (web-page), the agent can observe average word-embedding features of the state φ(s) and possible next states φ(s′) (linked pages), and the features of the query φ(q), and based on that has to select which link to follow. In [23], the search was performed
on the Wikipedia website. Here, we report experiments on the "Wikipedia for Schools" website, a
simplified Wikipedia designed for children, with over 6000 pages and at most 292 links per page.
In [23], an NN-based policy was proposed, which first learns an NN mapping from (φ(s), φ(q)) to a hidden state vector h. The action is then selected according to π(s′ | φ(s), φ(q)) ∝ exp(h⊤φ(s′)). In
essence, this policy is reactive, and relies on the word embedding features at each state to contain
meaningful information about the path to the goal. Indeed, this property naturally holds for an
encyclopedic website that is structured as a tree of categories, sub-categories, sub-sub-categories, etc.
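A sketch of this reactive selection rule; the network producing h from (φ(s), φ(q)) is omitted, and `next_feats` stacks the candidate link features φ(s′) as rows:

```python
import numpy as np

def select_link(h, next_feats):
    """Score each linked page s' by exp(h . phi(s')) and normalize, as in
    the reactive policy of [23] described above."""
    scores = next_feats @ h                 # one logit per candidate link
    probs = np.exp(scores - scores.max())   # numerically stable softmax
    probs /= probs.sum()
    return int(np.argmax(probs)), probs
```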
We sought to explore whether planning, based on a VIN, can lead to better performance in this task,
with the intuition that a plan on a simplified model of the website can help guide the reactive policy in
difficult queries. Therefore, we designed a VIN that plans on a small subset of the graph that contains
only the 1st and 2nd level categories (< 3% of the graph), and their word-embedding features.
Designing this VIN requires a different approach from the grid-world VINs described earlier, where
the most challenging aspect is to define a meaningful mapping between nodes in the true graph and
nodes in the smaller VIN graph. For the reward mapping fR , we chose a weighted similarity measure
between the query features φ(q) and the features of nodes in the small graph φ(s̄). Thus, intuitively,
nodes that are similar to the query should have high reward. The transitions were fixed based on the
graph connectivity of the smaller VIN graph, which is known, though different from the true graph.
The attention module was also based on a weighted similarity measure between the features of the
possible next states φ(s′) and the features of each node in the simplified graph φ(s̄). The reactive
policy part of the VIN was similar to the policy of [23] described above. Note that by training such a
VIN end-to-end, we are effectively learning how to exploit the small graph for doing better planning
on the true, large graph.
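A sketch of one plausible form of this reward mapping; the exact parameterization of the weighted similarity is not given in the text, so the learnable diagonal weighting `w` below is an assumption:

```python
import numpy as np

def reward_map(query_feat, node_feats, w):
    """Weighted similarity between the query features phi(q) and the
    features of each node in the small (1st/2nd-level category) graph:
    nodes similar to the query receive high reward."""
    sims = node_feats @ (w * query_feat)    # one score per small-graph node
    norms = np.linalg.norm(node_feats, axis=1) * np.linalg.norm(query_feat)
    return sims / np.maximum(norms, 1e-8)   # cosine-style normalization
```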
Both the VIN policy and the baseline reactive policy were trained by supervised learning, on random
trajectories that start from the root node of the graph. Similarly to [23], a policy is said to succeed a
query if all the correct predictions along the path are within its top-4 predictions.
After training, the VIN policy performed mildly better than the baseline on 2000 held-out test queries
when starting from the root node, achieving 1030 successful runs vs. 1025 for the baseline. However,
when we tested the policies on a harder task of starting from a random position in the graph, VINs
significantly outperformed the baseline, achieving 346 successful runs vs. 304 for the baseline, out of
4000 test queries. These results confirm that indeed, when navigating a tree of categories from the
root up, the features at each state contain meaningful information about the path to the goal, making
a reactive policy sufficient. However, when starting the navigation from a different state, a reactive
policy may fail to understand that it needs to first go back to the root and switch to a different branch
in the tree. Our results indicate such a strategy can be better represented by a VIN.
We remark that there is still room for further improvements of the WebNav results, e.g., by better
models for reward and attention functions, and better word-embedding representations of text.
5 Conclusion and Outlook
The introduction of powerful and scalable RL methods has opened up a range of new problems
for deep learning. However, few recent works investigate policy architectures that are specifically
tailored for planning under uncertainty, and current RL theory and benchmarks rarely investigate the
generalization properties of a trained policy [27, 21, 5]. This work takes a step in this direction, by
exploring better generalizing policy representations.
Our VIN policies learn an approximate planning computation relevant for solving the task, and we
have shown that such a computation leads to better generalization in a diverse set of tasks, ranging
from simple gridworlds that are amenable to value iteration, to continuous control, and even to
navigation of Wikipedia links. In future work we intend to learn different planning computations,
based on simulation [10], or optimal linear control [30], and combine them with reactive policies, to
potentially develop RL solutions for task and motion planning [14].
Acknowledgments
This research was funded in part by Siemens, by ONR through a PECASE award, by the Army
Research Office through the MAST program, and by an NSF CAREER grant. A. T. was partially
funded by the Viterbi Scholarship, Technion. Y. W. was partially funded by a DARPA PPAML
program, contract FA8750-14-C-0011.
References
[1] R. Bellman. Dynamic Programming. Princeton University Press, 1957.
[2] D. Bertsekas. Dynamic Programming and Optimal Control, Vol II. Athena Scientific, 4th edition, 2012.
[3] D. Ciresan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In Computer Vision and Pattern Recognition, pages 3642–3649, 2012.
[4] M. Deisenroth and C. E. Rasmussen. Pilco: A model-based and data-efficient approach to policy search. In ICML, 2011.
[5] Y. Duan, X. Chen, R. Houthooft, J. Schulman, and P. Abbeel. Benchmarking deep reinforcement learning for continuous control. arXiv preprint arXiv:1604.06778, 2016.
[6] C. Farabet, C. Couprie, L. Najman, and Y. LeCun. Learning hierarchical features for scene labeling. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1915–1929, 2013.
[7] C. Finn, M. Zhang, J. Fu, X. Tan, Z. McCarthy, E. Scharff, and S. Levine. Guided policy search code implementation, 2016. Software available from rll.berkeley.edu/gps.
[8] K. Fukushima. Neural network model for a mechanism of pattern recognition unaffected by shift in position: neocognitron. Transactions of the IECE, J62-A(10):658–665, 1979.
[9] A. Giusti et al. A machine learning approach to visual perception of forest trails for mobile robots. IEEE Robotics and Automation Letters, 2016.
[10] X. Guo, S. Singh, H. Lee, R. L. Lewis, and X. Wang. Deep learning for real-time Atari game play using offline Monte-Carlo tree search planning. In NIPS, 2014.
[11] X. Guo, S. Singh, R. Lewis, and H. Lee. Deep learning for reward design to improve Monte Carlo tree search in Atari games. arXiv:1604.07095, 2016.
[12] R. Ilin, R. Kozma, and P. J. Werbos. Efficient learning in cellular simultaneous recurrent neural networks: the case of maze navigation problem. In ADPRL, 2007.
[13] J. Joseph, A. Geramifard, J. W. Roberts, J. P. How, and N. Roy. Reinforcement learning with misspecified model classes. In ICRA, 2013.
[14] L. P. Kaelbling and T. Lozano-Pérez. Hierarchical task and motion planning in the now. In IEEE International Conference on Robotics and Automation (ICRA), pages 1470–1477, 2011.
[15] A. Krizhevsky, I. Sutskever, and G. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
[16] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[17] S. Levine and P. Abbeel. Learning neural network policies with guided policy search under unknown dynamics. In NIPS, 2014.
[18] S. Levine, C. Finn, T. Darrell, and P. Abbeel. End-to-end training of deep visuomotor policies. JMLR, 17, 2016.
[19] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, pages 3431–3440, 2015.
[20] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. arXiv preprint arXiv:1602.01783, 2016.
[21] V. Mnih, K. Kavukcuoglu, D. Silver, A. Rusu, J. Veness, M. Bellemare, A. Graves, M. Riedmiller, A. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[22] G. Neu and C. Szepesvári. Apprenticeship learning using inverse reinforcement learning and gradient methods. In UAI, 2007.
[23] R. Nogueira and K. Cho. WebNav: A new large-scale task for natural language based sequential decision making. arXiv preprint arXiv:1602.02261, 2016.
[24] S. Ross, G. Gordon, and A. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS, 2011.
[25] J. Schmidhuber. An on-line algorithm for dynamic reinforcement learning and planning in reactive environments. In International Joint Conference on Neural Networks. IEEE, 1990.
[26] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz. Trust region policy optimization. In ICML, 2015.
[27] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[28] Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv e-prints, abs/1605.02688, May 2016.
[29] E. Todorov, T. Erez, and Y. Tassa. MuJoCo: A physics engine for model-based control. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages 5026–5033. IEEE, 2012.
[30] M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In NIPS, 2015.
[31] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. In ICML, 2015.
5,577 | 6,047 | Global Analysis of Expectation Maximization
for Mixtures of Two Gaussians
Ji Xu
Columbia University
[email protected]
Daniel Hsu
Columbia University
[email protected]
Arian Maleki
Columbia University
[email protected]
Abstract
Expectation Maximization (EM) is among the most popular algorithms for estimating parameters of statistical models. However, EM, which is an iterative algorithm
based on the maximum likelihood principle, is generally only guaranteed to find
stationary points of the likelihood objective, and these points may be far from any
maximizer. This article addresses this disconnect between the statistical principles
behind EM and its algorithmic properties. Specifically, it provides a global analysis
of EM for specific models in which the observations comprise an i.i.d. sample
from a mixture of two Gaussians. This is achieved by (i) studying the sequence of
parameters from idealized execution of EM in the infinite sample limit, and fully
characterizing the limit points of the sequence in terms of the initial parameters;
and then (ii) based on this convergence analysis, establishing statistical consistency
(or lack thereof) for the actual sequence of parameters produced by EM.
1 Introduction
Since Fisher's 1922 paper (Fisher, 1922), maximum likelihood estimators (MLE) have become one
of the most popular tools in many areas of science and engineering. The asymptotic consistency
and optimality of MLEs have provided users with the confidence that, at least in some sense, there
is no better way to estimate parameters for many standard statistical models. Despite its appealing
properties, computing the MLE is often intractable. Indeed, this is the case for many latent variable
models $\{f(\mathbf{Y}, z; \theta)\}$, where the latent variables $z$ are not observed. For each setting of the parameters $\theta$, the marginal distribution of the observed data $\mathbf{Y}$ is (for discrete $z$)
$$f(\mathbf{Y}; \theta) = \sum_z f(\mathbf{Y}, z; \theta)\,.$$
It is this marginalization over latent variables that typically causes the computational difficulty.
Furthermore, many algorithms based on the MLE principle are only known to find stationary points
of the likelihood objective (e.g., local maxima), and these points are not necessarily the MLE.
1.1 Expectation Maximization
Among the algorithms mentioned above, Expectation Maximization (EM) has attracted more attention
for the simplicity of its iterations, and its good performance in practice (Dempster et al., 1977; Redner
and Walker, 1984). EM is an iterative algorithm for climbing the likelihood objective starting from an initial setting of the parameters $\hat{\theta}^{\langle 0 \rangle}$. In iteration $t$, EM performs the following steps:
$$\text{E-step:}\quad Q(\theta \mid \hat{\theta}^{\langle t \rangle}) \triangleq \sum_z f(z \mid \mathbf{Y}; \hat{\theta}^{\langle t \rangle}) \log f(\mathbf{Y}, z; \theta)\,, \tag{1}$$
$$\text{M-step:}\quad \hat{\theta}^{\langle t+1 \rangle} \triangleq \arg\max_\theta\, Q(\theta \mid \hat{\theta}^{\langle t \rangle})\,. \tag{2}$$
In many applications, each step is intuitive and can be performed very efficiently.
Despite the popularity of EM, as well as the numerous theoretical studies of its behavior, many important questions about its performance, such as its convergence rate and accuracy, have remained
unanswered. The goal of this paper is to address these questions for specific models (described in
Section 1.2) in which the observation Y is an i.i.d. sample from a mixture of two Gaussians.
Towards this goal, we study an idealized execution of EM in the large sample limit, where the E-step
is modified to be computed over an infinitely large i.i.d. sample from a Gaussian mixture distribution
in the model. In effect, in the formula for $Q(\theta \mid \hat{\theta}^{\langle t \rangle})$, we replace the observed data $\mathbf{Y}$ with a random variable $\mathbf{Y} \sim f(y; \theta^\star)$ for some Gaussian mixture parameters $\theta^\star$ and then take its expectation. The resulting E- and M-steps in iteration $t$ are
$$\text{E-step:}\quad Q(\theta \mid \theta^{\langle t \rangle}) \triangleq \mathbb{E}_{\mathbf{Y}}\Big[\sum_z f(z \mid \mathbf{Y}; \theta^{\langle t \rangle}) \log f(\mathbf{Y}, z; \theta)\Big]\,, \tag{3}$$
$$\text{M-step:}\quad \theta^{\langle t+1 \rangle} \triangleq \arg\max_\theta\, Q(\theta \mid \theta^{\langle t \rangle})\,. \tag{4}$$
This sequence of parameters $(\theta^{\langle t \rangle})_{t \ge 0}$ is fully determined by the initial setting $\theta^{\langle 0 \rangle}$. We refer to
this idealization as Population EM, a procedure considered in previous works of Srebro (2007) and
Balakrishnan et al. (2014). Not only does Population EM shed light on the dynamics of EM in
the large sample limit, but it can also reveal some of the fundamental limitations of EM. Indeed, if
Population EM cannot provide an accurate estimate for the parameters ? ? , then intuitively, one would
not expect the EM algorithm with a finite sample size to do so either. (To avoid confusion, we refer
the original EM algorithm run with a finite sample as Sample-based EM.)
1.2 Models and Main Contributions
In this paper, we study EM in the context of two simple yet popular and well-studied Gaussian
mixture models. The two models, along with the corresponding Sample-based EM and Population
EM updates, are as follows:
Model 1. The observation $\mathbf{Y}$ is an i.i.d. sample from the mixture distribution $0.5\,\mathcal{N}(-\theta^\star, \Sigma) + 0.5\,\mathcal{N}(\theta^\star, \Sigma)$; $\Sigma$ is a known covariance matrix, and $\theta^\star \in \mathbb{R}^d$ is the unknown parameter of interest.
1. Sample-based EM iteratively updates its estimate of $\theta^\star$ according to the following equation:
$$\hat{\theta}^{\langle t+1 \rangle} = \frac{1}{n} \sum_{i=1}^n \big(2\, w_d(y_i, \hat{\theta}^{\langle t \rangle}) - 1\big)\, y_i\,, \tag{5}$$
where $y_1, \dots, y_n$ are the independent draws that comprise $\mathbf{Y}$,
$$w_d(y, \theta) \triangleq \frac{\phi_d(y - \theta)}{\phi_d(y - \theta) + \phi_d(y + \theta)}\,,$$
and $\phi_d$ is the density of a Gaussian random vector with mean $0$ and covariance $\Sigma$.
2. Population EM iteratively updates its estimate according to the following equation:
$$\theta^{\langle t+1 \rangle} = \mathbb{E}\big(2\, w_d(\mathbf{Y}, \theta^{\langle t \rangle}) - 1\big)\mathbf{Y}\,, \tag{6}$$
where $\mathbf{Y} \sim 0.5\,\mathcal{N}(-\theta^\star, \Sigma) + 0.5\,\mathcal{N}(\theta^\star, \Sigma)$.
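For concreteness, one Sample-based EM step for Model 1 (equation (5)) can be written in a few lines. The sketch below uses the identity that, for a common covariance, $\log \phi_d(y - \theta) - \log \phi_d(y + \theta) = 2\, y^\top \Sigma^{-1} \theta$, so $w_d$ reduces to a logistic function:

```python
import numpy as np
from scipy.special import expit

def em_step_model1(Y, theta, Sigma_inv):
    """One Sample-based EM update for Model 1, equation (5). Y is an (n, d)
    array of draws from 0.5 N(-theta*, Sigma) + 0.5 N(theta*, Sigma); w_d
    is the logistic function of 2 y^T Sigma^{-1} theta."""
    w = expit(2.0 * Y @ (Sigma_inv @ theta))        # w_d(y_i, theta)
    return np.mean((2.0 * w - 1.0)[:, None] * Y, axis=0)
```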
Model 2. The observation $\mathbf{Y}$ is an i.i.d. sample from the mixture distribution $0.5\,\mathcal{N}(\mu_1^\star, \Sigma) + 0.5\,\mathcal{N}(\mu_2^\star, \Sigma)$. Again, $\Sigma$ is known, and $(\mu_1^\star, \mu_2^\star)$ are the unknown parameters of interest.
1. Sample-based EM iteratively updates its estimates of $\mu_1^\star$ and $\mu_2^\star$ at every iteration according to the following equations:
$$\hat{\mu}_1^{\langle t+1 \rangle} = \frac{\sum_{i=1}^n v_d(y_i, \hat{\mu}_1^{\langle t \rangle}, \hat{\mu}_2^{\langle t \rangle})\, y_i}{\sum_{i=1}^n v_d(y_i, \hat{\mu}_1^{\langle t \rangle}, \hat{\mu}_2^{\langle t \rangle})}\,, \tag{7}$$
$$\hat{\mu}_2^{\langle t+1 \rangle} = \frac{\sum_{i=1}^n \big(1 - v_d(y_i, \hat{\mu}_1^{\langle t \rangle}, \hat{\mu}_2^{\langle t \rangle})\big)\, y_i}{\sum_{i=1}^n \big(1 - v_d(y_i, \hat{\mu}_1^{\langle t \rangle}, \hat{\mu}_2^{\langle t \rangle})\big)}\,, \tag{8}$$
where $y_1, \dots, y_n$ are the independent draws that comprise $\mathbf{Y}$, and
$$v_d(y, \mu_1, \mu_2) \triangleq \frac{\phi_d(y - \mu_1)}{\phi_d(y - \mu_1) + \phi_d(y - \mu_2)}\,.$$
2. Population EM iteratively updates its estimates according to the following equations:
$$\mu_1^{\langle t+1 \rangle} = \frac{\mathbb{E}\, v_d(\mathbf{Y}, \mu_1^{\langle t \rangle}, \mu_2^{\langle t \rangle})\,\mathbf{Y}}{\mathbb{E}\, v_d(\mathbf{Y}, \mu_1^{\langle t \rangle}, \mu_2^{\langle t \rangle})}\,, \tag{9}$$
$$\mu_2^{\langle t+1 \rangle} = \frac{\mathbb{E}\,\big(1 - v_d(\mathbf{Y}, \mu_1^{\langle t \rangle}, \mu_2^{\langle t \rangle})\big)\,\mathbf{Y}}{\mathbb{E}\,\big(1 - v_d(\mathbf{Y}, \mu_1^{\langle t \rangle}, \mu_2^{\langle t \rangle})\big)}\,, \tag{10}$$
where $\mathbf{Y} \sim 0.5\,\mathcal{N}(\mu_1^\star, \Sigma) + 0.5\,\mathcal{N}(\mu_2^\star, \Sigma)$.
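A corresponding sketch of one Sample-based EM step for Model 2, implementing equations (7)-(8) directly:

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_step_model2(Y, mu1, mu2, Sigma):
    """One Sample-based EM update for Model 2: compute the posterior weight
    v_d(y_i, mu1, mu2) of the first component and take the two weighted
    averages of equations (7)-(8)."""
    p1 = multivariate_normal.pdf(Y, mean=mu1, cov=Sigma)
    p2 = multivariate_normal.pdf(Y, mean=mu2, cov=Sigma)
    v = p1 / (p1 + p2)                              # v_d(y_i, mu1, mu2)
    mu1_new = (v[:, None] * Y).sum(0) / v.sum()
    mu2_new = ((1 - v)[:, None] * Y).sum(0) / (1 - v).sum()
    return mu1_new, mu2_new
```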
Our main contribution in this paper is a new characterization of the stationary points and dynamics of
EM in both of the above models.
1. We prove convergence for the sequence of iterates for Population EM from each model:
the sequence $(\theta^{\langle t \rangle})_{t \ge 0}$ converges to either $\theta^\star$, $-\theta^\star$, or $0$; the sequence $((\mu_1^{\langle t \rangle}, \mu_2^{\langle t \rangle}))_{t \ge 0}$ converges to either $(\mu_1^\star, \mu_2^\star)$, $(\mu_2^\star, \mu_1^\star)$, or $((\mu_1^\star + \mu_2^\star)/2, (\mu_1^\star + \mu_2^\star)/2)$. We also fully
characterize the initial parameter settings that lead to each limit point.
2. Using this convergence result for Population EM, we also prove that the limits of the Sample-based EM iterates converge in probability to the unknown parameters of interest, as long
as Sample-based EM is initialized at points where Population EM would converge to these
parameters as well.
Formal statements of our results are given in Section 2.
1.3 Background and Related Work
The EM algorithm was formally introduced by Dempster et al. (1977) as a general iterative method
for computing parameter estimates from incomplete data. Although EM is billed as a procedure for
maximum likelihood estimation, it is known that with certain initializations, the final parameters
returned by EM may be far from the MLE, both in parameter distance and in log-likelihood value (Wu,
1983). Several works characterize convergence of EM to stationary points of the log-likelihood
objective under certain regularity conditions (Wu, 1983; Tseng, 2004; Vaida, 2005; Chrétien and
Hero, 2008). However, these analyses do not distinguish between global maximizers and other
stationary points (except, e.g., when the likelihood function is unimodal). Thus, as an optimization
algorithm for maximizing the log-likelihood objective, the "worst-case" performance of EM is
somewhat discouraging.
For a more optimistic perspective on EM, one may consider a "best-case" analysis, where (i) the
data are an iid sample from a distribution in the given model, (ii) the sample size is sufficiently
large, and (iii) the starting point for EM is sufficiently close to the parameters of the data generating
distribution. Conditions (i) and (ii) are ubiquitous in (asymptotic) statistical analyses, and (iii) is a
generous assumption that may be satisfied in certain cases. Redner and Walker (1984) show that
in such a favorable scenario, EM converges to the MLE almost surely for a broad class of mixture
models. Moreover, recent work of Balakrishnan et al. (2014) gives non-asymptotic convergence
guarantees in certain models; importantly, these results permit one to quantify the accuracy of a
pilot estimator required to effectively initialize EM. Thus, EM may be used in a tractable two-stage
estimation procedures given a first-stage pilot estimator that can be efficiently computed.
Indeed, for the special case of Gaussian mixtures, researchers in theoretical computer science and
machine learning have developed efficient algorithms that deliver the highly accurate parameter
estimates under appropriate conditions. Several of these algorithms, starting with that of Dasgupta
(1999), assume that the means of the mixture components are well-separated: roughly at distance either $d^\alpha$ or $k^\beta$ for some $\alpha, \beta > 0$, for a mixture of $k$ Gaussians in $\mathbb{R}^d$ (Dasgupta, 1999; Arora
and Kannan, 2005; Dasgupta and Schulman, 2007; Vempala and Wang, 2004; Kannan et al., 2008;
Achlioptas and McSherry, 2005; Chaudhuri and Rao, 2008; Brubaker and Vempala, 2008; Chaudhuri
et al., 2009a). More recent work employs the method-of-moments, which permits the means of the
mixture components to be arbitrarily close, provided that the sample size is sufficiently large (Kalai
et al., 2010; Belkin and Sinha, 2010; Moitra and Valiant, 2010; Hsu and Kakade, 2013; Hardt and
Price, 2015). In particular, Hardt and Price (2015) characterize the information-theoretic limits of
parameter estimation for mixtures of two Gaussians, and that they are achieved by a variant of the
original method-of-moments of Pearson (1894).
Most relevant to this paper are works that specifically analyze EM (or variants thereof) for Gaussian
mixture models, especially when the mixture components are well-separated. Xu and Jordan (1996)
show favorable convergence properties (akin to super-linear convergence near the MLE) for well-separated mixtures. In a related but different vein, Dasgupta and Schulman (2007) analyze a variant
of EM with a particular initialization scheme, and proves fast convergence to the true parameters,
again for well-separated mixtures in high-dimensions. For mixtures of two Gaussians, it is possible to
exploit symmetries to get sharper analyses. Indeed, Chaudhuri et al. (2009b) use these symmetries to prove that a variant of Lloyd's algorithm (MacQueen, 1967; Lloyd, 1982) (which may be regarded as
a hard-assignment version of EM) very quickly converges to the subspace spanned by the two mixture
component means, without any separation assumption. Lastly, for the specific case of our Model
1, Balakrishnan et al. (2014) proves linear convergence of EM (as well as a gradient-based variant
of EM) when started in a sufficiently small neighborhood around the true parameters, assuming a
minimum separation between the mixture components. Here, the permitted size of the neighborhood
grows with the separation between the components, and a recent result of Klusowski and Brinda
(2016) quantitatively improves this aspect of the analysis (but still requires a minimum separation).
Remarkably, by focusing attention on the local region around the true parameters, they obtain nonasymptotic bounds on the parameter estimation error. Our work is complementary to their results
in that we focus on asymptotic limits rather than finite sample analysis. This allows us to provide a
global analysis of EM without separation or initialization conditions, which cannot be deduced from
the results of Balakrishnan et al. or Klusowski and Brinda by taking limits.
Finally, two related works have appeared following the initial posting of this article (Xu et al., 2016).
First, Daskalakis et al. (2016) concurrently and independently proved a convergence result comparable
to our Theorem 1 for Model 1; for this case, they also provide an explicit rate of linear convergence.
Second, Jin et al. (2016) show that similar results do not hold in general for uniform mixtures of
three or more spherical Gaussian distributions: common initialization schemes for (Population or
Sample-based) EM may lead to local maxima that are arbitrarily far from the global maximizer.
Similar results were well-known for Lloyd's algorithm, but were not previously established for
Population EM (Srebro, 2007).
2 Analysis of EM for Mixtures of Two Gaussians
In this section, we present our results for Population EM and Sample-based EM under both Model
1 and Model 2, and also discuss further implications about the expected log-likelihood function.
Without loss of generality, we may assume that the known covariance matrix $\Sigma$ is the identity matrix $I_d$. Throughout, we denote the Euclidean norm by $\|\cdot\|$, and the signum function by $\operatorname{sgn}(\cdot)$ (where $\operatorname{sgn}(0) = 0$, $\operatorname{sgn}(z) = 1$ if $z > 0$, and $\operatorname{sgn}(z) = -1$ if $z < 0$).
2.1 Main Results for Population EM
We present results for Population EM for both models, starting with Model 1.
Theorem 1. Assume $\theta^\star \in \mathbb{R}^d \setminus \{0\}$. Let $(\theta^{\langle t \rangle})_{t \ge 0}$ denote the Population EM iterates for Model 1, and suppose $\langle \theta^{\langle 0 \rangle}, \theta^\star \rangle \ne 0$. There exists $\rho_\theta \in (0, 1)$, depending only on $\theta^\star$ and $\theta^{\langle 0 \rangle}$, such that
$$\big\| \theta^{\langle t+1 \rangle} - \operatorname{sgn}(\langle \theta^{\langle 0 \rangle}, \theta^\star \rangle)\, \theta^\star \big\| \le \rho_\theta \cdot \big\| \theta^{\langle t \rangle} - \operatorname{sgn}(\langle \theta^{\langle 0 \rangle}, \theta^\star \rangle)\, \theta^\star \big\|\,.$$
The proof of Theorem 1 and all other omitted proofs are given in the full version of this article (Xu
et al., 2016). Theorem 1 asserts that if $\theta^{\langle 0 \rangle}$ is not on the hyperplane $\{x \in \mathbb{R}^d : \langle x, \theta^\star \rangle = 0\}$, then the sequence $(\theta^{\langle t \rangle})_{t \ge 0}$ converges to either $\theta^\star$ or $-\theta^\star$.
Our next result shows that if $\langle \theta^{\langle 0 \rangle}, \theta^\star \rangle = 0$, then $(\theta^{\langle t \rangle})_{t \ge 0}$ still converges, albeit to $0$.
Theorem 2. Let $(\theta^{\langle t \rangle})_{t \ge 0}$ denote the Population EM iterates for Model 1. If $\langle \theta^{\langle 0 \rangle}, \theta^\star \rangle = 0$, then
$$\theta^{\langle t \rangle} \to 0 \quad \text{as } t \to \infty\,.$$
Theorems 1 and 2 together characterize the fixed points of Population EM for Model 1, and fully
specify the conditions under which each fixed point is reached. The results are simply summarized in
the following corollary.
Corollary 1. If $(\theta^{\langle t \rangle})_{t \ge 0}$ denote the Population EM iterates for Model 1, then
$$\theta^{\langle t \rangle} \to \operatorname{sgn}(\langle \theta^{\langle 0 \rangle}, \theta^\star \rangle)\, \theta^\star \quad \text{as } t \to \infty\,.$$
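As a quick numerical check of Corollary 1 (a sketch added here for illustration, not part of the paper), the one-dimensional Population EM map with $\Sigma = 1$ can be evaluated by quadrature, and the iterates indeed approach $\operatorname{sgn}(\theta^{\langle 0 \rangle} \theta^\star)\,\theta^\star$:

```python
import numpy as np

def population_em_model1_1d(theta0, theta_star, iters=50, n_grid=20001):
    """Evaluate the Population EM map theta -> E[(2 w(Y, theta) - 1) Y] for
    Model 1 in d = 1 with unit variance by quadrature over a fine grid."""
    lim = 10.0 * (1.0 + abs(theta_star))
    y = np.linspace(-lim, lim, n_grid)
    dens = 0.5 * (np.exp(-0.5 * (y - theta_star) ** 2)
                  + np.exp(-0.5 * (y + theta_star) ** 2)) / np.sqrt(2 * np.pi)
    theta = theta0
    for _ in range(iters):
        w = 0.5 * (1.0 + np.tanh(y * theta))   # w(y, theta) = sigmoid(2 y theta)
        theta = np.trapz((2.0 * w - 1.0) * y * dens, y)
    return theta
```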
We now discuss Population EM with Model 2. To state our results more concisely, we use the
following re-parameterization of the model parameters and Population EM iterates:
$$a^{\langle t \rangle} \triangleq \frac{\mu_1^{\langle t \rangle} + \mu_2^{\langle t \rangle}}{2} - \frac{\mu_1^\star + \mu_2^\star}{2}\,, \qquad b^{\langle t \rangle} \triangleq \frac{\mu_2^{\langle t \rangle} - \mu_1^{\langle t \rangle}}{2}\,, \qquad \beta^\star \triangleq \frac{\mu_2^\star - \mu_1^\star}{2}\,. \tag{11}$$
If the sequence of Population EM iterates $((\mu_1^{\langle t \rangle}, \mu_2^{\langle t \rangle}))_{t \ge 0}$ converges to $(\mu_1^\star, \mu_2^\star)$, then we expect $b^{\langle t \rangle} \to \beta^\star$. Hence, we also define $\alpha^{\langle t \rangle}$ as the angle between $b^{\langle t \rangle}$ and $\beta^\star$, i.e.,
$$\alpha^{\langle t \rangle} \triangleq \arccos\left( \frac{\langle b^{\langle t \rangle}, \beta^\star \rangle}{\|b^{\langle t \rangle}\|\, \|\beta^\star\|} \right) \in [0, \pi]\,.$$
(This is well-defined as long as $b^{\langle t \rangle} \ne 0$ and $\beta^\star \ne 0$.)
We first present results on Population EM with Model 2 under the initial condition $\langle b^{\langle 0 \rangle}, \beta^\star \rangle \ne 0$.
Theorem 3. Assume $\beta^\star \in \mathbb{R}^d \setminus \{0\}$. Let $(a^{\langle t \rangle}, b^{\langle t \rangle})_{t \ge 0}$ denote the (re-parameterized) Population EM iterates for Model 2, and suppose $\langle b^{\langle 0 \rangle}, \beta^\star \rangle \ne 0$. Then $b^{\langle t \rangle} \ne 0$ for all $t \ge 0$. Furthermore, there exist $\rho_a \in (0, 1)$, depending only on $\|\beta^\star\|$ and $|\langle b^{\langle 0 \rangle}, \beta^\star \rangle|/\|b^{\langle 0 \rangle}\|$, and $\rho_\alpha \in (0, 1)$, depending only on $\|\beta^\star\|$, $\langle b^{\langle 0 \rangle}, \beta^\star \rangle/\|b^{\langle 0 \rangle}\|$, $\|a^{\langle 0 \rangle}\|$, and $\|b^{\langle 0 \rangle}\|$, such that
$$\|a^{\langle t+1 \rangle}\|^2 \le \rho_a^2 \cdot \|a^{\langle t \rangle}\|^2 + \frac{\|\beta^\star\|^2 \sin^2(\alpha^{\langle t \rangle})}{4}\,, \qquad \sin(\alpha^{\langle t+1 \rangle}) \le \rho_\alpha^t \cdot \sin(\alpha^{\langle 0 \rangle})\,.$$
By combining the two inequalities from Theorem 3, we conclude
$$\|a^{\langle t+1 \rangle}\|^2 \le \rho_a^{2t}\, \|a^{\langle 0 \rangle}\|^2 + \frac{\|\beta^\star\|^2}{4} \sum_{\tau=0}^{t} \rho_a^{2\tau}\, \sin^2(\alpha^{\langle t-\tau \rangle}) \le \rho_a^{2t}\, \|a^{\langle 0 \rangle}\|^2 + \frac{\|\beta^\star\|^2}{4} \sum_{\tau=0}^{t} \rho_a^{2\tau}\, \rho_\alpha^{2(t-\tau)}\, \sin^2(\alpha^{\langle 0 \rangle}) \le \rho_a^{2t}\, \|a^{\langle 0 \rangle}\|^2 + \frac{\|\beta^\star\|^2}{4}\, t\, \max\{\rho_a, \rho_\alpha\}^{2t}\, \sin^2(\alpha^{\langle 0 \rangle})\,.$$
Theorem 3 shows that the re-parameterized Population EM iterates converge, at a linear rate, to the
average of the two means $(\mu_1^\star + \mu_2^\star)/2$, as well as the line spanned by $\beta^\star$. The theorem, however, does not provide any information on the convergence of the magnitude of $b^{\langle t \rangle}$ to the magnitude of $\beta^\star$.
This is given in the next theorem.
Theorem 4. Assume $\beta^\star \in \mathbb{R}^d \setminus \{0\}$. Let $(a^{\langle t \rangle}, b^{\langle t \rangle})_{t \ge 0}$ denote the (re-parameterized) Population EM iterates for Model 2, and suppose $\langle b^{\langle 0 \rangle}, \beta^\star \rangle \ne 0$. Then there exist $T_0 > 0$, $\rho_b \in (0, 1)$, and $c_b > 0$, all depending only on $\|\beta^\star\|$, $|\langle b^{\langle 0 \rangle}, \beta^\star \rangle|/\|b^{\langle 0 \rangle}\|$, $\|a^{\langle 0 \rangle}\|$, and $\|b^{\langle 0 \rangle}\|$, such that
$$\big\| b^{\langle t+1 \rangle} - \operatorname{sgn}(\langle b^{\langle 0 \rangle}, \beta^\star \rangle)\, \beta^\star \big\|^2 \le \rho_b^2 \cdot \big\| b^{\langle t \rangle} - \operatorname{sgn}(\langle b^{\langle 0 \rangle}, \beta^\star \rangle)\, \beta^\star \big\|^2 + c_b \cdot \|a^{\langle t \rangle}\| \quad \forall\, t > T_0\,.$$
If $\langle b^{\langle 0 \rangle}, \beta^\star \rangle = 0$, then we show convergence of the (re-parameterized) Population EM iterates to the degenerate solution $(0, 0)$.
Theorem 5. Let $(a^{\langle t \rangle}, b^{\langle t \rangle})_{t \ge 0}$ denote the (re-parameterized) Population EM iterates for Model 2. If $\langle b^{\langle 0 \rangle}, \beta^\star \rangle = 0$, then
$$(a^{\langle t \rangle}, b^{\langle t \rangle}) \to (0, 0) \quad \text{as } t \to \infty\,.$$
Theorems 3, 4, and 5 together characterize the fixed points of Population EM for Model 2, and fully
specify the conditions under which each fixed point is reached. The results are simply summarized in
the following corollary.
Corollary 2. If $(a^{\langle t \rangle}, b^{\langle t \rangle})_{t \ge 0}$ denote the (re-parameterized) Population EM iterates for Model 2, then
$$a^{\langle t \rangle} \to 0\,, \;\text{ i.e., } \frac{\mu_1^{\langle t \rangle} + \mu_2^{\langle t \rangle}}{2} \to \frac{\mu_1^\star + \mu_2^\star}{2}\,, \quad \text{as } t \to \infty\,,$$
$$b^{\langle t \rangle} \to \operatorname{sgn}\big(\langle b^{\langle 0 \rangle}, \mu_2^\star - \mu_1^\star \rangle\big)\, \frac{\mu_2^\star - \mu_1^\star}{2} \quad \text{as } t \to \infty\,.$$
2.2 Main Results for Sample-based EM
Using the results on Population EM presented in the above section, we can now establish consistency
of (Sample-based) EM. We focus attention on Model 2, as the same results for Model 1 easily follow
as a corollary. First, we state a simple connection between the Population EM and Sample-based EM
iterates.
Theorem 6. Suppose Population EM and Sample-based EM for Model 2 have the same initial parameters: $\hat{\mu}_1^{\langle 0 \rangle} = \mu_1^{\langle 0 \rangle}$ and $\hat{\mu}_2^{\langle 0 \rangle} = \mu_2^{\langle 0 \rangle}$. Then for each iteration $t \ge 0$,
$$\hat{\mu}_1^{\langle t \rangle} \to \mu_1^{\langle t \rangle} \quad \text{and} \quad \hat{\mu}_2^{\langle t \rangle} \to \mu_2^{\langle t \rangle} \quad \text{as } n \to \infty\,,$$
where convergence is in probability.
Note that Theorem 6 does not necessarily imply that the fixed point of Sample-based EM (when initialized at $(\hat{\mu}_1^{\langle 0 \rangle}, \hat{\mu}_2^{\langle 0 \rangle}) = (\mu_1^{\langle 0 \rangle}, \mu_2^{\langle 0 \rangle})$) is the same as that of Population EM. It is conceivable that as $t \to \infty$, the discrepancy between (the iterates of) Sample-based EM and Population EM increases.
We show that this is not the case: the fixed points of Sample-based EM indeed converge to the fixed
points of Population EM.
Theorem 7. Suppose Population EM and Sample-based EM for Model 2 have the same initial parameters: $\hat{\mu}_1^{\langle 0 \rangle} = \mu_1^{\langle 0 \rangle}$ and $\hat{\mu}_2^{\langle 0 \rangle} = \mu_2^{\langle 0 \rangle}$. If $\langle \mu_2^{\langle 0 \rangle} - \mu_1^{\langle 0 \rangle}, \beta^\star \rangle \ne 0$, then
$$\limsup_{t \to \infty} \big\| \hat{\mu}_1^{\langle t \rangle} - \mu_1^{\langle t \rangle} \big\| \to 0 \quad \text{and} \quad \limsup_{t \to \infty} \big\| \hat{\mu}_2^{\langle t \rangle} - \mu_2^{\langle t \rangle} \big\| \to 0 \quad \text{as } n \to \infty\,,$$
2.3
Population EM and Expected Log-likelihood
Do the results we derived in the last section regarding the performance of EM provide any information
on the performance of other ascent algorithms, such as gradient ascent, that aim to maximize the loglikelihood function? To address this question, we show how our analysis can determine the stationary
points of the expected log-likelihood and characterize the shape of the expected log-likelihood in a
neighborhood of the stationary points. Let $G(\theta)$ denote the expected log-likelihood, i.e.,
$$G(\theta) \triangleq \mathbb{E}(\log f_\theta(\mathbf{Y})) = \int f(y; \theta^\star) \log f(y; \theta)\, dy\,,$$
where $\theta^\star$ denotes the true parameter value. Also consider the following standard regularity conditions:
R1 The family of probability density functions $f(y; \theta)$ has common support.
R2 $\nabla_\theta \int f(y; \theta^\star) \log f(y; \theta)\, dy = \int f(y; \theta^\star)\, \nabla_\theta \log f(y; \theta)\, dy$, where $\nabla_\theta$ denotes the gradient with respect to $\theta$.
R3 $\nabla_\theta\, \mathbb{E} \sum_z f(z \mid \mathbf{Y}; \theta^{\langle t \rangle}) \log f(\mathbf{Y}, z; \theta) = \mathbb{E} \sum_z f(z \mid \mathbf{Y}; \theta^{\langle t \rangle})\, \nabla_\theta \log f(\mathbf{Y}, z; \theta)$.
These conditions can be easily confirmed for many models including the Gaussian mixture models.
The following theorem connects the fixed points of the Population EM and the stationary points of
the expected log-likelihood.
Lemma 1. Let $\bar{\theta} \in \mathbb{R}^d$ denote a stationary point of $G(\theta)$. Also assume that $Q(\theta \mid \theta^{\langle t \rangle})$ has a unique and finite stationary point in terms of $\theta$ for every $\theta^{\langle t \rangle}$, and that this stationary point is its global maximum. Then, if the model satisfies conditions R1-R3 and the Population EM algorithm is initialized at $\bar{\theta}$, it will stay at $\bar{\theta}$. Conversely, any fixed point of Population EM is a stationary point of $G(\theta)$.
Proof. Let $\bar{\theta}$ denote a stationary point of $G(\theta)$. We first prove that $\bar{\theta}$ is a stationary point of $Q(\theta \mid \bar{\theta})$:
$$\nabla_\theta Q(\theta \mid \bar{\theta})\Big|_{\theta = \bar{\theta}} = \int \sum_z f(z \mid y; \bar{\theta})\, \frac{\nabla_\theta f(y, z; \theta)\big|_{\theta = \bar{\theta}}}{f(y, z; \bar{\theta})}\, f(y; \theta^\star)\, dy = \int \sum_z \frac{\nabla_\theta f(y, z; \theta)\big|_{\theta = \bar{\theta}}}{f(y; \bar{\theta})}\, f(y; \theta^\star)\, dy = \int \frac{\nabla_\theta f(y; \theta)\big|_{\theta = \bar{\theta}}}{f(y; \bar{\theta})}\, f(y; \theta^\star)\, dy = 0\,,$$
where the last equality uses the fact that $\bar{\theta}$ is a stationary point of $G(\theta)$. Since $Q(\theta \mid \bar{\theta})$ has a unique stationary point, and we have assumed that the unique stationary point is its global maximum, Population EM will stay at that point. The proof of the other direction is similar.
Remark 1. The fact that $\theta^\star$ is the global maximizer of $G(\theta)$ is well-known in the statistics and machine learning literature (e.g., Conniffe, 1987). Furthermore, the fact that $\theta^\star$ is a global maximizer of $Q(\theta \mid \theta^\star)$ is known as the self-consistency property (Balakrishnan et al., 2014).
It is straightforward to confirm the conditions of Lemma 1 for mixtures of Gaussians. This lemma confirms that Population EM may be trapped at any local maximum. However, less intuitively, it may get stuck at local minima or saddle points as well. Our next result characterizes the stationary points of $G(\theta)$ for Model 1.
Corollary 3. $G(\theta)$ has only three stationary points. If $d = 1$ (so $\theta \in \mathbb{R}$), then $0$ is a local minimum of $G(\theta)$, while $\theta^\star$ and $-\theta^\star$ are global maxima. If $d > 1$, then $0$ is a saddle point, and $\theta^\star$ and $-\theta^\star$ are global maxima.
The proof is a straightforward consequence of Lemma 1 and Corollary 1. The phenomenon that Population EM may get stuck at local minima or saddle points also happens in Model 2. We can employ Corollary 2 and Lemma 1 to explain the shape of the expected log-likelihood function $G$. To simplify the notation, we consider the re-parametrization $a \triangleq \frac{\mu_1 + \mu_2}{2}$ and $b \triangleq \frac{\mu_2 - \mu_1}{2}$.
Corollary 4. $G(a, b)$ has three stationary points:
$$\left( \frac{\mu_1^\star + \mu_2^\star}{2},\, \frac{\mu_2^\star - \mu_1^\star}{2} \right), \quad \left( \frac{\mu_1^\star + \mu_2^\star}{2},\, \frac{\mu_1^\star - \mu_2^\star}{2} \right), \quad \text{and} \quad \left( \frac{\mu_1^\star + \mu_2^\star}{2},\, \frac{\mu_1^\star + \mu_2^\star}{2} \right).$$
The first two points are global maxima. The third point is a saddle point.
3 Concluding Remarks
Our analysis of Population EM and Sample-based EM shows that the EM algorithm can, at least
for the Gaussian mixture models studied in this work, compute statistically consistent parameter
estimates. Previous analyses of EM only established such results for specific methods of initializing
EM (e.g., Dasgupta and Schulman, 2007; Balakrishnan et al., 2014); our results show that they are not
really necessary in the large sample limit. However, in any real scenario, the large sample limit may
not accurately characterize the behavior of EM. Therefore, these specific methods for initialization,
as well as non-asymptotic analysis, are clearly still needed to understand and effectively apply EM.
There are several interesting directions concerning EM that we hope to pursue in follow-up work.
The first considers the behavior of EM when the dimension $d = d_n$ may grow with the sample size $n$. Our proof of Theorem 7 reveals that the parameter error of the $t$-th iterate (in Euclidean norm) is of the order $\sqrt{d/n}$ as $t \to \infty$. Therefore, we conjecture that the theorem still holds as long as $d_n = o(n)$. This would be consistent with results from statistical physics on the MLE for Gaussian
Another natural direction is to extend these results to more general Gaussian mixture models (e.g.,
with unequal mixing weights or unequal covariances) and other latent variable models.
Acknowledgements. The second named author thanks Yash Deshpande and Sham Kakade for
many helpful initial discussions. JX and AM were partially supported by NSF grant CCF-1420328.
DH was partially supported by NSF grant DMREF-1534910 and a Sloan Fellowship.
References
D. Achlioptas and F. McSherry. On spectral learning of mixtures of distributions. In Eighteenth Annual Conference on Learning Theory, pages 458–469, 2005.
S. Arora and R. Kannan. Learning mixtures of separated nonspherical Gaussians. The Annals of Applied Probability, 15(1A):69–92, 2005.
S. Balakrishnan, M. J. Wainwright, and B. Yu. Statistical guarantees for the EM algorithm: From population to sample-based analysis. arXiv preprint arXiv:1408.2156, August 2014.
N. Barkai and H. Sompolinsky. Statistical mechanics of the maximum-likelihood density estimation. Physical Review E, 50(3):1766–1769, Sep 1994.
M. Belkin and K. Sinha. Polynomial learning of distribution families. In Fifty-First Annual IEEE Symposium on Foundations of Computer Science, pages 103–112, 2010.
S. C. Brubaker and S. Vempala. Isotropic PCA and affine-invariant clustering. In Forty-Ninth Annual IEEE Symposium on Foundations of Computer Science, 2008.
K. Chaudhuri and S. Rao. Learning mixtures of product distributions using correlations and independence. In Twenty-First Annual Conference on Learning Theory, pages 9–20, 2008.
K. Chaudhuri, S. M. Kakade, K. Livescu, and K. Sridharan. Multi-view clustering via canonical correlation analysis. In ICML, 2009a.
K. Chaudhuri, S. Dasgupta, and A. Vattani. Learning mixtures of Gaussians using the k-means algorithm. CoRR, abs/0912.0086, 2009b.
S. Chrétien and A. O. Hero. On EM algorithms and their proximal generalizations. ESAIM: Probability and Statistics, 12:308–326, May 2008.
D. Conniffe. Expected maximum log likelihood estimation. Journal of the Royal Statistical Society, Series D, 36(4):317–329, 1987.
S. Dasgupta. Learning mixtures of Gaussians. In Fortieth Annual IEEE Symposium on Foundations of Computer Science, pages 634–644, 1999.
S. Dasgupta and L. Schulman. A probabilistic analysis of EM for mixtures of separated, spherical Gaussians. Journal of Machine Learning Research, 8(Feb):203–226, 2007.
C. Daskalakis, C. Tzamos, and M. Zampetakis. Ten steps of EM suffice for mixtures of two Gaussians. arXiv preprint arXiv:1609.00368, September 2016.
A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum-likelihood from incomplete data via the EM algorithm. J. Royal Statist. Soc. Ser. B, 39:1–38, 1977.
R. A. Fisher. On the mathematical foundations of theoretical statistics. Philosophical Transactions of the Royal Society, London, A., 222:309–368, 1922.
M. Hardt and E. Price. Tight bounds for learning a mixture of two Gaussians. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, pages 753–760, 2015.
D. Hsu and S. M. Kakade. Learning mixtures of spherical Gaussians: moment methods and spectral decompositions. In Fourth Innovations in Theoretical Computer Science, 2013.
C. Jin, Y. Zhang, S. Balakrishnan, M. J. Wainwright, and M. Jordan. Local maxima in the likelihood of Gaussian mixture models: Structural results and algorithmic consequences. arXiv preprint arXiv:1609.00978, September 2016.
A. T. Kalai, A. Moitra, and G. Valiant. Efficiently learning mixtures of two Gaussians. In Forty-Second ACM Symposium on Theory of Computing, pages 553–562, 2010.
R. Kannan, H. Salmasian, and S. Vempala. The spectral method for general mixture models. SIAM Journal on Computing, 38(3):1141–1156, 2008.
J. M. Klusowski and W. D. Brinda. Statistical guarantees for estimating the centers of a two-component Gaussian mixture by EM. arXiv preprint arXiv:1608.02280, August 2016.
S. P. Lloyd. Least squares quantization in PCM. IEEE Trans. Information Theory, 28(2):129–137, 1982.
J. B. MacQueen. Some methods for classification and analysis of multivariate observations. In Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pages 281–297. University of California Press, 1967.
A. Moitra and G. Valiant. Settling the polynomial learnability of mixtures of Gaussians. In Fifty-First Annual IEEE Symposium on Foundations of Computer Science, pages 93–102, 2010.
K. Pearson. Contributions to the mathematical theory of evolution. Philosophical Transactions of the Royal Society, London, A., 185:71–110, 1894.
R. A. Redner and H. F. Walker. Mixture densities, maximum likelihood and the EM algorithm. SIAM Review, 26(2):195–239, 1984.
N. Srebro. Are there local maxima in the infinite-sample likelihood of Gaussian mixture estimation? In 20th Annual Conference on Learning Theory, pages 628–629, 2007.
P. Tseng. An analysis of the EM algorithm and entropy-like proximal point methods. Mathematics of Operations Research, 29(1):27–44, Feb 2004.
F. Vaida. Parameter convergence for EM and MM. Statistica Sinica, 15, 2005.
S. Vempala and G. Wang. A spectral algorithm for learning mixtures models. Journal of Computer and System Sciences, 68(4):841–860, 2004.
C. F. J. Wu. On the convergence properties of the EM algorithm. The Annals of Statistics, 11(1):95–103, Mar 1983.
J. Xu, D. Hsu, and A. Maleki. Global analysis of Expectation Maximization for mixtures of two Gaussians. arXiv preprint arXiv:1608.07630, 2016.
L. Xu and M. I. Jordan. On convergence properties of the EM algorithm for Gaussian mixtures. Neural Computation, 8:129–151, 1996.
Matrix Completion has No Spurious Local Minimum
Rong Ge
Duke University
308 Research Drive, NC 27708
[email protected].
Jason D. Lee
University of Southern California
3670 Trousdale Pkwy, CA 90089
[email protected].
Tengyu Ma
Princeton University
35 Olden Street, NJ 08540
[email protected].
Abstract
Matrix completion is a basic machine learning problem that has wide applications, especially in collaborative filtering and recommender systems. Simple
non-convex optimization algorithms are popular and effective in practice. Despite
recent progress in proving various non-convex algorithms converge from a good
initial point, it remains unclear why random or arbitrary initialization suffices in
practice. We prove that the commonly used non-convex objective function for
positive semidefinite matrix completion has no spurious local minima: all local
minima must also be global. Therefore, many popular optimization algorithms such
as (stochastic) gradient descent can provably solve positive semidefinite matrix
completion with arbitrary initialization in polynomial time. The result can be
generalized to the setting when the observed entries contain noise. We believe that
our main proof strategy can be useful for understanding geometric properties of
other statistical problems involving partial or noisy observations.
1 Introduction
Matrix completion is the problem of recovering a low rank matrix from partially observed entries. It
has been widely used in collaborative filtering and recommender systems [Kor09, RS05], dimension
reduction [CLMW11] and multi-class learning [AFSU07]. There has been extensive work on
designing efficient algorithms for matrix completion with guarantees. One earlier line of results
(see [Rec11, CT10, CR09] and the references therein) rely on convex relaxations. These algorithms
achieve strong statistical guarantees, but are quite computationally expensive in practice.
More recently, there has been growing interest in analyzing non-convex algorithms for matrix
completion [KMO10, JNS13, Har14, HW14, SL15, ZWL15, CW15]. Let M ∈ R^{d×d} be the target matrix with rank r ≪ d that we aim to recover, and let Ω = {(i, j) : M_{i,j} is observed} be the set of observed entries. These methods are instantiations of optimization algorithms applied to the objective¹,

    f(X) = (1/2) Σ_{(i,j)∈Ω} [ M_{i,j} − (XX^⊤)_{i,j} ]² .   (1.1)
These algorithms are much faster than the convex relaxation algorithms, which is crucial for their
empirical success in large-scale collaborative filtering applications [Kor09].
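As a concrete reference point, the short NumPy sketch below evaluates the objective (1.1) and its gradient; it is our illustration (all variable names and the toy setup are ours, not the paper's code), but the gradient formula 2·P_Ω(XX^⊤ − M)X follows directly from differentiating (1.1) for a symmetric observation set.

import numpy as np

def mc_objective_and_grad(X, M, mask):
    """Objective (1.1): f(X) = 0.5 * sum_{(i,j) in Omega} (M_ij - (X X^T)_ij)^2.

    X: (d, r) factor; M: (d, d) target; mask: (d, d) boolean, True on Omega.
    Returns (f(X), gradient of f with respect to X)."""
    R = mask * (X @ X.T - M)          # residual, zeroed outside Omega
    f = 0.5 * np.sum(R ** 2)
    grad = 2.0 * R @ X                # since mask and residual are symmetric here
    return f, grad

# toy usage: rank-2 ground truth, roughly 30% of symmetric entries observed
rng = np.random.default_rng(0)
d, r = 50, 2
Z = rng.normal(size=(d, r))
M = Z @ Z.T
mask = rng.random((d, d)) < 0.3
mask = mask | mask.T                  # observe (i, j) and (j, i) together
X0 = rng.normal(size=(d, r))
f0, g0 = mc_objective_and_grad(X0, M, mask)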
¹ In this paper, we focus on the symmetric case when the true M has a symmetric decomposition M = ZZ^⊤. Some previous papers work on the asymmetric case M = ZW^⊤, which is harder than the symmetric case.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
All of the theoretical analysis for the nonconvex procedures require careful initialization schemes:
the initial point should already be close to optimum. In fact, Sun and Luo [SL15] showed that after
this initialization the problem is effectively strongly-convex, hence many different optimization
procedures can be analyzed by standard techniques from convex optimization.
However, in practice people typically use a random initialization, which still leads to robust and
fast convergence. Why can these practical algorithms find the optimal solution in spite of the nonconvexity? In this work we investigate this question and show that the matrix completion objective
has no spurious local minima. More precisely, we show that any local minimum X of objective
function f(·) is also a global minimum with f(X) = 0, and recovers the correct low-rank matrix M.
Our characterization of the structure in the objective function implies that (stochastic) gradient descent from an arbitrary starting point converges to a global minimum. This is because gradient
descent converges to a local minimum [GHJY15, LSJR16], and every local minimum is also a global
minimum.
1.1 Main results
Assume the target matrix M is symmetric and each entry of M is observed with probability p independently². We assume M = ZZ^⊤ for some matrix Z ∈ R^{d×r}.
There are two known issues with matrix completion. First, the choice of Z is not unique, since M = (ZR)(ZR)^⊤ for any orthonormal matrix R. Our goal is to find one of these equivalent solutions.
Another issue is that matrix completion is impossible when M is "aligned" with the standard basis. For example, when M is the identity matrix in its first r × r block, we will very likely be observing only 0 entries. To address this issue, we make the following standard assumption:
Assumption 1. For any row Z_i of Z, we have ‖Z_i‖ ≤ (μ/√d)·‖Z‖_F. Moreover, Z has a bounded condition number σ_max(Z)/σ_min(Z) = κ.
Throughout this paper we think of μ and κ as small constants, and the sample complexity depends polynomially on these two parameters. Also note that this assumption is independent of the choice of Z: all Z such that ZZ^⊤ = M have the same row norms and Frobenius norm.
This assumption is similar to the "incoherence" assumption [CR09]. Our assumption is the same as
the one used in analyzing non-convex algorithms [KMO10, SL15].
We enforce X to also satisfy this assumption by a regularizer
    f(X) = (1/2) Σ_{(i,j)∈Ω} [ M_{i,j} − (XX^⊤)_{i,j} ]² + R(X),   (1.2)
where R(X) is a function that penalizes X when one of its rows is too large. See Section 4 and
Section A for the precise definition. Our main result shows that in this setting, the regularized
objective function has no spurious local minimum:
Theorem 1.1. [Informal] All local minima of the regularized objective (1.2) satisfy XX^⊤ = ZZ^⊤ = M when p ≥ poly(μ, r, κ, log d)/d.
Combined with the results in [GHJY15, LSJR16] (see more discussions in Section 1.2), we have,
Theorem 1.2 (Informal). With high probability, stochastic gradient descent on the regularized objective (1.2) will converge to a solution X such that XX^⊤ = ZZ^⊤ = M in polynomial time from any starting point. Gradient descent will converge to such a point with probability 1 from a random
starting point.
Our results are also robust to noise. Even if each entry is corrupted with Gaussian noise of standard deviation μ²‖Z‖²_F/d (comparable to the magnitude of the entry itself!), we can still guarantee that all the local minima satisfy ‖XX^⊤ − ZZ^⊤‖_F ≤ ε when p is large enough. See the discussion in
Appendix B for results on noisy matrix completion.
² The entries (i, j) and (j, i) are the same. With probability p we observe both entries and otherwise we observe neither.
Our main technique is to show that every point that satisfies the first and second order necessary
conditions for optimality must be a desired solution. To achieve this we use new ideas to analyze the
effect of the regularizer and show how it is useful in modifying the first and second order conditions
to exclude any spurious local minimum.
1.2 Related Work
Matrix Completion. The earlier theoretical works on matrix completion analyzed the nuclear
norm heuristic [Rec11, CT10, CR09]. This line of work has the cleanest and strongest theoretical
guarantees; [CT10, Rec11] showed that if |Ω| ≳ dr·μ² log² d, the nuclear norm convex relaxation
recovers the exact underlying low rank matrix. The solution can be computed via the solving a
convex program in polynomial time. However the primary disadvantage of nuclear norm methods
is their computational and memory requirements. The fastest known algorithms have running time
O(d3 ) and require O(d2 ) memory, which are both prohibitive for moderate to large values of d.
These concerns led to the development of the low-rank factorization paradigm of [BM03]: Burer and Monteiro proposed factorizing the optimization variable as M̂ = XX^⊤ and optimizing over X ∈ R^{d×r} instead of M̂ ∈ R^{d×d}. This approach only requires O(dr) memory, and a single gradient iteration takes time O(r|Ω|), so it has much lower memory requirement and computational complexity than the
nuclear norm relaxation. On the other hand, the factorization causes the optimization problem to be
non-convex in X, which leads to theoretical difficulties in analyzing algorithms. Under incoherence
and sufficient sample size assumptions, [KMO10] showed that well-initialized gradient descent
recovers M. Similarly, [HW14, Har14, JNS13] showed that well-initialized alternating least squares
or block coordinate descent converges to M , and [CW15] showed that well-initialized gradient
descent converges to M . [SL15, ZWL15] provided a more unified analysis by showing that with
careful initialization many algorithms, including gradient descent and alternating least squares, succeed.
[SL15] accomplished this by showing an analog of strong convexity in the neighborhood of the
solution M .
Non-convex Optimization. Recently, a line of work analyzes non-convex optimization by separating the problem into two aspects: the geometric aspect, which shows the function has no spurious local minimum, and the algorithmic aspect, which designs efficient algorithms that converge to local minima satisfying first and (relaxed versions of) second order necessary conditions.
Our result is the first that explains the geometry of the matrix completion objective. Similar geometric
results are only known for a few problems: phase retrieval/synchronization, orthogonal tensor
decomposition, dictionary learning [GHJY15, SQW15, BBV16]. The matrix completion objective
requires different tools due to the sampling of the observed entries, as well as carefully managing the
regularizer to restrict the geometry. Parallel to our work, Bhojanapalli et al. [BNS16] showed similar
results for matrix sensing, which is closely related to matrix completion. Loh and Wainwright [LW15]
showed that for many statistical settings that involve missing/noisy data and non-convex regularizers,
any stationary point of the non-convex objective is close to global optima; furthermore, there is a
unique stationary point that is the global minimum under stronger assumptions [LW14].
On the algorithmic side, it is known that second order algorithms like cubic regularization [NP06]
and trust-region [SQW15] algorithms converge to local minima that approximately satisfy first and
second order conditions. Gradient descent is also known to converge to local minima [LSJR16] from
a random starting point. Stochastic gradient descent can converge to a local minimum in polynomial
time from any starting point [Pem90, GHJY15]. All of these results can be applied to our setting,
implying various heuristics used in practice are guaranteed to solve matrix completion.
2 Preliminaries
Notations: For Ω ⊆ [d] × [d], let P_Ω be the linear operator that maps a matrix A to P_Ω(A), where P_Ω(A) has the same values as A on Ω, and 0 outside of Ω. We will use the following matrix norms: ‖·‖_F the Frobenius norm, ‖·‖ the spectral norm, |A|_∞ the elementwise infinity norm, and |A|_{p→q} = max_{‖x‖_p=1} ‖Ax‖_q. We use the shorthand ‖A‖_Ω = ‖P_Ω(A)‖_F. The trace inner product of two matrices is ⟨A, B⟩ = tr(A^⊤B), and σ_min(X), σ_max(X) are the smallest and largest singular values of X. We also use X_i to denote the i-th row of a matrix X.
2.1 Necessary conditions for Optimality
Given an objective function f(x): R^n → R, we use ∇f(x) to denote the gradient of the function, and ∇²f(x) to denote the Hessian of the function (∇²f(x) is an n × n matrix whose (i, j)-th entry is ∂²f(x)/∂x_i∂x_j). It is well known that local minima of the function f(x) must satisfy some necessary conditions:
Definition 2.1. A point x satisfies the first order necessary condition for optimality (later abbreviated as first order optimality condition) if ∇f(x) = 0. A point x satisfies the second order necessary condition for optimality (later abbreviated as second order optimality condition) if ∇²f(x) ⪰ 0.
These conditions are necessary for a local minimum because otherwise it is easy to find a direction where the function value decreases. We will also consider a relaxed second order necessary condition, where we only require the smallest eigenvalue of the Hessian ∇²f(x) to be not very negative:
Definition 2.2. For τ > 0, a point x satisfies the τ-relaxed second order optimality condition if ∇²f(x) ⪰ −τ·I.
This relaxation to the second order condition makes the conditions more robust, and allows for
efficient algorithms.
Theorem 2.3. [NP06, SQW15, GHJY15] If every point x that satisfies the first order and τ-relaxed second order necessary conditions is a global minimum, then many optimization algorithms (cubic regularization, trust-region, stochastic gradient descent) can find the global minimum up to ε error in function value in time poly(1/ε, 1/τ, d).
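In practice the two conditions of Definitions 2.1–2.2 can be checked numerically; the sketch below is our finite-difference illustration (not the paper's code, and the tolerance is an arbitrary choice), testing ‖∇f(x)‖ ≈ 0 and λ_min(∇²f(x)) ≥ −τ.

import numpy as np

def numerical_grad(f, x, h=1e-5):
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def numerical_hessian(f, x, h=1e-4):
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        e = np.zeros_like(x); e[i] = h
        H[:, i] = (numerical_grad(f, x + e) - numerical_grad(f, x - e)) / (2 * h)
    return 0.5 * (H + H.T)             # symmetrize finite-difference noise

def satisfies_conditions(f, x, tau, tol=1e-4):
    """First-order: small gradient; tau-relaxed second-order: lambda_min >= -tau."""
    g = numerical_grad(f, x)
    lam_min = np.linalg.eigvalsh(numerical_hessian(f, x)).min()
    return np.linalg.norm(g) <= tol and lam_min >= -tau

# usage on f(x) = (x^2 - 1)^2 at its minimum x = 1
f = lambda x: (x[0] ** 2 - 1) ** 2
print(satisfies_conditions(f, np.array([1.0]), tau=0.1))   # True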
3 Proof Strategy: "simple" proofs are more generalizable
In this section, we demonstrate the key ideas behind our analysis using the rank r = 1 case. In
particular, we first give a "simple" proof for the fully observed case. Then we show this simple
proof can be easily generalized to the random observation case. We believe that this proof strategy is
applicable to other statistical problems involving partial/noisy observations. The proof sketches in
this section are only meant to be illustrative and may not be fully rigorous in various places. We refer
the readers to Section 4 and Section A for the complete proofs.
In the rank r = 1 case, we assume M = zz^⊤, where ‖z‖ = 1 and ‖z‖_∞ ≤ μ/√d. Let ε ≪ 1 be the target accuracy that we aim to achieve in this section, and let p = poly(μ, log d)/(dε).
For simplicity, we focus on the following domain B of incoherent vectors where the regularizer R(x) vanishes,

    B = { x : ‖x‖_∞ < 2μ/√d } .   (3.1)
Inside this domain B, we can restrict our attention to the objective function without the regularizer, defined as

    g_Ω(x) = (1/2) ‖P_Ω(M − xx^⊤)‖²_F .   (3.2)
The global minima of g_Ω(·) are z and −z, with function value 0. Our goal in this section is to (informally) prove that all the local minima of g_Ω(·) are O(√ε)-close to ±z. In a later section we will formally prove that the only local minima are ±z.
Lemma 3.1 (Partial observation case, informally stated). Under the setting of this section, in the domain B, all local minima of the function g_Ω(·) are O(√ε)-close to ±z.
It turns out to be insightful to consider the full observation case when Ω = [d] × [d]. The corresponding objective is

    g(x) = (1/2) ‖M − xx^⊤‖²_F .   (3.3)
Observe that g_Ω(x) is a sampled version of g(x), and therefore we expect that they share the same geometric properties. In particular, if g(x) does not have spurious local minima then neither does g_Ω(x).
Lemma 3.2 (Full observation case, informally stated). Under the setting of this section, in the domain B, the function g(·) has only two local minima, {±z}.
Before introducing the "simple" proof, let us first look at a delicate proof that does not generalize well.
Difficult to Generalize Proof of Lemma 3.2. We compute the gradient and Hessian of g(x):

    ∇g(x) = 2(xx^⊤ − M)x ,   ∇²g(x) = 2( 2xx^⊤ − M + ‖x‖²·I ) .

Therefore, a critical point x satisfies ∇g(x) = 0, i.e., Mx = ‖x‖²x (note that xx^⊤x = ‖x‖²x), and thus it must be an eigenvector of M, with ‖x‖² the corresponding eigenvalue. Next, we prove that the Hessian is positive semidefinite only at the top eigenvector. Let x be an eigenvector with eigenvalue λ = ‖x‖², where λ is strictly less than the top eigenvalue λ*, and let z be the top eigenvector. Since x ⊥ z, we have ⟨z, ∇²g(x)z⟩ = 2( 2⟨x, z⟩² − ⟨z, Mz⟩ + ‖x‖² ) = 2(−λ* + λ) < 0, which shows that x is not a local minimum. Thus only ±z can be local minimizers, and it is easily verified that ∇²g(±z) is indeed positive definite.
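The eigenvector characterization above is easy to confirm numerically. The sketch below is our illustration (the PSD matrix M here is a generic stand-in, not zz^⊤): every eigenvector v of M, rescaled so that ‖x‖² equals its eigenvalue, makes the gradient vanish, and the Hessian quadratic form at z is negative except at the top eigenvector.

import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6))
M = A @ A.T                               # PSD matrix with (generically) distinct eigenvalues
lam, V = np.linalg.eigh(M)                # ascending eigenvalues
z = V[:, -1]                              # top eigenvector
for k in range(6):
    x = np.sqrt(lam[k]) * V[:, k]         # critical point: ||x||^2 = lam_k
    grad = (np.outer(x, x) - M) @ x       # gradient up to the factor 2
    quad = 2 * np.dot(z, x) ** 2 - z @ M @ z + np.dot(x, x)  # <z, Hess z>/2
    print(k, np.linalg.norm(grad) < 1e-8, quad)  # quad < 0 for all k but the top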
The difficulty of generalizing the proof above to the partial observation case is that it uses the properties of eigenvectors heavily. Suppose we want to imitate the proof above for the partial observation case; the first difficulty is how to solve the equation ∇g_Ω(x) = 2P_Ω(xx^⊤ − M)x = 0. Moreover, even if we could have a reasonable approximation for the critical points (the solutions of ∇g_Ω(x) = 0), it would be difficult to examine the Hessian at these critical points without having the orthogonality of the eigenvectors.
"Simple" and generalizable proof. The lessons from the subsection above suggest we find an alternative proof for the full observation case which is generalizable. The alternative proof will be simple in the sense that it doesn't use the notion of eigenvectors and eigenvalues. Concretely, the key observation behind most of the analysis in this paper is the following:
Proofs that consist of inequalities that are linear in 1_Ω are often easily generalizable to the partial observation case.
Here, statements that are linear in 1_Ω mean statements of the form Σ_{i,j} 1_{(i,j)∈Ω} T_{ij} ≤ a. We will call these kinds of proofs "simple" proofs in this section. Roughly speaking, the observation follows from the law of large numbers: suppose T_{ij}, (i, j) ∈ [d] × [d], is a sequence of bounded real numbers; then the sampled sum Σ_{(i,j)∈Ω} T_{ij} = Σ_{i,j} 1_{(i,j)∈Ω} T_{ij} is an accurate estimate of the sum p·Σ_{i,j} T_{ij} when the sampling probability p is relatively large. Then, the mathematical implications of p·Σ_{i,j} T_{ij} ≤ a are expected to be similar to the implications of Σ_{(i,j)∈Ω} T_{ij} ≤ a, up to some small error introduced by the approximation. To make this concrete, we give below informal proofs for Lemma 3.2 and Lemma 3.1 that consist only of statements that are linear in 1_Ω. Readers will see that, due to the linearity, the proof for the partial observation case is a direct generalization of the proof for the full observation case via concentration inequalities (which will be discussed more at the end of the section).
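The law-of-large-numbers observation is easy to check empirically; the toy sketch below (ours, with arbitrary dimensions) draws Ω with inclusion probability p and compares the sampled sum Σ_{(i,j)∈Ω} T_{ij} against p·Σ_{i,j} T_{ij}.

import numpy as np

rng = np.random.default_rng(2)
d, p = 500, 0.1
T = rng.uniform(-1, 1, size=(d, d))      # any bounded array T_ij
omega = rng.random((d, d)) < p           # each (i, j) kept independently w.p. p
sampled = T[omega].sum()                 # sum over Omega
full = p * T.sum()                       # p times the full sum
print(sampled, full)                     # close; relative gap shrinks with p*d^2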
A "simple" proof for Lemma 3.2, and its generalization to Lemma 3.1 (the two-column layout of the original is flattened here into sequential claims):
Claim 1f. Suppose x ∈ B satisfies ∇g(x) = 0. Then ⟨x, z⟩² = ‖x‖⁴.
Proof. We have

    ∇g(x) = 2(xx^⊤ − zz^⊤)x = 0  ⇒  ⟨x, ∇g(x)⟩ = 2⟨x, (xx^⊤ − zz^⊤)x⟩ = 0   (3.4)
    ⇒  ⟨x, z⟩² = ‖x‖⁴ .

Intuitively, this proof says that the norm of a critical point x is controlled by its correlation with z.
Claim 1p. Suppose x ∈ B satisfies ∇g_Ω(x) = 0. Then ⟨x, z⟩² = ‖x‖⁴ ± ε.
Proof. Imitating the proof above, we have

    ∇g_Ω(x) = 2P_Ω(xx^⊤ − zz^⊤)x = 0  ⇒  ⟨x, ∇g_Ω(x)⟩ = 2⟨x, P_Ω(xx^⊤ − zz^⊤)x⟩ = 0   (3.5)
    ⇒  ⟨x, z⟩² = ‖x‖⁴ ± ε .

The last step uses the fact that equations (3.4) and (3.5) are approximately equal up to a scaling factor p for any x ∈ B, since (3.5) is a sampled version of (3.4).
Claim 2f. If x ∈ B has positive semidefinite Hessian ∇²g(x) ⪰ 0, then ‖x‖² ≥ 1/3.
Proof. By the assumption on x, we have ⟨z, ∇²g(x)z⟩ ≥ 0. Calculating the quadratic form of the Hessian (see Proposition 4.1 for details),

    ⟨z, ∇²g(x)z⟩ = ‖zx^⊤ + xz^⊤‖² − 2z^⊤(zz^⊤ − xx^⊤)z ≥ 0   (3.6)
    ⇒  ‖x‖² + 2⟨z, x⟩² ≥ 1  ⇒  ‖x‖² ≥ 1/3   (since ⟨z, x⟩² ≤ ‖x‖²).

Claim 2p. If x ∈ B has positive semidefinite Hessian ∇²g_Ω(x) ⪰ 0, then ‖x‖² ≥ 1/3 − ε.
Proof. Imitating the proof above, calculating the quadratic form of the Hessian at z (see Proposition 4.1), we have

    ⟨z, ∇²g_Ω(x)z⟩ = ‖P_Ω(zx^⊤ + xz^⊤)‖² − 2z^⊤P_Ω(zz^⊤ − xx^⊤)z ≥ 0   (3.7)
    ⇒  ‖x‖² ≥ 1/3 − ε   (same steps as above).

Here we use the fact that ⟨z, ∇²g_Ω(x)z⟩ ≈ p·⟨z, ∇²g(x)z⟩ for any x ∈ B.
With these two claims, we are ready to prove Lemma 3.2 and Lemma 3.1 by using another step that is linear in 1_Ω.
Proof of Lemma 3.2. By Claims 1f and 2f, x satisfies ⟨x, z⟩² = ‖x‖⁴ ≥ 1/9. Moreover, ∇g(x) = 0 implies

    ⟨z, ∇g(x)⟩ = 2⟨z, (xx^⊤ − zz^⊤)x⟩ = 0   (3.8)
    ⇒  ⟨x, z⟩(‖x‖² − 1) = 0  ⇒  ‖x‖² = 1   (by ⟨x, z⟩² ≥ 1/9).

Then by Claim 1f again we obtain ⟨x, z⟩² = 1, and therefore x = ±z.
Proof of Lemma 3.1. By Claims 1p and 2p, x satisfies ⟨x, z⟩² ≥ ‖x‖⁴ ≥ 1/9 − O(ε). Moreover, ∇g_Ω(x) = 0 implies

    ⟨z, ∇g_Ω(x)⟩ = 2⟨z, P_Ω(xx^⊤ − zz^⊤)x⟩ = 0   (3.9)
    ⇒  ‖x‖² = 1 ± O(ε)   (same steps as above).

Since (3.9) is the sampled version of equation (3.8), we expect them to lead to the same conclusion up to some approximation. Then by Claim 1p again we obtain ⟨x, z⟩² = 1 ± O(ε), and therefore x is O(√ε)-close to either of ±z.
Subtleties regarding uniform convergence. In the proof sketches above, our key idea is to use
concentration inequalities to link the full observation objective g(x) with the partial observation
counterpart. However, we require a uniform convergence result. For example, we need a statement
like "w.h.p. over the choice of Ω, equations (3.4) and (3.5) are similar to each other up to scaling". This
type of statement is often only true for x inside the incoherent ball B. The fix to this is the regularizer.
For non-incoherent x, we will use a different argument that uses the property of the regularizer. This
is besides the main proof strategy of this section and will be discussed in subsequent sections.
4 Warm-up: Rank-1 Case
In this section, using the general proof strategy described in the previous section, we provide a formal proof for the rank-1 case. In Subsection 4.1, we formally work out the proof sketches of Section 3 inside the incoherent ball. The rest of the proofs are deferred to the supplementary material.
In the rank-1 case, the objective function simplifies to

    f(x) = (1/2) ‖P_Ω(M − xx^⊤)‖²_F + R(x) .

Here we use the regularization

    R(x) = λ Σ_{i=1}^d h(x_i),   where h(t) = (|t| − α)⁴ · 1{|t| > α} .   (4.1)

The parameters λ and α will be chosen later, as in Theorem 4.2. We will choose α ≥ 10μ/√d so that R(x) = 0 for incoherent x, and thus it only penalizes coherent x. Moreover, we note that R(x) has a Lipschitz second-order derivative.³
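A direct transcription of (4.1) in NumPy (our sketch; λ and α are free parameters to be set as in Theorem 4.2):

import numpy as np

def regularizer(x, lam, alpha):
    """R(x) = lam * sum_i h(x_i), with h(t) = (|t| - alpha)^4 * 1{|t| > alpha}.

    Zero on incoherent x (all |x_i| <= alpha); quartic growth penalizes coherent
    coordinates. The fourth power keeps the second derivative Lipschitz."""
    excess = np.maximum(np.abs(x) - alpha, 0.0)
    return lam * np.sum(excess ** 4)

def regularizer_grad(x, lam, alpha):
    excess = np.maximum(np.abs(x) - alpha, 0.0)
    return lam * 4.0 * excess ** 3 * np.sign(x)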
We first state the optimality conditions, whose proof is deferred to Appendix A.
Proposition 4.1. The first order optimality condition of objective (4.1) is

    2P_Ω(M − xx^⊤)x = ∇R(x) ,   (4.2)

and the second order optimality condition requires:

    ∀v ∈ R^d:  ‖P_Ω(vx^⊤ + xv^⊤)‖²_F + v^⊤∇²R(x)v ≥ 2v^⊤P_Ω(M − xx^⊤)v .   (4.3)

Moreover, the τ-relaxed second order optimality condition requires

    ∀v ∈ R^d:  ‖P_Ω(vx^⊤ + xv^⊤)‖²_F + v^⊤∇²R(x)v ≥ 2v^⊤P_Ω(M − xx^⊤)v − τ‖v‖² .   (4.4)
We give the precise version of Theorem 1.1 for the rank-1 case.
Theorem 4.2. For p ≥ (c μ⁶ log^{1.5} d)/d, where c is a large enough absolute constant, set α = 10μ·(1/d)^{1/2} and λ ≥ μ²p/α². Then, with high probability over the randomness of Ω, the only points in R^d that satisfy both first and second order optimality conditions (or τ-relaxed optimality conditions with τ < 0.1p) are z and −z.
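Combined with Theorem 1.2, Theorem 4.2 suggests that plain gradient descent from a random start should recover ±z. The simulation sketch below is our illustration only: the step size and the constants α, λ are ad hoc stand-ins, not the theorem's prescribed values, and a single run typically lands near one of the two global minima.

import numpy as np

rng = np.random.default_rng(3)
d, p = 200, 0.5
z = rng.normal(size=d); z /= np.linalg.norm(z)
M = np.outer(z, z)
omega = rng.random((d, d)) < p
omega = omega | omega.T                  # symmetric observation pattern

mu = np.sqrt(d) * np.abs(z).max()        # incoherence parameter of z
alpha, lam = 10 * mu / np.sqrt(d), 1.0   # ad hoc stand-ins for Theorem 4.2's choices

def grad_f(x):
    R = omega * (np.outer(x, x) - M)     # gradient of the data-fit term is 2 R x
    excess = np.maximum(np.abs(x) - alpha, 0.0)
    return 2 * R @ x + lam * 4 * excess ** 3 * np.sign(x)

x = rng.normal(size=d) / np.sqrt(d)      # random initialization
for _ in range(2000):
    x -= 0.05 * grad_f(x)
print(min(np.linalg.norm(x - z), np.linalg.norm(x + z)))   # small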
In the rest of this section, we will first prove that when x is constrained to be incoherent (and hence the regularizer is 0 and concentration is straightforward) and satisfies the optimality conditions, then x has to be z or −z. Then we go on to explain how the regularizer helps us to change the geometry of those points that are far away from z, so that we can rule them out as local minima. For simplicity, we will focus on the part that shows a local minimum x must be close enough to z.
Lemma 4.3. In the setting of Theorem 4.2, suppose x satisfies the first-order and second-order optimality conditions (4.2) and (4.3). Then, when p is chosen as in Theorem 4.2,

    ‖xx^⊤ − zz^⊤‖²_F ≤ O(ε) ,

where ε = μ³(pd)^{−1/2}.
This turns out to be the main challenge. Once we have proved that x is close, we can apply the result of Sun and Luo [SL15] (see Lemma C.1) and obtain Theorem 4.2.
4.1 Handling incoherent x
To demonstrate the key idea, in this section we restrict our attention to the subset of R^d which contains incoherent x with ℓ₂ norm bounded by 1; that is, we consider

    B = { x : ‖x‖_∞ ≤ 2μ/√d, ‖x‖ ≤ 1 } .   (4.5)

Note that the desired solution z is in B, and the regularization R(x) vanishes inside B.
The following lemmas assume x satisfies the first and second order optimality conditions, and deduce
a sequence of properties that x must satisfy.
Lemma 4.4. Under the setting of Theorem 4.2, with high probability over the choice of Ω, for any x ∈ B that satisfies the second-order optimality condition (4.3) we have

    ‖x‖² ≥ 1/4.

The same is true if x ∈ B only satisfies the τ-relaxed second order optimality condition for τ ≤ 0.1p.
Proof. We plug in v = z in the second-order optimality condition (4.3), and obtain

    ‖P_Ω(zx^⊤ + xz^⊤)‖²_F ≥ 2z^⊤P_Ω(M − xx^⊤)z .   (4.6)

³ This is the main reason for us to choose the 4-th power instead of the 2-nd power.
Intuitively, when restricted to Ω, the squared Frobenius norm on the LHS and the quadratic form on the RHS should both be approximately a p fraction of the unrestricted case. In fact, both LHS and RHS can be written as sums of terms of the form ⟨P_Ω(uv^⊤), P_Ω(st^⊤)⟩, because

    ‖P_Ω(zx^⊤ + xz^⊤)‖²_F = 2⟨P_Ω(zx^⊤), P_Ω(zx^⊤)⟩ + 2⟨P_Ω(zx^⊤), P_Ω(xz^⊤)⟩ ,
    2z^⊤P_Ω(M − xx^⊤)z = 2⟨P_Ω(zz^⊤), P_Ω(zz^⊤)⟩ − 2⟨P_Ω(xx^⊤), P_Ω(zz^⊤)⟩ .

Therefore we can use concentration inequalities (Theorem D.1) and simplify:

    LHS of (4.6) = p‖zx^⊤ + xz^⊤‖²_F ± O( √(pd) · ‖x‖²_∞ ‖z‖²_∞ ‖x‖ ‖z‖ )
                 = 2p‖x‖²‖z‖² + 2p⟨x, z⟩² ± O(pε) ,   (since x, z ∈ B)

where ε = O( μ² √(log d / (pd)) ). Similarly, by Theorem D.1 again, we have

    RHS of (4.6) = 2⟨P_Ω(zz^⊤), P_Ω(zz^⊤)⟩ − 2⟨P_Ω(xx^⊤), P_Ω(zz^⊤)⟩   (since M = zz^⊤)
                 = 2p‖z‖⁴ − 2p⟨x, z⟩² ± O(pε) .   (by Theorem D.1 and x, z ∈ B)

(Note that even if we use the τ-relaxed second order optimality condition, the RHS only becomes 1.99p‖z‖⁴ − 2p⟨x, z⟩² ± O(pε), which does not affect the later proofs.)
Therefore, plugging the estimates above back into equation (4.6), we have

    2p‖x‖²‖z‖² + 2p⟨x, z⟩² ± O(pε) ≥ 2p‖z‖⁴ − 2p⟨x, z⟩² ± O(pε) ,

which implies 6p‖x‖²‖z‖² ≥ 2p‖x‖²‖z‖² + 4p⟨x, z⟩² ≥ 2p‖z‖⁴ − O(pε). Using ‖z‖² = 1 and ε being sufficiently small, we complete the proof.
Next we use the first order optimality condition to pin down another property of x: it has to be close to z after scaling. Note that this does not directly mean that x has to be close to z, since x = 0 also satisfies the first order optimality condition (and therefore the conclusion (4.7) below).
Lemma 4.5. With high probability over the randomness of Ω, for any x ∈ B that satisfies the first-order optimality condition (4.2), we have that x also satisfies

    ‖⟨z, x⟩z − ‖x‖²x‖ ≤ O(ε) ,   (4.7)

where ε = O(μ³(pd)^{−1/2}).
Finally, we combine the two optimality conditions and show that equation (4.7) implies xx^⊤ must be close to zz^⊤.
Lemma 4.6. Suppose the vector x satisfies ‖x‖² ≥ 1/4 and ‖⟨z, x⟩z − ‖x‖²x‖ ≤ δ. Then, for δ ∈ (0, 0.1),

    ‖xx^⊤ − zz^⊤‖²_F ≤ O(δ) .

5 Conclusions
Although the matrix completion objective is non-convex, we showed the objective function has very
nice properties that ensures the local minima are also global. This property gives guarantees for many
basic optimization algorithms. An important open problem is the robustness of this property under
different model assumptions: Can we extend the result to handle asymmetric matrix completion? Is
it possible to add weights to different entries (similar to the settings studied in [LLR16])? Can we
replace the objective function with a different distance measure rather than Frobenius norm (which is
related to works on 1-bit matrix sensing [DPvdBW14])? We hope this framework of analyzing the
geometry of objective function can be applied to other problems.
References
[AFSU07] Yonatan Amit, Michael Fink, Nathan Srebro, and Shimon Ullman. Uncovering shared structures in multiclass classification. In Proceedings of the 24th international conference on Machine learning, pages 17–24. ACM, 2007.
[BBV16] Afonso S Bandeira, Nicolas Boumal, and Vladislav Voroninski. On the low-rank approach for semidefinite programs arising in synchronization and community detection. arXiv preprint arXiv:1602.04426, 2016.
[BM03] Samuel Burer and Renato DC Monteiro. A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization. Mathematical Programming, 95(2):329–357, 2003.
[BNS16] S. Bhojanapalli, B. Neyshabur, and N. Srebro. Global optimality of local search for low rank matrix recovery. ArXiv e-prints, May 2016.
[CLMW11] Emmanuel J Candès, Xiaodong Li, Yi Ma, and John Wright. Robust principal component analysis? Journal of the ACM (JACM), 58(3):11, 2011.
[CR09] Emmanuel J Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[CT10] Emmanuel J Candès and Terence Tao. The power of convex relaxation: Near-optimal matrix completion. Information Theory, IEEE Transactions on, 56(5):2053–2080, 2010.
[CW15] Yudong Chen and Martin J Wainwright. Fast low-rank estimation by projected gradient descent: General statistical and algorithmic guarantees. arXiv preprint arXiv:1509.03025, 2015.
[DPvdBW14] Mark A Davenport, Yaniv Plan, Ewout van den Berg, and Mary Wootters. 1-bit matrix completion. Information and Inference, 3(3):189–223, 2014.
[GHJY15] Rong Ge, Furong Huang, Chi Jin, and Yang Yuan. Escaping from saddle points – online stochastic gradient for tensor decomposition. arXiv:1503.02101, 2015.
[Har14] Moritz Hardt. Understanding alternating minimization for matrix completion. In FOCS 2014. IEEE, 2014.
[HKZ12] Daniel Hsu, Sham M Kakade, and Tong Zhang. A tail inequality for quadratic forms of subgaussian random vectors. Electron. Commun. Probab., 17(52):1–6, 2012.
[HW14] Moritz Hardt and Mary Wootters. Fast matrix completion without the condition number. In COLT 2014, pages 638–678, 2014.
[Imb10] R. Imbuzeiro Oliveira. Sums of random Hermitian matrices and an inequality by Rudelson. ArXiv e-prints, April 2010.
[JNS13] Prateek Jain, Praneeth Netrapalli, and Sujay Sanghavi. Low-rank matrix completion using alternating minimization. In Proceedings of the forty-fifth annual ACM symposium on Theory of computing, pages 665–674. ACM, 2013.
[KMO10] Raghunandan H Keshavan, Andrea Montanari, and Sewoong Oh. Matrix completion from a few entries. Information Theory, IEEE Transactions on, 56(6):2980–2998, 2010.
[Kor09] Yehuda Koren. The BellKor solution to the Netflix grand prize. Netflix prize documentation, 81, 2009.
[LLR16] Yuanzhi Li, Yingyu Liang, and Andrej Risteski. Recovery guarantee of weighted low-rank approximation via alternating minimization. arXiv preprint arXiv:1602.02262, 2016.
[LSJR16] Jason D Lee, Max Simchowitz, Michael I Jordan, and Benjamin Recht. Gradient descent converges to minimizers. University of California, Berkeley, 1050:16, 2016.
[LW14] Po-Ling Loh and Martin J Wainwright. Support recovery without incoherence: A case for nonconvex regularization. arXiv preprint arXiv:1412.5632, 2014.
[LW15] Po-Ling Loh and Martin J. Wainwright. Regularized M-estimators with nonconvexity: statistical and algorithmic theory for local optima. Journal of Machine Learning Research, 16:559–616, 2015.
[NP06] Yurii Nesterov and Boris T Polyak. Cubic regularization of Newton method and its global performance. Mathematical Programming, 108(1):177–205, 2006.
[Pem90] Robin Pemantle. Nonconvergence to unstable points in urn models and stochastic approximations. The Annals of Probability, pages 698–712, 1990.
[Rec11] Benjamin Recht. A simpler approach to matrix completion. The Journal of Machine Learning Research, 12:3413–3430, 2011.
[RS05] Jasson DM Rennie and Nathan Srebro. Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the 22nd international conference on Machine learning, pages 713–719. ACM, 2005.
[SL15] Ruoyu Sun and Zhi-Quan Luo. Guaranteed matrix completion via nonconvex factorization. In Foundations of Computer Science (FOCS), 2015 IEEE 56th Annual Symposium on, pages 270–289. IEEE, 2015.
[SQW15] Ju Sun, Qing Qu, and John Wright. When are nonconvex problems not scary? arXiv preprint arXiv:1510.06096, 2015.
[ZWL15] Tuo Zhao, Zhaoran Wang, and Han Liu. A nonconvex optimization framework for low rank matrix estimation. In Advances in Neural Information Processing Systems, pages 559–567, 2015.
Bootstrap Model Aggregation for Distributed Statistical Learning
Jun Han
Department of Computer Science
Dartmouth College
[email protected]
Qiang Liu
Department of Computer Science
Dartmouth College
[email protected]
Abstract
In distributed, or privacy-preserving learning, we are often given a set of probabilistic models estimated from different local repositories, and asked to combine them
into a single model that gives efficient statistical estimation. A simple method is to
linearly average the parameters of the local models, which, however, tends to be
degenerate or not applicable on non-convex models, or models with different parameter dimensions. One more practical strategy is to generate bootstrap samples from
the local models, and then learn a joint model based on the combined bootstrap
set. Unfortunately, the bootstrap procedure introduces additional noise and can
significantly deteriorate the performance. In this work, we propose two variance
reduction methods to correct the bootstrap noise, including a weighted M-estimator
that is both statistically efficient and practically powerful. Both theoretical and
empirical analysis is provided to demonstrate our methods.
1 Introduction
Modern data science applications increasingly involve learning complex probabilistic models over
massive datasets. In many cases, the datasets are distributed into multiple machines at different
locations, between which communication is expensive or restricted; this can be either because the
data volume is too large to store or process in a single machine, or due to privacy constraints such as those in healthcare or financial systems. There has been a recent growing interest in developing
communication-efficient algorithms for probabilistic learning with distributed datasets; see e.g., Boyd
et al. (2011); Zhang et al. (2012); Dekel et al. (2012); Liu and Ihler (2014); Rosenblatt and Nadler
(2014), and references therein.
This work focuses on a one-shot approach for distributed learning, in which we first learn a set
of local models from local machines, and then combine them in a fusion center to form a single
model that integrates all the information in the local models. This approach is highly efficient in
both computation and communication costs, but poses a challenge in designing statistically efficient
combination strategies. Many studies have been focused on a simple linear averaging method that
linearly averages the parameters of the local models (e.g., Zhang et al., 2012, 2013; Rosenblatt
and Nadler, 2014); although nearly optimal asymptotic error rates can be achieved, this simple
method tends to degenerate in practical scenarios for models with non-convex log-likelihood or
non-identifiable parameters (such as latent variable models, and neural models), and is not applicable
at all for models with non-additive parameters (e.g., when the parameters have discrete or categorical
values, or the parameter dimensions of the local models are different).
A better strategy that overcomes all these practical limitations of linear averaging is the KL-averaging
method (Liu and Ihler, 2014; Merugu and Ghosh, 2003), which finds a model that minimizes the
sum of Kullback-Leibler (KL) divergence to all the local models. In this way, we directly combine
the models, instead of the parameters. The exact KL-averaging is not computationally tractable
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
because of the intractability of calculating KL divergence; a practical approach is to draw (bootstrap)
samples from the given local models, and then learn a combined model based on all the bootstrap
data. Unfortunately, the bootstrap noise can easily dominate in this approach and we need a very large
bootstrap sample size to obtain accurate results. In Section 3, we show that the MSE of the estimator
obtained in this naive way is O(N^{−1} + (dn)^{−1}), where N is the total size of the observed data, n is the bootstrap sample size of each local model, and d is the number of machines. This means that to ensure an MSE of O(N^{−1}), which is guaranteed by the centralized method and the simple linear averaging, we need dn ≳ N; this is unsatisfying since N is usually very large by assumption.
In this work, we use variance reduction techniques to cancel out the bootstrap noise and get better
KL-averaging estimates. The difficulty of this task is first illustrated using a relatively straightforward
control variates method, which unfortunately suffers from some of the practical drawbacks of the linear
averaging method due to the use of a linear correction term. We then propose a better method based
on a weighted M-estimator, which inherits all the practical advantages of KL-averaging. On the
theoretical part, we show that our methods give an MSE of O(N^{−1} + (dn²)^{−1}), which significantly
improves over the original bootstrap estimator. Empirical studies are provided to verify our theoretical
results and demonstrate the practical advantages of our methods.
This paper is organized as follows. Section 2 introduces the background, and Section 3 introduces
our methods and analyze their theoretical properties. We present numerical results in Section 4 and
conclude the paper in Section 5. Detailed proofs can be found in the appendix.
2
Background and Problem Setting
Suppose we have a dataset X = {xj , j = 1, 2, ..., N } of size N , i.i.d. drawn from a probabilistic
model p(x|? ? ) within a parametric family P = {p(x|?) : ? ? ?}; here ? ? is the unknown true
parameter that we want to estimate based on X. In the distributed setting, the dataset X is partitioned
Sd
into d disjoint subsets, X = k=1 X k , where X k denotes the k-th subset which we assume is stored
in a local machine. For simplicity, we assume all the subsets have the same data size (N/d).
The traditional maximum likelihood estimator (MLE) provides a natural way for estimating the true
parameter ? ? based on the whole dataset X,
Global MLE: ??mle = arg max
???
d N/d
X
X
log p(xkj | ?),
where X k = {xkj }.
(1)
k=1 j=1
However, directly calculating the global MLE is challenging due to the distributed partition of the
dataset. Although distributed optimization algorithms exist (e.g., Boyd et al., 2011; Shamir et al.,
2014), they require iterative communication between the local machines and a fusion center, which
can be very time consuming in distributed settings, for which the number of communication rounds
forms the main bottleneck (regardless of the amount of information communicated at each round).
We instead consider a simpler one-shot approach that first learns a set of local models based on each
subset, and then sends them to a fusion center in which they are combined into a global model that captures all the information. We assume each of the local models is estimated using an MLE based on the subset X^k from the k-th machine:
    Local MLE:  θ̂_k = argmax_{θ∈Θ} Σ_{j=1}^{N/d} log p(x_j^k | θ),  where k ∈ [d] = {1, 2, ..., d}.   (2)
The major problem is how to combine these local models into a global model. The simplest way is to
linearly average all local MLE parameters:
    Linear Average:  θ̂_linear = (1/d) Σ_{k=1}^d θ̂_k .
Comprehensive theoretical analysis has been done for θ̂_linear (e.g., Zhang et al., 2012; Rosenblatt and Nadler, 2014), which shows that it has an asymptotic MSE of E‖θ̂_linear − θ*‖² = O(N^{−1}). In fact, it is equivalent to the global MLE θ̂_mle up to the first order O(N^{−1}), and several improvements have been developed to improve the second order term (e.g., Zhang et al., 2012; Huang and Huo, 2015).
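For intuition, the sketch below (ours, not from the paper) instantiates this pipeline for a univariate Gaussian with unknown mean, where the local MLEs are subset means and the linear average happens to coincide with the global MLE.

import numpy as np

rng = np.random.default_rng(0)
N, d = 100_000, 10
theta_true = 2.0
X = rng.normal(theta_true, 1.0, size=N)
subsets = np.split(X, d)                              # d machines, N/d points each

theta_global = X.mean()                               # global MLE (1)
theta_local = np.array([s.mean() for s in subsets])   # local MLEs (2)
theta_linear = theta_local.mean()                     # linear average
print(theta_global, theta_linear)   # identical here: means of equal-sized subsets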
Unfortunately, the linear averaging method can easily break down in practice, or is even not applicable
when the underlying model is complex. For example, it may work poorly when the likelihood has
multiple modes, or when there exist non-identifiable parameters for which different parameter values
correspond to the same model (also known as the label-switching problem); models of this kind include
latent variable models and neural networks, and appear widely in machine learning. In addition, the
linear averaging method is obviously not applicable when the local models have different numbers of
parameters (e.g., Gaussian mixtures with unknown numbers of components), or when the parameters
are simply not additive (such as parameters with discrete or categorical values). Further discussions
on the practical limitations of the linear averaging method can be found in Liu and Ihler (2014).
All these problems of linear averaging can be well addressed by a KL-averaging method which
averages the model (instead of the parameters) by finding a geometric center of the local models
in terms of KL divergence (Merugu and Ghosh, 2003; Liu and Ihler, 2014). Specifically, it finds a
model p(x | θ*_KL), where θ*_KL is obtained by θ*_KL = argmin_θ Σ_{k=1}^d KL( p(x|θ̂_k) ‖ p(x|θ) ), which is equivalent to

    Exact KL Estimator:  θ*_KL = argmax_{θ∈Θ} { η(θ) ≡ Σ_{k=1}^d ∫ p(x | θ̂_k) log p(x | θ) dx } .   (3)
Liu and Ihler (2014) studied the theoretical properties of the KL-averaging method, and showed that it exactly recovers the global MLE, that is, θ*_KL = θ̂_mle, when the distribution family is a full exponential family, and achieves an optimal asymptotic error rate (up to the second order) among all the possible combination methods of {θ̂_k}.
Despite the attractive properties, the exact KL-averaging is not computationally tractable except for
very simple models. Liu and Ihler (2014) suggested a naive bootstrap method for approximation: it draws a parametric bootstrap sample {x̃_j^k}_{j=1}^n from each local model p(x|θ̂_k), k ∈ [d], and uses it to approximate each integral in (3). The optimization in (3) then reduces to a tractable one:

    KL-Naive Estimator:  θ̂_KL = argmax_{θ∈Θ} { η̂(θ) ≡ Σ_{k=1}^d (1/n) Σ_{j=1}^n log p(x̃_j^k | θ) } .   (4)
Intuitively, we can treat each X̃^k = {x̃_j^k}_{j=1}^n as an approximation of the original subset X^k = {x_j^k}_{j=1}^{N/d}, and hence it can be used to approximate the global MLE in (1).
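A minimal instantiation of the KL-naive estimator (4) for a Gaussian location model (our sketch; for this model the combined MLE over the pooled bootstrap set is just the overall bootstrap mean):

import numpy as np

def kl_naive_gaussian(theta_local, n, rng):
    """N(theta, 1) local models: draw n bootstrap points from each p(x | theta_k),
    then maximize the pooled bootstrap log-likelihood (4)."""
    boot = np.concatenate([rng.normal(t, 1.0, size=n) for t in theta_local])
    return boot.mean()           # argmax of sum_k sum_j log N(x_kj; theta, 1)

rng = np.random.default_rng(1)
theta_local = np.array([1.9, 2.1, 2.0, 2.05])         # hypothetical local MLEs
print(kl_naive_gaussian(theta_local, n=100, rng=rng))  # noisy estimate around 2.0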
Unfortunately, as we show in the sequel, the accuracy of θ̂_KL critically depends on the bootstrap sample size n, and one would need n to be nearly as large as the original data size N/d to make θ̂_KL achieve the baseline asymptotic rate O(N^{−1}) that the simple linear averaging achieves; this is highly undesirable since N is often assumed to be large in distributed learning settings.
3 Main Results
We propose two variance reduction techniques for improving the KL-averaging estimates and discuss
their theoretical and practical properties. We start with a concrete analysis on the KL-naive estimator
θ̂_KL, which was missing in Liu and Ihler (2014).
Assumption 1. 1. log p(x | θ), ∂ log p(x|θ)/∂θ, and ∂² log p(x|θ)/(∂θ ∂θ^⊤) are continuous for all x ∈ X and all θ ∈ Θ; 2. −∂² log p(x|θ)/(∂θ ∂θ^⊤) is positive definite and C₁ ≤ ‖∂² log p(x|θ)/(∂θ ∂θ^⊤)‖ ≤ C₂ in a neighborhood of θ* for all x ∈ X, where C₁, C₂ are some positive constants.
Theorem 2. Under Assumption 1, θ̂_KL is a consistent estimator of θ*_KL as n → ∞, and

    E(θ̂_KL − θ*_KL) = o(1/(dn)),   E‖θ̂_KL − θ*_KL‖² = O(1/(dn)),

where d is the number of machines and n is the bootstrap sample size for each local model p(x | θ̂_k).
The proof is in Appendix A. Because the MSE between the exact KL estimator θ*_KL and the true parameter θ* is O(N^{−1}), as shown in Liu and Ihler (2014), the MSE between θ̂_KL and the true parameter θ* is

    E‖θ̂_KL − θ*‖² ≲ E‖θ̂_KL − θ*_KL‖² + E‖θ*_KL − θ*‖² = O(N^{−1} + (dn)^{−1}).   (5)
To make the MSE between θ̂_KL and θ* equal O(N^{−1}), as is achieved by the simple linear averaging, we need to draw dn ≳ N bootstrap data points in total, which is undesirable since N is often assumed to be very large in the distributed learning setting (one exception is when the data is distributed due to privacy constraints, in which case N may be relatively small).
Therefore, it is a critical task to develop more accurate methods that can reduce the noise introduced by the bootstrap process. In the sequel, we introduce two variance reduction techniques to achieve this goal. One is based on a (linear) control variates method that improves θ̂_KL using a linear correction term, and the other is a multiplicative control variates method that modifies the M-estimator in (4) by assigning each bootstrap data point a positive weight to cancel the noise. We show that both methods achieve a faster O(N^{−1} + (dn²)^{−1}) rate under mild assumptions, while the second method has more attractive practical advantages.
3.1 Control Variates Estimator
The control variates method is a technique for variance reduction on Monte Carlo estimation (e.g.,
Wilson, 1984). It introduces a set of correlated auxiliary random variables with known expectations
or asymptotics (referred to as the control variates) to balance the variation of the original estimator. In our case, since each bootstrapped subsample X̃^k = {x̃_j^k}_{j=1}^n is known to be drawn from the local model p(x | θ̂_k), we can construct a control variate by re-estimating the local model based on X̃^k:
    Bootstrapped Local MLE:  θ̃_k = argmax_{θ∈Θ} Σ_{j=1}^n log p(x̃_j^k | θ),  for k ∈ [d],   (6)
where θ̃_k is known to converge to θ̂_k asymptotically. This allows us to define the following control variates estimator:

    KL-Control Estimator:  θ̂_{KL−C} = θ̂_KL + Σ_{k=1}^d B_k (θ̃_k − θ̂_k),   (7)

where B_k is a matrix chosen to minimize the asymptotic variance of θ̂_{KL−C}; our derivation shows that the asymptotically optimal B_k has the form

    B_k = −( Σ_{k=1}^d I(θ̂_k) )^{−1} I(θ̂_k),  for k ∈ [d],   (8)
where I(θ̂_k) is the empirical Fisher information matrix of the local model p(x | θ̂_k). Note that this differentiates our method from typical control variates methods, where B_k is instead estimated using the empirical covariance between the control variates and the original estimator (in our case, we cannot directly estimate the covariance because θ̂_KL and θ̃_k are not averages of i.i.d. samples). The procedure of our method is summarized in Algorithm 1. Note that the form of (7) shares some similarity with the one-step estimator in Huang and Huo (2015), but Huang and Huo (2015) focuses on improving the linear averaging estimator, which is a different setting from ours.
We analyze the asymptotic property of the estimator θ̂_{KL−C} and summarize it as follows.
Theorem 3. Under Assumption 1, θ̂_{KL−C} is a consistent estimator of θ*_KL as n → ∞, and its asymptotic MSE is guaranteed to be smaller than that of the KL-naive estimator θ̂_KL; that is,

    n·E‖θ̂_{KL−C} − θ*_KL‖² < n·E‖θ̂_KL − θ*_KL‖²,  as n → ∞.

In addition, when N > n·d, θ̂_{KL−C} has "zero variance" in the sense that E‖θ̂_{KL−C} − θ*_KL‖² = O((dn²)^{−1}). Further, in terms of estimating the true parameter, we have

    E‖θ̂_{KL−C} − θ*‖² = O(N^{−1} + (dn²)^{−1}).   (9)
Algorithm 1 KL-Control Variates Method for Combining Local Models
1: Input: local model parameters {θ̂_k}_{k=1}^d.
2: Generate bootstrap data {x̃_j^k}_{j=1}^n from each p(x|θ̂_k), for k ∈ [d].
3: Calculate the KL-naive estimator, θ̂_KL = argmax_{θ∈Θ} Σ_{k=1}^d (1/n) Σ_{j=1}^n log p(x̃_j^k|θ).
4: Re-estimate the local parameters θ̃_k via (6) based on the bootstrapped data subset {x̃_j^k}_{j=1}^n, for k ∈ [d].
5: Estimate the empirical Fisher information matrix I(θ̂_k) = (1/n) Σ_{j=1}^n [∂ log p(x̃_j^k|θ̂_k)/∂θ] [∂ log p(x̃_j^k|θ̂_k)/∂θ]^⊤, for k ∈ [d].
6: Output: the parameter θ̂_{KL−C} of the combined model, given by (7) and (8).
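A toy instantiation of Algorithm 1 for the Gaussian location model (our sketch, not the paper's code): here the Fisher information is the constant 1, so B_k = −(1/d), and the correction cancels the bootstrap noise exactly, illustrating the "zero variance" property of Theorem 3.

import numpy as np

def kl_control_gaussian(theta_local, n, rng):
    """Algorithm 1 for N(theta, 1) local models.

    For this model the KL-naive estimate and the re-estimated locals are both
    bootstrap means, and I(theta_k) = 1 for every k."""
    d = len(theta_local)
    boot = [rng.normal(t, 1.0, size=n) for t in theta_local]
    theta_tilde = np.array([b.mean() for b in boot])     # re-estimated locals (6)
    theta_kl = theta_tilde.mean()                        # KL-naive (steps 2-3)
    B = -1.0 / d                                         # (8) with I(theta_k) = 1
    return theta_kl + np.sum(B * (theta_tilde - np.asarray(theta_local)))  # (7)

rng = np.random.default_rng(2)
theta_local = [1.9, 2.1, 2.0, 2.05]
print(kl_control_gaussian(theta_local, n=50, rng=rng))
# prints exactly mean(theta_local): the bootstrap noise cancels for this model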
The proof is in Appendix B. From (9), we can see that the MSE between θ̂_{KL−C} and θ* reduces to O(N^{−1}) as long as n ≳ (N/d)^{1/2}, which is a significant improvement over the KL-naive method, which requires n ≳ N/d. When the goal is to achieve an O(ε) MSE, we would just need to take n ≳ 1/(dε)^{1/2} when N > 1/ε; that is, n does not need to increase with N when N is very large.
Meanwhile, because θ̂_{KL−C} requires a linear combination of θ̂_k, θ̃_k and θ̂_KL, it carries the practical drawbacks of the linear averaging estimator that we discussed in Section 2. This motivates us to develop another KL-weighted method, shown in the next section, which achieves the same asymptotic efficiency as θ̂_{KL−C} while still inheriting all the practical advantages of KL-averaging.
3.2 KL-Weighted Estimator
Our KL-weighted estimator is based on directly modifying the M-estimator for θ̂_KL in (4), by assigning each bootstrap data point x̃_j^k a positive weight according to the probability ratio p(x̃_j^k | θ̂_k)/p(x̃_j^k | θ̃_k) of the actual local model p(x|θ̂_k) and the re-estimated model p(x|θ̃_k) in (6). Here the probability ratio acts like a multiplicative control variate (Nelson, 1987), which has the advantage of being positive and applicable to non-identifiable, non-additive parameters. Our estimator is defined as

    KL-Weighted Estimator:  θ̂_{KL−W} = argmax_{θ∈Θ} { η̃(θ) ≡ Σ_{k=1}^d (1/n) Σ_{j=1}^n [ p(x̃_j^k|θ̂_k) / p(x̃_j^k|θ̃_k) ] · log p(x̃_j^k|θ) } .   (10)
We first show that this weighted objective η̃(θ) gives a more accurate estimate of η(θ) in (3) than the straightforward estimator η̂(θ) defined in (4), for any θ ∈ Θ.
Lemma 4. As n → ∞, η̃(θ) is a more accurate estimator of η(θ) than η̂(θ), in that

    n·Var(η̃(θ)) ≤ n·Var(η̂(θ)),  as n → ∞,  for any θ ∈ Θ.   (11)
This estimator is motivated by Henmi et al. (2007), in which the same idea is applied to reduce the asymptotic variance in importance sampling. A similar result is also found in Hirano et al. (2003), where it is shown that a weighted estimator with an estimated propensity score is more efficient than the estimator using the true propensity score in estimating average treatment effects. Although a very powerful tool, results of this type seem not to be widely known in machine learning, except for several applications in semi-supervised learning (Sokolovska et al., 2008; Kawakita and Kanamori, 2013) and off-policy learning (Li et al., 2015).
We go a step further and analyze the asymptotic property of our weighted M-estimator θ̂_{KL−W} that maximizes η̃(θ). It is natural to expect that the asymptotic variance of θ̂_{KL−W} is smaller than that of θ̂_KL obtained by maximizing η̂(θ); this is shown in the following theorem.
Theorem 5. Under Assumption 1, ??KL?W is a consistent estimator of ? ?KL as n ? ?, and has a
better asymptotic variance than ??KL , that is,
nEk??KL?W ? ? ?KL k2 ? nEk??KL ? ? ?KL k2 ,
5
when n ? ?.
Algorithm 2 KL-Weighted Method for Combining Local Models
1: Input: Local MLEs {θ̂_k}_{k=1}^d.
2: Generate bootstrap sample {x̃_j^k}_{j=1}^n from each p(x | θ̂_k), for k ∈ [d].
3: Re-estimate the local model parameter θ̃_k in (6) based on the bootstrap subsample {x̃_j^k}_{j=1}^n, for each k ∈ [d].
4: Output: The parameter θ̂_{KL-W} of the combined model is given by (10).
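The following is a minimal sketch of Algorithm 2 in the same illustrative one-dimensional Gaussian setting, assuming (as in (6)) that θ̃_k is the MLE refit on the bootstrap subsample. For this family the weighted M-estimator (10) reduces to a weighted mean, so no numerical optimizer is needed; the model choices are assumptions for illustration only.

```python
import numpy as np

def kl_weighted_gaussian(local_means, n, rng):
    weights, data = [], []
    for mu_hat in local_means:
        x = rng.normal(mu_hat, 1.0, size=n)      # step 2: bootstrap sample
        mu_tilde = x.mean()                      # step 3: refit (eq. 6)
        # control-variate weight p(x | theta_hat) / p(x | theta_tilde);
        # unit variances cancel, leaving a ratio of Gaussian exponents
        w = np.exp(0.5 * ((x - mu_tilde) ** 2 - (x - mu_hat) ** 2))
        weights.append(w); data.append(x)
    w, x = np.concatenate(weights), np.concatenate(data)
    return np.sum(w * x) / np.sum(w)             # maximizer of (10)

rng = np.random.default_rng(0)
print(kl_weighted_gaussian([0.9, 1.1, 1.05, 0.95], n=1000, rng=rng))
```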
When N > n·d, we have E‖θ̂_{KL-W} − θ*_{KL}‖² = O((dn²)⁻¹) as n → ∞. Further, its MSE for estimating the true parameter θ* is

  E‖θ̂_{KL-W} − θ*‖² = O(N⁻¹ + (dn²)⁻¹).   (12)

The proof is in Appendix C. This result is parallel to Theorem 3 for the linear control variates estimator θ̂_{KL-C}. Similarly, it reduces to an O(N⁻¹) rate once we take n ≳ (N/d)^{1/2}.
Meanwhile, unlike the linear control variates estimator, θ̂_{KL-W} inherits all the practical advantages of KL-averaging: it can be applied whenever the KL-naive estimator can be applied, including for models with non-identifiable parameters, or with different numbers of parameters. The implementation of θ̂_{KL-W} is also much more convenient (see Algorithm 2), since it does not need to calculate the Fisher information matrix as required by Algorithm 1.
4 Empirical Experiments
We study the empirical performance of our methods on both simulated and real-world datasets. We first numerically verify the convergence rates predicted by our theoretical results using simulated data, and then demonstrate the effectiveness of our methods in a challenging setting where the numbers of parameters of the local models differ, as decided by the Bayesian information criterion (BIC). Finally, we conclude our experiments by testing our methods on a set of real-world datasets.
The models we tested include probabilistic principal component analysis (PPCA), mixture of PPCA, and Gaussian mixture models (GMM). GMM is given by p(x | θ) = Σ_{s=1}^m α_s N(μ_s, Σ_s), where θ = (α_s, μ_s, Σ_s). The PPCA model is defined with the help of a hidden variable t: p(x | θ) = ∫ p(x | t; θ) p(t | θ) dt, where p(x | t; θ) = N(x; μ + W t, σ²), p(t | θ) = N(t; 0, I), and θ = {μ, W, σ²}. The mixture of PPCA is p(x | θ) = Σ_{s=1}^m α_s p_s(x | θ_s), where θ = {α_s, θ_s}_{s=1}^m and each p_s(x | θ_s) is a PPCA model.
Because all these models are latent variable models with unidentifiable parameters, the direct linear averaging method is not applicable. For GMM, it is still possible to use a matched linear averaging, which matches the mixture components of the different local models by minimizing a symmetric KL divergence; the same idea can be used on our linear control variates method to make it applicable to GMM. On the other hand, because the parameters of PPCA-based models are unidentifiable up to arbitrary orthonormal transforms, linear averaging and linear control variates can no longer be applied easily. We use expectation maximization (EM) to learn the parameters in all three models.
4.1 Numerical Verification of the Convergence Rates
We start by verifying the convergence rates in (5), (9) and (12) of the MSE E‖θ̂ − θ*‖² of the different estimators for estimating the true parameters. Because there is also a non-identifiability problem in calculating the MSE, we again use the symmetric KL divergence to match the mixture components, and evaluate the MSE on W W^⊤ to avoid the non-identifiability w.r.t. orthonormal transforms. To verify the convergence rates w.r.t. n, we fix d and let the total dataset size N be very large so that N⁻¹ is negligible. Figure 1 shows the results when we vary n, where we can see that the MSE of KL-naive θ̂_{KL} is O(n⁻¹) while those of KL-control θ̂_{KL-C} and KL-weighted θ̂_{KL-W} are O(n⁻²); both are consistent with our results in (5), (9) and (12).
In Figure 2(a), we increase the number d of local machines, while using a fixed n and a very large N, and find that both θ̂_{KL} and θ̂_{KL-W} scale as O(d⁻¹) as expected. Note that since the total
observation data size N is fixed, the number of data points on each local machine is N/d, and it decreases as we increase d. It is interesting to see that the performance of the KL-based methods actually improves with more partitions; this is, of course, at the cost of increasing the total bootstrap sample size dn as d increases. Figure 2(b) considers a different setting, in which we increase d while fixing the total observation data size N and the total bootstrap sample size n_tot = n·d. According to (5) and (12), the MSEs of θ̂_{KL} and θ̂_{KL-W} should be about O(n_tot⁻¹) and O(d·n_tot⁻²) respectively when N is very large, and this is consistent with the results in Figure 2(b). It is interesting to note that the MSE of θ̂_{KL} is independent of d while that of θ̂_{KL-W} increases linearly with d. This does not conflict with the fact that θ̂_{KL-W} is better than θ̂_{KL}, since we always have d ≤ n_tot.
Figure 2(c) shows the result when we set n = (N/d)^α and vary α, where we find that θ̂_{KL-W} quickly converges to the global MLE as α increases, while the KL-naive estimator θ̂_{KL} converges significantly more slowly. Figure 2(d) demonstrates the case when we increase N while fixing d and n, where we see the MSE of our KL-weighted estimator θ̂_{KL-W} closely tracks the O(N⁻¹) rate, except when N is very large, in which case the O((dn²)⁻¹) term starts to dominate, while KL-naive is much worse. We also find that the linear averaging estimator performs poorly, and does not scale as O(N⁻¹) as the theoretical rate claims; this is due to the unidentifiable orthonormal transform in the PPCA model that we test on.
[Figure 1 plots: log MSE versus bootstrap size n for each model, comparing KL-Naive, KL-Control, and KL-Weighted.]
Figure 1: Results on different models with simulated data when we change the bootstrap sample size n, with fixed d = 10 and N = 6 × 10⁷. The dimensions of the PPCA models in (a)-(b) are 5, and that of the GMM in (c) is 3. The numbers of mixture components in (b)-(c) are 3. Linear averaging and KL-Control are not applicable for the PPCA-based models, and are not shown in (a) and (b).
[Figure 2 plots: log MSE for Global MLE, Linear, KL-Naive, and KL-Weighted under the four settings described in the caption.]
Figure 2: Further experiments on PPCA with simulated data. (a) Varying d with fixed n and N = 5 × 10⁷. (b) Varying d with N = 5 × 10⁷ and n_tot = n·d = 3 × 10⁵. (c) Varying α with n = (N/d)^α, N = 10⁷ and fixed d. (d) Varying N with n = 10³ and d = 20. The dimension of the data x is 5 and the dimension of the latent variable t is 4.
4.2 Gaussian Mixture with Unknown Number of Components
We further apply our methods to a more challenging setting for distributed learning of GMM when the number of mixture components is unknown. In this case, we first learn each local model with EM and decide its number of components using BIC selection. Both linear averaging and KL-control θ̂_{KL-C} are not applicable in this setting, and we only test KL-naive θ̂_{KL} and KL-weighted θ̂_{KL-W}. Since the MSE is also not computable due to the different dimensions, we evaluate θ̂_{KL} and θ̂_{KL-W} using the log-likelihood on a held-out testing dataset, as shown in Figure 3. We can see that θ̂_{KL-W} generally outperforms θ̂_{KL} as we expect, and the relative improvement increases significantly as the dimension of the observation data x increases. This suggests that our variance reduction technique works very efficiently in high-dimensional problems.
[Figure 3 plots: average test log-likelihood for KL-Naive and KL-Weighted, (a)-(b) versus N, and (c) versus the dimension of the data.]
Figure 3: GMM with the number of mixture components estimated by BIC. We set n = 600 and the true number of mixtures to be 10 in all cases. (a)-(b) Vary the total data size N when the dimension of x is 3 and 80, respectively. (c) Varies the dimension of the data with fixed N = 10⁵. The y-axis is the testing log-likelihood compared with that of the global MLE.
4.3 Results on Real World Datasets
Finally, we apply our methods to several real-world datasets, including the SensIT Vehicle dataset, on which mixture of PPCA is tested, and the Covertype and Epsilon datasets, on which GMM is tested. From Figure 4, we can see that our KL-Weighted and KL-Control (when applicable) again perform the best. The (matched) linear averaging performs poorly on GMM (Figure 4(b)-(c)), and is not applicable to mixture of PPCA.
[Figure 4 plots: average test log-likelihood versus N for the three datasets, comparing Linear-Matched, KL-Naive, KL-Control, and KL-Weighted.]
Figure 4: Testing log-likelihood (compared with that of the global MLE) on real-world datasets. (a) Learning mixture of PPCA on SensIT Vehicle. (b)-(c) Learning GMM on Covertype and Epsilon. The number of local machines is 10 in all cases, and the number of mixture components is taken to be the number of labels in each dataset. The dimension of the latent variables in (a) is 90. For Epsilon, PCA is first applied and the top 100 principal components are chosen. Linear-Matched and KL-Control are not applicable to mixture of PPCA and are not shown in (a).
5 Conclusion and Discussion
We propose two variance reduction techniques for distributed learning of complex probabilistic models, including a KL-weighted estimator that is both statistically efficient and widely applicable even in challenging practical scenarios. Both theoretical and empirical analyses are provided to demonstrate our methods. Future directions include extending our methods to discriminant learning tasks, as well as to the more challenging deep generative networks, for which the exact MLE is not computationally tractable and surrogate likelihood methods with stochastic gradient descent are needed. We note that the same KL-averaging problem also appears in the "knowledge distillation" problem in Bayesian deep neural networks (Korattikara et al., 2015), and it seems that our technique can be applied straightforwardly.
Acknowledgement This work is supported in part by NSF CRII 1565796.
References
S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1), 2011.
Y. Zhang, M. J. Wainwright, and J. C. Duchi. Communication-efficient algorithms for statistical optimization. In NIPS, 2012.
O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction using mini-batches. In JMLR, 2012.
Q. Liu and A. T. Ihler. Distributed estimation, information loss and exponential families. In NIPS, 2014.
J. Rosenblatt and B. Nadler. On the optimality of averaging in distributed statistical learning. arXiv preprint arXiv:1407.2724, 2014.
Y. Zhang, J. Duchi, M. I. Jordan, and M. J. Wainwright. Information-theoretic lower bounds for distributed statistical estimation with communication constraints. In NIPS, 2013.
S. Merugu and J. Ghosh. Privacy-preserving distributed clustering using generative models. In Proceedings of the Third IEEE International Conference on Data Mining (ICDM 2003), pages 211-218. IEEE, 2003.
O. Shamir, N. Srebro, and T. Zhang. Communication efficient distributed optimization using an approximate Newton-type method. In ICML, 2014.
C. Huang and X. Huo. A distributed one-step estimator. arXiv preprint arXiv:1511.01443, 2015.
J. R. Wilson. Variance reduction techniques for digital simulation. American Journal of Mathematical and Management Sciences, 4, 1984.
B. L. Nelson. On control variate estimators. Computers & Operations Research, 14, 1987.
M. Henmi, R. Yoshida, and S. Eguchi. Importance sampling via the estimated sampler. Biometrika, 94(4), 2007.
K. Hirano, G. W. Imbens, and G. Ridder. Efficient estimation of average treatment effects using the estimated propensity score. Econometrica, 71, 2003.
N. Sokolovska, O. Cappé, and F. Yvon. The asymptotics of semi-supervised learning in discriminative probabilistic models. In ICML. ACM, 2008.
M. Kawakita and T. Kanamori. Semi-supervised learning with density-ratio estimation. Machine Learning, 91, 2013.
L. Li, R. Munos, and C. Szepesvári. Toward minimax off-policy value estimation. In AISTATS, 2015.
A. Korattikara, V. Rathod, K. Murphy, and M. Welling. Bayesian dark knowledge. arXiv preprint arXiv:1506.04416, 2015.
Nets with Unreliable Hidden Nodes Learn
Error-Correcting Codes
Stephen Judd
Paul W. Munro
Siemens Corporate Research
755 College Road East
Princeton NJ 08540
Department of Information Science
University of Pittsburgh
Pittsburgh, PA 15260
[email protected]
[email protected]
ABSTRACT
In a multi-layered neural network, any one of the hidden layers can be
viewed as computing a distributed representation of the input. Several
"encoder" experiments have shown that when the representation space is
small it can be fully used. But computing with such a representation
requires completely dependable nodes. In the case where the hidden
nodes are noisy and unreliable, we find that error correcting schemes
emerge simply by using noisy units during training; random errors injected during backpropagation result in spreading representations apart.
Average and minimum distances increase with misfire probability, as
predicted by coding-theoretic considerations. Furthermore, the effect of
this noise is to protect the machine against permanent node failure,
thereby potentially extending the useful lifetime of the machine.
1 INTRODUCTION
The encoder task described by Ackley, Hinton, and Sejnowski (1985) for the Boltzmann machine, and by Rumelhart, Hinton, and Williams (1986) for feed-forward networks, has been used as one of several standard benchmarks in the neural network literature. Cottrell, Munro, and Zipser (1987) demonstrated the potential of such autoencoding architectures for lossy compression of image data. In the encoder architecture, the weights connecting the input layer to the hidden layer play the role of an encoding mechanism, and
the hidden-output weights are analogous to a decoding device. In the terminology of
Shannon and Weaver (1949), the hidden layer corresponds to the communication channel.
By analogy, channel noise corresponds to a fault (misfiring) in the hidden layer. Previous
encoder studies have shown that the representations in the hidden layer correspond to optimally efficient (i.e., fully compressed) codes, which suggests that introducing noise in the form of random interference with hidden unit function may lead to the development of codes more robust to noise of the kind that prevailed during learning. Many of these ideas also appear in Chiueh and Goodman (1987) and Sequin and Clay (1990).
We have tested this conjecture empirically, and analyzed the resulting solutions, using a standard gradient-descent procedure (backpropagation). Although there are alternative techniques to encourage fault tolerance through construction of specialized error functions (e.g., Chauvin, 1989) or direct attacks (e.g., Neti, Schneider, and Young, 1990), we have used a minimalist approach that simply introduces intermittent node misfirings during training that mimic the errors anticipated during normal performance.
In traditional approaches to developing error-correcting codes (eg., Hamming, 1980), each
symbol from a source alphabet is mapped to a codeword (a sequence of symbols from a
code alphabet); the distance between codewords is directly related to the code's robustness.
2 METHODOLOGY
Computer simulations were performed using strictly layered feed-forward networks. The nodes of one of the hidden layers randomly misfire during training; in most experiments, this "channel" layer was the sole hidden layer. Each input node corresponds to a transmitted symbol, output nodes to received symbols, channel representations to codewords; other layers are introduced as needed to enable nonlinear encoding and/or decoding. After training, the networks were analyzed under various conditions, in terms of performance and coding-theoretic measures, such as Hamming distance between codewords.
The response, r, of each unit in the channel layer is computed by passing the weighted sum, x, through the hyperbolic tangent (a sigmoid that ranges from -1 to +1). The responses of those units randomly designated to misfire are then multiplied by -1, as this is most comparable with concepts from coding theory for binary channels.[1] The misfire operation influences the course of learning in two ways, since the erroneous information is both passed on to units further "downstream" in the net, and used as the presynaptic factor in the synaptic modification rule. Note that the derivative factor in the backpropagation procedure is unaffected for units using the hyperbolic tangent, since dr/dx = (1+r)(1-r)/2.
These misfirings were randomly assigned according to various kinds of probability distributions: independent identically distributed (i.i.d.), k-of-n, correlated across hidden units, and correlated over the input distribution. The hidden unit representations required to handle uncorrelated noise roughly correspond to Hamming spheres,[2] and can be decoded by a
[1] Other possible misfire modes include setting the node's activity to zero (or some other constant) or randomizing it. The most appropriate mode depends on various factors, including the situation to be simulated and the type of analysis to be performed. For example, simulating neuronal death in a biological situation may warrant a different failure mode than simulating failure of an electronic component.
[2] Consider an n-bit block code, where each codeword lies on the vertex of an n-cube. The Hamming sphere of radius k is the neighborhood of vertices that differ from the codeword by a number of bits less than or equal to k.
Nets with Unreliable Hidden Nodes Learn Error-Correcting Codes
single layer of weights; thus the entire network consists of just three sets of units:
source-channel-sink. However, correlated noise generally necessitates additional layers.
All the experiments described below use the encoder task described by Ackley, Hinton,
and Sejnowki (1986); that is, the input pattern consists of just one unit active and the
others inactive. The task is to activate only the corresponding unit in the output layer.
By comparison with coding theory, the input units are thus analogous to symbols to be
encoded, and the hidden unit representations are analogous to the code words.
3 RESULTS
3.1 PERFORMANCE
The first experiment supports the claim of Sequin and Clay (1990) that training with
faults improves network robustness. Four 8-30-8 encoders were trained with fault probability p = 0, 0.05, 0.1, and 0.3 respectively. After training, each network was tested with
fault probabilities varying from 0.05 to 1.0. The results show enhanced performance for
networks trained with a higher rate of hidden unit misfiring. Figure 1 shows four performance curves (one for each training fault probability), each as a function of test fault
probability.
Interesting convergence properties were also observed; as the training fault probability, p,
was varied from 0 to 0.4, networks converge reliably faster for low nonzero values
(0.05 < p < 0.15) than they do at p = 0.
[Figure 1 plot: average fraction correct versus test fault probability, one curve per training fault probability p = 0.00, 0.05, 0.10, 0.30.]
Figure 1. Performance for various training conditions. Four 8-30-8 encoders were
trained with different probabilities for hidden unit misfiring. Each data point is an
average over 1000 random stimuli with random hidden unit faults. Outputs are
scored correct if the most active output node corresponds to the active input node.
3.2 DISTANCE
3.2.1 Distances increase with fault probability
Distances were measured between all pairs of hidden unit representations. Several networks trained with different fault probabilities and various numbers of hidden units were
examined. As expected, both the minimum distances and average distances increase with
the training fault probability until it approaches 0.5 per node (see Figure 2). For probabilities above 0.25, the minimum distances fall within the theoretical bounds for a 30 bit
code of a 16 symbol alphabet given by Gilbert and Elias (see Blahut, 1987).
[Figure 2 plot: average and minimum L1 distance versus training fault probability, with the Elias bound shown for reference.]
Figure 2. Distance increases with fault probability. Average and minimum L1
distances are plotted for 16-30-16 networks trained with fault probabilities
ranging from 0.0 to 0.4. Each data point represents an average over 100
networks trained using different weight initializations.
3.2.2. Input probabilities affect distance
The probability distribution over the inputs influences the relative distances of the representations at the hidden unit level. To illustrate this, a 4-10-4 encoder was trained using various probabilities for one of the four inputs (denoted P*), distributing the remaining probability uniformly among the other three. The average distance between the representation of P* and the others increases with its probability, while the average distance among the other three decreases, as shown in the upper part of Figure 3. The more frequent patterns are generally expected to "claim" a larger region of representation space.
[Figure 3 plots: average L1 distance versus Prob(P*), in upper and lower panels as described in the caption below.]
Figure 3. Non-uniform input distribution. 4-10-4 encoders were trained using failure probabilities of 0 (squares), 0.1 (circles), and 0.2 (triangles). The input distribution was skewed by varying the probability of one of the four items (denoted P*) in the training set from 0.05 to 0.5, keeping the other probabilities uniform. Average L1 distances are shown from the manipulated pattern to the other three (open symbols) and among the equiprobables (filled symbols) as well. In the upper figure, failure is independent of the input, while in the lower figure, failure is induced only when P* is presented.
The dashed line in Figure 3 indicates a uniform input distribution, hence in the top figure, the average distance to p* is equal to the average distances among the other patterns.
However this does not hold in the lower figure, indicating that the representations of
stimuli that induce more frequent channel errors also claim more representation space.
3.3 CORRELATED MISFIRING
If the error probability for each bit in a message (or each hidden unit in a network layer) is uncorrelated with the other message bits (hidden units), then the principle of distance between codewords (representations) applies. On the other hand, if there is some structure to the noise (i.e. the misfirings are correlated across the hidden units), there may be different strategies for encoding and decoding that require computations other than simple distance. While a Hamming distance criterion on a hypercube is a linearly separable classification function, and hence computable by a single layer of weights, the more general case is not linearly separable, as is demonstrated below.
Example: Misfiring in 2 of 6 channel units.
In this example, up to two of six channel units are randomly selected to misfire with each
learning trial. In order to guarantee full recovery from two simultaneous faults, only two
symbols can be represented, if the faults are independent; however, if one fault is always
in one three-unit subset and the other is always in the complementary subset, it is possible to store four patterns. The following code can be considered with no loss of generality: Let the six hidden units (code bits) be partitioned into two sets of three, where there is
at most one fault in each subset. The four code words, 000000, 000111, 111000,
111111 form an error correcting code under this condition; i.e. each subset is a triplicate
code. Under the allowed fault combinations specified above, any given transmitted code
string will be converted by noise to one of 9 strings of the 15 that lie at a Hamming distance of 2 (the 15 unconstrained two-bit errors of the string 000000 are shown in the
table below with the 9 that satisfy the constraint in a box). Because of the symmetric
distribution of these 9 allowed states, any category that includes all of them and is defined
by a linear (hyperplane) boundary, must include all 15. Thus, this code cannot be decoded
by a single layer of threshold (or sigmoidal) units; hence even if a 4-6-4 network discovers this code, it will not decode it accurately. However, our experiments show that inserting a reliable (fault-free) hidden layer of just two units between the channel layer and
the output layer (i.e., a 4-6-2-4 encoder) enables the discovery of a code that is robust to
errors of this kind. The representations of the four patterns in the channel layer show a
triply redundant code in each half of the channel layer (Figure 4). The 2-unit layer provides a transformation that allows successful decoding of channel representations with
faults.
Table. Possible two-bit error masks (the 9 masks that satisfy the constraint, originally shown in a box, are marked here with *)
000011    000101    001001*   010001*   100001*
000110    001010*   001100*   010010*   010100*
011000    100010*   100100*   101000    110000
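The constrained code can also be checked mechanically: under the allowed fault combinations (one misfire in each three-unit half), majority-vote decoding of each half recovers every codeword. The small script below is an illustrative verification, independent of any trained network.

```python
from itertools import product

codewords = ["000000", "000111", "111000", "111111"]

def decode(word):
    # majority vote within each triplicate 3-bit half
    return "".join(("1" if h.count("1") >= 2 else "0") * 3
                   for h in (word[:3], word[3:]))

for c in codewords:
    for i, j in product(range(3), range(3, 6)):   # one fault in each half
        noisy = list(c)
        noisy[i] = "1" if noisy[i] == "0" else "0"
        noisy[j] = "1" if noisy[j] == "0" else "0"
        assert decode("".join(noisy)) == c
print("all constrained two-bit faults decoded correctly")
```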
[Figure 4 diagram: thresholded activation patterns across the Input, Channel, Decoder, and Output layers.]
Figure 4. Sample solution to the 3-3 channel task. Thresholded activation patterns are shown for a 4-6-2-4 network. Errors are introduced into the first hidden (channel) layer only. With each iteration, the outputs of one hidden unit from the left half of the channel layer and one unit from the right half can be inverted. Note that the channel develops a triplicate code for each half-layer.
4 DISCUSSION
Results indicate that vanilla backpropagation on its own does not spread out the hidden
unit representations (codewords) optimally, and that deliberate random misfiring during
training induces wider separations, increasing resistance to node misfiring. Furthermore,
non-uniform input distributions and non-uniform channel properties lead to asymmetries
among the similarity relationships between hidden unit representations that are consistent
with optimizing mutual information.
A mechanism of this kind may be useful for increasing fault tolerance in electronic systems, and may be used in neurobiological systems. The potential usefulness of inducing
faults during training extends beyond fault tolerance. Clay and Sequin (1992) point out
that training of this kind can enhance the capacity of a network to generalize. In effect,
the probability of random faults can be used to vary the number of "effective parameters"
(a term coined by Moody, 1992) available for adaptation, without dynamically altering
network architecture. Thus, a naive system might begin with a relatively high probability of misfiring, and gradually reduce it as storage capacity needs increase with experience.
This technique may be particularly valuable for designing efficient, robust codes for channels with high order statistical properties, which defy traditional coding techniques. In
such cases, a single layer of weights for encoding is not generally sufficient, as was
shown above in the 4-6-2-4 example. Additional layers may enhance code efficiency for
complex noiseless applications, such as image compression (Cottrell, Munro, and Zipser,
1987).
Acknowledgements
The second author participated in this research as a visiting research scientist during the
summers of 1991 and 1992 at Siemens Corporate Research, which kindly provided financial support and a stimulating research environment.
References
Ackley, D. H., Hinton, G. E., and Sejnowski, T. J. (1985) A learning algorithm for Boltzmann machines. Cognitive Science, 9: 147-169.
Blahut, R. E. (1987) Principles and Practice of Information Theory. Reading, MA: Addison-Wesley.
Chauvin, Y. (1989) A back-propagation algorithm with optimal use of hidden units. In: Touretzky, D. S. (ed.) Advances in Neural Information Processing Systems 1. San Mateo, CA: Morgan Kaufmann Publishers.
Chiueh, Tz-Dar and Rodney Goodman (1987) A neural network classifier based on coding theory. In: Dana Z. Anderson, editor, Neural Information Processing Systems, pp. 174-183, New York, A.I.P.
Clay, Reed D. and Sequin, Carlo H. (1992) Fault tolerance training improves generalization and robustness. Proceedings of IJCNN-92, I-769, Baltimore.
Cottrell, G. W., P. Munro, and D. Zipser (1987) Image compression by back propagation: An example of extensional programming. Ninth Annual Meeting of the Cognitive Science Society, pp. 461-473.
Hamming, R. W. (1980) Coding and Information Theory. Prentice Hall: Englewood Cliffs, NJ.
Moody, J. (1992) The effective number of parameters. In: Moody, J. E., Hanson, S. J., Lippmann, R. (eds.) Advances in Neural Information Processing Systems 4. San Mateo, CA: Morgan Kaufmann Publishers.
Neti, C., M. H. Schneider, and E. D. Young (1990) Maximally fault-tolerant neural networks and nonlinear programming. Proceedings of IJCNN, II-483, San Diego.
Rumelhart, D., Hinton, G., and Williams, R. (1986) Learning representations by back-propagating errors. Nature, 323: 533-536.
Sequin, Carlo H. and Reed D. Clay (1990) Fault tolerance in artificial neural networks. Proceedings of IJCNN, I-703, San Diego.
Shannon, C. and Weaver, W. (1949) The Mathematical Theory of Communication. University of Illinois Press.
Differential Privacy without Sensitivity
Kentaro Minami
The University of Tokyo
kentaro [email protected]
Issei Sato
The University of Tokyo
[email protected]
Hiromi Arai
The University of Tokyo
[email protected]
Hiroshi Nakagawa
The University of Tokyo
[email protected]
Abstract
The exponential mechanism is a general method to construct a randomized estimator that satisfies (ε, 0)-differential privacy. Recently, Wang et al. showed that the Gibbs posterior, which is a data-dependent probability distribution that contains the Bayesian posterior, is essentially equivalent to the exponential mechanism under certain boundedness conditions on the loss function. While the exponential mechanism provides a way to build an (ε, 0)-differentially private algorithm, it requires boundedness of the loss function, which is quite stringent for some learning problems. In this paper, we focus on (ε, δ)-differential privacy of Gibbs posteriors with convex and Lipschitz loss functions. Our result extends the classical exponential mechanism, allowing the loss functions to have an unbounded sensitivity.
1 Introduction
Differential privacy is a notion of privacy that provides a statistical measure of privacy protection for randomized statistics. In the field of privacy-preserving learning, constructing estimators that satisfy (ε, δ)-differential privacy is a fundamental problem. In recent years, differentially private algorithms for various statistical learning problems have been developed [8, 14, 3].
Usually, the estimator construction procedure in statistical learning contains the following minimization problem of a data-dependent function. Given a dataset Dn = {x_1, . . . , x_n}, a statistician chooses a parameter θ that minimizes a cost function L(θ, Dn). A typical example of a cost function is the empirical risk function, that is, a sum of loss functions ℓ(θ, x_i) evaluated at each sample point x_i ∈ Dn. For example, the maximum likelihood estimator (MLE) is given by the minimizer of the empirical risk with loss function ℓ(θ, x) = − log p(x | θ).
To achieve a differentially private estimator, one natural idea is to construct an algorithm based on posterior sampling, namely drawing a sample from a certain data-dependent probability distribution. The exponential mechanism [16], which can be regarded as a posterior sampling, provides a general method to construct a randomized estimator that satisfies (ε, 0)-differential privacy. The probability density of the output of the exponential mechanism is proportional to exp(−βL(θ, Dn))π(θ), where π(θ) is an arbitrary prior density function, and β > 0 is a parameter that controls the degree of concentration. The resulting distribution is highly concentrated around the minimizer θ* ∈ argmin_θ L(θ, Dn). Note that most differentially private algorithms involve a procedure to add some noise (e.g. the Laplace mechanism [12], objective perturbation [8, 14], and gradient perturbation [3]), while the posterior sampling explicitly designs the density of the output distribution.
[Figure 1 sketch: the losses ℓ(θ, x+) and ℓ(θ, x−) (solid lines) and the gradient magnitudes |∇ℓ(θ, x+)| and |∇ℓ(θ, x−)| (dashed lines) as functions of θ.]
Figure 1: An example of a logistic loss function ℓ(θ, x) := log(1 + exp(−yθ^⊤z)). Considering two points x± = (z, ±1), the difference of the loss |ℓ(θ, x+) − ℓ(θ, x−)| increases proportionally to the size of the parameter space (solid lines). In this case, the value of β in the exponential mechanism, which is inversely proportional to the maximum difference of the loss function, becomes very small. On the other hand, the difference of the gradients |∇ℓ(θ, x+) − ∇ℓ(θ, x−)| does not exceed twice the Lipschitz constant (dashed lines). Hence, our analysis based on the Lipschitz property is not influenced by the size of the parameter space.
Table 1: Regularity conditions for (ε, δ)-differential privacy of the Gibbs posterior. Instead of the boundedness of the loss function, our analysis in Theorem 7 requires its Lipschitz property and convexity. Unlike the classical exponential mechanism, our result explains the "shrinkage effect" or "contraction effect", namely, the upper bound for β depends on the concavity of the prior π and the size of the dataset n.

                              (ε, δ)   Loss function ℓ                          Prior π       Shrinkage
Exponential mechanism [16]    δ = 0    Bounded sensitivity                      Arbitrary     No
Theorem 7                     δ > 0    Lipschitz and convex                     Log-concave   Yes
Theorem 10                    δ > 0    Bounded, Lipschitz and strongly convex   Log-concave   Yes
We define the density of the Gibbs posterior distribution as

  G_β(θ | Dn) := exp(−β Σ_{i=1}^n ℓ(θ, x_i)) π(θ) / ∫ exp(−β Σ_{i=1}^n ℓ(θ, x_i)) π(θ) dθ.   (1)
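For intuition, the sketch below draws from a one-dimensional Gibbs posterior of the form (1) by discretizing θ on a grid. The squared loss, standard Gaussian prior, and grid bounds are illustrative assumptions; practical implementations use MCMC or variational approximations instead (see Section 4).

```python
import numpy as np

def gibbs_posterior_sample(data, beta, n_draws, rng):
    theta = np.linspace(-10, 10, 4001)                  # parameter grid
    loss = 0.5 * (theta[None, :] - data[:, None]) ** 2  # l(theta, x_i)
    log_prior = -0.5 * theta ** 2                       # N(0, 1) prior pi
    log_w = -beta * loss.sum(axis=0) + log_prior        # log of (1), unnormalized
    w = np.exp(log_w - log_w.max())                     # stabilized weights
    return rng.choice(theta, size=n_draws, p=w / w.sum())

rng = np.random.default_rng(0)
data = rng.normal(1.0, 1.0, size=50)
print(gibbs_posterior_sample(data, beta=0.1, n_draws=5, rng=rng))
```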
The Gibbs posterior plays important roles in several learning problems, especially in PAC-Bayesian learning theory [6, 21]. In the context of differential privacy, Wang et al. [20] recently pointed out that the Bayesian posterior, which is a special version of (1) with β = 1 and a specific loss function, satisfies (ε, 0)-differential privacy because it is equivalent to the exponential mechanism under a certain regularity condition. Bassily et al. [3] studied an application of the exponential mechanism to private convex optimization.
In this paper, we study the (ε, δ)-differential privacy of posterior sampling with δ > 0. In particular, we consider the following statement.
Claim 1. Under a suitable condition on the loss function ℓ and prior π, there exists an upper bound B(ε, δ) > 0, and the Gibbs posterior G_β(· | Dn) with β ≤ B(ε, δ) satisfies (ε, δ)-differential privacy. The value of B(ε, δ) does not depend on the boundedness of the loss function.
We point out here that the analyses of (ε, 0)-differential privacy and (ε, δ)-differential privacy with δ > 0 are conceptually different in the regularity conditions they require. On one hand, the exponential mechanism essentially requires the boundedness of the loss function to satisfy (ε, 0)-differential privacy. On the other hand, boundedness is not a necessary condition in (ε, δ)-differential privacy. In this paper, we give a new sufficient condition for (ε, δ)-differential privacy based on convexity and the Lipschitz property. Our analysis widens the application range of the exponential mechanism in the following aspects (see also Table 1).
• (Removal of boundedness assumption) If the loss function is unbounded, which is usually the case when the parameter space is unbounded, the Gibbs posterior does not satisfy (ε, 0)-differential privacy in general. Still, in some cases, we can build an (ε, δ)-differentially private estimator.
• (Tighter evaluation of β) Even when the difference of the loss function is bounded, our analysis can yield a better scheme for determining the appropriate value of β for a given privacy level. Figure 1 shows an example with the logistic loss.
• (Shrinkage and contraction effect) Intuitively speaking, the Gibbs posterior becomes robust against a small change of the dataset if the prior π has a strong shrinkage effect (e.g. a Gaussian prior with a small variance), or if the size of the dataset n tends to infinity. In our analysis, the upper bound of β depends on π and n, which explains such shrinkage and contraction effects.
1.1 Related work
(ε, δ)-differential privacy of Gibbs posteriors has been studied by several authors. Mir ([18], Chapter 5) proved that a Gaussian posterior in a specific problem satisfies (ε, δ)-differential privacy. Dimitrakakis et al. [10] considered Lipschitz-type sufficient conditions, yet their result requires some modification of the definition of the neighborhood on the database.
In general, the utility of sensitivity-based methods suffers from the size of the parameter space Θ. Thus, getting around the dependency on the size of Θ is a fundamental problem in the study of differential privacy. For discrete parameter spaces, a general range-independent algorithm for (ε, δ)-differentially private maximization was developed in [7].
1.2 Notations
The set of all probability measures on a measurable space (Θ, T) is denoted by M¹₊(Θ). A map between two metric spaces f : (X, d_X) → (Y, d_Y) is said to be L-Lipschitz if d_Y(f(x_1), f(x_2)) ≤ L d_X(x_1, x_2) holds for all x_1, x_2 ∈ X. Let f be a twice continuously differentiable function defined on a subset of R^d. f is said to be m(> 0)-strongly convex if the eigenvalues of its Hessian ∇²f are bounded below by m, and f is said to be M-smooth if they are bounded above by M.
2 Differential privacy with sensitivity
In this section, we review the definition of (ε, δ)-differential privacy and the exponential mechanism.
2.1 Differential privacy
Differential privacy is a notion of privacy that provides a degree of privacy protection in a statistical
sense. More precisely, differential privacy defines a closeness between any two output distributions
that correspond to adjacent datasets.
In this paper, we assume that a dataset D = Dn = (x_1, . . . , x_n) is a vector that consists of n points in an abstract attribute space X, where each entry x_i ∈ X represents information contributed by one individual. Two datasets D, D′ are said to be adjacent if d_H(D, D′) = 1, where d_H is the Hamming distance defined on the space of all possible datasets X^n.
We describe the definition of differential privacy in terms of randomized estimators. A randomized estimator is a map ρ : X^n → M¹₊(Θ) from the space of datasets to the space of probability measures.
Definition 2 (Differential privacy). Let ε > 0 and δ ≥ 0 be given privacy parameters. We say that a randomized estimator ρ : X^n → M¹₊(Θ) satisfies (ε, δ)-differential privacy if, for any adjacent datasets D, D′ ∈ X^n, the inequality

  ρ_D(A) ≤ e^ε ρ_{D′}(A) + δ   (2)

holds for every measurable set A ⊆ Θ.
2.2 The exponential mechanism
The exponential mechanism [16] is a general construction of (ε, 0)-differentially private distributions. For an arbitrary function L : Θ × X^n → R, we define the sensitivity by

  ΔL := sup_{D,D′∈X^n : d_H(D,D′)=1} sup_{θ∈Θ} |L(θ, D) − L(θ, D′)|,   (3)

which is the largest possible difference of two adjacent functions L(·, D) and L(·, D′) with respect to the supremum norm.
Theorem 3 (McSherry and Talwar). Suppose that the sensitivity of the function L(θ, Dn) is finite. Let ν be an arbitrary base measure on Θ. Take a positive number β so that β ≤ ε/2ΔL. Then a probability distribution whose density with respect to ν is proportional to exp(−βL(θ, Dn)) satisfies (ε, 0)-differential privacy.
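For a finite candidate set, the mechanism is straightforward to implement. The sketch below samples θ with probability proportional to exp(−βL(θ, Dn)) under a uniform base measure, with β = ε/(2ΔL); the candidate grid and cost function are illustrative assumptions (the absolute-deviation cost has sensitivity at most 1 here because each record lies in [0, 1]).

```python
import numpy as np

def exponential_mechanism(candidates, cost, data, eps, delta_L, rng):
    beta = eps / (2.0 * delta_L)
    scores = np.array([-beta * cost(t, data) for t in candidates])
    p = np.exp(scores - scores.max())       # stabilized unnormalized density
    return rng.choice(candidates, p=p / p.sum())

rng = np.random.default_rng(0)
data = rng.uniform(0, 1, size=100)
cands = np.linspace(0, 1, 101)
cost = lambda t, d: np.abs(d - t).sum()     # one record change moves this by <= 1
print(exponential_mechanism(cands, cost, data, eps=1.0, delta_L=1.0, rng=rng))
```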
We consider the particular case in which the cost function is given in sum form, L(θ, Dn) = Σ_{i=1}^n ℓ(θ, x_i). Recently, Wang et al. [20] examined two typical cases in which ΔL is finite. The following statement slightly generalizes their result.
Theorem 4 (Wang et al.). (a) Suppose that the loss function ℓ is bounded by A, namely |ℓ(θ, x)| ≤ A holds for all x ∈ X and θ ∈ Θ. Then ΔL ≤ 2A, and the Gibbs posterior (1) satisfies (4βA, 0)-differential privacy.
(b) Suppose that for any fixed θ ∈ Θ, the difference |ℓ(θ, x_1) − ℓ(θ, x_2)| is bounded by L for all x_1, x_2 ∈ X. Then ΔL ≤ L, and the Gibbs posterior (1) satisfies (2βL, 0)-differential privacy.
The condition ΔL < ∞ is crucial for Theorem 3 and cannot be removed. However, in practice, statistical models of interest do not necessarily satisfy such boundedness conditions. Here we give two simple examples of Bernoulli and Gaussian mean estimation problems, in which the sensitivities are unbounded.
• (Bernoulli mean) Let ℓ(p, x) = −x log p − (1 − x) log(1 − p) (p ∈ (0, 1), x ∈ {0, 1}) be the negative log-likelihood of the Bernoulli distribution. Then |ℓ(p, 0) − ℓ(p, 1)| is unbounded.
• (Gaussian mean) Let ℓ(θ, x) = (1/2)(θ − x)² (θ ∈ R, x ∈ R) be the negative log-likelihood of the Gaussian distribution with unit variance. Then |ℓ(θ, x) − ℓ(θ, x′)| is unbounded if x ≠ x′.
Thus, in the next section, we consider an alternative proof technique for (ε, δ)-differential privacy that does not require such boundedness conditions.
Differential privacy without sensitivity
In this section, we state our main results for (?, ?)-differential privacy in the form of Claim 1.
There is a well-known sufficient condition for the (?, ?)-differential privacy:
Theorem 5 (See for example Lemma 2 of [13]). Let ? > 0 and ? > 0 be privacy parameters.
Suppose that a randomized estimator ? : X n ? M1+ (?) satisfies a tail-bound inequality of logdensity ratio
d?D
?? ??
(4)
?D log
d?D0
for every adjacent pair of datasets D, D0 . Then ? satisfies (?, ?)-differential privacy.
4
To control the tail behavior (4) of the log-density ratio function log(dρ_D / dρ_{D′}), we consider its concentration around its expectation. Roughly speaking, inequality (4) holds if there exists an increasing function φ(t) that satisfies an inequality

  ∀t > 0,  ρ_D( log(dρ_D / dρ_{D′}) ≥ D_KL(ρ_D, ρ_{D′}) + t ) ≤ exp(−φ(t)),   (5)

where log(dρ_D / dρ_{D′}) is the log-density ratio function, and D_KL(ρ_D, ρ_{D′}) := E_{ρ_D} log(dρ_D / dρ_{D′}) is the Kullback-Leibler (KL) divergence. Suppose that the Gibbs posterior G_{β,D}, whose density G_β(θ | D) is defined by (1), satisfies an inequality (5) for a certain φ(t) = φ(t, β). Then G_{β,D} satisfies (4) if there exist β, t > 0 that satisfy the following two conditions.
1. KL-divergence bound: D_KL(G_{β,D}, G_{β,D′}) + t ≤ ε
2. Tail-probability bound: exp(−φ(t, β)) ≤ δ
3.1 Convex and Lipschitz loss
Here, we examine the case in which the loss function ℓ is Lipschitz and convex, and the parameter space Θ is the entire Euclidean space R^d. Due to the unboundedness of the domain, the sensitivity ΔL can be infinite, in which case the exponential mechanism cannot be applied.
Assumption 6. (i) Θ = R^d.
(ii) For any x ∈ X, ℓ(·, x) is non-negative, L-Lipschitz and convex.
(iii) − log π(θ) is twice differentiable and m_π-strongly convex.
In Assumption 6, the loss function ℓ(θ, x) and the difference |ℓ(θ, x_1) − ℓ(θ, x_2)| can be unbounded. Thus, the classical argument of the exponential mechanism in Section 2.2 cannot be applied. Nevertheless, our analysis shows that the Gibbs posterior satisfies (ε, δ)-differential privacy.
Theorem 7. Let β ∈ (0, 1] be a fixed parameter, and let D, D′ ∈ X^n be an adjacent pair of datasets. Under Assumption 6, the inequality

  G_{β,D}( log(dG_{β,D} / dG_{β,D′}) ≥ α ) ≤ exp( −(m_π / 8L²β²) (α − 2L²β²/m_π)² )   (6)

holds for any α > 2L²β²/m_π.
The Gibbs posterior G_{β,D} satisfies (ε, δ)-differential privacy if β > 0 is taken so that the right-hand side of (6), evaluated at α = ε, is bounded by δ. It is elementary to check the following statement:
Corollary 8. Let ε > 0 and 0 < δ < 1 be privacy parameters. Taking β so that it satisfies

  β ≤ (ε / 2L) √( m_π / (1 + 2 log(1/δ)) ),   (7)

the Gibbs posterior G_{β,D} satisfies (ε, δ)-differential privacy.
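A small helper makes the bound (7) concrete; the numeric inputs are illustrative assumptions.

```python
import math

def beta_bound(eps, delta, L, m_pi):
    # largest beta allowed by (7):
    # beta <= (eps / (2 L)) * sqrt(m_pi / (1 + 2 log(1/delta)))
    return (eps / (2.0 * L)) * math.sqrt(m_pi / (1.0 + 2.0 * math.log(1.0 / delta)))

print(beta_bound(eps=1.0, delta=1e-6, L=1.0, m_pi=1.0))
```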
Note that the right-hand side of (6) depends on the strong concavity m_π. The strong-concavity parameter corresponds to the precision (i.e. inverse variance) of a Gaussian, and a distribution with large m_π becomes spiky. Intuitively, if we use a prior that has a strong shrinkage effect, then the posterior becomes robust against a small change of the dataset, and consequently differential privacy can be satisfied with little effort. This observation is justified in the following sense: the upper bound of β grows proportionally to √m_π. In contrast, the classical exponential mechanism does not have that kind of prior-dependency.
3.2
Strongly convex loss
Let ℓ̃ be a strongly convex function defined on the entire Euclidean space R^d. If ℓ is a restriction of ℓ̃ to a compact L²-ball, the Gibbs posterior can satisfy (ε, 0)-differential privacy with a certain privacy level ε > 0 because of the boundedness of ℓ. However, using the boundedness of ∇ℓ rather than that of ℓ itself, we can give another guarantee for (ε, δ)-differential privacy.
Assumption 9. Suppose that a function ℓ̃ : R^d × X → R is twice differentiable and m_ℓ-strongly convex with respect to its first argument. Let π̃ be a finite measure over R^d such that −log π̃(θ) is twice differentiable and m_π-strongly convex. Let G̃_{β,D} be the Gibbs posterior on R^d whose density with respect to the Lebesgue measure is proportional to exp(−β Σ_i ℓ̃(θ, x_i)) π̃(θ). Assume that the mean of G̃_{β,D} is contained in an L²-ball of radius ρ:
$$ \forall D \in \mathcal{X}^n, \quad \bigl\| \mathbb{E}_{\tilde{G}_{\beta,D}}[\theta] \bigr\|_2 \le \rho. \qquad (8) $$
Define a positive number γ > 1. Assume that (Θ, ℓ, π) satisfies the following conditions.
(i) Θ is a compact L²-ball centered at the origin, and its radius R_Θ satisfies R_Θ ≥ ρ + γ√(d/m_π).
(ii) For any x ∈ X, ℓ(·, x) is L-Lipschitz and convex. In other words, L := sup_{x∈X} sup_{θ∈Θ} ‖∇_θ ℓ(θ, x)‖_2 is bounded.
(iii) π is given by the restriction of π̃ to Θ.
The following statements are the counterparts of Theorem 7 and its corollary.
Theorem 10. Let β ∈ (0, 1] be a fixed parameter, and D, D' ∈ X^n be an adjacent pair of datasets. Under Assumption 9, the inequality
$$ G_{\beta,D}\!\left( \log\frac{dG_{\beta,D}}{dG_{\beta,D'}} \ge \varepsilon \right) \le \exp\!\left( -\frac{nm_\ell\beta + m_\pi}{4C'\beta^2}\left( \varepsilon - \frac{C'\beta^2}{nm_\ell\beta + m_\pi} \right)^2 \right) \qquad (9) $$
holds for any ε > C'β²/(nm_ℓβ + m_π). Here, we defined C' := 2CL²(1 + log(γ²/(γ² − 1))), where C > 0 is a universal constant that does not depend on any other quantities.
Corollary 11. Under Assumption 9, there exists an upper bound B(ε, δ) = B(ε, δ, n, m_ℓ, m_π, γ) > 0 such that G_β(· | D^n) with β ≤ B(ε, δ) satisfies (ε, δ)-differential privacy.

Similar to Corollary 8, the upper bound on β depends on the prior. Moreover, the right-hand side of (9) decreases to 0 as the size of the dataset n increases, which implies that (ε, δ)-differential privacy is satisfied almost for free if the size of the dataset is large.
3.3 Example: Logistic regression
In this section, we provide an application of Theorem 7 to the problem of linear binary classification. Let Z := {z ∈ R^d : ‖z‖_2 ≤ r} be the space of input variables. The space of observations is the set of input variables equipped with a binary label, X := {x = (z, y) ∈ Z × {−1, +1}}. The problem is to determine the parameter θ = (a, b) of a linear classifier f_θ(z) = sgn(a^⊤z + b).

Define a loss function ℓ_LR by
$$ \ell_{\mathrm{LR}}(\theta, x) := \log\bigl(1 + \exp(-y(a^\top z + b))\bigr). \qquad (10) $$
The ℓ²-regularized logistic regression estimator is given by
$$ \hat{\theta}_{\mathrm{LR}} = \operatorname*{argmin}_{\theta \in \mathbb{R}^{d+1}} \left\{ \frac{1}{n}\sum_{i=1}^n \ell_{\mathrm{LR}}(\theta, x_i) + \frac{\lambda}{2}\|\theta\|_2^2 \right\}, \qquad (11) $$
where λ > 0 is a regularization parameter. The corresponding Gibbs posterior has density
$$ G_\beta(\theta \mid D) \propto \prod_{i=1}^n \sigma\bigl(y_i(a^\top z_i + b)\bigr)^\beta\, \phi_{d+1}\bigl(\theta \mid 0, (n\lambda)^{-1} I\bigr), \qquad (12) $$
where σ(u) = (1 + exp(−u))^{−1} is the sigmoid function, and φ_{d+1}(· | μ, Σ) is the density of a (d + 1)-dimensional Gaussian distribution. It is easy to check that ℓ_LR(·, x) is r-Lipschitz and convex, and that −log φ_{d+1}(· | 0, (nλ)^{−1}I) is (nλ)-strongly convex. Hence, by Corollary 8, the Gibbs posterior satisfies (ε, δ)-differential privacy if
$$ \beta \le \frac{\varepsilon}{2r}\sqrt{\frac{n\lambda}{1 + 2\log(1/\delta)}}. \qquad (13) $$
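Putting (12) and (13) together, the following NumPy sketch (names ours) computes the calibrated β and the unnormalized log-density of the private Gibbs posterior — exactly the quantity an MCMC sampler such as the LMC of Section 4 would target:

```python
import numpy as np

def beta_private_logreg(eps, delta, r, n, lam):
    # Calibration (13), i.e., Corollary 8 with L = r and m_pi = n * lam.
    return (eps / (2.0 * r)) * np.sqrt(n * lam / (1.0 + 2.0 * np.log(1.0 / delta)))

def log_gibbs_density(theta, Z, y, beta, lam):
    """Unnormalized log-density of (12), with theta = (a, b)."""
    a, b = theta[:-1], theta[-1]
    margins = y * (Z @ a + b)
    log_lik = -np.sum(np.log1p(np.exp(-margins)))       # sum_i log sigma(y_i(a^T z_i + b))
    log_prior = -0.5 * len(y) * lam * np.dot(theta, theta)  # Gaussian prior, up to a constant
    return beta * log_lik + log_prior

rng = np.random.default_rng(0)
Z = rng.normal(size=(100, 5))
Z /= np.maximum(1.0, np.linalg.norm(Z, axis=1, keepdims=True))  # enforce ||z||_2 <= r = 1
y = rng.choice([-1.0, 1.0], size=100)
beta = beta_private_logreg(eps=1.0, delta=1e-6, r=1.0, n=100, lam=0.1)
print(beta, log_gibbs_density(np.zeros(6), Z, y, beta, lam=0.1))
```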
4 Approximation Arguments
In practice, exact samplers of Gibbs posteriors (1) are rarely available. Actual implementations
involve some approximation processes. Markov Chain Monte Carlo (MCMC) methods and Variational Bayes (VB) [1] are commonly used to obtain approximate samplers of Gibbs posteriors. The
next proposition, which is easily obtained as a variant of Proposition 3 of [20], gives a differential
privacy guarantee under approximation.
Proposition 12. Let ρ : X^n → M_+^1(Θ) be a randomized estimator that satisfies (ε, δ)-differential privacy. If for all D there exists an approximate sampling procedure ρ'_D such that d_TV(ρ_D, ρ'_D) ≤ γ, then the randomized mechanism D ↦ ρ'_D satisfies (ε, δ + (1 + e^ε)γ)-differential privacy. Here, d_TV(μ, ν) = sup_{A∈T} |μ(A) − ν(A)| is the total variation distance.
We now describe a concrete example of MCMC, the Langevin Monte Carlo (LMC) algorithm. Let θ^(0) ∈ R^d be an initial point of the Markov chain. The LMC algorithm for the Gibbs posterior G_{β,D} consists of the following iterations:
$$ \theta^{(t+1)} = \theta^{(t)} - h\nabla U(\theta^{(t)}) + \sqrt{2h}\,\xi^{(t+1)}, \qquad (14) $$
$$ U(\theta) = \beta\sum_{i=1}^n \ell(\theta, x_i) - \log\pi(\theta). \qquad (15) $$
Here ξ^(1), ξ^(2), … ∈ R^d are noise vectors independently drawn from a centered Gaussian N(0, I).
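Iterations (14)–(15) translate directly into code. A minimal NumPy sketch (an illustration, not the authors' implementation; gradients are supplied by the caller):

```python
import numpy as np

def lmc_sample(grad_loss_sum, grad_log_prior, theta0, beta, h, T, rng):
    """Run T steps of the LMC recursion (14) for the potential U in (15).

    grad_loss_sum(theta):  gradient of sum_i l(theta, x_i).
    grad_log_prior(theta): gradient of log pi(theta).
    """
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(T):
        grad_U = beta * grad_loss_sum(theta) - grad_log_prior(theta)
        theta = theta - h * grad_U + np.sqrt(2.0 * h) * rng.standard_normal(theta.shape)
    return theta  # one (approximate) draw from G_{beta,D}
```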
This algorithm can be regarded as a discretization of a stochastic differential equation that has stationary distribution G_{β,D}, and its convergence property has been studied in a finite-time sense [9, 5, 11]. Let us denote by ν^(t) the law of θ^(t). If d_TV(ν^(t), G_{β,D}) ≤ γ holds for all t ≥ T, then the privacy of the LMC sampler is obtained from Proposition 12. In fact, we can prove by Corollary 1 of [9] the following proposition.
Proposition 13. Assume that Assumption 6 holds. Let ℓ(·, x) be M_ℓ-smooth for all x ∈ X, and −log π(θ) be M_π-smooth. Let d ≥ 2 and γ ∈ (0, 1/2). We can choose β > 0, by Corollary 8, so that G_{β,D} satisfies (ε, δ)-differential privacy. Let us set the step size h of the LMC algorithm (14) as
$$ h = \frac{2 m_\pi \gamma^2}{d\,(n\beta M_\ell + M_\pi)^2 \left[ 4\log\frac{1}{\gamma} + d\log\frac{n\beta M_\ell + M_\pi}{m_\pi} \right]}, \qquad (16) $$
and set T as
$$ T = \frac{d\,(n\beta M_\ell + M_\pi)^2}{4 m_\pi^2 \gamma^2} \left[ 4\log\frac{1}{\gamma} + d\log\frac{n\beta M_\ell + M_\pi}{m_\pi} \right]^2. \qquad (17) $$
Then, after T iterations of (14), ν^(T) satisfies (ε, δ + (1 + e^ε)γ)-differential privacy.
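For concreteness, a small Python helper (names ours) evaluates the schedule (16)–(17); the exact constants should be read from the statement above, so treat the numbers as indicative:

```python
import math

def lmc_schedule(n, beta, M_l, M_pi, m_pi, d, gamma):
    """Step size h and iteration count T following (16)-(17)."""
    kappa = n * beta * M_l + M_pi                      # smoothness constant of U
    log_term = 4.0 * math.log(1.0 / gamma) + d * math.log(kappa / m_pi)
    h = 2.0 * m_pi * gamma**2 / (d * kappa**2 * log_term)
    T = d * kappa**2 * log_term**2 / (4.0 * m_pi**2 * gamma**2)
    return h, math.ceil(T)

# A tighter TV accuracy gamma costs iterations roughly like 1/gamma^2.
print(lmc_schedule(n=1000, beta=0.05, M_l=1.0, M_pi=1.0, m_pi=1.0, d=10, gamma=0.1))
```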
The algorithm suggested in Proposition 13 is closely related to the differentially private stochastic gradient Langevin dynamics (DP-SGLD) proposed by Wang et al. [20]. Ignoring the computational cost, we can take the approximation error level γ > 0 arbitrarily small, whereas such a convergence property to the target posterior distribution is not necessarily ensured for DP-SGLD.
5 Proofs
In this section, we give a formal proof of Theorem 7 and a proof sketch of Theorem 10.
There is a vast literature on techniques for obtaining a concentration inequality such as (5) (see, for example, [4]). The logarithmic Sobolev inequality (LSI) is a useful tool for this purpose. We say that a probability measure μ over Θ ⊆ R^d satisfies LSI with constant D_LS if the inequality
$$ \mathbb{E}_\mu[f^2\log f^2] - \mathbb{E}_\mu[f^2]\log \mathbb{E}_\mu[f^2] \le 2D_{\mathrm{LS}}\, \mathbb{E}_\mu\bigl[\|\nabla f\|_2^2\bigr] \qquad (18) $$
holds for any integrable function f, provided the expectations in the expression are defined. It is known [15, 4] that, if μ satisfies LSI, then every real-valued L-Lipschitz function F behaves in a sub-Gaussian manner:
$$ \mu\{F \ge \mathbb{E}_\mu[F] + t\} \le \exp\!\left( -\frac{t^2}{2L^2 D_{\mathrm{LS}}} \right). \qquad (19) $$
In our analysis, we utilize the LSI technique for the following two reasons: (a) a sub-Gaussian tail
bound of the log-density ratio is obtained from (19), and (b) an upper bound on the KL-divergence
is directly obtained from LSI, which appears to be difficult to prove by any other argument.
Roughly speaking, LSI holds if the logarithm of the density is strongly concave. In particular, for a
Gibbs measure on R^d, the following fact is known.
Lemma 14 ([15]). Let U : R^d → R be a twice differentiable, m-strongly convex and integrable function. Let μ be a probability measure on R^d whose density is proportional to exp(−U). Then μ satisfies LSI (18) with constant D_LS = m^{−1}.
In this context, the strong convexity of U is related to the curvature-dimension condition CD(m, ∞), which can be used to prove LSI on general Riemannian manifolds [19, 2].
Proof of Theorem 7. For simplicity, we assume that ℓ(·, x) (∀x ∈ X) is twice differentiable. For general Lipschitz and convex loss functions (e.g., the hinge loss), the theorem can be proved using a mollifier argument. Since U(θ) = β Σ_i ℓ(θ, x_i) − log π(θ) is m_π-strongly convex, the Gibbs posterior G_{β,D} satisfies LSI with constant m_π^{−1}.
Let D, D' ∈ X^n be a pair of adjacent datasets. Considering an appropriate permutation of the elements, we can assume that D = (x_1, …, x_n) and D' = (x'_1, …, x'_n) differ in the first element, namely, x_1 ≠ x'_1 and x_i = x'_i (i = 2, …, n). By the assumption that ℓ(·, x) is L-Lipschitz, we have
$$ \left\| \nabla\log\frac{dG_{\beta,D}}{dG_{\beta,D'}} \right\|_2 = \beta\bigl\| \nabla\bigl(\ell(\theta, x_1) - \ell(\theta, x'_1)\bigr) \bigr\|_2 \le 2\beta L, \qquad (20) $$
and the log-density ratio log(dG_{β,D}/dG_{β,D'}) is 2βL-Lipschitz. Then, by the concentration inequality (19) for Lipschitz functions, we have
$$ \forall t > 0, \quad G_{\beta,D}\!\left( \log\frac{dG_{\beta,D}}{dG_{\beta,D'}} \ge D_{\mathrm{KL}}(G_{\beta,D}, G_{\beta,D'}) + t \right) \le \exp\!\left( -\frac{m_\pi t^2}{8L^2\beta^2} \right). \qquad (21) $$
We will show an upper bound on the KL-divergence. To simplify the notation, we write F := dG_{β,D}/dG_{β,D'}. Noting that
$$ \bigl\| \nabla\sqrt{F} \bigr\|_2^2 = \Bigl\| \nabla \exp\Bigl(\tfrac{1}{2}\log F\Bigr) \Bigr\|_2^2 = \Bigl\| \frac{\sqrt{F}}{2}\, \nabla \log F \Bigr\|_2^2 \le \frac{F}{4}\,(2\beta L)^2 \qquad (22) $$
and that
$$ D_{\mathrm{KL}}(G_{\beta,D}, G_{\beta,D'}) = \mathbb{E}_{G_{\beta,D}}[\log F] = \mathbb{E}_{G_{\beta,D'}}[F \log F] - \mathbb{E}_{G_{\beta,D'}}[F]\, \log \mathbb{E}_{G_{\beta,D'}}[F], \qquad (23) $$
we have, from LSI (18) with f = √F,
$$ D_{\mathrm{KL}}(G_{\beta,D}, G_{\beta,D'}) \le \frac{2}{m_\pi}\, \mathbb{E}_{G_{\beta,D'}}\bigl[\|\nabla\sqrt{F}\|_2^2\bigr] \le \frac{2L^2\beta^2}{m_\pi}\, \mathbb{E}_{G_{\beta,D'}}[F] = \frac{2L^2\beta^2}{m_\pi}. \qquad (24) $$
Combining (21) and (24), we have
$$ G_{\beta,D}\!\left( \log\frac{dG_{\beta,D}}{dG_{\beta,D'}} \ge \varepsilon \right) \le G_{\beta,D}\!\left( \log\frac{dG_{\beta,D}}{dG_{\beta,D'}} \ge D_{\mathrm{KL}}(G_{\beta,D}, G_{\beta,D'}) + \varepsilon - \frac{2L^2\beta^2}{m_\pi} \right) \le \exp\!\left( -\frac{m_\pi}{8L^2\beta^2}\Bigl( \varepsilon - \frac{2L^2\beta^2}{m_\pi} \Bigr)^2 \right) \qquad (25) $$
for any ε > 2L²β²/m_π.
Proof sketch for Theorem 10. The proof is almost the same as that of Theorem 7. It is sufficient to show that the set of Gibbs posteriors {G_{β,D}, D ∈ X^n} simultaneously satisfies LSI with the same constant. Since the logarithm of the density is m := (nm_ℓβ + m_π)-strongly convex, the probability measure G̃_{β,D} satisfies LSI with constant m^{−1}. By the Poincaré inequality for G̃_{β,D}, the variance of ‖θ‖_2 is bounded by d/m ≤ d/m_π. By the Chebyshev inequality, we can check that the mass of the parameter space is lower-bounded as G̃_{β,D}(Θ) ≥ p := 1 − γ^{−2}. Then, by Corollary 3.9 of [17], G_{β,D} := G̃_{β,D}|_Θ satisfies LSI with constant C(1 + log p^{−1})m^{−1}, where C > 0 is a universal numeric constant.
Acknowledgments
This work was supported by JSPS KAKENHI Grant Number JP15H02700.
References
[1] P. Alquier, J. Ridgway, and N. Chopin. On the properties of variational approximations of Gibbs posteriors, 2015. Available at http://arxiv.org/abs/1506.04091.
[2] D. Bakry, I. Gentil, and M. Ledoux. Analysis and Geometry of Markov Diffusion Operators. Springer, 2014.
[3] R. Bassily, A. Smith, and A. Thakurta. Differentially private empirical risk minimization: Efficient algorithms and tight error bounds. In FOCS, 2014.
[4] S. Boucheron, G. Lugosi, and P. Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013.
[5] S. Bubeck, R. Eldan, and J. Lehec. Finite-time analysis of projected Langevin Monte Carlo. In NIPS, 2015.
[6] O. Catoni. Pac-Bayesian Supervised Classification: The Thermodynamics of Statistical Learning. IMS, 2007.
[7] K. Chaudhuri, D. Hsu, and S. Song. The large margin mechanism for differentially private maximization. In NIPS, 2014.
[8] K. Chaudhuri, C. Monteleoni, and A. D. Sarwate. Differentially private empirical risk minimization. Journal of Machine Learning Research, 12:1069–1109, 2011.
[9] A. Dalalyan. Theoretical guarantees for approximate sampling from smooth and log-concave densities, 2014. Available at http://arxiv.org/abs/1412.7392.
[10] C. Dimitrakakis, B. Nelson, and B. Rubinstein. Robust and private Bayesian inference. In Algorithmic Learning Theory, 2014.
[11] A. Durmus and E. Moulines. Non-asymptotic convergence analysis for the unadjusted Langevin algorithm, 2015. Available at http://arxiv.org/abs/1507.05021.
[12] C. Dwork. Differential privacy. In ICALP, pages 1–12, 2006.
[13] R. Hall, A. Rinaldo, and L. Wasserman. Differential privacy for functions and functional data. Journal of Machine Learning Research, 14:703–727, 2013.
[14] D. Kifer, A. Smith, and A. Thakurta. Private convex empirical risk minimization and high-dimensional regression. In COLT, 2012.
[15] M. Ledoux. Concentration of Measure and Logarithmic Sobolev Inequalities, volume 1709 of Séminaire de Probabilités XXXIII Lecture Notes in Mathematics. Springer, 1999.
[16] F. McSherry and K. Talwar. Mechanism design via differential privacy. In FOCS, 2007.
[17] E. Milman. Properties of isoperimetric, functional and Transport-Entropy inequalities via concentration. Probability Theory and Related Fields, 152:475–507, 2012.
[18] D. Mir. Differential privacy: an exploration of the privacy-utility landscape. PhD thesis, Rutgers University, 2013.
[19] C. Villani. Optimal Transport: Old and New. Springer, 2009.
[20] Y. Wang, S. Fienberg, and A. Smola. Privacy for free: Posterior sampling and stochastic gradient Monte Carlo. In ICML, 2015.
[21] T. Zhang. From ε-entropy to KL-entropy: Analysis of minimum information complexity density estimation. The Annals of Statistics, 34(5):2180–2210, 2006.
Disentangling factors of variation in deep
representations using adversarial training
Michael Mathieu, Junbo Zhao, Pablo Sprechmann, Aditya Ramesh, Yann LeCun
719 Broadway, 12th Floor, New York, NY 10003
{mathieu, junbo.zhao, pablo, ar2922, yann}@cs.nyu.edu
Abstract
We introduce a conditional generative model for learning to disentangle the hidden
factors of variation within a set of labeled observations, and separate them into
complementary codes. One code summarizes the specified factors of variation
associated with the labels. The other summarizes the remaining unspecified variability. During training, the only available source of supervision comes from our
ability to distinguish among different observations belonging to the same class.
Examples of such observations include images of a set of labeled objects captured
at different viewpoints, or recordings of set of speakers dictating multiple phrases.
In both instances, the intra-class diversity is the source of the unspecified factors of
variation: each object is observed at multiple viewpoints, and each speaker dictates
multiple phrases. Learning to disentangle the specified factors from the unspecified
ones becomes easier when strong supervision is possible. Suppose that during
training, we have access to pairs of images, where each pair shows two different
objects captured from the same viewpoint. This source of alignment allows us to
solve our task using existing methods. However, labels for the unspecified factors
are usually unavailable in realistic scenarios where data acquisition is not strictly
controlled. We address the problem of disentaglement in this more general setting
by combining deep convolutional autoencoders with a form of adversarial training.
Both factors of variation are implicitly captured in the organization of the learned
embedding space, and can be used for solving single-image analogies. Experimental results on synthetic and real datasets show that the proposed method is capable
of generalizing to unseen classes and intra-class variabilities.
1 Introduction
A fundamental challenge in understanding sensory data is learning to disentangle the underlying
factors of variation that give rise to the observations [1]. For instance, the factors of variation involved
in generating a speech recording include the speaker's attributes, such as gender, age, or accent, as well as the intonation and words being spoken. Similarly, the factors of variation underlying the image of an object include the object's physical representation and the viewing conditions. The difficulty
of disentangling these hidden factors is that, in most real-world situations, each can influence the
observation in a different and unpredictable way. It is seldom the case that one has access to rich
forms of labeled data in which the nature of these influences is given explicitly.
Often times, the purpose for which a dataset is collected is to further progress in solving a certain
supervised learning task. This type of learning is driven completely by the labels. The goal is for
the learned representation to be invariant to factors of variation that are uninformative to the task
at hand. While recent approaches for supervised learning have enjoyed tremendous success, their
performance comes at the cost of discarding sources of variation that may be important for solving
other, closely-related tasks. Ideally, we would like to be able to learn representations in which the
uninformative factors of variation are separated from the informative ones, instead of being discarded.
Many other exciting applications require the use of generative models that are capable of synthesizing
novel instances where certain key factors of variation are held fixed. Unlike classification, generative
modeling requires preserving all factors of variation. But merely preserving these factors is not
sufficient for many tasks of interest, making the disentanglement process necessary. For example,
in speech synthesis, one may wish to transfer one person's dialog to another person's voice. Inverse
problems in image processing, such as denoising and super-resolution, require generating images that
are perceptually consistent with corrupted or incomplete observations.
In this work, we introduce a deep conditional generative model that learns to separate the factors of
variation associated with the labels from the other sources of variability. We only make the weak
assumption that we are able to distinguish between observations assigned to the same label during
training. To make disentanglement possible in this more general setting, we leverage both Variational
Auto-Encoders (VAEs) [12, 25] and Generative Adversarial Networks (GANs) [9].
2 Related work
There is a vast literature on learning disentangled representations. Bilinear models [26] were an early
approach to separate content and style for images of faces and text in various fonts. What-where
autoencoders [22, 28] combine discrimination and reconstruction criteria to attempt to recover the
factors of variation not associated with the labels. In [10], an autoencoder is trained to separate a
translation invariant representation from a code that is used to recover the translation information.
In [2], the authors show that standard deep architectures can discover and explicitly represent factors of variation aside from those relevant for classification, by combining autoencoders with simple regularization terms during training.
Boltzmann Machine by partitioning its hidden state into distinct factors of variation. The work
presented in [11] uses a VAE in a semi-supervised learning setting. Their approach is able to
disentangle the label information from the hidden code by providing an additional one-hot vector as
input to the generative model. Similarly, [18] shows that autoencoders trained in a semi-supervised
manner can transfer handwritten digit styles using a decoder conditioned on a categorical variable
indicating the desired digit class. The main difference between these approaches and ours is that the
former cannot generalize to unseen identities.
The work in [5, 13] further explores the application of content and style disentanglement to computer
graphics. Whereas computer graphics involves going from an abstract description of a scene to a
rendering, these methods learn to go backward from the rendering to recover the abstract description.
This description can include attributes such as orientation and lighting information. While these
methods are capable of producing impressive results, they benefit from being able to use synthetic
data, making strong supervision possible.
Closely related to the problem of disentangling factors of variations in representation learning is that
of learning fair representations [17, 7]. In particular, the Fair Variational Auto-Encoder [17] aims
to learn representations that are invariant to certain nuisance factors of variation, while retaining
as much of the remaining information as possible. The authors propose a variant of the VAE that
encourages independence between the different latent factors of variation.
The problem of disentangling factors of variation also plays an important role in completing image
analogies, the goal of the end-to-end model proposed in [24]. Their method relies on having access to
matching examples during training. Our approach requires neither matching observations nor labels
aside from the class identities. These properties allow the model to be trained on data with a large
number of labels, enabling generalizing over the classes present in the training data.
3 Background
3.1 Variational autoencoder
The VAE framework is an approach for modeling a data distribution using a collection of independent latent variables. Let x be a random variable (real or binary) representing the observed data and z a collection of real-valued latent variables. The generative model over the pair (x, z) is given by p(x, z) = p(x | z)p(z), where p(z) is the prior distribution over the latent variables and p(x | z) is the conditional likelihood function. Generally, we assume that the components of z are independent Bernoulli or Gaussian random variables. The likelihood function is parameterized by a deep neural network referred to as the decoder.

A key aspect of VAEs is the use of a learned approximate inference procedure that is trained purely using gradient-based methods [12, 25]. This is achieved by using a learned approximate posterior q(z | x) = N(μ, σI) whose parameters are given by another deep neural network referred to as the encoder. Thus, we have z ∼ Enc(x) = q(z | x) and x̂ ∼ Dec(z) = p(x | z). The parameters of these networks are optimized by minimizing the upper bound on the expected negative log-likelihood of x, which is given by
$$ \mathbb{E}_{q(z \mid x)}\bigl[-\log p_\theta(x \mid z)\bigr] + \mathrm{KL}\bigl(q(z \mid x)\,\|\,p(z)\bigr). \qquad (1) $$
The first term in (1) corresponds to the reconstruction error, and the second term is a regularizer that ensures that the approximate posterior stays close to the prior.
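For concreteness, a single-sample Monte Carlo estimate of the objective (1) with a Bernoulli decoder and a diagonal-Gaussian posterior can be written in a few lines of NumPy (a sketch with variable names of our choosing; `decode` stands for the decoder network):

```python
import numpy as np

def vae_loss(x, mu, log_var, decode, rng):
    """Single-sample estimate of (1): reconstruction + KL(q(z|x) || N(0, I))."""
    std = np.exp(0.5 * log_var)
    z = mu + std * rng.standard_normal(mu.shape)     # reparameterized sample
    x_hat = decode(z)                                # Bernoulli means in (0, 1)
    recon = -np.sum(x * np.log(x_hat) + (1 - x) * np.log(1 - x_hat))
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)  # closed form
    return recon + kl
```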
3.2 Generative adversarial networks
Generative Adversarial Networks (GAN) [9] have enjoyed great success at producing realistic natural
images [21]. The main idea is to use an auxiliary network Disc, called the discriminator, in conjunction
with the generative model, Gen. The training procedure establishes a min-max game between the
two networks as follows. On one hand, the discriminator is trained to differentiate between natural
samples sampled from the true data distribution, and synthetic images produced by the generative
model. On the other hand, the generator is trained to produce samples that confuse the discriminator
into mistaking them for genuine images. The goal is for the generator to produce increasingly more
realistic images as the discriminator learns to pick up on increasingly more subtle inaccuracies that
allow it to tell apart real and fake images.
Both Disc and Gen can be conditioned on the label of the input that we wish to classify or generate,
respectively [20]. This approach has been successfully used to produce samples that belong to a
specific class or possess some desirable property [4, 19, 21]. The training objective can be expressed
as a min-max problem given by
$$ \min_{\mathrm{Gen}} \max_{\mathrm{Disc}} \mathcal{L}_{\mathrm{gan}}, \quad \text{where} \quad \mathcal{L}_{\mathrm{gan}} = \log \mathrm{Disc}(x, id) + \log\bigl(1 - \mathrm{Disc}(\mathrm{Gen}(z, id), id)\bigr), \qquad (2) $$
where p_d(x, id) is the data distribution conditioned on a given class label id, and p(z) is a generic prior over the latent space (e.g., N(0, I)).
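In code, the conditional objective (2) for a single (x, id) pair is simply the following (a sketch; `disc` returns a probability and `gen` an image, both conditioned on the label):

```python
import numpy as np

def conditional_gan_loss(disc, gen, x, z, label):
    """L_gan of (2): the discriminator ascends this value,
    while the generator descends its second term."""
    real_term = np.log(disc(x, label))
    fake_term = np.log(1.0 - disc(gen(z, label), label))
    return real_term + fake_term
```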
4 Model
4.1 Conditional generative model
We introduce a conditional probabilistic model admitting two independent sources of variation:
an observed variable s that characterizes the specified factors of variation, and a continuous latent
variable z that characterizes the remaining variability. The variable s is given by a vector of real
numbers, rather than a class ordinal or a one-hot vector, as we intend for the model to generalize to
unseen identities.
Given an observed specified component s, we can sample
$$ z \sim p(z) = \mathcal{N}(0, I) \quad \text{and} \quad x \sim p_\theta(x \mid z, s), \qquad (3) $$
in order to generate a new instance x compatible with s.
The variables s and z are marginally independent, which promotes disentanglement between the specified and unspecified factors of variation. Again here, p_θ(x | z, s) is a likelihood function described by a decoder network, Dec, and the approximate posterior is modeled using an independent Gaussian distribution, q_φ(z | x, s) = N(μ, σI), whose parameters are specified via an encoder network, Enc. In this new setting, the variational upper bound is given by
$$ \mathbb{E}_{q(z \mid x,s)}\bigl[-\log p_\theta(x \mid z, s)\bigr] + \mathrm{KL}\bigl(q(z \mid x, s)\,\|\,p(z)\bigr). \qquad (4) $$
The specified component s can be obtained from one or more images belonging to the same class.
In this work, we consider the simplest case in which s is obtained from a single image. To this end,
we define a deterministic encoder f_s that maps images to their corresponding specified components. All sources of stochasticity in s come from the data distribution. The conditional likelihood given by (3) can now be written as x ∼ p_θ(x | z, f_s(x')), where x' is any image sharing the same label as x, including x itself. In addition to f_s, the model has an additional encoder f_z that parameterizes the approximate posterior q(z | x, s). It is natural to consider an architecture in which the parameters of both encoders are shared.
We now define a single encoder Enc by Enc(x) = (f_s(x), f_z(x)) = (s, (μ, σ)) = (s, z), where s is the specified component, and z = (μ, σ) are the parameters of the approximate posterior that constitute the unspecified component. To generate a new instance, we synthesize s and z using Dec to obtain x̂ = Dec(s, z).
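The swap operation underlying single-image analogies is then a one-liner: combine the specified code of one image with the unspecified code of another (a sketch; `enc_s`, `enc_z`, and `dec` stand for f_s, f_z, and Dec):

```python
def swap_generate(enc_s, enc_z, dec, x_identity, x_style):
    """Analogy: identity (specified factors) from x_identity,
    intra-class variability (unspecified factors) from x_style."""
    s = enc_s(x_identity)            # specified component
    mu, sigma = enc_z(x_style)       # posterior parameters of unspecified component
    return dec(s, mu)                # use the posterior mean at test time
```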
The model described above cannot be trained by minimizing the log-likelihood alone. In particular,
there is nothing that prevents all of the information about the observation from flowing through the
unspecified component. The decoder could learn to ignore s, and the approximate posterior could
map images belonging to the same class to different regions of the latent space. This degenerate
solution can be easily prevented when we have access to labels for the unspecified factors of variation,
as in [24]. In this case, we could enforce that s be informative by requiring that Dec be able to
reconstruct two observations having the same unspecified label after their unspecified components
are swapped. But for many real-world scenarios, it is either impractical or impossible to obtain labels
for the unspecified factors of variation. In the following section, we explain a way of eliminating the
need for such labels.
4.2 Discriminative regularization
An alternative approach to preventing the degenerate solution described in the previous section,
without the need for labels for the unspecified components, makes use of GANs (3.2). As before,
we employ a procedure in which the unspecified components of a pair of observations are swapped.
But since the observations need not be aligned along the unspecified factors of variation, it no longer
makes sense to enforce reconstruction. After swapping, the class identities of both observations
will remain the same, but the sources of variability within their corresponding classes will change.
Hence, rather than enforcing reconstruction, we ensure that both observations are assigned high
probabilities of belonging to their original classes by an external discriminator. Formally, we introduce
the discriminative term given by (2) into the loss given by (4), yielding
$$ \mathbb{E}_{q(z \mid x,s)}\bigl[-\log p_\theta(x \mid z, s)\bigr] + \mathrm{KL}\bigl(q(z \mid x, s)\,\|\,p(z)\bigr) + \lambda \mathcal{L}_{\mathrm{gan}}, \qquad (5) $$
where λ is a non-negative weight.
Recent works have explored combining VAE with GAN [14, 6]. These approaches aim at adding a recognition network (allowing inference problems to be solved) to the GAN framework. In the setting used in this work, the GAN is used to compensate for the lack of aligned training data. The work in [14] investigates the use of GANs for obtaining perceptually better loss functions (beyond pixels). While this is not the goal of our work, our framework is able to generate sharper images, which comes as a side effect. We also evaluated including a GAN loss for samples; however, the system became unstable without leading to perceptually better generations. An interesting variant could be to use a separate discriminator for images generated with and without supervision.
4.3 Training procedure
Let x1 and x1' be samples sharing the same label, namely id1, and x2 a sample belonging to a different class, id2. On one hand, we want to minimize the upper bound on the negative log-likelihood of x1 when feeding the decoder inputs of the form (z1, f_s(x1)) and (z1, f_s(x1')), where z1 are samples from the approximate posterior q(z|x1). On the other hand, we want to minimize the adversarial loss of samples generated by feeding the decoder inputs given by (z, f_s(x2)), where z is sampled from the approximate posterior q(z|x1). This corresponds to swapping the specified and unspecified factors of x1 and x2. We could only use the upper bound if we had access to aligned data. As in the GAN setting described in Section 3.2, we alternate this procedure with updates of the adversary network. The diagram of the network is shown in Figure 1, and the described training procedure is summarized in Algorithm 1 in the supplementary material.
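A sketch of one generator update implementing this procedure is given below (our paraphrase of the procedure in Python; the networks, the pixel-wise reconstruction error, and the autodiff machinery are assumed to be supplied by the caller, and the alternating discriminator update is omitted):

```python
import numpy as np

def kl_to_prior(mu, log_var):
    # KL(N(mu, diag(exp(log_var))) || N(0, I)), closed form.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def generator_step(enc_s, enc_z, dec, disc, recon_err, x1, x1p, x2, id2, rng, lam):
    """Loss for one generator update following Section 4.3 (illustrative sketch)."""
    mu1, log_var1 = enc_z(x1)
    z1 = mu1 + np.exp(0.5 * log_var1) * rng.standard_normal(mu1.shape)
    loss = recon_err(x1, dec(enc_s(x1), z1))    # reconstruct x1 from its own s
    loss += recon_err(x1, dec(enc_s(x1p), z1))  # ...and from the s of same-class x1'
    loss += 2.0 * kl_to_prior(mu1, log_var1)    # KL term shared by both bounds
    x_swap = dec(enc_s(x2), z1)                 # swap: s from x2, z from x1
    loss += -lam * np.log(disc(x_swap, id2))    # adversarial term, target class id2
    return loss
```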
Figure 1: Training architecture. The inputs x1 and x1' are two different samples with the same label, whereas x2 can have any label.
5 Experiments
Datasets. We evaluate our model on both synthetic and real datasets: Sprites dataset [24], MNIST [15],
NORB [16] and the Extended-YaleB dataset [8]. We used Torch7 [3] to conduct all experiments. The
network architectures follow that of DCGAN [21] and are described in detail in the supplementary
material.
Evaluation. To the best of our knowledge, there is no standard benchmark dataset (or task) for
evaluating disentangling performance [2]. We propose two forms of evaluation to illustrate the
behavior of the proposed framework, one qualitative and one quantitative.
Qualitative evaluation is obtained by visually examining the perceptual quality of single-image analogies and conditional image generation. For all datasets, we evaluated the models in four
different settings: swapping: given a pair of images, we generate samples conditioning on the specified
component extracted from one of the images and sampling from the approximate posterior obtained
from the other one. This procedure is analogous to the sampling technique employed during training,
described in Section 4.3, and corresponds to solving single-image analogies; retrieval: in order to assess
the correlation between the specified and unspecified components, we performed nearest neighbor
retrieval in the learned embedding spaces. We computed the corresponding representations for all
samples (for the unspecified component we used the mean of the approximate posterior distribution)
and then retrieved the nearest neighbors for a given query image; interpolation: to evaluate the
coverage of the data manifold, we generated a sequence of images by linearly interpolating the codes
of two given test images (for both specified and unspecified representations); conditional generation:
given a test image, we generate samples conditioning on its specified component, sampling directly
from the prior distribution, p(z). In all the experiments images were randomly chosen from the test
set, please see specific details for each dataset.
The objective evaluation of generative models is a difficult task and is itself a subject of current research [27]. Common evaluation metrics, such as measuring the log-likelihood of a set of validation samples, are often not very meaningful, as they do not correlate with the perceptual quality of the images [27]. Furthermore, the loss function used by our model does not correspond to a bound on the likelihood of a generative model, which renders this evaluation even less meaningful.
we evaluate the degree of disentanglement via a classification task. Namely, we measure how much
information about the identity is contained in the specified and unspecified components.
MNIST. In this setup, the specified part is simply the class of the digit. The goal is to show that the
model is able to learn to disentangle the style from the identity of the digit and to produce satisfactory
analogies. We cannot test the ability of the model to generalize to unseen identities. In this case, one
could directly condition on a class label [11, 18]. It is still interesting that the proposed model is able to transfer handwriting style without having access to matched examples, while still being able to learn a smooth representation of the digits, as shown in the interpolation results. Results are shown in
Figure 2. We observe that the generated images are convincing and particularly sharp; the latter is a "side-effect" produced by the GAN term in our training loss.
Sprites. The dataset is composed of 672 unique characters (we refer to them as sprites), each of
which is associated with 20 animations [24]. Any image of a sprite can present 7 sources of variation:
body type, gender, hair type, armor type, arm type, greaves type, and weapon type. Unlike the work
in [24], we do not use any supervision regarding the positions of the sprites. The results obtained for
Figure 2: left(a): A visualization grid of 2D MNIST image swapping generation. The top row and
leftmost column digits come from the test set. The other digits are generated using z from leftmost
digit, and s from the digit at the top of the column. The diagonal digits show reconstructions. Right(b):
Interpolation visualization. Digits located at top-left corner and bottom-right corner come from the
dataset. The remaining digits are generated by interpolating s and z. Like (a), each row has a constant z and each column a constant s.
Figure 3: left(a): A visualization grid of 2D sprites swapping generation. Same visualization arrangement as in 2(a); right(b): Interpolation visualization. Same arrangement as in 2(b).
the swapping and interpolation settings are displayed in Figure 3, while retrieval results are shown in Figure 4. Samples from the conditional model are shown in Figure 5(a). We observe that the model is able to
generalize to unseen sprites quite well. The generated images are sharp and single image analogies
are resolved successfully. The interpolation results show that one can smoothly transition between
identities or positions. It is worth noting that this dataset has a fixed number of discrete positions.
Thus, Figure 3(b) shows a reasonable coverage of the manifold with some abrupt changes. For instance, the
hands are not moving up from the pixel space, but appearing gradually from the faint background.
NORB. For the NORB dataset we used instance identity (rather than object category) for defining the labels. This results in 25 different object identities in the training set and another 25 distinct object identities in the testing set. As in the sprite dataset, the identities used at testing have never been presented to the network at training time. In this case, however, the small number of identities seen at training time makes the generalization more difficult. In Figure 6 we present results for interpolation and swapping. We observe that the model is able to resolve analogies well. However, the quality of the results is degraded. In particular, classes having high variability (such as planes) are not reconstructed well. Also, some of the models are highly symmetric, thus creating a lot of uncertainty. We conjecture that these problems could be eliminated in the presence of more training data. Queries in the case of NORB are not as expressive as with the sprites, but we can still observe good behavior. We refer the reader to the supplementary material for these images.
Extended-YaleB. The dataset consists of facial images of 28 individuals taken under different positions and illuminations. The training and testing sets contain roughly 600 and 180 images per individual, respectively. Figure 7 shows interpolation and swapping results for a set of testing images. Due to the small number of identities, we cannot test in this case the generalization to unseen identities. We observe that the model is able to resolve the analogies in a satisfactory way; position and illumination are transferred correctly, although these positions have not been seen at training time for these individuals. In the supplementary material we show samples drawn from the conditional model as well as other examples of interpolation and swapping.
Figure 4: left(a): sprite retrieval querying on the specified component; right(b): sprite retrieval querying on the unspecified component. Sprites placed at the left of the white lane are used as the query.
Figure 5: left(a): sprite generation by sampling; right(b): NORB generation by sampling.
Figure 6: left(a): A visualization grid of 2D NORB image swapping generation. Same visualization arrangement as in 2(a); right(b): Interpolation visualization. Same arrangement as in 2(b).
Quantitative evaluation. We analyze the disentanglement of the specified and unspecified representations by using them as input features for a prediction task. We trained a two-layer neural network with 256 hidden units to predict structured labels for the sprite dataset, toy category for the NORB dataset (four-legged animals, human figures, airplanes, trucks, and cars), and the subject identity for the Extended-YaleB dataset. We used early stopping on a validation set to prevent overfitting. We report both training and testing errors in Table 1. In all cases the unspecified component is agnostic to the identity information, almost matching the performance of random selection. On the other hand, the specified components are highly informative, producing almost the same results as a classifier directly trained in a discriminative manner. In particular, we observe some overfitting in the NORB dataset.
This might also be due to the difficulty of generalizing to unseen identities using a small dataset.
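The paper does not spell out the implementation of this probe; a scikit-learn sketch matching the description (two-layer network, 256 hidden units, early stopping) could look as follows, run once on the frozen s features and once on the frozen z features:

```python
from sklearn.neural_network import MLPClassifier

def probe_error(feats_train, labels_train, feats_test, labels_test):
    """Train/test error of the disentanglement probe on frozen features."""
    clf = MLPClassifier(hidden_layer_sizes=(256,), early_stopping=True, max_iter=500)
    clf.fit(feats_train, labels_train)
    return (1.0 - clf.score(feats_train, labels_train),
            1.0 - clf.score(feats_test, labels_test))
```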
Influence of components of the framework. It is worth evaluating the contribution of the different
components of the framework. Without the adversarial regularization, the model is unable to learn
disentangled representations. It can be verified empirically that the unspecified component is completely ignored, as discussed in Section 4.1. A valid question to ask is whether the training of s has to be done jointly in an end-to-end manner or whether s could be pre-computed. In Section 4 of the supplementary material we run our setting using an embedding trained beforehand to classify the identities. The model is still able to learn disentangled representations, but the quality of the generated images as well as of the analogies is compromised. Better pre-trained embeddings could be considered, for
example, enforcing the representations of different images to be close to each other and far from those corresponding to different identities. However, joint end-to-end training still has the advantage of requiring fewer parameters, due to the parameter sharing of the encoders.
Figure 7: left(a): A visualization grid of 2D Extended-YaleB face image swapping generation. right(b):
Interpolation visualization. See 2 for description.
Table 1: Comparison of classification upon z and s. Shown numbers are all error rates.

set            | Sprites (z / s) | NORB (z / s)   | Extended-YaleB (z / s)
train          | 58.6% / 5.5%    | 79.8% / 2.6%   | 96.4% / 0.05%
test           | 59.8% / 5.2%    | 79.9% / 13.5%  | 96.4% / 0.08%
random-chance  | 60.7%           | 80.0%          | 96.4%
6 Conclusions and discussion
This paper presents a conditional generative model that learns to disentangle the factors of variation of the data into those specified and those left unspecified by a given categorization. The proposed model does not rely on strong supervision regarding the sources of variation. This is achieved by combining two very successful generative models: VAE and GAN. The model is able to resolve the analogies in a
consistent way on several datasets with minimal parameter/architecture tuning. Although these initial results are promising, there is a lot to be tested and understood. The model is motivated by a general setting that we expect to encounter in more realistic scenarios. However, in this initial study we only tested the model on rather constrained examples. As was observed in the results shown using the NORB dataset, given the weaker supervision assumed in our setting, the proposed approach seems to have a high sample complexity, relying on training samples covering the full range of variations for both the specified and unspecified factors. The proposed model does not attempt to disentangle variations within the specified and unspecified components. There are many possible ways of mapping a unit Gaussian to corresponding images; in the current setting, there is nothing preventing the obtained mapping from presenting highly entangled factors of variation.
References
[1] Yoshua Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.
[2] Brian Cheung, Jesse A. Livezey, Arjun K. Bansal, and Bruno A. Olshausen. Discovering hidden factors of variation in deep networks. CoRR, abs/1412.6583, 2014.
[3] Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011.
[4] Emily Denton, Soumith Chintala, Arthur Szlam, and Rob Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, 2015.
[5] Alexey Dosovitskiy, Jost Tobias Springenberg, and Thomas Brox. Learning to generate chairs with convolutional neural networks. CoRR, abs/1411.5928, 2014.
[6] Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Martin Arjovsky, Olivier Mastropietro, and Aaron Courville. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
[7] Harrison Edwards and Amos Storkey. Censoring representations with an adversary. arXiv preprint arXiv:1511.05897, 2015.
[8] Athinodoros S. Georghiades, Peter N. Belhumeur, and David J. Kriegman. From few to many: Illumination cone models for face recognition under variable lighting and pose. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 23(6):643–660, 2001.
[9] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial networks. NIPS, 2014.
[10] Geoffrey E. Hinton, Alex Krizhevsky, and Sida D. Wang. Transforming auto-encoders. In Proceedings of the 21th International Conference on Artificial Neural Networks - Volume Part I, ICANN'11, pages 44–51, Berlin, Heidelberg, 2011. Springer-Verlag.
[11] Diederik P. Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems, pages 3581–3589, 2014.
[12] Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
[13] Tejas D. Kulkarni, William F. Whitney, Pushmeet Kohli, and Josh Tenenbaum. Deep convolutional inverse graphics network. In Advances in Neural Information Processing Systems, pages 2530–2538, 2015.
[14] Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. arXiv preprint arXiv:1512.09300, 2015.
[15] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[16] Yann LeCun, Fu Jie Huang, and Leon Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In CVPR, 2004.
[17] Christos Louizos, Kevin Swersky, Yujia Li, Max Welling, and Richard Zemel. The variational fair autoencoder. ICLR, 2016.
[18] Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian J. Goodfellow. Adversarial autoencoders. CoRR, abs/1511.05644, 2015.
[19] Michaël Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. ICLR, abs/1511.05440, 2015.
[20] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
[21] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR, abs/1511.06434, 2015.
[22] Marc'Aurelio Ranzato, Fu-Jie Huang, Y-Lan Boureau, and Yann LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In Proc. Computer Vision and Pattern Recognition Conference (CVPR'07). IEEE Press, 2007.
[23] Scott Reed, Kihyuk Sohn, Yuting Zhang, and Honglak Lee. Learning to disentangle factors of variation with manifold interaction. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1431–1439, 2014.
[24] Scott E. Reed, Yi Zhang, Yuting Zhang, and Honglak Lee. Deep visual analogy-making. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 1252–1260. Curran Associates, Inc., 2015.
[25] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014.
[26] Joshua B. Tenenbaum and William T. Freeman. Separating style and content with bilinear models. Neural Comput., 12(6):1247–1283, June 2000.
[27] Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv preprint arXiv:1511.01844, 2015.
[28] Junbo Zhao, Michael Mathieu, Ross Goroshin, and Yann LeCun. Stacked what-where auto-encoders. In ICLR workshop submission, 2016.
5,583 | 6,052 | Launch and Iterate: Reducing Prediction Churn
Q. Cormier
ENS Lyon
15 parvis René Descartes
Lyon, France
[email protected]
M. Milani Fard, K. Canini, M. R. Gupta
Google Inc.
1600 Amphitheatre Parkway
Mountain View, CA 94043
{mmilanifard,canini,mayagupta}@google.com
Abstract
Practical applications of machine learning often involve successive training iterations with changes to features and training examples. Ideally, changes in the output
of any new model should only be improvements (wins) over the previous iteration,
but in practice the predictions may change neutrally for many examples, resulting
in extra net-zero wins and losses, referred to as unnecessary churn. These changes
in the predictions are problematic for usability for some applications, and make it
harder and more expensive to measure if a change is statistically significantly positive.
In this paper, we formulate the problem and present a stabilization operator to regularize a classifier towards a previous classifier. We use a Markov chain Monte Carlo
stabilization operator to produce a model with more consistent predictions without
adversely affecting accuracy. We investigate the properties of the proposal with
theoretical analysis. Experiments on benchmark datasets for different classification
algorithms demonstrate the method and the resulting reduction in churn.
1 The Curse of Version 2.0
In most practical settings, training and launching an initial machine-learned model is only the first
step: as new and improved features are created, additional training data is gathered, and the model
and learning algorithm are improved, it is natural to launch a series of ever-improving models. Each
new candidate may bring wins, but also unnecessary changes. In practice, it is desirable to minimize
any unnecessary changes for two key reasons. First, unnecessary changes can hinder usability
and debuggability, as they can be disorienting to users and follow-on system components. Second,
unnecessary changes make it more difficult to measure with statistical confidence whether the change
is truly an improvement. For both these reasons, there is great interest in making only those changes
that are wins, and minimizing any unnecessary changes, while making sure such process does not
hinder the overall accuracy objective.
There is already a large body of work in machine learning that treats the stability of learning
algorithms. These range from the early works of Devroye and Wagner [1] and Vapnik [2, 3] to more
recent studies of learning stability in more general hypothesis spaces [4, 5, 6]. Most of the literature
on this topic focuses on stability of the learning algorithm in terms of the risk or loss function and how
such properties translate into uniform generalization with specific convergence rates. We build on
these notions, but the problem treated here is substantively different.
We address the problem of training consecutive classifiers to reduce unnecessary changes in the
presence of realistic evolution of the problem domain and the training sets over time. The main
contributions of this paper include: (I) discussion and formulation of the ?churn? metric between
trained models, (II) design of stabilization operators for regularization towards a previous model, (III)
proposing a Markov chain Monte Carlo (MCMC) stabilization technique, (IV) theoretical analysis of
the proposed stabilization in terms of churn, and (V) empirical analysis of the proposed methods on
benchmark datasets with different classification algorithms.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Table 1: Win-loss ratio (WLR) needed to establish a change is statistically significant at the p = 0.05
level for k wins out of n diffs from a binomial distribution. The empirical WLR column shows the
WLR one must actually see in the diffs. The true WLR column is the WLR the change must have so
that any random draw of diffs has at least a 95% chance of producing the needed empirical WLR.
# Diffs    Min # Wins Needed    Max # Losses Allowed    Empirical WLR Needed    True WLR Needed
10         9                    1                       9.000                   26.195
100        59                   41                      1.439                   1.972
1,000      527                  473                     1.114                   1.234
10,000     5,083                4,917                   1.034                   1.068

1.1 Testing for Improvements
In the machine learning literature, it is common to compare classifiers on a fixed pre-labeled test set.
However, a fixed test set has a few practical downsides. First, if many potential changes to the model
are evaluated on the same dataset, it becomes difficult to avoid observing spurious positive effects that
are actually due to chance. Second, the true test distribution may be evolving over time, meaning that
a fixed test set will eventually diverge from the true distribution of interest. Third, and most important
to our discussion, any particular change may affect only a small subset of the test examples, leaving
too small a sample of differences (diffs) to determine whether a change is statistically significant.
For example, suppose one has a fixed test set of 10,000 samples with which to evaluate a classifier.
Consider a change to one of the features, say a Boolean string-similarity feature that causes the
feature to match more synonyms, and suppose that re-training a classifier with this small change to
this one feature impacts only 0.1% of random examples. Then only 10 of the 10,000 test examples
would be affected. As shown in the first row of Table 1, given only 10 diffs, there must be 9 or more
wins to declare the change statistically significantly positive for p = 0.05.
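These win counts come from a one-sided binomial test against a null win rate of 1/2. A minimal sketch of the computation behind Table 1, assuming SciPy is available (the function name is ours, not the paper's):

    from scipy.stats import binom

    def min_wins_needed(n_diffs, p=0.05):
        # Smallest k such that P[Binomial(n_diffs, 1/2) >= k] <= p, i.e. the
        # fewest wins that make the change significant under a coin-flip null.
        for k in range(n_diffs + 1):
            if binom.sf(k - 1, n_diffs, 0.5) <= p:
                return k

    for n in (10, 100, 1000, 10000):
        k = min_wins_needed(n)
        print(n, k, n - k, round(k / (n - k), 3))  # diffs, wins, losses, empirical WLR

Running this reproduces the first four columns of Table 1; the true-WLR column additionally requires simulating random draws of diffs.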
Note that cross-validation (CV), even in leave-one-out form, does not solve this issue. First, we are
still bound by the size of the training set which might not include enough diffs between the two
models. Second, and more importantly, the model in the previous iteration has likely seen the entire
dataset, which breaks the independence assumption needed for the statistical test.
To address these problems and ensure a fresh, sufficiently large test set for each comparison, practitioners often instead measure changes on a set of diffs for the proposed change. For example, to
compare classifier A and B, each classifier is evaluated on a billion unlabeled examples, and then the
set of diffs is defined as those examples for which classifiers A and B predict a different class.
1.2 Churn
We define the churn between two models as the expected percent of diffs sampled from the test
distribution. For a fixed accuracy gain, less churn is better. For example, if classifier A has accuracy
90% and classifier B has accuracy 91%, then the best case is if classifier B gets the same 90% of
examples correct as classifier A, while correcting A?s errors on 1% of the data. Churn is thus only
1% in this case, and all diffs between A and B will be wins for B. Therefore the improvement of
B over A will achieve statistical significance after labelling a mere 10 diffs. The worst case is if
classifier A is right on the 9% of examples that B gets wrong, and B is right on the 10% of examples
that A gets wrong. In this case, churn is 19%, and a given diff will only have probability of 10/19 of
being a win for B, and almost 1,000 diffs will have to be labeled to be confident that B is better.
On Statistical Significance: Throughout this paper, we assume that every diff is independent and
identically distributed with some probability of being a win for the test model vs. the base model.
Thus, the probability of k wins in n trials follows a binomial distribution. Confidence intervals can
provide more information than a p-value, but p-values are a useful summary statistic to motivate the
problem and proposed solution, and are relevant in practice; for a longer discussion see e.g. [7].
2 Reducing Churn for Classifiers
In this paper, we propose a new training strategy for reducing the churn between classifiers. One
special case is how to train a classifier B to be low-churn given a fixed classifier A. We treat that
[Figure 1 diagram: the "De-Churning Markov Chain", with training sets T_1, T_2, ..., T_K, T_A, T_B, chained classifiers F*_1, F*_2, ..., F*_K, A*, B* (orange), and unstabilized classifiers A, B (green).]
Figure 1: The orange nodes illustrate a Markov chain: at each step the classifier F*_t is regularized towards the previous step's classifier F*_{t-1} using the stabilization operator S, and each step is trained on a different random training set T_t. We run K steps of this Markov chain, for K large enough that the distribution of F*_K is close to a stationary distribution. The classifier A* = S(F*_K, T_A) is then deployed. Later, some changes are proposed, and a new classifier B* is trained on training set T_B but regularized towards A* using B* = S(A*, T_B). We compare this proposal, in terms of churn and accuracy, to the green nodes, which do not use the proposed stabilization.
special case as well as a broader problem: a framework for training both classifiers A and B so that
classifier B is expected to have low-churn relative to classifier A, though when we train A we do not
yet know exactly the changes B will incorporate. We place no constraints on the kind of classifiers or
the kind of future changes allowed.
Our solution consists of two components: a stabilization operator that regularizes classifier B to be
closer in predictions to classifier A; and a randomization of the training set that attempts to mimic
expected future changes.
We consider a training set T = \{(x_i, y_i)\}_{i=1}^m of m samples, with each D-dimensional feature vector x_i \in \mathcal{X} \subseteq \mathbb{R}^D and each label y_i \in \mathcal{Y} = \{-1, 1\}. Samples are drawn i.i.d. from distribution \mathcal{D}. Define a classifier f : \mathbb{R}^D \to \{-1, 1\}, and the churn between two classifiers f_1 and f_2 as:

    C(f_1, f_2) = \mathbb{E}_{(X,Y) \sim \mathcal{D}}\big[\mathbf{1}_{f_1(X) f_2(X) < 0}\big],    (1)
where 1 is the indicator function. We are given training sets TA and TB to train the first and second
version of the model respectively. TB might add or drop features or examples compared to TA .
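Read operationally, Eq. (1) is just the disagreement rate of the two classifiers' ±1 predictions, which can be estimated on any large (even unlabeled) sample; a minimal sketch:

    import numpy as np

    def churn(pred_1, pred_2):
        # Empirical estimate of C(f1, f2) in Eq. (1): the fraction of examples
        # where the two {-1, +1} prediction vectors disagree.
        pred_1, pred_2 = np.asarray(pred_1), np.asarray(pred_2)
        return float(np.mean(pred_1 * pred_2 < 0))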
2.1 Perturbed Training to Imitate Future Changes
Consider a random training set drawn from a distribution P(TA ), such that different draws may have
different training samples and different features. We show that one can train an initial classifier to be
more consistent in predictions for different realizations of the perturbed training set by iteratively
training on a series of i.i.d. random draws T1 , T2 , . . . from P(TA ). We choose P(TA ) to model a
typical expected future change to the dataset. For example, if we think a likely future change will
add 5% more training data and one new feature, then we would define a random training set to be a
random 95% of the m examples in TA , while dropping a feature at random.
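A draw from P(T_A) of this kind might look as follows. This is a sketch under our own conventions: we zero out dropped feature columns rather than removing them, so every model in the chain sees inputs of the same dimension, and `rng` is a NumPy random generator:

    import numpy as np

    def sample_perturbed_set(X, y, rng, row_frac=0.95, n_drop=1):
        # One draw T_t ~ P(T_A): a random subset of the rows, with a few
        # feature columns zeroed out to imitate a plausible future change.
        n, d = X.shape
        rows = rng.choice(n, size=int(row_frac * n), replace=False)
        X_t = X[rows].copy()
        X_t[:, rng.choice(d, size=n_drop, replace=False)] = 0.0
        return X_t, y[rows]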
2.2 Stabilized Training Based On A Previous Model using a Markov Chain
We propose a Markov chain Monte Carlo (MCMC) approach to form a distribution over classifiers
that are consistent in predictions w.r.t. the distribution P(TA ) on the training set. Let S denote
a regularized training that outputs a new classifier F*_{t+1} = S(F*_t, T_{t+1}), where F*_t is a previous
classifier and T_{t+1} is the current training set. Applying S repeatedly to random training sets T_t forms
a Markov chain as shown in Figure 1. We expect this chain to produce a stationary peaked distribution
on classifiers robust to the perturbation P(TA ). We sample a model from this resulting distribution
after K steps.
We end the proposed Markov chain with a classifier A* trained on the full training set T_A, that is,
A* = S(F*_K, T_A). Classifier A* is the initial launched model, and has been pre-trained to be robust
to the kind of changes we expect to see in some future training set T_B. Later, classifier B* should be
trained as B* = S(A*, T_B). We expect the chain to have reduced the churn C(A*, B*) compared to
the churn C(A, B) that would have resulted from training classifiers A and B without the proposed
stabilization. See Figure 1 for an illustration. Note that this chain only needs to be run for the first
version of the model.
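A compact sketch of this chain, assuming a stabilization operator S(f, T) that accepts f = None for the initial, unanchored fit (that convention and the names are ours):

    import numpy as np

    def run_stabilization_chain(S, sample_T, T_A, K=30, seed=0):
        # The chain of Figure 1: F*_{t+1} = S(F*_t, T_{t+1}) on random draws
        # T_t ~ P(T_A), ending with the deployable model A* = S(F*_K, T_A).
        rng = np.random.default_rng(seed)
        f = S(None, sample_T(rng))       # F*_1, trained without an anchor
        for _ in range(K - 1):
            f = S(f, sample_T(rng))      # one de-churning step
        return S(f, T_A)                 # A*, anchored on F*_K, full data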
On Regularization Effect of Perturbed Training: One can view the perturbation of the dataset
and random feature drops during the MCMC run as a form of regularization, resembling the dropout
technique [8] now popular in deep, convolutional and recurrent neural networks (see e.g. [9] for a
recent survey). Such regularization can result in better generalization error, and our empirical results
show some evidence of such an effect. See further discussion in the experiments section.
Perturbation Chain as Longitudinal Study: The chain in Figure 1 can also be viewed as a study
of the stabilization operator upon several iterations of the model, with each trained and anchored
on the previous version. It can help us assess if the successive application of the operator has any
adverse effect on the accuracy or if the resulting churn reduction diminishes over time.
3
Stabilization Operators
We propose two stabilization operators: (I) Regress to Corrected Prediction (RCP) which turns the
classification problem into a regression towards corrected predictions of an older model, and (II) the
Diplopia operator which regularizes the new model towards the older model using example weights.
3.1 RCP Stabilization Operator
We propose a stabilization operator S(f, T) that can be used with almost any regression algorithm and any type of change. The RCP operator re-labels each classification training label y_j \in \{-1, 1\} in T with a regularized label \tilde{y}_j \in \mathbb{R}, using an anchor model f:

    \tilde{y}_j = \begin{cases} \alpha f(x_j) + (1 - \alpha)\, y_j & \text{if } y_j f(x_j) \ge 0 \\ \epsilon\, y_j & \text{otherwise,} \end{cases}    (2)

where α, ε \in [0, 1] are hyperparameters of S that control the churn-accuracy trade-off, with larger α corresponding to lower churn but less sensitivity to good changes. Denote the set of all re-labeled examples \tilde{T}. The RCP stabilization operator S trains a regression model on \tilde{T}, using the user's choice of regression algorithm.
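A minimal sketch of this operator with ridge regression standing in for the user's regressor (the paper leaves that choice open; the deployed classifier is the sign of the regressor's output, and all names here are ours):

    import numpy as np
    from sklearn.linear_model import Ridge

    def rcp_relabel(anchor_scores, y, alpha=0.5, eps=0.5):
        # Eq. (2): blend the target toward the anchor's score where the anchor
        # agrees with the label; shrink it to eps * y where it does not.
        y = np.asarray(y, dtype=float)
        agree = y * anchor_scores >= 0
        return np.where(agree, alpha * anchor_scores + (1 - alpha) * y, eps * y)

    def rcp_operator(anchor, X, y, alpha=0.5, eps=0.5):
        # S(f, T): fit a regressor to the RCP-relabeled targets; with no
        # anchor, fit the raw {-1, +1} labels directly.
        targets = y if anchor is None else rcp_relabel(anchor.predict(X), y, alpha, eps)
        return Ridge(alpha=1.0).fit(X, targets)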
3.2 Diplopia Stabilization Operator
The second stabilization operator, which we term Diplopia (double-vision), can be used with any
classification strategy that can output a probability estimate for each class, including algorithms like
SVMs and random forests (calibrated with a method like Platt scaling [10] or isotonic regression
[11]). This operator can be easily extended to multi-class problems.
For binary classification, the Diplopia operator copies each training example into two examples with labels ±1, and assigns different weights to the two contradictorily labeled copies. If f(\cdot) is the probability estimate of class +1:

    (x_i, y_i) \to \begin{cases} (x_i, +1) & \text{with weight } \omega_i \\ (x_i, -1) & \text{with weight } 1 - \omega_i \end{cases}
    \qquad
    \omega_i = \begin{cases} \alpha f(x_i) + (1 - \alpha)\, \mathbf{1}_{y_i \ge 0} & \text{if } y_i \big(f(x_i) - \tfrac{1}{2}\big) \ge 0 \\ \tfrac{1}{2} + \epsilon\, y_i & \text{otherwise.} \end{cases}
The formula always assigns the higher weight to the copy with the correct label. Notice that the roles of α and ε are very similar to those in (2). To see the intuition behind this operator, note that with α = 1 and without the ε-correction, stochastic labeling by f(\cdot) maximizes the likelihood of the new dataset.
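A sketch of the duplication step, producing labels and per-example weights for any classifier that accepts sample weights (the anchor supplies calibrated probabilities of class +1, and the names are ours):

    import numpy as np

    def diplopia_expand(anchor_probs, y, alpha=0.5, eps=0.5):
        # Duplicate every example with labels +1 and -1 and the Section 3.2
        # weights; anchor_probs[i] estimates P(y = +1 | x_i).
        y = np.asarray(y, dtype=float)
        agree = y * (anchor_probs - 0.5) >= 0
        w_pos = np.where(agree,
                         alpha * anchor_probs + (1 - alpha) * (y >= 0),
                         0.5 + eps * y)
        labels = np.concatenate([np.ones_like(y), -np.ones_like(y)])
        weights = np.concatenate([w_pos, 1.0 - w_pos])
        # Duplicate the feature rows correspondingly, e.g. np.vstack([X, X]),
        # and pass weights as sample_weight when training.
        return labels, weights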
The RCP operator requires using a regressor, but our preliminary experiments showed that it often
trains faster (without the need to double the dataset size) and reduces churn better than the Diplopia
operator. We therefore focus on the RCP operator for theoretical and empirical analysis.
4 Theoretical Results
In this section we present some general bounds on smoothed churn, assuming that the perturbation
does not remove any features, and that the training algorithm is symmetric in training examples (i.e.
independent of the order of the dataset). The analysis here assumes datasets for different models
are sampled i.i.d., ignoring the dependency between consecutive re-labeled datasets (through the
intermediate model). Proofs and further technical details are given in the supplemental material.
First, note that we can rewrite the definition of the churn in terms of the zero-one loss:

    C(f_1, f_2) = \mathbb{E}_{(X,Y) \sim \mathcal{D}}\big[\ell_{0,1}(f_1(X), f_2(X))\big] = \mathbb{E}_{(X,Y) \sim \mathcal{D}}\big[\,|\ell_{0,1}(f_1(X), Y) - \ell_{0,1}(f_2(X), Y)|\,\big].    (3)
We define a relaxation of C that is similar to the loss used by [5] to study the stability of classification algorithms; we call it smooth churn, parameterized by the choice of γ:

    C_\gamma(f_1, f_2) = \mathbb{E}_{(X,Y) \sim \mathcal{D}}\big[\,|\ell_\gamma(f_1(X), Y) - \ell_\gamma(f_2(X), Y)|\,\big],    (4)

where \ell_\gamma(y, y') = 1 if y y' \le 0, \ell_\gamma(y, y') = 1 - y y'/\gamma for 0 \le y y' \le \gamma, and \ell_\gamma(y, y') = 0 otherwise.
Smooth churn can be interpreted as γ playing the role of a "confidence threshold" of the classifier f, such that |f(x)| \le \gamma means the classifier is not confident in its prediction. It is easy to verify that \ell_\gamma is (1/\gamma)-Lipschitz continuous with respect to y when y' \in \{-1, 1\}.
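The ramp loss and the resulting empirical smooth churn are straightforward to compute from real-valued scores; a sketch (names ours):

    import numpy as np

    def ramp_loss(scores, y, gamma):
        # l_gamma(s, y): 1 when s*y <= 0, linear on [0, gamma], 0 afterwards.
        return np.clip(1.0 - (scores * y) / gamma, 0.0, 1.0)

    def smooth_churn(scores_1, scores_2, y, gamma):
        # Empirical C_gamma of Eq. (4).
        return float(np.mean(np.abs(ramp_loss(scores_1, y, gamma)
                                    - ramp_loss(scores_2, y, gamma))))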
Let f_T(x) \in \mathbb{R} be a classifier discriminant function (which can be thresholded to form a classifier) trained on set T. Let T^i be the same as T except with the i-th training sample (x_i, y_i) replaced by another sample. Then, as in [4], define the training algorithm f_\cdot(\cdot) to be β-stable if:

    \forall x, T, T^i : \; |f_T(x) - f_{T^i}(x)| \le \beta.    (5)
Many algorithms such as SVM and classical regularization networks have been shown to be β-stable with β = O(1/m) [4, 5]. We can use β-stability of learning algorithms to get a bound on the expected churn between independent runs of the algorithms on i.i.d. datasets:
Theorem 1 (Expected Churn). Suppose f is β-stable, and is used to train classifiers on i.i.d. training sets T and T' sampled from \mathcal{D}^m. We have:

    \mathbb{E}_{T, T' \sim \mathcal{D}^m}\big[C_\gamma(f_T, f_{T'})\big] \le \frac{\beta \sqrt{m}}{\gamma}.    (6)

Assuming β = O(1/m) this bound is of order O(1/\sqrt{m}), in line with most concentration bounds on
the generalization error. We can further show that churn is concentrated around its expectation:
Theorem 2 (Concentration Bound on Churn). Suppose f is β-stable, and is used to train classifiers on i.i.d. training sets T and T' sampled from \mathcal{D}^m. We have:

    \Pr_{T, T' \sim \mathcal{D}^m}\left[ C_\gamma(f_T, f_{T'}) > \frac{\beta \sqrt{m}}{\gamma} + \epsilon \right] \le e^{-m \epsilon^2 \gamma^2 / 2}.    (7)
β-stability results for learning algorithms often include a worst-case bound on the loss or on the Lipschitz constant of the loss function. Assuming we use the RCP operator with the squared loss in a reproducing kernel Hilbert space (RKHS), we can derive a distribution-dependent bound on the expected squared churn:
Theorem 3 (Expected Squared Churn). Let \mathcal{F} be a reproducing kernel Hilbert space with kernel k such that \forall x \in \mathcal{X} : k(x, x) \le \kappa^2 < \infty. Let f_T be a model trained on T = \{(x_i, y_i)\}_{i=1}^m defined by:

    f_T = \arg\min_{g \in \mathcal{F}} \; \frac{1}{m} \sum_{i=1}^m (g(x_i) - y_i)^2 + \lambda \|g\|_k^2.    (8)

For models trained on i.i.d. training sets T and T':

    \mathbb{E}_{\substack{T, T' \sim \mathcal{D}^m \\ (X,Y) \sim \mathcal{D}}}\!\left[ \big(\ell_\gamma(f_T(X), Y) - \ell_\gamma(f_{T'}(X), Y)\big)^2 \right] \le \frac{2\kappa^4}{m \gamma^2 \lambda^2} \, \mathbb{E}_{T \sim \mathcal{D}^m}\!\left[ \frac{1}{m} \sum_{i=1}^m \big(f_T(x_i) - y_i\big)^2 \right].    (9)
We can further use Chebyshev's inequality to get a concentration bound on the smooth churn C_γ. Unlike the bounds in [4] and [5], the bound of Theorem 3 scales with the expected training error (note that we must use \tilde{y}_i in place of y_i when applying the theorem, since the training data is re-labeled by the stabilization operator). We can thus use the above bound to analyse the effect of α and ε on the churn, through their influence on the training error.
Suppose the Markov chain described in Section 2.2 has reached a stationary distribution. Let F*_k be a model sampled from the resulting stationary distribution, used with the RCP operator defined in (2) to re-label the dataset T_{k+1}. Since F*_{k+1} is the minimizer of the objective in (8) on the re-labeled dataset, we have:

    \mathbb{E}_{T_{k+1}}\!\left[\frac{1}{m}\sum_{i=1}^m \big(F^*_{k+1}(x_i) - \tilde{y}_i\big)^2\right] \le \mathbb{E}_{T_{k+1}}\!\left[\frac{1}{m}\sum_{i=1}^m \big(F^*_k(x_i) - \tilde{y}_i\big)^2 + \lambda\big(\|F^*_k\|_k^2 - \|F^*_{k+1}\|_k^2\big)\right]
        = \mathbb{E}_{T_{k+1}}\!\left[\frac{1}{m}\sum_{i=1}^m \big(F^*_k(x_i) - \tilde{y}_i\big)^2\right],    (10)

where line (10) is by the assumptions of stationary regime on F*_k and F*_{k+1} with similar dataset sampling distributions for T_k and T_{k+1}. If E is the set of examples that F*_k got wrong, using the definition of the RCP operator we can replace \tilde{y}_i to get this bound on the squared churn:

    \frac{2\kappa^4}{m\gamma^2\lambda^2}\,\mathbb{E}_{T_{k+1}}\!\left[\frac{(1-\alpha)^2}{m}\sum_{i \notin E}\big(F^*_k(x_i) - y_i\big)^2 + \frac{1}{m}\sum_{i \in E}\big(F^*_k(x_i) + \epsilon\big)^2\right].    (11)
We can see in Eqn. (11) that using an α close to 1 can decrease the first part of the bound, but at the same time it can negatively affect the error rate of the classifier, resulting in more samples in E and consequently a larger second term. Decreasing ε can reduce the (F^*_k(x_i) + \epsilon)^2 term of the bound, but can again cause an increase in the error rate. As shown in the experimental results, there is often a trade-off between the amount of churn reduction and the accuracy of the resulting model. We can measure the accuracy on the training set or a validation set to make sure the choice of α and ε does not degrade the accuracy. To estimate churn reduction, we can use an unlabeled dataset.

Table 2: Description of the datasets used in the experimental analysis.

                    Nomao [13]                   News Popularity [14]          Twitter Buzz [15]
# Features          89                           61                            77
T_A                 4000 samples, 84 features    8000 samples, 58 features     4000 samples, 70 features
T_B                 5000 samples, 89 features    10000 samples, 61 features    5000 samples, 77 features
Validation set      1000 samples                 1000 samples                  1000 samples
Testing set         28465 samples                28797 samples                 45402 samples
5 Experiments
This section demonstrates the churn reduction effect of the RCP operator for three UCI benchmark
datasets (see Table 2) with three regression algorithms: ridge regression, random forest regression, and
support vector machine regression with RBF kernel, all implemented in Scikit-Learn [12] (additional
results for boosted stumps and linear SVM in the appendix). We randomly split each dataset into
three fixed parts: a training set, a validation set on which we optimized the hyper-parameters for
all algorithms, and a testing set. We impute any missing values by the corresponding mean, and
normalize the data to have zero mean and variance 1 on the training set. See the supplementary
material for more experimental details.
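One plausible scikit-learn rendering of this preprocessing, with ridge regression as the downstream model (the exact pipeline used in the paper is not specified, so this is an assumption):

    from sklearn.impute import SimpleImputer
    from sklearn.linear_model import Ridge
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Fit on the training split only, so imputation means and standardization
    # statistics never leak information from the validation or test sets.
    model = make_pipeline(SimpleImputer(strategy="mean"),
                          StandardScaler(),
                          Ridge(alpha=1.0))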
To compare two models by computing the WLR on a reasonable number of diffs, we have made the
testing sets as large as possible, so that the expected number of diffs between two different models
is large enough to derive accurate and statistically significant conclusions. Lastly, we note that the
churn metric does not require labels, so it can be computed on an unlabeled dataset.
5.1 Experimental Set-up and Metrics
We assume an initial classifier is to be trained on TA , and a later candidate trained on TB will be
tested against the initial classifier. For the baseline of our experiments, we train classifier A on TA
and classifier B on TB independently and without any stabilization, as shown in Figure 1.
For the RCP operator comparison, we train A on T_A, then train B+ = S(A, T_B). For the MCMC operator comparison, we run the MCMC chain for k = 30 steps (empirically enough for convergence for the datasets we considered, as seen in Figure 2) and set A* = S(F*_k, T_A) and B* = S(A*, T_B). The dataset perturbation sub-samples 80% of the examples in T_A and randomly drops 3-7 features. We run 40 independent chains to measure the variability, and report the average outcome and standard deviation. Figure 2 (left) plots the average and standard deviation of the churn along the 40 traces, and Figure 2 (right) shows the accuracy.

[Figure 2 plots: churn between consecutive models, C(F_i, F_{i-1}) vs. C(F*_i, F*_{i-1}) (left), and test accuracy of F_i vs. F*_i (right), over 30 iterations of the Markov chain.]
Figure 2: Left: Churn between consecutive models during the MCMC run on the Nomao dataset, with and without stabilization. Right: Accuracy of the intermediate models, with and without stabilization. Values are averaged over 40 runs of the chain. Dotted lines show standard errors.
For each experiment we report the churn ratio C_r between the initial classifier and the candidate change, that is, C_r = C(B+, A)/C(B, A) for the RCP operator, C_r = C(B*, A*)/C(B, A) for the MCMC operator, and C_r = C(B, A)/C(B, A) = 1 for the baseline experiment. The most important metric in practice is how easy it is to tell if B is an improvement over A, which we quantify by the WLR between the candidate and initial classifier for each experiment. To help interpret the WLR, we also report the resulting probability p_win that we would conclude that the candidate change is positive (p \le 0.05) with a random 100-example set of differences.
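Both quantities are cheap to compute; a sketch reusing the `churn` and `min_wins_needed` helpers from the earlier sketches:

    from scipy.stats import binom

    def churn_ratio(pred_new, pred_anchor, pred_B, pred_A):
        # C_r: churn of the stabilized pair divided by churn of the baseline
        # pair, measured on the same (possibly unlabeled) test sample.
        return churn(pred_new, pred_anchor) / churn(pred_B, pred_A)

    def p_win(wlr, n_diffs=100, p=0.05):
        # Probability of declaring the change positive at level p from
        # n_diffs labeled diffs, if each diff wins with probability wlr/(1+wlr).
        k_min = min_wins_needed(n_diffs, p)
        return binom.sf(k_min - 1, n_diffs, wlr / (1.0 + wlr))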
Lastly, to demonstrate that the proposed methods reduce the churn without adversely impacting the
accuracy of the models, we also report the accuracy of the different trained models for a large test set,
though the point of this work is that a sufficiently-large labeled test set may not be available in a real
setting (see Section 1.1), and note that even if available, using a fixed test set to test many different
changes will lead to overfitting.
5.2 Results

Table 3 shows results using reasonable default values of α = 0.5 and ε = 0.5 for both RCP and the MCMC (for results with other values of α and ε see Appendix D). As seen in the C_r rows of the table, RCP reduces churn over the baseline in all 9 cases, generally by 20%, but as much as 46% for ridge regression on the Nomao dataset. Similarly, running RCP in the Markov Chain also reduces the churn compared to the baseline in all 9 cases, and by slightly more on average than with the one-step RCP.

[Figure 3 plots: test-accuracy deltas (A* - A and B* - B) and the churn ratio, as functions of the RCP parameters ε (left) and α (right).]
Figure 3: SVM on the Nomao dataset. Left: Testing accuracy of A* and B* compared to A and B, and churn ratio C_r as a function of ε, for fixed α = 0.7. Both the accuracy and the churn ratio tend to increase with larger values of ε. Right: Accuracies and the churn ratio versus α, for fixed ε = 0.1. There is a sharp decrease in accuracy with α > 0.8, likely due to divergence in the chain.
Table 3: Experiment results on 3 domains with 3 different training algorithms for a single-step RCP and the MCMC methods. For the MCMC experiment, we report the numbers with the standard deviation over the 40 runs of the chain.

                               Baseline            RCP                  MCMC, k = 30
                               No Stabilization    α = 0.5, ε = 0.5     α = 0.5, ε = 0.5
Nomao
  Ridge   WLR                  1.24                1.40                 1.31
          pwin                 26.5                49.2                 36.5
          Cr                   1.00                0.54                 0.54 ± 0.06
          Acc V1 / V2          93.1 / 93.4         93.1 / 93.4          93.2 ± 0.1 / 93.4 ± 0.1
  RF      WLR                  1.02                1.13                 1.09
          pwin                 5.6                 13.4                 9.8
          Cr                   1.00                0.83                 0.83 ± 0.05
          Acc V1 / V2          94.8 / 94.8         94.8 / 95.0          94.9 ± 0.2 / 95.0 ± 0.2
  SVM     WLR                  1.70                2.51                 2.32
          pwin                 82.5                99.7                 99.2
          Cr                   1.00                0.75                 0.69 ± 0.06
          Acc V1 / V2          94.6 / 95.1         94.6 / 95.2          94.8 ± 0.2 / 95.3 ± 0.1
News
  Ridge   WLR                  0.95                0.94                 1.04
          pwin                 2.5                 2.4                  6.7
          Cr                   1.00                0.75                 0.78 ± 0.04
          Acc V1 / V2          65.1 / 65.0         65.1 / 65.0          65.0 ± 0.1 / 65.1 ± 0.1
  RF      WLR                  1.07                1.02                 1.10
          pwin                 8.5                 5.7                  10.8
          Cr                   1.00                0.69                 0.67 ± 0.04
          Acc V1 / V2          64.5 / 65.1         64.5 / 64.7          64.3 ± 0.3 / 64.8 ± 0.2
  SVM     WLR                  1.17                1.26                 1.24
          pwin                 18.4                29.4                 26.1
          Cr                   1.00                0.77                 0.86 ± 0.02
          Acc V1 / V2          64.9 / 65.4         64.9 / 65.4          64.8 ± 0.1 / 65.4 ± 0.1
Twitter Buzz
  Ridge   WLR                  1.71                3.54                 1.53
          pwin                 83.1                100.0                66.4
          Cr                   1.00                0.85                 0.65 ± 0.05
          Acc V1 / V2          89.7 / 89.9         89.7 / 90.0          90.1 ± 0.1 / 90.2 ± 0.1
  RF      WLR                  1.35                1.15                 1.15
          pwin                 41.5                16.1                 15.9
          Cr                   1.00                0.86                 0.77 ± 0.07
          Acc V1 / V2          96.2 / 96.4         96.2 / 96.3          96.3 ± 0.1 / 96.3 ± 0.1
  SVM     WLR                  1.35                1.77                 1.55
          pwin                 42.2                86.6                 68.4
          Cr                   1.00                0.70                 0.70 ± 0.03
          Acc V1 / V2          96.0 / 96.1         96.0 / 96.1          96.1 ± 0.1 / 96.2 ± 0.1
In some cases, the reduced churn has a huge impact on the WLR. For example, for the SVM on
Twitter, the 30% churn reduction by RCP raised the WLR from 1.35 to 1.77, making it twice as
likely that labelling 100 differences would have verified the change was good (compare pwin values).
MCMC provides a similar churn reduction, but the WLR increase is not as large.
In addition to the MCMC providing slightly more churn reduction on average than RCP, running
the Markov chain provides slightly higher accuracy on average as well, most notably for the ridge
classifier on the Twitter dataset, raising initial classifier accuracy by 2.3% over the baseline. We
hypothesize this is due to the regularization effect of the perturbed training during the MCMC run,
resembling the effect of dropout in neural networks.
We used fixed values of α = 0.5 and ε = 0.5 for all the experiments in Table 3, but note that results will vary with the choice of α and ε, and if they can be tuned with cross-validation or otherwise, results can be substantially improved. Figure 3 illustrates the dependence on these hyper-parameters: the left plot shows that small values of ε result in lower churn with reduced improvement on accuracy, and the right plot shows that increasing α reduces churn, and also helps increase accuracy, but at values larger than 0.8 causes the Markov chain to diverge.
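A sweep of this kind is easy to script; a sketch reusing the earlier `rcp_operator` and `churn` helpers, where the anchor model A and the arrays `X_B`, `y_B`, `X_test`, `y_test`, `pred_A` are assumed to exist:

    import numpy as np

    results = []
    for alpha in (0.1, 0.3, 0.5, 0.7, 0.9):
        for eps in (0.1, 0.3, 0.5, 0.7, 0.9):
            B_plus = rcp_operator(A, X_B, y_B, alpha=alpha, eps=eps)
            preds = np.sign(B_plus.predict(X_test))
            results.append((alpha, eps,
                            churn(preds, pred_A),              # churn vs. the anchor A
                            float(np.mean(preds == y_test))))  # test accuracy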
References
[1] L. Devroye and T. Wagner. Distribution-free performance bounds for potential function rules. Information Theory, IEEE Transactions on, 25(5):601-604, 1979.
[2] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag: New York, 1995.
[3] V. N. Vapnik. Statistical Learning Theory. John Wiley: New York, 1998.
[4] O. Bousquet and A. Elisseeff. Algorithmic stability and generalization performance. In Advances in Neural Information Processing Systems 13: Proceedings of the 2000 Conference, volume 13, page 196. MIT Press, 2001.
[5] O. Bousquet and A. Elisseeff. Stability and generalization. Journal of Machine Learning Research, 2(Mar):499-526, 2002.
[6] S. Mukherjee, P. Niyogi, T. Poggio, and R. Rifkin. Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization. Advances in Computational Mathematics, 25(1-3):161-193, 2006.
[7] A. Reinhart. Statistics Done Wrong: The Woefully Complete Guide. No Starch Press, San Francisco, USA, 2015.
[8] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958, 2014.
[9] L. Zhang and P. N. Suganthan. A survey of randomized algorithms for training neural networks. Information Sciences, 2016.
[10] J. Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in Large Margin Classifiers, 10(3):61-74, 1999.
[11] A. Niculescu-Mizil and R. Caruana. Predicting good probabilities with supervised learning. In Proceedings of the 22nd International Conference on Machine Learning, pages 625-632. ACM, 2005.
[12] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.
[13] L. Candillier and V. Lemaire. Design and analysis of the Nomao challenge: active learning in the real-world. In Proceedings of the ALRA: Active Learning in Real-world Applications, Workshop ECML-PKDD, 2012.
[14] K. Fernandes, P. Vinagre, and P. Cortez. A proactive intelligent decision support system for predicting the popularity of online news. In Progress in Artificial Intelligence: 17th Portuguese Conference on Artificial Intelligence, EPIA 2015, Coimbra, Portugal, September 8-11, 2015, Proceedings, pages 535-546. Springer International Publishing, Cham, 2015.
[15] F. Kawala, E. Gaussier, A. Douzal-Chouakria, and E. Diemert. Apprentissage d'ordonnancement et influence de l'ambiguïté pour la prédiction d'activité sur les réseaux sociaux. In Coria 2014, pages 1-15, Nancy, France, March 2014.
5,584 | 6,053 | Launch and Iterate: Reducing Prediction Churn
Q. Cormier
ENS Lyon
15 parvis Ren? Descartes
Lyon, France
[email protected]
M. Milani Fard, K. Canini, M. R. Gupta
Google Inc.
1600 Amphitheatre Parkway
Mountain View, CA 94043
{mmilanifard,canini,mayagupta}@google.com
Abstract
Practical applications of machine learning often involve successive training iterations with changes to features and training examples. Ideally, changes in the output
of any new model should only be improvements (wins) over the previous iteration,
but in practice the predictions may change neutrally for many examples, resulting
in extra net-zero wins and losses, referred to as unnecessary churn. These changes
in the predictions are problematic for usability for some applications, and make it
harder and more expensive to measure if a change is statistically significant positive.
In this paper, we formulate the problem and present a stabilization operator to regularize a classifier towards a previous classifier. We use a Markov chain Monte Carlo
stabilization operator to produce a model with more consistent predictions without
adversely affecting accuracy. We investigate the properties of the proposal with
theoretical analysis. Experiments on benchmark datasets for different classification
algorithms demonstrate the method and the resulting reduction in churn.
1
The Curse of Version 2.0
In most practical settings, training and launching an initial machine-learned model is only the first
step: as new and improved features are created, additional training data is gathered, and the model
and learning algorithm are improved, it is natural to launch a series of ever-improving models. Each
new candidate may bring wins, but also unnecessary changes. In practice, it is desirable to minimize
any unnecessary changes for two key reasons. First, unnecessary changes can hinder usability
and debugability as they can be disorienting to users and follow-on system components. Second,
unnecessary changes make it more difficult to measure with statistical confidence whether the change
is truly an improvement. For both these reasons, there is great interest in making only those changes
that are wins, and minimizing any unnecessary changes, while making sure such process does not
hinder the overall accuracy objective.
There is already a large body of work in machine learning that treats the stability of learning
algorithms. These range from the early works of Devroye and Wagner [1] and Vapnik [2, 3] to more
recent studies of learning stability in more general hypothesis spaces [4, 5, 6]. Most of the literature
on this topic focus on stability of the learning algorithm in terms of the risk or loss function and how
such properties translate into uniform generalization with specific convergence rates. We build on
these notions, but the problem treated here is substantively different.
We address the problem of training consecutive classifiers to reduce unnecessary changes in the
presence of realistic evolution of the problem domain and the training sets over time. The main
contributions of this paper include: (I) discussion and formulation of the ?churn? metric between
trained models, (II) design of stabilization operators for regularization towards a previous model, (III)
proposing a Markov chain Monte Carlo (MCMC) stabilization technique, (VI) theoretical analysis of
the proposed stabilization in terms of churn, and (V) empirical analysis of the proposed methods on
benchmark datasets with different classification algorithms.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Table 1: Win-loss ratio (WLR) needed to establish a change is statistically significant at the p = 0.05
level for k wins out of n diffs from a binomial distribution. The empirical WLR column shows the
WLR one must actually see in the diffs. The true WLR column is the WLR the change must have so
that any random draw of diffs has at least a 95% chance of producing the needed empirical WLR.
1.1
# Diffs
Min # Wins
Needed
Max # Losses
Allowed
Empirical WLR
Needed
True WLR
Needed
10
100
1,000
10,000
9
59
527
5,083
1
41
473
4,917
9.000
1.439
1.114
1.034
26.195
1.972
1.234
1.068
Testing for Improvements
In the machine learning literature, it is common to compare classifiers on a fixed pre-labeled test set.
However, a fixed test set has a few practical downsides. First, if many potential changes to the model
are evaluated on the same dataset, it becomes difficult to avoid observing spurious positive effects that
are actually due to chance. Second, the true test distribution may be evolving over time, meaning that
a fixed test set will eventually diverge from the true distribution of interest. Third, and most important
to our discussion, any particular change may affect only a small subset of the test examples, leaving
too small a sample of differences (diffs) to determine whether a change is statistically significant.
For example, suppose one has a fixed test set of 10,000 samples with which to evaluate a classifier.
Consider a change to one of the features, say a Boolean string-similarity feature that causes the
feature to match more synonyms, and suppose that re-training a classifier with this small change to
this one feature impacts only 0.1% of random examples. Then only 10 of the 10,000 test examples
would be affected. As shown in the first row of Table 1, given only 10 diffs, there must be 9 or more
wins to declare the change statistically significantly positive for p = 0.05.
Note that cross-validation (CV), even in leave-one-out form, does not solve this issue. First, we are
still bound by the size of the training set which might not include enough diffs between the two
models. Second, and more importantly, the model in the previous iteration has likely seen the entire
dataset, which breaks the independence assumption needed for the statistical test.
To address these problems and ensure a fresh, sufficiently large test set for each comparison, practitioners often instead measure changes on a set of diffs for the proposed change. For example, to
compare classifier A and B, each classifier is evaluated on a billion unlabeled examples, and then the
set of diffs is defined as those examples for which classifiers A and B predict a different class.
1.2
Churn
We define the churn between two models as the expected percent of diffs sampled from the test
distribution. For a fixed accuracy gain, less churn is better. For example, if classifier A has accuracy
90% and classifier B has accuracy 91%, then the best case is if classifier B gets the same 90% of
examples correct as classifier A, while correcting A?s errors on 1% of the data. Churn is thus only
1% in this case, and all diffs between A and B will be wins for B. Therefore the improvement of
B over A will achieve statistical significance after labelling a mere 10 diffs. The worst case is if
classifier A is right on the 9% of examples that B gets wrong, and B is right on the 10% of examples
that A gets wrong. In this case, churn is 19%, and a given diff will only have probability of 10/19 of
being a win for B, and almost 1,000 diffs will have to be labeled to be confident that B is better.
On Statistical Significance: Throughout this paper, we assume that every diff is independent and
identically distributed with some probability of being a win for the test model vs. the base model.
Thus, the probability of k wins in n trials follows a binomial distribution. Confidence intervals can
provide more information than a p-value, but p-values are a useful summary statistic to motivate the
problem and proposed solution, and are relevant in practice; for a longer discussion see e.g. [7].
2
Reducing Churn for Classifiers
In this paper, we propose a new training strategy for reducing the churn between classifiers. One
special case is how to train a classifier B to be low-churn given a fixed classifier A. We treat that
2
De-Churning Markov Chain
A
B
T1
T2
...
TK
TA
TB
F1?
F2?
...
?
FK
A?
B?
Figure 1: The orange nodes illustrate a Markov Chain, at each step the classifier Ft? is regularized
?
towards the previous step?s classifier Ft?1
using the stabilization operator S, and each step trained on
a different random training set Tt . We run K steps of this Markov chain, for K large enough so that
?
the distribution of Fk? is close to a stationary distribution. The classifier A? = S(FK
, TA ) is then
?
deployed. Later, some changes are proposed, and a new classifier B is trained on training set TB but
regularized towards A? using B ? = S(A? , TB ). We compare this proposal in terms of churn and
accuracy to the green nodes, which do not use the proposed stabilization.
special case as well as a broader problem: a framework for training both classifiers A and B so that
classifier B is expected to have low-churn relative to classifier A, though when we train A we do not
yet know exactly the changes B will incorporate. We place no constraints on the kind of classifiers or
the kind of future changes allowed.
Our solution consists of two components: a stabilization operator that regularizes classifier B to be
closer in predictions to classifier A; and a randomization of the training set that attempts to mimic
expected future changes.
We consider a training set T = {(xi , yi )}m
i=1 of m samples with each D-dimensional feature vector
xi ? X ? RD and each label yi ? Y = {?1, 1}. Samples are drawn i.i.d. from distribution D.
Define a classifier f : RD ? {?1, 1}, and the churn between two classifiers f1 and f2 as:
C(f1 , f2 ) =
E [1f1 (X)f2 (X)<0 ],
(1)
(X,Y )?D
where 1 is the indicator function. We are given training sets TA and TB to train the first and second
version of the model respectively. TB might add or drop features or examples compared to TA .
2.1
Perturbed Training to Imitate Future Changes
Consider a random training set drawn from a distribution P(TA ), such that different draws may have
different training samples and different features. We show that one can train an initial classifier to be
more consistent in predictions for different realizations of the perturbed training set by iteratively
training on a series of i.i.d. random draws T1 , T2 , . . . from P(TA ). We choose P(TA ) to model a
typical expected future change to the dataset. For example, if we think a likely future change will
add 5% more training data and one new feature, then we would define a random training set to be a
random 95% of the m examples in TA , while dropping a feature at random.
2.2
Stabilized Training Based On A Previous Model using a Markov Chain
We propose a Markov chain Monte Carlo (MCMC) approach to form a distribution over classifiers
that are consistent in predictions w.r.t. the distribution P(TA ) on the training set. Let S denote
?
a regularized training that outputs a new classifier Ft+1
= S(Ft? , Tt+1 ) where Ft? is a previous
classifier and Tt+1 is the current training set. Applying S repeatedly to random training sets Tt forms
a Markov chain as shown in Figure 1. We expect this chain to produce a stationary peaked distribution
on classifiers robust to the perturbation P(TA ). We sample a model from this resulting distribution
after K steps.
We end the proposed Markov chain with a classifier A? trained on the full training set TA , that is,
?
A? = S(FK
, TA ). Classifier A? is the initial launched model, and has been pre-trained to be robust
to the kind of changes we expect to see in some future training set TB . Later, classifier B ? should be
trained as B ? = S(A? , TB ). We expect the chain to have reduced the churn C(A? , B ? ) compared to
the churn C(A, B) that would have resulted from training classifiers A and B without the proposed
stabilization. See Figure 1 for an illustration. Note that this chain only needs to be run for the first
version of the model.
3
On Regularization Effect of Perturbed Training: One can view the perturbation of the dataset
and random feature drops during the MCMC run as a form of regularization, resembling the dropout
technique [8] now popular in deep, convolutional and recurrent neural networks (see e.g. [9] for a
recent survey). Such regularization can result in better generalization error, and our empirical results
show some evidence of such an effect. See further discussion in the experiments section.
Perturbation Chain as Longitudinal Study: The chain in Figure 1 can also be viewed as a study
of the stabilization operator upon several iterations of the model, with each trained and anchored
on the previous version. It can help us assess if the successive application of the operator has any
adverse effect on the accuracy or if the resulting churn reduction diminishes over time.
3
Stabilization Operators
We propose two stabilization operators: (I) Regress to Corrected Prediction (RCP) which turns the
classification problem into a regression towards corrected predictions of an older model, and (II) the
Diplopia operator which regularizes the new model towards the older model using example weights.
3.1
RCP Stabilization Operator
We propose a stabilization operator S(f, T ) that can be used with almost any regression algorithm
and any type of change. The RCP operator re-labels each classification training label yj ? {?1, 1}
in T with a regularized label y?j ? R, using an anchor model f :
?f (xj ) + (1 ? ?)yj if yj f (xj ) ? 0
y?j =
(2)
yj
otherwise,
where ?, ? [0, 1] are hyperparameters of S that control the churn-accuracy trade-off, with larger
? corresponding to lower churn but less sensitive to good changes. Denote the set of all re-labeled
examples T?. The RCP stabilization operator S trains a regression model on T?, using the user?s choice
of regression algorithm.
3.2
Diplopia Stabilization Operator
The second stabilization operator, which we term Diplopia (double-vision), can be used with any
classification strategy that can output a probability estimate for each class, including algorithms like
SVMs and random forests (calibrated with a method like Platt scaling [10] or isotonic regression
[11]). This operator can be easily extended to multi-class problems.
For binary classification, the Diplopia operator copies each training example into two examples with
labels ?1, and assigns different weights to the two contradictorily labeled copies. If f (.) is the
probability estimate of class +1:
(xi , +1) with weight ?i
?f (xi ) + (1 ? ?)1yi ?0 if yi (f (xi ) ? 12 ) ? 0
(xi , yi ) ?
?i =
(xi , ?1) with weight 1 ? ?i
1/2 + yi
otherwise.
The formula always assigns the higher weight to the copy with the correct label. Notice that the roles
of ? and are very similar than to those in (2). To see the intuition behind this operator, note that
with ? = 1 and without the -correction, stochastic f (.) maximizes the likelihood of the new dataset.
The RCP operator requires using a regressor, but our preliminary experiments showed that it often
trains faster (without the need to double the dataset size) and reduces churn better than the Diplopia
operator. We therefore focus on the RCP operator for theoretical and empirical analysis.
4
Theoretical Results
In this section we present some general bounds on smoothed churn, assuming that the perturbation
does not remove any features, and that the training algorithm is symmetric in training examples (i.e.
independent of the order of the dataset). The analysis here assumes datasets for different models
are sampled i.i.d., ignoring the dependency between consecutive re-labeled datasets (through the
intermediate model). Proofs and further technical details are given in the supplemental material.
4
First, note that we can rewrite the definition of the churn in terms of zero-one loss:
C(f1 , f2 ) =
E
(X,Y )?D
[`0,1 (f1 (X), f2 (X))] =
E
(X,Y )?D
[|`0,1 (f1 (X), Y ) ? `0,1 (f2 (X), Y )|] . (3)
We define a relaxation of C that is similar to the loss used by [5] to study the stability of classification
algorithms, we call it smooth churn and it is parameterized by the choice of ?:
C? (f1 , f2 ) =
E
(X,Y )?D
[|`? (f1 (X), Y ) ? `? (f2 (X), Y )|] ,
(4)
where `? (y, y 0 ) = 1 if yy 0 ? 0, `? (y, y 0 ) = 1 ? yy 0 /? for 0 ? yy 0 ? ?, and `? (y, y 0 ) = 0 otherwise.
Smooth churn can be interpreted as ? playing the role of a ?confidence threshold? of the classifier f
such that |f (x)| ? means the classifier is not confident in its prediction. It is easy to verify that `?
is (1/?)-Lipschitz continuous with respect to y, when y 0 ? {?1, 1}.
Let f_T(x) ∈ R be a classifier discriminant function (which can be thresholded to form a classifier)
trained on set T. Let T^i be the same as T except with the i-th training sample (x_i, y_i) replaced by
another sample. Then, as in [4], define the training algorithm f_(.) to be β-stable if:

    ∀x, T, T^i : |f_T(x) − f_{T^i}(x)| ≤ β.    (5)

Many algorithms such as SVMs and classical regularization networks have been shown to be β-stable
with β = O(1/m) [4, 5]. We can use the β-stability of learning algorithms to get a bound on the expected
churn between independent runs of the algorithms on i.i.d. datasets:
Theorem 1 (Expected Churn). Suppose f is β-stable, and is used to train classifiers on i.i.d. training
sets T and T' sampled from D^m. We have:

    E_{T,T'∼D^m}[C_γ(f_T, f_{T'})] ≤ √2 β√m / γ.    (6)

Assuming β = O(1/m) this bound is of order O(1/√m), in line with most concentration bounds on
the generalization error. We can further show that churn is concentrated around its expectation:
Theorem 2 (Concentration Bound on Churn). Suppose f is β-stable, and is used to train classifiers
on i.i.d. training sets T and T' sampled from D^m. We have:

    Pr_{T,T'∼D^m}[ C_γ(f_T, f_{T'}) > √2 β√m / γ + δ ] ≤ e^{−mδ²γ² / (2(βm)²)}.    (7)
β-stability for learning algorithms often includes a worst-case bound on the loss or on the Lipschitz constant
of the loss function. Assuming we use the RCP operator with squared loss in a reproducing kernel
Hilbert space (RKHS), we can derive a distribution-dependent bound on the expected squared churn:
Theorem 3 (Expected Squared Churn). Let F be a reproducing kernel Hilbert space with kernel k
such that ∀x ∈ X : k(x, x) ≤ κ² < ∞. Let f_T be a model trained on T = {(x_i, y_i)}_{i=1}^m defined by:

    f_T = argmin_{g∈F} (1/m) Σ_{i=1}^m (g(x_i) − y_i)² + λ‖g‖_k².    (8)

For models trained on i.i.d. training sets T and T':

    E_{T,T'∼D^m, (X,Y)∼D}[ (ℓ_γ(f_T(X), Y) − ℓ_γ(f_{T'}(X), Y))² ] ≤ (2κ⁴ / (mλ²γ²)) E_{T∼D^m}[ (1/m) Σ_{i=1}^m (f_T(x_i) − y_i)² ].    (9)

We can further use Chebyshev's inequality to get a concentration bound on the smooth churn C_γ.
Unlike the bounds in [4] and [5], the bound of Theorem 3 scales with the expected training error (note
that we must use ŷ_i in place of y_i when applying the theorem, since training data is re-labeled by
the stabilization operator). We can thus use the above bound to analyse the effect of α and ε on the
churn, through their influence on the training error.
Table 2: Description of the datasets used in the experimental analysis.
Nomao [13]: 89 features; T_A: 4000 samples, 84 features; T_B: 5000 samples, 89 features; validation set: 1000 samples; testing set: 28465 samples.
News Popularity [14]: 61 features; T_A: 8000 samples, 58 features; T_B: 10000 samples, 61 features; validation set: 1000 samples; testing set: 28797 samples.
Twitter Buzz [15]: 77 features; T_A: 4000 samples, 70 features; T_B: 5000 samples, 77 features; validation set: 1000 samples; testing set: 45402 samples.

Suppose the Markov chain described in Section 2.2 has reached a stationary distribution. Let F*_k be a
model sampled from the resulting stationary distribution, used with the RCP operator defined in (2)
to re-label the dataset T_{k+1}.
Since F*_{k+1} is the minimizer of the objective in (8) on the re-labeled dataset, we have:

    E_{T_{k+1}}[ (1/m) Σ_{i=1}^m (F*_{k+1}(x_i) − ŷ_i)² ]
        ≤ E_{T_{k+1}}[ (1/m) Σ_{i=1}^m (F*_k(x_i) − ŷ_i)² + λ(‖F*_k‖_k² − ‖F*_{k+1}‖_k²) ]
        = E_{T_{k+1}}[ (1/m) Σ_{i=1}^m (F*_k(x_i) − ŷ_i)² ],    (10)

where line (10) is by the assumption of a stationary regime on F*_k and F*_{k+1} with similar dataset
sampling distributions for T_k and T_{k+1}. If E is the set of examples that F*_k got wrong, using the
definition of the RCP operator we can replace ŷ_i to get this bound on the squared churn:

    (2κ⁴ / (mλ²γ²)) E_{T_{k+1}}[ ((1 − α)²/m) Σ_{i∉E} (F*_k(x_i) − y_i)² + (1/m) Σ_{i∈E} (F*_k(x_i) + ε)² ].    (11)
We can see in Eqn. (11) that using an α close to 1 can decrease the first part of the bound, but at the
same time it can negatively affect the error rate of the classifier, resulting in more samples in E and
consequently a larger second term. Decreasing ε can reduce the (F*_k(x_i) + ε)² term of the bound, but
can again cause an increase in the error rate. As shown in the experimental results, there is often a
trade-off between the amount of churn reduction and the accuracy of the resulting model. We can
measure the accuracy on the training set or a validation set to make sure the choice of α and ε does
not degrade the accuracy. To estimate churn reduction, we can use an un-labeled dataset.
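A minimal tuning loop following this recipe could look as follows; train_rcp is a hypothetical helper that applies S(f, T) with the given (alpha, eps) and fits the regressor, churn is estimated on an unlabeled pool as just described, and the grid values are ours:

import itertools
import numpy as np

def tune_rcp(train_rcp, model_a, X_tr, y_tr, X_val, y_val, X_pool):
    """Grid over (alpha, eps), trading churn on an unlabeled pool against accuracy."""
    out = []
    for alpha, eps in itertools.product([0.3, 0.5, 0.7], [0.1, 0.5, 0.9]):
        model_b = train_rcp(model_a, X_tr, y_tr, alpha, eps)
        disagree = np.mean(np.sign(model_a.decision_function(X_pool))
                           != np.sign(model_b.decision_function(X_pool)))
        acc = np.mean(model_b.predict(X_val) == y_val)
        out.append({"alpha": alpha, "eps": eps, "churn": disagree, "acc": acc})
    return out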
5
Experiments
This section demonstrates the churn reduction effect of the RCP operator for three UCI benchmark
datasets (see Table 2) with three regression algorithms: ridge regression, random forest regression, and
support vector machine regression with RBF kernel, all implemented in Scikit-Learn [12] (additional
results for boosted stumps and linear SVM in the appendix). We randomly split each dataset into
three fixed parts: a training set, a validation set on which we optimized the hyper-parameters for
all algorithms, and a testing set. We impute any missing values by the corresponding mean, and
normalize the data to have zero mean and variance 1 on the training set. See the supplementary
material for more experimental details.
To compare two models by computing the WLR on a reasonable number of diffs, we have made the
testing sets as large as possible, so that the expected number of diffs between two different models
is large enough to derive accurate and statistically significant conclusions. Lastly, we note that the
churn metric does not require labels, so it can be computed on an unlabeled dataset.
5.1
Experimental Set-up and Metrics
We assume an initial classifier is to be trained on TA , and a later candidate trained on TB will be
tested against the initial classifier. For the baseline of our experiments, we train classifier A on TA
and classifier B on TB independently and without any stabilization, as shown in Figure 1.
For the RCP operator comparison, we train A on T_A, then train B+ = S(A, T_B). For the MCMC
operator comparison, we run the MCMC chain for k = 30 steps, empirically enough for convergence
on the datasets we considered (see Figure 2), and set A* = S(F*_k, T_A) and B* = S(A*, T_A).

Figure 2: Left: Churn between consecutive models during the MCMC run on the Nomao dataset, with
and without stabilization. Right: Accuracy of the intermediate models, with and without stabilization.
Values are averaged over 40 runs of the chain. Dotted lines show standard errors.
The dataset perturbation sub-samples 80% of the examples in TA and randomly drops 3-7 features.
We run 40 independent chains to measure the variability, and report the average outcome and standard
deviation. Figure 2 (left) plots the average and standard deviation of the churn along the 40 traces,
and Figure 2 (right) shows the accuracy.
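The dataset perturbation described above can be sketched as follows (our own helper, assuming NumPy arrays; the 80% sub-sampling rate and the 3-7 dropped features come directly from the text):

import numpy as np

def perturb(X, y, rng):
    """Sub-sample 80% of the examples and randomly drop 3-7 features."""
    n, p = X.shape
    rows = rng.choice(n, size=int(0.8 * n), replace=False)
    keep = rng.choice(p, size=p - rng.integers(3, 8), replace=False)
    return X[np.ix_(rows, keep)], y[rows], keep

# rng = np.random.default_rng(0); X_k, y_k, kept_features = perturb(X, y, rng)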
For each experiment we report the churn ratio Cr between the initial classifier and candidate change,
that is, Cr = C(B+, A)/C(B, A) for the RCP operator, Cr = C(B*, A*)/C(B, A) for the
MCMC operator, and Cr = C(B, A)/C(B, A) = 1 for the baseline experiment. The most important
metric in practice is how easy it is to tell if B is an improvement over A, which we quantify by the
WLR between the candidate and initial classifier for each experiment. To help interpret the WLR,
we also report the resulting probability pwin that we would conclude that the candidate change is
positive (p ≤ 0.05) with a random 100-example set of differences.
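Under our reading of this protocol (a sketch, not the authors' code), pwin can be computed by modeling the wins among n labeled diffs as Binomial(n, w) with w = WLR/(1 + WLR) and applying a one-sided binomial test against w = 1/2:

from scipy.stats import binom

def p_win(wlr, n=100, level=0.05):
    """Probability that n labeled diffs yield a significant win (p <= level)."""
    w = wlr / (1.0 + wlr)  # per-diff probability that the candidate wins
    # smallest number of wins k_star that is significant under the null w = 1/2
    k_star = next(k for k in range(n + 1) if binom.sf(k - 1, n, 0.5) <= level)
    return binom.sf(k_star - 1, n, w)  # P[wins >= k_star]

# Example: p_win(1.77) is considerably larger than p_win(1.35).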
Lastly, to demonstrate that the proposed methods reduce the churn without adversely impacting the
accuracy of the models, we also report the accuracy of the different trained models for a large test set,
though the point of this work is that a sufficiently-large labeled test set may not be available in a real
setting (see Section 1.1), and note that even if available, using a fixed test set to test many different
changes will lead to overfitting.
5.2
Results
Table 3 shows results using reasonable default values of α = 0.5 and ε = 0.5 for both RCP and the
MCMC (for results with other values of α and ε see Appendix D). As seen in the Cr rows of the table,
RCP reduces churn over the baseline in all 9 cases, generally by 20%, but as much as 46% for ridge
regression on the Nomao dataset. Similarly, running RCP in the Markov chain also reduces the churn
compared to the baseline in all 9 cases, and by slightly more on average than with the one-step RCP.
Figure 3: SVM on Nomao dataset. Left: Testing accuracy of A* and B* compared to A and B, and
churn ratio Cr as a function of ε, for fixed α = 0.7. Both the accuracy and the churn ratio tend to
increase with larger values of ε. Right: Accuracies and the churn ratio versus α, for fixed ε = 0.1.
There is a sharp decrease in accuracy with α > 0.8, likely due to divergence in the chain.
Table 3: Experiment results on 3 domains with 3 different training algorithms for a single-step RCP
and the MCMC methods. For the MCMC experiment, we report the numbers with the standard
deviation over the 40 runs of the chain. In each row, the three values per metric correspond to the
columns: Baseline (no stabilization) | RCP (α = 0.5, ε = 0.5) | MCMC, k = 30 (α = 0.5, ε = 0.5).

Nomao, Ridge: WLR 1.24 | 1.40 | 1.31; pwin 26.5 | 49.2 | 36.5; Cr 1.00 | 0.54 | 0.54 ± 0.06; Acc V1/V2 93.1/93.4 | 93.1/93.4 | 93.2 ± 0.1 / 93.4 ± 0.1
Nomao, RF: WLR 1.02 | 1.13 | 1.09; pwin 5.6 | 13.4 | 9.8; Cr 1.00 | 0.83 | 0.83 ± 0.05; Acc V1/V2 94.8/94.8 | 94.8/95.0 | 94.9 ± 0.2 / 95.0 ± 0.2
Nomao, SVM: WLR 1.70 | 2.51 | 2.32; pwin 82.5 | 99.7 | 99.2; Cr 1.00 | 0.75 | 0.69 ± 0.06; Acc V1/V2 94.6/95.1 | 94.6/95.2 | 94.8 ± 0.2 / 95.3 ± 0.1
News, Ridge: WLR 0.95 | 0.94 | 1.04; pwin 2.5 | 2.4 | 6.7; Cr 1.00 | 0.75 | 0.78 ± 0.04; Acc V1/V2 65.1/65.0 | 65.1/65.0 | 65.0 ± 0.1 / 65.1 ± 0.1
News, RF: WLR 1.07 | 1.02 | 1.10; pwin 8.5 | 5.7 | 10.8; Cr 1.00 | 0.69 | 0.67 ± 0.04; Acc V1/V2 64.5/65.1 | 64.5/64.7 | 64.3 ± 0.3 / 64.8 ± 0.2
News, SVM: WLR 1.17 | 1.26 | 1.24; pwin 18.4 | 29.4 | 26.1; Cr 1.00 | 0.77 | 0.86 ± 0.02; Acc V1/V2 64.9/65.4 | 64.9/65.4 | 64.8 ± 0.1 / 65.4 ± 0.1
Twitter Buzz, Ridge: WLR 1.71 | 3.54 | 1.53; pwin 83.1 | 100.0 | 66.4; Cr 1.00 | 0.85 | 0.65 ± 0.05; Acc V1/V2 89.7/89.9 | 89.7/90.0 | 90.1 ± 0.1 / 90.2 ± 0.1
Twitter Buzz, RF: WLR 1.35 | 1.15 | 1.15; pwin 41.5 | 16.1 | 15.9; Cr 1.00 | 0.86 | 0.77 ± 0.07; Acc V1/V2 96.2/96.4 | 96.2/96.3 | 96.3 ± 0.1 / 96.3 ± 0.1
Twitter Buzz, SVM: WLR 1.35 | 1.77 | 1.55; pwin 42.2 | 86.6 | 68.4; Cr 1.00 | 0.70 | 0.70 ± 0.03; Acc V1/V2 96.0/96.1 | 96.0/96.1 | 96.1 ± 0.1 / 96.2 ± 0.1
In some cases, the reduced churn has a huge impact on the WLR. For example, for the SVM on
Twitter, the 30% churn reduction by RCP raised the WLR from 1.35 to 1.77, making it twice as
likely that labelling 100 differences would have verified the change was good (compare pwin values).
MCMC provides a similar churn reduction, but the WLR increase is not as large.
In addition to the MCMC providing slightly more churn reduction on average than RCP, running
the Markov chain provides slightly higher accuracy on average as well, most notably for the ridge
classifier on the Twitter dataset, raising initial classifier accuracy by 2.3% over the baseline. We
hypothesize this is due to the regularization effect of the perturbed training during the MCMC run,
resembling the effect of dropout in neural networks.
We used fixed values of α = 0.5 and ε = 0.5 for all the experiments in Table 3, but note that results
will vary with the choice of α and ε, and if they can be tuned with cross-validation or otherwise,
results can be substantially improved. Figure 3 illustrates the dependence on these hyper-parameters:
the left plot shows that small values of ε result in lower churn with reduced improvement on accuracy,
and the right plot shows that increasing α reduces churn, and also helps increase accuracy, but at
values larger than 0.8 causes the Markov chain to diverge.
References
[1] L. Devroye and T. Wagner. Distribution-free performance bounds for potential function rules. Information Theory, IEEE Transactions on, 25(5):601–604, 1979.
[2] V. N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag: New York, 1995.
[3] V. N. Vapnik. Statistical Learning Theory. John Wiley: New York, 1998.
[4] O. Bousquet and A. Elisseeff. Algorithmic stability and generalization performance. In Advances in Neural Information Processing Systems 13: Proceedings of the 2000 Conference, volume 13, page 196. MIT Press, 2001.
[5] O. Bousquet and A. Elisseeff. Stability and generalization. Journal of Machine Learning Research, 2(Mar):499–526, 2002.
[6] S. Mukherjee, P. Niyogi, T. Poggio, and R. Rifkin. Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization. Advances in Computational Mathematics, 25(1-3):161–193, 2006.
[7] A. Reinhart. Statistics Done Wrong: The Woefully Complete Guide. No Starch Press, San Francisco, USA, 2015.
[8] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958, 2014.
[9] L. Zhang and P. N. Suganthan. A survey of randomized algorithms for training neural networks. Information Sciences, 2016.
[10] J. Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in Large Margin Classifiers, 10(3):61–74, 1999.
[11] A. Niculescu-Mizil and R. Caruana. Predicting good probabilities with supervised learning. In Proceedings of the 22nd International Conference on Machine Learning, pages 625–632. ACM, 2005.
[12] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
[13] L. Candillier and V. Lemaire. Design and analysis of the Nomao challenge: active learning in the real-world. In Proceedings of the ALRA: Active Learning in Real-world Applications, Workshop ECML-PKDD, 2012.
[14] K. Fernandes, P. Vinagre, and P. Cortez. A proactive intelligent decision support system for predicting the popularity of online news. In Progress in Artificial Intelligence: 17th Portuguese Conference on Artificial Intelligence, EPIA 2015, Coimbra, Portugal, September 8-11, 2015, Proceedings, pages 535–546. Springer International Publishing, Cham, 2015.
[15] F. Kawala, E. Gaussier, A. Douzal-Chouakria, and E. Diemert. Apprentissage d'ordonnancement et influence de l'ambiguïté pour la prédiction d'activité sur les réseaux sociaux. In Coria'2014, pages 1–15, Nancy, France, March 2014.
| 6053 |@word trial:1 version:4 nd:1 cortez:1 elisseeff:2 harder:1 reduction:8 nomao:6 sociaux:1 series:2 initial:8 tuned:1 rkhs:1 longitudinal:1 dubourg:1 current:1 com:1 yet:1 must:4 portuguese:1 john:1 realistic:1 remove:1 drop:3 plot:3 hypothesize:1 v:1 stationary:5 intelligence:2 imitate:1 ith:1 provides:2 node:2 successive:2 launching:1 diffs:15 zhang:1 along:1 diplopia:5 consists:1 blondel:1 notably:1 amphitheatre:1 expected:10 pkdd:1 pour:1 multi:1 salakhutdinov:1 decreasing:1 lyon:3 curse:1 increasing:1 becomes:1 spain:1 maximizes:1 mountain:1 kind:3 interpreted:1 string:1 substantially:1 proposing:1 supplemental:1 every:1 rcp:22 churning:1 exactly:1 classifier:55 wrong:4 platt:2 control:1 demonstrates:1 producing:1 positive:4 declare:1 t1:2 treat:2 might:2 twice:1 range:1 statistically:5 averaged:1 practical:3 testing:5 yj:4 practice:4 empirical:7 evolving:1 significantly:1 fard:1 got:1 confidence:3 pre:2 get:6 unlabeled:2 close:2 operator:31 risk:2 applying:2 influence:2 isotonic:1 missing:1 resembling:2 chouakria:1 independently:1 survey:2 formulate:1 assigns:2 correcting:1 rule:1 importantly:1 regularize:1 epia:1 quentin:1 stability:9 notion:1 suppose:5 user:2 suganthan:1 hypothesis:1 expensive:1 mukherjee:1 labeled:9 ft:13 role:2 worst:2 news:3 trade:2 decrease:2 intuition:1 ideally:1 hinder:2 trained:13 motivate:1 rewrite:1 passos:1 upon:1 negatively:1 f2:9 easily:1 chapter:1 train:11 monte:3 artificial:2 tell:1 hyper:2 outcome:1 larger:4 solve:1 supplementary:1 say:1 otherwise:4 statistic:2 niyogi:1 think:1 analyse:1 online:1 net:1 propose:4 milani:1 fr:1 relevant:1 uci:1 realization:1 rifkin:1 translate:1 achieve:1 description:1 normalize:1 billion:1 convergence:2 double:2 sutskever:1 produce:2 leave:1 tk:8 help:3 illustrate:1 recurrent:1 derive:2 progress:1 implemented:1 launch:2 quantify:1 correct:2 stochastic:1 stabilization:22 material:2 require:1 f1:9 generalization:6 preliminary:1 randomization:1 varoquaux:1 kgk2k:1 correction:1 sufficiently:2 around:1 considered:1 great:1 algorithmic:1 predict:1 vary:1 early:1 consecutive:4 diminishes:1 label:8 prettenhofer:1 sensitive:1 minimization:1 mit:1 always:1 avoid:1 cr:15 boosted:1 broader:1 focus:2 improvement:6 likelihood:2 grisel:1 baseline:8 lemaire:1 twitter:4 dependent:1 niculescu:1 entire:1 spurious:1 france:3 overall:1 classification:7 issue:1 arg:1 impacting:1 activit:1 raised:1 special:2 orange:1 gramfort:1 sampling:1 peaked:1 future:6 mimic:1 t2:2 report:5 intelligent:1 few:1 randomly:2 resulted:1 divergence:1 replaced:1 attempt:1 interest:2 huge:1 investigate:1 cournapeau:1 truly:1 behind:1 chain:25 accurate:1 closer:1 necessary:1 poggio:1 kawala:1 re:7 theoretical:4 column:2 downside:1 boolean:1 caruana:1 deviation:3 subset:1 uniform:1 krizhevsky:1 too:1 dependency:1 perturbed:4 calibrated:1 confident:2 international:2 randomized:1 probabilistic:1 off:2 diverge:2 regressor:1 squared:4 again:1 choose:1 adversely:2 michel:1 potential:2 de:2 stump:1 includes:1 inc:1 vi:1 cormier:2 view:2 break:1 later:3 proactive:1 observing:1 reached:1 contribution:1 minimize:1 ass:1 accuracy:32 convolutional:1 variance:1 gathered:1 ren:1 carlo:3 mere:1 buzz:2 churn:59 acc:9 definition:2 against:1 regress:1 dm:2 proof:1 sampled:5 gain:1 dataset:18 popular:1 nancy:1 substantively:1 hilbert:2 actually:2 ta:19 higher:2 supervised:1 follow:1 improved:3 wei:1 formulation:1 evaluated:2 though:2 mar:1 done:1 lastly:2 eqn:1 scikit:2 google:2 mayagupta:1 usa:1 effect:8 verify:1 true:4 evolution:1 regularization:6 symmetric:1 iteratively:1 
during:3 impute:1 tt:4 demonstrate:2 ridge:6 complete:1 bring:1 percent:1 meaning:1 fi:2 common:1 empirically:1 volume:1 interpret:1 significant:4 cv:1 rd:2 wlr:23 fk:16 similarly:1 consistency:1 mathematics:1 portugal:1 stable:4 similarity:1 longer:1 base:1 add:2 recent:2 showed:1 verlag:1 inequality:1 binary:1 yi:12 cham:1 seen:3 additional:2 determine:1 ii:2 full:1 desirable:1 reduces:4 smooth:3 technical:1 usability:2 match:1 faster:1 cross:2 neutrally:1 impact:2 prediction:10 descartes:1 regression:10 vision:1 metric:4 expectation:1 iteration:6 kernel:4 proposal:2 affecting:1 addition:1 interval:1 leaving:1 extra:1 launched:1 unlike:1 sure:2 tend:1 practitioner:1 call:1 presence:1 intermediate:2 iii:1 enough:4 identically:1 easy:2 iterate:1 affect:2 independence:1 xj:2 split:1 reduce:3 chebyshev:1 whether:2 york:2 cause:3 repeatedly:1 deep:1 useful:1 generally:1 involve:1 amount:1 concentrated:1 svms:1 reduced:3 problematic:1 stabilized:1 notice:1 dotted:1 popularity:2 yy:3 pwin:11 brucher:1 dropping:1 affected:1 key:1 threshold:1 drawn:2 prevent:1 verified:1 thresholded:1 v1:9 relaxation:1 run:10 parameterized:1 place:2 almost:2 throughout:1 reasonable:2 draw:3 decision:1 appendix:2 scaling:1 dropout:3 bound:16 constraint:1 bousquet:2 min:2 march:1 slightly:3 making:3 pr:2 turn:1 eventually:1 thirion:1 needed:6 know:1 end:1 available:2 v2:9 fernandes:1 binomial:2 assumes:1 include:2 ensure:1 running:2 publishing:1 epsilon:1 build:1 establish:1 classical:1 objective:2 perrot:1 already:1 strategy:2 concentration:3 dependence:1 september:1 win:12 degrade:1 topic:1 discriminant:1 reason:2 fresh:1 assuming:3 devroye:2 sur:1 illustration:1 ratio:9 minimizing:1 kk:2 providing:1 difficult:2 gaussier:1 trace:1 kfk:2 design:2 markov:15 datasets:8 benchmark:3 ecml:1 canini:2 regularizes:2 extended:1 ever:1 variability:1 hinton:1 perturbation:5 reproducing:2 smoothed:1 sharp:1 vanderplas:1 optimized:1 raising:1 learned:1 diction:1 barcelona:1 nip:1 address:2 regime:1 challenge:1 douzal:1 tb:11 rf:3 max:1 green:1 including:1 natural:1 treated:1 regularized:5 predicting:2 indicator:1 mizil:1 older:2 created:1 literature:2 python:1 relative:1 loss:9 expect:3 versus:1 validation:5 sufficient:2 consistent:3 apprentissage:1 playing:1 row:2 summary:1 copy:3 free:1 guide:1 wagner:2 distributed:1 default:1 world:2 made:1 san:1 transaction:1 alpha:1 overfitting:2 active:2 anchor:1 parkway:1 unnecessary:7 conclude:1 francisco:1 xi:17 continuous:1 un:1 anchored:1 table:8 learn:2 nature:1 robust:2 ca:1 ignoring:1 improving:1 forest:2 domain:2 significance:2 main:1 synonym:1 hyperparameters:1 allowed:2 body:1 referred:1 en:2 deployed:1 wiley:1 sub:1 duchesnay:1 candidate:5 third:1 formula:1 theorem:5 specific:1 svm:7 gupta:1 evidence:1 workshop:1 vapnik:3 labelling:2 illustrates:1 margin:1 likely:4 springer:2 minimizer:1 chance:2 acm:1 mmilanifard:1 viewed:1 consequently:1 rbf:1 towards:6 seaux:1 lipschitz:2 replace:1 change:37 adverse:1 typical:1 except:1 reducing:3 diff:2 corrected:2 experimental:4 la:1 pedregosa:1 coimbra:1 support:3 incorporate:1 evaluate:1 mcmc:14 tested:1 srivastava:1 |
Scalable Adaptive Stochastic Optimization Using
Random Projections
Gabriel Krummenacher? ?
[email protected]
Yannic Kilcher?
[email protected]
Brian McWilliams? ?
[email protected]
Joachim M. Buhmann?
[email protected]
Nicolai Meinshausen?
[email protected]
Institute for Machine Learning, Department of Computer Science, ETH Zürich, Switzerland
Seminar for Statistics, Department of Mathematics, ETH Zürich, Switzerland
Disney Research, Zürich, Switzerland
Abstract
Adaptive stochastic gradient methods such as AdaGrad have gained popularity in
particular for training deep neural networks. The most commonly used and studied
variant maintains a diagonal matrix approximation to second order information
by accumulating past gradients which are used to tune the step size adaptively. In
certain situations the full-matrix variant of AdaGrad is expected to attain better
performance, however in high dimensions it is computationally impractical. We
present Ada-LR and RadaGrad, two computationally efficient approximations
to full-matrix AdaGrad based on randomized dimensionality reduction. They are
able to capture dependencies between features and achieve similar performance to
full-matrix AdaGrad but at a much smaller computational cost. We show that the
regret of Ada-LR is close to the regret of full-matrix AdaGrad which can have
an up-to exponentially smaller dependence on the dimension than the diagonal
variant. Empirically, we show that Ada-LR and RadaGrad perform similarly to
full-matrix AdaGrad. On the task of training convolutional neural networks as
well as recurrent neural networks, RadaGrad achieves faster convergence than
diagonal AdaGrad.
1
Introduction
Recently, so-called adaptive stochastic optimization algorithms have gained popularity for large-scale
convex and non-convex optimization problems. Among these, A DAG RAD [9] and its variants [21]
have received particular attention and have proven among the most successful algorithms for training
deep networks. Although these problems are inherently highly non-convex, recent work has begun to
explain the success of such algorithms [3].
A DAG RAD adaptively sets the learning rate for each dimension by means of a time-varying proximal
regularizer. The most commonly studied and utilised version considers only a diagonal matrix
proximal term. As such it incurs almost no additional computational cost over standard stochastic
*Authors contributed equally.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
gradient descent (SGD). However, when the data has low effective rank the regret of AdaGrad may
have a much worse dependence on the dimensionality of the problem than its full-matrix variant
(which we refer to as Ada-Full). Such settings are common in high dimensional data where there
are many correlations between features and can also be observed in the convolutional layers of neural
networks. The computational cost of Ada-Full is substantially higher than that of AdaGrad: it
requires computing the inverse square root of the matrix of gradient outer products to evaluate the
proximal term, which grows with the cube of the dimension. As such it is rarely used in practise.
In this work we propose two methods that approximate the proximal term used in Ada-Full,
drastically reducing computational and storage complexity with little adverse effect on optimization
performance. First, in Section 3.1 we develop Ada-LR, a simple approximation using random
projections. This procedure reduces the computational complexity of Ada-Full by a factor of
p but retains similar theoretical guarantees. In Section 3.2 we systematically profile the most
computationally expensive parts of Ada-LR and introduce further randomized approximations,
resulting in a truly scalable algorithm, RadaGrad. In Section 3.3 we outline a simple modification
to RadaGrad, reducing the variance of the stochastic gradients, which greatly improves practical
performance. Finally we perform an extensive comparison between the performance of RadaGrad
and several widely used optimization algorithms on a variety of deep learning tasks. For image
recognition with convolutional networks and language modeling with recurrent neural networks we
find that RadaGrad, and in particular its variance-reduced variant, achieves faster convergence.
1.1
Related work
Motivated by the problem of training deep neural networks, very recently many new adaptive
optimization methods have been proposed. Most computationally efficient among these are first order
methods similar in spirit to A DAG RAD, which suggest alternative normalization factors [21, 28, 6].
Several authors propose efficient stochastic variants of classical second order methods such as L-BFGS [5, 20]. Efficient algorithms exist to update the inverse of the Hessian approximation by
applying the matrix-inversion lemma or directly updating the Hessian-vector product using the
"double-loop" algorithm, but these are not applicable to AdaGrad-style algorithms. In the convex
setting these methods can show great theoretical and practical benefit over first order methods but
have yet to be extensively applied to training deep networks.
On a different note, the growing zoo of variance reduced SGD algorithms [19, 7, 18] has shown
vastly superior performance to A DAG RAD-style methods for standard empirical risk minimization
and convex optimization. Recent work has aimed to move these methods into the non-convex setting
[1]. Notably, [22] combine variance reduction with second order methods.
Most similar to RadaGrad are those which propose factorized approximations of second order
information. Several methods focus on the natural gradient method [2] which leverages second
order information through the Fisher information matrix. [14] approximate the inverse Fisher matrix
using a sparse graphical model. [8] use low-rank approximations whereas [26] propose an efficient
Kronecker product based factorization. Concurrently with this work, [12] propose a randomized
preconditioner for SGD. However, their approach requires access to all of the data at once in order to
compute the preconditioning matrix, which is impractical for training deep networks. [23] propose a
theoretically motivated algorithm similar to Ada-LR and a faster alternative based on Oja's rule to
update the SVD.
Fast random projections. Random projections are low-dimensional embeddings Π : R^p → R^τ
which preserve (up to a small distortion) the geometry of a subspace of vectors. We concentrate on the class of structured random projections, among which the Subsampled Randomized
Fourier Transform (SRFT) has particularly attractive properties [15]. The SRFT consists of a preconditioning
step after which τ columns of the new matrix are subsampled uniformly at random as
Π = √(p/τ) SΘD with the definitions: (i) S ∈ R^{τ×p} is a subsampling matrix. (ii) D ∈ R^{p×p} is a
diagonal matrix whose entries are drawn independently from {−1, 1}. (iii) Θ ∈ R^{p×p} is a unitary
discrete Fourier transform (DFT) matrix. This formulation allows very fast implementations using
the fast Fourier transform (FFT), for example using the popular FFTW package². Applying the FFT
to a p-dimensional vector can be achieved in O(p log τ) time.
2
http://www.fftw.org/
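A compact NumPy sketch of applying Π to a single vector is shown below. This is our illustration, using NumPy's FFT with orthonormal scaling to stand in for the unitary DFT Θ (the paper's implementation would instead call FFTW); the output is complex-valued, and a real transform such as a DCT or Hadamard variant could be substituted.

import numpy as np

def make_srft(p, tau, rng):
    """Return a function computing sqrt(p/tau) * S Theta D x for fixed S, D."""
    d = rng.choice([-1.0, 1.0], size=p)             # diagonal of D
    rows = rng.choice(p, size=tau, replace=False)   # subsampled coordinates (S)
    def project(x):
        z = np.fft.fft(d * x, norm="ortho")         # unitary DFT applied to D x
        return np.sqrt(p / tau) * z[rows]
    return project

# pi = make_srft(p=1024, tau=64, rng=np.random.default_rng(0)); g_small = pi(g)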
Similar structured random projections have gained popularity as a way to speed up [24] and robustify [27] large-scale linear regression and
for distributed estimation [17, 16].
1.2
Problem setting
The problem considered by [9] is online stochastic optimization where the goal is, at each step,
to predict a point θ_t ∈ R^p which achieves low regret with respect to a fixed optimal predictor,
θ_opt, for a sequence of (convex) functions F_t(·). After T rounds, the regret can be defined as
R(T) = Σ_{t=1}^T F_t(θ_t) − Σ_{t=1}^T F_t(θ_opt).
Initially, we will consider functions F_t of the form F_t(θ) := f_t(θ) + φ(θ) where f_t and φ are
convex loss and regularization functions respectively. Throughout, the vector g_t ∈ ∂f_t(θ_t) refers to
a particular subgradient of the loss function. Standard first order methods update θ_t at each step by
moving in the opposite direction of g_t according to a step-size parameter, η. The AdaGrad family
of algorithms [9] instead use an adaptive learning rate which can be different for each feature. This is
controlled using a time-varying proximal term which we briefly review. Defining G_t = Σ_{i=1}^t g_i g_i^⊤
and H_t = δI_p + (G_{t−1} + g_t g_t^⊤)^{1/2}, the Ada-Full proximal term is given by ψ_t(θ) = ½⟨θ, H_t θ⟩.
Clearly when p is large, constructing G_t and finding its root and inverse at each iteration is impractical.
In practice, rather than the full outer product matrix, AdaGrad uses a proximal function consisting
of the diagonal of G_t: ψ_t(θ) = ½⟨θ, (δI_p + diag(G_t)^{1/2})θ⟩. Although the diagonal proximal
term is computationally cheaper, it is unable to capture dependencies between coordinates in the
gradient terms. Despite this, AdaGrad has been found to perform very well empirically. One reason
for this is that modern high-dimensional datasets are typically also very sparse. Under these conditions,
coordinates in the gradient are approximately independent.
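For reference, the diagonal variant reduces to the familiar per-coordinate update (a sketch, with η and δ playing the roles defined above):

import numpy as np

def adagrad_step(theta, g, G_diag, eta=0.1, delta=1e-8):
    """One diagonal-AdaGrad step; G_diag accumulates squared gradients."""
    G_diag += g * g
    theta -= eta * g / (delta + np.sqrt(G_diag))
    return theta, G_diag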
2
Stochastic optimization in high dimensions
A DAG RAD has attractive theoretical and empirical properties and adds essentially no overhead above
a standard first order method such as SGD. It begs the question, what we might hope to gain by
introducing additional computational complexity. In order to motivate our contribution, we first
present an analogue of the discussion in [10] focussing on when data is high-dimensional and dense.
We argue that if the data has low-rank (rather than sparse) structure A DA - FULL can effectively adapt
to the intrinsic dimensionality. We also show in Section 3.1 that A DA - LR has the same property.
First, we review the theoretical properties of AdaGrad algorithms, borrowing the g_{1:T,j} notation [9].
Proposition 1. AdaGrad and Ada-Full achieve the following regret (Corollaries 6 & 11 from
[9]) respectively:

    R_D(T) ≤ 2‖θ_opt‖_∞ Σ_{j=1}^p ‖g_{1:T,j}‖ + δ‖θ_opt‖_1,    R_F(T) ≤ 2‖θ_opt‖ tr(G_T^{1/2}) + δ‖θ_opt‖.    (1)
The major difference between RD (T ) and RF (T ) is the inclusion of the final full-matrix and diagonal
proximal term, respectively. Under a sparse data generating distribution A DAG RAD achieves an
up-to exponential improvement over SGD which is optimal in a minimax sense [10]. While data
sparsity is often observed in practise in high-dimensional datasets (particularly web/text data) many
other problems are dense. Furthermore, in practise applying A DAG RAD to dense data results in a
learning rate which tends to decay too rapidly. It is therefore natural to ask how dense data affects the
performance of A DA - FULL.
For illustration, consider when the data points x_i are sampled i.i.d. from a Gaussian distribution
P_X = N(0, Σ). The resulting variable will clearly be dense. A common feature of high dimensional
data is low effective rank, defined for a matrix Σ as r(Σ) = tr(Σ)/‖Σ‖ ≤ rank(Σ) ≤ p. Low
effective rank implies that r ≪ p and therefore the eigenvalues of the covariance matrix decay
quickly. We will consider distributions parameterised by covariance matrices Σ with eigenvalues
λ_j(Σ) = λ_0 j^{−α} for j = 1, ..., p.
Functions of the form F_t(θ) = F_t(θ^⊤ x_t) have gradients ‖g_t‖ ≤ M‖x_t‖. For example, the least
squares loss F_t(θ^⊤ x_t) = ½(y_t − θ^⊤ x_t)² has gradient g_t = x_t(y_t − x_t^⊤ θ_t) = x_t ε_t, such that
‖ε_t‖ ≤ M. Let us consider the effect of distributions parametrised by Σ on the proximal terms of
full, and diagonal AdaGrad. Plugging X into the proximal terms of (1) and taking expectations
with respect to P_X we obtain for AdaGrad and Ada-Full respectively:
    E[ Σ_{j=1}^p ‖g_{1:T,j}‖ ] ≤ Σ_{j=1}^p √( M² E[ Σ_{t=1}^T x_{t,j}² ] ) ≈ pM√T,    E[ tr( (Σ_{t=1}^T g_t g_t^⊤)^{1/2} ) ] ≤ M √(T λ_0) Σ_{j=1}^p j^{−α/2},    (2)

where the first inequality is from Jensen and the second is from noticing that the sum of T squared
Gaussian random variables is a χ² random variable. We can consider the effect of a fast-decaying
spectrum: for α ≥ 2, Σ_{j=1}^p j^{−α/2} = O(log p) and for α ∈ (1, 2), Σ_{j=1}^p j^{−α/2} = O(p^{1−α/2}).
When the data (and thus the gradients) are dense, yet have low effective rank, Ada-Full is able
to adapt to this structure. On the contrary, although AdaGrad is computationally practical, in the
worst case it may have an exponentially worse dependence on the data dimension (p compared with
log p). In fact, the discrepancy between the regret of Ada-Full and that of AdaGrad is analogous
to the discrepancy between AdaGrad and SGD for sparse data.
Algorithm 1 Ada-LR
Input: η > 0, δ ≥ 0, τ
1: for t = 1 ... T do
2:   Receive g_t = ∇f_t(θ_t).
3:   G_t = G_{t−1} + g_t g_t^⊤
4:   Project: G̃_t = G_t Π
5:   QR = G̃_t  {QR-decomposition}
6:   B = Q^⊤ G_t
7:   U, Σ, V = B  {SVD}
8:   θ_{t+1} = θ_t − η V(Σ^{1/2} + δI)^{−1} V^⊤ g_t
9: end for
Output: θ_T

Algorithm 2 RadaGrad
Input: η > 0, δ ≥ 0, τ
1: for t = 1 ... T do
2:   Receive g_t = ∇f_t(θ_t).
3:   Project: g̃_t = Π g_t
4:   G̃_t = G̃_{t−1} + g_t g̃_t^⊤
5:   Q_t, R_t ← qr_update(Q_{t−1}, R_{t−1}, g_t, g̃_t)
6:   B = G̃_t^⊤ Q_t
7:   U, Σ, W = B  {SVD}
8:   V = Q_t W
9:   γ_t = η(g_t − VV^⊤ g_t)
10:  θ_{t+1} = θ_t − η V(Σ^{1/2} + δI)^{−1} V^⊤ g_t − γ_t
11: end for
Output: θ_T
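A direct NumPy transcription of the inner loop of Algorithm 1 might read as below; this is a sketch in which a Gaussian test matrix stands in for the SRFT for brevity, and numpy.linalg routines replace specialised QR/SVD implementations.

import numpy as np

def ada_lr_step(theta, g, G, Pi, eta=0.1, delta=1e-8):
    """One Ada-LR step. G is p x p, Pi is a p x tau test matrix."""
    G += np.outer(g, g)                    # G_t = G_{t-1} + g g^T
    Q, _ = np.linalg.qr(G @ Pi)            # orthonormal basis for the approximate range
    B = Q.T @ G                            # tau x p sketch of G_t
    _, s, Vt = np.linalg.svd(B, full_matrices=False)
    step = Vt.T @ ((Vt @ g) / (np.sqrt(s) + delta))   # V (S^{1/2} + dI)^{-1} V^T g
    return theta - eta * step, G

# p, tau = 500, 20
# Pi = np.random.default_rng(0).normal(size=(p, tau)) / np.sqrt(tau)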
3
Approximating Ada-Full using random projections
It is clear that in certain regimes, Ada-Full provides stark optimization advantages over AdaGrad
in terms of the dependence on p. However, Ada-Full requires maintaining a p × p matrix, G, and
computing its square root and inverse. Therefore, computationally the dependence of Ada-Full on
p scales with the cube, which is impractical in high dimensions.
A naïve approach would be to simply reduce the dimensionality of the gradient vector, g̃_t = Πg_t ∈ R^τ.
Ada-Full is now directly applicable in this low-dimensional space, returning a solution vector
θ̃_t ∈ R^τ at each iteration. However, for many problems, the original coordinates may have some
intrinsic meaning or, in the case of deep networks, may be parameters in a model. In which case it
is important to return a solution in the original space. Unfortunately in general it is not possible to
recover such a solution from θ̃_t [30].
Instead, we consider a different approach to maintaining and updating an approximation of the
AdaGrad matrix while retaining the original dimensionality of the parameter updates θ and
gradients g.
3.1
Randomized low-rank approximation
As a first approach we approximate the inverse square root of G_t using a fast randomized singular
value decomposition (SVD) [15]. We proceed in two stages: First we compute an approximate basis
Q for the range of G_t. Then we use Q to compute an approximate SVD of G_t by forming the
smaller dimensional matrix B = Q^⊤ G_t and then computing the low-rank SVD UΣV^⊤ = B. This is
faster than computing the SVD of G_t directly if Q has few columns.
An approximate basis Q can be computed efficiently by forming the matrix G̃_t = G_t Π by means
of a structured random projection and then constructing an orthonormal basis for the range of G̃_t
by QR-decomposition. The randomized SVD allows us to quickly compute the square root and
pseudo-inverse of the proximal term H_t by setting H̃_t^{−1} = V(Σ^{1/2} + δI)^{−1}V^⊤. We call this
approximation Ada-LR and describe the steps in full in Algorithm 1.
In practice, using a structured random projection such as the SRFT leads to an approximation of the
original matrix G_t of the following form: ‖G_t − QQ^⊤ G_t‖ ≤ ε, with high probability [15], where
ε depends on τ, the number of columns of Q; on p; and on the τ-th singular value of G_t. Briefly, if the
singular values of G_t decay quickly and τ is chosen appropriately, ε will be small (this is stated more
formally in Proposition 2). We leverage this result to derive the following regret bound for Ada-LR
(see C.1 for proof).
Proposition 2. Let σ_{k+1} be the (k + 1)-st largest singular value of G_t. Setting the projection dimension as
4(√k + √(8 log(kn)))² ≤ τ ≤ p and defining ε = √(1 + 7p/τ) · σ_{k+1}, with failure probability at
most O(k^{−1}), Ada-LR achieves regret

    R_LR(T) ≤ 2‖θ_opt‖ tr(G_T^{1/2}) + (2ε + δ)‖θ_opt‖.

Due to the randomized approximation we incur an additional 2ε‖θ_opt‖ compared with the regret
of Ada-Full (eq. 1). So, under the earlier stated assumption of fast decaying eigenvalues we can
use an identical argument as in eq. (2) to similarly obtain a dimension dependence of O(log p + τ).
Approximating the inverse square root decreases the complexity of each iteration from O(p³)
to O(τp²). We summarize the cost of each step in Algorithm 1 and contrast it with the cost of
Ada-Full in Table A.1 in Section A. Even though Ada-LR removes one factor of p from the runtime
of Ada-Full, it still needs to store the large matrix G_t. This prevents Ada-LR from being a truly
practical algorithm. In the following section we propose a second algorithm which directly stores a
low dimensional approximation to G_t that can be updated cheaply. This allows for an improvement
in runtime to O(τ²p).
3.2
RadaGrad: A faster approximation
From Table A.1, the expensive steps in Algorithm 1 are the update of G_t (line 3), the random
projection (line 4) and the projection onto the approximate range of G_t (line 6). In the following we
propose RadaGrad, an algorithm that reduces the complexity to O(τ²p) by only approximately
solving some of the expensive steps in Ada-LR while maintaining similar performance in practice.
To compute the approximate range Q, we do not need to store the full matrix G_t. Instead we only
require the low dimensional matrix G̃_t = G_t Π ∈ R^{p×τ}. This matrix can be computed iteratively by setting
G̃_t = G̃_{t−1} + g_t (Πg_t)^⊤. This directly reduces the cost of the random projection to
O(p log τ) since we only project the vector g_t instead of the matrix G_t; it also makes the update of
G̃_t faster and saves storage.
We then project G̃_t on the approximate range of G_t and use the SVD to compute the inverse square
root. Since G_t is symmetric, its row and column space are identical, so little information is lost by
projecting G̃_t instead of G_t on the approximate range of G_t.³ The advantage is that we can now
compute the SVD in O(τ³) and the matrix-matrix product on line 6 in O(τ²p). See Algorithm 2
for the full procedure.
The most expensive steps are now the QR decomposition and the matrix multiplications in steps 6
and 8 (see Algorithm 2 and Table A.1). Since at each iteration we only update the matrix G̃_t with
the rank-one matrix g_t g̃_t^⊤, we can use faster rank-1 QR-updates [11] instead of recomputing the
full QR decomposition. To speed up the matrix-matrix product G̃_t^⊤ Q for very large problems (e.g.
backpropagation in convolutional neural networks), a multithreaded BLAS implementation can be
used.
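SciPy exposes exactly this kind of rank-1 update; the bookkeeping for G̃_t and its QR factors could be sketched as follows (assuming scipy.linalg.qr_update, available since SciPy 0.16, and glossing over the initialisation of the factors):

import numpy as np
from scipy.linalg import qr, qr_update

def init_sketch(p, tau):
    G_small = np.zeros((p, tau))
    Q, R = qr(G_small, mode="economic")
    return G_small, Q, R

def sketch_update(G_small, Q, R, g, g_small):
    """Rank-1 update of the sketch and of its QR factors (no full recomputation)."""
    G_small += np.outer(g, g_small)
    Q, R = qr_update(Q, R, g, g_small)
    return G_small, Q, R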
3
This idea is similar to bilinear random projections [13].
3.3
Practical algorithms
Here we outline several simple modifications to the R ADAG RAD algorithm to improve practical
performance.
Corrected update. The random projection step only retains at most τ eigenvalues of G_t. If the
assumption of low effective rank does not hold, important information from the p − τ smallest
eigenvalues might be discarded. RadaGrad therefore makes use of the corrected update

    θ_{t+1} = θ_t − ηV(Σ^{1/2} + δI)^{−1}V^⊤ g_t − γ_t,   where γ_t = η(I − VV^⊤)g_t.

γ_t is the projection of the current gradient onto the space orthogonal to the one captured by the
random projection of G_t. This ensures that important variation in the gradient which is poorly
approximated by the random projection is not completely lost. Consequently, if the data has rank
less than τ, ‖γ_t‖ ≈ 0. This correction only requires quantities which have already been computed but
greatly improves practical performance.
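In code, the correction costs one extra matrix-vector product (a sketch reusing the names from Algorithm 2):

import numpy as np

def radagrad_corrected_step(theta, g, V, s, eta=0.1, delta=1e-8):
    """V: p x tau approximate singular vectors; s: corresponding singular values."""
    Vg = V.T @ g
    precond = V @ (Vg / (np.sqrt(s) + delta))   # V (S^{1/2} + dI)^{-1} V^T g
    gamma = eta * (g - V @ Vg)                  # part of g outside the captured subspace
    return theta - eta * precond - gamma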
Variance reduction. Variance reduction methods based on SVRG [19] obtain lower-variance
gradient estimates by means of computing a "pivot point" over larger batches of data. Recent work
has shown improved theoretical and empirical convergence in non-convex problems [1], in particular
in combination with AdaGrad.
We modify RadaGrad to use the variance reduction scheme of SVRG. The full procedure is given
in Algorithm 3 in Section B. The majority of the algorithm is as RadaGrad except for the outer
loop which computes the pivot point, μ, every epoch, which is used to reduce the variance of the
stochastic gradient (line 4). The important additional parameter is m, the update frequency for μ. As
in [1] we set this to m = 5n. Practically, as is standard practise, we initialise Rada-VR by running
AdaGrad for several epochs.
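The variance-reduced gradient that is fed into the RadaGrad update is the standard SVRG estimate (a sketch; grad(theta, i) denotes the gradient of the i-th example's loss):

def svrg_gradient(grad, theta, theta_pivot, mu, i):
    """SVRG estimate: grad_i(theta) - grad_i(pivot) + full gradient mu at the pivot."""
    return grad(theta, i) - grad(theta_pivot, i) + mu

# Every m steps: theta_pivot = theta and mu = mean of grad(theta_pivot, i)
# over the full dataset (the "pivot point" described above).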
We study the empirical behaviour of A DA - LR, R ADAG RAD and its variance reduced variant in the
next section.
4
Experiments
4.1
Low effective rank data
We compare the performance of our proposed algorithms against both the diagonal and full-matrix
AdaGrad variants in the idealised setting where the data is dense but has low effective rank. We
generate binary classification data with n = 1000 and p = 125. The data is sampled i.i.d. from a
Gaussian distribution N(μ_c, Σ) where Σ has rapidly decaying eigenvalues λ_j(Σ) = λ_0 j^{−α} with
α = 1.3, λ_0 = 30. Each of the two classes has a different mean, μ_c.

Figure 1: Comparison of: (a) loss and (b) the largest eigenvalues (normalised by their sum) of the proximal term on simulated data.
For each algorithm learning rates are tuned using cross validation. The results for 5 epochs are
averaged over 5 runs with different permutations of the data set and instantiations of the random
projection for Ada-LR and RadaGrad. For the random projection we use an oversampling factor,
so Π ∈ R^{(10+τ)×p}, to ensure accurate recovery of the top τ singular values; we then set the values of
Σ_{[τ:p]} to zero [15].
Figure 1a shows the mean loss on the training set. The performance of Ada-LR and RadaGrad
match that of Ada-Full. On the other hand, AdaGrad converges to the optimum much more
slowly. Figure 1b shows the largest eigenvalues (normalized by their sum) of the proximal matrix
for each method at the end of training. The spectrum of G_t decays rapidly, which is matched by
the randomized approximation. This illustrates the dependencies between the coordinates in the
gradients and suggests G_t can be well approximated by a low-dimensional matrix which considers
these dependencies. On the other hand, the spectrum of AdaGrad (equivalent to the diagonal of G_t)
decays much more slowly. The learning rates η chosen by RadaGrad and Ada-Full are roughly
one order of magnitude higher than for AdaGrad.

Figure 2: Comparison of training loss (top row) and test accuracy (bottom row) on (a) MNIST, (b) CIFAR and (c) SVHN.
4.2
Non-convex optimization in neural networks
Here we compare R ADAG RAD and R ADA - VR against A DAG RAD and the combination of
A DAG RAD + SVRG on the task of optimizing several different neural network architectures.
Convolutional Neural Networks. We used modified variants of standard convolutional network
architectures for image classification on the MNIST, CIFAR-10 and SVHN datasets. These consist of
three 5 × 5 convolutional layers generating 32 channels with ReLU non-linearities, each followed by
2 × 2 max-pooling. The final layer was a dense softmax layer and the objective was to minimize the
categorical cross entropy.
We used a batch size of 8 and trained the networks without momentum or weight decay, in order
to eliminate confounding factors. Instead, we used dropout regularization (p = 0.5) in the dense
layers during training. Step sizes were determined by coarsely searching a log scale of possible
values and evaluating performance on a validation set. We found R ADAG RAD to have a higher
impact with convolutional layers than with dense layers, due to the higher correlations between
weights. Therefore, for computational reasons, R ADAG RAD was only applied on the convolutional
layers. The last dense classification layer was trained with A DAG RAD. In this setting A DA - FULL is
computationally infeasible. The number of parameters in the convolutional layers is between 50-80k.
Simply storing the full G matrix using double precision would require more memory than is available
on top-of-the-line GPUs.
The results of our experiments can be seen in Figure 2, where we show the objective value during
training and the test accuracy. We find that both R ADAG RAD variants consistently outperform
both A DAG RAD and the combination of A DAG RAD + SVRG on these tasks. In particular combining
R ADAG RAD with variance reduction results in the largest improvement for training although both
R ADAG RAD variants quickly converge to very similar values for test accuracy.
For all models, the learning rate selected by R ADAG RAD is approximately an order of magnitude
larger than the one selected by A DAG RAD. This suggests that R ADAG RAD can make more aggressive steps than A DAG RAD, which results in the relative success of R ADAG RAD over A DAG RAD,
especially at the beginning of the experiments.
We observed that RadaGrad performed 5-10× slower than AdaGrad per iteration. This can be
attributed to the lack of GPU-optimized SVD and QR routines. These numbers are comparable with
other similar recently proposed techniques [23]. However, due to the faster convergence, we found
that the overall optimization time of RadaGrad was lower than for AdaGrad.
Recurrent Neural Networks. We trained the strongly-typed variant of the long short-term
memory network (T-LSTM, [4]) for language modelling, which consists of the following task:
given a sequence of words from an original text, predict the next word. We used pre-trained
GloVe embedding vectors [29] as input to the T-LSTM layer and a softmax over the vocabulary
(10k words) as output. The loss is the mean categorical cross-entropy. The memory size of
the T-LSTM units was set to 256. We trained and evaluated our network on the Penn Treebank
dataset [25]. We subsampled strings of length 20 from the dataset and asked the network to
predict each word in the string, given the words up to that point. Learning rates were selected by
searching over a log scale of possible values and measuring performance on a validation set.

Figure 3: Comparison of training loss (left) and test loss (right) on the language modelling task with the T-LSTM.
We compared RadaGrad with AdaGrad without variance reduction. The results of this experiment can be seen in Figure 3. During training, we found that RadaGrad consistently outperforms
AdaGrad: RadaGrad is able both to reduce the training loss more quickly and to reach a smaller
value (5.62 × 10⁻⁴ vs. 1.52 × 10⁻³, a 2.7× reduction in loss). Again, we found that the selected
learning rate is an order of magnitude higher for RadaGrad than for AdaGrad. RadaGrad is
able to exploit the fact that T-LSTMs perform type-preserving update steps which should preserve
any low-rank structure present in the weight matrices. The relative improvement of RadaGrad
over AdaGrad in training is also reflected in the test loss (1.15 × 10⁻² vs. 3.23 × 10⁻², a 2.8×
reduction).
5
Discussion
We have presented Ada-LR and RadaGrad, which approximate the full proximal term of AdaGrad
using fast, structured random projections. Ada-LR enjoys similar regret to Ada-Full and both
methods achieve similar empirical performance at a fraction of the computational cost. Importantly,
RadaGrad can easily be modified to make use of standard improvements such as variance reduction.
Using variance reduction in combination, in particular, has stark benefits for non-convex optimization
in convolutional and recurrent neural networks. We observe a marked improvement over widely-used
techniques such as AdaGrad and SVRG, the combination of which has recently been proven to be
an excellent choice for non-convex optimization [1].
Furthermore, we tried to incorporate exponential forgetting schemes similar to RMSProp and Adam
into the RadaGrad framework but found that these methods degraded performance. A downside of
such methods is that they require additional parameters to control the rate of forgetting.
Optimization for deep networks has understandably been a very active research area. Recent work has
concentrated on either improving estimates of second order information or investigating the effect of
variance reduction on the gradient estimates. It is clear from our experimental results that a thorough
study of the combination provides an important avenue for further investigation, particularly where
parts of the underlying model might have low effective rank.
Acknowledgements. We are grateful to David Balduzzi, Christina Heinze-Deml, Martin Jaggi,
Aurelien Lucchi, Nishant Mehta and Cheng Soon Ong for valuable discussions and suggestions.
References
[1] Z. Allen-Zhu and E. Hazan. Variance reduction for faster non-convex optimization. In Proceedings of the 33rd International Conference on Machine Learning, 2016.
[2] S.-I. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
[3] D. Balduzzi. Deep online convex optimization with gated games. arXiv preprint arXiv:1604.01952, 2016.
[4] D. Balduzzi and M. Ghifary. Strongly-typed recurrent neural networks. In Proceedings of the 33rd International Conference on Machine Learning, 2016.
[5] R. H. Byrd, S. Hansen, J. Nocedal, and Y. Singer. A stochastic quasi-Newton method for large-scale optimization. arXiv preprint arXiv:1401.7020, 2014.
[6] Y. N. Dauphin, H. de Vries, J. Chung, and Y. Bengio. RMSProp and equilibrated adaptive learning rates for non-convex optimization. arXiv preprint arXiv:1502.04390, 2015.
[7] A. Defazio, F. Bach, and S. Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems, 2014.
[8] G. Desjardins, K. Simonyan, R. Pascanu, et al. Natural neural networks. In Advances in Neural Information Processing Systems, pages 2062–2070, 2015.
[9] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011.
[10] J. C. Duchi, M. I. Jordan, and H. B. McMahan. Estimation, optimization, and parallelism when data is sparse. In Advances in Neural Information Processing Systems, 2013.
[11] G. H. Golub and C. F. Van Loan. Matrix Computations, volume 3. JHU Press, 2012.
[12] A. Gonen and S. Shalev-Shwartz. Faster SGD using sketched conditioning. arXiv preprint arXiv:1506.02649, 2015.
[13] Y. Gong, S. Kumar, H. Rowley, and S. Lazebnik. Learning binary codes for high-dimensional data using bilinear projections. In Proceedings of CVPR, pages 484–491, 2013.
[14] R. Grosse and R. Salakhutdinov. Scaling up natural gradient by sparsely factorizing the inverse Fisher matrix. In Proceedings of the 32nd International Conference on Machine Learning, pages 2304–2313, 2015.
[15] N. Halko, P. G. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217–288, 2011.
[16] C. Heinze, B. McWilliams, and N. Meinshausen. Dual-loco: Distributing statistical estimation using random projections. In Proceedings of AISTATS, 2016.
[17] C. Heinze, B. McWilliams, N. Meinshausen, and G. Krummenacher. Loco: Distributing ridge regression with random projections. arXiv preprint arXiv:1406.3469, 2014.
[18] T. Hofmann, A. Lucchi, S. Lacoste-Julien, and B. McWilliams. Variance reduced stochastic gradient descent with neighbors. In Advances in Neural Information Processing Systems, 2015.
[19] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315–323, 2013.
[20] N. S. Keskar and A. S. Berahas. adaQN: An adaptive quasi-Newton algorithm for training RNNs. Nov. 2015.
[21] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[22] A. Lucchi, B. McWilliams, and T. Hofmann. A variance reduced stochastic Newton method. arXiv preprint arXiv:1503.08316, 2015.
[23] H. Luo, A. Agarwal, N. Cesa-Bianchi, and J. Langford. Efficient second order online learning via sketching. arXiv preprint arXiv:1602.02202, 2016.
[24] M. W. Mahoney. Randomized algorithms for matrices and data. Apr. 2011. arXiv:1104.5557v3 [cs.DS].
[25] M. P. Marcus, M. A. Marcinkiewicz, and B. Santorini. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330, 1993.
[26] J. Martens and R. Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In Proceedings of the 32nd International Conference on Machine Learning, 2015.
[27] B. McWilliams, G. Krummenacher, M. Lucic, and J. M. Buhmann. Fast and robust least squares estimation in corrupted linear models. In Advances in Neural Information Processing Systems, volume 27, 2014.
[28] B. Neyshabur, R. R. Salakhutdinov, and N. Srebro. Path-SGD: Path-normalized optimization in deep neural networks. In Advances in Neural Information Processing Systems, pages 2413–2421, 2015.
[29] J. Pennington, R. Socher, and C. D. Manning. GloVe: Global vectors for word representation. In EMNLP, volume 14, pages 1532–1543, 2014.
[30] L. Zhang, M. Mahdavi, R. Jin, T. Yang, and S. Zhu. Recovering optimal solution by dual random projection. arXiv preprint arXiv:1211.3046, 2012.
5,586 | 6,055 | The Forget-me-not Process
Kieran Milan? , Joel Veness? , James Kirkpatrick, Demis Hassabis
Google DeepMind
{kmilan,aixi,kirkpatrick,demishassabis}@google.com
Anna Koop, Michael Bowling
University of Alberta
{anna,bowling}@cs.ualberta.ca
Abstract
We introduce the Forget-me-not Process, an efficient, non-parametric metaalgorithm for online probabilistic sequence prediction for piecewise stationary,
repeating sources. Our method works by taking a Bayesian approach to partitioning a stream of data into postulated task-specific segments, while simultaneously
building a model for each task. We provide regret guarantees with respect to piecewise stationary data sources under the logarithmic loss, and validate the method
empirically across a range of sequence prediction and task identification problems.
1
Introduction
Modeling non-stationary temporal data sources is a fundamental problem in signal processing,
statistical data compression, quantitative finance and model-based reinforcement learning. One
widely-adopted and successful approach has been to design meta-algorithms that automatically
generalize existing stationary learning algorithms to various non-stationary settings. In this paper
we introduce the Forget-me-not Process, a probabilistic meta-algorithm that provides the ability to
model the class of memory bounded, piecewise-repeating sources given an arbitrary, probabilistic
memory bounded stationary model.
The most well studied class of probabilistic meta-algorithms are those for piecewise stationary
sources, which model data sequences with abruptly changing statistics. Almost all meta-algorithms for
abruptly changing sources work by performing Bayesian model averaging over a class of hypothesized
temporal partitions. To the best of our knowledge, the earliest demonstration of this fundamental
technique was [21], for the purpose of data compression; closely related techniques have gained
popularity within the machine learning community for change point detection [1] and have been
proposed by neuroscientists as a mechanism by which humans deal with open-ended environments
composed of multiple distinct tasks [4?6]. One of the reasons for the popularity of this approach is
that the temporal structure can be exploited to make exact Bayesian inference tractable via dynamic
programming; in particular inference over all possible temporal partitions of n data points results in
an algorithm of O(n2 ) time complexity and O(n) space complexity [21, 1]. Many variants have been
proposed in the literature [20, 11, 10, 17], which trade off predictive accuracy for improved time and
space complexity; in particular the Partition Tree Weighting meta-algorithm [17] has O(n log n) time
and O(log n) space complexity, and has been shown empirically to exhibit superior performance
versus other low-complexity alternatives on piecewise stationary sources.
A key limitation of these aforementioned techniques is that they can perform poorly when there
exist multiple segments of data that are similarly distributed. For example, consider data generated
according to the schedule depicted in Figure 1. For all these methods, once a change-point occurs, the
base (stationary) model is invoked from scratch, even if the task repeats, which is clearly undesirable
* indicates joint first authorship.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
[Figure: task index (1 to 3) plotted against time (1 to 160) for the generating schedule.]
Figure 1: An example task segmentation.
in many situations of interest. Our main contribution in this paper is to introduce the Forget-me-not
Process, which has the ability to avoid having to relearn repeated tasks, while still maintaining
essentially the same theoretical performance guarantees as Partition Tree Weighting on piecewise
stationary sources.
2 Preliminaries
We now introduce some notation and necessary background material.
Sequential Probabilistic Data Generators. We begin with some terminology for sequential, probabilistic data generating sources. An alphabet is a finite non-empty set of symbols, which we
will denote by X. A string x1 x2 . . . xn ∈ X^n of length n is denoted by x1:n. The prefix x1:j of x1:n, where j ≤ n, is denoted by x≤j or x<j+1. The empty string is denoted by ϵ, and we define X* := {ϵ} ∪ ⋃_{i=1}^∞ X^i. Our notation also generalizes to out-of-bounds indices; that is, given a string x1:n and an integer m > n, we define x1:m := x1:n and xm:n := ϵ. The concatenation of two strings s, r ∈ X* is denoted by sr. Unless otherwise specified, base 2 is assumed for all logarithms.
A sequential probabilistic data generating source ρ is defined by a sequence of probability mass functions ρn : X^n → [0, 1], for all n ∈ N, satisfying the constraint that ρn(x1:n) = Σ_{y∈X} ρ_{n+1}(x1:n y) for all x1:n ∈ X^n, with base case ρ0(ϵ) = 1. From here onwards, whenever the meaning is clear from the argument to ρ, the subscripts on ρ will be dropped. Under this definition, the conditional probability of a symbol xn given previous data x<n is defined as ρ(xn | x<n) := ρ(x1:n)/ρ(x<n) provided ρ(x<n) > 0, with the familiar chain rule ρ(xi:j | x<i) = Π_{k=i}^j ρ(xk | x<k) applying as usual. Notice too that a new sequential probabilistic data generating source ν can be obtained from an existing source ρ by conditioning on a fixed sequence of input data. More explicitly, given a string s ∈ X*, one can define ν(x1:n) := ρ(x1:n | s) for all n; we will use the notation ρ[s] to compactly denote such a derived probabilistic data generating source.
Temporal Partitions, Piecewise Sources and Piecewise-repeating Sources. We now introduce some notation to formally describe temporal partitions and piecewise sources. A segment is a tuple (a, b) ∈ N × N with a ≤ b. A segment (a, b) is said to overlap with another segment (c, d) if there exists an i ∈ N such that a ≤ i ≤ b and c ≤ i ≤ d. A temporal partition P of a set of time indices S = {1, 2, . . . , n}, for some n ∈ N, is a set of non-overlapping segments such that for all x ∈ S, there exists a segment (a, b) ∈ P such that a ≤ x ≤ b. We also use the overloaded notation P(a, b) := {(c, d) ∈ P : a ≤ c ≤ d ≤ b} to denote the set of segments falling inclusively within the range (a, b). Finally, Tn will be used to denote the set of all possible temporal partitions of {1, 2, . . . , n}.
We can now define a piecewise data generating source μ_P^h in terms of a partition P = {(a1, b1), (a2, b2), . . . } and a set of probabilistic data generating sources {μ1, μ2, . . . }, such that for all n ∈ N, for all x1:n ∈ X^n,
    μ_P^h(x1:n) := Π_{(a,b)∈Pn} μ_{h(a)}(x_{a:b}),
where Pn := {(a, b) ∈ P : a ≤ n} and h : N → N is a task assignment function that maps segment beginnings to task identifiers.
A piecewise repeating data generating source is a special case of a piecewise data generating source that satisfies the additional constraint that there exist a, c ∈ {x : (x, y) ∈ P} such that a ≠ c and h(a) = h(c).
In terms of modeling a piecewise repeating source, there are three key unknowns: the partition
which defines the location of the change points, the task assignment function, and the model for each
individual task.
Bayesian Sequence Prediction. A fundamental technique for constructing algorithms that work
well under the logarithmic loss is Bayesian model averaging. We now provide a short overview
sufficient for the purposes of this paper; for more detail, we recommend the work of [12] and [14].
Given a non-empty discrete set of probabilistic data generating sources M := {ρ1, ρ2, . . . } and a prior weight w0^ρ > 0 for each ρ ∈ M such that Σ_{ρ∈M} w0^ρ = 1, the Bayesian mixture predictor ξ is defined in terms of its marginal by ξ(x1:n) := Σ_{ρ∈M} w0^ρ ρ(x1:n). The predictive probability is thus given by the ratio of the marginals ξ(xn | x<n) = ξ(x1:n) / ξ(x<n). The predictive probability can also be expressed in terms of a convex combination of conditional model predictions, with each model weighted by its posterior probability. More explicitly,
    ξ(xn | x<n) = Σ_{ρ∈M} w_{n−1}^ρ ρ(xn | x<n),   where   w_{n−1}^ρ := w0^ρ ρ(x<n) / Σ_{ν∈M} w0^ν ν(x<n).
A fundamental property of Bayesian mixtures is that if there exists a model ρ* ∈ M that predicts well, then ξ will predict well, since the cumulative loss satisfies
    −log ξ(x1:n) = −log Σ_{ρ∈M} w0^ρ ρ(x1:n) ≤ −log w0^{ρ*} − log ρ*(x1:n).    (1)
Equation 1 implies that a constant regret is suffered when using ξ in place of the best (in hindsight) model within M.
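To make the mixture update concrete, the following minimal Python sketch (ours, not from the paper) maintains the posterior weights w_{n−1}^ρ in log space and accumulates the cumulative loss −log2 ξ(x1:n). The model interface, a callable returning a conditional probability given the history, is an assumption made purely for illustration.

    import math

    def mixture_log_loss(models, priors, xs):
        # models: callables m(x, history) -> conditional probability rho(x | history)
        # priors: prior weights w_0^rho, summing to one
        log_w = [math.log2(w) for w in priors]     # log posterior weights (start at prior)
        loss, history = 0.0, []
        for x in xs:
            terms = [lw + math.log2(m(x, history)) for lw, m in zip(log_w, models)]
            mx = max(terms)                        # log-sum-exp for the predictive probability
            log_pred = mx + math.log2(sum(2 ** (t - mx) for t in terms))
            loss -= log_pred                       # cumulative loss -log2 xi(x_{1:n})
            log_w = [t - log_pred for t in terms]  # Bayes posterior update
            history.append(x)
        return loss

    # Two Bernoulli experts with a uniform prior on a short binary sequence:
    experts = [lambda x, h, p=p: p if x == 1 else 1 - p for p in (0.2, 0.8)]
    print(mixture_log_loss(experts, [0.5, 0.5], [1, 1, 0, 1, 1, 1]))

By Equation 1, the printed value can exceed the loss of the better expert by at most −log2(1/2) = 1 bit.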
3 The Forget-me-not Process
We now introduce the Forget-me-not Process (FMN), a meta-algorithm designed to better model
piecewise-repeating data generating sources. As FMN is a meta-algorithm, it takes as input a base
model, which we will hereby denote as ρ. At a high level, the main idea is to extend the Partition
Tree Weighting [17] algorithm to incorporate a memory of previous model states, which is used
to improve performance on repeated tasks. More concretely, our construction involves defining a
two-level hierarchical process, with each level performing exact Bayesian model averaging. The first
level will perform model averaging over a set of postulated segmentations of time, using the Partition
Tree Weighting technique. The second level will perform model averaging over a growing set of
stored base model states. We describe each level in turn before describing how to combine these
ideas into the Forget-me-not Process.
Averaging over Temporal Segmentations. We now define the class of binary temporal partitions,
which will correspond to the set of temporal partitions we perform model averaging over in the first
level of our hierarchical model. Although more restrictive than the class of all possible temporal
partitions, binary temporal partitions possess important computational advantages.
Definition 1. Given a depth parameter d ∈ N and a time t ∈ N, the set Cd(t) of all binary temporal partitions from t is recursively defined by
    Cd(t) := { {(t, t + 2^d − 1)} } ∪ { S1 ∪ S2 : S1 ∈ C_{d−1}(t), S2 ∈ C_{d−1}(t + 2^{d−1}) },
with C0(t) := { {(t, t)} }. We also define Cd := Cd(1).
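The recursion in Definition 1 can be executed directly; the short Python sketch below (ours) enumerates C_d(t) as lists of (a, b) segments and reproduces the five members of C_2 shown in Figure 2.

    def binary_partitions(d, t=1):
        # C_0(t) contains only the singleton segment (t, t)
        if d == 0:
            return [[(t, t)]]
        whole = [[(t, t + 2 ** d - 1)]]            # keep the whole range as one segment
        half = 2 ** (d - 1)
        split = [left + right                       # or split it in half and recurse
                 for left in binary_partitions(d - 1, t)
                 for right in binary_partitions(d - 1, t + half)]
        return whole + split

    for p in binary_partitions(2):
        print(p)   # the 5 partitions: [(1, 4)], [(1, 2), (3, 4)], ..., [(1, 1), (2, 2), (3, 3), (4, 4)]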
Each binary temporal partition can be naturally mapped onto a tree structure known as a partition tree;
for example, Figure 2 shows the collection of partition trees represented by C2 ; the leaves of each
tree correspond to the segments within each particular partition. There are two important properties
of binary temporal partition trees. The first is that there always exists a partition P′ ∈ Cd which is close to any temporal partition P, in the sense that P′ always starts a new segment whenever P does, and |P′| ≤ |P|(⌈log n⌉ + 1) [17, Lemma 2]. The second is that exact Bayesian model averaging can
be performed efficiently with an appropriate choice of prior. This is somewhat surprising, since the
[Figure: the five partition trees of C2, whose leaf segments are {(1,4)}; {(1,2),(3,4)}; {(1,2),(3,3),(4,4)}; {(1,1),(2,2),(3,4)}; and {(1,1),(2,2),(3,3),(4,4)}.]
Figure 2: The set C2 represented as a collection of temporal partition trees.
number of binary temporal partitions |Cd| grows double exponentially in d. The trick is to define, given a data sequence x1:n, the Bayesian mixture
    PTW_d(x1:n) := Σ_{P∈Cd} 2^(−Γ_d(P)) Π_{(a,b)∈P} ρ(x_{a:b}),    (2)
where Γ_d(P) gives the number of nodes in the partition tree associated with P that have a depth less than d, and ρ denotes the base model to the PTW process. This prior weighting is identical to how the Context Tree Weighting method [22] weighs over tree structures, and is an application of the general technique used by the class of Tree Experts described in Section 5.3 of [3]. It is a valid prior, as one can show Σ_{P∈Cd} 2^(−Γ_d(P)) = 1 for all d ∈ N. A direct computation of Equation 2 is clearly intractable, but we can make use of the tree-structured prior to recursively decompose Equation 2 using the following lemma.
Lemma 1 (Veness et al. [17]). For any depth d ∈ N, for all x1:n ∈ X^n satisfying n ≤ 2^d,
    PTW_d(x1:n) = (1/2) ρ(x1:n) + (1/2) PTW_{d−1}(x1:k) PTW_{d−1}(x_{k+1:n}),
where k = 2^{d−1}.
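Lemma 1 translates directly into a recursive (and, as written, exponential-time) reference implementation, useful for checking a fast dynamic-programming version on tiny inputs. The sketch below is ours; rho can be any base-model marginal, here a KT estimator [13] for binary data.

    def ptw(d, xs, rho):
        # Computes PTW_d(xs) via Lemma 1; assumes len(xs) <= 2**d.
        if d == 0:
            return rho(xs)
        k = 2 ** (d - 1)
        if len(xs) <= k:            # right half still empty, so its factor equals 1
            return 0.5 * rho(xs) + 0.5 * ptw(d - 1, xs, rho)
        return 0.5 * rho(xs) + 0.5 * ptw(d - 1, xs[:k], rho) * ptw(d - 1, xs[k:], rho)

    def kt(xs):
        # KT (Beta(1/2, 1/2)) marginal probability of a binary string.
        p, a, b = 1.0, 0.5, 0.5
        for x in xs:
            p *= (a if x == 1 else b) / (a + b)
            a, b = (a + 1, b) if x == 1 else (a, b + 1)
        return p

    print(ptw(3, [0, 0, 0, 0, 1, 1, 1, 1], kt))   # larger than kt(...) on this abruptly changing string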
Averaging over Previous Model States given a Known Temporal Partition. Given a data sequence x1:n ∈ X^n, a base model ρ and a temporal partition P := {(a1, b1), . . . , (am, bm)} satisfying P ∈ Tn, consider a sequential probabilistic model defined by
    ν_P(x1:n) := Π_{i=1}^{|P|} [ Σ_{ρ′∈Mi} (1/|Mi|) ρ′(x_{a_i:b_i}) ],
where M1 := {ρ} and Mi := M_{i−1} ∪ {ρ′[x_{a_{i−1}:b_{i−1}}]}_{ρ′∈M_{i−1}} for 1 < i ≤ |P|.
Here, whenever the ith segment of data is seen, each model in Mi is given the option of either ignoring or adapting to this segment's data, which implies |Mi| = 2|M_{i−1}|. Using an argument similar to Equation 1, and letting x^{h(t)}_{<t} denote the subsequence of x_{<t} generated by μ_{h(t)}, we can see that the cumulative loss when the data is generated by a piecewise-repeating source μ_P^h is bounded by
    −log ν_P(x1:n) = −log Π_{i=1}^{|P|} [ Σ_{ρ′∈Mi} (1/|Mi|) ρ′(x_{a_i:b_i}) ] = −log Π_{i=1}^{|P|} [ Σ_{ρ′∈Mi} 2^(−i+1) ρ′(x_{a_i:b_i}) ]
    ≤ −log Π_{i=1}^{|P|} 2^(−i+1) ρ(x_{a_i:b_i} | x^{h(a_i)}_{<a_i}) = (|P|² − |P|)/2 − log Π_{i=1}^{|P|} ρ(x_{a_i:b_i} | x^{h(a_i)}_{<a_i}).    (3)
Roughly speaking, this bound implies that ν_P(x1:n) will perform almost as well as if we knew h(·) in advance, provided the number of segments grows as o(√n). The two main drawbacks of this approach are that: a) computing ν_P(x1:n) takes time exponential in |P|; and b) a regret of (|P|² − |P|)/2 seems overly large in cases where the source isn't repeating. These problems can be rectified with the following modified process,
    ν_P(x1:n) := Π_{i=1}^{|P|} [ (1/2) ρ(x_{a_i:b_i}) + (1/2) Σ_{ρ′∈Mi\{ρ}} (1/(|Mi| − 1)) ρ′(x_{a_i:b_i}) ],    (4)
where now M1 := {ρ} and Mi := M_{i−1} ∪ { ρ*[x_{a_{i−1}:b_{i−1}}] : ρ* = argmax_{ρ′∈M_{i−1}} ρ′(x_{a_{i−1}:b_{i−1}}) }.
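A compact sketch (ours) of the greedy pool update behind Equation 4: after each segment, only the best-scoring model's adaptation is added, so the pool grows by one model per segment. Models are represented abstractly through user-supplied score and adapt functions; the Bernoulli pseudo-count demo is purely illustrative.

    import math

    def update_pool(pool, seg, score, adapt):
        best = max(pool, key=lambda m: score(m, seg))   # greedy argmax of Equation 4
        return pool + [adapt(best, seg)]

    # Toy demo: a "model" is a (heads, tails) pseudo-count pair (a KT-style predictor).
    def score(m, seg):
        h, t, s = m[0], m[1], 0.0
        for x in seg:
            s += math.log((h if x == 1 else t) / (h + t))
            h, t = (h + 1, t) if x == 1 else (h, t + 1)
        return s

    def adapt(m, seg):
        ones = sum(seg)
        return (m[0] + ones, m[1] + len(seg) - ones)

    pool = [(0.5, 0.5)]
    pool = update_pool(pool, [1, 1, 1, 0], score, adapt)
    print(pool)   # [(0.5, 0.5), (3.5, 1.5)]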
[Figure: a depth-3 binary partition tree over times 1 to 8; open segments at time 8 are drawn dashed, closed segments solid, and filled circles mark the times at which models are added to the pool.]
Figure 3: A graphical depiction of the Forget Me Not process (d = 3) after processing 7 symbols.
With this modified definition of Mi, where the argmax implements a greedy approximation (ties are broken arbitrarily), |Mi| now grows linearly with the number of segments, and thus the overall time to compute ν_P(x1:n) is O(|P| n), assuming the base model runs in linear time. Although heuristic, this approximation is justified provided that ρ[ϵ] assigns the highest probability out of any model in Mi whenever a task is seen for the first time, and that a model trained on k segments for a given task is always better than a model trained on fewer than k segments for the same task (or a model trained on any number of other tasks). Furthermore, using a similar dominance argument to Equations 1 and 3, the cost of not knowing h(·) with respect to piecewise non-repeating sources is now |P| vs O(|P|²).
Averaging over Binary Temporal Segmentations and Previous Model States. This section describes how to hierarchically combine the PTW and ν_P models to give rise to the Forget Me Not process. Our goal will be to perform model averaging over both binary temporal segmentations and previous model states. This can be achieved by instantiating the PTW meta-algorithm with a sequence of time-dependent base models similar in spirit to ν_P.
Intuitively, this requires modifying the definition of Mi so that the best performing model state, for
any completed segment within the PTW process, is available for future predictions. For example,
Figure 3 provides a graphical depiction of our desired FMN3 process after processing 7 symbols.
The dashed segments ending in unfilled circles describe the segments whose set of base models
are contributing to the predictive distribution at time 8. The solid-line segments denote previously
completed segments for which we want the best performing model state to be remembered and made
available to segments starting at later times. A solid circle indicates a time where a model is added to
the pool of available models; note that now multiple models can be added at any particular time.
We now formalize the above intuitions. Let Bt := {(a, b) ∈ Cd : b = t} be the set of segments ending at time t ≤ 2^d. Given an arbitrary string s ∈ X*, our desired sequence of base models is given by
    ρ_t(s) := (1/2) ρ(s) + (1/2) Σ_{ρ′∈Mt\{ρ}} (1/(|Mt| − 1)) ρ′(s),    (5)
with the model pool defined by M1 := {ρ} and
    Mt := M_{t−1} ∪ ⋃_{(a,b)∈B_{t−1}} { ρ*[s_{a:b}] : ρ* = argmax_{ρ′∈Ma} ρ′(s_{a:b}) }    for t > 1.    (6)
Finally, substituting Equation 5 in for the base model of PTW yields our Forget Me Not process
    FMN_d(x1:n) := Σ_{P∈Cd} 2^(−Γ_d(P)) Π_{(a,b)∈Pn} ρ_a(x_{a:b}).    (7)
Algorithm. Algorithm 1 describes how to compute the marginal probability FMN_d(x1:n). The r_j variables store the segment start times for the unclosed segments at depth j; the b_j variables implement a dynamic-programming caching mechanism to speed up the PTW computation, as explained in Section 3.3 of [17]; the w_j variables hold intermediate results needed to apply Lemma 1. The Most Significant Changed Bit routine MSCB_d(t), invoked at line 4, is used to determine the range of segments ending at the current time t, and is defined for t > 1 as the number of bits to the left of the most significant location at which the d-bit binary representations of t − 1 and t − 2 differ, with MSCB_d(1) := 0 for all d ∈ N. For example, in Figure 3, at t = 5, before processing x5, we need to deal with the segments (1, 4), (3, 4), and (4, 4) finishing.
Algorithm 1 FORGET-ME-NOT: FMN_d(x1:n)
Require: A depth parameter d ∈ N, and a base probabilistic model ρ
Require: A data sequence x1:n ∈ X^n satisfying n ≤ 2^d
 1: b_j ← 1, w_j ← 1, r_j ← 1, for 0 ≤ j ≤ d
 2: M ← {ρ}
 3: for t = 1 to n do
 4:     i ← MSCB_d(t)
 5:     b_i ← w_{i+1}
 6:     for j = i + 1 to d do
 7:         M ← UPDATEMODELPOOL(ρ_{r_j}, x_{r_j:t−1})
 8:         w_j ← 1, b_j ← 1, r_j ← t
 9:     end for
10:     w_d ← ρ_{r_d}(x_{r_d:t})
11:     for i = d − 1 to 0 do
12:         w_i ← (1/2) ρ_{r_i}(x_{r_i:t}) + (1/2) w_{i+1} b_i
13:     end for
14: end for
15: return w_0
The method UPDATEMODELPOOL applies Equation 6 to remember the best performing model in the mixture ρ_{r_j} on the completed segment (r_j, t − 1). Lines 11 to 13 invoke Lemma 1 from the bottom up, to compute the desired marginal probability FMN_d(x1:n) = w_0.
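The MSCB_d routine reduces to a pair of bit operations; the following Python sketch (ours) matches the definition above.

    def mscb(d, t):
        # Number of bits to the left of the most significant changed bit
        # between the d-bit representations of t-1 and t-2; 0 when t = 1.
        if t == 1:
            return 0
        changed = (t - 1) ^ (t - 2)
        return d - changed.bit_length()

    print([mscb(3, t) for t in range(1, 9)])   # [0, 2, 1, 2, 0, 2, 1, 2]

For example, mscb(3, 5) = 0, so at t = 5 the inner loop on lines 6 to 9 closes the three segments at depths 1, 2 and 3, exactly the (1, 4), (3, 4), (4, 4) segments noted above.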
(Space and Time Overhead) Under the assumption that each base model conditional probability can be obtained in O(1) time, the time complexity to process a sequence of length n is O(nk log n), where k is an upper bound on |M|. The n log n factor is due to the number of iterations in the inner loops on Lines 6 to 9 and Lines 11 to 13 being upper bounded by d + 1. The k factor is due to the cost of maintaining the ρ_t terms for the segments which have not yet closed. An upper bound on k can be obtained from inspection of Figure 3, where if we set n = 2^d, we have that the number of completed segments is given by Σ_{i=0}^d 2^i = 2^{d+1} − 1 = 2n − 1 = O(n); thus the time complexity is O(n² log n). The space overhead is O(k log n), due to the O(log n) instances of Equation 5.
(Complexity Reducing Operations) For many applications of interest, a running time of O(n² log n) is unacceptable. A workaround is to fix k in advance and use a model replacement strategy that enforces |M| ≤ k via a modified UPDATEMODELPOOL routine; this reduces the time complexity to O(nk log n). We found the following heuristic scheme to be effective in practice: when a segment (a, b) closes, the best performing model ρ* ∈ Ma for this segment is identified. Now, 1) letting y* denote a uniform sub-sample of the data used to train ρ*, if log ρ*[x_{a:b}](y*) − log ρ*(y*) > α then replace ρ* with ρ*[x_{a:b}] in M; else 2) if a uniform Bayes mixture ξ over M assigns sufficiently higher probability to a uniform sub-sample s of x_{a:b} than ρ* does, that is, log ξ(s) − log ρ*(s) > β, then leave M unchanged; else 3) add ρ*[x_{a:b}] to M; if |M| > k, remove the oldest model in M. This requires choosing hyperparameters α, β ∈ R and appropriate constant sub-sample sizes. Step 1 avoids adding multiple models for the same task; Step 2 avoids adding a redundant model to the model pool. Note that the per-model and per-segment sub-samples can be efficiently maintained online using reservoir sampling [19]. As a further complexity reducing operation, one can skip calls to UPDATEMODELPOOL unless (b − a + 1) ≥ 2^c for some c < d.
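The sub-samples above can be kept with the classic reservoir sampling algorithm [19]; a standard sketch (ours, not the authors' code) follows.

    import random

    class Reservoir:
        # Maintains a uniform sample of up to `capacity` items from a stream in O(capacity) memory.
        def __init__(self, capacity, seed=0):
            self.capacity, self.seen, self.items = capacity, 0, []
            self.rng = random.Random(seed)

        def add(self, x):
            self.seen += 1
            if len(self.items) < self.capacity:
                self.items.append(x)
            else:
                j = self.rng.randrange(self.seen)   # uniform index in [0, seen)
                if j < self.capacity:
                    self.items[j] = x

    r = Reservoir(10)
    for x in range(10000):
        r.add(x)
    print(r.items)   # a uniform size-10 sub-sample of the stream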
(Strongly Online Prediction) A strongly online FMN process, where one does not need to fix a d in advance such that n ≤ 2^d, can be obtained by defining FMN(x1:n) := Π_{i=1}^n FMN_{⌈log i⌉}(xi | x<i), and efficiently computed in the same manner as for PTW, with a similar loss bound −log FMN(x1:n) ≤ −log FMN_d(x1:n) + ⌈log n⌉(log 3 − 1) following trivially from Theorem 2 in [17].
Theoretical properties. We now show that the Forget Me Not process is competitive with any
piecewise stationary source, provided the base model enjoys sufficiently strong regret guarantees on
non-piecewise sources. Note that provided c = 0, Proposition 1 also holds when the complexity
reducing operations are used. While the following regret bound is of the same asymptotic order as
PTW for piecewise stationary sources, note that it is no tighter for sources that repeat; we will later
explore the advantage of the FMN process on repeating sources experimentally.
Proposition 1. For all n ∈ N, using FMN with d = ⌈log n⌉ and a base model ρ whose redundancy is upper bounded by a non-negative, monotonically non-decreasing, concave function g : N → R with g(0) = 0 on some class G of bounded memory data generating sources, the regret
    log [ μ_P^h(x1:n) / FMN_d(x1:n) ] ≤ 2|Pn|(⌈log n⌉ + 1) + |Pn| g( n / (|Pn|(⌈log n⌉ + 1)) )(⌈log n⌉ + 1) + |Pn|,
where μ_P^h is a piecewise stationary data generating source with partition P ∈ Tn, and the data in each of the stationary regions is distributed according to some source in G.
Proof. First observe that for all x1:n ∈ X^n we can lower bound the probability
    FMN_d(x1:n) = Σ_{P∈Cd} 2^(−Γ_d(P)) Π_{(a,b)∈Pn} ρ_a(x_{a:b}) ≥ Σ_{P∈Cd} 2^(−Γ_d(P)) Π_{(a,b)∈Pn} (1/2) ρ(x_{a:b})
                = 2^(−|Pn|) Σ_{P∈Cd} 2^(−Γ_d(P)) Π_{(a,b)∈Pn} ρ(x_{a:b}) = 2^(−|Pn|) PTW_d(x1:n).
Hence we have that −log FMN_d(x1:n) ≤ |Pn| − log PTW_d(x1:n). The proof is completed by using Theorem 1 from [17] to upper bound −log PTW_d(x1:n).
4 Experimental Results
We now report some experimental results with the FMN algorithm across three test domains. The first
two domains, The Mysterious Bag of Coins and A Fistful of Digits, are repeating sequence prediction
tasks. The final domain, Continual Atari 2600 Task Identification, is a video stream of game-play
from a collection of Atari games provided by the ALE [2] framework; here we qualitatively assess the
capabilities of the FMN process to provide meaningful task labels online from high dimensional input.
Domain Description. (Mysterious Bag of Coins) Our first domain is a sequence prediction game involving a predictor, an opponent and a bag of m biased coins. Flipping the ith coin involves sampling a value from a parametrized Bernoulli distribution B(θ_i), with θ_i ∈ [0, 1] for 1 ≤ i ≤ m. The predictor knows neither how many coins are in the bag, nor the values of the θ_i parameters. The data is generated by having the opponent flip a single coin (the choice of which is hidden from the predictor) drawn uniformly from the bag for X ∼ G(0.005) flips, and repeating, where G(α) denotes the geometric distribution with success probability α. At each time step t, the predictor outputs a distribution ρ_t : {0, 1} → [0, 1], and suffers an instantaneous loss of ℓ_t(x_t) := −log ρ_t(x_t). Here we test whether the FMN process can robustly identify change points, and exploit the knowledge that some segments of data appear to be similarly distributed.
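For reference, a small Python sketch (ours; the function and parameter names are not from the paper) of this generator, treating the geometric segment length as an equivalent per-step switching probability:

    import random

    def mboc_stream(m=7, switch_p=0.005, n=5000, seed=1):
        rng = random.Random(seed)
        thetas = [rng.random() for _ in range(m)]   # hidden coin biases, unknown to the predictor
        coin = rng.randrange(m)                     # the opponent's current (hidden) coin
        for _ in range(n):
            yield 1 if rng.random() < thetas[coin] else 0
            if rng.random() < switch_p:             # segment ends; redraw the active coin
                coin = rng.randrange(m)

    xs = list(mboc_stream())
    print(len(xs), sum(xs))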
(A Fistful of Digits) The second test domain uses a similar setup to the Mysterious Bag of Coins, except that now each observation is a 28x28 binary image taken from the MNIST [15] data set. We partitioned the MNIST data into m = 10 classes, one for each distinct digit, which we used to derive ten digit-specific empirical distributions. After picking a digit class, a random number Y = 200 + X, with X ∼ G(0.01), of examples are sampled (with replacement) from the associated empirical distribution, before repeating the digit selection and generation process. Similar to before, the predictor is required to output a distribution ρ_t : {0, 1}^{28×28} → [0, 1] over the possible outcomes, suffering an instantaneous loss of ℓ_t(x_t) := −log ρ_t(x_t) at each time step.
(Continual Atari 2600 Task Identification) Our third domain consists of a sequence of sampled Atari 2600 frames. Each frame has been downsampled to a 28 × 28 resolution and a 3 bit color space for reasons of computational efficiency. The sequence of frames is generated by first picking a game uniformly at random from a set of 45 Atari games (for which a game-specific DQN [16] policy is available), and then generating a random number Y = 200 + X of frames, where X ∼ G(0.005). Each action is chosen by the relevant game-specific DQN controller, which uses an epsilon-greedy policy. Once Y frames have been generated, the process is then repeated.
Left (Mysterious Bag of Coins):
    Algorithm   | Average Cumulative Regret
    KT          | 783.86 ± 7.79
    PTW + KT    | 157.19 ± 0.77
    FMN + KT    | 148.43 ± 0.75
    FMN* + KT   | 147.75 ± 0.74
Right (A Fistful of Digits):
    Algorithm   | Average Per Digit Loss
    MADE        | 94.08 ± 0.05
    PTW + MADE  | 94.08 ± 0.05
    FMN + MADE  | 86.12 ± 0.28
    Oracle      | 82.81 ± 0.06
Figure 4: (Left) Results on the Mysterious Bag of Coins; (Right) Results on a Fistful of Digits.
Results. We now describe our experimental setup and results. The following base models were
chosen for each test domain: for the Mysterious Bag of Coins (MBOC), we used the KT-estimator
[13], a beta-binomial model; for A Fistful of Digits (FOD), we used MADE [9], a recently introduced, general-purpose neural density estimator, with 500 hidden units, trained online using ADAGRAD [8] with a learning rate of 0.1; MADE was also the base model for the Continual Atari task, but here a smaller network consisting of 50 neurons was used for reasons of computational efficiency.
(Sequence Prediction) For each domain, we compared the performance of the base model, the base
model combined with PTW and the base model combined with the FMN process. We also report
the performance relative to a domain specific oracle: for the MBOC domain, the oracle is the true
data generating source, which has the (unfair) advantage of knowing the location of all potential
change-points and task-specific data generating distributions; for the FOD domain, we trained a
class conditional MADE model for each digit offline, and applied the relevant task-specific model to
each segment. Regret is reported for MBOC since we know the true data generating source, whereas
loss is reported for FOD. All results are reported in nats. The sequence length and number of
repeated runs for MBOC and FOD were 5k/10k and 2^21/64 respectively. For the MBOC experiment we set m = 7 and generated each θ_i uniformly at random. Our sequence prediction results for each domain are summarized in Figure 4, with 95% confidence intervals provided. Here FMN* denotes the Forget-me-not algorithm without the complexity reducing techniques previously described (these results are only feasible to produce on MBOC). For the FMN results, the MBOC hyper-parameters are k = 15, α = 0, β = 0, c = 4 and sub-sample sizes of 100; the FOD hyper-parameters are k = 30, α = 0.2, β = 0.06, c = 4 with sub-sample sizes of 10. Here we see a clear advantage to
using the FMN process compared with PTW, and that no significant performance is lost by using the
low complexity version of the algorithm.
Digging a bit deeper, it is interesting to note the inability of PTW to improve upon the performance of
the base model on FOD. This is in contrast to the FMN process, whose ability to remember previous
model states allows it to, over time, develop specialized models across digit specific data from
multiple segments, even in the case where the base model can be relatively slow to adapt online.
The reverse effect occurs in MBOC, where both FMN and PTW provide a large improvement over the
performance of the base model. The advantage of being able to remember is much smaller here due
to the speed at which the KT base model can learn, although not insignificant. It is also worth noting
that a performance improvement is obtained even though each individual observation is by itself not
informative; the FMN process is exploiting the statistical similarity of the outcomes across time.
(Online Task Identification) A video demonstrating real-time segmentation of Atari frames can be
found at: http://tinyurl.com/FMNVideo. Here we see that the (low complexity) FMN
quickly learns 45 game specific models, and performs an excellent job of routing experience to
the appropriate model. These results provide evidence that this technique can scale to long, high
dimensional input sequences using state of the art density models.
5 Conclusion
We introduced the Forget-me-not Process, an efficient, non-parametric meta-algorithm for online
probabilistic sequence prediction and task-segmentation for piecewise stationary, repeating sources.
We provided regret guarantees with respect to piecewise stationary data sources under the logarithmic
loss, and validated the method empirically across a range of sequence prediction and task identification
problems. For future work, it would be interesting to see whether a single Multiple Model-based
Reinforcement Learning [7] agent could be constructed using the Forget-me-not process for task
identification. Alternatively, the FMN process could be used to augment the conditional state density
models used for value estimation in [18]. Such systems would have the potential to be able to learn to
simultaneously play many different Atari games from a single stream of experience, as opposed to
previous efforts [16, 18] where game specific controllers were learnt independently.
References
[1] Ryan Prescott Adams and David J.C. MacKay. Bayesian Online Changepoint Detection. In arXiv,
http://arxiv.org/abs/0710.3742, 2007.
[2] M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation
platform for general agents. Journal of Artificial Intelligence Research, 47:253?279, 06 2013.
[3] Nicolo Cesa-Bianchi and Gabor Lugosi. Prediction, Learning, and Games. Cambridge University Press,
New York, NY, USA, 2006. ISBN 0521841089.
[4] Anne Collins and Etienne Koechlin. Reasoning, learning, and creativity: Frontal lobe function and human
decision-making. PLoS Biol, 10(3):1?16, 03 2012.
[5] Anne G.E. Collins and Michael J. Frank. Cognitive Control over Learning: Creating, Clustering and
Generalizing Task-Set Structure. Psychological review, 120.1:190?229, 2013.
[6] Maël Donoso, Anne G. E. Collins, and Etienne Koechlin. Foundations of human reasoning in the prefrontal
cortex. Science, 344(6191):1481?1486, 2014. doi: 10.1126/science.1252254.
[7] Kenji Doya and Kazuyuki Samejima. Multiple model-based reinforcement learning. Neural Computation,
14:1347?1369, 2002.
[8] John Duchi, Elad Hazan, and Yoram Singer. Adaptive Subgradient Methods for Online Learning and
Stochastic Optimization. Journal of Machine Learning Research (JMLR), 12:2121?2159, 07 2011.
[9] Mathieu Germain, Karol Gregor, Iain Murray, and Hugo Larochelle. MADE: masked autoencoder for
distribution estimation. In Proceedings of the 32nd International Conference on Machine Learning, JMLR
W&CP, volume 37, pages 881?889, 2015.
[10] A. György, T. Linder, and G. Lugosi. Efficient tracking of large classes of experts. IEEE Transactions on
Information Theory, 58(11):6709?6725, 2011.
[11] E. Hazan and C. Seshadhri. Efficient learning algorithms for changing environments. In Proceedings of
the 26th Annual International Conference on Machine Learning, pages 393?400. ACM, 2009.
[12] Marcus Hutter. On universal prediction and Bayesian confirmation. Theoretical Computer Science, 384(1):
33?48, 2007.
[13] R. Krichevsky and V. Trofimov. The performance of universal encoding. Information Theory, IEEE
Transactions on, 27(2):199?207, 1981.
[14] Tor Lattimore, Marcus Hutter, and Peter Sunehag. Concentration and confidence for discrete bayesian
sequence predictors. In Sanjay Jain, Rémi Munos, Frank Stephan, and Thomas Zeugmann, editors,
Proceedings of the 24th International Conference on Algorithmic Learning Theory, pages 324?338.
Springer, 2013.
[15] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition.
Proceedings of the IEEE, 86(11):2278?2324, Nov 1998.
[16] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare,
Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie,
Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis
Hassabis. Human-level control through deep reinforcement learning. Nature, 518, 2015.
[17] J. Veness, M. White, M. Bowling, and A. Gyorgy. Partition tree weighting. In Data Compression
Conference (DCC), pages 321?330, March 2013.
[18] Joel Veness, Marc G. Bellemare, Marcus Hutter, Alvin Chua, and Guillaume Desjardins. Compress and
control. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 25-30,
2015, Austin, Texas, USA., pages 3016?3023, 2015.
[19] Jeffrey S. Vitter. Random sampling with a reservoir. ACM Trans. Math. Softw., 11(1):37?57, March 1985.
ISSN 0098-3500. doi: 10.1145/3147.3165.
[20] F. Willems and M. Krom. Live-and-die coding for binary piecewise i.i.d. sources. In Information Theory.
1997. Proceedings., 1997 IEEE International Symposium on, page 68, jun-4 jul 1997.
[21] Frans M. J. Willems. Coding for a binary independent piecewise-identically-distributed source. IEEE
Transactions on Information Theory, 42:2210?2217, 1996.
[22] Frans M.J. Willems, Yuri M. Shtarkov, and Tjalling J. Tjalkens. The Context Tree Weighting Method:
Basic Properties. IEEE Transactions on Information Theory, 41:653?664, 1995.
5,587 | 6,056 | The Robustness of Estimator Composition
Jeff M. Phillips
School of Computing
University of Utah
Salt Lake City, UT 84112
[email protected]
Pingfan Tang
School of Computing
University of Utah
Salt Lake City, UT 84112
[email protected]
Abstract
We formalize notions of robustness for composite estimators via the notion of
a breakdown point. A composite estimator successively applies two (or more)
estimators: on data decomposed into disjoint parts, it applies the first estimator on
each part, then the second estimator on the outputs of the first estimator. And so
on, if the composition is of more than two estimators. Informally, the breakdown
point is the minimum fraction of data points which if significantly modified will
also significantly modify the output of the estimator, so it is typically desirable to
have a large breakdown point. Our main result shows that, under mild conditions
on the individual estimators, the breakdown point of the composite estimator is the
product of the breakdown points of the individual estimators. We also demonstrate
several scenarios, ranging from regression to statistical testing, where this analysis
is easy to apply, useful in understanding worst case robustness, and sheds powerful
insights onto the associated data analysis.
1 Introduction
Robust statistical estimators [5, 7] (in particular, resistant estimators), such as the median, are an
essential tool in data analysis since they are provably immune to outliers. Given data with a large
fraction of extreme outliers, a robust estimator guarantees the returned value is still within the nonoutlier part of the data. In particular, the role of these estimators is quickly growing in importance
as the scale and automation associated with data collection and data processing becomes more
commonplace. Artisanal data (hand crafted and carefully curated), where potential outliers can be
removed, is becoming proportionally less common. Instead, important decisions are being made
blindly based on the output of analysis functions, often without looking at individual data points
and their effect on the outcome. Thus using estimators as part of this pipeline that are not robust are
susceptible to erroneous and dangerous decisions as the result of a few extreme and rogue data points.
Although other approaches like regularization and pruning a constant number of obvious outliers
are common as well, they do not come with the important guarantees that ensure these unwanted
outcomes absolutely cannot occur.
In this paper we initiate the formal study of the robustness of composition of estimators through the
notion of breakdown points. These are especially important with the growth of data analysis pipelines
where the final result or prediction is the result of several layers of data processing. When each layer
in this pipeline is modeled as an estimator, then our analysis provides the first general robustness
analysis of these processes.
The breakdown point [4, 3] is a basic measure of robustness of an estimator. Intuitively, it describes
how many outliers can be in the data without the estimator becoming unreliable. However, the
literature is full of slightly inconsistent and informal definitions of this concept. For example:
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
• Aloupis [1] writes "the breakdown point is the proportion of data which must be moved to infinity so that the estimator will do the same."
• Huber and Ronchetti [8] write "the breakdown point is the smallest fraction of bad observations that may cause an estimator to take on arbitrarily large aberrant values."
• Dasgupta, Kumar, and Srikumar [14] write "the breakdown point of an estimator is the largest fraction of the data that can be moved arbitrarily without perturbing the estimator to the boundary of the parameter space."
All of these definitions have similar meanings, and they are typically sufficient for the purpose of
understanding a single estimator. However, they are not mathematically rigorous, and it is difficult to
use them to discuss the breakdown point of composite estimators.
Composition of Estimators. In a bit more detail (we give formal definitions in Section 2.1), an estimator E maps a data set to a single value in another space, sometimes the same as a single data point. For instance, the mean or the median are simple estimators on one-dimensional data. A composite E1-E2 estimator applies two estimators E1 and E2 on data stored in a hierarchy. Let P = {P1, P2, . . . , Pn} be a set of subdata sets, where each subdata set Pi = {p_{i,1}, p_{i,2}, . . . , p_{i,k}} has individual data readings. Then the E1-E2 estimator reports E2(E1(P1), E1(P2), . . . , E1(Pn)), that is, the estimator E2 applied to the output of estimator E1 on each subdata set.
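A tiny illustration (ours, not from the paper) of this composition, with the one-dimensional median playing both roles; note how a single extreme reading inside one subdata set is absorbed:

    import statistics

    def compose(E1, E2, parts):
        # The E1-E2 estimator: apply E1 to each subdata set, then E2 to the outputs.
        return E2([E1(p) for p in parts])

    parts = [[1.0, 1.1, 0.9], [2.0, 100.0, 2.1], [1.5, 1.6, 1.4]]
    print(compose(statistics.median, statistics.median, parts))   # 1.5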
1.1
Examples of Estimator Composition
Composite estimators arise in many scenarios in data analysis.
Uncertain Data. For instance, in the last decade there has been increased focus on the study
of uncertainty data [10, 9, 2] where instead of analyzing a data set, we are given a model of the
uncertainty of each data point. Consider tracking the summarization of a group of n people based
on noisy GPS measurements. For each person i we might get k readings of their location Pi , and
use these k readings as a discrete probability distribution of where that person might be. Then in
order to represent the center of this set of people a natural thing to do would be to estimate the
location of each person as xi ← E1(Pi), and then use these estimates to summarize the entire group
E2 (x1 , x2 , . . . , xn ). Using the mean as E1 and E2 would be easy, but would be susceptible to even
a single outrageous outlier (all people are in Manhattan, but a spurious reading was at (0, 0) lat-long,
off the coast of Africa). An alternative is to use the L1 -median for E1 and E2 , that is known to have
an optimal breakdown point of 0.5. But what is the breakdown point of the E1 -E2 estimator?
Robust Analysis of Bursty Behavior. Understanding the robustness of estimators can also be
critical towards how much one can ?game? a system. For instance, consider a start-up media website
that gets bursts of traffic from memes they curate. They publish a statistic showing the median of the
top half of traffic days each month, and aggregate these by taking the median of such values over the
top half of all months. This is a composite estimator, and they proudly claim, even through they have
bursty traffic, it is robust (each estimator has a breakdown point of 0.25). If this composite estimator
shows large traffic, should a potential buyer of this website be impressed? Is there a better, more
robust estimator the potential buyer could request? If the media website can stagger the release of its
content, how should they distribute it to maximize this composite estimator?
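As a concrete sketch (ours) of this statistic, with E1 the median of the top half of days in a month and E2 the median of the top half of monthly values:

    import statistics

    def top_half_median(xs):
        xs = sorted(xs)
        return statistics.median(xs[len(xs) // 2:])   # median of the larger half

    def traffic_stat(months):
        # months: a list of months, each a list of daily traffic counts
        return top_half_median([top_half_median(days) for days in months])

    months = [[10, 12, 9, 500], [11, 8, 13, 9], [10, 10, 11, 12]]
    print(traffic_stat(months))   # a single burst day (500) already dominates the statistic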
Part of the Data Analysis Pipeline. This process of estimator composition is very common in
broad data analysis literature. This arises from the idea of an ?analysis pipeline? where at several
stages estimators or analysis is performed on data, and then further estimators and analysis are
performed downstream. In many cases a robust estimator like the median is used, specifically for its
robustness properties, but there is no analysis of how robust the composition of these estimators is.
1.2 Main Results
This paper initiates the formal and general study of the robustness of composite estimators.
• In Subsection 2.1, we give two formal definitions of breakdown points, which are both required to prove the composition theorem. One variant of the definition closely aligns with other formalizations [4, 3], while another is fundamentally different.
• The main result provides general conditions under which an E1-E2 estimator with breakdown points β1 and β2 has a breakdown point of β1 β2 (Theorem 2 in Subsection 2.2).
• Moreover, by showing examples where our conditions do not strictly apply, we gain an understanding of how to circumvent the above result. An example is in composite percentile estimators (e.g., E1 returns the 25th percentile, and E2 the 75th percentile of a ranked set). These composite estimators have a larger breakdown point than β1 · β2.
• The main result can be extended to multiple compositions, under suitable conditions, so for instance an E1-E2-E3 estimator has a breakdown point of β1 β2 β3 (Theorem 3 in Subsection 2.3). This implies that long analysis chains can be very susceptible to a few carefully placed outliers, since the breakdown point decays exponentially in the length of the analysis chain.
• In Section 3, we highlight several applications of this theory, including robust regression, robustness of p-values, a depth-3 composition, and how to advantageously manipulate the observation about percentile estimator composition. We demonstrate a few more applications with simulations in Section 4.
2 Robustness of Estimator Composition
2.1 Formal Definitions of Breakdown Points
In this paper, we give two definitions for the breakdown point: Asymptotic Breakdown Point and
Asymptotic Onto-Breakdown Point. The first definition, Asymptotic Breakdown Point, is similar
to the classic formal definitions in [4] and [3] (including their highly technical nature), although
their definitions of the estimator are slightly different leading to some minor differences in special
cases. However our second definition, Asymptotic Onto-Breakdown Point, is a structurally new
definition, and we illustrate how it can result in significantly different values on some common and
useful estimators. Our main theorem will require both definitions, and the differences in performance
will lead to several new applications and insights.
We define an estimator E as a function from the collection of some finite subsets of a metric space (X, d) to another metric space (X′, d′):
    E : A ⊆ {X ⊆ X | 0 < |X| < ∞} → X′,    (1)
where X is a multiset. This means if x ∈ X then x can appear more than once in X, and the multiplicity of elements will be considered when we compute |X|.
Finite Sample Breakdown Point. For an estimator E defined in (1) and positive integer n we define its finite sample breakdown point gE(n) over a set M as

gE(n) = max(M) if M ≠ ∅, and gE(n) = 0 if M = ∅,    (2)

where Δ(x′, X) = max_{x∈X} d(x′, x) is the distance from x′ to the furthest point in X, and

M = {m ∈ [0, n] | ∀ X ∈ 𝒜 with |X| = n, ∀ G1 > 0, ∃ G2 = G2(X, G1) s.t. ∀ X′ ∈ 𝒜,
  if |X′| = n and |{x′ ∈ X′ | Δ(x′, X) > G1}| ≤ m, then d′(E(X), E(X′)) ≤ G2}.    (3)
For an estimator E in (1) and X ∈ 𝒜, the finite sample breakdown point gE(n) means that if the number of unbounded points in X′ is at most gE(n), then E(X′) will be bounded. Let's break this definition down a bit more. The definition holds over all data sets X ∈ 𝒜 of size n, and for all values G1 > 0 and some value G2 defined as a function G2(X, G1) of the data set X and value G1. Then gE(n) is the maximum value m (over all X, G1, and G2 above) such that for all X′ ∈ 𝒜 with |X′| = n, if |{x′ ∈ X′ | Δ(x′, X) > G1}| ≤ m (that is, at most m points are further than G1 from X), then the estimators are close: d′(E(X), E(X′)) ≤ G2.
For example, consider a point set X = {0, 0.15, 0.2, 0.25, 0.4, 0.55, 0.6, 0.65, 0.72, 0.8, 1.0} with n = 11 and median 0.55. If we set G1 = 3, then we can consider sets X′ of size 11 with fewer than m points that are either greater than 3 or less than −2. This means in X′ there are at most m points which are greater than 3 or less than −2, and all other n − m points are in [−2, 3]. Under these conditions, we can (conservatively) set G2 = 4, and know that for values of m as 1, 2, 3, 4, or 5, the median of X′ must be between −3.45 and 4.55; and this holds no matter where we set those m points (e.g., at 20 or at 1000). This does not hold for m ≥ 6, so gE(11) = 5.
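To make this concrete, here is a minimal Python sketch (ours, not the paper's; the value 1e9 simply stands in for "arbitrarily far away") that corrupts m points of the example set X above and watches when the median escapes a bounded range:

    import numpy as np

    X = np.array([0, 0.15, 0.2, 0.25, 0.4, 0.55, 0.6, 0.65, 0.72, 0.8, 1.0])
    n = len(X)  # n = 11, median(X) = 0.55

    for m in range(n + 1):
        X_prime = X.copy()
        X_prime[:m] = 1e9  # replace m points with an arbitrarily distant value
        # the median stays bounded for m <= 5 and escapes for m >= 6,
        # matching gE(11) = 5
        print(m, np.median(X_prime))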
Asymptotic Breakdown Point. If the limit lim_{n→∞} gE(n)/n exists, then we define this limit

β = lim_{n→∞} gE(n)/n    (4)

as the asymptotic breakdown point, or breakdown point for short, of the estimator E.
Remark 1. It is not hard to see that many common estimators satisfy the conditions. For example, the median, L1-median [1], and Siegel estimator [11] all have asymptotic breakdown points of 0.5.
Asymptotic Onto-Breakdown Point. For an estimator E given in (1) and positive integer n, if

M̃ = {0 ≤ m ≤ n | ∀ X ∈ 𝒜 with |X| = n, ∀ y ∈ 𝒳′, ∃ X′ ∈ 𝒜 s.t. |X′| = n, |X ∩ X′| = n − m, E(X′) = y}

is not empty, we define

fE(n) = min(M̃).    (5)

The definition of fE(n) implies that by changing fE(n) elements in X, we can make E become any value in 𝒳′: it is onto. In contrast, gE(n) only requires E(X′) to become far from E(X), perhaps only in one direction. The asymptotic onto-breakdown point is then defined as the following limit, if it exists:

lim_{n→∞} fE(n)/n.    (6)
Remark 2. For a quantile estimator E that returns a percentile other than the 50th, lim_{n→∞} gE(n)/n ≠ lim_{n→∞} fE(n)/n. For instance, if E returns the 25th percentile of a ranked set, setting only 25% of the data points to −∞ causes E to return −∞; hence lim_{n→∞} gE(n)/n = 0.25. And while any value less than the original 25th percentile can also be obtained, to return a value larger than the largest element in the original set, at least 75% of the data must be modified; thus lim_{n→∞} fE(n)/n = 0.75.
As we will observe in Section 3, this nuance in definition regarding percentile estimators will allow
for some interesting composite estimator design.
2.2 Definition of E1-E2 Estimators, and their Robustness
We consider the following two estimators:
E1 : 𝒜1 ⊆ {X ⊂ 𝒳1 | 0 < |X| < ∞} ↦ 𝒳2,    (7)
E2 : 𝒜2 ⊆ {X ⊂ 𝒳2 | 0 < |X| < ∞} ↦ 𝒳′2,    (8)

where any finite subset of E1(𝒜1), the range of E1, belongs to 𝒜2. Suppose Pi ∈ 𝒜1 with |Pi| = k for i = 1, 2, ..., n, and P_flat = ⊎_{i=1}^n Pi, where ⊎ means that if x appears n1 times in X1 and n2 times in X2 then x appears n1 + n2 times in X1 ⊎ X2. We define

E(P_flat) = E2(E1(P1), E1(P2), ..., E1(Pn)).    (9)
Theorem 1. Suppose gE1(k) and gE2(n) are the finite sample breakdown points of the estimators E1 and E2 given by (7) and (8) respectively. If gE(nk) is the finite sample breakdown point of E given by (9), then we have gE2(n)·gE1(k) ≤ gE(nk). If β1 = lim_{k→∞} gE1(k)/k, β2 = lim_{n→∞} gE2(n)/n, and β = lim_{n,k→∞} gE(nk)/(nk) all exist, then we have β1·β2 ≤ β.
The proof of Theorem 1 and other theorems can be found in the full version of this paper [12].
Remark 3. Under the conditions of Theorem 1, we cannot guarantee β = β1·β2. For example, suppose E1 and E2 take the 25th percentile and the 75th percentile of a ranked set of real numbers, respectively. So, we have β1 = β2 = 1/4. However, β = 1/4 · 3/4 = 3/16.
In fact, the limit of gE(nk)/(nk) as n, k → ∞ may not even exist. For example, suppose E1 takes the 25th percentile of a ranked set of real numbers. When n is odd, E2 takes the 25th percentile of a ranked set of n real numbers, and when n is even, E2 takes the 75th percentile of a ranked set of n real numbers. Thus β1 = β2 = 1/4, but gE(nk) ≈ (1/4)·(1/4)·nk if n is odd and gE(nk) ≈ (1/4)·(3/4)·nk if n is even, which implies lim_{n,k→∞} gE(nk)/(nk) does not exist.
Therefore, to guarantee that β exists and β = β1·β2, we introduce the definition of the asymptotic onto-breakdown point in (6). As shown in Remark 2, the values of (4) and (6) may not be equal. However, under the condition that the asymptotic breakdown point and the asymptotic onto-breakdown point of E1 are the same, we can finally state our desired clean result.
Theorem 2. For estimators E1, E2 and E given by (7), (8) and (9) respectively, suppose gE1(k), gE2(n) and gE(nk) are defined by (2), and fE1(k) is defined by (5). Moreover, E1 is an onto function and for any fixed positive integer n we have

∃ X ∈ 𝒜2 with |X| = n and G1 > 0, s.t. ∀ G2 > 0, ∃ X′ ∈ 𝒜2 satisfying
|X′| = n, |X′ \ X| = gE2(n) + 1, and d′2(E2(X), E2(X′)) > G2,    (10)

where d′2 is the metric of the space 𝒳′2. If β1 = lim_{k→∞} gE1(k)/k = lim_{k→∞} fE1(k)/k and β2 = lim_{n→∞} gE2(n)/n both exist, then β = lim_{n,k→∞} gE(nk)/(nk) exists, and β = β1·β2.
Remark 4. Without the introduction of fE(n), we cannot even guarantee β ≤ β1 or β ≤ β2 under the conditions of Theorem 1 alone, even if E1 and E2 are both onto functions. For example, for any P = {p1, p2, ..., pk} ⊂ ℝ and X = {x1, x2, ..., xn} ⊂ ℝ, we define E1(P) = 1/median(P) (if median(P) ≠ 0, otherwise define E1(P) = 0) and E2(X) = median(y1, y2, ..., yn), where yi (1 ≤ i ≤ n) is given by yi = 1/xi (if xi ≠ 0, otherwise define yi = 0). Since gE1(k) = gE2(n) = 0 for all n, k, we have β1 = β2 = 0. However, in order to make E2(E1(P1), E1(P2), ..., E1(Pn)) → +∞, we need to make about n/2 elements of {E1(P1), E1(P2), ..., E1(Pn)} go to 0+. To make E1(Pi) → 0+, we need to make about k/2 points in Pi go to +∞. Therefore, we have gE(nk) ≈ (n/2)·(k/2) and β = 1/4.
2.3 Multi-level Composition of Estimators
To study the breakdown point of composite estimators with more than two levels, we introduce the
following estimator:
E3 : 𝒜3 ⊆ {X ⊂ 𝒳′2 | 0 < |X| < ∞} ↦ 𝒳′3,    (11)

where any finite subset of E2(𝒜2), the range of E2, belongs to 𝒜3. Suppose P_{i,j} ∈ 𝒜1 with |P_{i,j}| = k for i = 1, 2, ..., n, j = 1, 2, ..., m, and P_flat^j = ⊎_{i=1}^n P_{i,j}, P_flat = ⊎_{j=1}^m P_flat^j. We define

E(P_flat) = E3(E2(P̃_flat^1), E2(P̃_flat^2), ..., E2(P̃_flat^m)),    (12)

where P̃_flat^j = {E1(P_{1,j}), E1(P_{2,j}), ..., E1(P_{n,j})}, for j = 1, 2, ..., m.
From Theorem 2, we can obtain the following theorem about the breakdown point of E in (12).
Theorem 3. For estimators E1, E2, E3 and E given by (7), (8), (11) and (12) respectively, suppose gE1(k), gE2(n), gE3(m) and gE(mnk) are defined by (2), and fE1(k), fE2(n) are defined by (5). Moreover, E1 and E2 are both onto functions, and for any fixed positive integer m we have

∃ X ∈ 𝒜3 with |X| = m and G1 > 0, s.t. ∀ G2 > 0, ∃ X′ ∈ 𝒜3
satisfying |X′| = m, |X′ \ X| = gE3(m) + 1, and d′3(E3(X), E3(X′)) > G2,

where d′3 is the metric of the space 𝒳′3. If β1 = lim_{k→∞} gE1(k)/k = lim_{k→∞} fE1(k)/k, β2 = lim_{n→∞} gE2(n)/n = lim_{n→∞} fE2(n)/n, and β3 = lim_{m→∞} gE3(m)/m all exist, then β = lim_{m,n,k→∞} gE(mnk)/(mnk) exists, and β = β1·β2·β3.

3 Applications

3.1 Application 1: Balancing Percentiles
For n companies, for simplicity, assume each company has k employees. We are interested in the income of the regular employees of all companies, not the executives who may have much higher pay. Let p_{i,j} represent the income of the jth employee in the ith company. Set P_flat = ⊎_{i=1}^n Pi, where the ith company has a set Pi = {p_{i,1}, p_{i,2}, ..., p_{i,k}} ⊂ ℝ and, for notational convenience, p_{i,1} ≤ p_{i,2} ≤ ... ≤ p_{i,k} for i ∈ {1, 2, ..., n}. Suppose the income data Pi of each company is preprocessed by a 45-percentile estimator E1 (the median of the lowest 90% of incomes), with breakdown point β1 = 0.45. In theory E1(Pi) can better reflect the income of regular employees in a company, since about 10% of the employees in a company may be in management and their incomes are usually much higher than those of common employees. So, the preprocessed data is X = {E1(P1), E1(P2), ..., E1(Pn)}. If we define E2(X) = median(X) and E(P_flat) = E2(X), then the breakdown point of E2 is β2 = 0.5, and the breakdown point of E is β = β1·β2 = 0.225.
However, if we use another E2, then E can be more robust. For example, for X = {x1, x2, ..., xn} where x1 ≤ x2 ≤ ... ≤ xn, we can define E2 as the 55-percentile estimator (the median of the largest 90% of incomes). In order to make E(P_flat) = E2(X) = E2(E1(P1), E1(P2), ..., E1(Pn)) go to infinity, we need to either move 55% of the points of X to −∞ or move 45% of the points of X to +∞. In either case, we need to move about 0.45 · 0.55 · nk points of P_flat to infinity. This means the breakdown point of E is β = 0.45 · 0.55 = 0.2475, which is greater than 0.225.
This example implies that if we know how the raw data is preprocessed by estimator E1, we can choose a proper estimator E2 to make the E1-E2 estimator more robust.
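A minimal sketch of this composition (the income data and function names are our own illustration, not the paper's): it composes a 45th-percentile estimator within each company with either a median or a 55th-percentile estimator across companies.

    import numpy as np

    def pct45(p):                      # E1: 45th percentile, breakdown point 0.45
        return np.percentile(p, 45)

    def compose(groups, inner, outer_q):
        X = [inner(p) for p in groups]     # preprocess each company's incomes
        return np.percentile(X, outer_q)   # aggregate across companies

    rng = np.random.default_rng(0)
    companies = [rng.lognormal(10, 0.5, size=100) for _ in range(50)]

    median_of_45 = compose(companies, pct45, 50)  # breakdown 0.45 * 0.50 = 0.2250
    p55_of_45 = compose(companies, pct45, 55)     # breakdown 0.45 * 0.55 = 0.2475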
3.2 Application 2: Regression of L1 Medians
Suppose we want to use linear regression to robustly predict the weight of a person from his or her height, and we have multiple readings of each person's height and weight. The raw data is P_flat = ⊎_{i=1}^n Pi, where for the ith person we have a set Pi = {p_{i,1}, p_{i,2}, ..., p_{i,k}} ⊂ ℝ² and p_{i,j} = (x_{i,j}, y_{i,j}) for i ∈ {1, 2, ..., n}, j ∈ {1, 2, ..., k}. Here, x_{i,j} and y_{i,j} are the height and weight, respectively, of the ith person in their jth measurement.
One "robust" way to process this data is to first pre-process each Pi with its L1-median [1]: (x̃i, ỹi) = E1(Pi), where E1(Pi) = L1-median(Pi) has breakdown point β1 = 0.5. Then we could generate a linear model ỹ = a·x̃ + b to predict weight from the Siegel estimator [11]: E2(Z) = (a, b), with breakdown point β2 = 0.5. From Theorem 2 we immediately know the breakdown point of E(P_flat) = E2(E1(P1), E1(P2), ..., E1(Pn)) is β = β1·β2 = 0.5 · 0.5 = 0.25.
Alternatively, taking the Siegel estimator of P_flat (i.e., returning E2(P_flat)) would have a much larger breakdown point of 0.5. So a seemingly harmless operation of normalizing the data with a robust estimator (with optimal 0.5 breakdown point) drastically decreases the robustness of the process.
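The following sketch contrasts the two pipelines; it is our own illustration, using a standard Weiszfeld-style iteration for the L1-median and the textbook repeated-median form of the Siegel estimator, not code from the paper.

    import numpy as np

    def l1_median(P, iters=100):
        # Weiszfeld iteration for the L1 (geometric) median of points P (k x 2)
        z = P.mean(axis=0)
        for _ in range(iters):
            d = np.maximum(np.linalg.norm(P - z, axis=1), 1e-12)
            z = (P / d[:, None]).sum(axis=0) / (1.0 / d).sum()
        return z

    def siegel(x, y):
        # repeated-median slope and intercept; breakdown point 0.5
        n = len(x)
        slopes = [np.median([(y[j] - y[i]) / (x[j] - x[i])
                             for j in range(n) if j != i]) for i in range(n)]
        a = np.median(slopes)
        return a, np.median(y - a * x)

    rng = np.random.default_rng(0)
    people = [np.array([170 + i, 65 + i]) + 0.1 * rng.standard_normal((20, 2))
              for i in range(30)]

    # Pipeline 1: Siegel on the per-person L1-medians, breakdown 0.5 * 0.5 = 0.25
    med = np.array([l1_median(P) for P in people])
    a1, b1 = siegel(med[:, 0], med[:, 1])

    # Pipeline 2: Siegel on the raw pooled data, breakdown 0.5
    flat = np.vstack(people)
    a2, b2 = siegel(flat[:, 0], flat[:, 1])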
3.3 Application 3: Significance Thresholds
Suppose we are studying the distribution of the wingspread of fruit flies. There are n = 500 flies,
and the variance of the true wingspread among these flies is on the order of 0.1 units. Our goal is to
estimate the 0.05 significance level of this distribution of wingspread among normal flies.
To obtain a measured value of the wingspread of the ith fly, denoted Fi , we measure the wingspread
of the ith fly k = 100 times independently, and obtain the measurement set Pi = {p_{i,1}, p_{i,2}, ..., p_{i,k}}.
The measurement is carried out by a machine automatically and quickly, which implies the variance
of each Pi is typically very small, perhaps only 0.0001 units, but there are outliers in Pi with small
chance due to possible machine malfunction. This malfunction may be correlated to individual
flies because of anatomical issues, or it may have autocorrelation (the machine jams for a series of
consecutive measurements).
To perform hypothesis testing we desire the 0.05 significance level, so we are interested in the 95th percentile of the set F = {F1, F2, ..., Fn}. So a post-processing estimator E2 returns the 95th percentile of F and has a breakdown point of β2 = 0.05 [6]. Now, we need to design an estimator E1 to process the raw data P_flat = ⊎_{i=1}^n Pi to obtain F = {F1, F2, ..., Fn}. For example, we can define E1 as Fi = E1(Pi) = median(Pi) and the estimator E as E(P_flat) = E2(E1(P1), E1(P2), ..., E1(Pn)). Then, the breakdown point of E1 is 0.5. Since the breakdown point of E2 is 0.05, the breakdown point of the composite estimator E is β = β1·β2 = 0.5 · 0.05 = 0.025. This means that if the measurement machine malfunctioned only 2.5% of the time, we could have an anomalous significance level, leading to false discovery. Can we make this process more robust by adjusting E1?
Actually, yes! We can use another pre-processing estimator to get a more robust E. Since the variance of each Pi is only 0.0001, we can let E1 return the 5th percentile of a ranked set of real numbers; then there is not much difference between E1(Pi) and the median of Pi. (Note: this introduces a small amount of bias that can likely be accounted for in other ways.) In order to make E(P_flat) = E2(F) go to infinity, we need to move 5% of the points of F to +∞ (causing E2 to give an anomalous value) or 95% of the points of F to −∞ (causing many, 95%, of the E1 values to give anomalous values). In either case, we need to move about 5% · 95% of the points of P_flat to infinity. So, the breakdown point of E is β = 0.05 · 0.95 = 0.0475, which is greater than 0.025. That is, we can now sustain up to 4.75% of the measurement machine's readings being anomalous, almost double the earlier figure, without producing an anomalous significance threshold value.
This example implies that if we know the post-processing estimator E2, we can choose a proper method to preprocess the raw data to make the E1-E2 estimator more robust.
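A small simulation sketch of this effect (our own; the parameters mirror the example, and the malfunction model, which corrupts a majority of the readings for 6% of the flies, is an assumption for illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    n, k = 500, 100
    bad = int(0.06 * n)  # 6% of flies hit by machine malfunction

    def significance_threshold(pre):
        F = []
        for i in range(n):
            P = rng.normal(10.0, 0.01, size=k)  # tight repeated measurements
            if i < bad:
                P[: k // 2 + 1] = 1e6  # corrupt a majority of this fly's readings
            F.append(pre(P))
        return np.percentile(F, 95)  # E2 for the 0.05 significance level

    print(significance_threshold(np.median))                      # anomalous: ~1e6
    print(significance_threshold(lambda P: np.percentile(P, 5)))  # still ~10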
3.4 Application 4: 3-Level Composition
Suppose we want to use a single value to represent the temperature of the US on a certain day. There are m = 50 states in the country. Suppose each state has n = 100 meteorological stations, and station i in state j measures the local temperature k = 24 times to get the data P_{i,j} = {t_{i,j,1}, t_{i,j,2}, ..., t_{i,j,k}}. We define P_flat^j = ⊎_{i=1}^n P_{i,j}, P_flat = ⊎_{j=1}^m P_flat^j, and

E1(P_{i,j}) = median(P_{i,j}),    E2(P_flat^j) = median(E1(P_{1,j}), E1(P_{2,j}), ..., E1(P_{n,j})),
E(P_flat) = E3(E2(P_flat^1), E2(P_flat^2), ..., E2(P_flat^m)) = median(E2(P_flat^1), E2(P_flat^2), ..., E2(P_flat^m)).

So, the breakdown points of E1, E2 and E3 are β1 = β2 = β3 = 0.5. From Theorem 3, we know the breakdown point of E is β = β1·β2·β3 = 0.125. Therefore, the estimator E is not very robust, and it may not be a good choice to use E(P_flat) to represent the temperature of the US on a certain day.
This example illustrates how the more times the raw data is aggregated, the more unreliable the final
result can become.
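A compact sketch (ours) of the three-level pipeline, showing that corrupting just over half the readings of just over half the stations in just over half the states, about 0.5³ = 12.5% of all points, moves the final value arbitrarily far:

    import numpy as np

    rng = np.random.default_rng(2)
    m, n, k = 50, 100, 24
    temps = rng.normal(15, 5, size=(m, n, k))  # states x stations x readings

    def us_temperature(T):
        station = np.median(T, axis=2)      # E1 per station
        state = np.median(station, axis=1)  # E2 per state
        return np.median(state)             # E3 over states

    T = temps.copy()
    T[: m // 2 + 1, : n // 2 + 1, : k // 2 + 1] = 1e6
    print(us_temperature(temps), us_temperature(T))  # e.g. ~15 vs ~1e6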
4 Simulation: Estimator Manipulation
In this simulation we actually construct a method to relocate an estimator by modifying the smallest number of points possible. We specifically target the L1-median of L1-medians, since it is somewhat non-trivial to solve for the new locations of data points.
In particular, given a target point p0 ∈ ℝ² and a set of nk points P_flat = ⊎_{i=1}^n Pi, where Pi = {p_{i,1}, p_{i,2}, ..., p_{i,k}} ⊂ ℝ², we use simulation to show that we only need to change n̄·k̄ points of P_flat to get a new set P̃_flat = ⊎_{i=1}^n P̃i such that median(median(P̃1), median(P̃2), ..., median(P̃n)) = p0. Here, "median" means the L1-median, and

n̄ = n/2 if n is even, n̄ = (n+1)/2 if n is odd;    k̄ = k/2 if k is even, k̄ = (k+1)/2 if k is odd.

To do this, we first show that, given k points S = {(xi, yi) | 1 ≤ i ≤ k} in ℝ² and a target point (x0, y0), we can change k̄ points of S to make (x0, y0) the L1-median of the new set. As n and k grow, n̄·k̄/(nk) → 0.25, the asymptotic breakdown point of this estimator as a consequence of Theorem 2, and thus we may need to move this many points to get the result.
If (x0, y0) is the L1-median of the set {(xi, yi) | 1 ≤ i ≤ k}, then we have [13]:

Σ_{i=1}^k (xi − x0)/√((xi − x0)² + (yi − y0)²) = 0,    Σ_{i=1}^k (yi − y0)/√((xi − x0)² + (yi − y0)²) = 0.    (13)

We define ~x = (x1, x2, ..., x_k̄), ~y = (y1, y2, ..., y_k̄) and

h(~x, ~y) = ( Σ_{i=1}^k (xi − x0)/√((xi − x0)² + (yi − y0)²) )² + ( Σ_{i=1}^k (yi − y0)/√((xi − x0)² + (yi − y0)²) )².

Since (13) is the sufficient and necessary condition for the L1-median, if we can find ~x and ~y such that h(~x, ~y) = 0, then (x0, y0) is the L1-median of the new set.
Since

∂h/∂xi = 2 [Σ_{j=1}^k (xj − x0)/√((xj − x0)² + (yj − y0)²)] · (yi − y0)² / ((xi − x0)² + (yi − y0)²)^{3/2}
  − 2 [Σ_{j=1}^k (yj − y0)/√((xj − x0)² + (yj − y0)²)] · (xi − x0)(yi − y0) / ((xi − x0)² + (yi − y0)²)^{3/2},

∂h/∂yi = −2 [Σ_{j=1}^k (xj − x0)/√((xj − x0)² + (yj − y0)²)] · (xi − x0)(yi − y0) / ((xi − x0)² + (yi − y0)²)^{3/2}
  + 2 [Σ_{j=1}^k (yj − y0)/√((xj − x0)² + (yj − y0)²)] · (xi − x0)² / ((xi − x0)² + (yi − y0)²)^{3/2},

we can use gradient descent to compute ~x, ~y to minimize h. For the input S = {(xi, yi) | 1 ≤ i ≤ k}, we choose the initial values ~x0 = {x1, x2, ..., x_k̄}, ~y0 = {y1, y2, ..., y_k̄}, and then update ~x and ~y along the negative gradient direction of h, until the Euclidean norm of the gradient is less than 0.00001.
The algorithm framework is then as follows, using the above gradient descent formulation at each step. We first compute the L1-median mi for each Pi, and then change n̄ points in {m1, m2, ..., mn} to obtain {m′1, m′2, ..., m′_n̄, m_{n̄+1}, ..., mn} such that median(m′1, m′2, ..., m′_n̄, m_{n̄+1}, ..., mn) = p0. For each m′i, we change k̄ points in Pi to obtain P̃i = {p′_{i,1}, p′_{i,2}, ..., p′_{i,k̄}, p_{i,k̄+1}, ..., p_{i,k}} such that median(P̃i) = m′i. Thus, we have

median(median(P̃1), ..., median(P̃_n̄), median(P_{n̄+1}), ..., median(Pn)) = p0.    (14)
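A sketch of the inner routine (our implementation of the procedure just described, with the analytic gradient above; the fixed step size lr is a free parameter of ours, and the paper's exact solver settings are not reproduced):

    import numpy as np

    def h_and_grad(free, fixed, x0, y0):
        # free: the k_bar movable points; fixed: the untouched points
        P = np.vstack([free, fixed])
        d = P - np.array([x0, y0])
        r = np.linalg.norm(d, axis=1)
        u, v = (d[:, 0] / r).sum(), (d[:, 1] / r).sum()
        kb = len(free)
        dx, dy, r3 = d[:kb, 0], d[:kb, 1], r[:kb] ** 3
        gx = 2 * u * dy ** 2 / r3 - 2 * v * dx * dy / r3
        gy = -2 * u * dx * dy / r3 + 2 * v * dx ** 2 / r3
        return u * u + v * v, np.stack([gx, gy], axis=1)

    def force_l1_median(S, x0, y0, lr=0.05, tol=1e-5):
        # move the first k_bar points of S so (x0, y0) becomes the L1-median
        kb = len(S) // 2 + len(S) % 2
        free, fixed = S[:kb].copy(), S[kb:]
        while True:
            h, g = h_and_grad(free, fixed, x0, y0)
            if np.linalg.norm(g) < tol:  # stop when the gradient norm is tiny
                return np.vstack([free, fixed])
            free -= lr * g

Applying the same routine once at the top level (to the set of per-group medians) and once inside each selected group reproduces the full median-of-medians manipulation.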
To show a simulation of this process, we use a uniform distribution to randomly generate nk points in the region [−10, 10] × [−10, 10], generate a target point p0 = (x0, y0) in the region [−20, 20] × [−20, 20], and then use our algorithm to change n̄·k̄ points in the given set, to make the new set satisfy (14). Table 1 shows the result of running this experiment for different n and k, where (x′0, y′0) is the median of medians for the new set obtained by our algorithm. It lists the various values of n and k, the corresponding numbers n̄ and k̄ of points modified, and the target point and result of our algorithm. If we tighten the terminating condition, which means increasing the number of iterations, we can obtain a more accurate result; but even when only requiring the Euclidean norm of the gradient to be less than 0.00001, we get very accurate results, within about 0.01 in each coordinate.
We illustrate the results of this process graphically for an example in Table 1: the case n = 5, k = 8, (x0, y0) = (0.99, 1.01), which is shown in Figure 1.

Table 1: The running result of the simulation.

n    | k   | n̄   | k̄   | (x0, y0)         | (x′0, y′0)
5    | 8   | 3   | 4   | (0.99, 1.01)     | (0.99, 1.01)
5    | 8   | 3   | 4   | (10.76, 11.06)   | (10.70, 11.06)
10   | 5   | 5   | 3   | (-13.82, -4.74)  | (-13.83, -4.74)
50   | 20  | 25  | 10  | (-14.71, -13.67) | (-14.72, -13.67)
100  | 50  | 50  | 25  | (-14.07, 18.36)  | (-14.07, 18.36)
500  | 100 | 250 | 50  | (-15.84, -6.42)  | (-15.83, -6.42)
1000 | 200 | 500 | 100 | (18.63, -12.10)  | (18.78, -12.20)

[Figure 1: The running result for the case n = 5, k = 8, (x0, y0) = (0.99, 1.01) in Table 1. Legend: the given points that are not changed; the given points that are changed; the new locations for those changed points; the medians of old subsets; the medians of new subsets; the median of medians for the given points; the target point.]

In this figure, the green star is the target point. Since n = 5, we use five different markers (circle, square, upward-pointing triangle, downward-pointing triangle, and diamond) to represent the five kinds of points. The given data P_flat are shown by black points and unfilled points. Our algorithm changes those unfilled points to the blue ones, and the green points are the medians of the new subsets. The red star is the median of medians for P_flat, and the other red points are the medians of the old subsets. So, we only changed 12 points out of 40, and the median of medians for the new data set is very close to the target point.
5 Conclusion
We define the breakdown point of the composition of two or more estimators. These definitions are technical but necessary to understand the robustness of composite estimators. Generally, the composition of two or more estimators is less robust than each individual estimator. We highlight a few applications and believe many more exist. These results already provide important insights for complex data analysis pipelines common to large-scale automated data analysis.
References
[1] G. Aloupis. Geometric measures of data depth. In Data Depth: Robust Multivariate Analysis, Computational Geometry and Applications. AMS, 2006.
[2] G. Cormode and A. McGregor. Approximation algorithms for clustering uncertain data. In PODS, 2008.
[3] P. Davies and U. Gather. The breakdown point: Examples and counterexamples. REVSTAT – Statistical Journal, 5:1–17, 2007.
[4] F. R. Hampel. A general qualitative definition of robustness. Annals of Mathematical Statistics, 42:1887–1896, 1971.
[5] F. R. Hampel, E. M. Ronchetti, P. J. Rousseeuw, and W. A. Stahel. Robust Statistics: The Approach Based on Influence Functions. Wiley, 1986.
[6] X. He, D. G. Simpson, and S. L. Portnoy. Breakdown robustness of tests. Journal of the American Statistical Association, 85:446–452, 1990.
[7] P. J. Huber. Robust Statistics. Wiley, 1981.
[8] P. J. Huber and E. M. Ronchetti. Breakdown point. In Robust Statistics, page 8. John Wiley & Sons, Inc., 2009.
[9] A. G. Jørgensen, M. Löffler, and J. M. Phillips. Geometric computation on indecisive points. In WADS, 2011.
[10] A. D. Sarma, O. Benjelloun, A. Halevy, S. Nabar, and J. Widom. Representing uncertain data: models, properties, and algorithms. VLDBJ, 18:989–1019, 2009.
[11] A. F. Siegel. Robust regression using repeated medians. Biometrika, 82:242–244, 1982.
[12] P. Tang and J. M. Phillips. The robustness of estimator composition. Technical report, arXiv:1609.01226, 2016.
[13] E. Weiszfeld and F. Plastria. On the point for which the sum of the distances to n given points is minimum. Annals of Operations Research, 167:7–41, 2009.
[14] A. H. Welsh. The standard deviation. In Aspects of Statistical Inference, page 245. Wiley-Interscience, 1996.
Using Fast Weights to Attend to the Recent Past
Jimmy Ba
University of Toronto
Geoffrey Hinton
University of Toronto and Google Brain
[email protected]
[email protected]
Volodymyr Mnih
Google DeepMind
Joel Z. Leibo
Google DeepMind
Catalin Ionescu
Google DeepMind
[email protected]
[email protected]
[email protected]
Abstract
Until recently, research on artificial neural networks was largely restricted to systems with only two types of variable: Neural activities that represent the current
or recent input and weights that learn to capture regularities among inputs, outputs
and payoffs. There is no good reason for this restriction. Synapses have dynamics at many different time-scales and this suggests that artificial neural networks
might benefit from variables that change slower than activities but much faster
than the standard weights. These "fast weights" can be used to store temporary
memories of the recent past and they provide a neurally plausible way of implementing the type of attention to the past that has recently proved very helpful in
sequence-to-sequence models. By using fast weights we can avoid the need to
store copies of neural activity patterns.
1 Introduction
Ordinary recurrent neural networks typically have two types of memory that have very different time
scales, very different capacities and very different computational roles. The history of the sequence
currently being processed is stored in the hidden activity vector, which acts as a short-term memory
that is updated at every time step. The capacity of this memory is O(H) where H is the number
of hidden units. Long-term memory about how to convert the current input and hidden vectors into
the next hidden vector and a predicted output vector is stored in the weight matrices connecting the
hidden units to themselves and to the inputs and outputs. These matrices are typically updated at the
end of a sequence and their capacity is O(H 2 ) + O(IH) + O(HO) where I and O are the numbers
of input and output units.
Long short-term memory networks [Hochreiter and Schmidhuber, 1997] are a more complicated
type of RNN that work better for discovering long-range structure in sequences for two main reasons:
First, they compute increments to the hidden activity vector at each time step rather than recomputing
the full vector.¹ This encourages information in the hidden states to persist for much longer. Second,
they allow the hidden activities to determine the states of gates that scale the effects of the weights.
These multiplicative interactions allow the effective weights to be dynamically adjusted by the input
or hidden activities via the gates. However, LSTMs are still limited to a short-term memory capacity
of O(H) for the history of the current sequence.
Until recently, there was surprisingly little practical investigation of other forms of memory in recurrent nets despite strong psychological evidence that it exists and obvious computational reasons why
it was needed. There were occasional suggestions that neural networks could benefit from a third
form of memory that has much higher storage capacity than the neural activities but much faster
dynamics than the standard slow weights. This memory could store information specific to the history of the current sequence so that this information is available to influence the ongoing processing
¹ This assumes the "remember gates" of the LSTM memory cells are set to one.
without using up the memory capacity of the hidden activities. Hinton and Plaut [1987] suggested
that fast weights could be used to allow true recursion in a neural network and Schmidhuber [1993]
pointed out that a system of this kind could be trained end-to-end using backpropagation, but neither
of these papers actually implemented this method of achieving recursion.
2 Evidence from physiology that temporary memory may not be stored as neural activities
Processes like working memory, attention, and priming operate on a timescale of 100 ms to minutes. This is simultaneously too slow to be mediated by neural activations without dynamical attractor states (10 ms timescale) and too fast for long-term synaptic plasticity mechanisms to kick in (minutes to hours). While artificial neural network research has typically focused on methods to maintain temporary state in activation dynamics, that focus may be inconsistent with evidence that the brain also (or perhaps primarily) maintains temporary state information by short-term synaptic plasticity mechanisms [Tsodyks et al., 1998, Abbott and Regehr, 2004, Barak and Tsodyks, 2007].
The brain implements a variety of short-term plasticity mechanisms that operate on an intermediate timescale. For example, short-term facilitation is implemented by leftover [Ca2+] in the axon terminal after depolarization, while short-term depression is implemented by presynaptic neurotransmitter depletion [Zucker and Regehr, 2002]. Spike-time dependent plasticity can also be invoked on this timescale [Markram et al., 1997, Bi and Poo, 1998]. These plasticity mechanisms are all synapse-specific. Thus they are more accurately modeled by a memory with O(H²) capacity than the O(H) of standard recurrent artificial neural nets and LSTMs.
3 Fast Associative Memory
One of the main preoccupations of neural network research in the 1970s and early 1980s [Willshaw et al., 1969, Kohonen, 1972, Anderson and Hinton, 1981, Hopfield, 1982] was the idea that memories were not stored by somehow keeping copies of patterns of neural activity. Instead, these patterns were reconstructed when needed from information stored in the weights of an associative network, and the very same weights could store many different memories. An auto-associative memory that has N² weights cannot be expected to store more than N real-valued vectors with N components each. How close we can come to this upper bound depends on which storage rule we use. Hopfield nets use a simple, one-shot, outer-product storage rule and achieve a capacity of approximately 0.15N binary vectors using weights that require log(N) bits each. Much more efficient use can be made of the weights by using an iterative, error-correction storage rule to learn weights that can retrieve each bit of a pattern from all the other bits [Gardner, 1988], but for our purposes maximizing the capacity is less important than having a simple, non-iterative storage rule, so we will use an outer-product rule to store hidden activity vectors in fast weights that decay rapidly. The usual weights in an RNN will be called slow weights and they will learn by stochastic gradient descent in an objective function, taking into account the fact that changes in the slow weights will lead to changes in what gets stored automatically in the fast associative memory.
A fast associative memory has several advantages when compared with the type of memory assumed
by a Neural Turing Machine (NTM) [Graves et al., 2014], Neural Stack [Grefenstette et al., 2015], or
Memory Network [Weston et al., 2014]. First, it is not at all clear how a real brain would implement
the more exotic structures in these models, e.g., the tape of the NTM, whereas it is clear that the brain
could implement a fast associative memory in synapses with the appropriate dynamics. Second, in
a fast associative memory there is no need to decide where or when to write to memory and where
or when to read from memory. The fast memory is updated all the time and the writes are all
superimposed on the same fast changing component of the strength of each synapse. Every time the
input changes there is a transition to a new hidden state which is determined by a combination of
three sources of information: The new input via the slow input-to-hidden weights, C, the previous
hidden state via the slow transition weights, W , and the recent history of hidden state vectors via
the fast weights, A. The effect of the first two sources of information on the new hidden state can be
computed once and then maintained as a sustained boundary condition for a brief iterative settling
process which allows the fast weights to influence the new hidden state. Assuming that the fast
weights decay exponentially, we now show that the effect of the fast weights on the hidden vector
during an iterative settling phase is to provide an additional input that is proportional to the sum, over all recent hidden activity vectors, of the scalar product of that recent hidden vector with the current hidden activity vector, with each term in this sum weighted by the decay rate raised to the power of how long ago that hidden vector occurred. So fast weights act like a kind of attention to the recent past, but with the strength of the attention determined by the scalar product between the current hidden vector and the earlier hidden vector, rather than by a separate parameterized computation of the type used in neural machine translation models [Bahdanau et al., 2015].

[Figure 1: The fast associative memory model. The diagram shows the slow transition weights and the fast transition weights, with the sustained boundary condition feeding each iteration of the inner loop.]
The update rule for the fast memory weight matrix, A, is simply to multiply the current fast weights by a decay rate, λ, and add the outer product of the hidden state vector, h(t), multiplied by a learning rate, η:

A(t) = λ A(t−1) + η h(t) h(t)ᵀ.    (1)
The next vector of hidden activities, h(t+1), is computed in two steps. The "preliminary" vector h⁰(t+1) is determined by the combined effects of the input vector x(t) and the previous hidden vector: h⁰(t+1) = f(W h(t) + C x(t)), where W and C are slow weight matrices and f(·) is the nonlinearity used by the hidden units. The preliminary vector is then used to initiate an "inner loop" iterative process which runs for S steps and progressively changes the hidden state into h(t+1) = h^S(t+1):

h^{s+1}(t+1) = f([W h(t) + C x(t)] + A(t) h^s(t+1)),    (2)
where the terms in square brackets are the sustained boundary conditions. In a real neural net,
A could be implemented by rapidly changing synapses but in a computer simulation that uses sequences which have fewer time steps than the dimensionality of h, A will be of less than full rank
and it is more efficient to compute the term A(t)h^s(t+1) without ever computing the full fast weight
matrix, A. Assuming A is 0 at the beginning of the sequence,
A(t) = η Σ_{τ=1}^{t} λ^{t−τ} h(τ) h(τ)ᵀ    (3)

A(t) h^s(t+1) = η Σ_{τ=1}^{t} λ^{t−τ} h(τ) [h(τ)ᵀ h^s(t+1)]    (4)
The term in square brackets is just the scalar product of an earlier hidden state vector, h(τ), with the current hidden state vector, h^s(t+1), during the iterative inner loop. So at each iteration of the inner loop, the fast weight matrix is exactly equivalent to attending to past hidden vectors in proportion to their scalar product with the current hidden vector, weighted by a decay factor. During the inner loop iterations, attention will become more focussed on past hidden states that manage to attract the current hidden state.
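A minimal NumPy sketch of one fast-weights time step using the memory-as-attention form of equation (4) (our own illustration; the shapes, the ReLU nonlinearity, and the hyper-parameter values are placeholders):

    import numpy as np

    def fast_weights_step(x_t, h_prev, H_past, W, C, S=1, lam=0.95, eta=0.5):
        # H_past holds the earlier hidden vectors h(1), ..., h(t)
        boundary = W @ h_prev + C @ x_t  # sustained boundary condition
        hs = np.maximum(boundary, 0)     # preliminary vector h^0(t+1)
        for _ in range(S):               # inner loop, equations (2) and (4)
            attn = sum(eta * lam ** (len(H_past) - 1 - i) * h.dot(hs) * h
                       for i, h in enumerate(H_past))  # A(t) h^s without forming A
            hs = np.maximum(boundary + attn, 0)
        return hs

After computing h(t+1) this way, it is appended to H_past; this is exactly equivalent to the decayed outer-product update of A in equation (1).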
The equivalence between using a fast weight matrix and comparing with a set of stored hidden state
vectors is very helpful for computer simulations. It allows us to explore what can be done with fast
weights without incurring the huge penalty of having to abandon the use of mini-batches during
training. At first sight, mini-batches cannot be used because the fast weight matrix is different for
every sequence, but comparing with a set of stored hidden vectors does allow mini-batches.
3.1 Layer normalized fast weights
A potential problem with fast associative memory is that the scalar product of two hidden vectors
could vanish or explode depending on the norm of the hidden vectors. Recently, layer normalization
[Ba et al., 2016] has been shown to be very effective at stabilizing the hidden state dynamics in RNNs
and reducing training time. Layer normalization is applied to the vector of summed inputs to all the
recurrent units at a particular time step. It uses the mean and variance of the components of this
vector to re-center and re-scale those summed inputs. Then, before applying the nonlinearity, it includes a learned, neuron-specific bias and gain. We apply layer normalization to the fast associative
memory as follows:
h^{s+1}(t+1) = f(LN[W h(t) + C x(t) + A(t) h^s(t+1)]),    (5)
where LN[·] denotes layer normalization. We found that applying layer normalization on each
iteration of the inner loop makes the fast associative memory more robust to the choice of learning
rate and decay hyper-parameters. For the rest of the paper, fast weight models are trained using
layer normalization and the outer product learning rule with fast learning rate of 0.5 and decay rate
of 0.95, unless otherwise noted.
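For the layer-normalized inner loop in equation (5), the only change to the sketch above is to wrap the summed input of each iteration in a normalization (again a minimal version of ours, omitting the learned per-neuron gain and bias):

    def layer_norm(z, eps=1e-5):
        return (z - z.mean()) / np.sqrt(z.var() + eps)

    # inside the inner loop of fast_weights_step:
    #     hs = np.maximum(layer_norm(boundary + attn), 0)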
4 Experimental results
To demonstrate the effectiveness of the fast associative memory, we first investigated the problems
of associative retrieval (section 4.1) and MNIST classification (section 4.2). We compared fast
weight models to regular RNNs and LSTM variants. We then applied the proposed fast weights
to a facial expression recognition task using a fast associative memory model to store the results
of processing at one level while examining a sequence of details at a finer level (section 4.3). The
hyper-parameters of the experiments were selected through grid search on the validation set. All
the models were trained using mini-batches of size 128 and the Adam optimizer [Kingma and Ba,
2014]. A description of the training protocols and the hyper-parameter settings we used can be
found in the Appendix. Lastly, we show that fast weights can also be used effectively to implement
reinforcement learning agents with memory (section 4.4).
4.1 Associative retrieval
We start by demonstrating that the method we propose for storing and retrieving temporary memories works effectively for a toy task to which it is very well suited. Consider a task where multiple
key-value pairs are presented in a sequence. At the end of the sequence, one of the keys is presented
and the model must predict the value that was temporarily associated with the key. We used strings
that contained characters from English alphabet, together with the digits 0 to 9. To construct a training sequence, we first randomly sample a character from the alphabet without replacement. This is
the first key. Then a single digit is sampled as the associated value for that key. After generating a
sequence of K character-digit pairs, one of the K different characters is selected at random as the
query and the network must predict the associated digit. Some examples of such string sequences
and their targets are shown below:
Input string    Target
c9k8j3f1??c     9
j0a5s5z2??a     5

where '?' is the token used to separate the query from the key-value pairs. We generated 100,000 training
examples, 10,000 validation examples and 20,000 test examples. To solve this task, a standard RNN
has to end up with hidden activities that somehow store all of the key-value pairs after the keys and
values are presented sequentially. This makes it a significant challenge for models only using slow
weights.
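A short sketch of how such training strings can be generated (our own; the paper does not publish its generation code):

    import random
    import string

    def make_example(K=4):
        keys = random.sample(string.ascii_lowercase, K)
        values = [random.choice(string.digits) for _ in range(K)]
        query = random.choice(keys)
        target = values[keys.index(query)]
        seq = ''.join(k + v for k, v in zip(keys, values)) + '??' + query
        return seq, target

    random.seed(0)
    print(make_example())  # pairs in the style of ('c9k8j3f1??c', '9')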
We used a neural network with a single recurrent layer for this experiment. The recurrent network
processes the input sequence one character at a time. The input character is first converted into a
4
learned 100-dimensional embedding vector which then provides input to the recurrent layer.² The output of the recurrent layer at the end of the sequence is then processed by another hidden layer of 100 ReLUs before the final softmax layer. We augment the ReLU RNN with a fast associative memory and compare it to an LSTM model with the same architecture. Although the original LSTMs do not have explicit long-term storage capacity, recent work from Danihelka et al. [2016] extended LSTMs by adding complex associative memory. In our experiments, we compared fast associative memory to both LSTM variants.

² To make the architecture for this task more similar to the architecture for the next task, we first compute a 50-dimensional embedding vector and then expand this to a 100-dimensional embedding.

Table 1: Classification error rate comparison on the associative retrieval task.

Model        | R=20   | R=50   | R=100
IRNN         | 62.11% | 60.23% | 0.34%
LSTM         | 60.81% | 1.85%  | 0%
A-LSTM       | 60.13% | 1.62%  | 0%
Fast weights | 1.81%  | 0%     | 0%

[Figure 2: Comparison of the test negative log likelihood on the associative retrieval task with 50 recurrent hidden units; curves for A-LSTM 50, IRNN 50, LSTM 50, and FW 50 over training updates.]
Figure 2 and Table 1 show that when the number of recurrent units is small, the fast associative
memory significantly outperforms the LSTMs with the same number of recurrent units. The result
fits with our hypothesis that the fast associative memory allows the RNN to use its recurrent units
more effectively. In addition to having higher retrieval accuracy, the model with fast weights also
converges faster than the LSTM models.
4.2 Integrating glimpses in visual attention models
Despite their many successes, convolutional neural networks are computationally expensive and the
representations they learn can be hard to interpret. Recently, visual attention models [Mnih et al.,
2014, Ba et al., 2015, Xu et al., 2015] have been shown to overcome some of the limitations in
ConvNets. One can understand what signals the algorithm is using by seeing where the model is
looking. Also, the visual attention model is able to selectively focus on important parts of visual
space and thus avoid any detailed processing of much of the background clutter. In this section,
we show that visual attention models can use fast weights to store information about object parts,
though we use a very restricted set of glimpses that do not correspond to natural parts of the objects.
Given an input image, a visual attention model computes a sequence of glimpses over regions of the
image. The model not only has to determine where to look next, but also has to remember what it has
seen so far in its working memory so that it can make the correct classification later. Visual attention
models can learn to find multiple objects in a large static input image and classify them correctly,
but the learnt glimpse policies are typically over-simplistic: They only use a single scale of glimpses
and they tend to scan over the image in a rigid way. Human eye movements and fixations are far
more complex. The ability to focus on different parts of a whole object at different scales allows
humans to apply the very same knowledge in the weights of the network at many different scales,
but it requires some form of temporary memory to allow the network to integrate what it discovered
in a set of glimpses. Improving the model's ability to remember recent glimpses should help the
visual attention model to discover non-trivial glimpse policies. Because the fast weights can store
all the glimpse information in the sequence, the hidden activity vector is freed up to learn how to
intelligently integrate visual information and retrieve the appropriate memory content for the final
classifier.
To explicitly verify that larger memory capacity is beneficial to visual attention-based models, we
simplify the learning process in the following way: First, we provide a pre-defined glimpse control
signal so the model knows where to attend rather than having to learn the control policy through
reinforcement learning. Second, we introduce an additional control signal to the memory cells so
the attention model knows when to store the glimpse information. A typical visual attention model is complex and has high variance in its performance due to the need to learn the policy network and the classifier at the same time. Our simplified learning procedure enables us to discern the performance improvement contributed by using fast weights to remember the recent past.

[Figure 3: The multi-level fast associative memory model. The diagram shows integration transition weights, slow transition weights, and fast transition weights, together with a control signal that updates the fast weights and wipes out the hidden state.]

Table 2: Classification error rates on MNIST.

Model        | 50 features | 100 features | 200 features
IRNN         | 12.95%      | 1.95%        | 1.42%
LSTM         | 12%         | 1.55%        | 1.10%
ConvNet      | 1.81%       | 1.00%        | 0.9%
Fast weights | 7.21%       | 1.30%        | 0.85%
We consider a simple recurrent visual attention model that has a similar architecture to the RNN from
the previous experiment. It does not predict where to attend but rather is given a fixed sequence of
locations: the static input image is broken down into four non-overlapping quadrants recursively
with two scale levels. The four coarse regions, down-sampled to 7 × 7, along with their four 7 × 7 quadrants, are presented in a single sequence as shown in Figure 1. Notice that the two glimpse
scales form a two-level hierarchy in the visual space. In order to solve this task successfully, the
attention model needs to integrate the glimpse information from different levels of the hierarchy.
One solution is to use the model's hidden states to both store and integrate the glimpses of different scales. A much more efficient solution is to use a temporary "cache" to store any unfinished glimpse computation when processing the glimpses from a finer scale in the hierarchy. Once the computation is finished at that scale, the results can be integrated with the partial results at the higher level by "popping" the previous result from the "cache". Fast weights, therefore, can act as a neurally plausible "cache" for storing partial results. The slow weights of the same model can
then specialize in integrating glimpses at the same scale. Because the slow weights are shared for
all glimpse scales, the model should be able to store the partial results at several levels in the same
set of fast weights, though we have only demonstrated the use of fast weights for storage at a single
level.
We evaluated the multi-level visual attention model on the MNIST handwritten digit dataset. MNIST
is a well-studied problem on which many other techniques have been benchmarked. It contains the
ten classes of handwritten digits, ranging from 0 to 9. The task is to predict the class label of an
isolated and roughly normalized 28×28 image of a digit. The glimpse sequence, in this case, consists of 24 patches of 7 × 7 pixels.
Table 2 compares classification results for a ReLU RNN with a multi-level fast associative memory against an LSTM that gets the same sequence of glimpses. Again the result shows that when
the number of hidden units is limited, fast weights give a significant improvement over the other
models.

[Figure 4: Examples of the near frontal faces from the MultiPIE dataset.]

Table 3: Classification accuracy comparison on the facial expression recognition task.

Model        | Test accuracy
IRNN         | 81.11
LSTM         | 81.32
ConvNet      | 88.23
Fast Weights | 86.34

As we increase the memory capacities, the multi-level fast associative memory consistently
outperforms the LSTM in classification accuracy.
Unlike models that must integrate a sequence of glimpses, convolutional neural networks process all
the glimpses in parallel and use layers of hidden units to hold all their intermediate computational
results. We further demonstrate the effectiveness of the fast weights by comparing to a three-layer
convolutional neural network that uses the same patches as the glimpses presented to the visual
attention model. From Table 2, we see that the multi-level model with fast weights reaches a very
similar performance to the ConvNet model without requiring any biologically implausible weight
sharing.
4.3 Facial expression recognition
To further investigate the benefits of using fast weights in the multi-level visual attention model, we performed facial expression recognition tasks on the CMU Multi-PIE face database [Gross et al., 2010]. The dataset was preprocessed to align each face by eyes and nose fiducial points. It was downsampled to 48 × 48 greyscale. The full dataset contains 15 photos taken from cameras with different viewpoints for each illumination × expression × identity × session condition. We used only the images taken from the three central cameras corresponding to −15°, 0°, 15° views, since facial expressions were not discernible from the more extreme viewpoints. The resulting dataset contained > 100,000 images. 317 identities appeared in the training set with the remaining 20 identities in the test set.
Given the input face image, the goal is to classify the subject's facial expression into one of six different categories: neutral, smile, surprise, squint, disgust and scream. The task is more realistic and challenging than the previous MNIST experiments. Not only does the dataset have unbalanced numbers of labels, but some of the expressions, for example squint and disgust, are very hard to distinguish. In order to perform well on this task, the models need to generalize over different lighting conditions and viewpoints. We used the same multi-level attention model as in the MNIST experiments with 200 recurrent hidden units. The model sequentially attends to non-overlapping 12×12 pixel patches at two different scales and there are, in total, 24 glimpses. Similarly, we designed a two-layer ConvNet that has 12×12 receptive fields.
From Table 3, we see that the multi-level fast weights model that knows when to store information
outperforms the LSTM and the IRNN. The results are consistent with previous MNIST experiments.
However, ConvNet is able to perform better than the multi-level attention model on this near frontal
face dataset. We think the efficient weight-sharing and architectural engineering in the ConvNet
combined with the simultaneous availability of all the information at each level of processing allows
the ConvNet to generalize better in this task. Our use of a rigid and predetermined policy for where
to glimpse eliminates one of the main potential advantages of the multi-level attention model: It can
process informative details at high resolution whilst ignoring most of the irrelevant details. To realize
this advantage we will need to combine the use of fast weights with the learning of complicated
policies.
[Figure 5: (a) Sample screen from the game "Catch". (b) Performance curves (average reward vs. training steps) for Catch with N = 16, M = 3. (c) Performance curves for Catch with N = 24, M = 5. Curves shown for RNN, RNN+FW, and LSTM.]
4.4 Agents with memory
While different kinds of memory and attention have been studied extensively in the supervised
learning setting [Graves, 2014, Mnih et al., 2014, Bahdanau et al., 2015], the use of such models for
learning long range dependencies in reinforcement learning has received less attention.
We compare different memory architectures on a partially observable variant of the game "Catch"
described in [Mnih et al., 2014]. The game is played on an N x N screen of binary pixels and each
episode consists of N frames. Each trial begins with a single pixel, representing a ball, appearing
somewhere in the first row of the screen, and a two-pixel "paddle" controlled by the agent in the
bottom row. After observing a frame, the agent gets to either keep the paddle stationary or move it
right or left by one pixel. The ball descends by a single pixel after each frame. The episode ends
when the ball pixel reaches the bottom row and the agent receives a reward of +1 if the paddle
touches the ball and a reward of -1 if it doesn't. Solving the fully observable task is straightforward
and requires the agent to move the paddle to the column with the ball. We make the task partially
observable by providing the agent blank observations after the M-th frame. Solving the partially
observable version of the game requires remembering the position of the paddle and ball after M
frames and moving the paddle to the correct position using the stored information.
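To make the environment concrete, below is a minimal Python sketch of the partially observable Catch game described above; the frame-blanking after M steps is the only change relative to the fully observable game. The class and method names are ours, not the paper's, and details such as exact frame counting are simplified.

```python
import numpy as np

class PartialCatch:
    """N x N binary-pixel Catch; observations are blanked after frame M."""
    def __init__(self, N, M, rng):
        self.N, self.M = N, M
        self.ball = [0, int(rng.integers(N))]   # (row, col) of the falling ball
        self.paddle = int(rng.integers(N - 1))  # left edge of the 2-pixel paddle
        self.t = 0

    def observe(self):
        frame = np.zeros((self.N, self.N))
        if self.t < self.M:                     # blank observation after M frames
            frame[self.ball[0], self.ball[1]] = 1
            frame[self.N - 1, self.paddle:self.paddle + 2] = 1
        return frame

    def step(self, action):                     # action in {-1, 0, +1}
        self.paddle = int(np.clip(self.paddle + action, 0, self.N - 2))
        self.ball[0] += 1                       # the ball descends one pixel
        self.t += 1
        done = self.ball[0] == self.N - 1
        reward = 0.0
        if done:
            hit = self.paddle <= self.ball[1] <= self.paddle + 1
            reward = 1.0 if hit else -1.0
        return self.observe(), reward, done
```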
We used the recently proposed asynchronous advantage actor-critic method [Mnih et al., 2016] to
train agents with three types of memory on different sizes of the partially observable Catch task. The
three agents included a ReLU RNN, an LSTM, and a fast weights RNN. Figure 5 shows learning
progress of the different agents on two variants of the game (N = 16, M = 3 and N = 24, M = 5).
The agent using the fast weights architecture as its policy representation (shown in green) is able to
learn faster than the agents using ReLU RNN or LSTM to represent the policy. The improvement
obtained by fast weights is also more significant on the larger version of the game which requires
more memory.
5
Conclusion
This paper contributes to machine learning by showing that the performance of RNNs on a variety
of different tasks can be improved by introducing a mechanism that allows each new state of the
hidden units to be attracted towards recent hidden states in proportion to their scalar products with
the current state. Layer normalization makes this kind of attention work much better. This is a form
of attention to the recent past that is somewhat similar to the attention mechanism that has recently
been used to dramatically improve the sequence-to-sequence RNNs used in machine translation.
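As a concrete illustration of the mechanism summarized here, the following is a minimal NumPy sketch of one fast-weights transition, assuming the decaying outer-product update A <- lam*A + eta*h*h^T and a short inner settling loop; layer normalization, which helps this attention mechanism work much better, is omitted for brevity, and the function name and hyperparameter values are ours.

```python
import numpy as np

def fast_weights_step(h, x, A, W, C, lam=0.95, eta=0.5, inner_steps=1):
    """One recurrent transition with a fast associative memory A.
    A is updated toward the outer product of the current hidden state, so the
    inner loop attracts the next state toward recent hidden states in
    proportion to their scalar products with it."""
    A = lam * A + eta * np.outer(h, h)         # decaying Hebbian fast weights
    pre = W @ h + C @ x                        # slow-weight contribution
    h_s = np.maximum(0.0, pre)                 # initial proposal (ReLU)
    for _ in range(inner_steps):
        h_s = np.maximum(0.0, pre + A @ h_s)   # settle using the fast weights
    return h_s, A
```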
The paper has interesting implications for computational neuroscience and cognitive science. The
ability of people to recursively apply the very same knowledge and processing apparatus to a whole
sentence and to an embedded clause within that sentence or to a complex object and to a major part
of that object has long been used to argue that neural networks are not a good model of higher-level
cognitive abilities. By using fast weights to implement an associative memory for the recent past,
we have shown how the states of neurons could be freed up so that the knowledge in the connections
of a neural network can be applied recursively. This overcomes the objection that these models can
only do recursion by storing copies of neural activity vectors, which is biologically implausible.
References
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
Geoffrey E Hinton and David C Plaut. Using fast weights to deblur old memories. In Proceedings of the Ninth Annual Conference of the Cognitive Science Society, pages 177–186. Erlbaum, 1987.
J Schmidhuber. Reducing the ratio between learning complexity and number of time varying variables in fully recurrent nets. In ICANN93, pages 460–463. Springer, 1993.
Misha Tsodyks, Klaus Pawelzik, and Henry Markram. Neural networks with dynamic synapses. Neural Computation, 10(4):821–835, 1998.
LF Abbott and Wade G Regehr. Synaptic computation. Nature, 431(7010):796–803, 2004.
Omri Barak and Misha Tsodyks. Persistent activity in neural networks with dynamic synapses. PLoS Comput Biol, 3(2):e35, 2007.
Robert S Zucker and Wade G Regehr. Short-term synaptic plasticity. Annual Review of Physiology, 64(1):355–405, 2002.
Henry Markram, Joachim Lübke, Michael Frotscher, and Bert Sakmann. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275(5297):213–215, 1997.
Guo-qiang Bi and Mu-ming Poo. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. The Journal of Neuroscience, 18(24):10464–10472, 1998.
David J Willshaw, O Peter Buneman, and Hugh Christopher Longuet-Higgins. Non-holographic associative memory. Nature, 1969.
Teuvo Kohonen. Correlation matrix memories. Computers, IEEE Transactions on, 100(4):353–359, 1972.
James A Anderson and Geoffrey E Hinton. Models of information processing in the brain. Parallel Models of Associative Memory, pages 9–48, 1981.
John J Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79(8):2554–2558, 1982.
Elizabeth Gardner. The space of interactions in neural network models. Journal of Physics A: Mathematical and General, 21(1):257, 1988.
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to transduce with unbounded memory. In Advances in Neural Information Processing Systems, pages 1819–1827, 2015.
Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014.
D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations, 2015.
J. Ba, R. Kiros, and G. Hinton. Layer normalization. arXiv:1607.06450, 2016.
D. Kingma and J. L. Ba. Adam: a method for stochastic optimization. arXiv:1412.6980, 2014.
Ivo Danihelka, Greg Wayne, Benigno Uria, Nal Kalchbrenner, and Alex Graves. Associative long short-term memory. arXiv preprint arXiv:1602.03032, 2016.
V. Mnih, N. Heess, A. Graves, and K. Kavukcuoglu. Recurrent models of visual attention. In Neural Information Processing Systems, 2014.
J. Ba, V. Mnih, and K. Kavukcuoglu. Multiple object recognition with visual attention. In International Conference on Learning Representations, 2015.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, 2015.
Ralph Gross, Iain Matthews, Jeffrey Cohn, Takeo Kanade, and Simon Baker. Multi-PIE. Image and Vision Computing, 28(5):807–813, 2010.
A. Graves. Generating sequences with recurrent neural networks. arXiv:1308.0850, 2014.
Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy P Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, 2016.
| 6057 | [vw_text bag-of-words features omitted]
5,589 | 6,058 | Tight Complexity Bounds for Optimizing Composite
Objectives
Blake Woodworth
Toyota Technological Institute at Chicago
Chicago, IL, 60637
[email protected]
Nathan Srebro
Toyota Technological Institute at Chicago
Chicago, IL, 60637
[email protected]
Abstract
We provide tight upper and lower bounds on the complexity of minimizing the
average of m convex functions using gradient and prox oracles of the component
functions. We show a significant gap between the complexity of deterministic vs
randomized optimization. For smooth functions, we show that accelerated gradient descent (AGD) and an accelerated variant of SVRG are optimal in the deterministic and randomized settings respectively, and that a gradient oracle is sufficient for the optimal rate. For non-smooth functions, having access to prox oracles
reduces the complexity and we present optimal methods based on smoothing that
improve over methods using just gradient accesses.
1
Introduction
We consider minimizing the average of $m \ge 2$ convex functions:
$$\min_{x \in \mathcal{X}} \; F(x) := \frac{1}{m} \sum_{i=1}^{m} f_i(x) \tag{1}$$
where $\mathcal{X} \subseteq \mathbb{R}^d$ is a closed, convex set, and where the algorithm is given access to the following
gradient (or subgradient in the case of non-smooth functions) and prox oracle for the components:
$$h_F(x, i, \beta) = \Big( f_i(x), \; \nabla f_i(x), \; \mathrm{prox}_{f_i}(x, \beta) \Big) \tag{2}$$
where
$$\mathrm{prox}_{f_i}(x, \beta) = \arg\min_{u \in \mathcal{X}} \; f_i(u) + \frac{\beta}{2} \|x - u\|^2 \tag{3}$$
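To make the oracle concrete, here is a minimal Python sketch of $h_F$ for hypothetical one-dimensional components $f_i(u) = |u - a_i|$ (our choice for illustration, not from the paper); for this $f_i$ the prox in (3) has a closed soft-thresholding form.

```python
import numpy as np

def prox_abs(x, a, beta):
    """Prox of f(u) = |u - a| with parameter beta (eq. 3):
    soft-thresholding of x toward a with threshold 1/beta."""
    return a + np.sign(x - a) * max(abs(x - a) - 1.0 / beta, 0.0)

def oracle(x, i, beta, a):
    """Oracle h_F(x, i, beta) of eq. (2) for components f_i(u) = |u - a[i]|."""
    val = abs(x - a[i])
    grad = np.sign(x - a[i])   # a subgradient of f_i at x
    return val, grad, prox_abs(x, a[i], beta)
```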
A natural question is how to leverage the prox oracle, and how much benefit it provides over gradient
access alone. The prox oracle is potentially much more powerful, as it provides global, rather than
local, information about the function. For example, for a single function (m = 1), one prox oracle
call (with $\beta = 0$) is sufficient for exact optimization. Several methods have recently been suggested
for optimizing a sum or average of several functions using prox accesses to each component, both in
the distributed setting where each component might be handled on a different machine (e.g. ADMM
[7], DANE [18], DISCO [20]) or for functions that can be decomposed into several "easy" parts
(e.g. PRISMA [13]). But as far as we are aware, no meaningful lower bound was previously known
on the number of prox oracle accesses required even for the average of two functions (m = 2).
The optimization of composite objectives of the form (1) has also been extensively studied in the
context of minimizing empirical risk over m samples. Recently, stochastic methods such as SDCA
[16], SAG [14], SVRG [8], and other variants, have been presented which leverage the finite nature
of the problem to reduce the variance in stochastic gradient estimates and obtain guarantees that
dominate both batch and stochastic gradient descent. As methods with improved complexity, such
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Convex, $\|x\| \le B$, $L$-Lipschitz:
- Deterministic upper: $\frac{mLB}{\epsilon}$ (Section 3); deterministic lower: $\frac{mLB}{\epsilon}$ (Section 4)
- Randomized upper: $\frac{L^2B^2}{\epsilon^2} \wedge \left(m\log\frac{1}{\epsilon} + \frac{\sqrt{m}\,LB}{\epsilon}\right)$ (SGD, A-SVRG); randomized lower: $\frac{L^2B^2}{\epsilon^2} \wedge \left(m + \frac{\sqrt{m}\,LB}{\epsilon}\right)$ (Section 5)

$L$-Lipschitz, $\lambda$-strongly convex:
- Deterministic upper: $\frac{mL}{\sqrt{\lambda\epsilon}}$ (Section 3); deterministic lower: $\frac{mL}{\sqrt{\lambda\epsilon}}$ (Section 4)
- Randomized upper: $\frac{L^2}{\lambda\epsilon} \wedge \left(m\log\frac{1}{\epsilon} + \frac{\sqrt{m}\,L}{\sqrt{\lambda\epsilon}}\right)$ (SGD, A-SVRG); randomized lower: $\frac{L^2}{\lambda\epsilon} \wedge \left(m + \frac{\sqrt{m}\,L}{\sqrt{\lambda\epsilon}}\right)$ (Section 5)

Convex, $\|x\| \le B$, $\beta$-smooth:
- Deterministic upper: $m\sqrt{\frac{\beta B^2}{\epsilon}}$ (AGD); deterministic lower: $m\sqrt{\frac{\beta B^2}{\epsilon}}$ (Section 4)
- Randomized upper: $m + \sqrt{\frac{m\beta B^2}{\epsilon}}$ (A-SVRG); randomized lower: $m + \sqrt{\frac{m\beta B^2}{\epsilon}}$ (Section 5)

$\beta$-smooth, $\lambda$-strongly convex:
- Deterministic upper: $m\sqrt{\frac{\beta}{\lambda}}\log\frac{\epsilon_0}{\epsilon}$ (AGD); deterministic lower: $m\sqrt{\frac{\beta}{\lambda}}\log\frac{\epsilon_0}{\epsilon}$ (Section 4)
- Randomized upper: $\left(m + \sqrt{\frac{m\beta}{\lambda}}\right)\log\frac{\epsilon_0}{\epsilon}$ (A-SVRG); randomized lower: $m + \sqrt{\frac{m\beta}{\lambda}}\log\frac{\epsilon_0}{\epsilon}$ (Section 5)

Table 1: Upper and lower bounds on the number of grad-and-prox oracle accesses needed to find $\epsilon$-suboptimal
solutions for each function class. These are exact up to constant factors except for the lower bounds for smooth
and strongly convex functions, which hide extra $\log\frac{\beta}{\lambda}$ and $\log\frac{m\beta}{\lambda}$ factors for deterministic and randomized algorithms. Here, $\epsilon_0$ is the suboptimality of the point 0.
as accelerated SDCA [17], accelerated SVRG, and Katyusha [3], have been presented, researchers
have also tried to obtain lower bounds on the best possible complexity in this setting, but as we
survey below, these have not been satisfactory so far.
In this paper, after briefly surveying methods for smooth, composite optimization, we present methods for optimizing non-smooth composite objectives, which show that prox oracle access can indeed
be leveraged to improve over methods using merely subgradient access (see Section 3). We then
turn to studying lower bounds. We consider algorithms that access the objective F only through
the oracle $h_F$ and provide lower bounds on the number of such oracle accesses (and thus the runtime) required to find $\epsilon$-suboptimal solutions. We consider optimizing both Lipschitz (non-smooth)
functions and smooth functions, and guarantees that do and do not depend on strong convexity, distinguishing between deterministic optimization algorithms and randomized algorithms. Our upper
and lower bounds are summarized in Table 1.
As shown in the table, we provide matching upper and lower bounds (up to a log factor) for all
function and algorithm classes. In particular, our bounds establish the optimality (up to log factors) of accelerated SDCA, SVRG, and SAG for randomized finite-sum optimization, and also the
optimality of our deterministic smoothing algorithms for non-smooth composite optimization.
On the power of gradient vs prox oracles. For non-smooth functions, we show that having access
to prox oracles for the components can reduce the polynomial dependence on $\epsilon$ from $1/\epsilon^2$ to $1/\epsilon$, or
from $1/(\lambda\epsilon)$ to $1/\sqrt{\lambda\epsilon}$ for $\lambda$-strongly convex functions. However, all of the optimal complexities for
smooth functions can be attained with only component gradient access using accelerated gradient
descent (AGD) or accelerated SVRG. Thus the worst-case complexity cannot be improved (at least
not significantly) by using the more powerful prox oracle.
On the power of randomization. We establish a significant gap between deterministic and randomized algorithms for finite-sum problems. Namely, the dependence on the number of components
must be linear in $m$ for any deterministic algorithm, but can be reduced to $\sqrt{m}$ (in the typically
significant term) using randomization. We emphasize that the randomization here is only in the
algorithm, not in the oracle. We always assume the oracle returns an exact answer (for the requested component) and is not a stochastic oracle. The distinction is that the algorithm is allowed
to flip coins in deciding what operations and queries to perform but the oracle must return an exact
answer to that query (of course, the algorithm could simulate a stochastic oracle).
Prior Lower Bounds. Several authors recently presented lower bounds for optimizing (1) in the
smooth and strongly convex setting using component gradients. Agarwal and Bottou [1] presented
a lower bound of $\Omega\big(m + \sqrt{\frac{m\beta}{\lambda}}\log\frac{1}{\epsilon}\big)$. However, their bound is valid only for deterministic algorithms (thus not including SDCA, SVRG, SAG, etc.); we not only consider randomized algorithms,
but also show a much higher lower bound for deterministic algorithms (i.e. the bound of Agarwal
and Bottou is loose). Improving upon this, Lan [9] shows a similar lower bound for a restricted
class of randomized algorithms: the algorithm must select which component to query for a gradient
by drawing an index from a fixed distribution, but the algorithm must otherwise be deterministic
in how it uses the gradients, and its iterates must lie in the span of the gradients it has received.
This restricted class includes SAG, but not SVRG nor perhaps other realistic attempts at improving
over these. Furthermore, both bounds allow only gradient accesses, not prox computations. Thus
SDCA, which requires prox accesses, and potential variants are not covered by such lower bounds.
We prove a similar lower bound to Lan's, but our analysis is much more general and applies to any
randomized algorithm, making any sequence of queries to a gradient and prox oracle, and without
assuming that iterates lie in the span of previous responses. In addition to smooth functions, we
also provide lower bounds for non-smooth problems which were not considered by these previous
attempts. Another recent observation [15] was that with access only to random component subgradients without knowing the component's identity, an algorithm must make $\Omega(m^2)$ queries to optimize
well. This shows how relatively subtle changes in the oracle can have a dramatic effect on the complexity of the problem. Since the oracle we consider is quite powerful, our lower bounds cover a
very broad family of algorithms, including SAG, SVRG, and SDCA.
Our deterministic lower bounds are inspired by a lower bound on the number of rounds of communication required for optimization when each fi is held by a different machine and when iterates lie in
the span of certain permitted calculations [5]. Our construction for m = 2 is similar to theirs (though
in a different setting), but their analysis considers neither scaling with m (which has a different role
in their setting) nor randomization.
Notation and Definitions. We use $\|\cdot\|$ to denote the standard Euclidean norm on $\mathbb{R}^d$. We say that
a function $f$ is $L$-Lipschitz continuous on $\mathcal{X}$ if $\forall x, y \in \mathcal{X} \; |f(x) - f(y)| \le L\|x - y\|$; $\beta$-smooth
on $\mathcal{X}$ if it is differentiable and its gradient is $\beta$-Lipschitz on $\mathcal{X}$; and $\lambda$-strongly convex on $\mathcal{X}$ if
$\forall x, y \in \mathcal{X} \; f(y) \ge f(x) + \langle \nabla f(x), y - x \rangle + \frac{\lambda}{2}\|y - x\|^2$. We consider optimizing (1) under
four combinations of assumptions: each component $f_i$ is either $L$-Lipschitz or $\beta$-smooth, and either
$F(x)$ is $\lambda$-strongly convex or its domain is bounded, $\mathcal{X} \subseteq \{x : \|x\| \le B\}$.
2
Optimizing Smooth Sums
We briefly review the best known methods for optimizing (1) when the components are $\beta$-smooth,
yielding the upper bounds on the right half of Table 1. These upper bounds can be obtained using
only component gradient access, without need for the prox oracle.
We can obtain exact gradients of $F(x)$ by computing all $m$ component gradients $\nabla f_i(x)$. Running
accelerated gradient descent (AGD) [12] on $F(x)$ using these exact gradients achieves the upper
complexity bounds for deterministic algorithms and smooth problems (see Table 1).
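As a sketch of this deterministic baseline, the snippet below computes the exact gradient from the $m$ component gradients and runs a standard (unconstrained) variant of Nesterov's accelerated gradient descent. It assumes grads is a list of $m$ callables returning component gradients; the constrained case would additionally project onto $\mathcal{X}$ after each step.

```python
import numpy as np

def full_gradient(grads, x):
    """Exact gradient of F(x) = (1/m) sum_i f_i(x): average the m component gradients."""
    return sum(g(x) for g in grads) / len(grads)

def agd(grads, x0, beta, steps):
    """Accelerated gradient descent on a beta-smooth F (one standard variant);
    each iteration costs m component-gradient oracle calls."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(steps):
        x_next = y - full_gradient(grads, y) / beta       # gradient step from extrapolated point
        t_next = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        y = x_next + ((t - 1) / t_next) * (x_next - x)    # momentum / extrapolation
        x, t = x_next, t_next
    return x
```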
SAG [14], SVRG [8] and related methods use randomization to sample components, but also leverage the finite nature
of the objective to control the variance of the gradient estimator used. Accelerating these methods
using the Catalyst framework [10] ensures that for $\lambda$-strongly convex objectives we have
$\mathbb{E}\big[F(x^{(k)})\big] - F(x^*) < \epsilon$ after $k = O\big(\big(m + \sqrt{\frac{m\beta}{\lambda}}\big)\log^2\frac{\epsilon_0}{\epsilon}\big)$ iterations, where
$F(0) - F(x^*) = \epsilon_0$. Katyusha [3] is a more direct approach to accelerating SVRG which avoids
extraneous log-factors, yielding the complexity $k = O\big(\big(m + \sqrt{\frac{m\beta}{\lambda}}\big)\log\frac{\epsilon_0}{\epsilon}\big)$ indicated in Table 1.
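For reference, here is a minimal sketch of the basic (non-accelerated) SVRG inner/outer loop that these accelerated methods build on; it is illustrative only, with our own function names and a fixed step size.

```python
import numpy as np

def svrg(grads, x0, lr, epochs, inner_steps):
    """Minimal SVRG sketch: an outer loop computes a full-gradient snapshot,
    an inner loop takes variance-reduced stochastic steps."""
    m = len(grads)
    x_snap = x0.copy()
    for _ in range(epochs):
        mu = sum(g(x_snap) for g in grads) / m        # full gradient at the snapshot
        x = x_snap.copy()
        for _ in range(inner_steps):
            i = np.random.randint(m)
            v = grads[i](x) - grads[i](x_snap) + mu   # variance-reduced gradient estimate
            x = x - lr * v
        x_snap = x
    return x_snap
```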
When $F$ is not strongly convex, adding a regularizer to the objective and instead optimizing
$F_\lambda(x) = F(x) + \frac{\lambda}{2}\|x\|^2$ with $\lambda = \epsilon/B^2$ results in an oracle complexity of
$O\big(\big(m + \sqrt{\frac{m\beta B^2}{\epsilon}}\big)\log\frac{\epsilon_0}{\epsilon}\big)$.
The log-factor in the second term can be removed using the more delicate reduction of Allen-Zhu
and Hazan [4], which involves optimizing $F_\lambda(x)$ for progressively smaller values of $\lambda$, yielding the
upper bound in the table.
Katyusha and Catalyst-accelerated SAG or SVRG use only gradients of the components. Accelerated SDCA [17] achieves a similar complexity using gradient and prox oracle access.
3
Leveraging Prox Oracles for Lipschitz Sums
In this section, we present algorithms for leveraging the prox oracle to minimize (1) when each
component is $L$-Lipschitz. This will be done by using the prox oracle to "smooth" each component,
and optimizing the new, smooth sum which approximates the original problem. This idea was used
in order to apply Katyusha [3] and accelerated SDCA [17] to non-smooth objectives. We are not
aware of a previous explicit presentation of the AGD-based deterministic algorithm, which achieves
the deterministic upper complexity indicated in Table 1.
The key is using a prox oracle to obtain gradients of the $\beta$-Moreau envelope of a non-smooth function, $f$, defined as:
$$f^{(\beta)}(x) = \inf_{u \in \mathcal{X}} \; f(u) + \frac{\beta}{2}\|x - u\|^2 \tag{4}$$
Lemma 1 ([13, Lemma 2.2], [6, Proposition 12.29], following [11]). Let $f$ be convex and $L$-Lipschitz continuous. For any $\beta > 0$,
1. $f^{(\beta)}$ is $\beta$-smooth
2. $\nabla (f^{(\beta)})(x) = \beta\big(x - \mathrm{prox}_f(x, \beta)\big)$
3. $f^{(\beta)}(x) \le f(x) \le f^{(\beta)}(x) + \frac{L^2}{2\beta}$
Consequently, we can consider the smoothed problem
$$\min_{x \in \mathcal{X}} \; \tilde{F}^{(\beta)}(x) := \frac{1}{m} \sum_{i=1}^{m} f_i^{(\beta)}(x). \tag{5}$$
While $\tilde{F}^{(\beta)}$ is not, in general, the $\beta$-Moreau envelope of $F$, it is $\beta$-smooth, we can calculate the
gradient of its components using the oracle $h_F$, and $\tilde{F}^{(\beta)}(x) \le F(x) \le \tilde{F}^{(\beta)}(x) + \frac{L^2}{2\beta}$. Thus,
to obtain an $\epsilon$-suboptimal solution to (1) using $h_F$, we set $\beta = L^2/\epsilon$ and apply any algorithm
which can optimize (5) using gradients of the $L^2/\epsilon$-smooth components, to within $\epsilon/2$ accuracy.
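A minimal sketch of this smoothing step, assuming each entry of proxes is a callable (x, beta) -> prox_{f_i}(x, beta) implementing (3); by Lemma 1(2) each component gradient costs exactly one prox call.

```python
def smoothed_grad(prox_i, x, beta):
    """Gradient of the beta-Moreau envelope f_i^(beta) via Lemma 1(2):
    grad = beta * (x - prox_{f_i}(x, beta))."""
    return beta * (x - prox_i(x, beta))

def smoothed_sum_grad(proxes, x, beta):
    """Gradient of the smoothed sum (5); one prox call per component,
    and all m calls are independent of each other."""
    return sum(smoothed_grad(p, x, beta) for p in proxes) / len(proxes)
```

Feeding smoothed_sum_grad into an AGD loop such as the one sketched in Section 2 gives the deterministic algorithm of this section; note that the $m$ prox calls are independent, which is what makes the method parallelizable.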
With the rates presented in Section 2, using AGD on (5) yields a complexity of $O\big(\frac{mLB}{\epsilon}\big)$ in the
deterministic setting. When the functions are $\lambda$-strongly convex, smoothing with a fixed $\beta$ results in
a spurious log-factor. To avoid this, we again apply the reduction of Allen-Zhu and Hazan [4], this
time optimizing $\tilde{F}^{(\beta)}$ for increasingly large values of $\beta$. This leads to the upper bound of $O\big(\frac{mL}{\sqrt{\lambda\epsilon}}\big)$
when used with AGD (see Appendix A for details).
Similarly, we can apply an accelerated randomized algorithm (such as Katyusha) to the smooth
problem $\tilde{F}^{(\beta)}$ to obtain complexities of $O\big(m\log\frac{\epsilon_0}{\epsilon} + \frac{\sqrt{m}\,LB}{\epsilon}\big)$ and $O\big(m\log\frac{\epsilon_0}{\epsilon} + \frac{\sqrt{m}\,L}{\sqrt{\lambda\epsilon}}\big)$; this
matches the presentation of Allen-Zhu [3] and is similar to that of Shalev-Shwartz and Zhang [17].
Finally, if $m > L^2B^2/\epsilon^2$ or $m > L^2/(\lambda\epsilon)$, stochastic gradient descent is a better randomized
alternative, yielding complexities of $O(L^2B^2/\epsilon^2)$ or $O(L^2/(\lambda\epsilon))$.
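For completeness, a minimal sketch of projected stochastic subgradient descent in this setting; lr is assumed to be a callable step-size schedule such as lambda t: c / np.sqrt(t), and the per-step cost is a single component access, independent of $m$.

```python
import numpy as np

def projected_sgd(subgrads, x0, B, lr, steps):
    """Projected stochastic subgradient descent over {x : ||x|| <= B},
    returning the running average of the iterates."""
    x, avg = x0.copy(), x0.copy()
    for t in range(1, steps + 1):
        i = np.random.randint(len(subgrads))
        x = x - lr(t) * subgrads[i](x)
        norm = np.linalg.norm(x)
        if norm > B:                  # project back onto the ball
            x = x * (B / norm)
        avg += (x - avg) / t          # running average of the iterates
    return avg
```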
4
Lower Bounds for Deterministic Algorithms
We now turn to establishing lower bounds on the oracle complexity of optimizing (1). We first
consider only deterministic optimization algorithms. What we would like to show is that for any
deterministic optimization algorithm we can construct a "hard" function for which the algorithm
cannot find an $\epsilon$-suboptimal solution until it has made many oracle accesses. Since the algorithm
is deterministic, we can construct such a function by simulating the (deterministic) behavior of the
algorithm. This can be viewed as a game, where an adversary controls the oracle being used by
the algorithm. At each iteration the algorithm queries the oracle with some triplet $(x, i, \beta)$ and
the adversary responds with an answer. This answer must be consistent with all previous answers,
but the adversary ensures it is also consistent with a composite function $F$ that the algorithm is
far from optimizing. The "hard" function is then gradually defined in terms of the behavior of the
optimization algorithm.
To help us formulate our constructions, we define a "round" of queries as a series of queries in which
$\lceil m/2 \rceil$ distinct functions $f_i$ are queried. The first round begins with the first query and continues until
exactly $\lceil m/2 \rceil$ unique functions have been queried. The second round begins with the next query, and
continues until exactly $\lceil m/2 \rceil$ more distinct components have been queried in the second round, and so
on until the algorithm terminates. This definition is useful for analysis but requires no assumptions
about the algorithm's querying strategy.
4.1
Non-Smooth Components
We begin by presenting a lower bound for deterministic optimization of (1) when each component
$f_i$ is convex and $L$-Lipschitz continuous, but is not necessarily strongly convex, on the domain
$\mathcal{X} = \{x : \|x\| \le B\}$. Without loss of generality, we can consider $L = B = 1$. We will construct
functions of the following form:
$$f_i(x) = \frac{1}{\sqrt{2}}\,\big|b - \langle x, v_0\rangle\big| + \frac{1}{2\sqrt{k}} \sum_{r=1}^{k} \delta_{i,r}\,\big|\langle x, v_{r-1}\rangle - \langle x, v_r\rangle\big|. \tag{6}$$
where $k = \lfloor \frac{1}{12\epsilon} \rfloor$, $b = \frac{1}{\sqrt{k+1}}$, and $\{v_r\}$ is an orthonormal set of vectors in $\mathbb{R}^d$ chosen according to
the behavior of the algorithm such that $v_r$ is orthogonal to all points at which the algorithm queries
$h_F$ before round $r$, and where $\delta_{i,r}$ are indicators chosen so that $\delta_{i,r} = 1$ if the algorithm does
not query component $i$ in round $r$ (and zero otherwise). To see how this is possible, consider the
following truncations of (6):
$$f_i^t(x) = \frac{1}{\sqrt{2}}\,\big|b - \langle x, v_0\rangle\big| + \frac{1}{2\sqrt{k}} \sum_{r=1}^{t-1} \delta_{i,r}\,\big|\langle x, v_{r-1}\rangle - \langle x, v_r\rangle\big| \tag{7}$$
During each round $t$, the adversary answers queries according to $f_i^t$, which depends only on $v_r, \delta_{i,r}$
for $r < t$, i.e. from previous rounds. When the round is completed, $\delta_{i,t}$ is determined and $v_t$ is
chosen to be orthogonal to the vectors $\{v_0, \ldots, v_{t-1}\}$ as well as every point queried by the algorithm
so far, thus defining $f_i^{t+1}$ for the next round. In Appendix B.1 we prove that these responses based
on $f_i^t$ are consistent with $f_i$.
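To illustrate the shape of the construction (not the adversary's adaptive choices), here is a sketch that builds a fixed instance of (6) for $L = B = 1$ with random orthonormal $v_r$; the function names and the use of a fixed delta matrix are ours.

```python
import numpy as np

def hard_instance(eps, d, rng):
    """A sketch of the construction in (6): fixed random orthonormal v_r and
    user-supplied indicators delta; in the paper the adversary instead picks
    these adaptively against the (deterministic) algorithm."""
    k = int(np.floor(1.0 / (12 * eps)))
    b = 1.0 / np.sqrt(k + 1)
    V, _ = np.linalg.qr(rng.standard_normal((d, k + 1)))  # columns v_0 .. v_k

    def f(x, delta_i):
        # delta_i[r - 1] == 1 iff component i was not queried in round r (r = 1..k)
        val = abs(b - V[:, 0] @ x) / np.sqrt(2)
        for r in range(1, k + 1):
            val += delta_i[r - 1] * abs(V[:, r - 1] @ x - V[:, r] @ x) / (2 * np.sqrt(k))
        return val

    return f, V, b, k
```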
The algorithm can only learn $v_r$ after it completes round $r$; until then every iterate is orthogonal
to it by construction. The average of these functions reaches its minimum of $F(x^*) = 0$ at
$x^* = b\sum_{r=0}^{k} v_r$, so we can view optimizing these functions as the task of discovering the vectors $v_r$:
even if only $v_k$ is missing, a suboptimality better than $b/(6\sqrt{k}) > \epsilon$ cannot be achieved. Therefore,
the deterministic algorithm must complete at least $k$ rounds of optimization, each comprising at
least $\lceil m/2 \rceil$ queries to $h_F$ in order to optimize $F$. The key to this construction is that even though
each term $|\langle x, v_{r-1}\rangle - \langle x, v_r\rangle|$ appears in $m/2$ components, and hence has a strong effect on the
average $F(x)$, we can force a deterministic algorithm to make $\Omega(m)$ queries during each round
before it finds the next relevant term. We obtain (for complete proof see Appendix B.1):
Theorem 1. For any $L, B > 0$, any $0 < \epsilon < \frac{LB}{12}$, any $m \ge 2$, and any deterministic algorithm
$A$ with access to $h_F$, there exists a dimension $d = O\big(\frac{mLB}{\epsilon}\big)$, and $m$ functions $f_i$ defined over
$\mathcal{X} = \{x \in \mathbb{R}^d : \|x\| \le B\}$, which are convex and $L$-Lipschitz continuous, such that in order to find
a point $\hat{x}$ for which $F(\hat{x}) - F(x^*) < \epsilon$, $A$ must make $\Omega\big(\frac{mLB}{\epsilon}\big)$ queries to $h_F$.
Furthermore, we can always reduce optimizing a function over $\|x\| \le B$ to optimizing a strongly
convex function by adding the regularizer $\epsilon\|x\|^2/(2B^2)$ to each component, implying (see complete
proof in Appendix B.2):
Theorem 2. For any $L, \lambda > 0$, any $0 < \epsilon < \frac{L^2}{288\lambda}$, any $m \ge 2$, and any deterministic algorithm
$A$ with access to $h_F$, there exists a dimension $d = O\big(\frac{mL}{\sqrt{\lambda\epsilon}}\big)$, and $m$ functions $f_i$ defined over
$\mathcal{X} \subseteq \mathbb{R}^d$, which are $L$-Lipschitz continuous and $\lambda$-strongly convex, such that in order to find a point
$\hat{x}$ for which $F(\hat{x}) - F(x^*) < \epsilon$, $A$ must make $\Omega\big(\frac{mL}{\sqrt{\lambda\epsilon}}\big)$ queries to $h_F$.
4.2
Smooth Components
When the components $f_i$ are required to be smooth, the lower bound construction is similar to (6),
except it is based on squared differences instead of absolute differences. We consider the functions:
$$f_i(x) = \frac{1}{8}\left( -2a\langle x, v_0\rangle + \delta_{i,1}\langle x, v_0\rangle^2 + \delta_{i,k}\langle x, v_k\rangle^2 + \sum_{r=1}^{k} \delta_{i,r}\big(\langle x, v_{r-1}\rangle - \langle x, v_r\rangle\big)^2 \right) \tag{8}$$
where $\delta_{i,r}$ and $v_r$ are as before. Again, we can answer queries at round $t$ based only on $\delta_{i,r}, v_r$ for
$r < t$. This construction yields the following lower bounds (full details in Appendix B.3):
Theorem 3. For any $\beta, B, \epsilon > 0$, any $m \ge 2$, and any deterministic algorithm $A$ with access to
$h_F$, there exists a sufficiently large dimension $d = O\big(m\sqrt{\beta B^2/\epsilon}\big)$, and $m$ functions $f_i$ defined
over $\mathcal{X} = \{x \in \mathbb{R}^d : \|x\| \le B\}$, which are convex and $\beta$-smooth, such that in order to find a point
$\hat{x} \in \mathbb{R}^d$ for which $F(\hat{x}) - F(x^*) < \epsilon$, $A$ must make $\Omega\big(m\sqrt{\beta B^2/\epsilon}\big)$ queries to $h_F$.
In the strongly convex case, we use a very similar construction, adding the term $\frac{\lambda}{2}\|x\|^2$, which
gives the following bound (see Appendix B.4):
Theorem 4. For any $\beta, \lambda > 0$ such that $\frac{\beta}{\lambda} > 73$, any $\epsilon > 0$, any $\epsilon_0 > 3\epsilon$, any $m \ge 2$, and
any deterministic algorithm $A$ with access to $h_F$, there exists a sufficiently large dimension
$d = O\big(m\sqrt{\frac{\beta}{\lambda}}\log\frac{\epsilon_0}{\epsilon}\big)$, and $m$ functions $f_i$ defined over $\mathcal{X} \subseteq \mathbb{R}^d$, which are $\beta$-smooth and $\lambda$-strongly convex and where $F(0) - F(x^*) = \epsilon_0$, such that in order to find a point $\hat{x}$ for which
$F(\hat{x}) - F(x^*) < \epsilon$, $A$ must make $\Omega\big(m\sqrt{\frac{\beta}{\lambda}}\log\frac{\epsilon_0}{\epsilon}\big)$ queries to $h_F$.
5
Lower Bounds for Randomized Algorithms
We now turn to randomized algorithms for (1). In the deterministic constructions, we relied on
being able to set $v_r$ and $\delta_{i,r}$ based on the predictable behavior of the algorithm. This is impossible
for randomized algorithms: we must choose the "hard" function before we know the random choices
the algorithm will make, so the function must be "hard" more generally than before.
Previously, we chose vectors vr orthogonal to all previous queries made by the algorithm. For randomized algorithms this cannot be ensured. However, if we choose orthonormal vectors vr randomly
in a high dimensional space, they will be nearly orthogonal to queries with high probability. Slightly
modifying the absolute or squared difference from before makes near orthogonality sufficient. This
issue increases the required dimension but does not otherwise affect the lower bounds.
More problematic is our inability to anticipate the order in which the algorithm will query the components, precluding the use of $\delta_{i,r}$. In the deterministic setting, if a term revealing a new $v_r$ appeared
in half of the components, we could ensure that the algorithm must make $m/2$ queries to find it.
However, a randomized algorithm could find it in two queries in expectation, which would eliminate
the linear dependence on $m$ in the lower bound! Alternatively, if only one component included the
term, a randomized algorithm would indeed need $\Omega(m)$ queries to find it, but that term's effect on
the suboptimality of $F$ would be scaled down by $m$, again eliminating the dependence on $m$.
To establish a $\Omega(\sqrt{m})$ lower bound for randomized algorithms we must take a new approach. We
define $\frac{m}{2}$ pairs of functions which operate on $\frac{m}{2}$ orthogonal subspaces of $\mathbb{R}^d$. Each pair of
functions resembles the constructions from the previous section, but since there are many of them,
the algorithm must solve $\Omega(m)$ separate optimization problems in order to optimize $F$.
5.1
Lipschitz Continuous Components
First consider the non-smooth, non-strongly-convex setting and assume for simplicity $m$ is even
(otherwise we simply let the last function be zero). We define the helper function $\psi_c$, which replaces
the absolute value operation and makes our construction resistant to small inner products between
iterates and not-yet-discovered components:
$$\psi_c(z) = \max\big(0, \; |z| - c\big) \tag{9}$$
Next, we define $m/2$ pairs of functions, indexed by $i = 1 \ldots m/2$:
$$f_{i,1}(x) = \frac{1}{\sqrt{2}}\,\big|b - \langle x, v_{i,0}\rangle\big| + \frac{1}{2\sqrt{k}} \sum_{r \text{ even}} \psi_c\big(\langle x, v_{i,r-1}\rangle - \langle x, v_{i,r}\rangle\big)$$
$$f_{i,2}(x) = \frac{1}{2\sqrt{k}} \sum_{r \text{ odd}} \psi_c\big(\langle x, v_{i,r-1}\rangle - \langle x, v_{i,r}\rangle\big) \tag{10}$$
where $\{v_{i,r}\}_{r=0\ldots k,\; i=1\ldots m/2}$ are random orthonormal vectors and $k = \Theta\big(\frac{1}{\epsilon\sqrt{m}}\big)$. With $c$ sufficiently
small and the dimensionality sufficiently high, with high probability the algorithm only learns the
identity of new vectors $v_{i,r}$ by alternately querying $f_{i,1}$ and $f_{i,2}$; so revealing all $k+1$ vectors
requires at least $k+1$ total queries. Until $v_{i,k}$ is revealed, an iterate is $\Omega(\epsilon)$-suboptimal on
$(f_{i,1} + f_{i,2})/2$. From here, we show that an $\epsilon$-suboptimal solution to $F(x)$ can be found only after at
least $k+1$ queries are made to at least $m/4$ pairs, for a total of $\Omega(mk)$ queries. This time, since
the optimum $x^*$ will need to have inner product $b$ with $\Omega(mk)$ vectors $v_{i,r}$, we need to have
$b = \Theta\big(\frac{1}{\sqrt{mk}}\big) = \Theta\big(\sqrt{\epsilon}/m^{1/4}\big)$, and the total number of queries is $\Omega(mk) = \Omega\big(\frac{\sqrt{m}}{\epsilon}\big)$. The $\Omega(m)$ term of
the lower bound follows trivially since we require $\epsilon = O(1/\sqrt{m})$ (proofs in Appendix C.1):
Theorem 5. For any $L, B > 0$, any $0 < \epsilon < \frac{LB}{10\sqrt{m}}$, any $m \ge 2$, and any randomized algorithm $A$
with access to $h_F$, there exists a dimension $d = O\big(\frac{L^4B^6}{\epsilon^4}\log\frac{LB}{\epsilon}\big)$, and $m$ functions $f_i$ defined
over $\mathcal{X} = \{x \in \mathbb{R}^d : \|x\| \le B\}$, which are convex and $L$-Lipschitz continuous, such that to find a
point $\hat{x}$ for which $\mathbb{E}[F(\hat{x}) - F(x^*)] < \epsilon$, $A$ must make $\Omega\big(m + \frac{\sqrt{m}\,LB}{\epsilon}\big)$ queries to $h_F$.
An added regularizer gives the result for strongly convex functions (see Appendix C.2):
Theorem 6. For any $L, \lambda > 0$, any $0 < \epsilon < \frac{L^2}{200\lambda m}$, any $m \ge 2$, and any randomized algorithm $A$
with access to $h_F$, there exists a dimension $d = O\big(\frac{L^4}{\lambda^3\epsilon}\log\frac{L}{\sqrt{\lambda\epsilon}}\big)$, and $m$ functions $f_i$ defined over
$\mathcal{X} \subseteq \mathbb{R}^d$, which are $L$-Lipschitz continuous and $\lambda$-strongly convex, such that in order to find a point
$\hat{x}$ for which $\mathbb{E}[F(\hat{x}) - F(x^*)] < \epsilon$, $A$ must make $\Omega\big(m + \frac{\sqrt{m}\,L}{\sqrt{\lambda\epsilon}}\big)$ queries to $h_F$.
The large dimension required by these lower bounds is the cost of omitting the assumption that the
algorithm's queries lie in the span of previous oracle responses. If we do assume that the queries lie
in that span, the necessary dimension is only on the order of the number of oracle queries needed.
When $\epsilon = \Omega(LB/\sqrt{m})$ in the non-strongly convex case or $\epsilon = \Omega\big(L^2/(\lambda m)\big)$ in the strongly
convex case, the lower bounds for randomized algorithms presented above do not apply. Instead, we
can obtain a lower bound based on an information theoretic argument. We first uniformly randomly
choose a parameter $p$, which is either $(1/2 - 2\epsilon)$ or $(1/2 + 2\epsilon)$. Then for $i = 1, \ldots, m$, in the non-strongly convex case we make $f_i(x) = x$ with probability $p$ and $f_i(x) = -x$ with probability $1 - p$.
Optimizing $F(x)$ to within $\epsilon$ accuracy then implies recovering the bias of the Bernoulli random
variable, which requires $\Omega(1/\epsilon^2)$ queries based on a standard information theoretic result [2, 19].
Setting $f_i(x) = \pm x + \frac{\lambda}{2}\|x\|^2$ gives a $\Omega(1/(\lambda\epsilon))$ lower bound in the $\lambda$-strongly convex setting. This
is formalized in Appendix C.5.
5.2
Smooth Components
When the functions $f_i$ are smooth and not strongly convex, we define another helper function $\phi_c$:
$$\phi_c(z) = \begin{cases} 0 & |z| \le c \\ 2(|z| - c)^2 & c < |z| \le 2c \\ z^2 - 2c^2 & |z| > 2c \end{cases} \tag{11}$$
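For concreteness, here is a direct implementation of the two helper functions (9) and (11); the Python names psi_c and phi_c are ours.

```python
def psi_c(z, c):
    """Non-smooth helper of eq. (9): zero on [-c, c], |z| - c outside."""
    return max(0.0, abs(z) - c)

def phi_c(z, c):
    """Smooth (Huber-like) helper of eq. (11): flat near zero, then a
    quadratic bridge on (c, 2c], and quadratic growth beyond 2c."""
    a = abs(z)
    if a <= c:
        return 0.0
    if a <= 2 * c:
        return 2.0 * (a - c) ** 2
    return z ** 2 - 2.0 * c ** 2
```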
and the following pairs of functions for $i = 1, \ldots, m/2$:
$$f_{i,1}(x) = \frac{1}{16}\left( \langle x, v_{i,0}\rangle^2 - 2a\langle x, v_{i,0}\rangle + \sum_{r \text{ even}} \phi_c\big(\langle x, v_{i,r-1}\rangle - \langle x, v_{i,r}\rangle\big) \right)$$
$$f_{i,2}(x) = \frac{1}{16}\left( \phi_c\big(\langle x, v_{i,k}\rangle\big) + \sum_{r \text{ odd}} \phi_c\big(\langle x, v_{i,r-1}\rangle - \langle x, v_{i,r}\rangle\big) \right) \tag{12}$$
with $v_{i,r}$ as before. The same arguments apply, after replacing the absolute difference with the squared
difference. A separate argument is required in this case for the $\Omega(m)$ term in the bound, which we
show using a construction involving $m$ simple linear functions (see Appendix C.3).
Theorem 7. For any $\beta, B, \epsilon > 0$, any $m \ge 2$, and any randomized algorithm $A$ with access to $h_F$,
there exists a sufficiently large dimension $d = O\big(\frac{\beta^2 B^6}{\epsilon^2}\log\frac{\beta B^2}{\epsilon} + \frac{\beta B^2}{\epsilon}\, m \log m\big)$ and $m$ functions
$f_i$ defined over $\mathcal{X} = \{x \in \mathbb{R}^d : \|x\| \le B\}$, which are convex and $\beta$-smooth, such that to find a point
$\hat{x} \in \mathbb{R}^d$ for which $\mathbb{E}[F(\hat{x}) - F(x^*)] < \epsilon$, $A$ must make $\Omega\big(m + \sqrt{\frac{m\beta B^2}{\epsilon}}\big)$ queries to $h_F$.
In the strongly convex case, we add the term $\frac{\lambda}{2}\|x\|^2$ to $f_{i,1}$ and $f_{i,2}$ (see Appendix C.4) to obtain:
Theorem 8. For any $m \ge 2$, any $\beta, \lambda > 0$ such that $\frac{\beta}{\lambda} > 161m$, any $\epsilon > 0$, any $\epsilon_0 > \frac{60\epsilon\beta}{\lambda m}$, and
any randomized algorithm $A$, there exists a dimension $d = O\big(\frac{\beta^{2.5}}{\lambda^{2.5}}\,\frac{\epsilon_0}{\epsilon}\log\frac{m\epsilon_0}{\epsilon} + \frac{\beta^3}{\lambda^3}\,\frac{\epsilon_0}{\epsilon}\log m\big)$,
domain $\mathcal{X} \subseteq \mathbb{R}^d$, $x_0 \in \mathcal{X}$, and $m$ functions $f_i$ defined on $\mathcal{X}$ which are $\beta$-smooth and $\lambda$-strongly
convex, and such that $F(x_0) - F(x^*) = \epsilon_0$, such that in order to find a point $\hat{x} \in \mathcal{X}$ such that
$\mathbb{E}[F(\hat{x}) - F(x^*)] < \epsilon$, $A$ must make $\Omega\big(m + \sqrt{\frac{m\beta}{\lambda}}\log\frac{\epsilon_0}{\epsilon}\big)$ queries to $h_F$.
Remark: We consider (1) as a constrained optimization problem, thus the minimizer of $F$ could be
achieved on the boundary of $\mathcal{X}$, meaning that the gradient need not vanish. If we make the additional
assumption that the minimizer of $F$ lies on the interior of $\mathcal{X}$ (and is thus the unconstrained global
minimum), Theorems 1-8 all still apply, with a slight modification to Theorems 3 and 7. Since the
gradient now needs to vanish on $\mathcal{X}$, 0 is always $O(\beta B^2)$-suboptimal, and only values of $\epsilon$ in the
range $0 < \epsilon < \frac{\beta B^2}{128}$ and $0 < \epsilon < \frac{\beta B^2}{9128}$ result in a non-trivial lower bound (see Remarks at the end
of Appendices B.3 and C.3).
6
Conclusion
We provide a tight (up to a log factor) understanding of optimizing finite sum problems of the form
(1) using a component prox oracle.
Randomized optimization of (1) has been the subject of much research in the past several years, starting with the presentation of SDCA and SAG, and continuing with accelerated variants. Obtaining
lower bounds can be very useful for better understanding the problem, for knowing where it might
or might not be possible to improve or where different assumptions would be needed to improve,
and for establishing optimality of optimization methods. Indeed, several attempts have been made
at lower bounds for the finite sum setting [1, 9]. But as we explain in the introduction, these were
unsatisfactory and covered only limited classes of methods. Here we show that in a fairly general
sense, accelerated SDCA, SVRG, SAG, and Katyusha are optimal up to a log factor. Improving on their runtime would require additional assumptions, or perhaps a stronger oracle. However,
even if given "full" access to the component functions, all algorithms that we can think of utilize
this information to calculate a prox vector. Thus, it is unclear what realistic oracle would be more
powerful than the prox oracle we consider.
Our results highlight the power of randomization, showing that no deterministic algorithm can beat
the linear dependence on $m$ and reduce it to the $\sqrt{m}$ dependence of the randomized algorithms.
The deterministic algorithm for non-smooth problems that we present in Section 3 is also of interest in its own right. It avoids randomization, which is not usually problematic, but makes it fully
parallelizable, unlike the optimal stochastic methods. Consider, for example, a supervised learning
problem where $f_i(x) = \ell(\langle \phi_i, x\rangle, y_i)$ is the (non-smooth) loss on a single training example $(\phi_i, y_i)$,
and the data is distributed across machines. Calculating a prox oracle involves applying the Fenchel
conjugate of the loss function $\ell$, but even if a closed form is not available, this is often easy to compute numerically, and is used in algorithms such as SDCA. But unlike SDCA, which is inherently
sequential, we can calculate all $m$ prox operations in parallel on the different machines, average the
resulting gradients of the smoothed function, and take an accelerated gradient step to implement our
optimal deterministic algorithm. This method attains a recent lower bound for distributed optimization, resolving a question raised by Arjevani and Shamir [5], and when the number of machines is
very large improves over all other known distributed optimization methods for the problem.
In studying finite sum problems, we were forced to explicitly study lower bounds for randomized
optimization as opposed to stochastic optimization (where the source of randomness is the oracle,
not the algorithm). Even for the classic problem of minimizing a smooth function using a first order
oracle, we could not locate a published proof that applies to randomized algorithms. We provide a
simple construction using $c$-insensitive differences that allows us to easily obtain such lower bounds
without reverting to assuming the iterates are spanned by previous responses (as was done, e.g., in
[9]), and could potentially be useful for establishing randomized lower bounds also in other settings.
Acknowledgements: We thank Ohad Shamir for his helpful discussions and for pointing out [4].
References
[1] Alekh Agarwal and Léon Bottou. A lower bound for the optimization of finite sums. arXiv preprint arXiv:1410.0723, 2014.
[2] Alekh Agarwal, Martin J Wainwright, Peter L Bartlett, and Pradeep K Ravikumar. Information-theoretic lower bounds on the oracle complexity of convex optimization. In Advances in Neural Information Processing Systems, pages 1–9, 2009.
[3] Zeyuan Allen-Zhu. Katyusha: The first truly accelerated stochastic gradient descent. arXiv preprint arXiv:1603.05953, 2016.
[4] Zeyuan Allen-Zhu and Elad Hazan. Optimal black-box reductions between optimization objectives. arXiv preprint arXiv:1603.05642, 2016.
[5] Yossi Arjevani and Ohad Shamir. Communication complexity of distributed convex learning and optimization. In Advances in Neural Information Processing Systems, pages 1747–1755, 2015.
[6] Heinz H Bauschke and Patrick L Combettes. Convex analysis and monotone operator theory in Hilbert spaces. Springer Science & Business Media, 2011.
[7] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[8] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315–323, 2013.
[9] Guanghui Lan. An optimal randomized incremental gradient method. arXiv preprint arXiv:1507.02000, 2015.
[10] Hongzhou Lin, Julien Mairal, and Zaid Harchaoui. A universal catalyst for first-order optimization. In Advances in Neural Information Processing Systems, pages 3366–3374, 2015.
[11] Yu Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.
[12] Yurii Nesterov. A method of solving a convex programming problem with convergence rate O(1/k²). Soviet Mathematics Doklady, 27(2):372–376, 1983.
[13] Francesco Orabona, Andreas Argyriou, and Nathan Srebro. PRISMA: Proximal iterative smoothing algorithm. arXiv preprint arXiv:1206.2372, 2012.
[14] Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. arXiv preprint arXiv:1309.2388, 2013.
[15] Shai Shalev-Shwartz. Stochastic optimization for machine learning. Slides of presentation at "Optimization Without Borders 2016", http://www.di.ens.fr/~aspremon/Houches/talks/Shai.pdf, 2016.
[16] Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss. The Journal of Machine Learning Research, 14(1):567–599, 2013.
[17] Shai Shalev-Shwartz and Tong Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. Mathematical Programming, 155(1-2):105–145, 2016.
[18] Ohad Shamir, Nathan Srebro, and Tong Zhang. Communication efficient distributed optimization using an approximate Newton-type method. arXiv preprint arXiv:1312.7853, 2013.
[19] Bin Yu. Assouad, Fano, and Le Cam. In Festschrift for Lucien Le Cam, pages 423–435. Springer, 1997.
[20] Yuchen Zhang and Lin Xiao. Communication-efficient distributed optimization of self-concordant empirical loss. arXiv preprint arXiv:1501.00263, 2015.
| 6058 | [vw_text bag-of-words features omitted]
5,590 | 6,059 | Long-term causal effects via behavioral game theory
Panagiotis (Panos) Toulis
Econometrics & Statistics, Booth School
University of Chicago
Chicago, IL, 60637
[email protected]
David C. Parkes
Department of Computer Science
Harvard University
Cambridge, MA, 02138
[email protected]
Abstract
Planned experiments are the gold standard in reliably comparing the causal effect
of switching from a baseline policy to a new policy. One critical shortcoming of
classical experimental methods, however, is that they typically do not take into
account the dynamic nature of response to policy changes. For instance, in an
experiment where we seek to understand the effects of a new ad pricing policy on
auction revenue, agents may adapt their bidding in response to the experimental
pricing changes. Thus, causal effects of the new pricing policy after such an adaptation period, the long-term causal effects, are not captured by the classical methodology even though they clearly are more indicative of the value of the new policy.
Here, we formalize a framework to define and estimate long-term causal effects
of policy changes in multiagent economies. Central to our approach is behavioral
game theory, which we leverage to formulate the ignorability assumptions that are
necessary for causal inference. Under such assumptions we estimate long-term
causal effects through a latent space approach, where a behavioral model of how
agents act conditional on their latent behaviors is combined with a temporal model
of how behaviors evolve over time.
1 Introduction
A multiagent economy is comprised of agents interacting under specific economic rules. A common
problem of interest is to experimentally evaluate changes to such rules, also known as treatments, on
an objective of interest. For example, an online ad auction platform is a multiagent economy, where
one problem is to estimate the effect of raising the reserve price on the platform?s revenue. Assessing
causality of such effects is a challenging problem because there is a conceptual discrepancy between
what needs to be estimated and what is available in the data, as illustrated in Figure 1.
What needs to be estimated is the causal effect of a policy change, which is defined as the difference
between the objective value when the economy is treated, i.e., when all agents interact under the
new rules, relative to when the same economy is in control, i.e., when all agents interact under the
baseline rules. Such definition of causal effects is logically necessitated from the designer?s task,
which is to select either the treatment or the control policy based on their estimated revenues, and
then apply such policy to all agents in the economy. The long-term causal effect is the causal effect
defined after the system has stabilized, and is more representative of the value of policy changes
in dynamical systems. Thus, in Figure 1 the long-term causal effect is the difference between the
objective values at the top and bottom endpoints, marked as the "targets of inference".
What is available in the experimental data, however, typically comes from designs such as the so-called A/B test, where we randomly assign some agents to the treated economy (new rules B) and
the others to the control economy (baseline rules A), and then compare the outcomes. In Figure 1
the experimental data are depicted as the solid time-series in the middle of the plot, marked as the
"observed data".
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Figure 1: The two inferential tasks for causal inference in multiagent economies. First, infer agent actions
across treatment assignments (y-axis), particularly, the assignment where all agents are in the treated economy
(top assignment, Z = 1), and the assignment where all agents are in the control economy (bottom assignment,
Z = 0). Second, infer across time, from t0 (last observation time) to long-term T . What we seek in order to
evaluate the causal effect of the new treatment is the difference between the objectives (e.g., revenue) at the two
inferential target endpoints.
Therefore the challenge in estimating long-term causal effects is that we generally need to perform
two inferential tasks simultaneously, namely,
(i) infer outcomes across possible experimental policy assignments (y-axis in Figure 1), and
(ii) infer long-term outcomes from short-term experimental data (x-axis in Figure 1).
The first task is commonly known as the "fundamental problem of causal inference" [14, 19] because it underscores the impossibility of observing in the same experiment the outcomes for both
policy assignments that define the causal effect; i.e., that we cannot observe in the same experiment
both the outcomes when all agents are treated and the outcomes when all agents are in control, the
assignments of which are denoted by Z = 1 and Z = 0, respectively, in Figure 1. In fact the
role of experimental design, as conceived by R.A. Fisher [8], is exactly to quantify the uncertainty
about such causal effects that cannot be observed due to the aforementioned fundamental problem,
by using standard errors that can be observed in a carefully designed experiment.
The second task, however, is unique to causal inference in dynamical systems, such as the multiagent
economies that we study in this paper, and has received limited attention so far. Here, we argue that
it is crucial to study long-term causal effects, i.e., effects measured after the system has stabilized,
because such effects are more representative of the value of policy changes. If our analysis focused
only on the observed data part depicted in Figure 1, then policy evaluation would reflect transient
effects that might differ substantially from the long-term effects. For instance, raising the reserve
price in an auction might increase revenue in the short-term but as agents adapt their bids, or switch
to another platform altogether, the long-term effect could be a net decrease in revenue [13].
1.1 Related work and our contributions
There have been several important projects related to causal inference in multiagent economies. For
instance, Ostrovsky and Schwarz [16] evaluated the effects of an increase in the reserve price of
Yahoo! ad auctions on revenue. Auctions were randomly assigned to an increased reserve price
treatment, and the effect was estimated using difference-in-differences (DID), which is a popular
econometric method [6, 7, 16]. In relation to Figure 1, DID extrapolates across assignments (y-axis)
and across time (x-axis) by making a strong additivity assumption [1, 3, Section 5.2], specifically,
by assuming that the dependence of revenue on reserve price and time is additive.
In a structural approach, Athey et al. [4] studied the effects of auction format (ascending versus
sealed bid) on competition for timber tracts. In relation to Figure 1, their approach extrapolates
across assignments by assuming that agent individual valuations for tracts are independent of the
treatment assignment, and extrapolates across time by assuming that the observed agent bids are
already in equilibrium. Similar approaches are followed in econometrics for estimation of general
equilibrium effects [11, 12].
In a causal graph approach [17] Bottou et al. [5] studied effects of changes in the algorithm that
scores Bing ads on the ad platform's revenue. In relation to Figure 1, their approach is non-experimental and extrapolates across assignments and across time by assuming a directed acyclic
graph (DAG) as the correct data model, which is also assumed to be stable with respect to treatment
assignment, and by estimating counterfactuals through the fitted model.
Our work is different from prior work because it takes into account the short-term aspect of experimental data to evaluate long-term causal effects, which is the key conceptual and practical challenge
that arises in empirical applications. In contrast, classical econometric methods, such as DID, assume strong linear trends from short-term to long-term, whereas structural approaches typically
assume that the experimental data are already long-term as they are observed in equilibrium. We
refer the reader to Sections 2 and 3 of the supplement for more detailed comparisons.
In summary, our key contribution is that we develop a formal framework that (i) articulates the
distinction between short-term and long-term causal effects, (ii) leverages behavioral game-theoretic
models for causal analysis of multiagent economies, and (iiii) explicates theory that enables valid
inference of long-term causal effects.
2 Definitions
Consider a set of agents I and a set of actions A, indexed by i and a, respectively. The experiment
designer wants to run an experiment to evaluate a new policy against the baseline policy relative to
an objective. In the experiment each agent is assigned to one policy, and the experimenter observes
how agents act over time. Formally, let Z = (Z_i) be the |I| × 1 assignment vector where Z_i = 1
denotes that agent i is assigned to the new policy, and Z_i = 0 denotes that i is assigned to the
baseline policy; as a shorthand, Z = 1 denotes that all agents are assigned to the new policy, and
Z = 0 denotes that all agents are assigned to the baseline policy, where 1, 0 generally denote an
appropriately-sized vector of ones and zeroes, respectively. In the simplest case, the experiment is
an A/B test, where Z is uniformly random on {0, 1}^{|I|} subject to Σ_i Z_i = |I|/2.
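For concreteness, such a balanced completely randomized assignment can be drawn in a few lines of Python; the function below is an illustrative sketch and its name is ours, not from the paper.

import numpy as np

def balanced_assignment(n_agents, rng):
    # Uniformly random Z on {0,1}^n subject to sum(Z) = n/2.
    Z = np.zeros(n_agents, dtype=int)
    treated = rng.choice(n_agents, size=n_agents // 2, replace=False)
    Z[treated] = 1
    return Z

Z = balanced_assignment(40, np.random.default_rng(0))  # e.g., the 40 agents of Section 4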
After the initial assignment Z agents play actions at discrete time points from t = 0 to t = t0 . Let
A_i(t; Z) ∈ A be the random variable that denotes the action of agent i at time t under assignment
Z. The population action σ_j(t; Z) ∈ Δ^{|A|}, where Δ^p denotes the p-dimensional simplex, is the frequency of actions at time t under assignment Z of agents that were assigned to game j; for example,
assuming two actions A = {a_1, a_2}, then σ_1(0; Z) = [0.2, 0.8] denotes that, under assignment Z,
20% of agents assigned to the new policy play action a_1 at t = 0, while the rest play a_2. We assume
that the objective value for the experimenter depends on the population action, in a similar way that,
say, auction revenue depends on agents' aggregate bidding. The objective value in policy j at time
t under assignment Z is denoted by R(σ_j(t; Z)), where R : Δ^{|A|} → ℝ. For instance, suppose in
the previous example that a_1 and a_2 produce revenue $10 and −$2, respectively, each time they are
played, then R is linear and R([.2, .8]) = 0.2 · $10 − 0.8 · $2 = $0.4.
Definition 1 The average causal effect on objective R at time t of the new policy relative to the
baseline is denoted by CE(t) and is defined as
CE(t) = E[R(σ_1(t; 1)) − R(σ_0(t; 0))].  (1)
Suppose that (t0 , T ] is the time interval required for the economy to adapt to the experimental conditions. The exact definition of T is important but we defer this discussion for Section 3.1. The
designer concludes that the new policy is better than the baseline if CE(T) > 0. Thus, CE(T)
is the long-term average causal effect and is a function of two objective values, R(σ_1(T; 1)) and
R(σ_0(T; 0)), which correspond to the two inferential target endpoints in Figure 1. Neither value is
observed in the experiment because agents are randomly split between policies, and their actions are
observed only for the short-term period [0, t0 ]. Thus we need to (i) extrapolate across assignments
by pivoting from the observed assignment to the counterfactuals Z = 1 and Z = 0; (ii) extrapolate across time from the short-term data [0, t0 ] to the long-term t = T . We perform these two
extrapolations based on a latent space approach, which is described next.
2.1 Behavioral and temporal models
We assume a latent behavioral model of how agents select actions, inspired by models from behavioral game theory. The behavioral model is used to predict agent actions conditional on agent
behaviors, and is combined with a temporal model to predict behaviors in the long-term. The two
models are ultimately used to estimate agent actions in the long-term, and thus estimate long-term
causal effects. As the choice of the latent space is not unique, in Section 3.1 we discuss why we
chose to use behavioral models from game theory.
Let B_i(t; Z) denote the behavior that agent i adopts at time t under experimental assignment Z. The
following assumption puts a constraints on the space of possible behaviors that agents can adopt,
which will simplify the subsequent analysis.
Assumption 1 (Finite set of possible behaviors) There is a fixed and finite set of behaviors B such
that for every time t, assignment Z and agent i, it holds that B_i(t; Z) ∈ B; i.e., every agent can only
adopt a behavior from B.
Definition 2 (Behavioral model) The behavioral model for policy j defined by set B of behaviors
is the collection of probabilities
P(A_i(t; Z) = a | B_i(t; Z) = b, G_j),  (2)
for every action a ∈ A and every behavior b ∈ B, where G_j denotes the characteristics of policy j.
As an example, a non-sophisticated behavior b_0 could imply that P(A_i(t; Z) = a | b_0, G_j) = 1/|A|,
i.e., that the agent adopting b_0 simply plays actions at random. Conditioning on policy j in Definition 2 allows an agent to choose its actions based on expected payoffs, which depend on the
policy characteristics. For instance, in the application of Section 4 we consider a behavioral model
where an agent picks actions in a two-person game according to expected payoffs calculated from
the game-specific payoff matrix; in that case G_j is simply the payoff matrix of game j.
The population behavior β_j(t; Z) ∈ Δ^{|B|} denotes the frequency at time t under assignment Z of
the adopted behaviors of agents assigned to policy j. Let F_t denote the entire history of population
behaviors in the experiment up to time t. A temporal model of behaviors is defined as follows.
Definition 3 (Temporal model) For an experimental assignment Z a temporal model for policy j
is a collection of parameters θ_j(Z), λ_j(Z), and densities (π, f), such that for all t,
β_j(0; Z) ∼ π(·; θ_j(Z)),
β_j(t; Z) | F_{t−1}, G_j ∼ f(· | λ_j(Z), F_{t−1}).  (3)
A temporal model defines the distribution of population behavior as a time-series with a Markovian
structure. As defined, the temporal model imposes the restriction that the prior π of population
behavior at t = 0 and the density f of behavioral evolution are both independent of treatment
assignment Z. In other words, regardless of how agents are assigned to games, the population
behavior in the game will evolve according to a fixed model described by f and π. The model
parameters θ, λ may still depend on the treatment assignment Z.
3 Estimation of long-term causal effects
Here we develop the assumptions that are necessary for inference of long-term causal effects.
Assumption 2 (Stability of initial behaviors) Let p_Z = Σ_{i∈I} Z_i/|I| be the proportion of agents
assigned to the new policy under assignment Z. Then, for every possible assignment Z,
p_Z β_1(0; Z) + (1 − p_Z) β_0(0; Z) = β^(0),  (4)
where β^(0) is a fixed population behavior invariant to Z.
Assumption 3 (Behavioral ignorability) The assignment is independent of population behavior at
time t, conditional on policy and behavioral history up to t; i.e., for every t > 0 and policy j,
Z ⊥ β_j(t; Z) | F_{t−1}, G_j.
Remarks. Assumption 2 implies that the agents do not anticipate the assignment Z as they "have
made up their minds" to adopt a population behavior β^(0) before the experiment. Quantities such as
that in Eq. (4) are crucial in causal inference because they can be used as a pivot for extrapolation
across assignments. Assumption 3 states that the treatment assignment does not add information
about the population behavior at time t, if we already know the full behavioral history up to t,
and the policy which agents are assigned to; hence, the treatment assignment is conditionally ignorable. This ignorability assumption precludes, for instance, an agent adopting a different behavior
depending on whether it was assigned with friends or foes in the experiment.
Algorithm 1 is the main methodological contribution of this paper. It is a Bayesian procedure as it
puts priors on parameters θ, λ of the temporal model, and then marginalizes these parameters out.
Algorithm 1 Estimation of long-term causal effects
Input: Z, T, A, B, G_1, G_0, D_1 = {σ_1(t; Z) : t = 0, . . . , t_0}, D_0 = {σ_0(t; Z) : t = 0, . . . , t_0}.
Output: Estimate of long-term causal effect CE(T) in Eq. (1).
1: By Assumption 3, define θ_j ≡ θ_j(Z), λ_j ≡ λ_j(Z).
2: Set μ_1 ← 0 and μ_0 ← 0; set w_0 = w_1 = 0.
3: for iter = 1, 2, . . . do
4:   For j = 0, 1, sample θ_j, λ_j from the prior, and sample β_j(0; Z) conditional on θ_j.
5:   Calculate β^(0) = p_Z β_1(0; Z) + (1 − p_Z) β_0(0; Z).
6:   for j = 0, 1 do
7:     Set β_j(0; j1) = β^(0).
8:     Sample B_j = {β_j(t; j1) : t = 0, . . . , T} given λ_j and β_j(0; j1). # temporal model
9:     Sample σ_j(T; j1) conditional on β_j(T; j1). # behavioral model
10:    Set μ_j ← μ_j + P(D_j | B_j, G_j) · R(σ_j(T; j1)).
11:    Set w_j ← w_j + P(D_j | B_j, G_j).
12:   end for
13: end for
14: Return estimate ĈE(T) = μ_1/w_1 − μ_0/w_0.
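To make the flow of Algorithm 1 concrete, the following Python sketch implements the importance-weighted Monte Carlo loop. It is a minimal rendering under our own naming; the sampling routines and the likelihood are passed in as stand-ins for the model-specific choices of Section 4.1.

import numpy as np

def estimate_long_term_effect(n_iters, T, p_Z, R, sample_params,
                              sample_initial, sample_path,
                              sample_action, likelihood, D):
    # Monte Carlo sketch of Algorithm 1: importance-weighted posterior
    # estimates of E[R(sigma_j(T; j1)) | D_j] for j = 0, 1.
    mu = np.zeros(2)  # weighted sums of R(sigma_j(T; j1))  (Step 10)
    w = np.zeros(2)   # accumulated likelihood weights      (Step 11)
    for _ in range(n_iters):
        params = [sample_params() for _ in range(2)]           # Step 4
        beta0 = [sample_initial(params[j]) for j in range(2)]
        # Step 5: pivot to the counterfactual initial behavior (Assumption 2)
        beta_init = p_Z * beta0[1] + (1.0 - p_Z) * beta0[0]
        for j in range(2):
            path = sample_path(beta_init, params[j], T)        # Step 8
            sigma_T = sample_action(path[-1], j)               # Step 9
            lik = likelihood(D[j], path, j)
            mu[j] += lik * R(sigma_T)
            w[j] += lik
    return mu[1] / w[1] - mu[0] / w[0]                         # Step 14

Each iteration weighs a simulated counterfactual trajectory by how well it explains the observed short-term data, so draws inconsistent with D_j contribute little to the final ratio.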
Theorem 1 (Estimation of long-term causal effects) Suppose that behaviors evolve according to
a known temporal model, and actions are distributed conditionally on behaviors according to a
known behavioral model. Suppose that Assumptions 1, 2 and 3 hold for such models. Then, for
every policy j ∈ {0, 1}, as the iterations of Algorithm 1 increase, μ_j/w_j → E(R(σ_j(T; j1)) | D_j).
The output ĈE(T) of Algorithm 1 asymptotically estimates the long-term causal effect, i.e.,
E(ĈE(T)) = E(R(σ_1(T; 1)) − R(σ_0(T; 0))) ≡ CE(T).
Remarks. Theorem 1 shows that ĈE(T) consistently estimates the long-term causal effect in Eq. (1).
We note that it is also possible to derive the variance of this estimator with respect to the randomization distribution of assignment Z. To do so we first create a set of assignments Z by repeatedly
sampling Z according to the experimental design. Then we adapt Algorithm 1 so that (i) Step 4 is
removed; (ii) in Step 5, β^(0) is sampled from its posterior distribution conditional on observed data,
which can be obtained from the original Algorithm 1. The empirical variance of the outputs over
Z from the adapted algorithm estimates the variance of the output ĈE(T) of the original algorithm.
We leave the full characterization of this variance estimation procedure for future work.
3.1 Discussion
Methodologically, our approach is aligned with the idea that for long-term causal effects we need a
model for outcomes that leverages structural information pertaining to how outcomes are generated
and how they evolve. In our application such structural information is the microeconomic information that dictates what agent behaviors are successful in a given policy and how these behaviors
evolve over time.
In particular, Step 1 in the algorithm relies on Assumptions 2 and 3 to infer that the model parameters
θ_j, λ_j are stable with respect to treatment assignment. Step 5 of the algorithm is the key estimation
pivot, which uses Assumption 2 to extrapolate from the experimental assignment Z to the counterfactual assignments Z = 1 and Z = 0, as required in our problem. Having pivoted to such
counterfactual assignment, it is then possible to use the temporal model parameters λ_j, which are
unaffected by the pivot under Assumption 3, to sample population behaviors up to long-term T , and
subsequently sample agent actions at T (Steps 8 and 9).
Thus, a lot of burden is placed on the behavioral game-theoretic model to predict agent actions,
and the accuracy of such models is still not settled [10]. However, it does not seem necessary
that such prediction is completely accurate, but rather that the behavioral models can pull relevant
information from data that would otherwise be inaccessible without game theory, thereby improving
over classical methods. A formal assessment of such improvement, e.g., using information theory,
is open for future work. An empirical assessment can be supported by the extensive literature in
behavioral game theory [20, 15], which has been successful in predicting human actions in real-world experiments [22].
Another limitation of our approach is Assumption 1, which posits that there is a finite set of predefined behaviors. A nonparametric approach where behaviors are estimated on-the-fly might do
better. In addition, the long-term horizon, T , also needs to be defined a priori. We should be careful
how T interferes with the temporal model since such a model implies a time T′ at which population
behavior reaches stationarity. Thus if T′ ≤ T we implicitly assume that the long-term causal effect
of interest pertains to a stationary regime (e.g., Nash equilibrium), but if T′ > T we assume that the
effect pertains to a transient regime, and therefore the policy evaluation might be misguided.
4 Application: Long-term causal effects from a behavioral experiment
In this section, we apply our methodology to experimental data from Rapoport and Boebel [18],
as reported by McKelvey and Palfrey [15]. The experiment consisted of a series of zero-sum two-agent games, and aimed at examining the hypothesis that human players play according to minimax
solutions of the game, the so-called minimax hypothesis initially suggested by von Neumann and
Morgenstern [21]. Here we repurpose the data in a slightly artificial way, including how we construct
the designer's objective. This enables a suitable demonstration of our approach.
Each game in the experiment was a simultaneous-move game with five discrete actions for the row
player and five actions for the column player. The structure of the payoff matrix, given in the
supplement in Table 1, is parametrized by two values, namely W and L; the experiment used two
different versions of payoff matrices, corresponding to payments by the row agent to the column
agent when the row agent won (W), or lost (L): modulo a scaling factor, Rapoport and Boebel [18]
used (W, L) = ($10, −$6) for game 0 and (W, L) = ($15, −$1) for game 1.
Forty agents, I = {1, 2, . . . , 40}, were randomized to one game design (20 agents per game), and
each agent played once as row and once as column, matched against two different agents. Every
match-up between a pair of agents lasted for two periods of 60 rounds, with each round consisting
of a selection of an action from each agent and a payment. Thus, each agent played for four periods
and 240 rounds in total. If Z is the entire assignment vector of length 40, Z_i = 1 means that agent
i was assigned to game 1 with payoff matrix (W, L) = ($15, −$1) and Z_i = 0 means that i was
assigned to game 0 with payoff matrix (W, L) = ($10, −$6).
In adapting the data, we take advantage of the randomization in the experiment, and ask a question
in regard to long-term causal effects. In particular, assuming that agents pay a fee for each action
taken, which accounts for the revenue of the game, we ask the following question:
"What is the long-term causal effect on revenue if we switch from payoffs (W, L) = ($10, −$6) of
game 0 to payoffs (W, L) = ($15, −$1) of game 1?"
The games induced by the two aforementioned payoff matrices represent the two different policies
we wish to compare. To evaluate our method, we consider the last period as long-term, and hold out
data from this period. We define the causal estimand in Eq. (1) as
CE = c^⊤(σ_1(T; 1) − σ_0(T; 0)),  (5)
where T = 3 and c is a vector of coefficients. The interpretation is that, given an element c_a of c, the
agent playing action a is assumed to pay a constant fee c_a. To check the robustness of our method
we test Algorithm 1 over multiple values of c.
4.1 Implementation of Algorithm 1 and results
Here we demonstrate how Algorithm 1 can be applied to estimate the long-term causal effect in
Eq. (5) on the Rapoport & Boebel dataset. To this end we clarify Algorithm 1 step by step, and give
more details in the supplement.
Step 1: Model parameters. For simplicity we assume that the models in the two games share
common parameters, and thus (θ_1, λ_1, η_1) = (θ_0, λ_0, η_0) ≡ (θ, λ, η), where η are the parameters of the behavioral model described in Step 9. Having common parameters also acts as
regularization and thus helps estimation.
Step 4: Sampling parameters and initial behaviors. As explained later we assume that there are
3 different behaviors and thus θ, λ, η are vectors with 3 components. Let x ∼ U(m, M) denote
that every component of x is uniform on (m, M), independently. We choose diffuse priors for our
parameters, specifically, θ ∼ U(0, 10), λ ∼ U(−5, 5), and η ∼ U(−10, 10). Given θ we sample
the initial behaviors as Dirichlet, i.e., β_1(0; Z) ∼ Dir(θ) and β_0(0; Z) ∼ Dir(θ), independently.
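Under the prior ranges as reconstructed above, one prior draw looks as follows in Python (an illustrative sketch; the variable names are ours).

import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(0, 10, size=3)    # Dirichlet concentrations, theta ~ U(0, 10)
lam = rng.uniform(-5, 5, size=3)      # VAR(1) coefficients, lambda ~ U(-5, 5)
eta = rng.uniform(-10, 10, size=3)    # QL_3 precisions, eta ~ U(-10, 10)
beta_init = rng.dirichlet(theta)      # initial population behavior on the simplex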
Steps 5 & 7: Pivot to counterfactuals. Since we have a completely randomized experiment (A/B
test) it holds that p_Z = 0.5 and therefore β^(0) = 0.5(β_1(0; Z) + β_0(0; Z)). Now we can pivot to the
counterfactual population behaviors under Z = 1 and Z = 0 by setting β_1(0; 1) = β_0(0; 0) = β^(0).
Step 8: Sample counterfactual behavioral history. As the temporal model, we adopt the lag-one
vector autoregressive model, also known as VAR(1). We transform¹ the population behavior into
a new variable w_t = logit(β_1(t; 1)) ∈ ℝ² (also do so for β_0(t; 0)). Such transformation with a
unique inverse is necessary because population behaviors are constrained on the simplex, and thus
form so-called compositional data [2, 9]. The VAR(1) model implies that
w_t = λ[1]·1 + λ[2] w_{t−1} + λ[3] ε_t,  (6)
where λ[k] is the kth component of λ and ε_t ∼ N(0, I) is i.i.d. standard bivariate normal. Eq. (6)
is used to sample the behavioral history, B_j, in Step 8 of Algorithm 1.
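A minimal sketch of the transform and of sampling one VAR(1) trajectory under Eq. (6); note that Python indexing is zero-based, so lam[0], lam[1], lam[2] play the roles of λ[1], λ[2], λ[3].

import numpy as np

def logit(x):
    # additive log-ratio transform: Delta^m -> R^(m-1), assuming x[0] != 0
    x = np.asarray(x, dtype=float)
    return np.log(x[1:] / x[0])

def inv_logit(y):
    # unique inverse of the transform, mapping back onto the simplex
    z = np.concatenate(([1.0], np.exp(y)))
    return z / z.sum()

def sample_var1_path(beta0, lam, T, rng):
    # w_t = lam[0]*1 + lam[1]*w_{t-1} + lam[2]*eps_t, eps_t ~ N(0, I)
    path = [np.asarray(beta0, dtype=float)]
    w = logit(path[0])
    for _ in range(T):
        w = lam[0] + lam[1] * w + lam[2] * rng.standard_normal(w.shape)
        path.append(inv_logit(w))
    return path

path = sample_var1_path([0.3, 0.3, 0.4], lam=[0.1, 0.9, 0.2], T=3,
                        rng=np.random.default_rng(2))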
Step 9: Behavioral model. For the behavioral model, we adopt the quantal p-response (QL_p)
model [20], which has been successful in predicting human actions in real-world experiments [22].
We choose p = 3 behaviors, namely B = {b_0, b_1, b_2} of increased sophistication parametrized by
η = (η[1], η[2], η[3]) ∈ ℝ³. Let G_j denote the 5 × 5 payoff matrix of game j and let the term
strategy denote a distribution over all actions. An agent with behavior b_0 plays the uniform strategy,
P(A_i(t; Z) = a | B_i(t; Z) = b_0, G_j) = 1/5.
An agent of level-1 (row player) assumes to be playing only against level-0 agents and thus expects
per-action profit u_1 = (1/5) G_j 1 (for the column player we use the transpose of G_j). The level-1 agent
will then play a strategy proportional to e^{η[1] u_1}, where e^x for vector x denotes the element-wise
exponentiation, e^x = (e^{x[k]}). The precision parameter η[1] determines how much an agent insists
on maximizing expected utility; for example, if η[1] = ∞, the agent plays the action with maximum
expected payoff (best response); if η[1] = 0, the agent acts as a level-0 agent. An agent of level-2 (row player) assumes to be playing only against level-1 agents with precision η[2] and therefore
expects to face a strategy proportional to e^{η[2] u_1}. Thus its expected per-action profit is u_2 ∝ G_j e^{η[2] u_1},
and it plays a strategy ∝ e^{η[3] u_2}.
Given G_j and η we calculate a 5 × 3 matrix Q_j where the kth column is the strategy played by an
agent with behavior b_{k−1}. The expected population action is therefore σ̄_j(t; Z) = Q_j β_j(t; Z). The
population action σ_j(t; Z) is distributed as a normalized multinomial random variable with expectation σ̄_j(t; Z), and so P(σ_j(t; 1) | β_j(t; 1), G_j) = Multi(|I| · σ_j(t; 1); σ̄_j(t; 1)), where Multi(n; p)
is the multinomial density of observations n = (n_1, . . . , n_K) with probabilities p = (p_1, . . . , p_K).
Hence, the full likelihood for observed actions in game j in Steps 10 and 11 of Algorithm 1 is given
by the product
P(D_j | B_j, G_j) = ∏_{t=0}^{T−1} Multi(|I| · σ_j(t; j1); σ̄_j(t; j1)).
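The following sketch computes the 5 × 3 strategy matrix and the log-likelihood of Steps 10 and 11, assuming zero-based indexing of η and row-player payoffs; it is an illustration of the model as described, not the authors' code.

import numpy as np
from scipy.stats import multinomial

def ql3_strategies(G, eta):
    # Columns are the level-0/1/2 strategies of the QL_3 model for the
    # row player of payoff matrix G (use G.T for the column player).
    n = G.shape[0]
    q0 = np.ones(n) / n                        # level-0: uniform play
    u1 = G @ q0                                # expected payoff vs level-0
    q1 = np.exp(eta[0] * u1); q1 /= q1.sum()   # level-1 quantal response
    s1 = np.exp(eta[1] * u1); s1 /= s1.sum()   # level-2's model of level-1
    u2 = G @ s1                                # expected payoff vs that model
    q2 = np.exp(eta[2] * u2); q2 /= q2.sum()   # level-2 quantal response
    return np.column_stack([q0, q1, q2])       # 5 x 3 matrix Q_j

def log_likelihood(counts_path, beta_path, G, eta):
    # Multinomial log-likelihood of observed action counts given the
    # sampled behavior trajectory, as in Steps 10 and 11 of Algorithm 1.
    Q = ql3_strategies(G, eta)
    ll = 0.0
    for counts, beta in zip(counts_path, beta_path):
        p = Q @ beta                           # expected population action
        ll += multinomial.logpmf(counts, n=int(counts.sum()), p=p)
    return ll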
Running Algorithm 1 on the Rapoport and Boebel dataset yields the estimates shown in Figure 2,
for 25 different fee vectors c, where each component c_a is sampled uniformly at random from (0, 1).
¹ y = logit(x) is defined as the function Δ^m → ℝ^{m−1}, y[i] = log(x[i + 1]/x[1]), where x[1] ≠ 0 wlog.
Figure 2: Estimates of long-term effects of different methods corresponding to 25 random objective
coefficients c in Eq. (5). For estimates of our method we ran Algorithm 1 for 100 iterations.
We also test difference-in-differences (DID), which estimates the causal effect through
Δ̂_did = [R(σ_1(2; Z)) − R(σ_1(0; Z))] − [R(σ_0(2; Z)) − R(σ_0(0; Z))],
and a naive method ("naive" in the plot), which ignores the dynamical aspect and estimates the long-term causal effect as Δ̂_nai = [R(σ_1(2; Z)) − R(σ_0(2; Z))]. Our estimates ("LACE" in the plot) are
closer to the truth (mse = 0.045) than the estimates from the naive method (mse = 0.185) and from
DID (mse = 0.361). This illustrates that our method can pull game-theoretic information from the
data for long-term causal inference, whereas the other methods cannot.
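For reference, the two baselines reduce to a few lines once R(σ) = c^⊤σ; the array names below are illustrative, with sigma_j[t] the observed action frequencies of arm j at period t.

import numpy as np

def baseline_estimates(sigma1, sigma0, c):
    # Naive: compare the two arms at the last observed period (t = 2).
    # DID: difference the within-arm changes between t = 0 and t = 2.
    R = lambda s: c @ s
    naive = R(sigma1[2]) - R(sigma0[2])
    did = (R(sigma1[2]) - R(sigma1[0])) - (R(sigma0[2]) - R(sigma0[0]))
    return did, naive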
5 Conclusion
One critical shortcoming of statistical methods of causal inference is that they typically do not assess
the long-term effect of policy changes. Here we combined causal inference and game theory to
build a framework for estimation of such long-term effects in multiagent economies. Central to
our approach is behavioral game theory, which provides a natural latent space model of how agents
act and how their actions evolve over time. Such models make it possible to predict how agents would act
under various policy assignments and at various time points, which is key for valid causal inference.
Working on a real-world dataset [18] we showed how our framework can be applied to estimate the
long-term effect of changing the payoff structure of a normal-form game.
Our framework could be extended in future work by incorporating learning (e.g., fictitious play,
bandits, no-regret learning) to better model the dynamic response of multiagent systems to policy
changes. Another interesting extension would be to use our framework for optimal design of experiments in such systems, which needs to account for heterogeneity in agent learning capabilities and
for intrinsic dynamical properties of the systems? responses to experimental treatments.
Acknowledgements
The authors wish to thank Léon Bottou, the organizers and participants of CODE@MIT'15,
GAMES'16, the Workshop on Algorithmic Game Theory and Data Science (EC'15), and the anonymous NIPS reviewers for their valuable feedback. Panos Toulis has been supported in part by the
2012 Google US/Canada Fellowship in Statistics. David C. Parkes was supported in part by NSF
grant CCF-1301976 and the SEAS TomKat fund.
References
[1] Alberto Abadie. Semiparametric difference-in-differences estimators. The Review of Economic Studies, 72(1):1–19, 2005.
[2] John Aitchison. The statistical analysis of compositional data. Springer, 1986.
[3] Joshua D Angrist and Jörn-Steffen Pischke. Mostly harmless econometrics: An empiricist's companion. Princeton University Press, 2008.
[4] Susan Athey, Jonathan Levin, and Enrique Seira. Comparing open and sealed bid auctions: Evidence from timber auctions. The Quarterly Journal of Economics, 126(1):207–257, 2011.
[5] Léon Bottou, Jonas Peters, Joaquin Quiñonero-Candela, Denis X Charles, D Max Chickering, Elon Portugaly, Dipankar Ray, Patrice Simard, and Ed Snelson. Counterfactual reasoning and learning systems. J. Machine Learning Research, 14:3207–3260, 2013.
[6] David Card and Alan B Krueger. Minimum wages and employment: A case study of the fast food industry in New Jersey and Pennsylvania. American Economic Review, 84(4):772–793, 1994.
[7] Stephen G Donald and Kevin Lang. Inference with difference-in-differences and other panel data. The Review of Economics and Statistics, 89(2):221–233, 2007.
[8] Ronald Aylmer Fisher. The design of experiments. Oliver & Boyd, 1935.
[9] Gary K Grunwald, Adrian E Raftery, and Peter Guttorp. Time series of continuous proportions. Journal of the Royal Statistical Society, Series B (Methodological), pages 103–116, 1993.
[10] P Richard Hahn, Indranil Goswami, and Carl F Mela. A Bayesian hierarchical model for inferring player strategy types in a number guessing game. The Annals of Applied Statistics, 9(3):1459–1483, 2015.
[11] James J Heckman, Lance Lochner, and Christopher Taber. General equilibrium treatment effects: A study of tuition policy. American Economic Review, 88(2):381–386, 1998.
[12] James J Heckman and Edward Vytlacil. Structural equations, treatment effects, and econometric policy evaluation. Econometrica, 73(3):669–738, 2005.
[13] John H Holland and John H Miller. Artificial adaptive agents in economic theory. The American Economic Review, pages 365–370, 1991.
[14] Paul W Holland. Statistics and causal inference. Journal of the American Statistical Association, 81(396):945–960, 1986.
[15] Richard D McKelvey and Thomas R Palfrey. Quantal response equilibria for normal form games. Games and Economic Behavior, 10(1):6–38, 1995.
[16] Michael Ostrovsky and Michael Schwarz. Reserve prices in internet advertising auctions: A field experiment. In Proceedings of the 12th ACM Conference on Electronic Commerce, pages 59–60. ACM, 2011.
[17] Judea Pearl. Causality: models, reasoning and inference. Cambridge University Press, 2000.
[18] Amnon Rapoport and Richard B Boebel. Mixed strategies in strictly competitive games: A further test of the minimax hypothesis. Games and Economic Behavior, 4(2):261–283, 1992.
[19] Donald B Rubin. Causal inference using potential outcomes. Journal of the American Statistical Association, 2011.
[20] Dale O Stahl and Paul W Wilson. Experimental evidence on players' models of other players. Journal of Economic Behavior & Organization, 25(3):309–327, 1994.
[21] J Von Neumann and O Morgenstern. Theory of games and economic behavior. Princeton University Press, 1944.
[22] James R Wright and Kevin Leyton-Brown. Beyond equilibrium: Predicting human behavior in normal-form games. In Proc. 24th AAAI Conf. on Artificial Intelligence, 2010.
5,591 | 606 | Using Aperiodic Reinforcement for Directed
Self-Organization During Development
PR Montague P Dayan SJ Nowlan A Pouget TJ Sejnowski
CNL, The Salk Institute
10010 North Torrey Pines Rd.
La Jolla, CA 92037, USA
read@helmholtz.sdsc.edu
Abstract
We present a local learning rule in which Hebbian learning is
conditional on an incorrect prediction of a reinforcement signal.
We propose a biological interpretation of such a framework and
display its utility through examples in which the reinforcement
signal is cast as the delivery of a neuromodulator to its target.
Three examples are presented which illustrate how this framework
can be applied to the development of the oculomotor system.
1 INTRODUCTION
Activity-dependent accounts of the self-organization of the vertebrate brain have
relied ubiquitously on correlational (mainly Hebbian) rules to drive synaptic learning. In the brain, a major problem for any such unsupervised rule is that many
different kinds of correlations exist at approximately the same time scales and each
is effectively noise to the next. For example, relationships within and between
the retinae among variables such as color, motion, and topography may mask one
another and disrupt their appropriate segregation at the level of the thalamus or
cortex.
It is known, however, that many of these variables can be segregrated both within
and between cortical areas suggesting that certain sets of correlated inputs are
somehow separated from the temporal noise of other inputs. Some form of supervised learning appears to be required. Unfortunately, detailed supervision and
selection in a brain region is not a feasible mechanism for the vertebrate brain. The
question thus arises: What kind of biological mechanism or signal could selectively
bias synaptic learning toward a particular subset of correlations? One answer lies
in the possible role played by diffuse neuromodulatory systems.
It is known that multiple diffuse modulatory systems are involved in the self-organization of cortical structures (eg Bear and Singer, 1986) and some of them
appear to deliver reward and/or salience signals to the cortex and other structures
to influence learning in the adult. Recent data (Ljungberg, et al, 1992) suggest that
this latter influence is qualitatively similar to that predicted by Sutton and Barto's
(1981,1987) classical conditioning theory. These systems innervate large expanses
of cortical and subcortical turf through extensive axonal projections that originate
in midbrain and basal forebrain nuclei and deliver such compounds as dopamine,
serotonin, norepinephrine, and acetylcholine to their targets. The small number of
neurons comprising these subcortical nuclei relative to the extent of the territory
their axons innervate suggests that the nuclei are reporting scalar signals to their
target structures.
In this paper, these facts are synthesized into a single framework which relates
the development of brain structures and conditioning in adult brains. We postulate a modification to Hebbian accounts of self-organization: Hebbian learning
is conditional on an incorrect prediction of future delivered reinforcement from a
diffuse neuromodulatory system. This reinforcement signal can be derived both
from externally driven contingencies such as proprioception from eye movements
as well as from internal pathways leading from cortical areas to subcortical nuclei.
The next section presents our framework and proposes a specific model for how
predictions about future reinforcement could be made in the vertebrate brain utilizing the firing in a diffuse neuromodulatory system (figure 1). Using this model
we illustrate the framework with three examples suggesting how mappings in the
oculomotor system may develop. The first example shows how eye movement
commands could become appropriately calibrated in the absence of visual experience (figure 3). The second example demonstrates the development of a mapping
from a selected visual target to an eye movement which acquires the target. The
third example describes how our framework could permit the development and
alignment of multimodal maps (visual and auditory) in the superior colliculus. In
this example, the transformation of auditory signals from head-centered to eye-centered coordinates results implicitly from the development of the mapping from
parietal cortex onto the colliculus.
2 THEORY
We consider two classes of reinforcement learning (RL) rule: static and dynamic.
2.1 Static reinforcement learning
The simplest learning rule that incorporates a reinforcement signal is:
Δw_t = α x_t y_t r_t  (1)
where, at all times t, w_t is a connection weight, x_t an input measure, y_t an output
measure, r_t a reinforcement measure, and α is the learning rate.
In this case, r can be driven by either external events in the world or by cortical
projections (internal events) and it picks out those correlations between x and y
about which the system learns. Learning is shut down if nothing occurs that is
independently judged to be significant, i.e. events for which r is 0.
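As a minimal sketch (assuming the reconstructed form of Eq. (1)), the rule is a single reward-gated Hebbian update:

def static_rl_update(w, x, y, r, alpha=0.1):
    # Eq. (1): Hebbian product of pre- (x) and post-synaptic (y) activity,
    # gated by the raw reinforcement r; with r = 0 no learning occurs.
    return w + alpha * x * y * r

Because r multiplies the whole correlational term, the reinforcement signal selects which of the many co-occurring correlations are written into the weights.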
2.2 Dynamic reinforcement learning - learning driven by prediction error
A more informative way to utilize reinforcement signals is to incorporate some
form of prediction. The predictive form of RL, called temporal difference learning
(TD, Sutton and Barto, 1981,1987), specifies weight changes according to:
Δw_t = α x_t [(r_{t+1} + V_{t+1}) − V_t]  (2)
where r_{t+1} is the reward delivered in the next instant in time t + 1. V is called
a value function and its value at any time t is an estimate of the future reward.
This framework is closely related to dynamic programming (Barto et al, 1989) and
a body of theory has been built around it. The prediction error [(r_{t+1} + V_{t+1}) − V_t]
measures the degree to which the prediction of future reward V_t is higher or lower
than the combination of the actual future reward r_{t+1} and the expectation of reward
from time t + 1 onward (V_{t+1}).
To place dynamic RL in a biological context, we start with a simple Hebbian rule
but make learning contingent on this prediction error. Learning therefore slows as
the predictions about future rewards get better. In contrast with static RL, in a TD
account the value of r per se is not important, only whether the system is able to
predict or anticipate the future value of r. Therefore the weight changes are:
Δw_t = α x_t y_t [(r_{t+1} + V_{t+1}) − V_t]  (3)
including a measure of post-synaptic response, y_t.
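A sketch of the corresponding update, with the TD error computed from the value predictions at consecutive time steps (function and argument names are ours):

def td_hebbian_update(w, x, y, r_next, V, V_next, alpha=0.1):
    # Eq. (3): the Hebbian term x*y is scaled by the prediction error,
    # so learning abates once V correctly anticipates future reinforcement.
    delta = (r_next + V_next) - V   # TD prediction error
    return w + alpha * x * y * delta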
3 MAKING PREDICTIONS IN THE BRAIN
In our account of RL in the brain, the cortex is the structure that makes predictions of
future reinforcement. This reinforcement is envisioned as the output of subcortical
nuclei which deliver various neuromodulators to the cortex that permit Hebbian
learning. Experiments have shown that various of these nuclei, which have access
to cortical representations of complex sensory input, are necessary for instrumental
and classical conditioning to occur (Ljungberg et al., 1992).
Figure 1 shows one TD scenario in which a pattern of activity in a region of cortex
makes a prediction about future expected reinforcement. At time t, the prediction
of future reward V_t is viewed as an excitatory drive from the cortex onto one or
more subcortical nuclei (pathway B). The high degree of convergence in B ensures
that this drive predicts only a scalar output of the nucleus R. Consider a pattern
of activity onto layer II which provides excitatory drive to R and concomitantly
causes some output, say a movement, at time t + 1. This movement provides a
separate source of excitatory drive rt+ 1 to the same nucleus through independent
[Figure 1 schematic: Layer I projects to Layer II (pathway A); Layer II drives the reinforcement nucleus R (pathway B); external contingencies also drive R (pathway C); R broadcasts its scalar output back to Layer II (pathway D).]
Figure 1: Making predictions about future reinforcement. Layer I is an array of units
that projects topographically onto layer II. (A) Weights from I onto II develop according to
equation 3 and represent the value function V_t. (B) The weights from II onto R are fixed. The
prediction of future reward by the weights onto II is a scalar because the highly convergent
excitatory drive from II to the reinforcement nucleus (R) effectively sums the input. (C)
External events in the world provide independent excitatory drive to the reinforcement nucleus. (D) Scalar signal which results from the output firing of R and is broadcast throughout
layer II. This activity delivers to layer II the neuromodulator required for Hebbian learning.
The output firing of R is controlled by temporal changes in its excitatory input and habituates to constant or slowly varying input. This makes for learning in layer II according to
equation 3 (see text).
connections conveying information from sensory structures such as stretch receptors (pathway C). Hence, at time t + 1, the excitatory input to R is the sum of
the 'immediate reward' r_{t+1} and the new prediction of future reward V_{t+1}. If the
reinforcement nucleus is driven primarily by changes in its input over some time
window, then the difference between the excitatory drive at time t and t + 1, i.e.
[(r_{t+1} + V_{t+1}) − V_t], is what its output reflects.
The output is distributed throughout a region of cortex (pathway D) and permits
Hebbian weight changes at the individual connections which determine the value
function V_t. The example hinges on two assumptions: 1) Hebbian learning in the
cortex is contingent upon delivery of the neuromodulator, and 2) the reinforcement
nucleus is sensitive to temporal changes in its input and otherwise habituates to
constant or slowly varying input.
Initially, before the system is capable of predicting future delivery of reinforcement
correctly, the arrival of r_{t+1} causes a large learning signal because the prediction
error [(r_{t+1} + V_{t+1}) − V_t] is large. This error drives weight changes at synaptic
connections with correlated pre- and postsynaptic elements until the predictions
come to approximate the actual future delivered reinforcement. Once these predictions become accurate, learning abates. At that point, the system has learned
about whatever contingencies are currently controlling reinforcement delivery. For
the case in which the delivery of reinforcement is not controlled by any predictable
contingencies, Hebbian learning can still occur if the fluctuations of the prediction
error have a positive mean.
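A toy simulation of this habituation, under the assumption that a cue reliably predicts a fixed reward one step later, shows the broadcast learning signal decaying as the cortical prediction improves (a sketch in our own notation):

def simulate_habituation(r=1.0, alpha=0.3, steps=10):
    # The change-sensitive nucleus emits delta = (r + V_next) - V; here
    # V_next = 0 because nothing follows the reward. As V converges on r,
    # the broadcast learning signal decays geometrically toward zero.
    V, signals = 0.0, []
    for _ in range(steps):
        delta = r - V
        signals.append(delta)
        V += alpha * delta
    return signals  # [1.0, 0.7, 0.49, ...] for r=1.0, alpha=0.3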
[Figure 2 schematic: a 64x64 input array drives four 4x4 motoneuron layers (L, R, U, D), which set the tensions of the four eye muscles.]
Figure 2: Upper layer is a 64 by 64 input array with 3 by 3 center-surround filters at each
position which projects topographically onto the middle layer. The middle layer projects
randomly to four 4×4 motoneuron layers which code for an equilibrium eye position signal,
for example, through setting equilibrium muscle tensions in the 4 muscles. Reinforcement
signals originate from either eye movement (muscle 'stretch') or foveation. The eye is moved
according to h = (r − l)g, v = (u − d)g, where r, l, u, d are respectively the average activities
on the right, left, up, down motoneuron layers and g is a fixed gain parameter. h and v are
linearly combined to give the eye position.
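The eye-position rule in the caption is a two-line computation; the sketch below assumes each motoneuron layer is a numpy array of unit activities.

import numpy as np

def eye_position(right, left, up, down, g=1.0):
    # h = (r - l)g, v = (u - d)g, with r, l, u, d the mean activities of
    # the four 4x4 motoneuron layers and g the fixed gain parameter.
    h = (right.mean() - left.mean()) * g
    v = (up.mean() - down.mean()) * g
    return h, v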
In the presence of multiple statistically independent sources of control of the reinforcement signal (pathways onto R), the system can separately 'learn away' the
contingencies for each of these sources. This passage of control of reinforcement
delivery can allow the development of connections in a region to be staged. Hence,
control of reinforcement can be passed between contingencies without supervision. In this manner, a few nuclei can be used to deliver information globally about
many different circumstances. We illustrate this point below with development of
a sensorimotor mapping.
4 EXAMPLES
4.1 Learning to calibrate without sensory experience
Figure 2 illustrates the architecture for the next two examples. Briefly, cortical layers
drive four 'motor' layers of units which each provide an equilibrium command to
one of four extraocular muscles. The mapping from the cortical layers onto these
four layers is random and sparse (15%-35% connectivity) and is plastic according
to the learning rule described above. Two external events control the delivery of
reinforcement: eye movement and foveation of high contrast objects in the visual
input. The minimum eye movement necessary to cause a reinforcement is a change
of two pixels in any direction (see figure 3).
We begin by demonstrating how an unbalanced mapping onto the motoneuron
Figure 3: Learning to calibrate eye movement commands. This example illustrates how
a reinforcement signal could help to organize an appropriate balance in the sensorimotor
mapping before visual experience. The dark bounding box represents the 64x64 pixel
working area over which an 8x8 fovea can move. A Foveal position during the first 400
cycles of learning. The architecture is as in figure 2, but the weights onto the right/left and
up/down pairs are not balanced. Random activity in the layer providing the drive to the
motoneurons initially drives the eye to an extreme position at the upper right. From this
position, no movement of the eye can occur and thus no reinforcement can be delivered
from the proprioceptive feedback causing all the weights to begin to decrease. With time,
the weights onto the motoneurons become balanced and the eye moves. B Foveal position
after 400 cycles of learning and after increasing the gain g to 10 times its initial value. After
the weights onto antagonistic muscles become balanced, the net excursions of the eye are
small thus requiring an increase in g in order to allow the eye to explore its working range.
C Size of foveal region relative to the working range of the eye. The fovea covered an 8x8
region of the working area of the eye and the learning rate α was varied from 0.08 to 0.25
without changing the result.
layers can be automatically calibrated in the absence of visual experience. Imagine
that the weights onto the right/left and up/down pairs are initially unbalanced,
as might happen if one or more muscles are weak or the effective drives to each
muscle are unequal. Figure 3, which shows the position of the fovea during
learning, indicates that the initially unbalanced weights cause the eye to move
immediately to an extreme position (figure 3, A).
Since the reinforcement is controlled only by eye movement and foveation and
neither is occurring in this state, r_{t+1} is roughly 0. This is despite the (randomly
generated) activity in the motoneurons continually making predictions that reinforcement from eye-movement should be being delivered. Therefore all the weights
begin to decrease, with those mediating the unbalanced condition decreasing the
fastest, until balance is achieved (see path A). Once the eye reaches equilibrium,
further random noise will cause no mean net eye movement since the mappings
onto each of the four motoneuron layers are balanced. The larger amplitude eye
movements shown in the center of figure 3 (labeled B) are the result of increasing
the gain g (figure 2).
Figure 4: Development of foveation map. The map
after 2000 learning cycles shows the approximate eye
movement vector from stimulation of each position in
the visual field. Lengths were normalized to the size
of the largest movement. The undisplayed quadrants
were qualitatively similar. Note that this scheme does
not account for activity or contrast differences in the
input and assumes that these have already been normalized. Learning rate = 0.12. Connectivity from the
middle layer to the motoneurons was 35% and was randomized. Unlike the previous example, the weights
onto the four layers of motoneurons were initially balanced.
4.2 Learning a foveation map with sensory experience
Although reinforcement would be delivered by foveation as well as successful
eye-movements, the former would be expected to be a comparatively rare event.
Once equilibrium is achieved, however, the reinforcement that comes from eye
movements is fully predicted by the prior activity of the motoneurons, and so other
contingencies, in this case foveation, grab control of the delivery of reinforcement.
The resulting TD signals now provide information about the link between visual
input on the top layer of figure 2 and the resulting command, and the system learns
how to foveate correctly. Figure 4 shows the motor map that has developed after
2000 learning cycles. In the current example, the weights onto the four layers of
motoneurons were initially balanced, and the gain g was 10 times larger than before
calibration (see figure 3). This learning currently assumes that some cortical area
selects the salient targets.
4.3 Learning to align separate mappings
In the primate superior colliculus, it is known that cells can respond to multiple
modalities, including auditory input, which defines a head-centered coordinate system. Auditory receptive fields shift their position in the colliculus with changing
eye position, suggesting the existence of a mechanism which maintains the registration between auditory and visual maps (Jay and Sparks, 1984). Our framework
suggests a developmental explanation of these findings in terms of an activity-dependent self-organizing principle.
Consider an intermediate layer, modeling the parietal cortex, which receives signals representing eye position (proprioception), retinal position of a visual target
(selected visual input), and head position of an auditory target, and which projects
onto the superior colliculus. This can be visualized using figure 2, with parietal cortex as the top layer and the colliculus as the drive to the motoneurons. As before
(figure 2), assume that foveation of a target, whether auditory or visual, delivers
reinforcement and that learning in this layer and the colliculus follows equation 3.
In a manner analogous to the example in figure 4, those combinations of retinal, eye
position, and head centered signals in this parietal layer which predict a foveating
eye movement are selected by this learning rule. Hence, as before, the weights
from this layer onto the colliculus make predictions about future reinforcement. In
figure 4, a foveation map develops which codes for eye movements in absolute coordinates relative to some equilibrium position of the eye. In the current example,
such a foveation map would be inappropriate since it requires persistent activity
in the collicular layer to maintain a fixed eye position. Instead, the collicular to
motoneuron mapping must represent changes in the balance between antagonistic
muscles with some other system coding for current eye position.
Why would such an initial architecture, acting under the aegis of the learning rule
expressed in equation 3, develop the collicular mappings observed in experiments?
Those combinations of signals in the parietal layer that correctly predict foveation
have their connections onto the collicular layer stabilized. In the current representation, foveation of a target will occur if the correct change in firing between
antagonistic motoneurons occurs. After learning slows, the parietal layer is left
with cells whose visual and auditory responses are modulated by eye position
signals. In the collicular layer, the visual responses of a cell are not modulated by
eye position signals while the head-centered auditory responses are modulated by
eye position.
The reasons for these differences in the colliculus layer and the parietal layer are implicit
in the new motoneuron model and the way the equation 3 polices learning. The
collicular layer is driven by combinations of the three signals and the learning rule
enforces a common frame of reference for these combinations because foveation
of the target is the only source of reinforcement. Consider, for example, a visual
target on a region of retina for two different eye positions. The change in the
balance between right and left muscles required to foveate such a retinal target is
the same for each eye position hence the projection from the parietal to collicular
layer develops so that the influence of eye position for a fixed retinal target is
eliminated. The influence of eye position for an auditory target remains, however,
because successful foveation of an auditory target requires different regions of the
collicular map to be active as a function of eye position.
These examples illustrate how diffuse modulatory systems in the midbrain and
basal forebrain can be employed in a single framework to guide activity-dependent
map development in the vertebrate brain. This framework gives a natural role to
such diffuse system for both development and conditioning in the adult brain and
illustrates how external contingencies can be incorporated into cortical representations through these crude scalar signals.
References
[1] Barto, AG, Sutton, RS & Watkins, CJCH (1989). Learning and Sequential Decision Making. Technical Report 89-95, Computer and Information Science, University of Massachusetts, Amherst, MA.
[2] Bear, MF & Singer, W (1986). Modulation of visual cortical plasticity by acetylcholine and noradrenaline. Nature, 320, 172-176.
[3] Jay, MF & Sparks, DL (1984). Auditory receptive fields in primate superior colliculus shift with changes in eye position. Nature, 309, 345-347.
[4] Ljunberg, T, Apicella, P & Schultz, W (1992). Responses of monkey dopamine neurons during learning of behavioral reactions. Journal of Neurophysiology, 67(1), 145-163.
[5] Sutton, RS (1988). Learning to predict by the methods of temporal difference. Machine Learning, 3, pp 9-44.
[6] Sutton, RS & Barto, AG (1981). Toward a modern theory of adaptive networks: Expectation and prediction. Psychological Review, 88(2), pp 135-170.
[7] Sutton, RS & Barto, AG (1987). A temporal-difference model of classical conditioning. Proceedings of the Ninth Annual Conference of the Cognitive Science Society. Seattle, WA.
A Probabilistic Programming Approach To
Probabilistic Data Analysis
Feras Saad
MIT Probabilistic Computing Project
[email protected]
Vikash Mansinghka
MIT Probabilistic Computing Project
[email protected]
Abstract
Probabilistic techniques are central to data analysis, but different approaches can
be challenging to apply, combine, and compare. This paper introduces composable
generative population models (CGPMs), a computational abstraction that extends
directed graphical models and can be used to describe and compose a broad class
of probabilistic data analysis techniques. Examples include discriminative machine
learning, hierarchical Bayesian models, multivariate kernel methods, clustering
algorithms, and arbitrary probabilistic programs. We demonstrate the integration
of CGPMs into BayesDB, a probabilistic programming platform that can express
data analysis tasks using a modeling definition language and structured query
language. The practical value is illustrated in two ways. First, the paper describes
an analysis on a database of Earth satellites, which identifies records that probably
violate Kepler's Third Law by composing causal probabilistic programs with nonparametric Bayes in 50 lines of probabilistic code. Second, it reports the lines of
code and accuracy of CGPMs compared with baseline solutions from standard
machine learning libraries.
1 Introduction
Probabilistic techniques are central to data analysis, but can be difficult to apply, combine, and
compare. Such difficulties arise because families of approaches such as parametric statistical modeling,
machine learning and probabilistic programming are each associated with different formalisms and
assumptions. The contributions of this paper are (i) a way to address these challenges by defining
CGPMs, a new family of composable probabilistic models; (ii) an integration of this family into
BayesDB [10], a probabilistic programming platform for data analysis; and (iii) empirical illustrations
of the efficacy of the framework for analyzing a real-world database of Earth satellites.
We introduce composable generative population models (CGPMs), a computational formalism that
generalizes directed graphical models. CGPMs specify a table of observable random variables with
a finite number of columns and countably infinitely many rows. They support complex intra-row
dependencies among the observables, as well as inter-row dependencies among a field of latent random
variables. CGPMs are described by a computational interface for generating samples and evaluating
densities for random variables derived from the base table by conditioning and marginalization. This
paper shows how to package discriminative statistical learning techniques, dimensionality reduction
methods, arbitrary probabilistic programs, and their combinations, as CGPMs. We also describe
algorithms and illustrate new syntaxes in the probabilistic Metamodeling Language for building
composite CGPMs that can interoperate with BayesDB.
The practical value is illustrated in two ways. First, we describe a 50-line analysis that identifies
satellite data records that probably violate their theoretical orbital characteristics. The BayesDB script
builds models that combine non-parametric Bayesian structure learning with a causal probabilistic
program that implements a stochastic variant of Kepler's Third Law. Second, we illustrate coverage
and conciseness of the CGPM abstraction by quantifying the improvement in accuracy and reduction
in lines of code achieved on a representative data analysis task.
2 Composable Generative Population Models
A composable generative population model represents a data generating process for an exchangeable sequence of random vectors $(x_1, x_2, \dots)$, called a population. Each member $x_r$ is $T$-dimensional, and element $x_{[r,t]}$ takes values in an observation space $X_t$, for $t \in [T]$ and $r \in \mathbb{N}$. A CGPM $\mathcal{G}$ is formally represented by a collection of variables that characterize the data generating process:
$$\mathcal{G} = (\alpha, \theta, Z = \{z_r : r \in \mathbb{N}\}, X = \{x_r : r \in \mathbb{N}\}, Y = \{y_r : r \in \mathbb{N}\}).$$
- $\alpha$: Known, fixed quantities about the population, such as metadata and hyperparameters.
- $\theta$: Population-level latent variables relevant to all members of the population.
- $z_r = (z_{[r,1]}, \dots, z_{[r,L]})$: Member-specific latent variables that govern only member $r$ directly.
- $x_r = (x_{[r,1]}, \dots, x_{[r,T]})$: Observable output variables for member $r$. A subset of these variables may be observed and recorded in a dataset $\mathcal{D}$.
- $y_r = (y_{[r,1]}, \dots, y_{[r,I]})$: Input variables, such as "feature vectors" in a purely discriminative model.
A CGPM is required to satisfy the following conditional independence constraint:
$$\forall\, r \neq r' \in \mathbb{N},\ \forall\, t, t' \in [T]:\quad x_{[r,t]} \perp\!\!\!\perp x_{[r',t']} \mid \{\alpha, \theta, z_r, z_{r'}\}. \qquad (1)$$
Eq (1) formalizes the notion that all dependencies across members $r \in \mathbb{N}$ are completely mediated by the population parameters $\theta$ and member-specific variables $z_r$. However, elements $x_{[r,i]}$ and $x_{[r,j]}$ within a member are generally free to assume any dependence structure. Similarly, the member-specific latents in $Z$ may be either uncoupled or highly-coupled given population parameters $\theta$.
CGPMs differ from the standard mathematical definition of a joint density in that they are defined in terms of a computational interface (Listing 1). As computational objects, they explicitly distinguish between the sampler for the random variables from their joint distribution, and the assessor of their joint density. In particular, a CGPM is required to sample/assess the joint distribution of a subset of output variables $x_{[r,Q]}$ conditioned on another subset $x_{[r,E]}$, and marginalizing over $x_{[r,[T]\setminus(Q\cup E)]}$.
Listing 1 Computational interface for composable generative population models.
- $s \leftarrow$ simulate($\mathcal{G}$, member: $r$, query: $Q = \{q_k\}$, evidence: $x_{[r,E]}$, input: $y_r$)
  Generate a sample from the distribution $s \sim_{\mathcal{G}} x_{[r,Q]} \mid \{x_{[r,E]}, y_r, \mathcal{D}\}$.
- $c \leftarrow$ logpdf($\mathcal{G}$, member: $r$, query: $x_{[r,Q]}$, evidence: $x_{[r,E]}$, input: $y_r$)
  Evaluate the log density $\log p_{\mathcal{G}}(x_{[r,Q]} \mid \{x_{[r,E]}, y_r, \mathcal{D}\})$.
- $\mathcal{G}' \leftarrow$ incorporate($\mathcal{G}$, measurement: $x_{[r,t]}$ or $y_r$)
  Record a measurement $x_{[r,t]} \in X_t$ (or $y_r$) into the dataset $\mathcal{D}$.
- $\mathcal{G}' \leftarrow$ unincorporate($\mathcal{G}$, member: $r$)
  Eliminate all measurements of input and output variables for member $r$.
- $\mathcal{G}' \leftarrow$ infer($\mathcal{G}$, program: $T$)
  Adjust internal latent state in accordance with the learning procedure specified by program $T$.
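For concreteness, the interface of Listing 1 can be rendered as an abstract base class in a few lines of Python. This is a minimal sketch with simplified, assumed signatures; it is not the reference implementation that ships with BayesDB.

from abc import ABC, abstractmethod
from typing import Any, Dict, List

class CGPM(ABC):
    """Composable generative population model: the interface of Listing 1."""

    @abstractmethod
    def simulate(self, rowid: int, query: List[int],
                 evidence: Dict[int, Any], inputs: Dict[int, Any]) -> Dict[int, Any]:
        """Draw x[r,Q] from G given {x[r,E], y_r, D} for member rowid."""

    @abstractmethod
    def logpdf(self, rowid: int, query: Dict[int, Any],
               evidence: Dict[int, Any], inputs: Dict[int, Any]) -> float:
        """Evaluate log p_G(x[r,Q] | x[r,E], y_r, D)."""

    @abstractmethod
    def incorporate(self, rowid: int, observation: Dict[int, Any]) -> None:
        """Record a measurement for member rowid into the dataset D."""

    @abstractmethod
    def unincorporate(self, rowid: int) -> None:
        """Eliminate all measurements of input and output variables for rowid."""

    @abstractmethod
    def infer(self, program: Any) -> None:
        """Adjust internal latent state per the specified learning procedure."""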
2.1 Primitive univariate CGPMs and their statistical data types
The statistical data type (Figure 1) of a population variable $x_t$ generated by a CGPM provides a
more refined taxonomy than its "observation space" $X_t$. The (parameterized) support of a statistical
type is the set in which samples from simulate take values. Each statistical type is also associated
with a base measure which ensures logpdf is well-defined. In high-dimensional populations with
heterogeneous types, logpdf is taken against the product measure of these base measures. The
statistical type also identifies invariants that the variable maintains. For instance, the values of a
NOMINAL variable are permutation-invariant. Figure 1 shows statistical data types provided by the
Metamodeling Language from BayesDB. The final column shows some examples of primitive CGPMs
that are compatible with each statistical type; they implement logpdf directly using univariate
probability density functions, and algorithms for simulate are well known [4]. For infer, their
parameters may be fixed, or learned from data using, e.g., maximum likelihood [2, Chapter 7] or
Bayesian priors [5]. We refer to an extended version of this paper [14, Section 3] for using these
primitives to implement CGPMs for a broad collection of model classes, including non-parametric
Bayes, nearest neighbors, PCA, discriminative machine learning, and multivariate kernel methods.
Statistical Data Type | Parameters       | Support             | Measure/σ-Algebra | Primitive CGPM
BINARY                | --               | {0, 1}              | (#, 2^{0,1})      | BERNOULLI
NOMINAL               | symbols: S       | {0, ..., S-1}       | (#, 2^[S])        | CATEGORICAL
COUNT/RATE            | base: b          | {0, 1/b, 2/b, ...}  | (#, 2^N)          | POISSON, GEOMETRIC
CYCLIC                | period: p        | (0, p)              | (λ, B(R))         | VON-MISES
MAGNITUDE             | --               | (0, ∞)              | (λ, B(R))         | LOGNORMAL, EXPON
NUMERICAL             | --               | (-∞, ∞)             | (λ, B(R))         | NORMAL
NUMERICAL-RANGED      | low: l, high: h  | (l, h) ⊂ R          | (λ, B(R))         | BETA, NORMAL-TRUNC

[The original figure also plots samples from the marginal distribution of each type: Frequency/Categorical (NOMINAL), Poisson/Geometric (COUNT), Von-Mises (CYCLIC), Lognormal/Exponential (MAGNITUDE), Normal (NUMERICAL), and Beta/Normal-Trunc (NUMERICAL-RANGED).]
Figure 1: Statistical data types for population variables generated by CGPMs available in the
BayesDB Metamodeling Language, and samples from their marginal distributions.
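As an illustration of a primitive CGPM, the sketch below implements a NUMERICAL-type variable with the NORMAL primitive from the last row of the table. The infer step here uses maximum likelihood, one simple choice among those cited above; the class is hypothetical and follows the shape of Listing 1 rather than any shipped implementation.

import math
import random

class NormalCGPM:
    """Primitive CGPM for one NUMERICAL variable: support (-inf, inf),
    Lebesgue base measure, parameters (mu, sigma)."""

    def __init__(self, mu=0.0, sigma=1.0):
        self.mu, self.sigma = mu, sigma
        self.data = {}      # rowid -> observed value

    def simulate(self, rowid):
        return random.gauss(self.mu, self.sigma)

    def logpdf(self, rowid, x):
        z = (x - self.mu) / self.sigma
        return -0.5 * z * z - math.log(self.sigma) - 0.5 * math.log(2 * math.pi)

    def incorporate(self, rowid, x):
        self.data[rowid] = x

    def unincorporate(self, rowid):
        del self.data[rowid]

    def infer(self, program=None):
        # Maximum-likelihood update of (mu, sigma) from incorporated data.
        xs = list(self.data.values())
        if len(xs) >= 2:
            self.mu = sum(xs) / len(xs)
            var = sum((v - self.mu) ** 2 for v in xs) / len(xs)
            self.sigma = max(math.sqrt(var), 1e-6)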
2.2 Implementing general CGPMs as probabilistic programs in VentureScript
In this section, we show how to implement simulate and logpdf (Listing 1) for composable generative models written in VentureScript [8], a probabilistic programming language with programmable inference. For simplicity, this section assumes a stronger conditional independence constraint,
$$\exists\, l, l' \in [L] \text{ such that } (r,t) \neq (r',t') \implies x_{[r,t]} \perp\!\!\!\perp x_{[r',t']} \mid \{\alpha, \theta, z_{[r,l]}, z_{[r',l']}, y_r, y_{r'}\}. \qquad (2)$$
In words, for every observable element $x_{[r,t]}$, there exists a latent variable $z_{[r,l]}$ which (in addition to $\theta$) mediates all coupling with other variables in the population. The member latents $Z$ may still exhibit arbitrary dependencies. The approach for simulate and logpdf described below is based on approximate inference in tagged subparts of the Venture trace, which carries a full realization of all random choices (population and member-specific latent variables) made by the program. The runtime system carries a set of $K$ traces $\{(\theta^k, Z^k)\}_{k=1}^{K}$ sampled from an approximate posterior $p_{\mathcal{G}}(\theta, Z \mid \mathcal{D})$. These traces are assigned weights depending on the user-specified evidence $x_{[r,E]}$ in the simulate/logpdf function call. $\mathcal{G}$ represents the CGPM as a probabilistic program, and the input $y_r$ and latent variables $Z^k$ are treated as ambient quantities in $\theta^k$. The distribution of interest is
$$p_{\mathcal{G}}(x_{[r,Q]} \mid x_{[r,E]}, \mathcal{D}) = \int_{\theta} p_{\mathcal{G}}(x_{[r,Q]} \mid x_{[r,E]}, \theta, \mathcal{D})\, p_{\mathcal{G}}(\theta \mid x_{[r,E]}, \mathcal{D})\, d\theta$$
$$= \int_{\theta} p_{\mathcal{G}}(x_{[r,Q]} \mid x_{[r,E]}, \theta, \mathcal{D})\, \frac{p_{\mathcal{G}}(x_{[r,E]} \mid \theta, \mathcal{D})\, p_{\mathcal{G}}(\theta \mid \mathcal{D})}{p_{\mathcal{G}}(x_{[r,E]} \mid \mathcal{D})}\, d\theta \qquad (3)$$
$$\approx \frac{1}{\sum_{k=1}^{K} w^k} \sum_{k=1}^{K} p_{\mathcal{G}}(x_{[r,Q]} \mid x_{[r,E]}, \theta^k, \mathcal{D})\, w^k, \quad \text{where } \theta^k \sim_{\mathcal{G}} \cdot \mid \mathcal{D}. \qquad (4)$$
The weight $w^k = p_{\mathcal{G}}(x_{[r,E]} \mid \theta^k, \mathcal{D})$ of trace $\theta^k$ is the likelihood of the evidence. The weighting scheme (4) is a computational trade-off avoiding the requirement to run posterior inference on population parameters $\theta$ for a query about member $r$. It suffices to derive the distribution for only $\theta^k$,
$$p_{\mathcal{G}}(x_{[r,Q]} \mid x_{[r,E]}, \theta^k, \mathcal{D}) = \int_{z_r^k} p_{\mathcal{G}}(x_{[r,Q]}, z_r^k \mid x_{[r,E]}, \theta^k, \mathcal{D})\, dz_r^k \qquad (5)$$
$$= \int_{z_r^k} \prod_{q \in Q} p_{\mathcal{G}}(x_{[r,q]} \mid z_r^k, \theta^k)\, p_{\mathcal{G}}(z_r^k \mid x_{[r,E]}, \theta^k, \mathcal{D})\, dz_r^k \approx \frac{1}{J}\sum_{j=1}^{J} \prod_{q \in Q} p_{\mathcal{G}}(x_{[r,q]} \mid z_r^{k,j}, \theta^k), \qquad (6)$$
where $z_r^{k,j} \sim_{\mathcal{G}} \cdot \mid \{x_{[r,E]}, \theta^k, \mathcal{D}\}$. Eq (5) suggests that simulate can be implemented by sampling $(x_{[r,Q]}, z_r^k) \sim_{\mathcal{G}} \cdot \mid \{x_{[r,E]}, \theta^k, \mathcal{D}\}$ from the joint local posterior, then returning elements $x_{[r,Q]}$. Eq (6) shows that logpdf can be implemented by first sampling the member latents $z_r^k \sim_{\mathcal{G}} \cdot \mid \{x_{[r,E]}, \theta^k, \mathcal{D}\}$ from the local posterior; using the conditional independence constraint (2), the query $x_{[r,Q]}$ then factors into a product of density terms for each element $x_{[r,q]}$.
To aggregate over $\{\theta^k\}_{k=1}^{K}$, for simulate the runtime obtains the queried sample by first drawing $k \sim \textsc{Categorical}(\{w^1, \dots, w^K\})$, then returns the sample $x_{[r,Q]}$ drawn from trace $\theta^k$. Similarly, logpdf is computed using the weighted Monte Carlo estimator (6). Algorithms 2a and 2b summarize implementations of simulate and logpdf in a general probabilistic programming environment.
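The estimators in Eqs (4) and (6) translate directly into code. The following Python sketch mirrors the logpdf computation of Algorithms 2a and 2b below; the helpers sample_latents (a sampler targeting the local posterior over member latents) and member_logpdf (the per-element density) are hypothetical callables supplied by the runtime, and the handling of the trace weight is simplified.

import math

def logsumexp(vs):
    m = max(vs)
    return m + math.log(sum(math.exp(v - m) for v in vs))

def logpdf_weighted(traces, query, evidence, sample_latents, member_logpdf, J=30):
    """Estimate log p(x_Q | x_E, D) across K traces, per Eqs (4) and (6).

    traces: population-parameter samples theta^k, one per trace.
    query, evidence: dicts mapping variable index -> value.
    """
    log_weights, log_q = [], []
    for theta in traces:
        # w^k = p(x_E | theta^k, D), accumulated in log space from a prior draw.
        z0 = sample_latents(theta, {})
        log_w = sum(member_logpdf(theta, z0, e, x) for e, x in evidence.items())
        # Eq (6): simple Monte Carlo over J local-posterior latent samples.
        log_h = []
        for _ in range(J):
            z = sample_latents(theta, evidence)
            log_h.append(sum(member_logpdf(theta, z, q, x) for q, x in query.items()))
        log_r = logsumexp(log_h) - math.log(J)
        log_weights.append(log_w)
        log_q.append(log_r + log_w)        # q^k = r^k * w^k, as in Algorithm 2b
    return logsumexp(log_q) - logsumexp(log_weights)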
Algorithm 2a simulate for CGPMs in a probabilistic programming environment.
1: function SIMULATE($\mathcal{G}$, $r$, $Q$, $x_{[r,E]}$, $y_r$)
2:   for $k = 1, \dots, K$ do  ▷ for each trace $k$
3:     if $z_r^k \notin Z^k$ then  ▷ if member $r$ has unknown local latents
4:       $z_r^k \sim_{\mathcal{G}} \cdot \mid \{\theta^k, Z^k, \mathcal{D}\}$  ▷ sample them from the prior
5:     $w^k \leftarrow \prod_{e \in E} p_{\mathcal{G}}(x_{[r,e]} \mid \theta^k, z_r^k)$  ▷ weight the trace by likelihood of evidence
6:   $k \sim \textsc{Categorical}(\{w^1, \dots, w^K\})$  ▷ importance resample the traces
7:   $\{x_{[r,Q]}, z_r^k\} \sim_{\mathcal{G}} \cdot \mid \{\theta^k, Z^k, \mathcal{D} \cup \{y_r, x_{[r,E]}\}\}$  ▷ run a transition operator leaving target invariant
8:   return $x_{[r,Q]}$  ▷ select query variables from the resampled trace
Algorithm 2b logpdf for CGPMs in a probabilistic programming environment.
1: function LOGPDF($\mathcal{G}$, $r$, $x_{[r,Q]}$, $x_{[r,E]}$, $y_r$)
2:   for $k = 1, \dots, K$ do  ▷ for each trace $k$
3:     Run steps 2 through 5 from Algorithm 2a  ▷ retrieve the trace weight
4:     for $j = 1, \dots, J$ do  ▷ obtain $J$ samples of latents in scope of member $r$
5:       $z_r^{k,j} \sim_{\mathcal{G}} \cdot \mid \{\theta^k, Z^k, \mathcal{D} \cup \{y_r, x_{[r,E]}\}\}$  ▷ run a transition operator leaving target invariant
6:       $h^{k,j} \leftarrow \prod_{q \in Q} p_{\mathcal{G}}(x_{[r,q]} \mid \theta^k, z_r^{k,j})$  ▷ compute the density estimate
7:     $r^k \leftarrow \frac{1}{J}\sum_{j=1}^{J} h^{k,j}$  ▷ aggregate density estimates by simple Monte Carlo
8:     $q^k \leftarrow r^k w^k$  ▷ importance weight the estimate
9:   return $\log\big(\sum_{k=1}^{K} q^k\big) - \log\big(\sum_{k=1}^{K} w^k\big)$  ▷ weighted importance sampling over all traces

2.3 Inference in a composite network of CGPMs
This section shows how CGPMs are composed by applying the output of one to the input of another. This allows us to build complex probabilistic models out of simpler primitives directly as software. Section 3 demonstrates surface-level syntaxes in the Metamodeling Language for constructing these composite structures. We report experiments including up to three layers of composed CGPMs.
Let $\mathcal{G}^a$ be a CGPM with output $x^a_*$ and input $y^a_*$, and let $\mathcal{G}^b$ have output $x^b_*$ and input $y^b_*$ (the symbol $*$ indexes all members $r \in \mathbb{N}$). The composition $\mathcal{G}^b_B \circ \mathcal{G}^a_A$ applies the subset of outputs $x^a_{[*,A]}$ of $\mathcal{G}^a$ to the inputs $y^b_{[*,B]}$ of $\mathcal{G}^b$, where $|A| = |B|$ and the variables are type-matched (Figure 1). This operation results in a new CGPM $\mathcal{G}^c$ with output $x^a_* \cup x^b_*$ and input $y^a_* \cup y^b_{[*,\setminus B]}$. In general, a collection $\{\mathcal{G}^k : k \in [K]\}$ of CGPMs can be organized into a generalized directed graph $\mathcal{G}^{[K]}$, which itself is a CGPM. Node $k$ is an "internal" CGPM $\mathcal{G}^k$, and the labeled edge $a_A \to b_B$ denotes the composition $\mathcal{G}^b_B \circ \mathcal{G}^a_A$. The directed acyclic edge structure applies only to edges between elements of different CGPMs in the network; elements $x^k_{[*,i]}, x^k_{[*,j]}$ within $\mathcal{G}^k$ may satisfy the more general constraint (1).
Algorithms 3a and 3b show sampling-importance-resampling and ratio-likelihood weighting algorithms that combine simulate and logpdf from each individual $\mathcal{G}^k$ to compute queries against the network $\mathcal{G}^{[K]}$. The symbol $\pi^k = \{(p, t) : x^p_{[*,t]} \to y^k_*\}$ refers to the set of all output elements from upstream CGPMs connected to the inputs of $\mathcal{G}^k$, so that $\{\pi^k : k \in [K]\}$ encodes the graph adjacency matrix. Subroutine 3c generates a full realization of all unconstrained variables, and weights forward samples from the network by the likelihood of the constraints. Algorithm 3b is based on ratio-likelihood weighting (both terms in line 6 are computed by unnormalized importance sampling) and admits an analysis with known error bounds when logpdf and simulate of each $\mathcal{G}^k$ are exact [7].
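A compact Python rendering of the WEIGHTED-SAMPLE subroutine (Algorithm 3c, shown below) may help fix ideas. The node objects are assumed to expose the simulate/logpdf interface of Listing 1, with the hypothetical keyword signatures sketched earlier; parents encodes the π^k adjacency structure, and simulate is assumed to return the unconstrained outputs as a dict.

from graphlib import TopologicalSorter

def weighted_sample(nodes, parents, constraints, rowid):
    """Forward-sample a DAG of CGPMs, weighting by constrained nodes.

    nodes: dict name -> CGPM-like object with simulate/logpdf.
    parents: dict name -> list of (parent_name, variable) pairs (the pi^k sets).
    constraints: dict name -> dict of constrained output variables x[r,C^k].
    Returns (sample, log_weight), following Algorithm 3c.
    """
    sample, log_w = {}, 0.0
    order = TopologicalSorter(
        {k: [p for p, _ in parents.get(k, [])] for k in nodes}).static_order()
    for k in order:
        # Inputs at node k: upstream outputs already placed in the sample.
        y_k = {var: sample[(p, var)] for p, var in parents.get(k, [])}
        c_k = constraints.get(k, {})
        if c_k:   # update the weight by the likelihood of the constraint
            log_w += nodes[k].logpdf(rowid, query=c_k, evidence={}, inputs=y_k)
        free = nodes[k].simulate(rowid, query=None, evidence=c_k, inputs=y_k)
        for var, val in {**c_k, **free}.items():
            sample[(k, var)] = val
    return sample, log_w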
Algorithm 3a simulate in a directed acyclic network of CGPMs.
1: function SIMULATE($\mathcal{G}^k$, $r$, $Q^k$, $x^k_{[r,E^k]}$, $y^k_r$, for $k \in [K]$)
2:   for $j = 1, \dots, J$ do  ▷ generate $J$ importance samples
3:     $(s_j, w_j) \leftarrow$ WEIGHTED-SAMPLE($\{x^k_{[r,E^k]} : k \in [K]\}$)  ▷ retrieve $j$th weighted sample
4:   $m \sim \textsc{Categorical}(\{w_1, \dots, w_J\})$  ▷ resample by importance weights
5:   return $\{x^k_{[r,Q^k]} \subset s_m : k \in [K]\}$  ▷ return query variables from the selected sample
Algorithm 3b logpdf in a directed acyclic network of CGPMs.
1: function LOGPDF($\mathcal{G}^k$, $r$, $x^k_{[r,Q^k]}$, $x^k_{[r,E^k]}$, $y^k_r$, for $k \in [K]$)
2:   for $j = 1, \dots, J$ do  ▷ generate $J$ importance samples
3:     $(s_j, w_j) \leftarrow$ WEIGHTED-SAMPLE($\{x^k_{[r,Q^k \cup E^k]} : k \in [K]\}$)  ▷ joint density of query/evidence
4:   for $j = 1, \dots, J'$ do  ▷ generate $J'$ importance samples
5:     $(s'_j, w'_j) \leftarrow$ WEIGHTED-SAMPLE($\{x^k_{[r,E^k]} : k \in [K]\}$)  ▷ marginal density of evidence
6:   return $\log\big(\sum_{[J]} w_j \big/ \sum_{[J']} w'_j\big) - \log(J/J')$  ▷ return likelihood ratio importance estimate
Algorithm 3c Weighted forward sampling in a directed acyclic network of CGPMs.
1: function WEIGHTED-SAMPLE(constraints: $x^k_{[r,C^k]}$, for $k \in [K]$)
2:   $(s, \log w) \leftarrow (\emptyset, 0)$  ▷ initialize empty sample with zero weight
3:   for $k \in$ TOPOSORT($\{\pi^1, \dots, \pi^K\}$) do  ▷ topologically sort CGPMs using adjacency matrix
4:     $\hat{y}^k_r \leftarrow y^k_r \cup \{x^p_{[r,t]} \in s : (p,t) \in \pi^k\}$  ▷ retrieve required inputs at node $k$
5:     $\log w \leftarrow \log w\, +$ logpdf($\mathcal{G}^k$, $r$, $x^k_{[r,C^k]}$, $\emptyset$, $\hat{y}^k_r$)  ▷ update weight by likelihood of constraint
6:     $x^k_{[r,\setminus C^k]} \leftarrow$ simulate($\mathcal{G}^k$, $r$, $\setminus C^k$, $x^k_{[r,C^k]}$, $\hat{y}^k_r$)  ▷ simulate unconstrained nodes
7:     $s \leftarrow s \cup x^k_{[r,C^k \cup \setminus C^k]}$  ▷ append all node values to sample
8:   return $(s, w)$  ▷ return the overall sample and its weight

3 Analyzing satellites using CGPMs built from causal probabilistic programs, discriminative machine learning, and Bayesian non-parametrics
This section outlines a case study applying CGPMs to a database of 1163 satellites maintained by
the Union of Concerned Scientists [12]. The dataset contains 23 numerical and categorical features
of each satellite such as its material, functional, physical, orbital and economic characteristics. The
list of variables and examples of three representative satellites are shown in Table 1. A detailed
study of this database using BayesDB is provided in [10]. Here, we compose the baseline CGPM
in BayesDB, CrossCat [9], a non-parametric Bayesian structure learner for high dimensional data
tables, with several CGPMs: a classical physics model written in VentureScript, a random forest
classifier, factor analysis, and an ordinary least squares regressor. These composite models allow us
to identify satellites that probably violate their orbital mechanics (Figure 2), as well as accurately
infer the anticipated lifetimes of new satellites (Figure 3). We refer to [14, Section 6] for several
more experiments on a broader set of data analysis tasks, as well as comparisons to baseline machine
learning solutions.
Variable                      | International Space Station            | AAUSat-3                    | Advanced Orion 5 (NRO L-32, USA 223)
Country of Operator           | Multinational                          | Denmark                     | USA
Operator Owner                | NASA/Multinational                     | Aalborg University          | National Reconnaissance Office (NRO)
Users                         | Government                             | Civil                       | Military
Purpose                       | Scientific Research                    | Technology Development      | Electronic Surveillance
Class of Orbit                | LEO                                    | LEO                         | GEO
Type of Orbit                 | Intermediate                           | NaN                         | NaN
Perigee km                    | 401                                    | 770                         | 35500
Apogee km                     | 422                                    | 787                         | 35500
Eccentricity                  | 0.00155                                | 0.00119                     | 0
Period minutes                | 92.8                                   | 100.42                      | NaN
Launch Mass kg                | NaN                                    | 0.8                         | 5000
Dry Mass kg                   | NaN                                    | NaN                         | NaN
Power watts                   | NaN                                    | NaN                         | NaN
Date of Launch                | 36119                                  | 41330                       | 40503
Anticipated Lifetime          | 30                                     | 1                           | NaN
Contractor                    | Boeing Satellite Systems/Multinational | Aalborg University          | National Reconnaissance Laboratory
Country of Contractor         | Multinational                          | Denmark                     | USA
Launch Site                   | Baikonur Cosmodrome                    | Satish Dhawan Space Center  | Cape Canaveral
Launch Vehicle                | Proton                                 | PSLV                        | Delta 4 Heavy
Source Used for Orbital Data  | www.satellitedebris.net 12/12          | SC - ASCR                   | SC - ASCR
longitude radians of geo      | NaN                                    | NaN                         | 1.761037215
Inclination radians           | 0.9005899                              | 1.721418241                 | 0

Table 1: Variables in the satellite population, and three representative satellites. The records are
multivariate, heterogeneously typed, and contain arbitrary patterns of missing data.
CREATE TABLE satellites_ucs FROM 'satellites.csv';
CREATE POPULATION satellites FOR satellites_ucs WITH SCHEMA ( GUESS STATTYPES FOR (*) );
CREATE METAMODEL satellites_hybrid FOR satellites WITH BASELINE CROSSCAT (
OVERRIDE GENERATIVE MODEL FOR type_of_orbit
GIVEN apogee_km, perigee_km, period_minutes, users, class_of_orbit
USING RANDOM_FOREST (num_categories = 7);
OVERRIDE GENERATIVE MODEL FOR launch_mass_kg, dry_mass_kg, power_watts, perigee_km, apogee_km
USING FACTOR_ANALYSIS (dimensionality = 2);
OVERRIDE GENERATIVE MODEL FOR period_minutes
AND EXPOSE kepler_cluster_id CATEGORICAL, kepler_noise NUMERICAL
GIVEN apogee_km, perigee_km USING VENTURESCRIPT (program = '
define dpmm_kepler = () -> {
// Definition of DPMM Kepler model program.
assume keplers_law = (apogee, perigee) -> {
(GM, earth_radius) = (398600, 6378);
a = .5*(abs(apogee) + abs(perigee)) + earth_radius;
2 * pi * sqrt(a**3 / GM) / 60 };
// Latent variable priors.
assume crp_alpha = gamma(1,1);
assume cluster_id_sampler = make_crp(crp_alpha);
assume noise_sampler = mem((cluster) -> make_nig_normal(1, 1, 1, 1));
// Simulator for latent variables (kepler_cluster_id and kepler_noise).
assume sim_cluster_id = mem((rowid, apogee, perigee) -> {
cluster_id_sampler() #rowid:1 });
assume sim_noise = mem((rowid, apogee, perigee) -> {
cluster_id = sim_cluster_id(rowid, apogee, perigee);
noise_sampler(cluster_id)() #rowid:2 });
// Simulator for observable variable (period_minutes).
assume sim_period = mem((rowid, apogee, perigee) -> {
keplers_law(apogee, perigee) + sim_noise(rowid, apogee, perigee) });
assume outputs = [sim_period, sim_cluster_id, sim_noise];
// List of output variables.
};
// Procedures for observing the output variables.
define obs_cluster_id = (rowid, apogee, perigee, value, label) -> {
$label: observe sim_cluster_id( $rowid, $apogee, $perigee) = atom(value); };
define obs_noise = (rowid, apogee, perigee, value, label) -> {
$label: observe sim_noise( $rowid, $apogee, $perigee) = value; };
define obs_period = (rowid, apogee, perigee, value, label) -> {
theoretical_period = run(sample keplers_law($apogee, $perigee));
obs_noise( rowid, apogee, perigee, value - theoretical_period, label); };
define observers = [obs_period, obs_cluster_id, obs_noise];
// List of observer procedures.
define inputs = ["apogee", "perigee"];
// List of input variables.
define transition = (N) -> { default_markov_chain(N) };
// Transition operator.
'));
INITIALIZE 10 MODELS FOR satellites_hybrid;
ANALYZE satellites_hybrid FOR 100 ITERATIONS;
INFER name, apogee_km, perigee_km, period_minutes, kepler_cluster_id, kepler_noise FROM satellites;
[Figure 2 plots. Left panel, "Clusters Identified by Kepler CGPM": period [mins] versus perigee [km], showing Clusters 1-4, the region of theoretically feasible orbits, and the satellites Geotail, Amos5, NavStar, Meridian4, and Orion6. Right panel, "Empirical Distribution of Orbital Deviations": number of satellites versus magnitude of deviation from Kepler's Law [mins^2], ranging from negligible through noticeable and large to extreme.]
Figure 2: A session in BayesDB to detect satellites whose orbits are likely violations of
Kepler's Third Law, using a causal composable generative population model written in
VentureScript. The dpmm_kepler CGPM learns a DPMM on the residuals of each
satellite's deviation from its theoretical orbit. Both the cluster identity and the inferred noise are
exposed latent variables (via the EXPOSE clause). Each dot in the scatter plot (left) is a satellite in the population,
and its color represents the latent cluster assignment learned by dpmm_kepler. The histogram
(right) shows that each of the four detected clusters roughly translates to a qualitative description
of the deviation: yellow (negligible), magenta (noticeable), green (large), and blue (extreme).
CREATE TABLE data_train FROM 'sat_train.csv';
.nullify data_train 'NaN';

CREATE POPULATION satellites FOR data_train
WITH SCHEMA(
    GUESS STATTYPES FOR (*)
);

CREATE METAMODEL crosscat_ols FOR satellites
WITH BASELINE CROSSCAT(
    OVERRIDE GENERATIVE MODEL FOR
        anticipated_lifetime
    GIVEN
        type_of_orbit, perigee_km, apogee_km,
        period_minutes, date_of_launch,
        launch_mass_kg
    USING LINEAR_REGRESSION
);

INITIALIZE 4 MODELS FOR crosscat_ols;
ANALYZE crosscat_ols FOR 100 ITERATION WAIT;

CREATE TABLE data_test FROM 'sat_test.csv';
.nullify data_test 'NaN';
.sql INSERT INTO data_train
    SELECT * FROM data_test;

CREATE TABLE predicted_lifetime AS
INFER EXPLICIT
    PREDICT anticipated_lifetime
    CONFIDENCE prediction_confidence
FROM satellites WHERE _rowid_ > 1000;

(a) Full session in BayesDB which loads the training and test sets, creates a hybrid CGPM, and runs the regression using CrossCat+OLS.

import numpy as np
import pandas as pd

def dummy_code_categoricals(frame, maximum=10):

    def dummy_code_categoricals(series):
        categories = pd.get_dummies(
            series, dummy_na=1)
        if len(categories.columns) > maximum-1:
            return None
        if sum(categories[np.nan]) == 0:
            del categories[np.nan]
        categories.drop(
            categories.columns[-1], axis=1,
            inplace=1)
        return categories

    def append_frames(base, right):
        for col in right.columns:
            base[col] = pd.DataFrame(right[col])

    numerical = frame.select_dtypes([float])
    categorical = frame.select_dtypes([object])

    categorical_coded = filter(
        lambda s: s is not None,
        [dummy_code_categoricals(categorical[c])
            for c in categorical.columns])

    joined = numerical
    for sub_frame in categorical_coded:
        append_frames(joined, sub_frame)

    return joined

(b) Ad-hoc Python routine (used by baselines) for coding nominal predictors in a dataframe with missing values and mixed data types.
[Figure 3 plot: mean squared error versus lines of code for ridge, OLS, lasso, kernel, forest, bayesdb(crosscat+ols), and bayesdb(crosscat).]
Figure 3: In a high-dimensional regression problem with mixed data types and missing data,
the composite CGPM improves prediction accuracy over purely generative and purely discriminative baselines. The task is to infer the anticipated lifetime of a held-out satellite given categorical
and numerical features such as type of orbit, launch mass, and orbital period. As feature vectors in
the test set have missing entries, purely discriminative models (ridge, lasso, OLS) either heuristically
impute missing features, or ignore the features and predict the anticipated lifetime using the mean
in the training set. The purely generative model (CrossCat) can impute missing features from their
joint distribution, but only indirectly mediates dependencies between the predictors and response
through latent variables. The composite CGPM (CrossCat+OLS) in panel (a) combines advantages
of both approaches; statistical imputation followed by regression on the features leads to improved
predictive accuracy. The reduced code size is a result of using SQL, BQL, & MML, for preprocessing,
model-building and predictive querying, as opposed to collections of ad-hoc scripts such as panel (b).
Figure 2 shows the MML program for constructing the hybrid CGPM on the satellites population. In terms of the compositional formalism from Section 2.3, the CrossCat CGPM (specified by the MML BASELINE keyword) learns the joint distribution of variables at the "root" of the network (i.e., all variables from Table 1 which do not appear as arguments to an MML OVERRIDE command). The dpmm_kepler CGPM in the top panel of Figure 2 accepts apogee_km and perigee_km as input variables $y = (A, P)$, and produces the period_minutes $x = (T)$ as output. These variables characterize the elliptical orbit of a satellite and are constrained by the relationships $e = (A-P)/(A+P)$ and $T = 2\pi\sqrt{((A+P)/2)^3/GM}$, where $e$ is the eccentricity and $GM$ is a physical constant. The program specifies a stochastic version of Kepler's Law using a Dirichlet process mixture model for the distribution over errors (between the theoretical and observed period),
$$P \sim \mathrm{DP}\big(\alpha, \textsc{Normal-Inverse-Gamma}(m, V, a, b)\big), \qquad (\mu_r, \sigma_r^2) \mid P \sim P,$$
$$\epsilon_r \mid \{\mu_r, \sigma_r^2, y_r\} \sim \textsc{Normal}(\cdot \mid \mu_r, \sigma_r^2), \quad \text{where } \epsilon_r := T_r - \textsc{Kepler}(A_r, P_r).$$
The lower panels of Figure 2 illustrate how the dpmm_kepler CGPM clusters satellites based on the magnitude of the deviation from their theoretical orbits; the variables (deviation, cluster identity, etc.) in these figures are obtained from the final BQL INFER query in Figure 2. For instance, the satellite Orion6, shown in the right panel of Figure 2, belongs to a component with "extreme" deviation. Further investigation reveals that Orion6 has a recorded period of 23.94 minutes, most likely a data entry error for the true period of 24 hours (1440 minutes); we have reported such errors to the maintainers of the database.
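The diagnosis is easy to verify numerically. The short Python check below, an illustrative companion using the same constants as the keplers_law procedure in Figure 2, recomputes the theoretical period for a geosynchronous-class orbit like Orion6's (apogee = perigee = 35500 km):

import math

def kepler_period_minutes(apogee_km, perigee_km, GM=398600.0, earth_radius=6378.0):
    # Semi-major axis measured from Earth's center, as in keplers_law.
    a = 0.5 * (abs(apogee_km) + abs(perigee_km)) + earth_radius
    return 2 * math.pi * math.sqrt(a ** 3 / GM) / 60.0

print(kepler_period_minutes(35500, 35500))
# ~1421.5 minutes, i.e. roughly a day -- nowhere near the recorded 23.94 minutes.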
The data analysis task in Figure 3 is to infer the anticipated_lifetime xr of a new satellite, given
a set of features yr such as its type_of_orbit and perigee_km. A simple OLS regressor with
normal errors is used for the response pG ols (xr |yr ). The CrossCat baseline learns a joint generative
model for the covariates pG crosscat (yr ). The composite CGPM crosscat_ols built Figure 3 (left
panel) thus carries the full joint distribution over the predictors and response pG (xr , yr ), leading to
more accurate predictions. Advantages of this hybrid approach are further discussed in the figure.
4 Related Work and Discussion
This paper has shown that it is possible to use a computational formalism in probabilistic programming
to uniformly apply, combine, and compare a broad class of probabilistic data analysis techniques.
By integrating CGPMs into BayesDB [10] and expressing their compositions in the Metamodeling
Language, we have shown it is possible to combine CGPMs synthesized by automatic model discovery
[9] with custom probabilistic programs, which accept and produce multivariate inputs and outputs,
into coherent joint probabilistic models. Advantages of this hybrid approach to modeling and inference
include combining the strengths of both generative and discriminative techniques, as well as savings
in code complexity from the uniformity of the CGPM interface.
While our experiments have constructed CGPMs using VentureScript and Python implementations,
the general probabilistic programming interface of CGPMs makes it possible for BayesDB to interact
with a variety of systems such as BUGS [15], Stan [1], BLOG [11], Figaro [13], and others. Each of
these systems provides varying levels of model expressiveness and inference capabilities, and can
be used to be construct domain-specific CGPMs with different performance properties based on
the data analysis task on hand. Moreover, by expressing the data analysis tasks in BayesDB using
the model-independent Bayesian Query Language [10, Section 3], CGPMs can be queried without
necessarily exposing their internal structures to end users. Taken together, these characteristics help
illustrate the broad utility of the BayesDB probabilistic programming platform and architecture [14,
Section 5], which in principle can be used to create and query novel combinations of black-box
machine learning, statistical modeling, computer simulation, and probabilistic generative models.
Our applications have so far focused on CGPMs for analyzing populations from standard multivariate
statistics. A promising area for future work is extending the computational abstraction of CGPMs,
as well as the Metamodeling and Bayesian Query Languages, to cover analysis tasks in other
domains such as longitudinal populations [3], statistical relational settings [6], or natural language
processing and computer vision. Another extension, important in practice, is developing alternative
compositional algorithms for querying CGPMs (Section 2.3). The importance sampling strategy used
for compositional simulate and logpdf may only be feasible when the networks are shallow and
the constituent CGPMs are fairly noisy; better Monte Carlo strategies or perhaps even variational
strategies may be needed for deeper networks. Additional future work for composite CGPMs include
(i) algorithms for jointly learning the internal parameters of each individual CGPM, using, e.g.,
imputations from its parents, and (ii) new meta-algorithms for structure learning among a collection
of compatible CGPMs, in a similar spirit to the non-parametric divide-and-conquer method from [9].
We hope the formalisms in this paper lead to practical, unifying tools for data analysis that integrate
these ideas, and provide abstractions that enable the probabilistic programming community to
collaboratively explore these research directions.
References
[1] B. Carpenter, A. Gelman, M. Hoffman, D. Lee, B. Goodrich, M. Betancourt, M. A. Brubaker,
J. Guo, P. Li, and A. Riddell. Stan: A probabilistic programming language. J Stat Softw, 2016.
[2] G. Casella and R. Berger. Statistical Inference. Duxbury advanced series in statistics and
decision sciences. Thomson Learning, 2002.
[3] M. Davidian and D. M. Giltinan. Nonlinear models for repeated measurement data, volume 62.
CRC press, 1995.
[4] L. Devroye. Sample-based non-uniform random variate generation. In Proceedings of the 18th
conference on Winter simulation, pages 260?265. ACM, 1986.
[5] D. Fink. A compendium of conjugate priors. 1997.
[6] N. Friedman, L. Getoor, D. Koller, and A. Pfeffer. Learning probabilistic relational models. In
Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, IJCAI 99,
Stockholm, Sweden, July 31 - August 6, 1999. 2 Volumes, 1450 pages, pages 1300?1309, 1999.
[7] D. Koller and N. Friedman. Probabilistic graphical models: principles and techniques. MIT
press, 2009.
[8] V. Mansinghka, D. Selsam, and Y. Perov. Venture: a higher-order probabilistic programming
platform with programmable inference. CoRR, abs/1404.0099, 2014.
[9] V. Mansinghka, P. Shafto, E. Jonas, C. Petschulat, M. Gasner, and J. B. Tenenbaum. Crosscat:
A fully bayesian nonparametric method for analyzing heterogeneous, high dimensional data.
arXiv preprint arXiv:1512.01272, 2015.
[10] V. Mansinghka, R. Tibbetts, J. Baxter, P. Shafto, and B. Eaves. Bayesdb: A probabilistic programming system for querying the probable implications of data. arXiv preprint arXiv:1512.05006,
2015.
[11] B. Milch, B. Marthi, S. Russell, D. Sontag, D. L. Ong, and A. Kolobov. BLOG: Probabilistic
models with unknown objects. Statistical relational learning, page 373, 2007.
[12] U. of Concerned Scientists. UCS Satellite Database, 2015.
[13] A. Pfeffer. Figaro: An object-oriented probabilistic programming language. Charles River
Analytics Technical Report, 137, 2009.
[14] F. Saad and V. Mansinghka. Probabilistic data analysis with probabilistic programming. arXiv
preprint arXiv:1608.05347, 2016.
[15] D. J. Spiegelhalter, A. Thomas, N. G. Best, W. Gilks, and D. Lunn. BUGS: Bayesian inference
using Gibbs sampling. Version 0.5 (version ii), http://www.mrc-bsu.cam.ac.uk/bugs, 19, 1996.
Solving Random Systems of Quadratic Equations via
Truncated Generalized Gradient Flow
Gang Wang†,‡ and Georgios B. Giannakis†
† ECE Dept. and Digital Tech. Center, Univ. of Minnesota, Mpls, MN 55455, USA
‡ School of Automation, Beijing Institute of Technology, Beijing 100081, China
{gangwang, georgios}@umn.edu
Abstract
This paper puts forth a novel algorithm, termed truncated generalized gradient flow (TGGF), to solve for $x \in \mathbb{R}^n/\mathbb{C}^n$ a system of $m$ quadratic equations $y_i = |\langle a_i, x\rangle|^2$, $i = 1, 2, \dots, m$, which even for random $\{a_i \in \mathbb{R}^n/\mathbb{C}^n\}_{i=1}^m$ is known to be NP-hard in general. We prove that as soon as the number of equations $m$ is on the order of the number of unknowns $n$, TGGF recovers the solution exactly (up to a global unimodular constant) with high probability and complexity growing linearly with the time required to read the data $\{(a_i; y_i)\}_{i=1}^m$. Specifically, TGGF proceeds in two stages: s1) a novel orthogonality-promoting initialization that is obtained with simple power iterations; and s2) a refinement of the initial estimate by successive updates of scalable truncated generalized gradient iterations. The former is in sharp contrast to the existing spectral initializations, while the latter handles the rather challenging nonconvex and nonsmooth amplitude-based cost function. Empirical results demonstrate that: i) the novel orthogonality-promoting initialization method returns more accurate and robust estimates relative to its spectral counterparts; and ii) even with the same initialization, our refinement/truncation outperforms Wirtinger-based alternatives, all corroborating the superior performance of TGGF over state-of-the-art algorithms.
1 Introduction
Consider a system of $m$ quadratic equations
$$y_i = |\langle a_i, x\rangle|^2, \quad i \in [m] := \{1, 2, \ldots, m\} \qquad (1)$$
where the data vector $y := [y_1 \cdots y_m]^T$ and feature vectors $a_i \in \mathbb{R}^n/\mathbb{C}^n$, collected in the $m \times n$ matrix $A := [a_1 \cdots a_m]^H$, are known, whereas the vector $x \in \mathbb{R}^n/\mathbb{C}^n$ is the wanted unknown. When $\{a_i\}_{i=1}^m$ and/or $x$ are complex, their amplitudes are given but phase information is lacking; whereas in the real case only the signs of $\{\langle a_i, x\rangle\}$ are unknown. Supposing that the system of equations in (1) admits a unique solution $x$ (up to a global unimodular constant), our objective is to reconstruct $x$ from the $m$ phaseless quadratic equations, or equivalently, recover the missing signs/phases of $\langle a_i, x\rangle$ in the real-/complex-valued settings. Indeed, it has been established that $m \ge 2n-1$ or $m \ge 4n-4$ generic data $\{(a_i; y_i)\}_{i=1}^m$ as in (1) suffice for uniqueness of an $n$-dimensional real- or complex-valued vector $x$ [1, 2], respectively, and the former with equality has also been shown to be necessary [1].
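To make the data model concrete, the following Python sketch generates a real Gaussian instance of (1); it is our own illustration (function and variable names are not from the paper):

```python
import numpy as np

def generate_instance(n, m, seed=0):
    """Sample x and a real Gaussian quadratic system y_i = <a_i, x>^2 as in (1)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)           # ground-truth signal
    A = rng.standard_normal((m, n))      # rows are the feature vectors a_i^T
    y = (A @ x) ** 2                     # phaseless (squared) measurements
    psi = np.sqrt(y)                     # amplitudes psi_i = |<a_i, x>|
    return x, A, y, psi

# e.g., m = 6n equations in n = 100 unknowns
x, A, y, psi = generate_instance(n=100, m=600)
```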
The problem in (1) constitutes an instance of nonconvex quadratic programming, which is generally known to be NP-hard [3]. Specifically for real-valued vectors, this can be understood as a combinatorial optimization, since one seeks a series of signs $s_i = \pm 1$ such that the solution to the system of linear equations $\langle a_i, x\rangle = s_i\psi_i$, where $\psi_i := \sqrt{y_i}$, obeys the given quadratic system (1). Concatenating all amplitudes $\{\psi_i\}_{i=1}^m$ to form the vector $\psi := [\psi_1 \cdots \psi_m]^T$, apparently there are a total of $2^m$ different combinations of $\{s_i\}_{i=1}^m$, among which only two lead to $x$ up to a global sign.
The complex case becomes even more complicated, where instead of a set of signs $\{s_i\}_{i=1}^m$, one must specify for uniqueness a collection of unimodular complex scalars $\{\sigma_i \in \mathbb{C}\}_{i=1}^m$. In many fields of physical sciences and engineering, the problem of recovering the phase from intensity/magnitude-only measurements is commonly referred to as phase retrieval [4, 5]. The plethora of applications include X-ray crystallography, optics, as well as array imaging, where due to physical limitations, optical detectors can record only the (squared) modulus of the Fresnel or Fraunhofer diffraction pattern, while losing the phase of the incident light reaching the object [5]. It has been shown that reconstructing a discrete, finite-duration signal from its Fourier transform magnitude is NP-complete [6]. Despite its simple form and practical relevance across various fields, tackling the quadratic system (1) under real-/complex-valued settings is challenging and NP-hard in general.
1.1 Nonconvex Optimization
Adopting the least-squares criterion, the task of recovering $x$ can be recast as that of minimizing the following intensity-based empirical loss
$$\min_{z\in\mathbb{C}^n} f(z) := \frac{1}{2m}\sum_{i=1}^m \left(y_i - \left|a_i^H z\right|^2\right)^2 \qquad (2)$$
or the amplitude-based one
$$\min_{z\in\mathbb{C}^n} \ell(z) := \frac{1}{2m}\sum_{i=1}^m \left(\psi_i - \left|a_i^H z\right|\right)^2. \qquad (3)$$
Unfortunately, both cost functions (2) and (3) are nonconvex. Minimizing nonconvex objectives, which may exhibit many stationary points, is in general NP-hard [7]. In a nutshell, solving problems of the form (2) or (3) is challenging.
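For reference, both empirical losses translate directly into numpy; the sketch below is our own (real-valued case, with the matrix A holding the $a_i^T$ as rows):

```python
import numpy as np

def intensity_loss(z, A, y):
    """f(z) = (1/2m) * sum_i (y_i - (a_i^T z)^2)^2, as in (2), real case."""
    return np.sum((y - (A @ z) ** 2) ** 2) / (2 * A.shape[0])

def amplitude_loss(z, A, psi):
    """l(z) = (1/2m) * sum_i (psi_i - |a_i^T z|)^2, as in (3), real case."""
    return np.sum((psi - np.abs(A @ z)) ** 2) / (2 * A.shape[0])
```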
Existing approaches to solving (2) (or related ones using the Poisson likelihood; see, e.g., [8]) or (3) fall under two categories: nonconvex and convex ones. Popular nonconvex solvers include alternating projections such as Gerchberg-Saxton [9] and Fienup [10], AltMinPhase [11], and (truncated) Wirtinger flow (WF/TWF) [12, 8], as well as trust-region methods [13]. Convex approaches on the other hand rely on the so-called matrix-lifting technique to obtain the solvers abbreviated as PhaseLift [14] and PhaseCut [15].

In terms of sample complexity for Gaussian $\{a_i\}$ designs, convex approaches enable exact recovery from¹ $O(n)$ noiseless measurements [16], while they require solving a semidefinite program of a matrix variable with size $n \times n$, thus incurring worst-case computational complexity on the order of $O(n^{4.5})$ [15], which does not scale well with the dimensionality $n$. Upon exploiting the underlying problem structure, $O(n^{4.5})$ can be reduced to $O(n^3)$ [15]. Solving for vector variables, nonconvex approaches achieve significantly improved computational performance. Using formulation (3), AltMinPhase adopts a spectral initialization and establishes exact recovery with sample complexity $O(n\log^3 n)$ under Gaussian $\{a_i\}$ designs with resampling [11]. Concerning formulation (2), WF iteratively refines the spectral initial estimate by means of a gradient-like update [12]. The follow-up TWF improves upon WF through a truncation procedure to separate out gradient components of excessively extreme sizes. Likewise, at the initialization stage, since the term $(a_i^T x)^2 a_i a_i^H$ responsible for the spectral initialization is heavy-tailed, the data $\{y_i\}_{i=1}^m$ are pre-screened in the truncated spectral initialization to yield improved initial estimates [8]. Under Gaussian sampling models, WF allows exact recovery from $O(n\log n)$ measurements in $O(mn^2\log(1/\epsilon))$ time/flops to yield an $\epsilon$-accurate solution for any given $\epsilon > 0$ [12], while TWF advances these to $O(n)$ measurements and $O(mn\log(1/\epsilon))$ time [8]. Interestingly, the truncation procedure in the gradient stage turns out to be useful in avoiding spurious stationary points in the context of nonconvex optimization, although for large-scale linear regressions similar ideas including censoring have been studied [17, 18]. It is worth mentioning that when $m \ge Cn\log^3 n$ for sufficiently large $C > 0$, the objective function in (3) admits a benign geometric structure that allows certain iterative algorithms (e.g., trust-region methods) to efficiently find a global minimizer with random initializations [13].

Although achieving a linear (in the number of unknowns $n$) sample and computational complexity, the state-of-the-art TWF scheme still requires at least $4n$ to $5n$ equations to yield a stable empirical success rate (e.g., $\ge 99\%$) under the real Gaussian model [8, Section 3], which are more than twice the known information limit of $m = 2n-1$ [1]. Similar though less obvious results hold also in

¹ The notation $\phi(n) = O(g(n))$ means that there is a constant $c > 0$ such that $|\phi(n)| \le c|g(n)|$.
the complex-valued scenario. Even though the truncated spectral initialization improves upon the "plain vanilla" spectral initialization, its performance still suffers when the number of measurements is relatively small, and its advantage (over the untruncated version) narrows as the number of measurements grows. Further, it is worth stressing that extensive numerical and experimental validation confirms that the amplitude-based cost function performs better than the intensity-based one; that is, formulation (3) is superior over (2) [19]. Hence, besides enhancing initialization, markedly improved performance in the gradient stage could be expected by re-examining the amplitude-based cost function and incorporating judiciously designed truncation rules.
2 Algorithm: Truncated Generalized Gradient Flow
Along the lines of suitably initialized nonconvex schemes, and building upon the amplitude-based formulation (3), this paper develops a novel linear-time (in both $m$ and $n$) algorithm, referred to as truncated generalized gradient flow (TGGF), that provably recovers $x \in \mathbb{R}^n/\mathbb{C}^n$ exactly from a near-optimal number of noise-free measurements, while also featuring near-perfect statistical performance in the noisy setup. Our TGGF proceeds in two stages: s1) a novel orthogonality-promoting initialization that relies on simple power iterations to markedly improve upon spectral initialization; and s2) a refinement of the initial estimate by successive updates of truncated generalized gradient iterations. Stages s1) and s2) are delineated next in reverse order. For concreteness, our analysis will focus on the real Gaussian model with $x \in \mathbb{R}^n$ and independently and identically distributed (i.i.d.) design vectors $a_i \in \mathbb{R}^n \sim \mathcal{N}(0, I_n)$, whereas numerical implementations for the complex Gaussian model having $x \in \mathbb{C}^n$ and i.i.d. $a_i \sim \mathcal{CN}(0, I_n) := \mathcal{N}(0, I_n/2) + j\mathcal{N}(0, I_n/2)$ will be discussed briefly. To start, define the Euclidean distance of any estimate $z$ to the solution set: $\mathrm{dist}(z, x) := \min\{\|z - x\|, \|z + x\|\}$ for real signals, and $\mathrm{dist}(z, x) := \min_{\phi\in[0,2\pi)} \|z - xe^{j\phi}\|$ for complex ones [12]. Define also the indistinguishable global phase constant in real-valued settings as
$$\phi(z) := \begin{cases} 0, & \|z - x\| \le \|z + x\|, \\ \pi, & \text{otherwise.} \end{cases} \qquad (4)$$
Henceforth, fixing $x$ to be any solution of the given quadratic system (1), we always assume that $\phi(z) = 0$; otherwise, $z$ is replaced by $e^{-j\phi(z)}z$, but for simplicity of presentation, the constant phase adaptation term $e^{-j\phi(z)}$ is dropped whenever it is clear from the context.
Numerical tests comparing TGGF, TWF, and WF will be presented throughout our analysis, so let us first describe our basic test settings. Simulated estimates will be averaged over 100 independent Monte Carlo (MC) realizations without mentioning this explicitly each time. Performance is evaluated in terms of the relative root mean-square error, i.e., $\text{Relative error} := \mathrm{dist}(z, x)/\|x\|$, and the success rate among 100 trials, where a success will be claimed for a trial if the resulting estimate incurs a relative error less than $10^{-5}$ [8]. Simulated tests under both noiseless and noisy Gaussian models are performed, corresponding to $\psi_i = |a_i^H x| + \eta_i$ with $\eta_i = 0$ and $\eta_i \sim \mathcal{N}(0, \sigma^2)$ [11], respectively, with i.i.d. $a_i \sim \mathcal{N}(0, I_n)$ or $a_i \sim \mathcal{CN}(0, I_n)$.
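These test metrics are straightforward to code; the helper below is our own sketch of $\mathrm{dist}(z, x)$ with the global sign (real case) or phase (complex case) removed:

```python
import numpy as np

def dist(z, x):
    """dist(z, x): minimum distance over the global sign (real case)
    or the global phase e^{j*phi} (complex case)."""
    if np.iscomplexobj(z) or np.iscomplexobj(x):
        phi = np.angle(np.vdot(z, x))           # optimal aligning phase
        return np.linalg.norm(z * np.exp(1j * phi) - x)
    return min(np.linalg.norm(z - x), np.linalg.norm(z + x))

def relative_error(z, x):
    return dist(z, x) / np.linalg.norm(x)

# a trial counts as a success if relative_error(z, x) < 1e-5
```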
2.1 Truncated generalized gradient stage
Let us rewrite the amplitude-based cost function in a matrix-vector form as
$$\min_{z\in\mathbb{R}^n} \ell(z) = \frac{1}{2m}\left\|\psi - |Az|\right\|^2 \qquad (5)$$
where $|Az| := \left[|a_1^T z| \cdots |a_m^T z|\right]^T$. Apart from being nonconvex, $\ell(z)$ is nondifferentiable. In the presence of smoothness or convexity, convergence analysis of iterative algorithms relies either on continuity of the gradient (gradient methods) [20], or on convexity of the objective functional (subgradient methods) [20]. Although subgradient methods have found widespread applicability in nonsmooth optimization, they are limited to the class of convex functions [20, Page 4]. In nonconvex nonsmooth optimization, the so-termed generalized gradient broadens the scope of the (sub)gradient to the class of almost everywhere differentiable functions [21]. Consider a continuous function $h(z) \in \mathbb{R}$ defined over an open region $S \subseteq \mathbb{R}^n$.

Definition 1 [22, Definition 1.1] The generalized gradient of a function $h$ at $z$, denoted by $\partial h$, is the convex hull of the set of limits of the form $\lim \nabla h(z_k)$, where $z_k \to z$ as $k \to +\infty$, i.e.,
$$\partial h(z) := \mathrm{conv}\left\{\lim_{k\to+\infty} \nabla h(z_k) \,:\, z_k \to z,\ z_k \notin \mathcal{G}_\ell\right\}$$
where the symbol "conv" signifies the convex hull of a set, and $\mathcal{G}_\ell$ denotes the set of points in $S$ at which $h$ fails to be differentiable.
Having introduced the notion of generalized gradient, and with $t$ denoting the iteration number, our approach to solving (5) amounts to iteratively refining the initial guess $z_0$ by means of the ensuing truncated generalized gradient iterations
$$z_{t+1} = z_t - \mu_t\, \partial\ell_{\mathrm{tr}}(z_t) \qquad (6)$$
where $\mu_t > 0$ is the stepsize, and a piece of the (truncated) generalized gradient $\partial\ell_{\mathrm{tr}}(z_t)$ is given by
$$\partial\ell_{\mathrm{tr}}(z_t) := \sum_{i\in\mathcal{I}_{t+1}} \left(a_i^T z_t - \psi_i \frac{a_i^T z_t}{|a_i^T z_t|}\right) a_i \qquad (7)$$
for some index set $\mathcal{I}_{t+1} \subseteq [m]$ to be designed shortly; and the convention $\frac{a_i^T z_t}{|a_i^T z_t|} := 0$ is adopted if $a_i^T z_t = 0$. Further, it is easy to verify that the update in (6) monotonically decreases the objective value in (5).
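As a minimal sketch of the update (6)-(7) for the real-valued case (our own function names; the index set $\mathcal{I}_{t+1}$ is passed in as a boolean mask, and we fold in the $\mu/m$ scaling used in Algorithm 1 below), one iteration reads:

```python
import numpy as np

def truncated_gradient_step(z, A, psi, keep, mu):
    """One update z <- z - (mu/m) * sum_{i in I} (a_i^T z - psi_i*sgn(a_i^T z)) a_i,
    cf. (6)-(7); `keep` is a boolean mask encoding the index set I_{t+1}."""
    m = A.shape[0]
    Az = A @ z
    residual = (Az - psi * np.sign(Az)) * keep   # sign(0) = 0, per the convention
    return z - (mu / m) * (A.T @ residual)
```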
Figure 1: Empirical success rate for WF, TWF, and TGGF with the same truncated spectral initialization under the noiseless real Gaussian model.

Recall that since they offer descent iterations, the alternating projection variants are guaranteed to converge to a stationary point of $\ell(z)$, and any limit point $z^*$ adheres to the following fixed-point equation [23]
$$A^T\left(Az^* - \psi \odot \frac{Az^*}{|Az^*|}\right) = 0 \qquad (8)$$
for the entry-wise product $\odot$, which may have many solutions. Clearly, if $z^*$ is a solution, so is $-z^*$. Further, both solutions/global minimizers $x$ and $-x$ satisfy (8) due to $Ax - \psi \odot \frac{Ax}{|Ax|} = 0$. Considering any stationary point $z^* \ne \pm x$ that has been adapted such that $\phi(z^*) = 0$, one can write $z^* = x + (A^T A)^{-1} A^T\left(\psi \odot \left(\frac{Az^*}{|Az^*|} - \frac{Ax}{|Ax|}\right)\right)$. A necessary condition for $z^* \ne x$ is $\frac{Az^*}{|Az^*|} \ne \frac{Ax}{|Ax|}$. Expressed differently, there must be sign differences between $\frac{Az^*}{|Az^*|}$ and $\frac{Ax}{|Ax|}$ whenever one gets stuck with an undesirable stationary point $z^*$. Building on this observation, it is reasonable to devise algorithms that can detect and separate out the generalized gradient components corresponding to mistakenly estimated signs $\frac{a_i^T z_t}{|a_i^T z_t|}$ along the iterates $\{z_t\}$. Precisely, if $z_t$ and $x$ lie on different sides of the hyperplane $a_i^T z = 0$, then the sign of $a_i^T z_t$ will be different from that of $a_i^T x$; that is, $\frac{a_i^T x}{|a_i^T x|} \ne \frac{a_i^T z_t}{|a_i^T z_t|}$. Specifically, one
can write the $i$-th generalized gradient component
$$\partial\ell_i(z) = \left(a_i^T z - \psi_i\frac{a_i^T z}{|a_i^T z|}\right)a_i = \left(a_i^T z - \psi_i\frac{a_i^T x}{|a_i^T x|}\right)a_i + \left(\frac{a_i^T x}{|a_i^T x|} - \frac{a_i^T z}{|a_i^T z|}\right)\psi_i a_i = a_i a_i^T h + \left(\frac{a_i^T x}{|a_i^T x|} - \frac{a_i^T z}{|a_i^T z|}\right)\psi_i a_i = a_i a_i^T h + r_i \qquad (9)$$
where $h := z - x$. Apparently, the strong law of large numbers (SLLN) asserts that averaging the first term $a_i a_i^T h$ over $m$ instances approaches $h$, which qualifies it as a desirable search direction. However, certain generalized gradient entries involve erroneously estimated signs of $a_i^T x$; hence, nonzero $r_i$ terms exert a negative influence on the search direction $h$ by dragging the iterate away from $x$, and they typically have sizable magnitudes. To see why, recall that the quantities $\max_{i\in[m]}\psi_i$ and $\frac{1}{m}\sum_{i=1}^m \psi_i$ have magnitudes on the order of $\sqrt{m}\,\|x\|$ and $\sqrt{\pi/2}\,\|x\|$, respectively, whereas $\|h\| \le \rho\|x\|$ for some small constant $0 < \rho \le 1/10$, to be discussed shortly. To maintain a meaningful search direction, those "bad" generalized gradient entries should be detected and excluded from the search direction.
Nevertheless, it is difficult or even impossible to check whether the sign of $a_i^T z_t$ equals that of $a_i^T x$. Fortunately, when the initialization is accurate enough, most spurious gradient entries (those corrupted by nonzero $r_i$ terms) provably hover around the watershed hyperplane $a_i^T z_t = 0$. For this reason, TGGF includes only those components having $z_t$ sufficiently away from its watershed, i.e.,
$$\mathcal{I}_{t+1} := \left\{1 \le i \le m \;\middle|\; \frac{|a_i^T z_t|}{|a_i^T x|} \ge \frac{1}{1+\gamma}\right\}, \quad t \ge 0 \qquad (10)$$
for an appropriately selected threshold $\gamma > 0$. It is worth stressing that our novel truncation rule deviates from the intuition behind TWF. Among its complicated truncation procedures, TWF also throws away large-size gradient components corresponding to (10), which is not the case with TGGF. As demonstrated by our analysis, it rarely happens that a generalized gradient component having a large $|a_i^T z_t|/\|z_t\|$ yields an incorrect sign of $a_i^T x$. Further, discarding too many samples (those $i \notin \mathcal{I}_{t+1}$) introduces large bias into $\frac{1}{m}\sum_{i\in\mathcal{I}_{t+1}} a_i a_i^T h_t$, thus rendering TWF less effective when $m/n$ is small. Numerical comparison depicted in Fig. 1 suggests that even starting with the same truncated spectral initialization, TGGF's refinement outperforms those of TWF and WF, corroborating the merits of our novel truncation and update rule over TWF/WF.
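Since $\psi_i = |a_i^T x|$ is observed, the truncation rule (10) is computable from data; in code it reduces to a single vectorized comparison (a sketch, written to avoid dividing by $\psi_i$):

```python
import numpy as np

def truncation_mask(z, A, psi, gamma=0.7):
    """I_{t+1} = { i : |a_i^T z| / psi_i >= 1/(1 + gamma) }, cf. (10);
    components hovering near the watershed a_i^T z = 0 are dropped."""
    return np.abs(A @ z) >= psi / (1.0 + gamma)
```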
2.2 Orthogonality-promoting initialization stage
Leveraging the SLLN, spectral methods estimate $x$ using the (appropriately scaled) leading eigenvector of $Y := \frac{1}{m}\sum_{i\in\mathcal{T}_0} y_i a_i a_i^T$, where $\mathcal{T}_0$ is an index set accounting for possible truncation. As asserted in [8], each summand $(a_i^T x)^2 a_i a_i^T$ follows a heavy-tail probability density function lacking a moment generating function. This causes major performance degradation especially when the number of measurements is limited. Instead of spectral initialization, we shall take another route to bypass this hurdle. To gain intuition for selecting our alternate route, a motivating example is presented first that reveals fundamental characteristics among high-dimensional random vectors.

Figure 2: Ordered squared normalized inner-product for pairs $x$ and $a_i$, $\forall i \in [m]$, with $m/n$ varying by 2 from 2 to 10, and $n = 10^3$.

Example: Fixing any nonzero vector $x \in \mathbb{R}^n$, generate data $\psi_i = |\langle a_i, x\rangle|$ using i.i.d. $a_i \sim \mathcal{N}(0, I_n)$, $\forall i \in [m]$, and evaluate the squared normalized inner-product
$$\cos^2\theta_i := \frac{|\langle a_i, x\rangle|^2}{\|a_i\|^2\|x\|^2} = \frac{\psi_i^2}{\|a_i\|^2\|x\|^2}, \quad \forall i\in[m] \qquad (11)$$
where $\theta_i$ is the angle between $a_i$ and $x$. Consider ordering all $\cos^2\theta_i$'s in an ascending fashion, and collectively denote them as $\xi := \left[\cos^2\theta_{[m]} \cdots \cos^2\theta_{[1]}\right]^T$ with $\cos^2\theta_{[1]} \ge \cdots \ge \cos^2\theta_{[m]}$. Fig. 2 plots the ordered entries in $\xi$ for $m/n$ varying by 2 from 2 to 10 with $n = 10^3$. Observe that almost all $\{a_i\}$ vectors have a squared normalized inner-product smaller than $10^{-2}$, while half of the inner-products are less than $10^{-3}$, which implies that $x$ is nearly orthogonal to many $a_i$'s.

This example corroborates that random vectors in high-dimensional spaces are almost always nearly orthogonal to each other [24]. This inspired us to pursue an orthogonality-promoting initialization method. Our key idea is to approximate $x$ by a vector that is most orthogonal to a subset of vectors $\{a_i\}_{i\in\mathcal{I}_0}$, where $\mathcal{I}_0$ is a set with cardinality $|\mathcal{I}_0| < m$ that includes indices of the smallest squared normalized inner-products $\cos^2\theta_i$. Since $\|x\|$ appears in all inner-products, its exact value does not influence their ordering. Henceforth, we assume without loss of generality that $\|x\| = 1$.

Using $\{(a_i; \psi_i)\}$, evaluate $\cos^2\theta_i$ according to (11) for each pair $x$ and $a_i$. Instrumental for the ensuing derivations is noticing that the summation of $\cos^2\theta_i$ over indices $i \in \mathcal{I}_0$ is very small, while rigorous justification is deferred to Section 3 and the supplementary materials. Thus, a meaningful approximation denoted by $z_0 \in \mathbb{R}^n$ can be obtained by solving
$$\min_{\|z\|=1}\ z^T\left(\frac{1}{|\mathcal{I}_0|}\sum_{i\in\mathcal{I}_0}\frac{a_i a_i^T}{\|a_i\|^2}\right)z \qquad (12)$$
which amounts to finding the smallest eigenvalue and the associated eigenvector of $\frac{1}{|\mathcal{I}_0|}\sum_{i\in\mathcal{I}_0}\frac{a_i a_i^T}{\|a_i\|^2}$. Yet finding the smallest eigenvalue calls for eigen-decomposition or matrix inversion, each requiring computational complexity $O(n^3)$. Such a computational burden can be intractable when $n$ grows large. Applying a standard concentration result greatly simplifies those computations next [25].

Since $a_i/\|a_i\|$ has unit norm and is uniformly distributed on the unit sphere, it is uniformly spherically distributed.² Spherical symmetry implies that $a_i/\|a_i\|$ has zero mean and covariance matrix $I_n/n$ [25]. Appealing again to the SLLN, the sample covariance matrix $\frac{1}{m}\sum_{i=1}^m \frac{a_i a_i^T}{\|a_i\|^2}$ approaches $\frac{1}{n}I_n$ as $m$ grows. Simple derivations lead to
$$\sum_{i\in\mathcal{I}_0}\frac{a_i a_i^T}{\|a_i\|^2} = \sum_{i=1}^m \frac{a_i a_i^T}{\|a_i\|^2} - \sum_{i\in\bar{\mathcal{I}}_0}\frac{a_i a_i^T}{\|a_i\|^2} \approx \frac{m}{n}I_n - \sum_{i\in\bar{\mathcal{I}}_0}\frac{a_i a_i^T}{\|a_i\|^2},$$
where $\bar{\mathcal{I}}_0$ is the complement of $\mathcal{I}_0$ in the set $[m]$.

Define $S := \left[a_1/\|a_1\| \cdots a_m/\|a_m\|\right]^T \in \mathbb{R}^{m\times n}$, and form $\bar{S}_0$ by removing the rows of $S$ whose indices do not belong to $\bar{\mathcal{I}}_0$. The task of seeking the smallest eigenvalue of $Y_0 := \frac{1}{|\mathcal{I}_0|}S_0^T S_0$ reduces to computing the largest eigenvalue of $\bar{Y}_0 := \frac{1}{|\bar{\mathcal{I}}_0|}\bar{S}_0^T\bar{S}_0$, namely,
$$\tilde{z}_0 := \arg\max_{\|z\|=1}\ z^T \bar{Y}_0 z \qquad (13)$$
which can be efficiently solved using simple power iterations. If, on the other hand, $\|x\| \ne 1$, the estimate $\tilde{z}_0$ from (13) is further scaled so that its norm matches approximately that of $x$ (which is estimated to be $\sqrt{\frac{1}{m}\sum_{i=1}^m y_i}$), thus yielding $z_0 = \sqrt{\sum_{i=1}^m y_i/m}\ \tilde{z}_0$. It is worth stressing that the constructed matrix $\bar{Y}_0$ does not depend on $\{y_i\}$ explicitly, saving our initialization from suffering heavy-tails of the fourth order of $\{a_i\}$ in spectral initialization schemes.
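A compact sketch of the orthogonality-promoting initialization for the real case follows; it is our own transcription (the helper name, `frac`, and `n_iter` are our choices), using plain power iterations for the leading eigenvector of $\bar{Y}_0$:

```python
import numpy as np

def orthogonality_promoting_init(A, psi, frac=1/6, n_iter=50, seed=0):
    """Build Ybar_0 from the ceil(frac*m) rows with the largest psi_i/||a_i||,
    extract its leading eigenvector by power iterations, and rescale by the
    norm estimate sqrt(sum_i psi_i^2 / m)."""
    m, n = A.shape
    norms = np.linalg.norm(A, axis=1)
    k = int(np.ceil(frac * m))
    top = np.argsort(psi / norms)[-k:]          # indices in Ibar_0
    S0 = A[top] / norms[top, None]              # rows a_i/||a_i||
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    for _ in range(n_iter):                     # power method on S0^T S0
        z = S0.T @ (S0 @ z)
        z /= np.linalg.norm(z)
    return np.sqrt(np.sum(psi ** 2) / m) * z
```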
Fig. 3 compares three initialization schemes, showing their relative errors versus the measurement/unknown ratio $m/n$ under the noise-free real Gaussian model, where $x \in \mathbb{R}^{1{,}000}$ and $m/n$ increases by 2 from 2 to 20. Apparently, all schemes enjoy improved performance as $m/n$ increases. In particular, the proposed initialization method outperforms its spectral alternatives. Interestingly, the spectral and truncated spectral schemes exhibit similar performance when $m/n$ is sufficiently large (e.g., $m/n \ge 14$). This confirms that truncation helps only if $m/n$ is relatively small. Indeed, truncation is effected by discarding measurements of excessively large sizes emerging from the heavy tails of the data distribution. Hence, its advantage over the untruncated one narrows as the number of measurements increases, thus straightening out the heavy tails. On the contrary, the orthogonality-promoting initialization method achieves consistently superior performance over its spectral alternatives.

Figure 3: Relative error versus $m/n$ for: i) the spectral method; ii) the truncated spectral method; and iii) our orthogonality-promoting method for the noiseless real Gaussian model.

3 Main results
TGGF is summarized in Algorithm 1 with default values set for pertinent algorithmic parameters. Postulating independent samples $\{(a_i; \psi_i)\}$, the following result establishes the performance of our TGGF approach.

² A random vector $z \in \mathbb{R}^n$ is said to be spherical (or spherically symmetric) if its distribution does not change under rotations of the coordinate system; that is, the distribution of $Pz$ coincides with that of $z$ for any given orthogonal $n \times n$ matrix $P$.
Algorithm 1 Truncated generalized gradient flow (TGGF) solvers
1: Input: Data $\{\psi_i\}_{i=1}^m$ and feature vectors $\{a_i\}_{i=1}^m$; the maximum number of iterations $T = 1{,}000$; by default, take constant step size $\mu = 0.6/1$ for real/complex Gaussian models, truncation thresholds $|\bar{\mathcal{I}}_0| = \lceil \frac{1}{6}m \rceil$ ($\lceil\cdot\rceil$ the ceiling operation), and $\gamma = 0.7$.
2: Evaluate $\psi_i/\|a_i\|$, $\forall i \in [m]$, and find $\bar{\mathcal{I}}_0$ comprising indices corresponding to the $|\bar{\mathcal{I}}_0|$ largest $(\psi_i/\|a_i\|)$'s.
3: Initialize $z_0$ to $\sqrt{\sum_{i=1}^m \psi_i^2/m}\ \tilde{z}_0$, where $\tilde{z}_0$ is the unit leading eigenvector of $\bar{Y}_0 := \frac{1}{|\bar{\mathcal{I}}_0|}\sum_{i\in\bar{\mathcal{I}}_0}\frac{a_i a_i^T}{\|a_i\|^2}$.
4: Loop: for $t = 0$ to $T-1$,
$$z_{t+1} = z_t - \frac{\mu}{m}\sum_{i\in\mathcal{I}_{t+1}}\left(a_i^T z_t - \psi_i\frac{a_i^T z_t}{|a_i^T z_t|}\right)a_i$$
where $\mathcal{I}_{t+1} := \left\{1 \le i \le m \,\middle|\, |a_i^T z_t| \ge \frac{1}{1+\gamma}\psi_i\right\}$.
5: Output: $z_T$
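Putting the stages together, a self-contained sketch of Algorithm 1 for the real Gaussian model might look as follows; all identifiers are ours, and the defaults mirror those stated above ($\mu = 0.6$, $\gamma = 0.7$, $|\bar{\mathcal{I}}_0| = \lceil m/6 \rceil$, 50 power iterations):

```python
import numpy as np

def tggf(A, psi, T=1000, mu=0.6, gamma=0.7, frac=1/6, n_power=50, seed=0):
    """Sketch of Algorithm 1 (TGGF) for real-valued data."""
    m, n = A.shape
    # Steps 2-3: orthogonality-promoting initialization.
    norms = np.linalg.norm(A, axis=1)
    k = int(np.ceil(frac * m))
    top = np.argsort(psi / norms)[-k:]          # indices in Ibar_0
    S0 = A[top] / norms[top, None]
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    for _ in range(n_power):                    # leading eigenvector of S0^T S0
        z = S0.T @ (S0 @ z)
        z /= np.linalg.norm(z)
    z *= np.sqrt(np.sum(psi ** 2) / m)          # rescale to estimated ||x||
    # Step 4: truncated generalized gradient iterations.
    for _ in range(T):
        Az = A @ z
        keep = np.abs(Az) >= psi / (1.0 + gamma)    # truncation rule (10)
        residual = (Az - psi * np.sign(Az)) * keep
        z = z - (mu / m) * (A.T @ residual)
    return z

# quick self-check on a random instance with m/n = 6
rng = np.random.default_rng(1)
n, m = 100, 600
x = rng.standard_normal(n)
A = rng.standard_normal((m, n))
psi = np.abs(A @ x)
z = tggf(A, psi)
print(min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x))
```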
Theorem 1 Let $x \in \mathbb{R}^n$ be an arbitrary signal vector, and consider (noise-free) measurements $\psi_i = |a_i^T x|$, in which $a_i \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, I_n)$, $1 \le i \le m$. Then with probability at least $1 - (m+5)e^{-n/2} - e^{-c_0 m} - 1/n^2$ for some universal constant $c_0 > 0$, the initialization $z_0$ returned by the orthogonality-promoting method in Algorithm 1 satisfies
$$\mathrm{dist}(z_0, x) \le \rho\,\|x\| \qquad (14)$$
with $\rho = 1/10$ (or any sufficiently small positive constant), provided that $m \ge c_1|\bar{\mathcal{I}}_0| \ge c_2 n$ for some numerical constants $c_1, c_2 > 0$, and sufficiently large $n$. Further, choosing a constant step size $\mu \le \mu_0$ along with a fixed truncation level $\gamma \ge 1/2$, and starting from any initial guess $z_0$ satisfying (14), successive estimates of the TGGF solver (tabulated in Algorithm 1) obey
$$\mathrm{dist}(z_t, x) \le \rho\,(1-\nu)^t\,\|x\|, \quad t = 0, 1, \ldots \qquad (15)$$
for some $0 < \nu < 1$, which holds with probability exceeding $1 - (m+5)e^{-n/2} - 8e^{-c_0 m} - 1/n^2$. Typical parameters are $\mu = 0.6$ and $\gamma = 0.7$.

Theorem 1 asserts that: i) TGGF recovers the solution $x$ exactly as soon as the number of equations is about the number of unknowns, which is theoretically order optimal. Our numerical tests demonstrate that for the real Gaussian model, TGGF achieves a success rate of 100% when $m/n$ is as small as 3, which is slightly larger than the information limit of $m/n = 2$ (recall that $m \ge 2n-1$ is necessary for a unique solution); this is a significant reduction in the sample complexity ratio, which is 5 for TWF and 7 for WF. Surprisingly, TGGF also enjoys a success rate of over 50% when $m/n$ is 2, which has not yet been presented for any existing algorithm under Gaussian sampling models; thus, our TGGF bridges the gap (see further discussion in Section 4); and ii) TGGF converges exponentially fast. Specifically, TGGF requires at most $O(\log(1/\epsilon))$ iterations to achieve any given solution accuracy $\epsilon > 0$ (i.e., $\mathrm{dist}(z_t, x) \le \epsilon\|x\|$), with iteration cost $O(mn)$. Since truncation takes time on the order of $O(m)$, the computational burden of TGGF per iteration is dominated by evaluating the generalized gradients. The latter involves two matrix-vector multiplications that are computable in $O(mn)$ flops, namely, $Az_t$ yields $u_t$, and $A^T v_t$ the generalized gradient, where $v_t := u_t - \psi \odot \frac{u_t}{|u_t|}$. Hence, the total running time of TGGF is $O(mn\log(1/\epsilon))$, which is proportional to the time taken to read the data, $O(mn)$. The proof of Theorem 1 can be found in the supplementary material.
4 Simulated tests and conclusions
Additional numerical tests evaluating performance of the proposed scheme relative to TWF/WF are presented in this section. For fairness, all pertinent algorithmic parameters involved in each scheme are set to their default values. The Matlab implementations of TGGF are available at http://www.tc.umn.edu/~gangwang/TAF. The initial estimate was found based on 50 power iterations, and was subsequently refined by $T = 10^3$ gradient-like iterations in each scheme. The left panel in Fig. 4 presents the average relative error of three initialization methods on a series of noiseless/noisy real Gaussian problems with $m/n = 6$ fixed and $n$ varying from 500 to $10^4$, while those for the corresponding complex Gaussian instances are shown in the right panel.

Figure 4: The average relative error using: i) the spectral method [11, 12]; ii) the truncated spectral method [8]; and iii) the proposed orthogonality-promoting method on noise-free (solid) and noisy (dotted) instances with $m/n = 6$, and $n$ varying from 500/100 to 10,000/5,000 for real/complex vectors. Left: Real Gaussian model with $x \sim \mathcal{N}(0, I_n)$, $a_i \sim \mathcal{N}(0, I_n)$, and $\sigma^2 = 0.2^2\|x\|^2$. Right: Complex Gaussian model with $x \sim \mathcal{CN}(0, I_n)$, $a_i \sim \mathcal{CN}(0, I_n)$, and $\sigma^2 = 0.2^2\|x\|^2$.

Figure 5: Empirical success rate for WF, TWF, and TGGF with $n = 1{,}000$ and $m/n$ varying from 1 to 7. Left: Noiseless real Gaussian model with $x \sim \mathcal{N}(0, I_n)$ and $a_i \sim \mathcal{N}(0, I_n)$; Right: Noiseless complex Gaussian model with $x \sim \mathcal{CN}(0, I_n)$ and $a_i \sim \mathcal{CN}(0, I_n)$.

Fig. 5
compares empirical success rate of three schemes under both real and complex Gaussian models
with $n = 10^3$ and $m/n$ varying by 1 from 1 to 7. Apparently, the proposed initialization method
returns more accurate and robust estimates than the spectral ones. Moreover, for real-valued vectors,
TGGF achieves a success rate of over 50% when m/n = 2, and guarantees perfect recovery from
about 3n measurements; while for complex-valued ones, TGGF enjoys a success rate of 95% when
m/n = 3.4, and ensures perfect recovery from about 4.5n measurements. Regarding running times,
TGGF converges slightly faster than TWF, while both are markedly faster than WF. Curves in Fig. 5
clearly corroborate the merits of TGGF over Wirtinger alternatives.
This paper developed a linear-time algorithm termed TGGF for solving random systems of quadratic
equations. TGGF builds on three key ingredients: a novel orthogonality-promoting initialization,
along with a simple yet effective truncation rule, as well as simple scalable gradient-like iterations.
Numerical tests corroborate the superior performance of TGGF over state-of-the-art solvers.
Acknowledgements
Work in this paper was supported in part by NSF grants 1500713 and 1514056.
References
[1] R. Balan, P. Casazza, and D. Edidin, "On signal reconstruction without phase," Appl. Comput. Harmon. Anal., vol. 20, no. 3, pp. 345-356, May 2006.
[2] A. Conca, D. Edidin, M. Hering, and C. Vinzant, "An algebraic characterization of injectivity in phase retrieval," Appl. Comput. Harmon. Anal., vol. 38, no. 2, pp. 346-356, Mar. 2015.
[3] P. M. Pardalos and S. A. Vavasis, "Quadratic programming with one negative eigenvalue is NP-hard," J. Global Optim., vol. 1, no. 1, pp. 15-22, 1991.
[4] H. A. Hauptman, "The phase problem of X-ray crystallography," Rep. Prog. Phys., vol. 54, no. 11, p. 1427, 1991.
[5] E. J. Candès, Y. C. Eldar, T. Strohmer, and V. Voroninski, "Phase retrieval via matrix completion," SIAM Rev., vol. 57, no. 2, pp. 225-251, May 2015.
[6] H. Sahinoglou and S. D. Cabrera, "On phase retrieval of finite-length sequences using the initial time sample," IEEE Trans. Circuits and Syst., vol. 38, no. 8, pp. 954-958, Aug. 1991.
[7] K. G. Murty and S. N. Kabadi, "Some NP-complete problems in quadratic and nonlinear programming," Math. Prog., vol. 39, no. 2, pp. 117-129, 1987.
[8] Y. Chen and E. J. Candès, "Solving random quadratic systems of equations is nearly as easy as solving linear systems," Comm. Pure Appl. Math., 2016 (to appear).
[9] R. W. Gerchberg and W. O. Saxton, "A practical algorithm for the determination of phase from image and diffraction," Optik, vol. 35, pp. 237-246, Nov. 1972.
[10] J. Fienup, "Phase retrieval algorithms: A comparison," Appl. Opt., vol. 21, no. 15, pp. 2758-2769, 1982.
[11] P. Netrapalli, P. Jain, and S. Sanghavi, "Phase retrieval using alternating minimization," IEEE Trans. Signal Process., vol. 63, no. 18, pp. 4814-4826, Sept. 2015.
[12] E. J. Candès, X. Li, and M. Soltanolkotabi, "Phase retrieval via Wirtinger flow: Theory and algorithms," IEEE Trans. Inf. Theory, vol. 61, no. 4, pp. 1985-2007, Apr. 2015.
[13] J. Sun, Q. Qu, and J. Wright, "A geometric analysis of phase retrieval," arXiv:1602.06664, 2016.
[14] E. J. Candès, T. Strohmer, and V. Voroninski, "PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming," Appl. Comput. Harmon. Anal., vol. 66, no. 8, pp. 1241-1274, Nov. 2013.
[15] I. Waldspurger, A. d'Aspremont, and S. Mallat, "Phase recovery, MaxCut and complex semidefinite programming," Math. Prog., vol. 149, no. 1-2, pp. 47-81, 2015.
[16] E. J. Candès and X. Li, "Solving quadratic equations via PhaseLift when there are about as many equations as unknowns," Found. Comput. Math., vol. 14, no. 5, pp. 1017-1026, 2014.
[17] G. Wang, D. Berberidis, V. Kekatos, and G. B. Giannakis, "Online reconstruction from big data via compressive censoring," in IEEE Global Conf. Signal and Inf. Process., Atlanta, GA, 2014, pp. 326-330.
[18] D. K. Berberidis, V. Kekatos, G. Wang, and G. B. Giannakis, "Adaptive censoring for large-scale regressions," in IEEE Intl. Conf. Acoustics, Speech and Signal Process., South Brisbane, QLD, Australia, 2015, pp. 5475-5479.
[19] L.-H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, "Experimental robustness of Fourier ptychography phase retrieval algorithms," Opt. Express, vol. 23, no. 26, pp. 33214-33240, Dec. 2015.
[20] N. Z. Shor, K. C. Kiwiel, and A. Ruszczyński, Minimization Methods for Non-differentiable Functions. Springer-Verlag New York, Inc., 1985.
[21] F. H. Clarke, Optimization and Nonsmooth Analysis. SIAM, 1990, vol. 5.
[22] F. H. Clarke, "Generalized gradients and applications," Trans. Am. Math. Soc., vol. 205, pp. 247-262, 1975.
[23] P. Chen, A. Fannjiang, and G.-R. Liu, "Phase retrieval with one or two diffraction patterns by alternating projections of the null vector," arXiv:1510.07379v2, 2015.
[24] T. Cai, J. Fan, and T. Jiang, "Distributions of angles in random packing on spheres," J. Mach. Learn. Res., vol. 14, no. 1, pp. 1837-1864, Jan. 2013.
[25] R. Vershynin, "Introduction to the non-asymptotic analysis of random matrices," arXiv:1011.3027, 2010.
Balancing Suspense and Surprise: Timely Decision
Making with Endogenous Information Acquisition
Ahmed M. Alaa
Electrical Engineering Department
University of California, Los Angeles
Mihaela van der Schaar
Electrical Engineering Department
University of California, Los Angeles
Abstract
We develop a Bayesian model for decision-making under time pressure with endogenous information acquisition. In our model, the decision-maker decides when to observe (costly) information by sampling an underlying continuous-time stochastic process (time series) that conveys information about the potential occurrence/non-occurrence of an adverse event which will terminate the decision-making process. In her attempt to predict the occurrence of the adverse event, the decision-maker follows a policy that determines when to acquire information from the time series (continuation), and when to stop acquiring information and make a final prediction (stopping). We show that the optimal policy has a "rendezvous" structure, i.e. a structure in which, whenever a new information sample is gathered from the time series, the optimal "date" for acquiring the next sample becomes computable. The optimal interval between two information samples balances a trade-off between the decision-maker's "surprise", i.e. the drift in her posterior belief after observing new information, and "suspense", i.e. the probability that the adverse event occurs in the time interval between two information samples. Moreover, we characterize the continuation and stopping regions in the decision-maker's state-space, and show that they depend not only on the decision-maker's beliefs, but also on the "context", i.e. the current realization of the time series.
1 Introduction
The problem of timely risk assessment and decision-making based on a sequentially observed time
series is ubiquitous, with applications in finance, medicine, cognitive science and signal processing
[1-7]. A common setting that arises in all these domains is that a decision-maker, provided with
sequential observations of a time series, needs to decide whether or not an adverse event (e.g. financial crisis, clinical acuity for ward patients, etc.) will take place in the future. The decision-maker's recognition of a forthcoming adverse event needs to be timely, since a delayed decision may hinder effective intervention (e.g. delayed admission of clinically acute patients to intensive care units can lead to mortality [5]). In the context of cognitive science, this decision-making task is known
as the two-alternative forced choice (2AFC) task [15]. Insightful structural solutions for the optimal
Bayesian 2AFC decision-making policies have been derived in [9-16], most of which are inspired
by the classical work of Wald on sequential probability ratio tests (SPRT) [8].
In this paper, we present a Bayesian decision-making model in which a decision-maker adaptively
decides when to gather (costly) information from an underlying time series in order to accumulate
evidence on the occurrence/non-occurrence of an adverse event. The decision-maker operates under
time pressure: occurrence of the adverse event terminates the decision-making process. Our abstract
model is motivated and inspired by many practical decision-making tasks such as: constructing temporal patterns for gathering sensory information in perceptual decision-making [1], scheduling lab
tests for ward patients in order to predict clinical deterioration in a timely manner [3, 5], designing
breast cancer screening programs for early tumor detection [7], etc.
We characterize the structure of the optimal decision-making policy that prescribes when the decision-maker should acquire new information, and when she should stop acquiring information and issue a final prediction. We show that the decision-maker's posterior belief process, based on which policies are prescribed, is a supermartingale that reflects the decision-maker's tendency to deny the occurrence of an adverse event in the future as she observes the survival of the time series for longer time periods. Moreover, the information acquisition policy has a "rendezvous" structure; the optimal "date" for acquiring the next information sample can be computed given the current sample. The optimal schedule for gathering information over time balances the information gain (surprise) obtained from acquiring new samples, and the probability of survival for the underlying stochastic process (suspense). Finally, we characterize the continuation and stopping regions in the decision-maker's state-space and show that, unlike previous models, they depend on the time series "context" and not just the decision-maker's beliefs.
Related Works Mathematical models and analyses for perceptual decision-making based on
sequential hypothesis testing have been developed in [9-17]. Most of these models use tools
from sequential analysis developed by Wald [8] and Shiryaev [21, 22]. In [9,13,14], optimal
decision-making policies for the 2AFC task were computed by modelling the decision-maker's
sensory evidence using diffusion processes [20]. These models assume an infinite time horizon for
the decision-making policy, and an exogenous supply of sensory information.
The assumption of an infinite time horizon was relaxed in [10] and [15], where decision-making is
assumed to be performed under the pressure of a stochastic deadline; however, these deadlines were
considered to be drawn from known distributions that are independent of the hypothesis and the
realized sensory evidence, and the assumption of an exogenous information supply was maintained.
In practical settings, the deadlines would naturally be dependent on the realized sensory information
(e.g. patients? acuity events are correlated with their physiological information [5]), which induces
more complex dynamics in the decision-making process. Context-based decision-making models
were introduced in [17], but assuming an exogenous information supply and an infinite time horizon.
The notions of "suspense" and "surprise" in Bayesian decision-making have also been recently introduced in the economics literature (see [18] and the references therein). These models use measures
for Bayesian surprise, originally introduced in the context of sensory neuroscience [19], in order
to model the explicit preference of a decision-maker to non-instrumental information. The goal
there is to design information disclosure policies that are suspense-optimal or surprise-optimal. Unlike our model, such models impose suspense (and/or surprise) as a (behavioral) preference of the
decision-maker, and hence they do not emerge endogenously by virtue of rational decision making.
2 Timely Decision Making with Endogenous Information Gathering
Time Series Model The decision-maker has access to a time series $X(t)$, modeled as a continuous-time stochastic process that takes values in $\mathbb{R}$ and is defined over the time domain $t \in \mathbb{R}_+$, with an underlying filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\in\mathbb{R}_+}, \mathbb{P})$. The process $X(t)$ is naturally adapted to $\{\mathcal{F}_t\}_{t\in\mathbb{R}_+}$, and hence the filtration $\mathcal{F}_t$ abstracts the information conveyed in the time series realization up to time $t$. The decision-maker extracts information from $X(t)$ to guide her actions over time.

We assume that $X(t)$ is a stationary Markov process¹, with a stationary transition kernel $\mathbb{P}_\theta(X(t) \in A \,|\, \mathcal{F}_s) = \mathbb{P}_\theta(X(t) \in A \,|\, X(s))$, $\forall A \subseteq \mathbb{R}$, $\forall s < t \in \mathbb{R}_+$, where $\theta$ is a realization of a latent Bernoulli random variable $\theta \in \{0, 1\}$ (unobservable by the decision-maker), with $\mathbb{P}(\theta = 1) = p$. The distributional properties of the paths of $X(t)$ are determined by $\theta$, since the realization of $\theta$ decides which Markov kernel ($\mathbb{P}_0$ or $\mathbb{P}_1$) generates $X(t)$. If the realization $\theta$ is equal to 1, then an adverse event occurs almost surely at a (finite) random time $\tau$, the distribution of which is dependent on the realization of the path $(X(t))_{0\le t\le\tau}$.

¹ Most of the insights distilled from our results would hold for more general dependency structures. However, we keep this assumption to simplify the exposition and maintain the tractability and interpretability of the results.
Figure 1: An exemplary stopped sample path for $X^\tau(t)\,|\,\theta = 1$, with an exemplary partition $\mathcal{P}_t = \{0, 0.1, 0.15, 0.325, 0.4, 0.45, 0.475, 0.5, 0.65, 0.7\}$. (Figure annotations: the information at $t = 0.2$ comprises $\sigma(X(0), X(0.1), X(0.15))$ and the survival event $\mathcal{S}_{0.2}$; the adverse event occurs at the stopping time $\tau$; the legend distinguishes the continuous path $X(t)$ from the partitioned path $X(\mathcal{P}_t)$.)
The decision-maker's ultimate goal is to sequentially observe $X(t)$ and infer $\theta$ before the adverse event happens; inference is obsolete if it is declared after $\tau$. Since $\theta$ is latent, the decision-maker is unaware whether the adverse event will occur or not, i.e. whether her access to $X(t)$ is temporary ($\tau < \infty$ for $\theta = 1$) or permanent ($\tau = \infty$ for $\theta = 0$). In order to model the occurrence of the adverse event, we define $\tau$ as an $\mathcal{F}$-stopping time for the process $X(t)$, for which we assume the following:
- The stopping time $\tau\,|\,\theta = 1$ is finite almost surely, whereas $\tau\,|\,\theta = 0$ is infinite almost surely, i.e. $\mathbb{P}(\tau < \infty \,|\, \theta = 1) = 1$ and $\mathbb{P}(\tau = \infty \,|\, \theta = 0) = 1$.
- The stopping time $\tau\,|\,\theta = 1$ is accessible², with a Markovian dependency on history, i.e. $\mathbb{P}(\tau < t \,|\, \mathcal{F}_s) = \mathbb{P}(\tau < t \,|\, X(s))$, $\forall s < t$, where $\mathbb{P}(\tau < t \,|\, X(s))$ is an injective map from $\mathbb{R}$ to $[0, 1]$ and $\mathbb{P}(\tau < t \,|\, X(s))$ is non-decreasing in $X(s)$.

Thus, unlike the stochastic deadline models in [10] and [15], the decision deadline in our model (i.e. occurrence of the adverse event) is context-dependent, as it depends on the time series realization (i.e. $\mathbb{P}(\tau < t \,|\, X(s))$ is not independent of $X(t)$ as in [15]). We use the notation $X^\tau(t) = X(t \wedge \tau)$, where $t \wedge \tau = \min\{t, \tau\}$, to denote the stopped process to which the decision-maker has access. Throughout the paper, the measures $\mathbb{P}_0$ and $\mathbb{P}_1$ assign probability measures to the paths $X^\tau(t)\,|\,\theta = 0$ and $X^\tau(t)\,|\,\theta = 1$ respectively, and we assume that $\mathbb{P}_0 \ll \mathbb{P}_1$³.

Information The decision-maker can only observe a set of (costly) samples of $X^\tau(t)$ rather than the full continuous path. The samples observed by the decision-maker are captured by partitioning $X(t)$ over specific time intervals: we define $\mathcal{P}_t = \{t_0, t_1, \ldots, t_{N(\mathcal{P}_t)-1}\}$, with $0 \le t_0 < t_1 < \ldots < t_{N(\mathcal{P}_t)-1} \le t$, as a size-$N(\mathcal{P}_t)$ partition of $X^\tau(t)$ over the interval $[0, t]$, where $N(\mathcal{P}_t)$ is the total number of samples in the partition $\mathcal{P}_t$. The decision-maker observes the values that $X^\tau(t)$ takes at the time instances in $\mathcal{P}_t$; thus the sequence of observations is given by the process $X(\mathcal{P}_t) = \sum_{i=0}^{N(\mathcal{P}_t)-1} X(t_i)\,\delta_{t_i}$, where $\delta_{t_i}$ is the Dirac measure. The space of all partitions over the interval $[0, t]$ is denoted by $\mathscr{P}_t = [0, t]^{\mathbb{N}}$. We denote the probability measures for partitioned paths generated under $\theta = 0$ and $1$ with a partition $\mathcal{P}_t$ as $\tilde{\mathbb{P}}_0(\mathcal{P}_t)$ and $\tilde{\mathbb{P}}_1(\mathcal{P}_t)$ respectively.

Since the decision-maker observes $X^\tau(t)$ through the partition $\mathcal{P}_t$, her information at time $t$ is conveyed in the $\sigma$-algebra $\sigma(X^\tau(\mathcal{P}_t)) \subseteq \mathcal{F}_t$. The stopping event is observable by the decision-maker even if $\tau \notin \mathcal{P}_\tau$. We denote the $\sigma$-algebra generated by the stopping event as $\mathcal{S}_t = \sigma(1_{\{t\ge\tau\}})$. Thus, the information that the decision-maker has at time $t$ is expressed by the filtration $\tilde{\mathcal{F}}_t = \sigma(X^\tau(\mathcal{P}_t)) \vee \mathcal{S}_t$. Hence, any decision-making policy needs to be $\tilde{\mathcal{F}}_t$-measurable.

Figure 1 depicts a Brownian path (a sample path of a Wiener process, which satisfies all the assumptions of our model)⁴, with an exemplary partition $\mathcal{P}_t$ over the time interval $[0, 1]$. The decision-maker observes the samples in $X(\mathcal{P}_t)$ sequentially, and reasons about the realization of the latent variable $\theta$ based on these samples and the process survival, i.e. at $t = 0.2$, the decision-maker's information resides in the $\sigma$-algebra $\sigma(X(0), X(0.1), X(0.15))$ generated by the samples in $\mathcal{P}_{0.2} = \{0, 0.1, 0.15\}$, and the $\sigma$-algebra generated by the process' survival, $\mathcal{S}_{0.2} = \sigma(1_{\{\tau > 0.2\}})$.

² Our analyses hold if the stopping time is totally inaccessible.
³ The absolute continuity of $\mathbb{P}_0$ with respect to $\mathbb{P}_1$ means that no sample path of $X^\tau(t)\,|\,\theta = 0$ should be fully revealing of the realization of $\theta$.
⁴ In Figure 1, the stopping event was simulated as a totally inaccessible first jump of a Poisson process.
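To make these assumptions concrete, the sketch below simulates one stopped path; the drifted random walk for $X(t)\,|\,\theta$ and the logistic state-dependent hazard are our own illustrative choices (not specified in the paper), picked only so that $\mathbb{P}(\tau < t \,|\, X(s))$ is non-decreasing in $X(s)$; over a finite horizon the simulated event need not fire.

```python
import numpy as np

def simulate_stopped_path(p=0.5, horizon=1.0, dt=1e-3, seed=0):
    """Draw theta ~ Bernoulli(p) and simulate a discretized path X(t);
    if theta = 1, an adverse event may fire at a state-dependent rate,
    producing a stopping time tau; otherwise tau = infinity."""
    rng = np.random.default_rng(seed)
    theta = int(rng.random() < p)
    steps = int(horizon / dt)
    X = np.zeros(steps + 1)
    tau = np.inf
    drift = 0.05 if theta == 1 else -0.05       # theta tilts the dynamics
    for k in range(steps):
        X[k + 1] = X[k] + drift * dt + np.sqrt(dt) * rng.standard_normal()
        if theta == 1:
            # hazard increasing in X, so P(tau < t | X(s)) grows with X(s)
            hazard = 1.0 / (1.0 + np.exp(-5.0 * X[k + 1]))
            if rng.random() < hazard * dt:
                tau = (k + 1) * dt
                return theta, X[:k + 2], tau    # stopped path X^tau
    return theta, X, tau

theta, X, tau = simulate_stopped_path()
```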
Policies and Risks The decision-maker's goal is to come up with a (timely) decision $\hat{\theta} \in \{0, 1\}$ that reflects her prediction for whether the actual realization $\theta$ is 0 or 1, before the process $X^\tau(t)$ potentially stops at the unknown time $\tau$. The decision-maker follows a policy: a (continuous-time) mapping from the observations gathered up to every time instance $t$ to two types of actions:
- A sensing action $\delta_t \in \{0, 1\}$: if $\delta_t = 1$, then the decision-maker decides to observe a new sample from the running process $X^\tau(t)$ at time $t$.
- A continuation/stopping action $\hat{\theta}_t \in \{\varnothing, 0, 1\}$: if $\hat{\theta}_t \in \{0, 1\}$, then the decision-maker decides to stop gathering samples from $X^\tau(t)$ and declares a final decision (estimate) for $\theta$. Whenever $\hat{\theta}_t = \varnothing$, the decision-maker continues observing $X^\tau(t)$ and postpones her declaration for the estimate of $\theta$.

A policy $\pi = (\pi_t)_{t\in\mathbb{R}_+}$ is an ($\tilde{\mathcal{F}}_t$-measurable) mapping rule that maps the information in $\tilde{\mathcal{F}}_t$ to an action tuple $\pi_t = (\delta_t, \hat{\theta}_t)$ at every time instance $t$. We assume that every single observation that the decision-maker draws from $X^\tau(t)$ entails a fixed cost, hence the process $(\delta_t)_{t\in\mathbb{R}_+}$ has to be a point process under any optimal policy⁵. We denote the space of all such policies by $\Pi$.

A policy $\pi$ generates the following random quantities as a function of the paths $X^\tau(t)$ on the probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t\in\mathbb{R}_+}, \mathbb{P})$:
1- A stopping time $T_\pi$: the first time at which the decision-maker declares its estimate for $\theta$, i.e. $T_\pi = \inf\{t \in \mathbb{R}_+ : \hat{\theta}_t \in \{0, 1\}\}$.
2- A decision (estimate of $\theta$) $\hat{\theta}_\pi$: given by $\hat{\theta}_\pi = \hat{\theta}_{T_\pi \wedge \tau}$.
3- A random partition $\mathcal{P}^\pi_{T_\pi}$: a realization of the point process $(\delta_t)_{t\in\mathbb{R}_+}$, comprising a finite set of strictly increasing $\mathcal{F}$-stopping times at which the decision-maker decides to sample the path $X^\tau(t)$.
A loss function is associated with every realization of the policy $\pi$, representing the overall cost incurred when following that policy for a specific path $X^\tau(t)$. The loss function is given by
$$\ell(\pi; \theta) \triangleq \big(\underbrace{C_1 1_{\{\hat{\theta}_\pi = 0,\, \theta = 1\}}}_{\text{Type I error}} + \underbrace{C_o 1_{\{\hat{\theta}_\pi = 1,\, \theta = 0\}}}_{\text{Type II error}} + \underbrace{C_d\, T_\pi}_{\text{Delay}}\big)\, 1_{\{T_\pi \le \tau\}} + \underbrace{C_r 1_{\{T_\pi > \tau\}}}_{\text{Deadline missed}} + \underbrace{C_s\, N(\mathcal{P}^\pi_{T_\pi \wedge \tau})}_{\text{Information}}, \qquad (1)$$
where $C_1$ is the cost of type I error (failure to anticipate the adverse event), $C_o$ is the cost of type II error (falsely predicting that an adverse event will occur), $C_d$ is the cost of the delay in declaring the estimate $\hat{\theta}_\pi$, $C_r$ is the cost incurred when the adverse event occurs before an estimate $\hat{\theta}_\pi$ is declared (cost of missing the deadline), and $C_s$ is the cost of every observation sample (cost of information).

The risk of each policy $\pi$ is defined as its expected loss
$$R(\pi) \triangleq \mathbb{E}\left[\ell(\pi; \theta)\right], \qquad (2)$$
where the expectation is taken over the paths of $X^\tau(t)$. In the next section, we characterize the structure of the optimal policy $\pi^* = \arg\inf_{\pi\in\Pi} R(\pi)$.
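The bookkeeping in (1) is easy to mirror in code; the sketch below (argument names and cost values are our own placeholders) evaluates the realized loss of one policy rollout, and averaging it over many simulated paths gives a Monte Carlo estimate of the risk (2):

```python
def realized_loss(theta, theta_hat, T_pi, tau, n_samples,
                  C1=1.0, Co=1.0, Cd=0.01, Cr=2.0, Cs=0.05):
    """Realized loss (1) for one rollout: error and delay costs apply only
    if the decision beats the deadline (T_pi <= tau); otherwise the
    missed-deadline cost Cr is paid; every sample always costs Cs."""
    if T_pi <= tau:
        loss = Cd * T_pi
        loss += C1 if (theta_hat == 0 and theta == 1) else 0.0
        loss += Co if (theta_hat == 1 and theta == 0) else 0.0
    else:
        loss = Cr
    return loss + Cs * n_samples

# averaging realized_loss over many simulated paths estimates the risk R(pi)
```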
3 Structure of the Optimal Policy
Since the decision-maker's posterior belief at time $t$, defined as $\mu_t = \mathbb{P}(\theta = 1 \,|\, \tilde{\mathcal{F}}_t)$, is an important statistic for designing sequential policies [10, 21-22], we start our characterization for $\pi^*$ by investigating the belief process $(\mu_t)_{t\in\mathbb{R}_+}$.
3.1 The Posterior Belief Process
Recall that the decision-maker distills information from two types of observations: the realization
of the partitioned time series X ? (Pt ) (i.e. the information in ?(X ? (Pt ))), and 2) the survival of the
5
Note that the cost of observing any local continuous path is infinite, hence any optimal policy must have
(?t )t?R+ being a point process to keep the number of observed samples finite.
4
Policy ?1 with partition P ?1
Policy ?2 , with P ?1 ? P ?2
Wait-and-watch policy
Suspense phase (risk bearing)
1.1
0.65
Posterior belief process ?t
Surprise phase (risk assessment)
1
0.6
Information gain
It1 (t2 ? t1 ) = ?t2 ? ?t1
0.9
0.55
Stopping time ?
0.8
0.5
500
0.7
600
700
800
0.6
0.5
0.4
0
200
400
600
800
1000
1200
1400
1600
Time t
Figure 2: Depiction for exemplary belief paths of different policies under ? = 1.
In the following Theorem, we study the evolution of the decision-maker's beliefs as she integrates these pieces of information over time⁶.
Theorem 1 (Information and beliefs). Every posterior belief trajectory (μ_t)_{t∈R+} associated with
a policy π ∈ Π that creates a partition P^π_t ⊆ P_t of X^θ(t) is a càdlàg path given by
$$\mu_t = \mathbf{1}_{\{t\ge\tau\}} + \mathbf{1}_{\{0\le t<\tau\}}\cdot\left(1 + \frac{1-p}{p}\,\frac{d\tilde{\mathbb{P}}_o(\mathcal{P}^\pi_t)}{d\tilde{\mathbb{P}}_1(\mathcal{P}^\pi_t)}\right)^{-1},$$
where dP̃_o(P^π_t)/dP̃_1(P^π_t) is the Radon–Nikodym derivative⁷ of the measure P̃_o(P^π_t) with respect to P̃_1(P^π_t),
and is given by the following elementary predictable process
$$\frac{d\tilde{\mathbb{P}}_o(\mathcal{P}^\pi_t)}{d\tilde{\mathbb{P}}_1(\mathcal{P}^\pi_t)} = \sum_{k=1}^{N(\mathcal{P}^\pi_t)-1} \underbrace{\frac{P\big(X(\mathcal{P}^\pi_t)\,|\,\theta=0\big)}{P\big(X(\mathcal{P}^\pi_t)\,|\,\theta=1\big)}}_{\text{Likelihood ratio}}\;\underbrace{\frac{1}{P\big(\tau>t\,|\,\sigma(X(\mathcal{P}^\pi_t)),\,\theta=1\big)}}_{\text{Survival probability}}\;\mathbf{1}_{\{\mathcal{P}^\pi_t(k)\le t\le \mathcal{P}^\pi_t(k+1)\}}$$
for t ≥ P^π_t(1), and 1/P(τ > t | θ = 1) for t < P^π_t(1). Moreover, the path (μ_t)_{t∈R+} has exactly
N(P^π_{T_π∧τ}) + 1_{{τ<∞}} jumps, at the time indexes in P^π_{T_π∧τ} ∪ {τ}.
Theorem 1 says that every belief path is right-continuous with left limits, and has jumps at the time
indexes in the partition P^π_t, whereas between each two jumps, the paths (μ_t)_{t∈[t1,t2)}, t1, t2 ∈ P^π_t,
are predictable (i.e. they are known ahead of time once we know the magnitudes of the jumps
preceding them). This means that the decision-maker obtains "active" information by probing
the time series to observe new samples (i.e. the information in σ(X^θ(P_t))), inducing jumps that
revive her beliefs, whereas the progression of time without witnessing a stopping event offers the
decision-maker "passive information" that is distilled just from the costless observation of process
survival. Both sources of information manifest themselves in terms of the likelihood
ratio and the survival probability in the expression for dP̃_o(P^π_t)/dP̃_1(P^π_t) above.
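For intuition, the following discretized Python sketch traces the belief path of Theorem 1 under an assumed observation model — unit-variance Gaussian samples with mean θ, and an exponential adverse-event hazard lam under θ = 1. Both modeling choices are ours, made only to obtain a computable likelihood ratio and survival probability.

```python
# Sketch of the cadlag belief path: jumps at sample times (likelihood ratio),
# predictable decay in between (survival term). Model assumptions as stated above.
import numpy as np
from scipy.stats import norm

def belief_path(sample_times, samples, t_grid, p=0.5, lam=0.01):
    """mu_t = P(theta = 1 | sampled values, survival up to t)."""
    mu = np.empty_like(t_grid)
    for i, t in enumerate(t_grid):
        seen = sample_times <= t
        # likelihood ratio P(X | theta = 0) / P(X | theta = 1) over observed samples
        lr01 = np.prod(norm.pdf(samples[seen], 0, 1) / norm.pdf(samples[seen], 1, 1))
        surv = np.exp(-lam * t)            # P(tau > t | theta = 1)
        dP0_dP1 = lr01 / surv              # the Radon-Nikodym derivative of Theorem 1
        mu[i] = 1.0 / (1.0 + (1 - p) / p * dP0_dP1)
    return mu

t_grid = np.linspace(0.0, 100.0, 1001)
mu = belief_path(np.array([20.0, 60.0]), np.array([0.9, 1.3]), t_grid)
print(mu[[0, 250, 650, -1]])   # decays between jumps, jumps up at sample times
```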
In Figure 2, we plot the càdlàg belief paths for policies π1 and π2, where P^{π1} ⊂ P^{π2} (i.e. policy
π1 observes a subset of the samples observed by π2). We also plot the (predictable) belief path of
a wait-and-watch policy that observes no samples. We can see that π2, which has more jumps of
"active information", copes faster with the truthful belief over time. Between each two jumps, the
belief process exhibits a non-increasing predictable path until fed with a new piece of information.
The wait-and-watch policy has its belief drifting away from the prior p = 0.5 towards the wrong
belief μ_t = 0, since it only distills information from the process survival, which favors the hypothesis
θ = 0. This discussion motivates the introduction of the following key quantities.
Information gain (surprise) I_t(Δt): The amount of drift in the decision-maker's belief at time
t + Δt with respect to her belief at time t, given the information available up to time t, i.e.
I_t(Δt) = (μ_{t+Δt} − μ_t) | F̃_t.
⁶ All proofs are provided in the supplementary material.
⁷ Since we impose the condition P_o << P_1 and fix a partition P_t, the Radon–Nikodym derivative exists.
Posterior survival function (suspense) S_t(Δt): The probability that a process generated
with θ = 1 survives up to time t + Δt given the information observed up to time t, i.e.
S_t(Δt) = P(τ > t + Δt | F̃_t, θ = 1). The function S_t(Δt) is non-increasing in Δt, i.e.
∂S_t(Δt)/∂Δt ≤ 0.
That is, the information gain is the amount of "surprise" that the decision-maker experiences in
response to a new information sample, expressed in terms of the change in her belief, i.e. the jumps
in μ_t, whereas the survival probability (suspense) is her assessment of the risk of the adverse
event taking place in the next Δt time interval. As we will see in the next subsection, the optimal
policy balances the two quantities when scheduling the times to sense X^θ(t).
We conclude our analysis of the process μ_t by noting that a lack of information samples creates bias
towards the belief that θ = 0 (e.g. see the belief path of the wait-and-watch policy in Figure 2). We
formally express this behavior in the following Corollary.
Corollary 1 (Leaning towards denial). For every policy π ∈ Π, the posterior belief process μ_t is
a supermartingale with respect to F̃_t, where
$$\mathbb{E}[\mu_{t+\Delta t}\,|\,\tilde{\mathcal{F}}_t] = \mu_t - \mu_t^2\,S_t(\Delta t)\,(1 - S_t(\Delta t)) \le \mu_t, \quad \forall \Delta t \in \mathbb{R}_+.$$
Thus, unlike classical Bayesian learning models with a belief martingale [18, 21-23], the belief
process in our model is a supermartingale that leans toward decreasing over time. The reason for
this is that in our model, time conveys information. That is, unlike [10] and [15], where the decision
deadline is hypothesis-independent and almost surely occurs in finite time for any path, in our
model the occurrence of the adverse event is itself a hypothesis, hence observing the survival of the
process is informative and contributes to the evolution of the belief. The informativeness of both the
acquired information samples and process survival can be disentangled using the Doob decomposition,
by writing μ_t as μ_t = μ̃_t + A(μ_t, S_t(Δt)), where μ̃_t is a martingale, capturing the information gain
from the acquired samples, and A(μ_t, S_t(Δt)) is a predictable compensator process [23], capturing
information extracted from the process survival.
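The closed form in Corollary 1 makes the two quantities entering the trade-off easy to tabulate. The sketch below does so, with a hypothetical exponential hazard standing in for the suspense term (our assumption).

```python
# Expected information gain (negative drift, Corollary 1) vs. suspense.
import numpy as np

def suspense(dt, lam=0.01):
    """S_t(dt) = P(tau > t + dt | F_t, theta = 1) under an exponential hazard."""
    return np.exp(-lam * dt)

def expected_gain(mu_t, dt, lam=0.01):
    """E[I_t(dt)] = E[mu_{t+dt} | F_t] - mu_t = -mu_t^2 S (1 - S) by Corollary 1."""
    S = suspense(dt, lam)
    return -mu_t**2 * S * (1.0 - S)

for dt in (1.0, 10.0, 100.0):
    print(f"dt={dt:6.1f}  E[I]={expected_gain(0.5, dt):+.4f}  S={suspense(dt):.3f}")
```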
3.2 The Optimal Policy
The optimal policy π* minimizes the expected risk as defined in (1) and (2) by generating the tuple
of random processes (T_π, θ̂_π, P^π_t) in response to the paths of X^θ(t) on (Ω, F, {F_t}_{t∈R+}, P) in a
way that "shapes" a belief process μ_t that maximizes informativeness, maintains timeliness and
controls cost. In the following, we introduce the notion of a "rendezvous policy"; then in Theorem
2, we show that the optimal policy π* complies with this definition.
Rendezvous policies. We say that a policy π is a rendezvous policy if the random partition P^π_{T_π},
constructed by the sequence of sensing actions (δ^π_t)_{t∈[0,T_π]}, is a point process with predictable
jumps, where for every two consecutive jumps at times t and t′, with t′ > t and t, t′ ∈ P^π_{T_π}, we
have that t′ is F̃_t-measurable.
That is, a rendezvous policy is a policy that constructs a sensing schedule (δ^π_t)_{t∈[0,T_π]} such that
every time t′ at which the decision-maker acquires information is actually computable using the
information available up to time t, the previous time instance at which information was gathered.
Hence, the decision-maker can decide the next "date" at which she will gather information directly
after she senses a new information sample. This structure is a natural consequence of the information structure in Theorem 1: since the belief paths between every two jumps are predictable,
they convey no "actionable" information, i.e. if the decision-maker were to respond to a predictable
belief path, say by sensing or making a stopping decision, then she should have taken that decision
right before the predictable path starts, which leaves her better off by saving the delay cost Cd.
We denote the space of all rendezvous policies by Πr. In the following Theorem, we establish that
the rendezvous structure is optimal.
Theorem 2 (Rendezvous). The optimal policy π* is a rendezvous policy (π* ∈ Πr).
A direct implication of Theorem 2 is that the time variable can now be viewed as a state
variable, whereas the problem is virtually solved in "discrete time", since the decision-maker
effectively jumps from one time instance to another in a discrete manner. Hence, we alter the
definition of the action δ_t from an indicator variable that indicates sensing the time series at time t
to a "rendezvous action" that takes real values and specifies the time after which the decision-maker
will sense a new sample, i.e. if δ_t = Δt, then the decision-maker gathers the new sample at t + Δt.
This transformation restricts our policy design problem to the space of rendezvous policies Πr,
which we know from Theorem 2 contains the optimal policy (i.e. π* = arg inf_{π∈Πr} R(π)).
Having established the result in Theorem 2, in the following Theorem, we characterize the optimal
policy π* in terms of the random process (T_{π*}, θ̂_{π*}, P^{π*}_t) using discrete-time Bellman optimality
conditions [24].
Theorem 3 (The optimal policy). The optimal policy π* is a sequence of actions (θ̂^{π*}_t, δ^{π*}_t)_{t∈R+},
resulting in a random process (θ̂_{π*}, T_{π*}, P^{π*}_{T_{π*}}) with the following properties:
(Continuation and stopping)
1. The process (t, μ_t, X̃(P^{π*}_t))_{t∈R+} is a Markov sufficient statistic for the distribution of
(θ̂_{π*}, T_{π*}, P^{π*}_{T_{π*}}), where X̃(P^{π*}_t) is the most recent sample in the partition P^{π*}_t, i.e.
X̃(P^{π*}_t) = X(t*), t* = max P^{π*}_t.
2. The policy π* recommends continuation, i.e. θ̂^{π*}_t = ∅, as long as the belief μ_t ∈
C(t, X̃(P^{π*}_t)), where C(t, X̃(P^{π*}_t)) is a time- and context-dependent continuation set with
the following properties: C(t′, X) ⊆ C(t, X), ∀t′ > t, and C(t, X′) ⊆ C(t, X), ∀X′ > X.
(Rendezvous and decisions)
1. Whenever μ_t ∈ C(t, X̃(P^{π*}_t)) and t ∈ P^{π*}_{T_{π*}}, the rendezvous δ^{π*}_t is set as follows:
$$\delta^{\pi^*}_t = \arg\inf_{\delta\in\mathbb{R}_+} f\big(\mathbb{E}[I_t(\delta)],\, S_t(\delta)\big),$$
where f(E[I_t(δ)], S_t(δ)) is decreasing in E[I_t(δ)] and S_t(δ).
2. Whenever μ_t ∉ C(t, X̃(P^{π*}_t)), a decision θ̂^{π*}_t = θ̂_{π*} ∈ {0, 1} is issued, and is based
on a belief threshold as follows: θ̂_{π*} = 1_{{μ_t ≥ C1/(Co+C1)}}. The stopping time is given by
T_{π*} = inf{t ∈ R+ : μ_t ∉ C(t, X̃(P^{π*}_t))}.
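The rendezvous prescription of Theorem 3 can be mimicked numerically. In the sketch below, the trade-off f is an illustrative weighted combination of the magnitude of the expected drift and the termination risk; the Theorem only requires f to be decreasing in E[I_t(δ)] and S_t(δ), so this particular f is our assumption.

```python
# Pick the next sensing gap: delta = arg inf f(E[I_t(delta)], S_t(delta)).
import numpy as np

def next_rendezvous(mu_t, lam=0.01, c_risk=5.0,
                    deltas=np.linspace(0.1, 200.0, 2000)):
    S = np.exp(-lam * deltas)               # suspense S_t(delta)
    gain = mu_t**2 * S * (1.0 - S)          # |E[I_t(delta)]| from Corollary 1
    f = -gain + c_risk * (1.0 - S)          # decreasing in gain and in S
    return deltas[np.argmin(f)]

print(next_rendezvous(0.5))                 # scheduled gap until the next sample
```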
Theorem 3 establishes the structure of the optimal policy and its prescribed actions in the decision-maker's state space. The first part of the Theorem says that in order to generate the random
tuple (T_{π*}, θ̂_{π*}, P^{π*}_t) optimally, we only need to keep track of the realization of the process
(t, μ_t, X̃(P_t))_{t∈R+} at every time instance. That is, an optimal policy maps the current belief, the
current time, and the most recently observed realization of the time series to an action tuple (θ̂^{π*}_t, δ^{π*}_t),
i.e. a decision on whether to stop and declare an estimate for θ or sense a new sample. Hence, the
process (t, μ_t, X̃(P_t))_{t∈R+} represents the "state" of the decision-maker, and the decision-maker's
actions can partially influence the state through the belief process, i.e. a decision on when to acquire the next sample affects the distributional properties of the posterior belief. The remaining state
variables t and X(t) are beyond the decision-maker's control.
We note that unlike the previous models in [9-16], with the exception of [17], a policy in our model
is context-dependent. That is, since the state is (t, μ_t, X̃(P^π_t)) and not just the time-belief tuple
(t, μ_t), a policy π can recommend different actions for the same belief and at the same time but for
a different context. This is because, while μ_t captures what the decision-maker learned from the
history, X̃(P^π_t) captures her foresightedness into the future, i.e. it can be that the belief μ_t is not
decisive (e.g. μ_t ≈ p), but the context is "risky" (i.e. X̃(P^π_t) is large), which means that a potential
forthcoming adverse event is likely to happen in the near future, hence the decision-maker would be
more eager to make a stopping decision and declare an estimate θ̂_π. This is manifested through the
dependence of the continuation set C(t, X̃(P^π_t)) on both time and context; the continuation set is
monotonically decreasing in time due to the deadline pressure, and is also monotonically decreasing
in X̃(P^π_t) due to the dependence of the deadline on the time series realization.
[Figure 3: Context-dependence of the policy π. Two exemplary sample paths of the state (μ_t, X(t)) with the same time and belief: the policy continues sampling X^θ(t) on one path, and stops and declares θ̂_π = 1 on the other. Plot data omitted.]
The context dependence of the optimal policy is pictorially depicted in Figure 3, where we show
two exemplary trajectories for the decision-maker's state, and the actions recommended by a policy
π for the same time and belief but a different context: a stopping action is recommended when
X(t) is large, since it corresponds to a low survival probability, whereas for the same belief and
time, a continuation action can be recommended if X(t) is low, since it is safer to keep observing
the process given that the survival probability is high. Such a prescription specifies optimal decision-making in context-driven settings such as clinical decision-making in critical care environments [3-5],
where a combination of a patient's length of hospital stay (i.e. t), clinical risk score (i.e. μ_t) and
current physiological test measurements (i.e. X̃(P^π_t)) determine the decision on whether or not a
patient should be admitted to an intensive care unit.
The second part of Theorem 3 says that whenever the optimal policy decides to stop gathering
information and issue a conclusive decision, it imposes a threshold on the posterior belief, based on
which it issues the estimate θ̂_{π*}; the threshold is C1/(Co + C1), and hence it weights the estimates by their
respective risks. When the policy favors continuation, it issues a rendezvous action, i.e. the next time
instance at which information will be gathered. This rendezvous balances surprise and suspense:
the decision-maker prefers maximizing surprise in order to draw the maximum informativeness
from the costly sample it will acquire; this is captured in terms of the expected information gain
E[I_t(δ)]. Maximizing surprise may increase suspense, i.e. the probability of process termination,
which is controlled by the survival function S_t(δ), and hence it can be that harvesting the maximum
informativeness entails a survival risk when Cr is high. Therefore, the optimal policy selects a
rendezvous δ^{π*}_t that optimizes a combination of the survival risk, captured by the cost Cr
and the survival function S_t(Δt), and the value of information, captured by the costs Co, C1 and the
expected information gain E[I_t(δ)].
4 Conclusions
We developed a model for decision-making with endogenous information acquisition under time
pressure, where a decision-maker needs to issue a conclusive decision before an adverse event (potentially) takes place. We have shown that the optimal policy has a "rendezvous" structure, i.e. the
optimal policy sets a "date" for gathering a new sample whenever the current information sample is
observed. The optimal policy selects the time between two information samples such that it balances
the information gain (surprise) with the survival probability (suspense). Moreover, we characterized
the optimal policy's continuation and stopping conditions, and showed that they depend on the context and not just on beliefs. Our model can help in understanding the nature of optimal decision-making
in settings where timely risk assessment and information gathering are essential.
5 Acknowledgments
This work was supported by the ONR and the NSF (Grant number: ECCS 1462245).
References
[1] Balci, F., Freestone, D., Simen, P., de Souza, L., Cohen, J. D., & Holmes, P. (2011) Optimal temporal risk
assessment, Frontiers in Integrative Neuroscience, 5(56), 1-15.
[2] Banerjee, T. & Veeravalli, V. V. (2012) Data-efficient quickest change detection with on–off observation
control, Sequential Analysis, 31(1), 40-77.
[3] Wiens, J., Horvitz, E., & Guttag, J. V. (2012) Patient risk stratification for hospital-associated c. diff as a
time-series classification task, In Advances in Neural Information Processing Systems, pp. 467-475.
[4] Schulam, P., & Saria, S. (2015) A Framework for Individualizing Predictions of Disease Trajectories by
Exploiting Multi-resolution Structure, In Advances in Neural Information Processing Systems, pp. 748-756.
[5] Chalfin, D. B., Trzeciak, S., Likourezos, A., Baumann, B. M., Dellinger, R. P., & DELAY-ED study group.
(2007) Impact of delayed transfer of critically ill patients from the emergency department to the intensive care
unit, Critical care medicine, 35(6), pp. 1477-1483.
[6] Bortfeld, T., Ramakrishnan, J., Tsitsiklis, J. N., & Unkelbach, J. (2015) Optimization of radiation therapy
fractionation schedules in the presence of tumor repopulation, INFORMS Journal on Computing, 27(4), pp.
788-803.
[7] Shapiro, S., et al., (1998) Breast cancer screening programmes in 22 countries: current policies, administration and guidelines, International journal of epidemiology, 27(5), pp. 735-742.
[8] Wald, A., Sequential analysis, Courier Corporation, 1973.
[9] Khalvati, K., & Rao, R. P. (2015) A Bayesian Framework for Modeling Confidence in Perceptual Decision
Making, In Advances in neural information processing systems, pp. 2404-2412.
[10] Dayanik, S., & Angela, J. Y. (2013) Reward-Rate Maximization in Sequential Identification under a
Stochastic Deadline, SIAM J. Control Optim., 51(4), pp. 2922–2948.
[11] Zhang, S., & Angela, J.Y. (2013) Forgetful Bayes and myopic planning: Human learning and decisionmaking in a bandit setting, In Advances in neural information processing systems, pp. 2607-2615.
[12] Shenoy, P., & Angela, J.Y. (2012) Strategic impatience in Go/NoGo versus forced-choice decision-making,
In Advances in neural information processing systems, pp. 2123-2131.
[13] Drugowitsch, J., Moreno-Bote, R., & Pouget, A. (2014) Optimal decision-making with time-varying
evidence reliability, In Advances in neural information processing systems, pp. 748-756.
[14] Yu, A. J., Dayan, P., & Cohen, J. D. (2009) Dynamics of attentional selection under conflict: toward a
rational Bayesian account, Journal of Experimental Psychology: Human Perception and Performance, 35(3),
700.
[15] Frazier, P. & Angela, J. Y. (2007) Sequential hypothesis testing under stochastic deadlines, In Advances in
Neural Information Processing Systems, pp. 465-472.
[16] Drugowitsch, J., Moreno-Bote, R., Churchland, A. K., Shadlen, M. N., & Pouget, A. (2012) The cost of
accumulating evidence in perceptual decision making, The Journal of Neuroscience, 32(11), 3612-3628.
[17] Shvartsman, M., Srivastava, V., & Cohen J. D. (2015) A Theory of Decision Making Under Dynamic
Context, In Advances in Neural Information Processing Systems, pp. 2476-2484. 2015.
[18] Ely, J., Frankel, A., & Kamenica, E. (2015) Suspense and surprise, Journal of Political Economy, 123(1),
pp. 215-260.
[19] Itti, L., & Baldi, P. (2005) Bayesian Surprise Attracts Human Attention, In Advances in Neural Information
Processing Systems, pp. 547-554.
[20] Bogacz, R., Brown, E., Moehlis, J., Holmes, P. J., & Cohen J. D. (2006) The physics of optimal decision
making: A formal analysis of models of performance in two-alternative forced-choice tasks, Psychological
Review, 113(4), pp. 700–765.
[21] Peskir, G., & Shiryaev, A. (2006) Optimal stopping and free-boundary problems, Birkhäuser Basel.
[22] Shiryaev, A. N. (2007) Optimal stopping rules (Vol. 8). Springer Science & Business Media.
[23] Shreve, Steven E. (2004) Stochastic calculus for finance II: Continuous-time models (Vol. 11), Springer
Science & Business Media, 2004.
[24] Bertsekas, D. P., & Shreve, S. E. Stochastic optimal control: The discrete time case (Vol. 23), New York:
Academic Press, 1978.
5,595 | 6,063 | Structure-Blind Signal Recovery
Dmitry Ostrovsky†  Zaid Harchaoui‡  Anatoli Juditsky†  Arkadi Nemirovski§
fi[email protected]
Abstract
We consider the problem of recovering a signal observed in Gaussian noise. If
the set of signals is convex and compact, and can be specified beforehand, one
can use classical linear estimators that achieve a risk within a constant factor of
the minimax risk. However, when the set is unspecified, designing an estimator
that is blind to the hidden structure of the signal remains a challenging problem.
We propose a new family of estimators to recover signals observed in Gaussian
noise. Instead of specifying the set where the signal lives, we assume the existence
of a well-performing linear estimator. Proposed estimators enjoy exact oracle
inequalities and can be efficiently computed through convex optimization. We
present several numerical illustrations that show the potential of the approach.
1 Introduction
We consider the problem of recovering a complex-valued signal (x_t)_{t∈Z} from the noisy observations
$$y_\tau = x_\tau + \sigma\xi_\tau, \qquad -n \le \tau \le n. \qquad (1)$$
Here n ∈ Z+, and ξ_τ ~ CN(0, 1) are i.i.d. standard complex-valued Gaussian random variables,
meaning that ξ_0 = ξ_0¹ + ıξ_0², with i.i.d. ξ_0¹, ξ_0² ~ N(0, 1). Our goal is to recover x_t, 0 ≤ t ≤ n, given
the sequence of observations y_{t−n}, ..., y_t up to instant t, a task usually referred to as (pointwise) filtering in machine learning, statistics, and signal processing [5].
The traditional approach to this problem considers linear estimators, or linear filters, which write as
$$\hat{x}_t = \sum_{\tau=0}^{n} \varphi_\tau\, y_{t-\tau}, \qquad 0 \le t \le n.$$
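For concreteness, the estimate x̂_t is just an inner product between the filter and the reversed observation window, as in the small numpy illustration below (the moving-average filter is an arbitrary example, not a fitted one).

```python
# x_hat_t = sum_{tau=0}^{n} phi_tau * y_{t-tau}
import numpy as np

def linear_filter_estimate(y_window, phi):
    """y_window = (y_{t-n}, ..., y_t); phi = (phi_0, ..., phi_n)."""
    assert len(y_window) == len(phi)
    return np.dot(phi, y_window[::-1])      # pairs phi_tau with y_{t-tau}

y_window = np.array([1.0, 1.1, 0.9, 1.2])  # last entry is y_t
phi = np.full(4, 0.25)                     # simple moving average
print(linear_filter_estimate(y_window, phi))
```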
Linear estimators have been thoroughly studied in various forms; they are both theoretically attractive [7, 3, 2, 16, 17, 11, 13] and easy to use in practice. If the set X of signals is well specified, one
can usually compute a (nearly) minimax on X linear estimator in closed form. In particular, if X
is a class of smooth signals, such as a Hölder or a Sobolev ball, then the corresponding estimator is
given by the kernel estimator with a properly set bandwidth parameter [16] and is minimax among
all possible estimators. Moreover, as shown by [6, 2], if X is merely convex, compact, and centrally
symmetric, the risk of the best linear estimator of x_t is within a small constant factor of the minimax
risk over X. Besides, if the set X can be specified in a computationally tractable way, which clearly
is still a weaker assumption than classical smoothness assumptions, the best linear estimator can be
efficiently computed by solving a convex optimization problem on X. In other words, given a computationally tractable set X on the input, one can compute a nearly-minimax linear estimator and
the corresponding (nearly-minimax) risk over X. The strength of this approach, however, comes at
† LJK, University of Grenoble Alpes, 700 Avenue Centrale, 38401 Domaine Universitaire de Saint-Martin-d'Hères, France.
‡ University of Washington, Seattle, WA 98195, USA.
§ Georgia Institute of Technology, Atlanta, GA 30332, USA.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
a price: the set X still must be specified beforehand. Therefore, when one faces a recovery problem
without any prior knowledge of X, this approach cannot be implemented.
We adopt here a novel approach to filtering, which we refer to as structure-blind recovery. While
we do not require X to be specified beforehand, we assume that there exists a linear oracle — a well-performing linear estimator of x_t. Previous works [8, 10, 4], following a similar philosophy, proved
that one can efficiently adapt to the linear oracle filter of length m = O(n) if the corresponding filter
φ is time-invariant, i.e. it recovers the target signal uniformly well in the O(n)-sized neighbourhood
of t, and if its ℓ2-norm is small — bounded by ρ/√m for a moderate ρ ≥ 1. The adaptive estimator
is computed by minimizing the ℓ∞-norm of the filter discrepancy, in the Fourier domain, under the
constraint on the ℓ1-norm of the filter in the Fourier domain. Put in contrast to the oracle linear filter,
the price for adaptation is proved to be O(ρ³√(ln n)), with the lower bound of O(ρ√(ln n)) [8, 4].
We make the following contributions:
• we propose a new family of recovery methods, obtained by solving a least-squares problem
constrained or penalized by the ℓ1-norm of the filter in the Fourier domain;
• we prove exact oracle inequalities for the ℓ2-risk of these methods;
• we show that the price for adaptation improves upon previous works [8, 4] to O(ρ²√(ln n))
for the pointwise risk and to O(ρ√(ln n)) for the ℓ2-risk;
• we present numerical experiments that show the potential of the approach on synthetic and
real-world images and signals.
Before presenting the theoretical results, let us introduce the notation we use throughout the paper.
Filters. Let C(Z) be the linear space of all two-sided complex-valued sequences x = {x_t ∈ C}_{t∈Z}.
For k, k′ ∈ Z we consider finite-dimensional subspaces
$$C(\mathbb{Z}_k^{k'}) = \{x \in C(\mathbb{Z}) : x_t = 0, \; t \notin [k, k']\}.$$
It is convenient to identify m-dimensional complex vectors, m = k′ − k + 1, with elements of
C(Z_k^{k′}) by means of the notation:
$$x_k^{k'} := [x_k; ...; x_{k'}] \in \mathbb{C}^{k'-k+1}.$$
We associate to linear mappings C(Z_k^{k′}) → C(Z_j^{j′}) the (j′ − j + 1) × (k′ − k + 1) matrices with complex
entries. The convolution u ∗ v of two sequences u, v ∈ C(Z) is a sequence with elements
$$[u * v]_t = \sum_{\tau\in\mathbb{Z}} u_\tau v_{t-\tau}, \qquad t \in \mathbb{Z}.$$
Given observations (1) and φ ∈ C(Z_0^m), consider the (left) linear estimation of x associated with
filter φ:
$$\hat{x}_t = [\varphi * y]_t$$
(x̂_t is merely a kernel estimate of x_t by a kernel φ supported on [0, ..., m]).
Discrete Fourier transform. We define the unitary Discrete Fourier transform (DFT) operator
F_n : C^{n+1} → C^{n+1} by
$$z \mapsto F_n z, \qquad [F_n z]_k = (n+1)^{-1/2}\sum_{t=0}^{n} z_t\, e^{\frac{2\pi \imath kt}{n+1}}, \qquad 0 \le k \le n.$$
The inverse Discrete Fourier transform (iDFT) operator F_n^{−1} is given by F_n^{−1} := F_n^H (here A^H
stands for the Hermitian adjoint of A). By the Fourier inversion theorem, F_n^{−1}(F_n z) = z.
We denote by ‖·‖_p the usual ℓp-norms on C(Z): ‖x‖_p = (Σ_{t∈Z} |x_t|^p)^{1/p}, p ∈ [1, ∞]. Usually, the
argument will be finite-dimensional — an element of C(Z_k^{k′}); we reserve the special notation
$$\|x\|_{n,p} := \|x_0^n\|_p.$$
Furthermore, the DFT allows to equip C(Z_0^n) with the norms associated with ℓp-norms in the spectral
domain:
$$\|x\|^*_{n,p} := \|x_0^n\|^*_p := \|F_n x_0^n\|_p, \qquad p \in [1, \infty];$$
note that unitarity of the DFT implies the Parseval identity: ‖x‖_{n,2} = ‖x‖*_{n,2}.
Finally, c, C, and C′ stand for generic absolute constants.
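These definitions are easy to check numerically. The snippet below builds the unitary DFT matrix explicitly — note the e^{+2πıkt/(n+1)} sign convention above, which differs from numpy's fft — and verifies inversion via the Hermitian adjoint and the Parseval identity.

```python
import numpy as np

n = 7
t = np.arange(n + 1)
F = np.exp(2j * np.pi * np.outer(t, t) / (n + 1)) / np.sqrt(n + 1)  # unitary DFT

rng = np.random.default_rng(0)
z = rng.standard_normal(n + 1) + 1j * rng.standard_normal(n + 1)
Fz = F @ z
print(np.allclose(F.conj().T @ Fz, z))                    # F^{-1} = F^H (inversion)
print(np.isclose(np.linalg.norm(Fz), np.linalg.norm(z)))  # Parseval identity
```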
2 Oracle inequality for constrained recovery
Given observations (1) and ϱ > 0, we first consider the constrained recovery x̂_con given by
$$[\hat{x}_{\mathrm{con}}]_t = [\hat\varphi * y]_t, \qquad t = 0, ..., n,$$
where φ̂ is an optimal solution of the constrained optimization problem
$$\min_{\varphi\in C(\mathbb{Z}_0^n)} \Big\{ \|y - \varphi * y\|_{n,2} : \; \|\varphi\|^*_{n,1} \le \varrho/\sqrt{n+1} \Big\}. \qquad (2)$$
The constrained recovery estimator minimizes a least-squares fit criterion under a constraint on
‖φ‖*_{n,1} = ‖F_n φ_0^n‖_1, that is, an ℓ1 constraint on the discrete Fourier transform of the filter. While the
least-squares objective naturally follows from the Gaussian noise assumption, the constraint can be
motivated as follows.
Small-error linear filters. A linear filter with a small ℓ1-norm in the spectral domain and small
recovery error exists, essentially, whenever there exists a linear filter with small recovery error [8, 4].
Indeed, let us say that x ∈ C(Z_0^n) is simple [4] with parameters m ∈ Z+ and ρ ≥ 1 if there exists
φᵒ ∈ C(Z_0^m) such that for all −m ≤ τ ≤ 2m,
$$\big(\mathbb{E}\,|x_\tau - [\varphi^o * y]_\tau|^2\big)^{1/2} \le \frac{\sigma\rho}{\sqrt{m+1}}. \qquad (3)$$
In other words, x is (m, ρ)-simple if there exists a hypothetical filter φᵒ of length at most m + 1
which recovers x_τ with squared risk uniformly bounded by σ²ρ²/(m+1) in the interval −m ≤ τ ≤ 2m.
Note that (3) clearly implies that ‖φᵒ‖_2 ≤ ρ/√(m+1), and that |[x − φᵒ ∗ x]_τ| ≤ σρ/√(m+1)
for all τ, −m ≤ τ ≤ 2m. Now, let n = 2m, and let
$$\phi^o = \varphi^o * \varphi^o \in \mathbb{C}^{n+1}.$$
As proved in [15, Appendix C], we have
$$\|\phi^o\|^*_{n,1} \le 2\rho^2/\sqrt{n+1}, \qquad (4)$$
and, for a moderate absolute constant c,
$$\|x - \phi^o * y\|_{n,2} \le c\,\sigma\rho^2\sqrt{1 + \ln[1/\alpha]} \qquad (5)$$
with probability 1 − α. To summarize, if x is (m, ρ)-simple, i.e., when there exists a filter φᵒ of length
≤ m + 1 which recovers x with small risk on the interval [−m, 2m], then the filter ϕᵒ = φᵒ ∗ φᵒ,
of length at most n + 1 with n = 2m, has small norm ‖ϕᵒ‖*_{n,1} and recovers the signal x with
(essentially the same) small risk on the interval [0, n].
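For modest n, problem (2) can be prototyped directly as a second-order cone program. Below is a CVXPY sketch — our translation of (2), building the dense one-sided convolution and DFT matrices explicitly — fine for experimentation, though not the scalable first-order approach of Sec. 6.

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import toeplitz

def constrained_recovery(y, rho):
    """Solve (2): min ||y - phi * y||_{n,2} s.t. ||F_n phi||_1 <= rho / sqrt(n+1)."""
    n = len(y) - 1
    T = toeplitz(y, np.r_[y[0], np.zeros(n)])   # (T @ phi)_t = [phi * y]_t, t = 0..n
    t = np.arange(n + 1)
    F = np.exp(2j * np.pi * np.outer(t, t) / (n + 1)) / np.sqrt(n + 1)
    phi = cp.Variable(n + 1, complex=True)
    prob = cp.Problem(cp.Minimize(cp.norm(y - T @ phi, 2)),
                      [cp.norm(F @ phi, 1) <= rho / np.sqrt(n + 1)])
    prob.solve()
    return phi.value

rng = np.random.default_rng(0)
y = np.cos(0.3 * np.arange(32)) + 0.1 * rng.standard_normal(32)
phi_hat = constrained_recovery(y, rho=2.0)
print(np.abs(phi_hat)[:5])
```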
Hidden structure. The constrained recovery estimator is completely blind to a possible hidden
structure of the signal, yet can seamlessly adapt to it when such a structure exists, in a way that
we can rigorously establish. Using the right-shift operator on C(Z), [Δx]_t = x_{t−1}, we formalize
the hidden structure as an unknown shift-invariant linear subspace of C(Z), ΔS = S, of a small
dimension s. We do not assume that x belongs to that subspace. Instead, we make a more general
assumption that x is close to this subspace, that is, it may be decomposed into a sum of a component
that lies in the subspace and a component whose norm we can control.
Assumption A. We suppose that x admits the decomposition
$$x = x^S + \varepsilon, \qquad x^S \in \mathcal{S},$$
where S is an (unknown) shift-invariant, ΔS = S, subspace of C(Z) of dimension s, 1 ≤ s ≤ n + 1,
and ε is "small", namely,
$$\|\Delta^\tau \varepsilon\|_{n,2} \le \sigma\kappa, \qquad 0 \le \tau \le n.$$
Shift-invariant subspaces of C(Z) are exactly the sets of solutions of homogeneous linear difference
equations with polynomial operators. This is summarized by the following lemma (we believe it is
a known fact; for completeness we provide a proof in [15, Appendix C]).
Lemma 2.1. The solution set of a homogeneous difference equation with a polynomial operator p(Δ),
$$[p(\Delta)x]_t = \sum_{\tau=0}^{s} p_\tau x_{t-\tau} = 0, \qquad t \in \mathbb{Z}, \qquad (6)$$
with deg(p(Δ)) = s, p(0) = 1, is a shift-invariant subspace of C(Z) of dimension s. Conversely,
any shift-invariant subspace S ⊆ C(Z), ΔS ⊆ S, dim(S) = s < ∞, is the set of solutions of some
homogeneous difference equation (6) with deg(p(Δ)) = s, p(0) = 1. Moreover, such p(Δ) is unique.
On the other hand, for any polynomial p(Δ), solutions of (6) are exponential polynomials
with frequencies determined by the roots of p(Δ). For instance, discrete-time polynomials
x_t = Σ_{k=0}^{s−1} c_k t^k, t ∈ Z, of degree s − 1 (that is, exponential polynomials with all zero frequencies) form a linear space of dimension s of solutions of the equation (6) with the polynomial
p(Δ) = (1 − Δ)^s with a unique root of multiplicity s, having coefficients p_k = (−1)^k \binom{s}{k}. Naturally, signals which are close, in the ℓ2 distance, to discrete-time polynomials are Sobolev-smooth
functions sampled over the regular grid [10]. A sum of harmonic oscillations x_t = Σ_{k=1}^{s} c_k e^{ıω_k t},
with ω_k ∈ [0, 2π) all different, is another example; here, p(Δ) = Π_{k=1}^{s} (1 − e^{ıω_k}Δ).
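A quick numerical sanity check of the polynomial example above: (1 − Δ)^s, with coefficients p_k = (−1)^k \binom{s}{k}, annihilates every polynomial of degree s − 1.

```python
import numpy as np
from math import comb

s = 4
p = np.array([(-1) ** k * comb(s, k) for k in range(s + 1)])  # (1 - Delta)^s
t = np.arange(50)
x = 3.0 - 2.0 * t + 0.5 * t**2 + 0.1 * t**3        # a degree-(s-1) polynomial
residual = np.convolve(p, x)[s:len(x)]             # [p(Delta) x]_t on valid t
print(np.allclose(residual, 0.0))                  # True: x solves (6)
```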
We can now state an oracle inequality for the constrained recovery estimator; see [15, Appendix B].
Theorem 2.1. Let ϱ ≥ 1, and let ϕᵒ ∈ C(Z_0^n) be such that
$$\|\phi^o\|^*_{n,1} \le \varrho/\sqrt{n+1}.$$
Suppose that Assumption A holds for some s ∈ Z+ and κ < ∞. Then for any α, 0 < α ≤ 1, it
holds with probability at least 1 − α:
$$\|x - \hat{x}_{\mathrm{con}}\|_{n,2} \le \|x - \phi^o * y\|_{n,2} + C\sigma\Big(\kappa\sqrt{s} + \varrho\sqrt{\ln[1/\alpha] + \ln[n/\alpha]}\Big). \qquad (7)$$
When considering simple signals, Theorem 2.1 gives the following.
Corollary 2.1. Assume that the signal x is (m, ρ)-simple, ρ ≥ 1 and m ∈ Z+. Let n = 2m, ϱ ≥ 2ρ²,
and let Assumption A hold for some s ∈ Z+ and κ < ∞. Then for any α, 0 < α ≤ 1, it holds with
probability at least 1 − α:
$$\|x - \hat{x}_{\mathrm{con}}\|_{n,2} \le C\sigma\rho^2\sqrt{\ln[1/\alpha]} + C'\sigma\Big(\kappa\sqrt{s} + \varrho\sqrt{\ln[1/\alpha] + \ln[n/\alpha]}\Big).$$
Adaptation and price. The price for adaptation in Theorem 2.1 and Corollary 2.1 is determined
by three parameters: the bound on the filter norm ϱ, the deterministic error κ, and the subspace
dimension s. Assuming that the signal to recover is simple, and that ϱ = 2ρ², let us compare the
magnitude of the oracle error to the term of the risk which reflects the "price of adaptation". Typically (in
fact, in all cases known to us of recovery of signals from a shift-invariant subspace), the parameter
ρ is at least √s. Therefore, the bound (5) implies the "typical bound" O(σρ²√ω) = σs√ω for
the term ‖x − ϕᵒ ∗ y‖_{n,2} (we denote ω = ln(1/α)). As a result, for instance, in the "parametric
situation", when the signal belongs or is very close to the subspace, that is, when κ = O(ln(n)),
the price of adaptation O(σ[s + ρ⁴(ω + ln n)]^{1/2}) is much smaller than the bound on the oracle
error. In the "nonparametric situation", when κ = O(ρ²), the price of adaptation has the same order
of magnitude as the oracle error.
Finally, note that under the premise of Corollary 2.1 we can also bound the pointwise error. We state
the result for ϱ = 2ρ² for simplicity; the proof can be found in [15, Appendix B].
Theorem 2.2. Assume that the signal x is (m, ρ)-simple, ρ ≥ 1 and m ∈ Z+. Let n = 2m, ϱ = 2ρ²,
and let Assumption A hold for some s ∈ Z+ and κ < ∞. Then for any α, 0 < α ≤ 1, the
constrained recovery x̂_con satisfies
$$|x_n - [\hat{x}_{\mathrm{con}}]_n| \le \frac{C\sigma}{\sqrt{m+1}}\Big(\rho^2\sqrt{\ln[n/\alpha]} + \kappa\sqrt{\ln[1/\alpha]} + \sqrt{s}\Big).$$
3 Oracle inequality for penalized recovery
To use the constrained recovery estimator with a provable guarantee, see e.g. Theorem 2.1, one must
know the norm ϱ of a small-error linear filter, or at least have an upper bound on it. However, if this
parameter is unknown, but instead the noise variance is known (or can be estimated from data), we
can build a more practical estimator that still enjoys an oracle inequality.
The penalized recovery estimator [x̂_pen]_t = [φ̂ ∗ y]_t is an optimal solution to a regularized least-squares minimization problem, where the regularization penalizes the ℓ1-norm of the filter in the
Fourier domain:
$$\hat\varphi \in \operatorname*{Argmin}_{\varphi\in C(\mathbb{Z}_0^n)} \Big\{ \|y - \varphi * y\|_{n,2}^2 + \lambda\sqrt{n+1}\,\|\varphi\|^*_{n,1} \Big\}. \qquad (8)$$
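Since F_n is unitary, the proximal operator of φ ↦ λ√(n+1)‖F_n φ‖_1 is exact — transform, soft-threshold componentwise, transform back — so (8) can be prototyped with a plain proximal-gradient (ISTA) loop. This solver choice is ours; the paper's implementation relies on Mirror-Prox and Nesterov's accelerated method (see Sec. 6).

```python
import numpy as np
from scipy.linalg import toeplitz

def penalized_recovery(y, lam, n_iter=500):
    """ISTA sketch for (8): min ||y - phi*y||^2 + lam*sqrt(n+1)*||F_n phi||_1."""
    n = len(y) - 1
    T = toeplitz(y, np.r_[y[0], np.zeros(n)])      # (T @ phi)_t = [phi * y]_t
    t = np.arange(n + 1)
    F = np.exp(2j * np.pi * np.outer(t, t) / (n + 1)) / np.sqrt(n + 1)
    step = 0.5 / np.linalg.norm(T, 2) ** 2         # 1 / Lipschitz constant of the grad
    w = lam * np.sqrt(n + 1)                       # weight of the l1 penalty in (8)
    phi = np.zeros(n + 1, dtype=complex)
    for _ in range(n_iter):
        grad = 2 * T.T @ (T @ phi - y)             # gradient of ||y - T phi||^2
        z = F @ (phi - step * grad)
        z = np.exp(1j * np.angle(z)) * np.maximum(np.abs(z) - step * w, 0.0)
        phi = F.conj().T @ z                       # complex soft-threshold, transform back
    return phi

rng = np.random.default_rng(0)
y = np.cos(0.3 * np.arange(32)) + 0.1 * rng.standard_normal(32)
phi_hat = penalized_recovery(y, lam=0.05)
print(np.linalg.norm(y - toeplitz(y, np.r_[y[0], np.zeros(len(y) - 1)]) @ phi_hat))
```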
Similarly to Theorem 2.1, we establish an oracle inequality for the penalized recovery estimator.
Theorem 3.1. Let Assumption A hold for some s ∈ Z+ and κ < ∞, and let ϕᵒ ∈ C(Z_0^n) satisfy
‖ϕᵒ‖*_{n,1} ≤ ϱ/√(n+1) for some ϱ ≥ 1.
1°. Suppose that the regularization parameter of the penalized recovery x̂_pen satisfies λ ≥ λ̄,
$$\bar\lambda := 60\,\sigma^2 \ln[63n/\alpha].$$
Then, for 0 < α ≤ 1, it holds with probability at least 1 − α:
$$\|x - \hat{x}_{\mathrm{pen}}\|_{n,2} \le \|x - \phi^o * y\|_{n,2} + C\sqrt{\varrho\lambda} + C'\sigma\sqrt{s + (\hat\varrho+1)\ln[1/\alpha]},$$
where ϱ̂ := √(n+1) ‖φ̂‖*_{n,1}.
2°. Moreover, if λ ≥ λ̄,
$$\bar\lambda := \frac{10\,\sigma^2\ln[42n/\alpha]}{\sqrt{\ln[16/\alpha]}},$$
and λ ≤ 2λ̄, one has
$$\|x - \hat{x}_{\mathrm{pen}}\|_{n,2} \le \|x - \phi^o * y\|_{n,2} + C\sqrt{\varrho\lambda} + C'\sigma\sqrt{s}.$$
The proof closely follows that of Theorem 2.1 and can also be found in [15, Appendix B].
4 Discussion
There is some redundancy between "simplicity" of a signal, as defined by (3), and Assumption
A. Usually a simple signal or image x is also close to a low-dimensional subspace of C(Z) (see,
e.g., [10, section 4]), so that Assumption A holds "automatically". Likewise, x is "almost" simple
when it is close to a low-dimensional time-invariant subspace. Indeed, if x ∈ C(Z) belongs to S,
i.e. Assumption A holds with κ = 0, one can easily verify that for n ≥ s there exists a filter
φᵒ ∈ C(Z_{−n}^n) such that
$$\|\varphi^o\|_2 \le \sqrt{s/(n+1)}, \quad \text{and} \quad x_\tau = [\varphi^o * x]_\tau, \;\; \tau \in \mathbb{Z}. \qquad (9)$$
See [15, Appendix C] for the proof. This implies that x can be recovered efficiently from observations (1):
$$\big(\mathbb{E}\,|x_\tau - [\varphi^o * y]_\tau|^2\big)^{1/2} \le \sigma\sqrt{\frac{s}{n+1}}.$$
In other words, if instead of the filtering problem we were interested in the interpolation problem of
recovering x_t given 2n + 1 observations y_{t−n}, ..., y_{t+n} on the left and on the right of t, Assumption
A would imply a kind of simplicity of x. On the other hand, it is clear that Assumption A is not
sufficient to imply the simplicity of x "with respect to filtering", in the sense of the definition
we use in this paper, when we are allowed to use only observations on the left of t to compute the
estimate at t. Indeed, one can see, for instance, that already signals from the parametric family
X_ζ = {x ∈ C(Z) : x_τ = cζ^τ, c ∈ C}, with a given |ζ| > 1, which form a one-dimensional
space of solutions of the equation x_τ = ζx_{τ−1}, cannot be estimated with small risk at t using only
observations on the left of t (unless c = 0), and thus are not simple in the sense of (3).
Of course, in the above example, the "difficulty" of the family X_ζ is due to instability of solutions
of the difference equation, which explode when τ → +∞. Note that signals x ∈ X_ζ with |ζ| ≤ 1
(linear functions, oscillations, or damped oscillations) are simple. More generally, suppose that x
satisfies a difference equation of degree s:
$$0 = [p(\Delta)x]_\tau = \sum_{i=0}^{s} p_i x_{\tau-i}, \qquad (10)$$
where p(z) = Σ_{i=0}^s p_i z^i is the corresponding characteristic polynomial and Δ is the right-shift operator. When p(z) is unstable — has roots inside the unit circle — (depending on "initial conditions")
the set of solutions to the equation (10) contains signals that are difficult to filter. Observe that stability of
solutions is related to the direction of the time axis; when the characteristic polynomial p(z) has
roots outside the unit circle, the corresponding solutions may be "left unstable" — increase exponentially when τ → −∞. In this case "right filtering" — estimating x_τ using observations on the right
of τ — will be difficult. A special situation where interpolation and filtering are always simple arises
when the characteristic polynomial of the difference equation has all its roots on the unit circle. In
this case, solutions to (10) are "generalized harmonic oscillations" (harmonic oscillations modulated
by polynomials), and such signals are known to be simple. Theorem 4.1 summarizes the properties
of the solutions of (10) in this particular case; see [15, Appendix C] for the proof.
Theorem 4.1. Let s be a positive integer, and let p = [p_0; ...; p_s] ∈ C^{s+1} be such that the polynomial
p(z) = Σ_{i=0}^s p_i z^i has all its roots on the unit circle. Then for every integer m satisfying
$$m \ge m(s) := Cs^2\ln(s+1),$$
one can point out q ∈ C^{m+1} such that any solution to (10) satisfies
$$x_\tau = [q * x]_\tau, \;\;\forall \tau \in \mathbb{Z}, \qquad \text{and} \qquad \|q\|_2 \le \theta(s,m)/\sqrt{m},$$
where
$$\theta(s, m) = C'\min\Big\{s^{3/2}\sqrt{\ln s},\; s\sqrt{\ln[ms]}\Big\}. \qquad (11)$$
5 Numerical experiments
We present preliminary results on simulated data for the proposed adaptive signal recovery methods in several application scenarios. We compare the performance of the penalized ℓ2-recovery of
Sec. 3 to that of the Lasso recovery of [1] in signal and image denoising problems. Implementation
details for the penalized ℓ2-recovery are given in Sec. 6. A discussion of the discretization approach
underlying the competing Lasso method can be found in [1, Sec. 3.6].
We follow the same methodology in both signal and image denoising experiments. For each level of
the signal-to-noise ratio SNR ∈ {1, 2, 4, 8, 16}, we perform N Monte-Carlo trials. In each trial,
we generate a random signal x on a regular grid with n points, corrupted by i.i.d. Gaussian noise
of variance σ². The signal is normalized: ‖x‖_2 = 1, so SNR^{−1} = σ√n. We set the regularization
penalty in each method as follows. For the penalized ℓ2-recovery (8), we use λ = 2σ²log[63n/α] with
α = 0.1. For Lasso [1], we use the common setting λ = σ√(2 log n). We report experimental results
by plotting the ℓ2-error ‖x̂ − x‖_2, averaged over N Monte-Carlo trials, versus the inverse of the
signal-to-noise ratio SNR^{−1}.
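The sketch below reproduces our reading of the RandomSpikes generator and the noise scaling (‖x‖_2 = 1, so SNR^{−1} = σ√n); the complex-Gaussian amplitude convention is an assumption.

```python
import numpy as np

def random_spikes(n=100, k=4, snr=4.0, rng=np.random.default_rng(0)):
    t = np.arange(n)
    omega = rng.uniform(0.0, 2 * np.pi, k)          # spike positions in [0, 2*pi]
    amp = rng.standard_normal(k) + 1j * rng.standard_normal(k)
    x = (amp * np.exp(1j * np.outer(t, omega))).sum(axis=1)
    x /= np.linalg.norm(x)                          # normalize: ||x||_2 = 1
    sigma = 1.0 / (snr * np.sqrt(n))                # SNR^{-1} = sigma * sqrt(n)
    noise = sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return x, x + noise

x, y = random_spikes()
print(np.linalg.norm(x), np.linalg.norm(y - x))
```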
Signal denoising. We consider denoising of a one-dimensional signal in two different scenarios,
fixing N = 100 and n = 100. In the RandomSpikes scenario, the signal is a sum of 4 harmonic
oscillations, each characterized by a spike of a random amplitude at a random position in the continuous frequency domain [0, 2π]. In the CoherentSpikes scenario, the same number of spikes is
[Figure 1: Signal and image denoising in different scenarios, left to right: RandomSpikes, CoherentSpikes, RandomSpikes-2D, and CoherentSpikes-2D. Each panel plots the ℓ2-error of Lasso [1] and the penalized ℓ2-recovery against SNR^{−1} = σ√n. The steep parts of the curves at high noise levels correspond to observations being thresholded to zero. Plot data omitted.]
sampled by pairs. Spikes in each pair have the same amplitude and are separated by only 0.1 of
the DFT bin 2π/n, which could make recovery harder due to high signal coherency. However, in
practice we found RandomSpikes to be slightly harder than CoherentSpikes for both methods; see
Fig. 1. As Fig. 1 shows, the proposed penalized ℓ2-recovery outperforms the Lasso method for all
noise levels. The performance gain is particularly significant for high signal-to-noise ratios.
Image denoising. We now consider recovery of an unknown regression function f on the regular
grid on [0, 1]² given the noisy observations:
$$y_\tau = x_\tau + \sigma\xi_\tau, \qquad \tau \in \{0, 1, ..., m-1\}^2, \qquad (12)$$
where x_τ = f(τ/m). We fix N = 40 and the grid dimension m = 40; the number of samples
is then n = m². For the penalized ℓ2-recovery, we implement the blockwise denoising strategy
(see the implementation details below) with just one block for the entire image. We present
additional numerical illustrations in the supplementary material.
We study three different scenarios for generating the ground-truth signal in this experiment. The
first two scenarios, RandomSpikes-2D and CoherentSpikes-2D, are two-dimensional counterparts of
those studied in the signal denoising experiment: the ground-truth signal is a sum of 4 harmonic
oscillations in R² with random frequencies and amplitudes. The separation in the CoherentSpikes-2D scenario is 0.2π/m in each dimension of the torus [0, 2π]². The results for these scenarios are
shown in Fig. 1. Again, the proposed penalized ℓ2-recovery outperforms the Lasso method for all
noise levels, especially for high signal-to-noise ratios.
In the DimensionReduction-2D scenario we investigate the problem of estimating a function with a
hidden low-dimensional structure. We consider the single-index model of the regression function:
$$f(t) = g(\theta^T t), \qquad g(\cdot) \in \mathcal{S}_1^\kappa(1). \qquad (13)$$
Here, S_1^κ(1) = {g : R → R, ‖g^{(κ)}(·)‖_2 ≤ 1} is the Sobolev ball of smooth periodic functions on
[0, 1], and the unknown structure is formalized as the direction θ. In our experiments we sample
the direction θ uniformly at random and consider different values of the smoothness index κ. If
it is known a priori that the regression function possesses the structure (13), and only the index is
unknown, one can use estimators attaining "one-dimensional" rates of recovery; see e.g. [12] and
references therein. In contrast, our recovery algorithms are not aware of the underlying structure but
might still adapt to it.
might still adapt to it.
As shown in Fig. 2, the `2 -recovery performs well in this scenario despite the fact that the available
theoretical bounds are pessimistic. For example, the signal (13) with a smooth g can be approximated by a small number of harmonic oscillations in R2 . As follows from the proof of [9, Proposition 10] combined with Theorem 4.1, for a sum of k harmonic oscillations in Rd one can point out a
reproducing linear filter with %(k) = O(k 2d ) (neglecting the logarithmic factors), i.e. the theoretical
guarantee is quite conservative for small values of ?.
6 Details of algorithm implementation
Here we give a brief account of some techniques and implementation tricks exploited in our codes.
Solving the optimization problems. Note that the optimization problems (2) and (8) underlying
the proposed recovery algorithms are well-structured Second-Order Conic Programs (SOCP) and
[Figure 2: Image denoising in the DimensionReduction scenario; smoothness decreases from left to right (panels κ = 2, κ = 1, κ = 0.5). Each panel plots the ℓ2-error of Lasso [1] and the penalized ℓ2-recovery against SNR^{−1} = σ√n. Plot data omitted.]
can be solved using interior-point methods (IPM). However, the computational complexity of IPM
applied to SOCP with dense matrices grows rapidly with the problem dimension, so that large problems
of this type arising in signal and image processing are well beyond the reach of these techniques. On
the other hand, these problems possess nice geometry associated with the complex ℓ1-norm. Moreover,
their first-order information — the value of the objective and its gradient at a given φ — can be computed
using the Fast Fourier Transform in time which is almost linear in the problem size. Therefore, we used first-order optimization algorithms, such as Mirror-Prox and Nesterov's accelerated gradient algorithms
(see [14] and references therein), in our recovery implementation. A complete description of the
application of these optimization algorithms to our problem is beyond the scope of the paper; we
shall present it elsewhere.
Interpolating recovery. In Sec. 2-3 we considered only recoveries which estimate the value x_t
of the signal via the observations at n + 1 points t − n, ..., t "on the left" (filtering problem). To
recover the whole signal, one may consider a more flexible alternative — interpolating recovery —
which estimates x_t using observations on the left and on the right of t. In particular, if the objective
is to recover a signal on the interval {−n, ..., n}, one can apply interpolating recoveries which use
the same observations y_{−n}, ..., y_n to estimate x_τ at any τ ∈ {−n, ..., n}, by altering the relative
position of the filter and the current point.
Blockwise recovery. Ideally, when using pointwise recovery, a specific filter is constructed for
each time instant t. This may pose a tremendous amount of computation, for instance, when recovering a high-resolution image. Alternatively, one may split the signal into blocks, and process the
points of each block using the same filter (cf. e.g. Theorem 2.1). For instance, a one-dimensional
signal can be divided into blocks of length, say, 2m + 1, and to recover x ∈ C(Z_{−m}^m) in each
block one may fit one filter of length m + 1 recovering the right "half-block" x_0^m and another filter
recovering the left "half-block" x_{−m}^{−1}.
7 Conclusion
We introduced a new family of estimators for structure-blind signal recovery that can be computed
using convex optimization. The proposed estimators enjoy oracle inequalities for the `2 -risk and for
the pointwise risk. Extensive theoretical discussions and numerical experiments will be presented
in the follow-up journal paper.
Acknowledgments
We would like to thank Arnak Dalalyan and Gabriel Peyré for fruitful discussions. DO, AJ, ZH were
supported by the LabEx PERSYVAL-Lab (ANR-11-LABX-0025) and the project Titan (CNRS-Mastodons). ZH was also supported by the project Macaron (ANR-14-CE23-0003-01), the MSR-Inria joint centre, and the program "Learning in Machines and Brains" (CIFAR). Research of AN
was supported by NSF grants CMMI-1262063, CCF-1523768.
References
[1] B. N. Bhaskar, G. Tang, and B. Recht. Atomic norm denoising with applications to line spectral estimation. IEEE Trans. Signal Processing, 61(23):5987–5999, 2013.
[2] D. L. Donoho. Statistical estimation and optimal recovery. Ann. Statist., 22(1):238–270, 03 1994.
[3] D. L. Donoho and M. G. Low. Renormalization exponents and optimal pointwise rates of convergence. Ann. Statist., 20(2):944–970, 06 1992.
[4] Z. Harchaoui, A. Juditsky, A. Nemirovski, and D. Ostrovsky. Adaptive recovery of signals by convex optimization. In Proceedings of The 28th Conference on Learning Theory, COLT 2015, Paris, France, July 3-6, 2015, pages 929–955, 2015.
[5] S. Haykin. Adaptive filter theory. Prentice Hall, 1991.
[6] I. Ibragimov and R. Khasminskii. Nonparametric estimation of the value of a linear functional in Gaussian white noise. Theor. Probab. & Appl., 29(1):1–32, 1984.
[7] I. Ibragimov and R. Khasminskii. Estimation of linear functionals in Gaussian noise. Theor. Probab. & Appl., 32(1):30–39, 1988.
[8] A. Juditsky and A. Nemirovski. Nonparametric denoising of signals with unknown local structure, I: Oracle inequalities. Appl. & Comput. Harmon. Anal., 27(2):157–179, 2009.
[9] A. Juditsky and A. Nemirovski. Nonparametric estimation by convex programming. Ann. Statist., 37(5a):2278–2300, 2009.
[10] A. Juditsky and A. Nemirovski. Nonparametric denoising signals of unknown local structure, II: Nonparametric function recovery. Appl. & Comput. Harmon. Anal., 29(3):354–367, 2010.
[11] T. Kailath, A. Sayed, and B. Hassibi. Linear Estimation. Prentice Hall, 2000.
[12] O. Lepski and N. Serdyukova. Adaptive estimation under single-index constraint in a regression model. Ann. Statist., 42(1):1–28, 2014.
[13] S. Mallat. A wavelet tour of signal processing. Academic Press, 1999.
[14] Y. Nesterov and A. Nemirovski. On first-order algorithms for ℓ1/nuclear norm minimization. Acta Num., 22:509–575, 2013.
[15] D. Ostrovsky, Z. Harchaoui, A. Juditsky, and A. Nemirovski. Structure-Blind Signal Recovery. arXiv:1607.05712v2, Oct. 2016.
[16] A. Tsybakov. Introduction to Nonparametric Estimation. Springer, 2008.
[17] L. Wasserman. All of Nonparametric Statistics. Springer, 2006.
5,596 | 6,064 | End-to-End Goal-Driven Web Navigation
Rodrigo Nogueira
Tandon School of Engineering
New York University
[email protected]
Kyunghyun Cho
Courant Institute of Mathematical Sciences
New York University
[email protected]
Abstract
We propose a goal-driven web navigation as a benchmark task for evaluating an
agent with abilities to understand natural language and plan on partially observed
environments. In this challenging task, an agent navigates through a website,
which is represented as a graph consisting of web pages as nodes and hyperlinks as
directed edges, to find a web page in which a query appears. The agent is required
to have sophisticated high-level reasoning based on natural languages and efficient
sequential decision-making capability to succeed. We release a software tool,
called WebNav, that automatically transforms a website into this goal-driven web
navigation task, and as an example, we make WikiNav, a dataset constructed from
the English Wikipedia. We extensively evaluate different variants of neural net
based artificial agents on WikiNav and observe that the proposed goal-driven web
navigation well reflects the advances in models, making it a suitable benchmark
for evaluating future progress. Furthermore, we extend the WikiNav with question-answer
pairs from Jeopardy! and test the proposed agent based on recurrent neural
networks against strong inverted-index based search engines. The artificial agents
trained on WikiNav outperform the engine-based approaches, demonstrating the
capability of the proposed goal-driven navigation as a good proxy for measuring
the progress in real-world tasks such as focused crawling and question-answering.
1 Introduction
In recent years, there have been many exciting advances in building an artificial agent, which can be
trained with one learning algorithm, to solve many relatively large-scale, complicated tasks (see, e.g.,
[8, 10, 6].) In much of these works, target tasks were computer games such as Atari games [8] and
racing car game [6].
These successes have stimulated researchers to apply a similar learning mechanism to language-based
tasks, such as multi-user dungeon (MUD) games [9, 4]. Instead of visual perception, an agent
perceives the state of the world by its written description. A set of actions allowed to the agent is
either fixed or dependent on the current state. This type of task can efficiently evaluate the agent's
ability not only in planning but also in language understanding.
We, however, notice that these MUD games do not exhibit the complex nature of natural languages
to the full extent. For instance, the largest game world tested by Narasimhan et al. [9] uses a
vocabulary of only 1340 unique words, and the largest game tested by He et al. [4] uses only 2258
words. Furthermore, the description of a state at each time step is almost always limited to the visual
description of the current scene, lacking any use of higher-level concepts present in natural languages.
In this paper, we propose a goal-driven web navigation as a large-scale alternative to the text-based
games for evaluating artificial agents with natural language understanding and planning capability.
The proposed goal-driven web navigation consists of the whole website as a graph, in which the web
pages are nodes and hyperlinks are directed edges. An agent is given a query, which consists of one
or more sentences taken from a randomly selected web page in the graph, and navigates the network,
starting from a predefined starting node, to find a target node in which the query appears. Unlike
the text-based games, this task utilizes the existing text as it is, resulting in a large vocabulary with
a truly natural language description of the state. Furthermore, the task is more challenging as the
action space greatly changes with respect to the state in which the agent is.
We release a software tool, called WebNav, that converts a given website into a goal-driven web
navigation task. As an example of its use, we provide WikiNav, which was built from English
Wikipedia. We design artificial agents based on neural networks (called NeuAgents) trained with
supervised learning, and report their respective performances on the benchmark task as well as the
performance of human volunteers. We observe that the difficulty of a task generated by WebNav is
well controlled by two control parameters; (1) the maximum number of hops from a starting to a
target node Nh and (2) the length of query Nq .
Furthermore, we extend the WikiNav with an additional set of queries that are constructed from
Jeopardy! questions, to which we refer by WikiNav-Jeopardy. We evaluate the proposed NeuAgents
against the three search-based strategies; (1) SimpleSearch, (2) Apache Lucene and (3) Google
Search API. The result in terms of document recall indicates that the NeuAgents outperform those
search-based strategies, implying a potential for the proposed task as a good proxy for practical
applications such as question-answering and focused crawling.
2 Goal-driven Web Navigation
A task T of goal-driven web navigation is characterized by

    T = (A, sS, G, q, R, Ω).    (1)

The world in which an agent A navigates is represented as a graph G = (N, E). The graph consists of a set of nodes N = {s_i}_{i=1}^{N_N} and a set of directed edges E = {e_{i,j}} connecting those nodes. Each
node represents a page of the website, which, in turn, is represented by the natural language text
D(si ) in it. There exists an edge going from a page si to sj if and only if there is a hyperlink in D(si )
that points to sj . One of the nodes is designated as a starting node sS from which any navigation
begins. A target node is the one whose natural language description contains a query q, and there
may be more than one target node.
At each time step, the agent A reads the natural language description D(st ) of the current node in
which the agent has landed. At no point, the whole world, consisting of the nodes and edges, nor its
structure or map (graph structure without any natural language description) is visible to the agent,
thus making this task partially observed.
Once the agent A reads the description D(si ) of the current node si , it can take one of the actions
available. A set of possible actions is defined as a union of all the outgoing edges ei,? and the stop
action, thus making the agent have state-dependent action space.
Each edge ei,k corresponds to the agent jumping to a next node sk , while the stop action corresponds
to the agent declaring that the current node si is one of the target nodes. Each edge ei,k is represented
by the description of the next node D(sk ). In other words, deciding which action to take is equivalent
to taking a peek at each neighboring node and seeing whether that node is likely to lead ultimately to
a target node.
The agent A receives a reward R(s_i, q) when it chooses the stop action. This task uses a simple binary reward, where

    R(s_i, q) = 1, if q ⊆ D(s_i); and 0, otherwise.
Constraints It is clear that there exists an ultimate policy for the agent to succeed at every trial,
which is to traverse the graph breadth-first until the agent finds a node in which the query appears. To avoid such degenerate policies, the task includes a set of four rules/constraints Ω:
1. An agent can follow at most Nn edges at each node.
2. An agent has a finite memory of size smaller than T .
2
Table 1: Dataset statistics of WikiNav-4-*, WikiNav-8-*, WikiNav-16-*, and WikiNav-Jeopardy.

                      Train   Valid   Test
    WikiNav-4-*       6.0k    1k      1k
    WikiNav-8-*       1M      20k     20k
    WikiNav-16-*      12M     20k     20k
    WikiNav-Jeopardy  113k    10k     10k
3. An agent moves up to Nh hops away from sS .
4. A query of size Nq comes from at least two hops away from the starting node.
The first constraint alone prevents degenerate policies, such as breadth-first search, forcing the agent
to make decisions as good as possible at each node. The second one further ensures that the
agent does not cheat by using earlier trials to reconstruct the whole graph structure (during test time)
or to store the entire world in its memory (during training.) The third constraint, which is optional, is
there for computational consideration. The fourth constraint is included because the agent is allowed
to read the content of a next node.
3 WebNav: Software
As a part of this work, we build and release a software tool which turns a website into a goal-driven
web navigation task.1 We call this tool WebNav. Given a starting URL, the WebNav reads the whole
website, constructs a graph with the web pages in the website as nodes. Each node is assigned a
unique identifier si . The text content of each node D(si ) is a cleaned version of the actual HTML
content of the corresponding web page. The WebNav turns intra-site hyperlinks into a set of edges
ei,j .
In addition to transforming a website into a graph G from Eq. (1), the WebNav automatically selects
queries from the nodes? texts and divides them into training, validation, and test sets. We ensure that
there is no overlap among three sets by making each target node, from which a query is selected,
belongs to only one of them.
Each generated example is defined as a tuple

    X = (q, s*, p*)    (2)

where q is a query from a web page s*, which was found following a randomly selected path p* = (sS, . . . , s*). In other words, the WebNav starts from a starting page sS, random-walks the graph for a predefined number of steps (Nh/2, in our case), reaches a target node s* and selects a query q from D(s*). A query consists of Nq sentences and is selected among the top-5 candidates in the target node with the highest average TF-IDF, thus discouraging the WebNav from choosing a trivial query.
For the evaluation purpose alone, it is enough to use only a query q itself as an example. However,
we include both one target node (among potentially many other target nodes) and one path from the
starting node to this target node (again, among many possible connecting paths) so that they can be
exploited when training an agent. They are not to be used when evaluating a trained agent.
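To make the generation procedure concrete, the following is a minimal Python sketch of how such tuples could be produced. The graph (a dict of adjacency lists), the sentence splitter, and the TF-IDF scorer are simplified, hypothetical stand-ins, not the released tool's actual API.

import random
from collections import Counter

def make_example(graph, texts, start, n_hops, n_q, idf):
    """Random-walk n_hops/2 steps from `start`, then pick a high-TF-IDF
    query of n_q sentences from the target page (cf. Eq. (2))."""
    path = [start]
    for _ in range(n_hops // 2):
        neighbors = graph.get(path[-1], [])
        if not neighbors:
            break
        path.append(random.choice(neighbors))
    target = path[-1]
    sentences = texts[target].split(". ")
    def score(sent):  # average TF-IDF over the sentence's words
        words = sent.lower().split()
        tf = Counter(words)
        return sum(tf[w] * idf.get(w, 0.0) for w in set(words)) / max(len(words), 1)
    top5 = sorted(sentences, key=score, reverse=True)[:5]
    query = " ".join(random.sample(top5, min(n_q, len(top5))))
    return query, target, path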
4 WikiNav: A Benchmark Task
With the WebNav, we built a benchmark goal-driven navigation task using Wikipedia as a target
website. We used the dump file of the English Wikipedia from September 2015, which consists of
more than five million web pages. We built a set of separate tasks with different levels of difficulty by varying the maximum number of allowed hops Nh ∈ {4, 8, 16} and the size of query Nq ∈ {1, 2, 4}. We refer to each task by WikiNav-Nh-Nq.
For each task, we generate training, validation and test examples from the pages half as many hops
away from a starting page as the maximum number of hops allowed.2 We use "Category: Main topic classifications" as a starting node sS.
1. The source code and datasets are publicly available at github.com/nyu-dl/WebNav.
2. This limit is an artificial limit we chose for computational reasons.
Table 3: Sample query-answer pairs from WikiNav-Jeopardy.

    Query                                                              Answer
    For the last 8 years of his life, Galileo was under house
    arrest for espousing this man's theory.                            Copernicus
    In the winter of 1971-72, a record 1,122 inches of snow fell
    at Rainier Paradise Ranger Station in this state.                  Washington
    This company's Accutron watch, introduced in 1960, had a
    guarantee of accuracy to within one minute a month.                Bulova
As a minimal cleanup procedure, we excluded meta articles whose titles start with "Wikipedia". Any hyperlink that leads to a web page outside Wikipedia is removed in advance together with the following sections: "References", "External Links", "Bibliography" and "Partial Bibliography".
In Table 2, we present basic per-article statistics of the
English Wikipedia. It is evident from these statistics that
the world of WikiNav-Nh -Nq is large and complicated,
even after the cleanup procedure.
We ended up with a fairly small dataset for WikiNav-4-*,
but large for WikiNav-8-* and WikiNav-16-*. See Table 1
for details.
                 Avg.    Var     Max      Min
    Hyperlinks   4.29    13.85   300      0
    Words        462.5   990.2   132881   1

Table 2: Per-page statistics of English Wikipedia.

4.1 Related Work: Wikispeedia
This work is indeed not the first to notice the possibility of a website, or possibly the whole web, as a
world in which intelligent agents explore to achieve a certain goal. One most relevant recent work to
ours is perhaps Wikispeedia from [14, 12, 13].
West et al. [14, 12, 13] proposed the following game, called Wikispeedia. The game's world is nearly
identical to the goal-driven navigation task proposed in this work. More specifically, they converted
?Wikipedia for Schools?, which contains approximately 4,000 articles as of 2008, into a graph whose
nodes are articles and directed edges are hyperlinks. From this graph, a pair of nodes is randomly
selected and provided to an agent.
The agent?s goal is to start from the first node, navigate the graph and reach the second node. Similarly
to the WikiNav, the agent has access to the text content of the current nodes and all the immediate
neighboring nodes. One major difference is that the target is given as a whole article, meaning that
there is a single target node in the Wikispeedia while there may be multiple target nodes in the
proposed WikiNav.
From this description, we see that the goal-driven web navigation is a generalization and re-framing
of the Wikispeedia. First, we constrain a query to contain less information, making it much more
difficult for an agent to navigate to a target node. Furthermore, a major research question by West and
Leskovec [13] was to "understand how humans navigate and find the information they are looking for," whereas in this work we are fully focused on proposing an automatic tool to build challenging goal-driven tasks for designing and evaluating artificial intelligent agents.
5 WikiNav-Jeopardy: Jeopardy! on WikiNav
One of the potential practical applications utilizing the goal-drive navigation is question-answering
based on world knowledge. In this Q&A task, a query is a question, and an agent navigates a given
information network, e.g., website, to retrieve an answer. In this section, we propose and describe
an extension of the WikiNav, in which query-target pairs are constructed from actual Jeopardy!
question-answer pairs. We refer to this extension of WikiNav by WikiNav-Jeopardy.
We first extract all the question-answer pairs from J! Archive3 , which has more than 300k such
pairs. We keep only those pairs whose answers are titles of Wikipedia articles, leaving us with 133k
pairs. We divide those pairs into 113k training, 10k validation, and 10k test examples while carefully
3. www.j-archive.com
ensuring that no article appears in more than one partition. Additionally, we do not shuffle the original
pairs to ensure that the train and test examples are from different episodes.
For each training pair, we find one path from the starting node ?Main Topic Classification? to the
target node and include it for supervised learning. For reference, the average number of hops to
the target node is 5.8, the standard deviation is 1.2, and the maximum and minimum are 2 and 10,
respectively. See Table 3 for sample query-answer pairs.
6 NeuAgent: Neural Network based Agent

6.1 Model Description
Core Function The core of the NeuAgent is a parametric function f_core that takes as input the content of the current node φ_c(s_i) and a query φ_q(q), and that returns the hidden state of the agent. This parametric function f_core can be implemented either as a feedforward neural network f_ff:

    h_t = f_ff(φ_c(s_i), φ_q(q)),

which does not take into account the previous hidden state of the agent, or as a recurrent neural network f_rec:

    h_t = f_rec(h_{t-1}, φ_c(s_i), φ_q(q)).

We refer to these two types of agents by NeuAgent-FF and NeuAgent-Rec, respectively. For the NeuAgent-FF, we use a single tanh layer, while we use long short-term memory (LSTM) units [5], which have recently become the de facto standard, for the NeuAgent-Rec.
Based on the new hidden state h_t, the NeuAgent computes the probability distribution over all the outgoing edges e_i. The probability of each outgoing edge is proportional to its similarity to the hidden state h_t, such that

    p(e_{i,j} | p̃) ∝ exp(φ_c(s_j)^T h_t).    (3)

Note that the NeuAgent peeks at the content of the next node s_j by considering its vector representation φ_c(s_j). In addition to all the outgoing edges, we also allow the agent to stop with the probability

    p(∅ | p̃) ∝ exp(v_∅^T h_t),    (4)

where the stop action vector v_∅ is a trainable parameter. In the case of NeuAgent-Rec, all these (unnormalized) probabilities are conditioned on the history p̃, which is a sequence of actions (nodes) selected by the agent so far. We apply a softmax normalization on the unnormalized probabilities to obtain the probability distribution over all the possible actions at the current node s_i.

[Figure 1: Graphical illustration of a single step performed by the baseline model, NeuAgent.]

The NeuAgent then selects its next action based on this action probability distribution (Eqs. (3) and (4)). If the stop action is chosen, the NeuAgent returns the current node as an answer and receives a reward R(s_i, q), which is one if correct and zero otherwise. If the agent selects one of the outgoing edges, it moves to the selected node and repeats this process of reading and acting.
See Fig. 1 for a single step of the described NeuAgent.
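As an illustration of this step, a minimal NumPy sketch of Eqs. (3)-(4) follows; the names (h, neighbor_vecs, v_stop) are illustrative assumptions, not identifiers from the paper's code.

import numpy as np

def action_distribution(h, neighbor_vecs, v_stop):
    """Softmax over outgoing edges (scored by phi_c(s_j)^T h_t, Eq. (3))
    plus the stop action (scored by v_stop^T h_t, Eq. (4))."""
    scores = np.concatenate([neighbor_vecs @ h, [v_stop @ h]])
    scores -= scores.max()              # numerical stability
    probs = np.exp(scores)
    return probs / probs.sum()          # last entry is p(stop)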
Content Representation The NeuAgent represents the content of a node s_i as a vector φ_c(s_i) ∈ R^d. In this work, we use a continuous bag-of-words vector for each document:

    φ_c(s_i) = (1/|D(s_i)|) Σ_{k=1}^{|D(s_i)|} e_k.
Each word vector ek is from a pretrained continuous bag-of-words model [7]. These word vectors
are fixed throughout training.
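Concretely, the content vector is just an average of fixed pretrained word embeddings; a minimal sketch, assuming a non-empty embedding lookup table emb:

import numpy as np

def content_vector(words, emb):
    """Continuous bag-of-words representation phi_c(s_i):
    the average of the (fixed, pretrained) word vectors e_k."""
    vecs = [emb[w] for w in words if w in emb]
    return np.mean(vecs, axis=0)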
Query Representation In the case of a query, we consider two types of representation. The first one is a continuous bag-of-words (BoW) vector, just as used for representing the content of a node. The other one is a dynamic representation based on the attention mechanism [2].

In the attention-based query representation, the query is first projected into a set of context vectors. The context vector of the k-th query word is

    c_k = Σ_{k'=k-u/2}^{k+u/2} W_{k'} e_{k'},

where W_{k'} ∈ R^{d×d} and e_{k'} are respectively a trainable weight matrix and a pretrained word vector, and u is the window size. Each context vector is scored at each time step t by β_k^t = f_att(h_{t-1}, c_k) w.r.t. the previous hidden state of the NeuAgent, and all the scores are normalized to be positive and sum to one, i.e., α_k^t = exp(β_k^t) / Σ_{l=1}^{|q|} exp(β_l^t). These normalized scores are used as the coefficients in computing the weighted sum of query words, resulting in a query representation at time t:

    φ_q(q) = (1/|q|) Σ_{k=1}^{|q|} α_k^t c_k.
Later, we empirically compare these two query representations.
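A minimal NumPy sketch of the attention-based query representation follows; f_att is a hypothetical callable standing in for the learned scoring function:

import numpy as np

def attention_query_vector(context_vecs, h_prev, f_att):
    """phi_q(q) at time t: softmax-normalized attention scores over the
    context vectors c_k, then a (1/|q|)-scaled weighted sum."""
    C = np.asarray(context_vecs)                     # |q| x d
    scores = np.array([f_att(h_prev, c) for c in C])
    scores -= scores.max()                           # numerical stability
    alphas = np.exp(scores) / np.exp(scores).sum()
    return (alphas[:, None] * C).sum(axis=0) / len(C)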
6.2 Inference: Beam Search
Once the NeuAgent is trained, there are a number of approaches to using it for solving the proposed
task. The most naive approach is simply to let the agent make a greedy decision at each time step, i.e.,
following the outgoing edge with the highest probability, arg max_k log p(e_{i,k} | . . .). A better approach
is to exploit the fact that the agent is allowed to explore up to Nn outgoing edges per node. We use a
simple, forward-only beam search with the beam width capped at Nn . The beam search simply keeps
the Nn most likely traces, in terms of log p(ei,k | . . .), at each time step.
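A minimal sketch of this forward-only beam search is given below; step_probs is a hypothetical stand-in for the trained model's per-node action distribution:

import math

def beam_search(start, step_probs, beam_width, max_hops):
    """Keep the beam_width most likely traces by cumulative log-probability.
    step_probs(node, trace) -> list of (action, prob), where action is a
    neighboring node or the special token 'STOP'."""
    beams = [(0.0, [start])]                      # (log-prob, trace)
    finished = []
    for _ in range(max_hops):
        candidates = []
        for logp, trace in beams:
            for action, p in step_probs(trace[-1], trace):
                if action == "STOP":
                    finished.append((logp + math.log(p), trace))
                else:
                    candidates.append((logp + math.log(p), trace + [action]))
        beams = sorted(candidates, key=lambda t: t[0], reverse=True)[:beam_width]
    return sorted(finished, key=lambda t: t[0], reverse=True)[:beam_width]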
6.3 Training: Supervised Learning
In this paper, we investigate supervised learning, where we train the agent to follow an example trace p* = (sS, . . . , s*) included in the training set at each step (see Eq. (2)). In this case, the cost per training example is

    C_sup = -log p(∅ | p*, q) - Σ_{k=1}^{|p*|} log p(p*_k | p*_{<k}, q).    (5)
This per-example training cost is fully differentiable with respect to all the parameters of the neural
network, and we use stochastic gradient descent (SGD) algorithm to minimize this cost over the
whole training set, where the gradients can be computed by backpropagation [11]. This allows the
entire model to be trained in an end-to-end fashion, in which the query-to-target performance is
optimized directly.
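For illustration, the per-example cost in Eq. (5) can be computed as follows; action_dist is a hypothetical stand-in for the model's softmax output along the reference path:

import math

def supervised_cost(path, query, action_dist):
    """Negative log-likelihood of the reference trace (Eq. (5)):
    each intermediate step scores the next node on the path, and the
    final step scores the stop action at the target node."""
    cost = 0.0
    for k in range(1, len(path)):
        probs = action_dist(path[:k], query)   # dict: action -> prob
        cost -= math.log(probs[path[k]])
    cost -= math.log(action_dist(path, query)["STOP"])
    return cost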
7 Human Evaluation
One unique aspect of the proposed task is that it is very difficult for an average person who was not
trained specifically for finding information by navigating through an information network. There are
a number of reasons behind this difficulty. First, the person must be familiar with, via training, the
graph structure of the network, and this often requires many months, if not years, of training. Second,
the person must have in-depth knowledge of a broad range of topics in order to make a connection
via different concepts between the themes and topics of a query to a target node. Third, each trial
requires the person carefully to read the whole content of the nodes as she navigates, which is a
time-consuming and exhausting job.
We asked five volunteers to try up to 20 four-sentence-long queries4 randomly selected from the test
sets of WikiNav-{4, 8, 16}-4 datasets. They were given up to two hours, and they were allowed to
4. In a preliminary study with other volunteers, we found that, when the queries were shorter than 4, they were not able to solve enough trials for us to have meaningful statistics.
Table 4: The average reward by the NeuAgents and humans on the test sets of WikiNav-Nh-Nq.

                                          Nq = 1             Nq = 2             Nq = 4
      f_core   Layers×Units   φ_q     Nh=4    8     16    4     8     16    4     8     16
(a)   f_ff     1×512          BoW     21.5    4.7   1.2   40.0  9.2   1.9   45.1  12.9  2.9
(b)   f_rec    1×512          BoW     22.0    5.1   1.7   41.1  9.2   2.1   44.8  13.3  3.6
(c)   f_rec    8×2048         BoW     17.7    10.9  8.0   35.8  19.9  13.9  39.5  28.1  21.9
(d)   f_rec    8×2048         Att     22.9    15.8  12.5  41.7  24.5  17.8  46.8  34.2  28.2
(e)   Humans   -              -       -       -     -     -     -     -     14.5  8.8   5.0
choose up to the same maximum number of explored edges per node Nn as the NeuAgents (that
is, Nn = 4), and also were given the option to give up. The average reward was computed as the
fraction of correct trials over all the queries presented.
8 Results and Analysis

8.1 WikiNav
We report in Table 4 the performance of the NeuAgent-FF and NeuAgent-Rec models on the test
set of all nine WikiNav-{4, 8, 16}-{1, 2, 4} datasets. In addition to the proposed NeuAgents, we also
report the results of the human evaluation.
We clearly observe that the level of difficulty is indeed negatively correlated with the query length
Nq but is positively correlated with the maximum number of allowed hops Nh . The latter may be
considered trivial, as the size of the search space grows exponentially with respect to Nh , but the
former is not. The former negative correlation confirms that it is indeed easier to solve the task with
more information in a query. We conjecture that the agent requires more in-depth understanding of
natural languages and planning to overcome the lack of information in the query to find a path toward
a target node.
The NeuAgent-FF and NeuAgent-Rec share similar performance when the maximum number of allowed hops is small (Nh = 4), but NeuAgent-Rec ((a) vs. (b)) performs consistently better for
higher Nh , which indicates that having access to history helps in long-term planning tasks. We also
observe that the larger and deeper NeuAgent-Rec ((b) vs (c)) significantly outperforms the smaller
one, when a target node is further away from the starting node sS .
The best performing model in (d) used the attention-based query representation, especially as the difficulty of the task increased (smaller Nq and larger Nh), which supports our claim that the proposed task
of goal-driven web navigation is a challenging benchmark for evaluating future progress. In Fig. 2,
we present an example of how the attention weights over the query words dynamically evolve as the
model navigates toward a target node.
The human participants generally performed worse than the NeuAgents. We attribute this to a number
of reasons. First, the NeuAgents are trained specifically on the target domain (Wikipedia), while the
human participants have not been. Second, we observed that the volunteers were rapidly exhausted
from reading multiple articles in sequence. In other words, we find the proposed benchmark, WebNav,
as a good benchmark for machine intelligence but not for comparing it against human intelligence.
[Figure 2: Visualization of the attention weights over a test query. The horizontal axis corresponds to the query words, and the vertical axis to the article titles visited, from "1918 Kentucky Derby" through the "Category: Kentucky Derby" hierarchy up to "Category: Main Topic Classifications".]
8.2 WikiNav-Jeopardy
Settings We test the best model from the previous experiment (NeuAgent-Rec with 8 layers of 2048 LSTM units and the attention-based query representation) on the WikiNav-Jeopardy. We evaluate two training strategies. The first strategy is straightforward supervised learning, in which we train a NeuAgent-Rec on WikiNav-Jeopardy from scratch. In the other strategy, we pretrain a NeuAgent-Rec first on the WikiNav-16-4 and finetune it on WikiNav-Jeopardy.
We compare the proposed NeuAgent against three search strategies. The first one, SimpleSearch, is
a simple inverted index based strategy. SimpleSearch scores each Wikipedia article by the TF-IDF
weighted sum of words that co-occur in the articles and a query and returns top-K articles. Second,
we use Lucene, a popular open source information retrieval library, in its default configuration on
the whole Wikipedia dump. Lastly, we use Google Search API5 , while restricting the domain to
wikipedia.org.
Each system is evaluated by document recall at K (Recall@K). We vary K to be 1, 4 or 40. In
the case of the NeuAgent, we run beam search with width set to K and returns all the K final
nodes to compute the document recall. Since there is only one correct document/answer per query,
Precision@K = Recall@K / K and therefore we do not show this measure in the results.
Table 5: Recall on WikiNav-Jeopardy. (†) Pretrained on WikiNav-16-4.

    Model          Pre†   Recall@1   Recall@4   Recall@40
    NeuAgent              13.9       20.2       33.2
    NeuAgent       ✓      18.9       23.6       38.3
    SimpleSearch          5.4        12.6       28.4
    Lucene                6.3        14.7       36.3
    Google                14.0       22.1       25.9
Result and Analysis In Table 5, we report the results on WikiNav-Jeopardy. The proposed
NeuAgent clearly outperforms all the three search-based strategies, when it was pretrained on the
WikiNav-16-4. The superiority of the pretrained NeuAgent is more apparent when the number of
candidate documents is constrained to be small, implying that the NeuAgent is able to accurately
rank a correct target article. Although the NeuAgent performs comparably to the other search-based
strategy even without pretraining, the benefit of pretraining on the much larger WikiNav is clear.
We emphasize that these search-based strategies have access to all the nodes for each input query.
The NeuAgent, on the other hand, only observes the nodes as it visits during navigation. This success
clearly demonstrates a potential in using the proposed NeuAgent pretrained with a dataset compiled
by the proposed WebNav for the task of focused crawling [3, 1], which is an interesting problem on
its own, as much of the content available on the Internet is either hidden or dynamically generated [1].
9 Conclusion
In this work, we describe a large-scale goal-driven web navigation task and argue that it serves as a
useful test bed for evaluating the capabilities of artificial agents on natural language understanding and
planning. We release a software tool, called WebNav, that compiles a given website into a goal-driven
web navigation task. As an example, we construct WikiNav from Wikipedia using WebNav. We
extend WikiNav with Jeopardy! questions, thus creating WikiNav-Jeopardy. We evaluate various
neural net based agents on WikiNav and WikiNav-Jeopardy. Our results show that more sophisticated
agents have better performance, thus supporting our claim that this task is well suited to evaluate
future progress in natural language understanding and planning. Furthermore, we show that our
agent pretrained on WikiNav outperforms two strong inverted-index based search engines on the
WikiNav-Jeopardy. These empirical results support our claim on the usefulness of the proposed task
and agents in challenging applications such as focused crawling and question-answering.
5. https://cse.google.com/cse
References
[1] Manuel Álvarez, Juan Raposo, Alberto Pan, Fidel Cacheda, Fernando Bellas, and Víctor Carneiro. Deepbot: a focused crawler for accessing hidden web content. In Proceedings of the 3rd international workshop on Data engineering issues in E-commerce and services: In conjunction with ACM Conference on Electronic Commerce (EC'07), pages 18-25. ACM, 2007.
[2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR 2015, 2014.
[3] Soumen Chakrabarti, Martin Van den Berg, and Byron Dom. Focused crawling: a new approach to topic-specific web resource discovery. Computer Networks, 31(11):1623-1640, 1999.
[4] Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, and Mari Ostendorf. Deep reinforcement learning with an unbounded action space. arXiv preprint arXiv:1511.04636, 2015.
[5] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
[6] Jan Koutník, Jürgen Schmidhuber, and Faustino Gomez. Evolving deep unsupervised convolutional networks for vision-based reinforcement learning. In Proceedings of the 2014 Conference on Genetic and Evolutionary Computation, pages 541-548. ACM, 2014.
[7] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[8] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, 2015.
[9] Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. Language understanding for text-based games using deep reinforcement learning. arXiv preprint arXiv:1506.08941, 2015.
[10] Sebastian Risi and Julian Togelius. Neuroevolution in games: State of the art and open challenges. arXiv preprint arXiv:1410.7326, 2014.
[11] David Rumelhart, Geoffrey Hinton, and Ronald Williams. Learning representations by back-propagating errors. Nature, 323:533-536, 1986.
[12] Robert West and Jure Leskovec. Automatic versus human navigation in information networks. In ICWSM, 2012.
[13] Robert West and Jure Leskovec. Human wayfinding in information networks. In 21st International World Wide Web Conference, pages 619-628. ACM, 2012.
[14] Robert West, Joelle Pineau, and Doina Precup. Wikispeedia: An online game for inferring semantic distances between concepts. In IJCAI, pages 1598-1603, 2009.
Stochastic Online AUC Maximization
Yiming Ying†, Longyin Wen‡, Siwei Lyu‡
† Department of Mathematics and Statistics, SUNY at Albany, Albany, NY, 12222, USA
‡ Department of Computer Science, SUNY at Albany, Albany, NY, 12222, USA
Abstract
Area under ROC (AUC) is a metric which is widely used for measuring the
classification performance for imbalanced data. It is of theoretical and practical
interest to develop online learning algorithms that maximizes AUC for large-scale
data. A specific challenge in developing online AUC maximization algorithm is that
the learning objective function is usually defined over a pair of training examples
of opposite classes, and existing methods achieves on-line processing with higher
space and time complexity. In this work, we propose a new stochastic online
algorithm for AUC maximization. In particular, we show that AUC optimization
can be equivalently formulated as a convex-concave saddle point problem. From
this saddle representation, a stochastic online algorithm (SOLAM) is proposed
which has time and space complexity of one datum. We establish theoretical
convergence of SOLAM with high probability and demonstrate its effectiveness
on standard benchmark datasets.
1 Introduction
Area Under the ROC Curve (AUC) [8] is a widely used metric for measuring classification performance. Unlike misclassification error that reflects a classifier's ability to classify a single randomly
chosen example, AUC concerns the overall performance of a functional family of classifiers and
quantifies their ability of correctly ranking any positive instance with regards to a randomly chosen
negative instance. Most algorithms optimizing AUC for classification [5, 9, 12, 17] are for batch
learning, where we assume all training data are available.
On the other hand, online learning algorithms [1, 2, 3, 16, 19, 22], have been proven to be very
efficient to deal with large-scale datasets. However, most studies of online learning focus on the
misclassification error or its surrogate loss, in which the objective function depends on a sum of
losses over individual examples. It is thus desirable to develop online learning algorithms to optimize
the AUC metric. The main challenge for an online AUC algorithm is that the objective function of
AUC maximization depends on a sum of pairwise losses between instances from different classes
which is quadratic in the number of training examples. As such, directly deploying the existing online
algorithms will require to store all training data received, making it not feasible for large-scale data
analysis.
Several recent works [6, 11, 18, 20, 21] have studied a type of online AUC maximization method that
updates the classifier upon the arrival of each new training example. However, algorithms of this type need to access all previous examples at iteration t, and have O(td) space and per-iteration complexity,
where d is the dimension of the data. The scaling of per-iteration space and time complexity is an
undesirable property for online applications that have to use fixed resources. This problem is partially
alleviated by the use of buffers of a fixed size s in [11, 21], which reduces the per-iteration space and
time complexity to O(sd). Although this change makes the per-iteration space and time complexity
independent of the number of iterations, in practice, to reduce variance in learning performance, the
size of the buffer needs to be set sufficiently large. The work of [6] proposes an alternative method
that requires to update and store the first-order (mean) and second-order (covariance) statistics of the
training data, and the space and per-iteration complexity becomes O(d²). Although this eliminates
the needs to access all previous training examples, the per-iteration is now quadratic in data dimension,
which makes this method inefficient for high-dimensional data. To this end, the authors of [6] further
proposed to approximate the covariance matrices with low-rank random Gaussian matrices. However,
the approximation method is not a general solution to the original problem and its convergence was
only established under the assumption that the effective numerical rank for the set of covariance
matrices is small (i.e., they can be well approximated by low-rank matrices).
In this work, we present a new stochastic online AUC maximization (SOLAM) method associated with the ℓ2 loss function. In contrast to existing online AUC maximization methods, e.g. [6, 21],
SOLAM does not need to store previously received training examples or the covariance matrices,
while, at the same time, enjoys a comparable convergence rate, up to a logarithmic term, as in
[6, 21]. To our best knowledge, this is the first online learning algorithm for AUC optimization with
linear space and per-iteration time complexities of O(d), which are the same as the online gradient
descent algorithm [1, 2, 16, 22] for classification. The key step of SOLAM is to reformulate the
original problem as a stochastic saddle point problem [14]. This connection is the foundation of the
SOLAM algorithm and its convergence analysis. When evaluating on several standard benchmark
datasets, SOLAM achieves performances that are on par with state-of-the-art online AUC optimization
methods with significant improvement in running time.
The main contribution of our work can be summarized as follows:
- We provide a new formulation of the AUC optimization problem as a stochastic Saddle Point Problem (SPP). This formulation facilitates the development of online algorithms for AUC optimization.
- Our algorithm SOLAM achieves a per-iteration space and time complexity that is linear in data dimensionality.
- Our theoretical analysis provides a guarantee of convergence, with high probability, of the proposed algorithm.
2 Method
Let the input space X ⊆ R^d and the output space Y = {-1, +1}. We assume the training data, z = {(x_i, y_i), i = 1, . . . , n}, is an i.i.d. sample drawn from an unknown distribution ρ on Z = X × Y. The ROC curve is the plot of the true positive rate versus the false positive rate. The area under the ROC curve (AUC) for any scoring function f : X → R is equivalent to the probability that a positive sample ranks higher than a randomly chosen negative sample (e.g. [4, 8]). It is defined as

    AUC(f) = Pr(f(x) ≥ f(x') | y = +1, y' = -1),    (1)

where (x, y) and (x', y') are independently drawn from ρ. The target of AUC maximization is to find the optimal decision function f:

    argmax_f AUC(f) = argmin_f Pr(f(x) < f(x') | y = 1, y' = -1)
                    = argmin_f E[ I_{[f(x')-f(x)>0]} | y = 1, y' = -1 ],    (2)

where I(·) is the indicator function that takes value 1 if the argument is true and 0 otherwise. Let p = Pr(y = 1). For any random variable ξ(z), recall that its conditional expectation is defined by E[ξ(z) | y = 1] = (1/p) ∫_Z ξ(z) I_{[y=1]} dρ(z). Since I(·) is not continuous, it is often replaced by its convex surrogates. Two common choices are the ℓ2 loss (1 - (f(x) - f(x')))² or the hinge loss (1 - (f(x) - f(x')))_+. In this work, we use the ℓ2 loss, as it has been shown to be statistically consistent with AUC while the hinge loss is not [6, 7]. We also restrict our interests to the family of linear functions, i.e., f(x) = w^T x. In summary, the AUC maximization can be formulated by

    argmin_{‖w‖≤R} E[(1 - w^T(x - x'))² | y = 1, y' = -1]
        = argmin_{‖w‖≤R} (1/(p(1-p))) ∫∫_{Z×Z} (1 - w^T(x - x'))² I_{[y=1]} I_{[y'=-1]} dρ(z) dρ(z').    (3)
When ρ is a uniform distribution over the training data z, we obtain the empirical minimization (ERM) problem for AUC optimization studied in [6, 21]:1

    argmin_{‖w‖≤R} (1/(n_+ n_-)) Σ_{i=1}^n Σ_{j=1}^n (1 - w^T(x_i - x_j))² I_{[y_i=1 ∧ y_j=-1]},    (4)

where n_+ and n_- denote the numbers of instances in the positive and negative classes, respectively.
2.1 Equivalent Representation as a (Stochastic) Saddle Point Problem (SPP)
The main result of this work is the equivalence of problem (3) to a stochastic Saddle Point Problem (SPP) (e.g., [14]). A stochastic SPP is generally of the form

    min_{u∈Ω₁} max_{α∈Ω₂} { f(u, α) := E[F(u, α, ξ)] },    (5)

where Ω₁ ⊆ R^d and Ω₂ ⊆ R^m are nonempty closed convex sets, ξ is a random vector with non-empty measurable set Ξ ⊆ R^p, and F : Ω₁ × Ω₂ × Ξ → R. Here E[F(u, α, ξ)] = ∫_Ξ F(u, α, ξ) d Pr(ξ), and the function f(u, α) is convex in u ∈ Ω₁ and concave in α ∈ Ω₂. In general, u and α are referred to as the primal variable and the dual variable, respectively.
The following theorem shows that (3) is equivalent to a stochastic SPP (5). First, define F : R^d × R^3 × Z → R, for any w ∈ R^d, a, b, α ∈ R and z = (x, y) ∈ Z, by

    F(w, a, b, α; z) = (1-p)(w^T x - a)² I_{[y=1]} + p(w^T x - b)² I_{[y=-1]}
                       + 2(1+α)(p w^T x I_{[y=-1]} - (1-p) w^T x I_{[y=1]}) - p(1-p)α².    (6)

Theorem 1. The AUC optimization (3) is equivalent to

    min_{‖w‖≤R, (a,b)∈R²} max_{α∈R} { f(w, a, b, α) := ∫_Z F(w, a, b, α; z) dρ(z) }.    (7)
Proof. It suffices to prove the claim that the objective function of (3) equals 1 + (1/(p(1-p))) min_{(a,b)∈R²} max_{α∈R} ∫_Z F(w, a, b, α; z) dρ(z).

To this end, note that z = (x, y) and z' = (x', y') are samples independently drawn from ρ. Therefore, the objective function of (3) can be rewritten as

    E[(1 - w^T(x - x'))² | y = 1, y' = -1]
      = 1 + E[(w^T x)² | y = 1] + E[(w^T x')² | y' = -1] - 2E[w^T x | y = 1] + 2E[w^T x' | y' = -1]
          - 2 E[w^T x | y = 1] E[w^T x' | y' = -1]
      = 1 + (E[(w^T x)² | y = 1] - (E[w^T x | y = 1])²) + (E[(w^T x')² | y' = -1] - (E[w^T x' | y' = -1])²)
          - 2E[w^T x | y = 1] + 2E[w^T x' | y' = -1] + (E[w^T x | y = 1] - E[w^T x' | y' = -1])².    (8)

Note that E[(w^T x)² | y = 1] - (E[w^T x | y = 1])² = (1/p) ∫_Z (w^T x)² I_{[y=1]} dρ(z) - ((1/p) ∫_Z w^T x I_{[y=1]} dρ(z))² = min_{a∈R} (1/p) ∫_Z (w^T x - a)² I_{[y=1]} dρ(z) = min_{a∈R} E[(w^T x - a)² | y = 1], where the minimization is achieved by

    a = E[w^T x | y = 1].    (9)

Likewise, min_b E[(w^T x' - b)² | y' = -1] = E[(w^T x')² | y' = -1] - (E[w^T x' | y' = -1])², where the minimization is obtained by letting

    b = E[w^T x' | y' = -1].    (10)

Moreover, observe that (E[w^T x | y = 1] - E[w^T x' | y' = -1])² = max_α { 2α(E[w^T x' | y' = -1] - E[w^T x | y = 1]) - α² }, where the maximization is achieved with

    α = E[w^T x' | y' = -1] - E[w^T x | y = 1].    (11)

Putting all these equalities into (8) implies that

    E[(1 - w^T(x - x'))² | y = 1, y' = -1] = 1 + min_{(a,b)∈R²} max_{α∈R} ( ∫_Z F(w, a, b, α; z) dρ(z) ) / (p(1-p)).

This proves the claim and hence the theorem.

1. The work [6, 21] studied the regularized ERM problem, i.e., min_{w∈R^d} (1/(n_+ n_-)) Σ_{i=1}^n Σ_{j=1}^n (1 - w^T(x_i - x_j))² I_{[y_i=1]} I_{[y_j=-1]} + (λ/2)‖w‖², which is equivalent to (3) with Ω being a bounded ball in R^d.
In addition, we can prove the following result.
Proposition 1. The function f(w, a, b, α) is convex in (w, a, b) ∈ R^{d+2} and concave in α ∈ R.
The proof of this proposition can be found in the Supplementary Materials.
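The equivalence in Theorem 1 admits a quick numerical sanity check: with a, b and α fixed at their optima (9)-(11) under an empirical distribution, the pairwise least-squares objective of (3) and the saddle formulation (7) must agree. A minimal NumPy sketch with synthetic data and an arbitrary w, purely as an illustration of the identity:

import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 5
X = rng.normal(size=(n, d))
y = rng.choice([-1, 1], size=n)
w = rng.normal(size=d)

s = X @ w
pos, neg = s[y == 1], s[y == -1]
p = np.mean(y == 1)

# Left-hand side of (3): average pairwise squared loss.
lhs = np.mean((1.0 - (pos[:, None] - neg[None, :])) ** 2)

# Right-hand side: 1 + E[F]/(p(1-p)) with a, b, alpha at their optima (9)-(11).
a, b = pos.mean(), neg.mean()
alpha = b - a
F = ((1 - p) * (s - a) ** 2 * (y == 1)
     + p * (s - b) ** 2 * (y == -1)
     + 2 * (1 + alpha) * (p * s * (y == -1) - (1 - p) * s * (y == 1))
     - p * (1 - p) * alpha ** 2)
rhs = 1.0 + F.mean() / (p * (1 - p))

assert np.isclose(lhs, rhs), (lhs, rhs)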
2.2 Stochastic Online Algorithm for AUC Maximization
The optimal solution to an SPP problem is called a saddle point. Stochastic first-order methods are widely used to find such an optimal saddle point. The main idea of such algorithms (e.g., [13, 14]) is to use an unbiased stochastic estimator of the true gradient to perform, at each iteration, gradient descent in the primal variable and gradient ascent in the dual variable.

Using the stochastic SPP formulation (7) for AUC optimization, we can develop stochastic online learning algorithms which only need to pass the data once. For notational simplicity, let the vector v = (w^T, a, b)^T ∈ R^{d+2}, and for any w ∈ R^d, a, b, α ∈ R and z = (x, y) ∈ Z, denote f(w, a, b, α) by f(v, α) and F(w, a, b, α, z) by F(v, α, z). The gradient of the objective function in the stochastic SPP problem (7) is given by a (d+3)-dimensional column vector g(v, α) = (∂_v f(v, α), -∂_α f(v, α)), and its unbiased stochastic estimator is given, for any z ∈ Z, by G(v, α, z) = (∂_v F(v, α, z), -∂_α F(v, α, z)). One could directly deploy the stochastic first-order method in [14] to the stochastic SPP formulation (7) for AUC optimization. However, from the definition of F in (6), this would require knowing the probability p = Pr(y = 1) a priori. To overcome this problem, for any v^T = (w^T, a, b) ∈ R^{d+2}, α ∈ R and z ∈ Z, let

    F̂_t(v, α, z) = (1 - p̂_t)(w^T x - a)² I_{[y=1]} + p̂_t (w^T x - b)² I_{[y=-1]}
                   + 2(1+α)(p̂_t w^T x I_{[y=-1]} - (1 - p̂_t) w^T x I_{[y=1]}) - p̂_t (1 - p̂_t)α²,    (12)

where p̂_t = (Σ_{i=1}^t I_{[y_i=1]})/t at iteration t. We propose, at iteration t, to use the stochastic estimator

    Ĝ_t(v, α, z) = (∂_v F̂_t(v, α, z), -∂_α F̂_t(v, α, z))    (13)

to replace the unbiased, but practically inaccessible, stochastic estimator G(v, α, z). Assume κ = sup_{x∈X} ‖x‖ < ∞, and recall that ‖w‖ ≤ R. For any optimal solution (w*, a*, b*) of the stochastic SPP (7) for AUC optimization, by (9), (10) and (11) we know that |a*| = (1/p)|∫_Z ⟨w*, x⟩ I_{[y=1]} dρ(z)| ≤ Rκ, |b*| = (1/(1-p))|∫_Z ⟨w*, x'⟩ I_{[y'=-1]} dρ(z')| ≤ Rκ, and |α*| = |(1/(1-p)) ∫_Z ⟨w*, x'⟩ I_{[y'=-1]} dρ(z') - (1/p) ∫_Z ⟨w*, x⟩ I_{[y=1]} dρ(z)| ≤ 2Rκ. Therefore, we can restrict (w, a, b) and α to the following bounded domains:

    Ω₁ = { (w, a, b) ∈ R^{d+2} : ‖w‖ ≤ R, |a| ≤ Rκ, |b| ≤ Rκ },   Ω₂ = { α ∈ R : |α| ≤ 2Rκ }.    (14)

In this case, the projection steps (e.g., steps 4 and 5 in Table 1) can be easily computed. The pseudocode of the online AUC optimization algorithm is described in Table 1, to which we refer as SOLAM.
3 Analysis
We now present the convergence results of the proposed algorithm for AUC optimization. Let u = (v, α) = (w, a, b, α). The quality of an approximate solution (v̄_t, ᾱ_t) to the SPP problem (5) at iteration t is measured by the duality gap:

    ε_f(v̄_t, ᾱ_t) = max_{α∈Ω₂} f(v̄_t, α) - min_{v∈Ω₁} f(v, ᾱ_t).    (15)
Stochastic Online AUC Maximization (SOLAM)
1. Choose step sizes {γ_t > 0 : t ∈ N}.
2. Initialize t = 1, v₁ ∈ Ω₁, α₁ ∈ Ω₂ and let p̂₀ = 0, v̄₀ = 0, ᾱ₀ = 0 and γ̄₀ = 0.
3. Receive a sample z_t = (x_t, y_t) and compute p̂_t = ((t-1) p̂_{t-1} + I_{[y_t=1]})/t.
4. Update v_{t+1} = P_{Ω₁}(v_t - γ_t ∂_v F̂_t(v_t, α_t, z_t)).
5. Update α_{t+1} = P_{Ω₂}(α_t + γ_t ∂_α F̂_t(v_t, α_t, z_t)).
6. Update γ̄_t = γ̄_{t-1} + γ_t.
7. Update v̄_t = (γ̄_{t-1} v̄_{t-1} + γ_t v_t)/γ̄_t, and ᾱ_t = (γ̄_{t-1} ᾱ_{t-1} + γ_t α_t)/γ̄_t.
8. Set t ← t + 1.

Table 1: Pseudo code of the proposed algorithm. In steps 4 and 5, P_{Ω₁}(·) and P_{Ω₂}(·) denote the projection to the convex sets Ω₁ and Ω₂, respectively.
Theorem 2. Assume that samples {(x₁, y₁), (x₂, y₂), . . . , (x_T, y_T)} are i.i.d. drawn from a distribution ρ over X × Y, let Ω₁ and Ω₂ be given by (14) and the step sizes given by {γ_t > 0 : t ∈ N}. For the sequence {(v̄_t, ᾱ_t) : t ∈ [1, T]} generated by SOLAM (Table 1), and any 0 < δ < 1, with probability 1 - δ the following holds:

    ε_f(v̄_T, ᾱ_T) ≤ C̃ max(R², 1) ln(4T/δ) ( Σ_{j=1}^T γ_j )⁻¹ [ 1 + Σ_{j=1}^T γ_j² + ( Σ_{j=1}^T γ_j² )^{1/2} + Σ_{j=1}^T γ_j/√j ],

where C̃ is an absolute constant independent of R and T (see its explicit expression in the proof).
Denote by f* the optimum of (7) which, by Theorem 1, is identical to the optimal value of the AUC optimization problem (3). From Theorem 2, the following convergence rate is straightforward.

Corollary 1. Under the same assumptions as in Theorem 2, and with step sizes γ_j = ζ j^{−1/2} for j ∈ N with constant ζ > 0, with probability 1 − δ it holds that

|f(v̄_T, ᾱ_T) − f*| ≤ Δf(ū_T) = O(ln T √(ln(4T/δ)) / √T).

While the above convergence rate is obtained by choosing decaying step sizes, one can establish a similar result when a constant step size is appropriately chosen.
The proof of Theorem 2 requires several lemmas. The first is a standard result from convex online learning [16, 22]. We include its proof in the Supplementary Materials for completeness.

Lemma 1. For any T ∈ N, let {ξ_j : j ∈ [1, T]} be a sequence of vectors in R^m, and let û_1 ∈ Ω where Ω is a convex set. For any t ∈ [1, T] define û_{t+1} = P_Ω(û_t − ξ_t). Then, for any u ∈ Ω, there holds

Σ_{t=1}^T (û_t − u)^T ξ_t ≤ ‖û_1 − u‖^2 / 2 + (1/2) Σ_{t=1}^T ‖ξ_t‖^2.
The second lemma is the Pinelis–Bernstein inequality for a martingale difference sequence in a Hilbert space, which is from [15, Theorem 3.4].

Lemma 2. Let {S_k : k ∈ N} be a martingale difference sequence in a Hilbert space. Suppose that almost surely ‖S_k‖ ≤ B and Σ_{k=1}^T E[‖S_k‖^2 | S_1, ..., S_{k−1}] ≤ σ_T^2. Then, for any 0 < δ < 1, there holds, with probability at least 1 − δ,

sup_{1≤j≤T} ‖Σ_{k=1}^j S_k‖ ≤ 2 (B/3 + σ_T) log(2/δ).
The third lemma indicates that the approximate stochastic estimator Ĝ_t(u, z) defined by (13) is not far from the unbiased one G(u, z). Its proof is given in the Supplementary Materials.

Lemma 3. Let Ω1 and Ω2 be given by (14) and denote Ω = Ω1 × Ω2. For any t ∈ N, with probability 1 − δ, there holds

sup_{u∈Ω, z∈Z} ‖Ĝ_t(u, z) − G(u, z)‖ ≤ 2κ(4κR + 11R + 1) √(ln(2/δ)/t).
Proof of Theorem 2. By the convexity of f(·, α) and the concavity of f(v, ·), for any u = (v, α) ∈ Ω1 × Ω2, we get

f(v_t, α) − f(v, α_t) = (f(v_t, α_t) − f(v, α_t)) + (f(v_t, α) − f(v_t, α_t)) ≤ (v_t − v)^T ∂_v f(v_t, α_t) − (α_t − α) ∂_α f(v_t, α_t) = (u_t − u)^T g(u_t).

Hence, there holds

max_{α∈Ω2} f(v̄_T, α) − min_{v∈Ω1} f(v, ᾱ_T) ≤ (Σ_{t=1}^T γ_t)^{−1} ( max_{α∈Ω2} Σ_{t=1}^T γ_t f(v_t, α) − min_{v∈Ω1} Σ_{t=1}^T γ_t f(v, α_t) )
  ≤ (Σ_{t=1}^T γ_t)^{−1} max_{u∈Ω1×Ω2} Σ_{t=1}^T γ_t (u_t − u)^T g(u_t).   (16)
Recall that Ω = Ω1 × Ω2. Steps 4 and 5 of Algorithm SOLAM can be rewritten as u_{t+1} = (v_{t+1}, α_{t+1}) = P_Ω(u_t − γ_t Ĝ_t(u_t, z_t)). Applying Lemma 1 with ξ_t = γ_t Ĝ_t(u_t, z_t) gives, for any u ∈ Ω,

Σ_{t=1}^T γ_t (u_t − u)^T Ĝ_t(u_t, z_t) ≤ ‖u_1 − u‖^2 / 2 + (1/2) Σ_{t=1}^T γ_t^2 ‖Ĝ_t(u_t, z_t)‖^2,

which yields that

sup_{u∈Ω} Σ_{t=1}^T γ_t (u_t − u)^T g(u_t)
  ≤ sup_{u∈Ω} ‖u_1 − u‖^2 / 2 + (1/2) Σ_{t=1}^T γ_t^2 ‖Ĝ_t(u_t, z_t)‖^2 + sup_{u∈Ω} Σ_{t=1}^T γ_t (u_t − u)^T (g(u_t) − Ĝ_t(u_t, z_t))
  ≤ sup_{u∈Ω} ‖u_1 − u‖^2 / 2 + (1/2) Σ_{t=1}^T γ_t^2 ‖Ĝ_t(u_t, z_t)‖^2
    + sup_{u∈Ω} Σ_{t=1}^T γ_t (u_t − u)^T (g(u_t) − G(u_t, z_t)) + sup_{u∈Ω} Σ_{t=1}^T γ_t (u_t − u)^T (G(u_t, z_t) − Ĝ_t(u_t, z_t)).   (17)
Now we estimate the terms on the right-hand side of (17) as follows. For the first term, we have

(1/2) sup_{u∈Ω} ‖u_1 − u‖^2 ≤ 2 sup_{v∈Ω1, α∈Ω2} (‖v‖^2 + |α|^2) ≤ 2 sup_{u∈Ω} ‖u‖^2 ≤ 2R^2 (1 + 6κ^2).   (18)
For the second term on the right-hand side of (17), observe that sup_{x∈X} ‖x‖ ≤ κ and u_t = (w_t, a_t, b_t, α_t) ∈ Ω = {(w, a, b, α) : ‖w‖ ≤ R, |a| ≤ κR, |b| ≤ κR, |α| ≤ 2κR}. Combining this with the definition of Ĝ_t(u_t, z_t) given by (13), one can easily get

‖Ĝ_t(u_t, z_t)‖ ≤ ‖∂_w F̂_t(u_t, z_t)‖ + |∂_a F̂_t(u_t, z_t)| + |∂_b F̂_t(u_t, z_t)| + |∂_α F̂_t(u_t, z_t)| ≤ 2κ(2R + 1 + 2Rκ).

Hence, there holds

(1/2) Σ_{t=1}^T γ_t^2 ‖Ĝ_t(u_t, z_t)‖^2 ≤ 2κ^2 (2R + 1 + 2Rκ)^2 Σ_{t=1}^T γ_t^2.   (19)
The third term on the right-hand side of (17) can be bounded by

sup_{u∈Ω} Σ_{t=1}^T γ_t (u_t − u)^T (g(u_t) − G(u_t, z_t)) ≤ sup_{u∈Ω} [Σ_{t=1}^T γ_t (û_t − u)^T (g(u_t) − G(u_t, z_t))] + Σ_{t=1}^T γ_t (u_t − û_t)^T (g(u_t) − G(u_t, z_t)),

where û_1 = 0 ∈ Ω and û_{t+1} = P_Ω(û_t − γ_t (g(u_t) − G(u_t, z_t))) for any t ∈ [1, T]. Applying Lemma 1 with ξ_t = γ_t (g(u_t) − G(u_t, z_t)) yields that

sup_{u∈Ω} Σ_{t=1}^T γ_t (û_t − u)^T (g(u_t) − G(u_t, z_t)) ≤ sup_{u∈Ω} ‖u‖^2 / 2 + (1/2) Σ_{t=1}^T γ_t^2 ‖g(u_t) − G(u_t, z_t)‖^2
  ≤ (1/2) R^2 (1 + 6κ^2) + 4κ^2 (2R + 1 + 2Rκ)^2 Σ_{t=1}^T γ_t^2,   (20)
where we used that ‖G(u_t, z_t)‖ and ‖g(u_t)‖ are uniformly bounded by 2κ(2R + 1 + 2Rκ). Notice that u_t and û_t depend only on {z_1, ..., z_{t−1}}, so {S_t = γ_t (u_t − û_t)^T (g(u_t) − G(u_t, z_t)) : t = 1, ..., T} is a martingale difference sequence. Observe that

E[‖S_t‖^2 | z_1, ..., z_{t−1}] = γ_t^2 ∫_Z ((u_t − û_t)^T (g(u_t) − G(u_t, z)))^2 dρ(z) ≤ γ_t^2 sup_{u∈Ω, z∈Z} [‖u_t − û_t‖^2 ‖g(u_t) − G(u_t, z_t)‖^2] ≤ γ_t^2 [2κR √(1 + 6κ^2) (2R + 1 + 2Rκ)]^2.

Applying Lemma 2 with σ_T^2 = [2κR √(1 + 6κ^2) (2R + 1 + 2Rκ)]^2 Σ_{t=1}^T γ_t^2 and B = sup_{t∈[1,T]} γ_t |(u_t − û_t)^T (g(u_t) − G(u_t, z_t))| ≤ σ_T implies that, with probability 1 − δ/2, there holds

Σ_{t=1}^T γ_t (u_t − û_t)^T (g(u_t) − G(u_t, z_t)) ≤ (16κR √(1 + 6κ^2) (2R + 1 + 2Rκ) / 3) √(Σ_{t=1}^T γ_t^2).   (21)
Combining (20) with (21) implies, with probability 1 − δ/2,

sup_{u∈Ω} Σ_{t=1}^T γ_t (u_t − u)^T (g(u_t) − G(u_t, z_t)) ≤ R^2 (1 + 6κ^2) / 2 + 4κ^2 (2R + 1 + 2Rκ)^2 Σ_{t=1}^T γ_t^2 + (16κR √(1 + 6κ^2) (2R + 1 + 2Rκ) / 3) (Σ_{t=1}^T γ_t^2)^{1/2}.   (22)

dataset     #inst    #feat      dataset     #inst    #feat
diabetes      768        8      fourclass     862        2
german      1,000       24      splice      3,175       60
usps        9,298      256      a9a        32,561      123
mnist      60,000      780      acoustic   78,823       50
ijcnn1    141,691       22      covtype   581,012       54
sector      9,619   55,197      news20     15,935   62,061
Table 2: Basic information about the benchmark datasets used in the experiments.
By Lemma 3, for any t ∈ [1, T] there holds, with probability 1 − δ/(2T),

sup_{u∈Ω, z∈Z} ‖Ĝ_t(u, z) − G(u, z)‖ ≤ 2κ(2R(κ + 1) + 1) √(ln(4T/δ)/t).

Hence, the fourth term on the right-hand side of (17) can be estimated as follows: with probability 1 − δ/2, there holds

sup_{u∈Ω} Σ_{t=1}^T γ_t (u_t − u)^T (G(u_t, z_t) − Ĝ_t(u_t, z_t)) ≤ 2 sup_{u∈Ω} ‖u‖ Σ_{t=1}^T γ_t sup_{u∈Ω, z∈Z} ‖Ĝ_t(u, z) − G(u, z)‖
  ≤ 8Rκ(4Rκ + 11R + 1) √(6κ^2 + 1) √(ln(4T/δ)) Σ_{t=1}^T γ_t/√t.   (23)
Putting the estimations (18), (19), (22), (23) and (17) back into (16) implies that

Δf(ū_T) ≤ C_κ max(R^2, 1) ln(4T/δ) (Σ_{t=1}^T γ_t)^{−1} [1 + Σ_{t=1}^T γ_t^2 + (Σ_{t=1}^T γ_t^2)^{1/2} + Σ_{t=1}^T γ_t/√t],

where C_κ = (5/2)(1 + 6κ^2) + 6κ^2 (κ + 3)^2 + (112/3) κ √(6κ^2 + 1) (2κ + 3).
4 Experiments
In this section, we report experimental evaluations of the SOLAM algorithm and compare its performance with existing state-of-the-art learning algorithms for AUC optimization. SOLAM was implemented in MATLAB, and MATLAB code for the compared methods was obtained from the authors of the corresponding papers. In the training phase, we use five-fold cross validation to determine the initial learning rate ζ ∈ [1 : 9 : 100] and the bound on w, R ∈ 10^{[−1:1:5]}, by a grid search. Following the evaluation protocol of [6], the performance of SOLAM was evaluated by averaging results from five runs of five-fold cross validation.
Our experiments were performed on 12 datasets that have been used in previous studies. For multi-class datasets, e.g., news20 and sector, we transform them into binary classification problems by randomly partitioning the data into two groups, where each group includes the same number of classes. Information about these datasets is summarized in Table 2.
On these datasets, we evaluate and compare SOLAM with four online and two offline learning algorithms for AUC maximization: one-pass AUC maximization (OPAUC) [6], which uses the ℓ2 loss surrogate of the AUC objective function; online AUC maximization [21], which uses the hinge loss surrogate of the AUC objective function, with two variants, one with sequential update (OAMseq) and the other using gradient update (OAMgra); online Uni-Exp [12], which uses the weighted univariate exponential loss; B-SVM-OR [10], a batch learning algorithm using the hinge loss surrogate of the AUC objective function; and B-LS-SVM, a batch learning algorithm using the ℓ2 loss surrogate of the AUC objective function.
Classification performances on the testing data of all methods are given in Table 3. These results show that SOLAM achieves performance similar to other state-of-the-art online and offline methods based on AUC maximization. The performance of SOLAM is better than the offline methods on acoustic and covtype, which could be due to the normalization of features used in our experiments for SOLAM. On the other hand, the main advantage of SOLAM is its running efficiency: as we pointed out in the Introduction, its per-iteration running time and space complexity are linear in the data dimension and do not depend on the iteration number.
Datasets    SOLAM         OPAUC         OAMseq        OAMgra        online Uni-Exp  B-SVM-OR      B-LS-SVM
diabetes    .8253±.0314   .8309±.0350   .8264±.0367   .8262±.0338   .8215±.0309     .8326±.0328   .8325±.0329
fourclass   .8226±.0240   .8310±.0251   .8306±.0247   .8295±.0251   .8281±.0305     .8305±.0311   .8309±.0309
german      .7882±.0243   .7978±.0347   .7747±.0411   .7723±.0358   .7908±.0367     .7935±.0348   .7994±.0343
splice      .9253±.0097   .9232±.0099   .8594±.0194   .8864±.0166   .8931±.0213     .9239±.0089   .9245±.0092
usps        .9766±.0032   .9620±.0040   .9310±.0159   .9348±.0122   .9538±.0045     .9630±.0047   .9634±.0045
a9a         .9001±.0042   .9002±.0047   .8420±.0174   .8571±.0173   .9005±.0024     .9009±.0036   .8982±.0028
mnist       .9324±.0020   .9242±.0021   .8615±.0087   .8643±.0112   .7932±.0245     .9340±.0020   .9336±.0025
acoustic    .8898±.0026   .8192±.0032   .7113±.0590   .7711±.0217   .8171±.0034     .8262±.0032   .8210±.0033
ijcnn1      .9215±.0045   .9269±.0021   .9209±.0079   .9100±.0092   .9264±.0035     .9337±.0024   .9320±.0037
covtype     .9744±.0004   .8244±.0014   .7361±.0317   .7403±.0289   .8236±.0017     .8248±.0013   .8222±.0014
sector      .9834±.0023   .9292±.0081   .9163±.0087   .9043±.0100   .9215±.0034     —             —
news20      .9467±.0039   .8871±.0083   .8543±.0099   .8346±.0094   .8880±.0047     —             —
Table 3: Comparison of the testing AUC values (mean±std.) on the evaluated datasets. To accelerate the experiments, the performances of OPAUC, OAMseq, OAMgra, online Uni-Exp, B-SVM-OR and B-LS-SVM were taken from [6].
Figure 1: AUC vs. time curves of the SOLAM algorithm and three state-of-the-art AUC learning algorithms, i.e., OPAUC [6], OAMseq [21], and OAMgra [21], on (a) a9a, (b) usps, and (c) sector. The values in parentheses indicate the average running time (seconds) per pass for each algorithm. [plots omitted]
In Figure 1, we show AUC vs. run time (seconds) for SOLAM and three other state-of-the-art online learning algorithms, i.e., OPAUC [6], OAMseq [21], and OAMgra [21], over three datasets (a9a, usps, and sector), along with the per-iteration running time in the legend². These results show that SOLAM generally reaches convergence faster than the compared methods, while achieving competitive performance.
5 Conclusion
In this paper we showed that AUC maximization is equivalent to a stochastic saddle point problem, from which we proposed a novel online learning algorithm for AUC optimization. In contrast to the existing algorithms [6, 21], the main advantage of our algorithm is that it does not need to store all previous examples or their second-order covariance matrix. Hence, it is a truly online learning algorithm with one-datum space and per-iteration complexity, the same as online gradient descent algorithms [22] for classification.
There are several research directions for future work. Firstly, the convergence rate O(1/√T) for SOLAM only matches that of the black-box sub-gradient method. It would be interesting to derive the fast convergence rate O(1/T) by exploring the special structure of the objective function F defined by (6). Secondly, the convergence was established using the duality gap associated with the stochastic SPP formulation (7). It would be interesting to establish strong convergence of the output w̄_T of algorithm SOLAM to the optimal solution of the actual AUC optimization problem (3). Thirdly, the SPP formulation (1) holds for the least squares loss. We do not know whether the same formulation holds true for other loss functions such as the logistic regression or the hinge loss.
² Experiments were performed, and running times reported, on a workstation with 12 nodes, each with an Intel Xeon E5-2620 2.0GHz CPU and 64GB RAM.
References
[1] F. R. Bach and E. Moulines. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In NIPS, 2011.
[2] L. Bottou and Y. LeCun. Large scale online learning. In NIPS, 2003.
[3] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. IEEE Trans. Information Theory, 50(9):2050–2057, 2004.
[4] S. Clemencon, G. Lugosi, and N. Vayatis. Ranking and empirical minimization of U-statistics. The Annals of Statistics, 36(2):844–874, 2008.
[5] C. Cortes and M. Mohri. AUC optimization vs. error rate minimization. In NIPS, 2003.
[6] W. Gao, R. Jin, S. Zhu, and Z. H. Zhou. One-pass AUC optimization. In ICML, 2013.
[7] W. Gao and Z. H. Zhou. On the consistency of AUC pairwise optimization. In International Joint Conference on Artificial Intelligence, 2015.
[8] J. A. Hanley and B. J. McNeil. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology, 143(1):29–36, 1982.
[9] T. Joachims. A support vector method for multivariate performance measures. In ICML, 2005.
[10] T. Joachims. Training linear SVMs in linear time. In Proceedings of the Twelfth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 217–226, 2006.
[11] P. Kar, B. K. Sriperumbudur, P. Jain, and H. Karnick. On the generalization ability of online learning algorithms for pairwise loss functions. In ICML, 2013.
[12] W. Kotlowski, K. Dembczynski, and E. Hüllermeier. Bipartite ranking through minimization of univariate loss. In ICML, 2011.
[13] G. Lan. An optimal method for stochastic composite optimization. Mathematical Programming, 133(1-2):365–397, 2012.
[14] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[15] I. Pinelis. Optimum bounds for the distributions of martingales in Banach spaces. The Annals of Probability, 22(4):1679–1706, 1994.
[16] A. Rakhlin, O. Shamir, and K. Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. In ICML, 2012.
[17] A. Rakotomamonjy. Optimizing area under ROC curve with SVMs. In 1st International Workshop on ROC Analysis in Artificial Intelligence, 2004.
[18] Y. Wang, R. Khardon, D. Pechyony, and R. Jones. Generalization bounds for online learning algorithms with pairwise loss functions. In COLT, 2012.
[19] Y. Ying and M. Pontil. Online gradient descent learning algorithms. Foundations of Computational Mathematics, 8(5):561–596, 2008.
[20] Y. Ying and D. X. Zhou. Online pairwise learning algorithms. Neural Computation, 28:743–777, 2016.
[21] P. Zhao, S. C. H. Hoi, R. Jin, and T. Yang. Online AUC maximization. In ICML, 2011.
[22] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, 2003.
f-GAN: Training Generative Neural Samplers using
Variational Divergence Minimization
Sebastian Nowozin, Botond Cseke, Ryota Tomioka
Machine Intelligence and Perception Group
Microsoft Research
{Sebastian.Nowozin, Botond.Cseke, ryoto}@microsoft.com
Abstract
Generative neural samplers are probabilistic models that implement sampling using
feedforward neural networks: they take a random input vector and produce a sample
from a probability distribution defined by the network weights. These models
are expressive and allow efficient computation of samples and derivatives, but
cannot be used for computing likelihoods or for marginalization. The generative-adversarial training method makes it possible to train such models through the use of an auxiliary discriminative neural network. We show that the generative-adversarial
approach is a special case of an existing more general variational divergence
estimation approach. We show that any f -divergence can be used for training
generative neural samplers. We discuss the benefits of various choices of divergence
functions on training complexity and the quality of the obtained generative models.
1
Introduction
Probabilistic generative models describe a probability distribution over a given domain X , for example
a distribution over natural language sentences, natural images, or recorded waveforms.
Given a generative model Q from a class Q of possible models we are generally interested in
performing one or multiple of the following operations:
• Sampling. Produce a sample from Q. By inspecting samples or calculating a function on a set of samples we can obtain important insight into the distribution or solve decision problems.
• Estimation. Given a set of iid samples {x_1, x_2, ..., x_n} from an unknown true distribution P, find Q ∈ Q that best describes the true distribution.
• Point-wise likelihood evaluation. Given a sample x, evaluate the likelihood Q(x).
Generative-adversarial networks (GAN) in the form proposed by [10] are an expressive class of
generative models that allow exact sampling and approximate estimation. The model used in GAN is
simply a feedforward neural network which receives as input a vector of random numbers, sampled,
for example, from a uniform distribution. This random input is passed through each layer in the
network and the final layer produces the desired output, for example, an image. Clearly, sampling
from a GAN model is efficient because only one forward pass through the network is needed to
produce one exact sample.
Such probabilistic feedforward neural network models were first considered in [22] and [3]; here we call these models generative neural samplers. GAN is also of this type, as is the decoder model of
a variational autoencoder [18].
In the original GAN paper the authors show that it is possible to estimate neural samplers by
approximate minimization of the symmetric Jensen-Shannon divergence,
D_JS(P‖Q) = (1/2) D_KL(P ‖ (1/2)(P + Q)) + (1/2) D_KL(Q ‖ (1/2)(P + Q)),   (1)
where DKL denotes the Kullback-Leibler divergence. The key technique used in the GAN training
is that of introducing a second "discriminator" neural network which is optimized simultaneously.
Because D_JS(P‖Q) is a proper divergence measure between distributions, this implies that the true
distribution P can be approximated well in case there are sufficient training samples and the model
class Q is rich enough to represent P .
In this work we show that the principle of GANs is more general and we can extend the variational
divergence estimation framework proposed by Nguyen et al. [25] to recover the GAN training
objective and generalize it to arbitrary f -divergences.
More concretely, we make the following contributions over the state-of-the-art:
• We derive the GAN training objectives for all f-divergences and provide as examples additional divergence functions, including the Kullback-Leibler and Pearson divergences.
• We simplify the saddle-point optimization procedure of Goodfellow et al. [10] and provide a theoretical justification.
• We provide experimental insight into which divergence function is suitable for estimating generative neural samplers for natural images.
2 Method
We first review the divergence estimation framework of Nguyen et al. [25] which is based on
f -divergences. We then extend this framework from divergence estimation to model estimation.
2.1 The f-divergence Family
Statistical divergences such as the well-known Kullback-Leibler divergence measure the difference between two given probability distributions. A large class of different divergences are the so-called f-divergences [5, 21], also known as the Ali-Silvey distances [1]. Given two distributions P and Q that possess, respectively, an absolutely continuous density function p and q with respect to a base measure dx defined on the domain X, we define the f-divergence

D_f(P‖Q) = ∫_X q(x) f(p(x)/q(x)) dx,   (2)

where the generator function f : R_+ → R is a convex, lower-semicontinuous function satisfying f(1) = 0. Different choices of f recover popular divergences as special cases of (2). We illustrate common choices in Table 1. See the supplementary material for more divergences and plots.
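As a quick numerical illustration of definition (2) (a sketch of ours, not code from the paper), the integral can be evaluated on a grid for one-dimensional densities; the grid bounds below are an assumption and must cover essentially all of the probability mass:

```python
import numpy as np

def f_divergence(p, q, f, lo=-10.0, hi=10.0, n=200001):
    # D_f(P||Q) = integral of q(x) * f(p(x)/q(x)) dx, via the trapezoid rule.
    x = np.linspace(lo, hi, n)
    return np.trapz(q(x) * f(p(x) / q(x)), x)

def gauss(mu, s):
    return lambda x: np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

p, q = gauss(0.0, 1.0), gauss(1.0, 1.0)
print(f_divergence(p, q, lambda u: u * np.log(u)))  # approx 0.5 = KL(N(0,1) || N(1,1))
```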
2.2 Variational Estimation of f-divergences
Nguyen et al. [25] derive a general variational method to estimate f-divergences given only samples from P and Q. An equivalent result has also been derived by Reid and Williamson [28]. We will extend these results from merely estimating a divergence for a fixed model to estimating model parameters. We call this new method variational divergence minimization (VDM) and show that generative-adversarial training is a special case of our VDM framework.

For completeness, we first provide a self-contained derivation of Nguyen et al.'s divergence estimation procedure. Every convex, lower-semicontinuous function f has a convex conjugate function f*, also known as the Fenchel conjugate [15]. This function is defined as

f*(t) = sup_{u∈dom_f} {ut − f(u)}.   (3)

The function f* is again convex and lower-semicontinuous, and the pair (f, f*) is dual to one another in the sense that f** = f. Therefore, we can also represent f as f(u) = sup_{t∈dom_{f*}} {tu − f*(t)}. Nguyen et al. leverage this variational representation of f in the definition of the f-divergence to obtain a lower bound on the divergence:

D_f(P‖Q) = ∫_X q(x) sup_{t∈dom_{f*}} { t p(x)/q(x) − f*(t) } dx
  ≥ sup_{T∈T} ( ∫_X p(x) T(x) dx − ∫_X q(x) f*(T(x)) dx )
  = sup_{T∈T} ( E_{x∼P}[T(x)] − E_{x∼Q}[f*(T(x))] ),   (4)
Name | D_f(P‖Q) | Generator f(u) | T*(x)
Kullback-Leibler | ∫ p(x) log(p(x)/q(x)) dx | u log u | 1 + log(p(x)/q(x))
Reverse KL | ∫ q(x) log(q(x)/p(x)) dx | −log u | −q(x)/p(x)
Pearson χ² | ∫ (q(x) − p(x))²/p(x) dx | (u − 1)² | 2(p(x)/q(x) − 1)
Squared Hellinger | ∫ (√p(x) − √q(x))² dx | (√u − 1)² | (√(p(x)/q(x)) − 1) · √(q(x)/p(x))
Jensen-Shannon | (1/2) ∫ p(x) log(2p(x)/(p(x)+q(x))) + q(x) log(2q(x)/(p(x)+q(x))) dx | −(u+1) log((1+u)/2) + u log u | log(2p(x)/(p(x)+q(x)))
GAN | ∫ p(x) log(2p(x)/(p(x)+q(x))) + q(x) log(2q(x)/(p(x)+q(x))) dx − log(4) | u log u − (u+1) log(u+1) | log(p(x)/(p(x)+q(x)))
Table 1: List of f-divergences D_f(P‖Q) together with generator functions and the optimal variational functions. Part of the list of divergences and their generators is based on [26]. For all divergences we have f : dom_f → R ∪ {+∞}, where f is convex and lower-semicontinuous. Also we have f(1) = 0, which ensures that D_f(P‖P) = 0 for any distribution P. As shown by [10], GAN is related to the Jensen-Shannon divergence through D_GAN = 2D_JS − log(4).
where T is an arbitrary class of functions T : X → R. The above derivation yields a lower bound because the class of functions T may contain only a subset of all possible functions. By taking the variation of the lower bound in (4) w.r.t. T, we find that under mild conditions on f [25] the bound is tight for

T*(x) = f'(p(x)/q(x)),   (5)

where f' denotes the first-order derivative of f. This condition can serve as a guiding principle for choosing f and designing the class of functions T. For example, the popular reverse Kullback-Leibler divergence corresponds to f(u) = −log(u), resulting in T*(x) = −q(x)/p(x); see Table 1.

We list common f-divergences in Table 1 and provide their Fenchel conjugates f* and the domains dom_{f*} in Table 2. We provide plots of the generator functions and their conjugates in the supplementary materials.
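For reference, the generator functions of Table 1 are easy to write down as plain callables; the sketch below (names are ours) can be used to reproduce such plots:

```python
import numpy as np

GENERATORS = {
    "kl":             lambda u: u * np.log(u),
    "reverse_kl":     lambda u: -np.log(u),
    "pearson_chi2":   lambda u: (u - 1.0) ** 2,
    "sq_hellinger":   lambda u: (np.sqrt(u) - 1.0) ** 2,
    "jensen_shannon": lambda u: -(u + 1.0) * np.log((1.0 + u) / 2.0) + u * np.log(u),
    "gan":            lambda u: u * np.log(u) - (u + 1.0) * np.log(u + 1.0),
}

for name, f in GENERATORS.items():
    # f(1) = 0 for the proper f-divergences; the GAN generator gives f(1) = -log(4),
    # consistent with D_GAN = 2 D_JS - log(4).
    print(name, f(1.0))
```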
2.3 Variational Divergence Minimization (VDM)
We now use the variational lower bound (4) on the f-divergence D_f(P‖Q) in order to estimate a generative model Q given a true distribution P.

To this end, we follow the generative-adversarial approach [10] and use two neural networks, Q and T. Q is our generative model, taking as input a random vector and outputting a sample of interest. We parametrize Q through a vector θ and write Q_θ. T is our variational function, taking as input a sample and returning a scalar. We parametrize T using a vector ω and write T_ω.

We can train a generative model Q_θ by finding a saddle point of the following f-GAN objective function, where we minimize with respect to θ and maximize with respect to ω:

F(θ, ω) = E_{x∼P}[T_ω(x)] − E_{x∼Q_θ}[f*(T_ω(x))].   (6)

To optimize (6) on a given finite training data set, we approximate the expectations using minibatch samples. To approximate E_{x∼P}[·] we sample B instances without replacement from the training set. To approximate E_{x∼Q_θ}[·] we sample B instances from the current generative model Q_θ; a sketch of the resulting minibatch estimate follows below.
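A minimal sketch of this minibatch estimate (our naming; `T` and `f_star` are supplied by the caller for the chosen divergence):

```python
import numpy as np

def fgan_objective(T, f_star, x_real, x_gen):
    # Monte-Carlo estimate of F(theta, omega) in (6): the mean of T over data
    # samples minus the mean of f*(T) over model samples.
    return np.mean(T(x_real)) - np.mean(f_star(T(x_gen)))
```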
2.4 Representation for the Variational Function

To apply the variational objective (6) to different f-divergences, we need to respect the domain dom_{f*} of the conjugate function f*. To this end, we assume that the variational function T_ω is represented in the form T_ω(x) = g_f(V_ω(x)) and rewrite the saddle objective (6) as follows:

F(θ, ω) = E_{x∼P}[g_f(V_ω(x))] + E_{x∼Q_θ}[−f*(g_f(V_ω(x)))],   (7)

where V_ω : X → R without any range constraints on the output, and g_f : R → dom_{f*} is an output activation function specific to the f-divergence used. In Table 2 we propose suitable output activation functions for the various conjugate functions f* and their domains.¹ Although the choice of g_f is somewhat arbitrary, we choose all of them to be monotone increasing functions so that a large output V_ω(x) corresponds to the belief of the variational function that the sample x comes from the data distribution P, as in the GAN case; see Figure 1.

¹ Note that for numerical implementation we recommend directly implementing the scalar function f*(g_f(·)) robustly instead of evaluating the two functions in sequence; see Figure 1.
Name                  | Output activation g_f          | dom_{f*}    | Conjugate f*(t)     | f'(1)
Kullback-Leibler (KL) | v                              | R           | exp(t − 1)          | 1
Reverse KL            | −exp(−v)                       | R_−         | −1 − log(−t)        | −1
Pearson χ²            | v                              | R           | t²/4 + t            | 0
Squared Hellinger     | 1 − exp(−v)                    | t < 1       | t/(1 − t)           | 0
Jensen-Shannon        | log(2) − log(1 + exp(−v))      | t < log(2)  | −log(2 − exp(t))    | 0
GAN                   | −log(1 + exp(−v))              | R_−         | −log(1 − exp(t))    | −log(2)
Table 2: Recommended final layer activation functions and critical variational function level defined by f'(1). The critical value f'(1) can be interpreted as a classification threshold applied to T(x) to distinguish between true and generated samples.
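Following the recommendation in footnote 1, the compositions f*(g_f(v)) can be simplified symbolically so that no intermediate exp or log overflows. The closed forms below are a sketch we derived from Table 2 (not code from the paper), written with PyTorch:

```python
import math
import torch
import torch.nn.functional as F

def gf(name, v):
    # Output activations g_f from Table 2; v is the unconstrained output of V_omega.
    return {"kl": v,
            "reverse_kl": -torch.exp(-v),
            "pearson_chi2": v,
            "sq_hellinger": 1.0 - torch.exp(-v),
            "jensen_shannon": math.log(2.0) - F.softplus(-v),
            "gan": -F.softplus(-v)}[name]

def f_star_gf(name, v):
    # f*(g_f(v)) in numerically stable closed form.
    return {"kl": torch.exp(v - 1.0),
            "reverse_kl": v - 1.0,                  # -1 - log(-t) with t = -exp(-v)
            "pearson_chi2": 0.25 * v * v + v,
            "sq_hellinger": torch.exp(v) - 1.0,     # t/(1-t) with t = 1 - exp(-v)
            "jensen_shannon": F.softplus(v) - math.log(2.0),
            "gan": F.softplus(v)}[name]
```

For the GAN row, f*(g_f(v)) = softplus(v) recovers exactly the −log(1 − D_ω(x)) term of the original objective with D_ω(x) = σ(v).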
Figure 1: The two terms in the saddle objective (7), g_f(v) and −f*(g_f(v)), are plotted as a function of the variational function output v = V_ω(x) for the KL, reverse KL, Pearson χ², squared Hellinger, Jensen-Shannon, and GAN divergences. [plots omitted]
It is also instructive to look at the second term −f*(g_f(v)) in the saddle objective (7). This term is typically (except for the Pearson χ² divergence) a decreasing function of the output V_ω(x), favoring variational functions that output negative numbers for samples from the generator.

We can see the GAN objective,

F(θ, ω) = E_{x∼P}[log D_ω(x)] + E_{x∼Q_θ}[log(1 − D_ω(x))],   (8)

as a special instance of (7) by identifying the terms in the expectations of (7) and (8). In particular, choosing the last nonlinearity in the discriminator to be the sigmoid D_ω(x) = 1/(1 + e^{−V_ω(x)}) corresponds to the output activation function g_f(v) = −log(1 + e^{−v}); see Table 2.
2.5 Example: Univariate Mixture of Gaussians
To demonstrate the properties of the different f -divergences and to validate the variational divergence
estimation framework we perform an experiment similar to the one of [24].
Setup. We approximate a mixture of Gaussians by learning a Gaussian distribution. We represent our
model Q? using a linear function which receives a random z ? N (0, 1) and outputs G? (z) = ? + ?z,
where ? = (?, ?) are the two scalar parameters to be learned. For the variational function T? we use
a neural network with two hidden layers having 64 units each and tanh activations. We optimize the
objective F (?, ?) by using the single-step gradient method presented in Section 3. In each step we
sample batches of size 1024 from p(x) and p(z) and we use a step-size of ? = 0.01 for updating
both ? and ?. We compare the results to the best fit provided by the exact optimization of Df (P kQ? )
w.r.t. ?, which is feasible in this case by solving the required integrals in (2) numerically. We use
? (learned) and ?? (best fit) to distinguish the parameters sets used in these two approaches.
(?
? , ?)
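A sketch of this parametrization in PyTorch (our code; we parametrize σ through its logarithm for positivity, an assumption not stated in the paper):

```python
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Q_theta: push-forward of z ~ N(0,1) through G_theta(z) = mu + sigma * z."""
    def __init__(self):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(1))
        self.log_sigma = nn.Parameter(torch.zeros(1))  # sigma = exp(log_sigma) > 0

    def forward(self, z):
        return self.mu + torch.exp(self.log_sigma) * z

# T_omega: two hidden layers of 64 tanh units, scalar output.
variational = nn.Sequential(
    nn.Linear(1, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
```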
Results. The left side of Table 3 shows the optimal divergence and objective values D_f(P‖Q_{θ*}) and F(θ̂, ω̂) as well as the corresponding (optimal) means and standard deviations. Note that the results are in line with the lower bound property, having D_f(P‖Q_{θ*}) ≥ F(θ̂, ω̂). There is a good correspondence between the gap in objectives and the difference between the fitted means and standard deviations. The right side of Table 3 shows the results of the following experiment: (1) we train T_ω and Q_θ using a particular divergence, then (2) we estimate the divergence and re-train T_ω while keeping Q_θ fixed. As expected, Q_θ performs best on the divergence it was trained with. We present further details and plots of the fitted Gaussians and variational functions in the supplementary materials.
4
KL
KL-rev
JS
Jeffrey
Pearson
Df (P ||Q?? )
?
F (?,
? ?)
0.2831
0.2801
0.2480
0.2415
0.1280
0.1226
0.5705
0.5151
0.6457
0.6379
??
?
?
1.0100
1.0335
1.5782
1.5624
1.3070
1.2854
1.3218
1.2295
0.5737
0.6157
??
?
?
1.8308
1.8236
1.6319
1.6403
1.7542
1.7659
1.7034
1.8087
1.9274
1.9031
train \ test
KL
KL-rev
JS
Jeffrey
Pearson
KL
KL-rev
JS
Jeffrey
Pearson
0.2808
0.3518
0.2871
0.2869
0.2970
0.3423
0.2414
0.2760
0.2975
0.5466
0.1314
0.1228
0.1210
0.1247
0.1665
0.5447
0.5794
0.5260
0.5236
0.7085
0.7345
1.3974
0.92160
0.8849
0.648
Table 3: Gaussian approximation of a mixture of Gaussians. Left: optimal objectives, and the learned mean
?, ?
? ) (learned) and ?? = (?? , ? ? ) (best fit). Right: objective values to the true
and the standard deviation: ?? = (?
distribution for each trained model. For each divergence, the lowest objective function value is achieved by the
model that was trained for this divergence.
In summary, our results demonstrate that when the generative model is misspecified, the divergence
function used for estimation has a strong influence on which model is learned.
3 Algorithms for Variational Divergence Minimization (VDM)
We now discuss numerical methods to find saddle points of the objective (6). To this end, we distinguish two methods: first, the alternating method originally proposed by Goodfellow et al. [10], and second, a more direct single-step optimization procedure.
In our variational framework, the alternating gradient method can be described as a double-loop
method; the internal loop tightens the lower bound on the divergence, whereas the outer loop improves
the generator model. While the motivation for this method is plausible, in practice a popular choice is
taking a single step in the inner loop, requiring two backpropagation passes for one outer iteration.
Goodfellow et al. [10] provide a local convergence guarantee.
3.1 Single-Step Gradient Method
Motivated by the success of the alternating gradient method with a single inner step, we propose an even simpler algorithm, shown in Algorithm 1. The algorithm differs from the original one in that there is no inner loop and the gradients with respect to θ and ω are computed in a single back-propagation.
Algorithm 1 Single-Step Gradient Method
1: function SingleStepGradientIteration(P, θ^t, ω^t, B, η)
2:   Sample X_P = {x_1, ..., x_B} and X_Q = {x'_1, ..., x'_B} from P and Q_{θ^t}, respectively.
3:   Update: ω^{t+1} = ω^t + η ∇_ω F(θ^t, ω^t).
4:   Update: θ^{t+1} = θ^t − η ∇_θ F(θ^t, ω^t).
5: end function
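A sketch of one such iteration in PyTorch (our code): both gradients come from a single call to backward(), and the plain SGD steps with a shared step size η mirror the pseudocode. Here `gf` and `f_star_gf` are single-argument callables for the chosen divergence, e.g. partial applications of the Table 2 sketches above.

```python
import torch

def single_step_iteration(gen, var_fn, eta, x_real, z, gf, f_star_gf):
    # F(theta, omega) as in objective (7) on one minibatch pair.
    F_val = gf(var_fn(x_real)).mean() - f_star_gf(var_fn(gen(z))).mean()
    gen.zero_grad()
    var_fn.zero_grad()
    F_val.backward()                    # one back-propagation for both players
    with torch.no_grad():
        for p in var_fn.parameters():   # line 3: gradient ascent in omega
            p += eta * p.grad
        for p in gen.parameters():      # line 4: gradient descent in theta
            p -= eta * p.grad
    return float(F_val)
```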
Analysis. Here we show that Algorithm 1 geometrically converges to a saddle point (θ*, ω*) if there is a neighborhood around the saddle point in which F is strongly convex in θ and strongly concave in ω. These assumptions are similar to those made in [10]. Formally, we assume:

∇_θ F(θ*, ω*) = 0,   ∇_ω F(θ*, ω*) = 0,   ∇²_θ F(θ, ω) ⪰ δI,   ∇²_ω F(θ, ω) ⪯ −δI,   (9)

for (θ, ω) in the neighborhood of (θ*, ω*). Note that although there could be many saddle points that arise from the structure of deep networks [6], they would not qualify as the solution of our variational framework under these assumptions.

For convenience, let us define π^t = (θ^t, ω^t). Now the convergence of Algorithm 1 can be stated as follows (the proof is given in the supplementary material):

Theorem 1. Suppose that there is a saddle point π* = (θ*, ω*) with a neighborhood that satisfies conditions (9). Moreover, we define J(π) = (1/2)‖∇F(π)‖²_2 and assume that in the above neighborhood F is sufficiently smooth, so that there is a constant L > 0 such that ‖∇J(π') − ∇J(π)‖_2 ≤ L‖π' − π‖_2 for any π, π' in the neighborhood of π*. Then, using the step size η = δ/L in Algorithm 1, we have

J(π^t) ≤ (1 − δ²/L)^t J(π^0).

That is, the squared norm of the gradient ∇F(π) decreases geometrically.
3.2 Practical Considerations
Here we discuss principled extensions of the heuristic proposed in [10] and the real/fake statistics discussed by Larsen and Sønderby². Furthermore, we discuss practical advice that slightly deviates from the principled viewpoint.
Goodfellow et al. [10] noticed that training GAN can be significantly sped up by maximizing E_{x∼Q_θ}[log D_ω(x)] instead of minimizing E_{x∼Q_θ}[log(1 − D_ω(x))] when updating the generator. In the more general f-GAN Algorithm 1 this means that we replace line 4 with the update

θ^{t+1} = θ^t + η ∇_θ E_{x∼Q_{θ^t}}[g_f(V_{ω^t}(x))],   (10)

thereby maximizing the variational function output on the generated samples. We can show that this transformation preserves the stationary point, generalizing the argument in [10]: note that the only difference between the original direction (line 4) and (10) is the scalar factor f*'(T_ω(x)), which is the derivative of the conjugate function f*. Since f*' is the inverse of f' (see Cor. 1.4.4, Chapter E, [15]), if T = T*, then using (5) this factor is the density ratio p(x)/q(x), which equals one at the stationary point. We found this transformation useful also for other divergences. We found Adam [17] and gradient clipping to be useful, especially in the large-scale experiment on the LSUN dataset.
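In code, the heuristic (10) amounts to replacing the generator step of the earlier sketch with an ascent step on the g_f term alone (again our naming):

```python
import torch

def generator_step_heuristic(gen, var_fn, eta, z, gf):
    # Update (10): maximize E_{x ~ Q_theta}[g_f(V_omega(x))] over theta only.
    obj = gf(var_fn(gen(z))).mean()
    gen.zero_grad()
    obj.backward()
    with torch.no_grad():
        for p in gen.parameters():
            p += eta * p.grad           # ascent on the generated-sample term
    var_fn.zero_grad()                  # discard gradients accumulated in omega
```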
The original implementation [10] of GANs³ and also Larsen and Sønderby monitor certain real and fake statistics, which are defined as the true positive and true negative rates of the variational function, viewing it as a binary classifier. Since our output activations g_f are all monotone, we can derive similar statistics for any f-divergence by only changing the decision threshold. Due to the link between the density ratio and the variational function (5), the threshold lies at f'(1) (see Table 2). That is, we can interpret the output of the variational function as classifying the input x as a true sample if the variational function T_ω(x) is larger than f'(1), and classifying it as a generator sample otherwise.
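These statistics are straightforward to track during training; a sketch (our code), with the threshold f'(1) taken from Table 2:

```python
import torch

def real_fake_rates(t_real, t_fake, threshold):
    # Treat T_omega as a binary classifier with decision threshold f'(1):
    # e.g. 1 for KL, 0 for JS and squared Hellinger, -log(2) for GAN.
    real_rate = (t_real > threshold).float().mean().item()   # true-positive rate
    fake_rate = (t_fake <= threshold).float().mean().item()  # true-negative rate
    return real_rate, fake_rate
```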
4 Experiments
We now train generative neural samplers based on VDM on the MNIST and LSUN datasets.
MNIST Digits. We use the MNIST training data set (60,000 samples, 28-by-28 pixel images) to train the generator and variational function model proposed in [10] for various f-divergences. With z ∼ Uniform_100(−1, 1) as input, the generator model has two linear layers, each followed by batch normalization and ReLU activation, and a final linear layer followed by the sigmoid function. The variational function V_ω(x) has three linear layers with exponential linear units [4] in between. The final activation is specific to each divergence and listed in Table 2. As in [27] we use Adam with a learning rate of α = 0.0002 and update weight β = 0.5. We use a batch size of 4096, sampled from the training set without replacement, and train each model for one hour. We also compare against variational autoencoders [18] with 20 latent dimensions.
Results and Discussion. We evaluate the performance using the kernel density estimation (Parzen window) approach used in [10]. To this end, we sample 16k images from the model and fit a Parzen window estimator with an isotropic Gaussian kernel, choosing the bandwidth by three-fold cross validation. The final density model is used to evaluate the average log-likelihood on the MNIST test set (10k samples). We show the results in Table 4, and some samples from our models in Figure 2.
The use of the KDE approach to log-likelihood estimation has known deficiencies [33]. In particular,
for the dimensionality used in MNIST (d = 784) the number of model samples required to obtain
accurate log-likelihood estimates is infeasibly large. We found a large variability (up to 50 nats)
between multiple repetitions. As such the results are not entirely conclusive. We also trained the
same KDE estimator on the MNIST training set, achieving a significantly higher holdout likelihood.
However, it is reassuring to see that the model trained for the Kullback-Leibler divergence indeed
achieves a high holdout likelihood compared to the GAN model.
² http://torch.ch/blog/2015/11/13/gan.html
³ Available at https://github.com/goodfeli/adversarial
Training divergence            KDE ⟨LL⟩ (nats) ± SEM
Kullback-Leibler                 416  ±  5.62
Reverse Kullback-Leibler         319  ±  8.36
Pearson χ²                       429  ±  5.53
Neyman χ²                        300  ±  8.33
Squared Hellinger               −708  ± 18.1
Jeffrey                        −2101  ± 29.9
Jensen-Shannon                   367  ±  8.19
GAN                              305  ±  8.97
Variational Autoencoder [18]     445  ±  5.36
KDE MNIST train (60k)            502  ±  5.99
Table 4: Kernel density estimation evaluation on the MNIST test data set. Each KDE model is built from 16,384 samples from the learned generative model. We report the mean log-likelihood on the MNIST test set (n = 10,000) and the standard error of the mean. The KDE MNIST train result uses 60,000 MNIST training images to fit a single KDE model.
Figure 2: MNIST model samples trained using KL, reverse KL, Hellinger, and Jensen-Shannon, from top to bottom. [images omitted]
LSUN Natural Images. Through the DCGAN work [27] the generative-adversarial approach has
shown real promise in generating natural-looking images. Here we use the same architecture as in [27] and replace the GAN objective with our more general f-GAN objective.
We use the large scale LSUN database [35] of natural images of different categories. To illustrate
the different behaviors of different divergences we train the same model on the classroom category
of images, containing 168,103 images of classroom environments, rescaled and center-cropped to
96-by-96 pixels.
Setup. We use the generator architecture and training settings proposed in DCGAN [27]. The model receives z ∼ Uniform_{d_rand}(−1, 1) and feeds it through one linear layer and three deconvolution layers with batch normalization and ReLU activation in between. The variational function is the same as the discriminator architecture in [27] and follows the structure of a convolutional neural network with batch normalization, exponential linear units [4] and one final linear layer.
Results. Figure 3 shows 16 random samples from neural samplers trained using the GAN, KL, and squared Hellinger divergences. All three divergences produce equally realistic samples. Note that the difference in the learned distribution Q_θ arises only when the generator model is not rich enough.
Figure 3: Samples from three different divergences: (a) GAN, (b) KL, (c) squared Hellinger. [images omitted]
5 Related Work
We now discuss how our approach relates to existing work. Building generative models of real world
distributions is a fundamental goal of machine learning and much related work exists. We only
discuss work that applies to neural network models.
Mixture density networks [2] are neural networks which directly regress the parameters of a finite
parametric mixture model. When combined with a recurrent neural network this yields impressive
generative models of handwritten text [12].
NADE [19] and RNADE [34] perform a factorization of the output using a predefined and somewhat
arbitrary ordering of output dimensions. The resulting model samples one variable at a time conditioning on the entire history of past variables. These models provide tractable likelihood evaluations
and compelling results, but it is unclear how to select the factorization order in many applications.
Diffusion probabilistic models [31] define a target distribution as a result of a learned diffusion
process which starts at a trivial known distribution. The learned model provides exact samples and
approximate log-likelihood evaluations.
Noise contrastive estimation (NCE) [14] is a method that estimates the parameters of unnormalized
probabilistic models by performing non-linear logistic regression to discriminate the data from
artificially generated noise. NCE can be viewed as a special case of GAN where the discriminator
is constrained to a specific form that depends on the model (logistic regression classifier) and the
generator (kept fixed) is providing the artificially generated noise (see supplementary material).
The generative neural sampler models of [22] and [3] did not provide satisfactory learning methods;
[22] used importance sampling and [3] expectation maximization. The main difference to GAN and
to our work really is in the learning objective, which is effective and computationally inexpensive.
Variational auto-encoders (VAE) [18, 29] are pairs of probabilistic encoder and decoder models
which map a sample to a latent representation and back, trained using a variational Bayesian learning
objective. The advantage of VAEs is in the encoder model which allows efficient inference from
observation to latent representation and overall they are a compelling alternative to f -GANs and
recent work has studied combinations of the two approaches [23]
As an alternative to the GAN training objective the work [20] and independently [7] considered the
use of the kernel maximum mean discrepancy (MMD) [13, 9] as a training objective for probabilistic
models. This objective is simpler to train compared to GAN models because there is no explicitly
represented variational function. However, it requires the choice of a kernel function and the reported
results so far seem slightly inferior compared to GAN. MMD is a particular instance of a larger class of
probability metrics [32] which all take the form D(P, Q) = sup_{T∈T} |E_{x∼P}[T(x)] − E_{x∼Q}[T(x)]|,
where the function class T is chosen in a manner specific to the divergence. Beyond MMD other
popular metrics of this form are the total variation metric (also an f -divergence), the Wasserstein
distance, and the Kolmogorov distance.
A previous attempt to enable minimization of the KL-divergence in deep generative models is due to
Goodfellow et al. [11], where an approximation to the gradient of the KL divergence is derived.
In [16] another generalization of the GAN objective is proposed by using an alternative Jensen-Shannon divergence that interpolates between the KL and the reverse KL divergence and has Jensen-Shannon as its mid-point. We discuss this work in more detail in the supplementary materials.
6 Discussion
Generative neural samplers offer a powerful way to represent complex distributions without limiting
factorizing assumptions. However, while the purely generative neural samplers as used in this paper
are interesting, their use is limited because after training they cannot be conditioned on observed data
and thus are unable to provide inferences.
We believe that in the future the true benefits of neural samplers for representing uncertainty will be
found in discriminative models and our presented methods extend readily to this case by providing
additional inputs to both the generator and variational function as in the conditional GAN model [8].
We hope that the practical difficulties of training with saddle point objectives are not an underlying
feature of the model but can instead be overcome with novel optimization algorithms. Further work, such as [30], is needed to investigate and hopefully overcome these difficulties.
Acknowledgements. We thank Ferenc Huszár for discussions on the generative-adversarial approach.
References
[1] S. M. Ali and S. D. Silvey. A general class of coefficients of divergence of one distribution from another. JRSS (B), pages 131–142, 1966.
[2] C. M. Bishop. Mixture density networks. Technical report, Aston University, 1994.
[3] C. M. Bishop, M. Svensén, and C. K. I. Williams. GTM: The generative topographic mapping. Neural Computation, 10(1):215–234, 1998.
[4] D. A. Clevert, T. Unterthiner, and S. Hochreiter. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv:1511.07289, 2015.
[5] I. Csiszár and P. C. Shields. Information theory and statistics: A tutorial. Foundations and Trends in Communications and Information Theory, 1:417–528, 2004.
[6] Y. N. Dauphin, R. Pascanu, C. Gulcehre, K. Cho, S. Ganguli, and Y. Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In NIPS, pages 2933–2941, 2014.
[7] G. K. Dziugaite, D. M. Roy, and Z. Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. In UAI, pages 258–267, 2015.
[8] J. Gauthier. Conditional generative adversarial nets for convolutional face generation. Class Project for Stanford CS231N: Convolutional Neural Networks for Visual Recognition, Winter semester 2014, 2014.
[9] T. Gneiting and A. E. Raftery. Strictly proper scoring rules, prediction, and estimation. JASA, 102(477):359–378, 2007.
[10] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, pages 2672–2680, 2014.
[11] I. J. Goodfellow. On distinguishability criteria for estimating generative models. In International Conference on Learning Representations (ICLR 2015), 2015. arXiv:1412.6515.
[12] A. Graves. Generating sequences with recurrent neural networks. arXiv:1308.0850, 2013.
[13] A. Gretton, K. Fukumizu, C. H. Teo, L. Song, B. Schölkopf, and A. J. Smola. A kernel statistical test of independence. In NIPS, pages 585–592, 2007.
[14] M. Gutmann and A. Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In AISTATS, pages 297–304, 2010.
[15] J. B. Hiriart-Urruty and C. Lemaréchal. Fundamentals of convex analysis. Springer, 2012.
[16] F. Huszár. How (not) to train your generative model: scheduled sampling, likelihood, adversary? arXiv:1511.05101, 2015.
[17] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv:1412.6980, 2014.
[18] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. arXiv:1402.0030, 2013.
[19] H. Larochelle and I. Murray. The neural autoregressive distribution estimator. In AISTATS, 2011.
[20] Y. Li, K. Swersky, and R. Zemel. Generative moment matching networks. In ICML, 2015.
[21] F. Liese and I. Vajda. On divergences and informations in statistics and information theory. IEEE Transactions on Information Theory, 52(10):4394–4412, 2006.
[22] D. J. C. MacKay. Bayesian neural networks and density networks. Nucl. Instrum. Meth. A, 354(1):73–80, 1995.
[23] A. Makhzani, J. Shlens, N. Jaitly, and I. Goodfellow. Adversarial autoencoders. arXiv:1511.05644, 2015.
[24] T. Minka. Divergence measures and message passing. Technical report, Microsoft Research, 2005.
[25] X. Nguyen, M. J. Wainwright, and M. I. Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 56(11):5847–5861, 2010.
[26] F. Nielsen and R. Nock. On the chi-square and higher-order chi distances for approximating f-divergences. IEEE Signal Processing Letters, 21(1):10–13, 2014.
[27] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv:1511.06434, 2015.
[28] M. D. Reid and R. C. Williamson. Information, divergence and risk for binary experiments. Journal of Machine Learning Research, 12(Mar):731–817, 2011.
[29] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, pages 1278–1286, 2014.
[30] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training GANs. In NIPS, 2016.
[31] J. Sohl-Dickstein, E. A. Weiss, N. Maheswaranathan, and S. Ganguli. Deep unsupervised learning using non-equilibrium thermodynamics. In ICML, pages 2256–2265, 2015.
[32] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. Lanckriet. Hilbert space embeddings and metrics on probability measures. JMLR, 11:1517–1561, 2010.
[33] L. Theis, A. van den Oord, and M. Bethge. A note on the evaluation of generative models. arXiv:1511.01844, 2015.
[34] B. Uria, I. Murray, and H. Larochelle. RNADE: The real-valued neural autoregressive density-estimator. In NIPS, pages 2175–2183, 2013.
[35] F. Yu, Y. Zhang, S. Song, A. Seff, and J. Xiao. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv:1506.03365, 2015.
5,599 | 6,067 | Tagger: Deep Unsupervised Perceptual Grouping
Klaus Greff*, Antti Rasmus, Mathias Berglund, Tele Hotloo Hao,
Jürgen Schmidhuber*, Harri Valpola
The Curious AI Company {antti,mathias,hotloo,harri}@cai.fi
* IDSIA {klaus,juergen}@idsia.ch
Abstract
We present a framework for efficient perceptual inference that explicitly reasons
about the segmentation of its inputs and features. Rather than being trained for
any specific segmentation, our framework learns the grouping process in an unsupervised manner or alongside any supervised task. We enable a neural network to
group the representations of different objects in an iterative manner through a differentiable mechanism. We achieve very fast convergence by allowing the system
to amortize the joint iterative inference of the groupings and their representations.
In contrast to many other recently proposed methods for addressing multi-object
scenes, our system does not assume the inputs to be images and can therefore directly handle other modalities. We evaluate our method on multi-digit classification
of very cluttered images that require texture segmentation. Remarkably our method
achieves improved classification performance over convolutional networks despite
being fully connected, by making use of the grouping mechanism. Furthermore,
we observe that our system greatly improves upon the semi-supervised result of a
baseline Ladder network on our dataset. These results are evidence that grouping
is a powerful tool that can help to improve sample efficiency.
1 Introduction
Humans naturally perceive the world as being structured into different
objects, their properties and relation to each other. This phenomenon
which we refer to as perceptual grouping is also known as amodal
perception in psychology. It occurs effortlessly and includes a segmentation of the visual input, such as that shown in in Figure 1. This
grouping also applies analogously to other modalities, for example
in solving the cocktail party problem (audio) or when separating the
sensation of a grasped object from the sensation of fingers touching
each other (tactile). Even more abstract features such as object class,
color, position, and velocity are naturally grouped together with the
inputs to form coherent objects. This rich structure is crucial for many
real-world tasks such as manipulating objects or driving a car, where
awareness of different objects and their features is required.
In this paper, we introduce a framework for learning efficient iterative inference of such perceptual grouping which we call iTerative
Amortized Grouping (TAG). This framework entails a mechanism for
iteratively splitting the inputs and internal representations into several
different groups. We make no assumptions about the structure of this
segmentation and rather train the model end-to-end to discover which are the relevant features and how to perform the splitting.

Figure 1: An example of perceptual grouping for vision.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
[Figure 2: diagram omitted in this text extraction. Left panel: three TAG iterations passing the group reconstructions $z$, masks $m$, and likelihoods $L(m)$ through a shared parametric mapping to produce $q^i(x)$. Right panel: detailed view of a single iteration.]
Figure 2: Left: Three iterations of the TAG system which learns by denoising its input using several
groups (shown in color). Right: Detailed view of a single iteration on the TextureMNIST1 dataset.
Please refer to the supplementary material for further details.
By using an auxiliary denoising task we train the system to directly amortize the posterior inference
of the object features and their grouping. Because our framework does not make any assumptions
about the structure of the data, it is completely domain agnostic and applicable to any type of data.
The TAG framework works completely unsupervised, but can also be combined with supervised
learning for classification or segmentation.
2 Iterative Amortized Grouping (TAG)¹
Grouping. Our goal is to enable neural networks to split inputs and internal representations into
coherent groups. We define a group to be a collection of inputs and internal representations that are
processed together, but (largely) independent of each other. By processing each group separately the
network can make use of invariant distributed features without the risk of interference and ambiguities,
which might arise when processing everything in one clump. We make no assumptions about the
correspondence between objects and groups. If the network can process several objects in one group
without unwanted interference, then the network is free to do so. The "correct" grouping is often
dynamic, ambiguous and task dependent. So rather than training it as a separate task, we allow the
network to split the processing of the inputs, and let it learn how to best use this ability for any given
problem. To make the task of instance segmentation easy, we keep the groups symmetric in the sense
that each group is processed by the same underlying model.
Amortized Iterative Inference. We want our model to reason not only about the group assignments
but also about the representation of each group. This amounts to inference over two sets of variables:
the latent group assignments and the individual group representations; A formulation very similar to
mixture models for which exact inference is typically intractable. For these models it is a common
approach to approximate the inference in an iterative manner by alternating between (re-)estimation
of these two sets (e.g., EM-like methods [4]). The intuition is that given the grouping, inferring the
object features becomes easy, and vice versa. We employ a similar strategy by allowing our network
to iteratively refine its estimates of the group assignments as well as the object representations.
Rather than deriving and then running an inference algorithm, we train a parametric mapping to arrive
at the end result of inference as efficiently as possible [9]. This is known as amortized inference [31],
and it is used, for instance, in variational autoencoders where the encoder learns to amortize the
posterior inference required by the generative model represented by the decoder. Here we instead
apply the framework of denoising autoencoders [6, 15, 34] which are trained to reconstruct original
inputs $x$ from corrupted versions $\tilde{x}$. This encourages the network to implement useful amortized
posterior inference without ever having to specify or even know the underlying generative model
whose inference is implicitly learned.
¹ Note: This section only provides a short and high-level overview of the TAG framework and Tagger. For a more detailed description please refer to the supplementary material or the extended version of this paper: https://arxiv.org/abs/1606.06724
Algorithm 1: Pseudocode for running Tagger on a single real-valued example x. For details and a binary-input version please refer to the supplementary material.

    Data: $x, K, T, \sigma, v, W_h, W_u, \theta$
    Result: $z^T, m^T, C$
    Initialization:
        $\tilde{x} \leftarrow x + \mathcal{N}(0, \sigma^2 I)$
        $m^0 \leftarrow \mathrm{softmax}(\mathcal{N}(0, I))$
        $z^0 \leftarrow \mathrm{E}[x]$
    for $i = 0 \ldots T-1$ do
        for $k = 1 \ldots K$ do
            $\tilde{z}_k \leftarrow \mathcal{N}(\tilde{x};\, z_k^i,\, (v + \sigma^2) I)$
            $\delta z_k^i \leftarrow (\tilde{x} - z_k^i)\, m_k^i\, \tilde{z}_k$
            $L(m_k^i) \leftarrow \tilde{z}_k / \sum_h \tilde{z}_h$
            $h_k^i \leftarrow f(W_h\, [z_k^i, m_k^i, \delta z_k^i, L(m_k^i)])$
            $[z_k^{i+1}, m_k^{i+1}] \leftarrow W_u\, \mathrm{Ladder}(h_k^i, \theta)$
        end
        $m^{i+1} \leftarrow \mathrm{softmax}(m^{i+1})$
        $q^{i+1}(x) \leftarrow \sum_{k=1}^K \mathcal{N}(x;\, z_k^{i+1},\, vI)\, m_k^{i+1}$
    end
    $C \leftarrow -\sum_{i=1}^T \log q^i(x)$

Figure 3: An example of how Tagger would use a 3-layer-deep Ladder Network as its parametric mapping to perform its iteration $i+1$. Note the optional class prediction output $y_g^i$ for classification tasks. See supplementary material for details.
Putting it together. By using the negative log likelihood $C(x) = -\sum_i \log q^i(x)$ as a cost function, we train our system to compute an approximation $q^i(x)$ of the true denoising posterior $p(x \mid \tilde{x})$ at each iteration $i$. An overview of the whole system is given in Figure 2. For each input element $x_j$ we introduce $K$ latent binary variables $g_{k,j}$ that take a value of 1 if this element is generated by group $k$. This way inference is split into $K$ groups, and we can write the approximate posterior in vector notation as follows:

$$q^i(x) = \sum_k q^i(x \mid g_k)\, q^i(g_k) = \sum_k \mathcal{N}(x;\, z_k^i,\, vI)\, m_k^i, \qquad (1)$$

where we model the group reconstruction $q^i(x \mid g_k)$ as a Gaussian with mean $z_k^i$ and variance $v$, and the group assignment posterior $q^i(g_k)$ as a categorical distribution $m_k$.
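As a concrete illustration, here is a minimal NumPy sketch of Eq. (1) for a single example; the sizes (K = 4 groups, D = 400 input elements), the placeholder data, and the small epsilon are illustrative assumptions rather than the reference implementation:

import numpy as np

def gaussian_pdf(x, mean, var):
    # Elementwise Gaussian density N(x; mean, var*I), evaluated per input dimension.
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

K, D, v = 4, 400, 0.25            # assumed: 4 groups, 20x20 inputs, variance v
x = np.random.rand(D)             # one input example (placeholder data)
z = np.random.rand(K, D)          # per-group reconstructions z_k^i
m = np.full((K, D), 1.0 / K)      # group assignment posteriors m_k^i (sum to 1 over k)

# Eq. (1): q^i(x) = sum_k N(x; z_k^i, vI) m_k^i, evaluated per input element.
q = (gaussian_pdf(x[None, :], z, v) * m).sum(axis=0)   # shape (D,)
cost = -np.log(q + 1e-12).sum()                        # this iteration's part of C(x)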
The trainable part of the TAG framework is given by a parametric mapping that operates independently on each group $k$ and is used to compute both $z_k^i$ and $m_k^i$ (which is afterwards normalized using an elementwise softmax over the groups). This parametric mapping is usually implemented by a neural network, and the whole system is trained end-to-end using standard backpropagation through time. The input to the network for the next iteration consists of the vectors $z_k^i$ and $m_k^i$ along with two additional quantities: the remaining modelling error $\delta z_k^i$ and the group assignment likelihood ratio $L(m_k^i)$, which carry information about how the estimates can be improved:

$$\delta z_k^i \propto \frac{\partial C(\tilde{x})}{\partial z_k^i} \qquad \text{and} \qquad L(m_k^i) \propto \frac{q^i(\tilde{x} \mid g_k)}{\sum_h q^i(\tilde{x} \mid g_h)}.$$

Note that they are derived from the corrupted input $\tilde{x}$, to make sure we don't leak information about the clean input $x$ into the system.
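Continuing the sketch above (reusing `gaussian_pdf`, `x`, `z`, `m`, `v`, and `D`), the two extra network inputs can be computed from the corrupted input alone; the corruption level `sigma` is an assumed hyperparameter, and the expressions follow the form given in Algorithm 1:

sigma = 0.2                                    # assumed corruption std
x_tilde = x + sigma * np.random.randn(D)       # corrupted input; the clean x is never used here

# Per-group likelihood of the corrupted input under each reconstruction (z~_k in Algorithm 1).
z_lik = gaussian_pdf(x_tilde[None, :], z, v + sigma ** 2)

# Remaining modelling error, proportional to dC(x~)/dz_k^i.
delta_z = (x_tilde[None, :] - z) * m * z_lik

# Group assignment likelihood ratio L(m_k^i), normalized over groups.
L_m = z_lik / (z_lik.sum(axis=0, keepdims=True) + 1e-12)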
Tagger. For this paper we chose the Ladder network [19] as the parametric mapping because its
structure reflects the computations required for posterior inference in hierarchical latent variable
models. This means that the network should be well equipped to handle the hierarchical structure one
might expect to find in many domains. We call this Ladder network wrapped in the TAG framework
Tagger. This is illustrated in Figure 3 and the corresponding pseudocode can be found in Algorithm 1.
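Putting these pieces into a loop gives a heavily simplified, framework-free sketch of Algorithm 1 (again reusing `gaussian_pdf` from the earlier sketch). The `parametric_mapping` stub below merely nudges the estimates; in the actual system this role is played by the learned Ladder network, so the stub is an explicit placeholder assumption:

def softmax(a, axis=0):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def parametric_mapping(z_k, m_k, delta_z_k, L_m_k):
    # Stub: a real Tagger runs a Ladder network here and returns refined estimates.
    return z_k + 0.5 * delta_z_k, np.log(m_k + 1e-12) + np.log(L_m_k + 1e-12)

def tag_iterations(x, K=4, T=3, v=0.25, sigma=0.2):
    D = x.shape[0]
    x_tilde = x + sigma * np.random.randn(D)          # corrupt once per example
    m = softmax(np.random.randn(K, D), axis=0)        # m^0
    z = np.full((K, D), x.mean())                     # z^0 = E[x]
    cost = 0.0
    for _ in range(T):
        z_lik = gaussian_pdf(x_tilde[None, :], z, v + sigma ** 2)
        delta_z = (x_tilde[None, :] - z) * m * z_lik
        L_m = z_lik / (z_lik.sum(axis=0, keepdims=True) + 1e-12)
        z, m_logits = parametric_mapping(z, m, delta_z, L_m)
        m = softmax(m_logits, axis=0)
        q = (gaussian_pdf(x[None, :], z, v) * m).sum(axis=0)
        cost -= np.log(q + 1e-12).sum()               # C accumulates -log q^i(x)
    return z, m, cost

Training then simply backpropagates through the unrolled iterations of such a loop, averaging the cost over iterations as described in Section 3.1.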
3 Experiments and results
We explore the properties and evaluate the performance of Tagger both in fully unsupervised settings and in semi-supervised tasks on two datasets². Although both datasets consist of images and grouping is intuitively similar to image segmentation, there is no prior in the Tagger model for images: our results (unlike the ConvNet baseline) generalize even if we permute all the pixels.
Shapes. We use the simple Shapes dataset [21] to examine the basic properties of our system. It
consists of 60,000 (train) + 10,000 (test) binary images of size 20x20. Each image contains three
randomly chosen shapes (△, ▽, or ■) composed together at random positions with possible overlap.
Textured MNIST. We generated a two-object supervised dataset (TextureMNIST2) by sequentially
stacking two textured 28x28 MNIST-digits, shifted two pixels left and up, and right and down,
respectively, on top of a background texture. The textures for the digits and background are different
randomly shifted samples from a bank of 20 sinusoidal textures with different frequencies and
orientations. Some examples from this dataset are presented in the left column of Figure 4b. We use
a 50k training set, 10k validation set, and 10k test set to report the results. We also use a textured
single-digit version (TextureMNIST1) without a shift to isolate the impact of texturing from multiple
objects.
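For intuition, here is a hypothetical sketch of how such examples could be composed; the canvas size, the texture-bank parameterization, and the placeholder "digits" are assumptions for illustration, not the released generator (see the project repository for the actual datasets):

import numpy as np

def sinusoidal_texture(h, w, freq, angle, phase):
    # One entry of an assumed texture bank: a shifted 2-D sinusoidal grating.
    yy, xx = np.mgrid[0:h, 0:w]
    u = xx * np.cos(angle) + yy * np.sin(angle)
    return 0.5 + 0.5 * np.sin(2.0 * np.pi * freq * u + phase)

def compose_example(digit_a, digit_b, rng):
    # digit_a, digit_b: 28x28 masks (e.g. thresholded MNIST digits).
    h = w = 32                                        # assumed canvas size

    def rand_tex():
        freq, angle, phase = rng.uniform([0.05, 0.0, 0.0], [0.2, np.pi, 2.0 * np.pi])
        return sinusoidal_texture(h, w, freq, angle, phase)

    canvas = rand_tex()                               # background texture
    for digit, (dy, dx) in [(digit_a, (-2, -2)), (digit_b, (2, 2))]:
        tex = rand_tex()                              # a differently oriented/shifted texture
        mask = np.zeros((h, w), dtype=bool)
        y0, x0 = 2 + dy, 2 + dx
        mask[y0:y0 + 28, x0:x0 + 28] = digit > 0.5
        canvas[mask] = tex[mask]                      # later digit occludes earlier content
    return canvas

rng = np.random.default_rng(0)
d1, d2 = (rng.random((28, 28)) > 0.8 for _ in range(2))  # placeholder "digits"
example = compose_example(d1, d2, rng)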
3.1 Training and evaluation
We train Tagger in an unsupervised manner by only showing the network the raw input example
x, not ground truth masks or any class labels, using 4 groups and 3 iterations. We average the cost
over iterations and use ADAM [14] for optimization. On the Shapes dataset we trained for 100
epochs with a bit-flip probability of 0.2, and on the TextureMNIST dataset for 200 epochs with a
corruption-noise standard deviation of 0.2. The models reported in this paper took approximately 3
and 11 hours in wall clock time on a single Nvidia Titan X GPU for Shapes and TextureMNIST2
datasets respectively.
We evaluate the trained models using two metrics: First, the denoising cost on the validation set, and
second we evaluate the segmentation into objects using the adjusted mutual information (AMI) score
[35] and ignore the background and overlap regions in the Shapes dataset (consistent with Greff et al.
[8]). Evaluations of the AMI score and classification results in semi-supervised tasks were performed
using uncorrupted input. The system has no restrictions regarding the number of groups and iterations
used for training and evaluation. The results improved in terms of both denoising cost and AMI score
when iterating further, so we used 5 iterations for testing. Even if the system was trained with 4
groups and 3 shapes per training example, we could test the evaluation with, for example, 2 groups
and 3 shapes, or 4 groups and 4 shapes.
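The AMI evaluation can be reproduced with scikit-learn's implementation; treating label 0 as background and masking it (together with overlap regions, for Shapes) before scoring is our reading of the protocol above, and the label arrays here are toy placeholders:

import numpy as np
from sklearn.metrics import adjusted_mutual_info_score

true_groups = np.random.randint(0, 4, size=400)   # toy per-pixel ground truth (0 = background)
pred_groups = np.random.randint(0, 4, size=400)   # e.g. m.argmax(axis=0).ravel() from Tagger

keep = true_groups != 0                           # ignore background (and overlap) pixels
ami = adjusted_mutual_info_score(true_groups[keep], pred_groups[keep])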
3.2 Unsupervised Perceptual Grouping
Table 1 shows the median performance of Tagger on the Shapes dataset over 20 seeds. Tagger is able
to achieve very fast convergence, as shown in Table 1a. Through iterations, the network improves its denoising performance by grouping different objects into different groups. Compared to Greff et al.
[8], Tagger performs significantly better in terms of AMI score (see Table 1b). We found that for
this dataset using LayerNorm [1] instead of BatchNorm [13] greatly improves the results as seen in
Table 1.
Figure 4a and Figure 4b qualitatively show the learned unsupervised groupings for the Shapes and
textured MNIST datasets. Tagger uses its TAG mechanism slightly differently for the two datasets.
For Shapes, $z_g$ represents filled-in objects and masks $m_g$ show which part of the object is actually visible. For textured MNIST, $z_g$ represents the textures while masks $m_g$ capture texture segments.
In the case of the same digit or two identical shapes, Tagger can segment them into separate groups,
and hence, performs instance segmentation. We used 4 groups for training even though there are only
3 objects in the Shapes dataset and 3 segments in the TextureMNIST2 dataset. The excess group is
left empty by the trained system but its presence seems to speed up the learning process.
² The datasets and a Theano [33] reference implementation of Tagger are available at http://github.com/CuriousAI/tagger
4
[Figure 4: image panels omitted in this text extraction; per-example AMI scores and the group visualizations are summarized in the captions below.]

(a) Results for the Shapes dataset. Left column: 7 examples from the test set along with their resulting groupings in descending AMI score order, and 3 hand-picked examples (A, B, and C) to demonstrate generalization. A: Testing a 2-group model on 3-object data. B: Testing a 4-group model trained with 3-object data on 4 objects. C: Testing a 4-group model trained with 3-object data on 2 objects. Right column: Illustration of the inference process over iterations for four color-coded groups; $m_k$ and $z_k$.

(b) Results for the TextureMNIST2 dataset. Left column: 7 examples from the test set along with their resulting groupings in descending AMI score order, and 3 hand-picked examples (D, E1, E2). D: An example from the TextureMNIST1 dataset. E1–E2: A hand-picked example from TextureMNIST2. E1 demonstrates typical inference, and E2 demonstrates how the system is able to estimate the input when a certain group (the topmost digit 4) is removed. Right column: Illustration of the inference process over iterations for four color-coded groups; $m_k$ and $z_k$.
Table 1: Table (a) shows how quickly the algorithm evaluation converges over inference iterations with the Shapes dataset. Table (b) compares segmentation quality to previous work on the Shapes dataset. The AMI score is defined in the range from 0 (guessing) to 1 (perfect match). The results with a star (*) use LayerNorm [1] instead of BatchNorm.

(a) Convergence of Tagger over iterative inference:

                     Iter 1   Iter 2   Iter 3   Iter 4   Iter 5
    Denoising cost    0.094    0.068    0.063    0.063    0.063
    AMI               0.58     0.73     0.77     0.79     0.79
    Denoising cost*   0.100    0.069    0.057    0.054    0.054
    AMI*              0.70     0.90     0.95     0.96     0.97

(b) Method comparison:

               AMI
    RC [8]     0.61 ± 0.005
    Tagger     0.79 ± 0.034
    Tagger*    0.97 ± 0.009
The hand-picked examples A-C in Figure 4a illustrate the robustness of the system when the number
of objects changes in the evaluation dataset or when evaluation is performed using fewer groups.
Example E is particularly interesting; E2 demonstrates how we can remove the topmost digit from
the normally evaluated scene E1 and let the system fill in the digit below and the background. We do this by setting the corresponding group assignment logits $m_g$ to a large negative number just
before the final softmax over groups in the last iteration.
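In code, this group-removal trick amounts to clamping one group's assignment scores before the last softmax; a minimal sketch with assumed (K, D) shapes:

import numpy as np

def remove_group(m_logits, k_remove):
    # m_logits: (K, D) pre-softmax group assignment scores from the last iteration.
    masked = m_logits.copy()
    masked[k_remove] = -1e9                       # effectively zero mass after the softmax
    e = np.exp(masked - masked.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)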
To solve the textured two-digit MNIST task, the system has to combine texture cues with high-level
shape information. The system first infers the background texture and mask which are finalized
on the first iteration. Then the second iteration typically fixes the texture used for topmost digit,
while subsequent iterations clarify the occluded digit and its texture. This demonstrates the need for
iterative inference of the grouping.
3.3 Classification
To investigate the role of grouping for the task of classification, we evaluate Tagger against four
baseline models on the textured MNIST task. As our first baseline we use a fully connected network
(FC) with ReLU activations and BatchNorm [13] after each layer. Our second baseline is a ConvNet
(Conv) based on Model C from [30], which has close to state-of-the-art results on CIFAR-10. We
removed dropout, added BatchNorm after each layer and replaced the final pooling by a fully
connected layer to improve its performance for the task. Furthermore, we compare with a fully
connected Ladder [19] (FC Ladder) network.
All models use a softmax output and are trained with 50,000 samples to minimize the categorical cross
entropy error. In case there are two different digits in the image (most examples in the TextureMNIST2
dataset), the target is p = 0.5 for both classes. We evaluate the models based on classification errors,
which we compute based on the two highest predicted classes (top 2) for the two-digit case.
For Tagger, we first train the system in an unsupervised phase for 150 epochs and then add two
fresh randomly initialized layers on top and continue training the entire system end to end using the
sum of unsupervised and supervised cost terms for 50 epochs. Furthermore, the topmost layer has a
per-group softmax activation that includes an added "no class" neuron for groups that do not contain
any digit. The final classification is then performed by summing the softmax output over all groups
for the true 10 classes and renormalizing it.
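A sketch of this read-out, with assumed shapes: each of the K groups outputs a softmax over 10 digit classes plus one "no class" entry, and the 10 real classes are summed over groups and renormalized:

import numpy as np

def group_softmax_readout(logits):
    # logits: (K, 11) per-group scores over 10 digit classes + 1 "no class" neuron.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    per_group = e / e.sum(axis=1, keepdims=True)   # per-group softmax
    class_probs = per_group[:, :10].sum(axis=0)    # sum the 10 real classes over groups
    return class_probs / class_probs.sum()         # renormalize

probs = group_softmax_readout(np.random.randn(4, 11))
top2 = np.argsort(probs)[-2:]                      # "top 2" prediction for the two-digit case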
As shown in Table 2, Tagger performs significantly better than all the fully connected baseline models
on both variants, but the improvement is more pronounced for the two-digit case. This result is
expected because for cases with multi-object overlap, grouping becomes more important. Moreover, it
confirms the hypothesis that grouping can help classification and is particularly beneficial for complex
inputs. Remarkably, Tagger is on par with the convolutional baseline for the TextureMNIST1 dataset
and even outperforms it in the two-digit case, despite being fully connected itself. We hypothesize
that one reason for this result is that grouping allows for the construction of efficient invariant features
already in the low layers without losing information about the assignment of features to objects.
Convolutional networks solve this problem to some degree by grouping features locally through the
use of receptive fields, but that strategy is expensive and can break down in cases of heavy overlap.
Table 2: Test-set classification errors in % for both textured MNIST datasets. We report mean and sample standard deviation over 5 runs. FC = Fully Connected, MLP = Multi Layer Perceptron.

    TextureMNIST1 (chance level: 90%)
    Method              Error 50k    Error 1k     Model details
    FC MLP              31.1 ± 2.2   89.0 ± 0.2   2000-2000-2000 / 1000-1000
    FC Ladder            7.2 ± 0.1   30.5 ± 0.5   3000-2000-1000-500-250
    FC Tagger (ours)     4.0 ± 0.3   10.5 ± 0.9   3000-2000-1000-500-250
    ConvNet              3.9 ± 0.3   52.4 ± 5.3   based on Model C [30]

    TextureMNIST2 (chance level: 80%)
    FC MLP              55.2 ± 1.0   79.4 ± 0.3   2000-2000-2000 / 1000-1000
    FC Ladder           41.1 ± 0.2   68.5 ± 0.2   3000-2000-1000-500-250
    FC Tagger (ours)     7.9 ± 0.3   24.9 ± 1.8   3000-2000-1000-500-250
    ConvNet             12.6 ± 0.4   79.1 ± 0.8   based on Model C [30]
3.4 Semi-Supervised Learning
The TAG framework does not rely on labels and is therefore directly usable in a semi-supervised
context. For semi-supervised learning, the Ladder [19] is arguably one of the strongest baselines with
SOTA results on 1,000 MNIST and 60,000 permutation invariant MNIST classification. We follow
the common practice of using 1,000 labeled samples and 49,000 unlabeled samples for training
Tagger and the Ladder baselines. For completeness, we also report results of the convolutional
(ConvNet) and fully-connected (FC) baselines trained fully supervised on only 1,000 samples.
From Table 2, it is obvious that all the fully supervised methods fail on this task with 1,000 labels.
The best baseline result is achieved by the FC Ladder, which reaches 30.5 % error for one digit but
68.5 % for TextureMNIST2. For both datasets, Tagger achieves by far the lowest error rates: 10.5 %
and 24.9 %, respectively. Again, this difference is amplified for the two-digit case, where Tagger
with 1,000 labels even outperforms the Ladder baseline with all 50k labels. This result matches our
intuition that grouping can often segment even objects of an unknown class and thus help select the
relevant features for learning. This is particularly important in semi-supervised learning where the
inability to self-classify unlabeled samples can mean that the network fails to learn from them at all.
To put these results in context, we performed informal tests with five human subjects. The subjects
improved significantly over training for a few days but there were also significant individual differences. The task turned out to be quite difficult and strenuous, with the best performing subjects
scoring around 10 % error for TextureMNIST1 and 30 % error for TextureMNIST2.
4 Related work
Attention models have recently become very popular, and similar to perceptual grouping they help
in dealing with complex structured inputs. These approaches are not, however, mutually exclusive
and can benefit from each other. Overt attention models [28, 5] control a window (fovea) to focus on
relevant parts of the inputs. Two of their limitations are that they are mostly tailored to the visual
domain and are usually only suited to objects that are roughly the same shape as the window. But
their ability to limit the field of view can help to reduce the complexity of the target problem and thus
also help segmentation. Soft attention mechanisms [26, 3, 40] on the other hand use some form of
top-down feedback to suppress inputs that are irrelevant for a given task. These mechanisms have
recently gained popularity, first in machine translation [2] and then for many other problems such as
image caption generation [39]. Because they re-weigh all the inputs based on their relevance, they
could benefit from a perceptual grouping process that can refine the precise boundaries of attention.
Our work is primarily built upon a line of research based on the concept that the brain uses synchronization of neuronal firing to bind object representations together. This view was introduced by
[37] and has inspired many early works on oscillations in neural networks (see the survey [36] for a
summary). Simulating the oscillations explicitly is costly and does not mesh well with modern neural
network architectures (but see [17]). Rather, complex values have been used to model oscillating
activations using the phase as soft tags for synchronization [18, 20]. In our model, we further abstract
them by using discretized synchronization slots (our groups). It is most similar to the models of
Wersing et al. [38], Hyvärinen & Perkiö [12] and Greff et al. [8]. However, our work is the first to
combine this with denoising autoencoders in an end-to-end trainable fashion.
Another closely related line of research [23, 22] has focused on multi-causal modeling of the inputs.
Many of the works in that area [16, 32, 29, 11] build upon Restricted Boltzmann Machines. Each
input is modeled as a mixture model with a separate latent variable for each object. Because exact
inference is intractable, these models approximate the posterior with some form of expectation
maximization [4] or sampling procedure. Our assumptions are very similar to these approaches, but
we allow the model to learn the amortized inference directly (more in line with Goodfellow et al. [7]).
Since recurrent neural networks (RNNs) are general purpose computers, they can in principle
implement arbitrary computable types of temporary variable binding [25, 26], unsupervised segmentation [24], and internal [26] and external attention [28]. For example, an RNN with fast weights [26]
can rapidly associate or bind the patterns to which the RNN currently attends. Similar approaches
even allow for metalearning [27], that is, learning a learning algorithm. Hochreiter et al. [10], for example, learned fast online learning algorithms for the class of all quadratic functions of two variables.
Unsupervised segmentation could therefore in principle be learned by any RNN as a by-product
of data compression or any other given task. That does not, however, imply that every RNN will,
through learning, easily discover and implement this tool. From that perspective, TAG can be seen as
a way of helping an RNN to quickly learn and efficiently implement a grouping mechanism.
5 Conclusion
In this paper, we have argued that the ability to group input elements and internal representations
is a powerful tool that can improve a system's ability to handle complex multi-object inputs. We
have introduced the TAG framework, which enables a network to directly learn the grouping and
the corresponding amortized iterative inference in a unsupervised manner. The resulting iterative
inference is very efficient and converges within five iterations. We have demonstrated the benefits
of this mechanism for a heavily cluttered classification task, in which our fully connected Tagger
even significantly outperformed a state-of-the-art convolutional network. More impressively, we have
shown that our mechanism can greatly improve semi-supervised learning, exceeding conventional
Ladder networks by a large margin. Our method makes minimal assumptions about the data and can
be applied to any modality. With TAG, we have barely scratched the surface of a comprehensive
integrated grouping mechanism, but we already see significant advantages. We believe grouping to
be crucial to human perception and are convinced that it will help to scale neural networks to even
more complex tasks in the future.
Acknowledgments
The authors wish to acknowledge useful discussions with Theofanis Karaletsos, Jaakko Särelä, Tapani Raiko, and Søren Kaae Sønderby, and further acknowledge Rinu Boney, Timo Haanpää and the rest of the Curious AI Company team for their support, computational infrastructure, and human testing. This research was supported by the EU project "INPUT" (H2020-ICT-2015 grant no. 687795).
References
[1] Ba, J. L., Kiros, J. R., and Hinton, G. E. Layer normalization. arXiv:1607.06450 [cs, stat], July 2016.
[2] Bahdanau, D., Cho, K., and Bengio, Y. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[3] Deco, G. Biased competition mechanisms for visual attention in a multimodular neurodynamical system. In Emergent Neural Computational Architectures Based on Neuroscience, pp. 114–126. Springer, 2001.
[4] Dempster, A. P., Laird, N. M., and Rubin, D. B. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, pp. 1–38, 1977.
[5] Eslami, S. M., Heess, N., Weber, T., Tassa, Y., Kavukcuoglu, K., and Hinton, G. E. Attend, infer, repeat: Fast scene understanding with generative models. arXiv preprint arXiv:1603.08575, 2016.
[6] Gallinari, P., LeCun, Y., Thiria, S., and Fogelman-Soulie, F. Mémoires associatives distribuées: une comparaison (Distributed associative memories: a comparison). In Cesta-Afcet, 1987.
[7] Goodfellow, I. J., Bulatov, Y., Ibarz, J., Arnoud, S., and Shet, V. Multi-digit number recognition from street view imagery using deep convolutional neural networks. arXiv preprint arXiv:1312.6082, 2013.
[8] Greff, K., Srivastava, R. K., and Schmidhuber, J. Binding via reconstruction clustering. arXiv:1511.06418 [cs], November 2015.
[9] Gregor, K. and LeCun, Y. Learning fast approximations of sparse coding. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pp. 399–406, 2010.
[10] Hochreiter, S., Younger, A. S., and Conwell, P. R. Learning to learn using gradient descent. In Proc. International Conference on Artificial Neural Networks, pp. 87–94. Springer, 2001.
[11] Huang, J. and Murphy, K. Efficient inference in occlusion-aware generative models of images. arXiv preprint arXiv:1511.06362, 2015.
[12] Hyvärinen, A. and Perkiö, J. Learning to segment any random vector. In The 2006 IEEE International Joint Conference on Neural Network Proceedings, pp. 4167–4172. IEEE, 2006.
[13] Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[14] Kingma, D. and Ba, J. Adam: A method for stochastic optimization. CBLS, 2015.
[15] Le Cun, Y. Modèles Connexionnistes de l'Apprentissage. PhD thesis, Paris 6, 1987.
[16] Le Roux, N., Heess, N., Shotton, J., and Winn, J. Learning a generative model of images by factoring appearance and shape. Neural Computation, 23(3):593–650, 2011.
[17] Meier, M., Haschke, R., and Ritter, H. J. Perceptual grouping through competition in coupled oscillator networks. Neurocomputing, 141:76–83, 2014.
[18] Rao, R. A., Cecchi, G., Peck, C. C., and Kozloski, J. R. Unsupervised segmentation with dynamical units. Neural Networks, IEEE Transactions on, 19(1):168–182, 2008.
[19] Rasmus, A., Berglund, M., Honkala, M., Valpola, H., and Raiko, T. Semi-supervised learning with ladder networks. In NIPS, pp. 3532–3540, 2015.
[20] Reichert, D. P. and Serre, T. Neuronal synchrony in complex-valued deep networks. arXiv:1312.6115 [cs, q-bio, stat], December 2013.
[21] Reichert, D. P., Seriès, P., and Storkey, A. J. A hierarchical generative model of recurrent object-based attention in the visual cortex. In ICANN, pp. 18–25. Springer, 2011.
[22] Ross, D. A. and Zemel, R. S. Learning parts-based representations of data. The Journal of Machine Learning Research, 7:2369–2397, 2006.
[23] Saund, E. A multiple cause mixture model for unsupervised learning. Neural Computation, 7(1):51–71, 1995.
[24] Schmidhuber, J. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234–242, 1992.
[25] Schmidhuber, J. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1):131–139, 1992.
[26] Schmidhuber, J. Reducing the ratio between learning complexity and number of time varying variables in fully recurrent nets. In ICANN'93, pp. 460–463. Springer, 1993.
[27] Schmidhuber, J. A "self-referential" weight matrix. In ICANN'93, pp. 446–450. Springer, 1993.
[28] Schmidhuber, J. and Huber, R. Learning to generate artificial fovea trajectories for target detection. International Journal of Neural Systems, 2(01n02):125–134, 1991.
[29] Sohn, K., Zhou, G., Lee, C., and Lee, H. Learning and selecting features jointly with point-wise gated Boltzmann machines. In Proceedings of The 30th International Conference on Machine Learning, pp. 217–225, 2013.
[30] Springenberg, J. T., Dosovitskiy, A., Brox, T., and Riedmiller, M. Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806, 2014.
[31] Srikumar, V., Kundu, G., and Roth, D. On amortizing inference cost for structured prediction. In EMNLP-CoNLL '12, pp. 1114–1124, Stroudsburg, PA, USA, 2012. Association for Computational Linguistics.
[32] Tang, Y., Salakhutdinov, R., and Hinton, G. Robust Boltzmann machines for recognition and denoising. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp. 2264–2271. IEEE, 2012.
[33] The Theano Development Team. Theano: A Python framework for fast computation of mathematical expressions. arXiv:1605.02688 [cs], May 2016.
[34] Vincent, P., Larochelle, H., Bengio, Y., and Manzagol, P. A. Extracting and composing robust features with denoising autoencoders. In ICML, pp. 1096–1103. ACM, 2008.
[35] Vinh, N. X., Epps, J., and Bailey, J. Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. JMLR, 11:2837–2854, 2010.
[36] von der Malsburg, C. Binding in models of perception and brain function. Current Opinion in Neurobiology, 5(4):520–526, 1995.
[37] von der Malsburg, Christoph. The Correlation Theory of Brain Function. Departmental technical report, MPI, 1981.
[38] Wersing, H., Steil, J. J., and Ritter, H. A competitive-layer model for feature binding and sensory segmentation. Neural Computation, 13(2):357–387, 2001.
[39] Xu, K., Ba, J., Kiros, R., Courville, A., Salakhutdinov, R., Zemel, R., and Bengio, Y. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044, 2015.
[40] Yli-Krekola, A., Särelä, J., and Valpola, H. Selective attention improves learning. In Artificial Neural Networks–ICANN 2009, pp. 285–294. Springer, 2009.